| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
424,420 | https://en.wikipedia.org/wiki/Accelerator%20physics | Accelerator physics is a branch of applied physics, concerned with designing, building and operating particle accelerators. As such, it can be described as the study of motion, manipulation and observation of relativistic charged particle beams and their interaction with accelerator structures by electromagnetic fields.
It is also related to other fields:
Microwave engineering (for acceleration/deflection structures in the radio frequency range).
Optics with an emphasis on geometrical optics (beam focusing and bending) and laser physics (laser-particle interaction).
Computer technology with an emphasis on digital signal processing; e.g., for automated manipulation of the particle beam.
Plasma physics, for the description of intense beams.
The experiments conducted with particle accelerators are not regarded as part of accelerator physics, but belong (according to the objectives of the experiments) to, e.g., particle physics, nuclear physics, condensed matter physics or materials physics. The types of experiments done at a particular accelerator facility are determined by characteristics of the generated particle beam such as average energy, particle type, intensity, and dimensions.
Acceleration and interaction of particles with RF structures
While it is possible to accelerate charged particles using electrostatic fields, as in a Cockcroft–Walton voltage multiplier, this method has limits imposed by electrical breakdown at high voltages. Furthermore, because electrostatic fields are conservative, the achievable voltage limits the maximum kinetic energy that can be imparted to the particles.
To circumvent this problem, linear particle accelerators operate using time-varying fields. Because these fields are controlled with hollow macroscopic structures through which the particles pass (which imposes wavelength restrictions), the frequency of such accelerating fields lies in the radio frequency region of the electromagnetic spectrum.
The space around a particle beam is evacuated to prevent scattering with gas atoms, requiring it to be enclosed in a vacuum chamber (or beam pipe). Due to the strong electromagnetic fields that follow the beam, it is possible for it to interact with any electrical impedance in the walls of the beam pipe. This may be in the form of a resistive impedance (i.e., the finite resistivity of the beam pipe material) or an inductive/capacitive impedance (due to the geometric changes in the beam pipe's cross section).
These impedances will induce wakefields (a strong warping of the electromagnetic field of the beam) that can interact with later particles. Since this interaction may have negative effects, it is studied to determine its magnitude, and to determine any actions that may be taken to mitigate it.
Beam dynamics
Due to the high velocity of the particles, and the resulting Lorentz force for magnetic fields, adjustments to the beam direction are mainly controlled by magnetostatic fields that deflect particles. In most accelerator concepts (excluding compact structures like the cyclotron or betatron), these are applied by dedicated electromagnets with different properties and functions. An important step in the development of these types of accelerators was the understanding of strong focusing. Dipole magnets are used to guide the beam through the structure, while quadrupole magnets are used for beam focusing, and sextupole magnets are used for correction of dispersion effects.
A particle on the exact design trajectory (or design orbit) of the accelerator only experiences dipole field components, while particles with transverse position deviation are re-focused to the design orbit. For preliminary calculations, neglecting all field components higher than quadrupolar, an inhomogeneous Hill differential equation
can be used as an approximation, with
a non-constant focusing force, including strong focusing and weak focusing effects
the relative deviation from the design beam momentum
the trajectory radius of curvature, and
the design path length,
thus identifying the system as a parametric oscillator. Beam parameters for the accelerator can then be calculated using Ray transfer matrix analysis; e.g., a quadrupolar field is analogous to a lens in geometrical optics, having similar properties regarding beam focusing (but obeying Earnshaw's theorem).
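The Hill equation referred to above is not reproduced in this text; a hedged reconstruction in standard accelerator-physics notation (the symbols x, k(s), Δp/p₀, R(s) and s are assumed here, matching the quantities listed above) is

$$x''(s) + k(s)\,x(s) \;=\; \frac{1}{R(s)}\,\frac{\Delta p}{p_0},$$

where the right-hand side vanishes for a particle with the design momentum, recovering the homogeneous parametric-oscillator form.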
The general equations of motion originate from relativistic Hamiltonian mechanics, in almost all cases using the Paraxial approximation. Even in the cases of strongly nonlinear magnetic fields, and without the paraxial approximation, a Lie transform may be used to construct an integrator with a high degree of accuracy.
Modeling codes
There are many different software packages available for modeling the different aspects of accelerator physics.
One must model the elements that create the electric and magnetic fields, and then one must model the charged particle evolution within those fields.
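As an illustration of the second step – tracking particle evolution through idealized elements – the following is a minimal sketch (not a specific accelerator code; all element lengths, focal lengths and initial conditions are illustrative assumptions) of the ray-transfer-matrix picture mentioned earlier, in which a drift space and a thin quadrupole each act on the transverse phase-space vector (x, x′) as a 2×2 matrix.

```python
import numpy as np

def drift(length):
    """Transfer matrix of a field-free drift of the given length (metres)."""
    return np.array([[1.0, length],
                     [0.0, 1.0]])

def thin_quadrupole(focal_length):
    """Thin-lens approximation of a quadrupole (focusing in this plane for f > 0)."""
    return np.array([[1.0, 0.0],
                     [-1.0 / focal_length, 1.0]])

# One FODO-like cell: focusing quad, drift, defocusing quad, drift (illustrative values).
cell = drift(2.0) @ thin_quadrupole(-5.0) @ drift(2.0) @ thin_quadrupole(5.0)

# Track a particle that starts 1 mm off axis with zero angle.
state = np.array([1e-3, 0.0])
for turn in range(5):
    state = cell @ state
    print(f"after cell {turn + 1}: x = {state[0]:+.4e} m, x' = {state[1]:+.4e} rad")

# Stability criterion for periodic transport through identical cells: |trace(M)| < 2.
print("stable cell:", abs(np.trace(cell)) < 2)
```

Real modeling codes build on this linear picture with thick-lens maps, coupling between planes, and higher-order (e.g. sextupole) terms.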
Beam diagnostics
A vital component of any accelerator is the set of diagnostic devices that allow various properties of the particle bunches to be measured.
A typical machine may use many different types of measurement device in order to measure different properties. These include (but are not limited to) Beam Position Monitors (BPMs) to measure the position of the bunch, screens (fluorescent screens, Optical Transition Radiation (OTR) devices) to image the profile of the bunch, wire-scanners to measure its cross-section, and toroids or ICTs to measure the bunch charge (i.e., the number of particles per bunch).
While many of these devices rely on well understood technology, designing a device capable of measuring a beam for a particular machine is a complex task requiring much expertise. Not only is a full understanding of the physics of the operation of the device necessary, but it is also necessary to ensure that the device is capable of measuring the expected parameters of the machine under consideration.
Success of the full range of beam diagnostics often underpins the success of the machine as a whole.
Machine tolerances
Errors in the alignment of components, field strength, etc., are inevitable in machines of this scale, so it is important to consider the tolerances under which a machine may operate.
Engineers will provide the physicists with expected tolerances for the alignment and manufacture of each component to allow full physics simulations of the expected behaviour of the machine under these conditions. In many cases it will be found that the performance is degraded to an unacceptable level, requiring either re-engineering of the components, or the invention of algorithms that allow the machine performance to be 'tuned' back to the design level.
This may require many simulations of different error conditions in order to determine the relative success of each tuning algorithm, and to allow recommendations for the collection of algorithms to be deployed on the real machine.
See also
Particle accelerator
Significant publications for accelerator physics
Category:Accelerator physics
Category:Accelerator physicists
Category:Particle accelerators
References
External links
United States Particle Accelerator School
UCB/LBL Beam Physics site
BNL page on The Alternating Gradient Concept
Experimental particle physics | Accelerator physics | [
"Physics"
] | 1,353 | [
"Applied and interdisciplinary physics",
"Experimental physics",
"Particle physics",
"Experimental particle physics",
"Accelerator physics"
] |
424,440 | https://en.wikipedia.org/wiki/H-theorem | In classical statistical mechanics, the H-theorem, introduced by Ludwig Boltzmann in 1872, describes the tendency of the quantity H (defined below) to decrease in a nearly-ideal gas of molecules. As this quantity H was meant to represent the entropy of thermodynamics, the H-theorem was an early demonstration of the power of statistical mechanics as it claimed to derive the second law of thermodynamics—a statement about fundamentally irreversible processes—from reversible microscopic mechanics. It is thought to prove the second law of thermodynamics, albeit under the assumption of low-entropy initial conditions.
The H-theorem is a natural consequence of the kinetic equation derived by Boltzmann that has come to be known as Boltzmann's equation. The H-theorem has led to considerable discussion about its actual implications, with major themes being:
What is entropy? In what sense does Boltzmann's quantity H correspond to the thermodynamic entropy?
Are the assumptions (especially the assumption of molecular chaos) behind Boltzmann's equation too strong? When are these assumptions violated?
Name and pronunciation
Boltzmann in his original publication writes the symbol E (as in entropy) for its statistical function. Years later, Samuel Hawksley Burbury, one of the critics of the theorem, wrote the function with the symbol H, a notation that was subsequently adopted by Boltzmann when referring to his "H-theorem". The notation has led to some confusion regarding the name of the theorem. Even though the statement is usually referred to as the "Aitch theorem", sometimes it is instead called the "Eta theorem", as the capital Greek letter Eta (Η) is indistinguishable from the capital version of Latin letter h (H). Discussions have been raised on how the symbol should be understood, but it remains unclear due to the lack of written sources from the time of the theorem. Studies of the typography and the work of J.W. Gibbs seem to favour the interpretation of H as Eta.
Definition and meaning of Boltzmann's H
The H value is determined from the function f(E, t) dE, which is the energy distribution function of molecules at time t. The value f(E, t) dE is the number of molecules that have kinetic energy between E and E + dE. H itself is defined as
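The display formula itself is not reproduced in this text; a commonly quoted equivalent form, written as an integral over velocity space (notation assumed here), is

$$H(t) \;=\; \int f(\mathbf{v}, t)\,\ln\!\big(f(\mathbf{v}, t)\big)\,\mathrm{d}^3 v,$$

with an analogous expression for the energy distribution f(E, t) used above.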
For an isolated ideal gas (with fixed total energy and fixed total number of particles), the function H is at a minimum when the particles have a Maxwell–Boltzmann distribution; if the molecules of the ideal gas are distributed in some other way (say, all having the same kinetic energy), then the value of H will be higher. Boltzmann's H-theorem, described in the next section, shows that when collisions between molecules are allowed, such distributions are unstable and tend to irreversibly seek towards the minimum value of H (towards the Maxwell–Boltzmann distribution).
(Note on notation: Boltzmann originally used the letter E for quantity H; most of the literature after Boltzmann uses the letter H as here. Boltzmann also used the symbol x to refer to the kinetic energy of a particle.)
Boltzmann's H theorem
Boltzmann considered what happens during the collision between two particles. It is a basic fact of mechanics that in the elastic collision between two particles (such as hard spheres), the energy transferred between the particles varies depending on initial conditions (angle of collision, etc.).
Boltzmann made a key assumption known as the Stosszahlansatz (molecular chaos assumption), that during any collision event in the gas, the two particles participating in the collision have 1) independently chosen kinetic energies from the distribution, 2) independent velocity directions, 3) independent starting points. Under these assumptions, and given the mechanics of energy transfer, the energies of the particles after the collision will obey a certain new random distribution that can be computed.
Considering repeated uncorrelated collisions, between any and all of the molecules in the gas, Boltzmann constructed his kinetic equation (Boltzmann's equation). From this kinetic equation, a natural outcome is that the continual process of collision causes the quantity H to decrease until it has reached a minimum.
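As an illustration of this tendency, the following toy Monte Carlo (a sketch, not Boltzmann's kinetic equation; the pairwise energy-exchange rule and all parameters are illustrative assumptions) starts from a monoenergetic gas and applies random energy-conserving "collisions", estimating H from a histogram of the energies; the estimate typically decreases as the distribution relaxes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 2_000, 100_000
energies = np.ones(n_particles)            # all particles start with the same energy

def estimate_H(e, bins=60):
    """Crude histogram estimate of H = integral f ln f dE for the sample e."""
    f, edges = np.histogram(e, bins=bins, density=True)
    dE = np.diff(edges)
    mask = f > 0
    return float(np.sum(f[mask] * np.log(f[mask]) * dE[mask]))

print("initial H estimate:", estimate_H(energies))
for _ in range(n_steps):
    i, j = rng.integers(0, n_particles, size=2)
    if i == j:
        continue                            # a collision needs two distinct partners
    total = energies[i] + energies[j]
    split = rng.random()                    # random repartition conserving the pair's energy
    energies[i], energies[j] = split * total, (1.0 - split) * total
print("final H estimate:  ", estimate_H(energies))   # typically lower than the initial value
```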
Impact
Although Boltzmann's H-theorem turned out not to be the absolute proof of the second law of thermodynamics as originally claimed (see Criticisms below), the H-theorem led Boltzmann in the last years of the 19th century to more and more probabilistic arguments about the nature of thermodynamics. The probabilistic view of thermodynamics culminated in 1902 with Josiah Willard Gibbs's statistical mechanics for fully general systems (not just gases), and the introduction of generalized statistical ensembles.
The kinetic equation and in particular Boltzmann's molecular chaos assumption inspired a whole family of Boltzmann equations that are still used today to model the motions of particles, such as the electrons in a semiconductor. In many cases the molecular chaos assumption is highly accurate, and the ability to discard complex correlations between particles makes calculations much simpler.
The process of thermalisation can be described using the H-theorem or the relaxation theorem.
Criticism and exceptions
There are several notable reasons described below why the H-theorem, at least in its original 1871 form, is not completely rigorous. As Boltzmann would eventually go on to admit, the arrow of time in the H-theorem is not in fact purely mechanical, but really a consequence of assumptions about initial conditions.
Loschmidt's paradox
Soon after Boltzmann published his H theorem, Johann Josef Loschmidt objected that it should not be possible to deduce an irreversible process from time-symmetric dynamics and a time-symmetric formalism. If the H decreases over time in one state, then there must be a matching reversed state where H increases over time (Loschmidt's paradox). The explanation is that Boltzmann's equation is based on the assumption of "molecular chaos", i.e., that it follows from, or at least is consistent with, the underlying kinetic model that the particles be considered independent and uncorrelated. It turns out that this assumption breaks time reversal symmetry in a subtle sense, and therefore begs the question. Once the particles are allowed to collide, their velocity directions and positions in fact do become correlated (however, these correlations are encoded in an extremely complex manner). This shows that an (ongoing) assumption of independence is not consistent with the underlying particle model.
Boltzmann's reply to Loschmidt was to concede the possibility of these states, but noting that these sorts of states were so rare and unusual as to be impossible in practice. Boltzmann would go on to sharpen this notion of the "rarity" of states, resulting in his entropy formula of 1877.
Spin echo
As a demonstration of Loschmidt's paradox, a modern counterexample (not to Boltzmann's original gas-related H-theorem, but to a closely related analogue) is the phenomenon of spin echo. In the spin echo effect, it is physically possible to induce time reversal in an interacting system of spins.
An analogue to Boltzmann's H for the spin system can be defined in terms of the distribution of spin states in the system. In the experiment, the spin system is initially perturbed into a non-equilibrium state (high H), and, as predicted by the H-theorem, the quantity H soon decreases to the equilibrium value. At some point, a carefully constructed electromagnetic pulse is applied that reverses the motions of all the spins. The spins then undo the time evolution from before the pulse, and after some time the H actually increases away from equilibrium (once the evolution has completely unwound, the H decreases once again to the minimum value). In some sense, the time-reversed states noted by Loschmidt turned out to be not completely impractical.
Poincaré recurrence
In 1896, Ernst Zermelo noted a further problem with the H theorem, which was that if the system's H is at any time not a minimum, then by Poincaré recurrence, the non-minimal H must recur (though after some extremely long time). Boltzmann admitted that these recurring rises in H technically would occur, but pointed out that, over long times, the system spends only a tiny fraction of its time in one of these recurring states.
The second law of thermodynamics states that the entropy of an isolated system always increases to a maximum equilibrium value. This is strictly true only in the thermodynamic limit of an infinite number of particles. For a finite number of particles, there will always be entropy fluctuations. For example, in the fixed volume of the isolated system, the maximum entropy is obtained when half the particles are in one half of the volume, half in the other, but sometimes there will be temporarily a few more particles on one side than the other, and this will constitute a very small reduction in entropy. These entropy fluctuations are such that the longer one waits, the larger an entropy fluctuation one will probably see during that time, and the time one must wait for a given entropy fluctuation is always finite, even for a fluctuation to its minimum possible value. For example, one might have an extremely low entropy condition of all particles being in one half of the container. The gas will quickly attain its equilibrium value of entropy, but given enough time, this same situation will happen again. For practical systems, e.g. a gas in a 1-liter container at room temperature and atmospheric pressure, this time is truly enormous, many multiples of the age of the universe, and, practically speaking, one can ignore the possibility.
Fluctuations of H in small systems
Since H is a mechanically defined variable that is not conserved, then like any other such variable (pressure, etc.) it will show thermal fluctuations. This means that H regularly shows spontaneous increases from the minimum value. Technically this is not an exception to the H theorem, since the H theorem was only intended to apply for a gas with a very large number of particles. These fluctuations are only perceptible when the system is small and the time interval over which it is observed is not enormously large.
If H is interpreted as entropy as Boltzmann intended, then this can be seen as a manifestation of the fluctuation theorem.
Connection to information theory
H is a forerunner of Shannon's information entropy. Claude Shannon denoted his measure of information entropy H after the H-theorem. The article on Shannon's information entropy contains an explanation of the discrete counterpart of the quantity H, known as the information entropy or information uncertainty (with a minus sign). By extending the discrete information entropy to the continuous information entropy, also called differential entropy, one obtains the expression in the equation from the section above, Definition and meaning of Boltzmann's H, and thus a better feel for the meaning of H.
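For reference, and in standard notation rather than quoted from the source, the quantities referred to here are the discrete information entropy and its continuous (differential) counterpart,

$$S_{\text{info}} = -\sum_i p_i \ln p_i, \qquad h[p] = -\int p(x)\,\ln p(x)\,\mathrm{d}x,$$

so that, up to sign and physical constants, Boltzmann's H is the negative of the differential entropy of the velocity distribution.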
The H-theorem's connection between information and entropy plays a central role in a recent controversy called the Black hole information paradox.
Tolman's H-theorem
Richard C. Tolman's 1938 book The Principles of Statistical Mechanics dedicates a whole chapter to the study of Boltzmann's H theorem, and its extension in the generalized classical statistical mechanics of Gibbs. A further chapter is devoted to the quantum mechanical version of the H-theorem.
Classical mechanical
We let and be our generalized canonical coordinates for a set of particles. Then we consider a function that returns the probability density of particles, over the states in phase space. Note how this can be multiplied by a small region in phase space, denoted by , to yield the (average) expected number of particles in that region.
Tolman offers the following equations for the definition of the quantity H in Boltzmann's original H theorem.
Here we sum over the regions into which phase space is divided, indexed by . And in the limit for an infinitesimal phase space volume , we can write the sum as an integral.
H can also be written in terms of the number of molecules present in each of the cells.
An additional way to calculate the quantity H is:
where P is the probability of finding a system chosen at random from the specified microcanonical ensemble. It can finally be written as:
where G is the number of classical states.
The quantity H can also be defined as the integral over velocity space :
{| style="width:100%" border="0"
|-
| style="width:95%" |
| style= | (1)
|}
where P(v) is the probability distribution.
Using the Boltzmann equation one can prove that H can only decrease.
For a system of N statistically independent particles, H is related to the thermodynamic entropy S through:
So, according to the H-theorem, S can only increase.
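The relation itself is not reproduced above; the form usually quoted (standard notation assumed, with k the Boltzmann constant) is

$$S \;=\; -N k\, H + \text{constant},$$

so a monotonic decrease of H corresponds to a monotonic increase of S.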
Quantum mechanical
In quantum statistical mechanics (which is the quantum version of classical statistical mechanics), the H-function is the function:
where summation runs over all possible distinct states of the system, and pi is the probability that the system could be found in the i-th state.
This is closely related to the entropy formula of Gibbs,
and we shall (following e.g., Waldram (1985), p. 39) proceed using S rather than H.
First, differentiating with respect to time gives
(using the fact that Σ dpi/dt = 0, since Σ pi = 1, so the second term vanishes. We will see later that it will be useful to break this into two sums.)
Now Fermi's golden rule gives a master equation for the average rate of quantum jumps from state α to β; and from state β to α. (Of course, Fermi's golden rule itself makes certain approximations, and the introduction of this rule is what introduces irreversibility. It is essentially the quantum version of Boltzmann's Stosszahlansatz.) For an isolated system the jumps will make contributions
where the reversibility of the dynamics ensures that the same transition constant ναβ appears in both expressions.
So
The two difference terms in the summation always have the same sign. For example:
then
so overall the two negative signs will cancel.
Therefore,
for an isolated system.
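Because the displayed equations in this passage are not reproduced, the following is a hedged reconstruction of the standard argument in the notation of the text, with p_i the state probabilities and ν_{αβ} the symmetric transition rates:

$$H = \sum_i p_i \ln p_i, \qquad S = -k \sum_i p_i \ln p_i, \qquad \frac{\mathrm{d}S}{\mathrm{d}t} = -k \sum_i \frac{\mathrm{d}p_i}{\mathrm{d}t}\,\ln p_i,$$

$$\frac{\mathrm{d}p_\alpha}{\mathrm{d}t} = \sum_\beta \nu_{\alpha\beta}\,\big(p_\beta - p_\alpha\big), \qquad \frac{\mathrm{d}S}{\mathrm{d}t} = \frac{k}{2} \sum_{\alpha,\beta} \nu_{\alpha\beta}\,\big(p_\alpha - p_\beta\big)\big(\ln p_\alpha - \ln p_\beta\big) \;\geq\; 0,$$

where the inequality holds because each summand is a product of two factors with the same sign, as stated above.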
The same mathematics is sometimes used to show that relative entropy is a Lyapunov function of a Markov process in detailed balance, and in other chemistry contexts.
Gibbs' H-theorem
Josiah Willard Gibbs described another way in which the entropy of a microscopic system would tend to increase over time. Later writers have called this "Gibbs' H-theorem" as its conclusion resembles that of Boltzmann's. Gibbs himself never called it an H-theorem, and in fact his definition of entropy—and mechanism of increase—are very different from Boltzmann's. This section is included for historical completeness.
The setting of Gibbs' entropy production theorem is in ensemble statistical mechanics, and the entropy quantity is the Gibbs entropy (information entropy) defined in terms of the probability distribution for the entire state of the system. This is in contrast to Boltzmann's H defined in terms of the distribution of states of individual molecules, within a specific state of the system.
Gibbs considered the motion of an ensemble which initially starts out confined to a small region of phase space, meaning that the state of the system is known with fair precision though not quite exactly (low Gibbs entropy). The evolution of this ensemble over time proceeds according to Liouville's equation. For almost any kind of realistic system, the Liouville evolution tends to "stir" the ensemble over phase space, a process analogous to the mixing of a dye in an incompressible fluid. After some time, the ensemble appears to be spread out over phase space, although it is actually a finely striped pattern, with the total volume of the ensemble (and its Gibbs entropy) conserved. Liouville's equation is guaranteed to conserve Gibbs entropy since there is no random process acting on the system; in principle, the original ensemble can be recovered at any time by reversing the motion.
The critical point of the theorem is thus: If the fine structure in the stirred-up ensemble is very slightly blurred, for any reason, then the Gibbs entropy increases, and the ensemble becomes an equilibrium ensemble. As to why this blurring should occur in reality, there are a variety of suggested mechanisms. For example, one suggested mechanism is that the phase space is coarse-grained for some reason (analogous to the pixelization in the simulation of phase space shown in the figure). For any required finite degree of fineness the ensemble becomes "sensibly uniform" after a finite time. Or, if the system experiences a tiny uncontrolled interaction with its environment, the sharp coherence of the ensemble will be lost. Edwin Thompson Jaynes argued that the blurring is subjective in nature, simply corresponding to a loss of knowledge about the state of the system. In any case, however it occurs, the Gibbs entropy increase is irreversible provided the blurring cannot be reversed.
The exactly evolving entropy, which does not increase, is known as fine-grained entropy. The blurred entropy is known as coarse-grained entropy.
Leonard Susskind analogizes this distinction to the notion of the volume of a fibrous ball of cotton: On one hand the volume of the fibers themselves is constant, but in another sense there is a larger coarse-grained volume, corresponding to the outline of the ball.
Gibbs' entropy increase mechanism solves some of the technical difficulties found in Boltzmann's H-theorem: the Gibbs entropy does not fluctuate, nor does it exhibit Poincaré recurrence, and so the increase in Gibbs entropy, when it occurs, is irreversible as expected from thermodynamics. The Gibbs mechanism also applies equally well to systems with very few degrees of freedom, such as the single-particle system shown in the figure. To the extent that one accepts that the ensemble becomes blurred, then, Gibbs' approach is a cleaner proof of the second law of thermodynamics.
Unfortunately, as pointed out early on in the development of quantum statistical mechanics by John von Neumann and others, this kind of argument does not carry over to quantum mechanics. In quantum mechanics, the ensemble cannot support an ever-finer mixing process, because of the finite dimensionality of the relevant portion of Hilbert space. Instead of converging closer and closer to the equilibrium ensemble (time-averaged ensemble) as in the classical case, the density matrix of the quantum system will constantly show evolution, even showing recurrences. Developing a quantum version of the H-theorem without appeal to the Stosszahlansatz is thus significantly more complicated.
See also
Loschmidt's paradox
Arrow of time
Second law of thermodynamics
Fluctuation theorem
Ehrenfest diffusion model
Notes
References
Non-equilibrium thermodynamics
Thermodynamic entropy
Philosophy of thermal and statistical physics
Physics theorems
Statistical mechanics theorems | H-theorem | [
"Physics",
"Chemistry",
"Mathematics"
] | 3,972 | [
"Theorems in dynamical systems",
"Philosophy of thermal and statistical physics",
"Physical quantities",
"Equations of physics",
"Non-equilibrium thermodynamics",
"Statistical mechanics theorems",
"Thermodynamic entropy",
"Theorems in mathematical physics",
"Entropy",
"Thermodynamics",
"Dynamica... |
424,540 | https://en.wikipedia.org/wiki/Einstein%20field%20equations | In the general theory of relativity, the Einstein field equations (EFE; also known as Einstein's equations) relate the geometry of spacetime to the distribution of matter within it.
The equations were published by Albert Einstein in 1915 in the form of a tensor equation which related the local spacetime curvature (expressed by the Einstein tensor) to the local energy, momentum and stress within that spacetime (expressed by the stress–energy tensor).
Analogously to the way that electromagnetic fields are related to the distribution of charges and currents via Maxwell's equations, the EFE relate the spacetime geometry to the distribution of mass–energy, momentum and stress, that is, they determine the metric tensor of spacetime for a given arrangement of stress–energy–momentum in the spacetime. The relationship between the metric tensor and the Einstein tensor allows the EFE to be written as a set of nonlinear partial differential equations when used in this way. The solutions of the EFE are the components of the metric tensor. The inertial trajectories of particles and radiation (geodesics) in the resulting geometry are then calculated using the geodesic equation.
As well as implying local energy–momentum conservation, the EFE reduce to Newton's law of gravitation in the limit of a weak gravitational field and velocities that are much less than the speed of light.
Exact solutions for the EFE can only be found under simplifying assumptions such as symmetry. Special classes of exact solutions are most often studied since they model many gravitational phenomena, such as rotating black holes and the expanding universe. Further simplification is achieved in approximating the spacetime as having only small deviations from flat spacetime, leading to the linearized EFE. These equations are used to study phenomena such as gravitational waves.
Mathematical form
The Einstein field equations (EFE) may be written in the form:
where is the Einstein tensor, is the metric tensor, is the stress–energy tensor, is the cosmological constant and is the Einstein gravitational constant.
The Einstein tensor is defined as
where is the Ricci curvature tensor, and is the scalar curvature. This is a symmetric second-degree tensor that depends on only the metric tensor and its first and second derivatives.
The Einstein gravitational constant is defined as
or
where is the Newtonian constant of gravitation and is the speed of light in vacuum.
The EFE can thus also be written as
In standard units, each term on the left has units of 1/length².
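Since the display formulas are not reproduced in this text, the equations referred to above read, in standard notation (a reconstruction, with G_{μν} the Einstein tensor, g_{μν} the metric tensor, R_{μν} and R the Ricci tensor and scalar curvature, T_{μν} the stress–energy tensor, Λ the cosmological constant and κ the Einstein gravitational constant):

$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa\, T_{\mu\nu}, \qquad G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}, \qquad \kappa = \frac{8\pi G}{c^{4}},$$

so that, written out,

$$R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}.$$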
The expression on the left represents the curvature of spacetime as determined by the metric; the expression on the right represents the stress–energy–momentum content of spacetime. The EFE can then be interpreted as a set of equations dictating how stress–energy–momentum determines the curvature of spacetime.
These equations, together with the geodesic equation, which dictates how freely falling matter moves through spacetime, form the core of the mathematical formulation of general relativity.
The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
Although the Einstein field equations were initially formulated in the context of a four-dimensional theory, some theorists have explored their consequences in other numbers of dimensions. The equations in contexts outside of general relativity are still referred to as the Einstein field equations. The vacuum field equations (obtained when the stress–energy tensor is everywhere zero) define Einstein manifolds.
The equations are more complex than they appear. Given a specified distribution of matter and energy in the form of a stress–energy tensor, the EFE are understood to be equations for the metric tensor , since both the Ricci tensor and scalar curvature depend on the metric in a complicated nonlinear manner. When fully written out, the EFE are a system of ten coupled, nonlinear, hyperbolic-elliptic partial differential equations.
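As an illustration of this nonlinearity (a minimal sketch, not taken from the source, using the open-source SymPy library; the spatially flat FLRW metric ansatz and all names are chosen only for the example), the following computes the Christoffel symbols, Ricci tensor, scalar curvature and Einstein tensor for ds² = −dt² + a(t)²(dx² + dy² + dz²) with c = 1, showing how the curvature side of the EFE depends nonlinearly on the metric and its first and second derivatives.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a')(t)                      # scale factor of the example metric
coords = [t, x, y, z]
g = sp.diag(-1, a**2, a**2, a**2)            # metric g_{mu nu}, signature (-,+,+,+), c = 1
g_inv = g.inv()
n = 4

# Christoffel symbols Gamma^l_{mu nu} = 1/2 g^{l s} (d_nu g_{s mu} + d_mu g_{s nu} - d_s g_{mu nu})
Gamma = [[[sum(g_inv[l, s] * (sp.diff(g[s, mu], coords[nu])
                              + sp.diff(g[s, nu], coords[mu])
                              - sp.diff(g[mu, nu], coords[s])) / 2
               for s in range(n))
           for nu in range(n)]
          for mu in range(n)]
         for l in range(n)]

# Ricci tensor R_{mu nu} = d_l Gamma^l_{mu nu} - d_nu Gamma^l_{mu l}
#                        + Gamma^l_{l s} Gamma^s_{mu nu} - Gamma^l_{nu s} Gamma^s_{mu l}
def ricci(mu, nu):
    expr = 0
    for l in range(n):
        expr += sp.diff(Gamma[l][mu][nu], coords[l]) - sp.diff(Gamma[l][mu][l], coords[nu])
        for s in range(n):
            expr += Gamma[l][l][s] * Gamma[s][mu][nu] - Gamma[l][nu][s] * Gamma[s][mu][l]
    return sp.simplify(expr)

Ric = sp.Matrix(n, n, lambda mu, nu: ricci(mu, nu))
R_scalar = sp.simplify(sum(g_inv[mu, nu] * Ric[mu, nu] for mu in range(n) for nu in range(n)))
G_tensor = (Ric - sp.Rational(1, 2) * R_scalar * g).applyfunc(sp.simplify)   # Einstein tensor

# The time-time component reproduces the familiar Friedmann-like combination 3*(a')^2/a^2.
print(G_tensor[0, 0])
```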
Sign convention
The above form of the EFE is the standard established by Misner, Thorne, and Wheeler (MTW). The authors analyzed conventions that exist and classified these according to three signs ([S1] [S2] [S3]):
The third sign above is related to the choice of convention for the Ricci tensor:
With these definitions Misner, Thorne, and Wheeler classify themselves as , whereas Weinberg (1972) is , Peebles (1980) and Efstathiou et al. (1990) are , Rindler (1977), Atwater (1974), Collins Martin & Squires (1989) and Peacock (1999) are .
Authors including Einstein have used a different sign in their definition for the Ricci tensor which results in the sign of the constant on the right side being negative:
The sign of the cosmological term would change in both these versions if the metric sign convention is used rather than the MTW metric sign convention adopted here.
Equivalent formulations
Taking the trace with respect to the metric of both sides of the EFE one gets
where is the spacetime dimension. Solving for and substituting this in the original EFE, one gets the following equivalent "trace-reversed" form:
In four dimensions this reduces to
Reversing the trace again would restore the original EFE. The trace-reversed form may be more convenient in some cases (for example, when one is interested in weak-field limit and can replace in the expression on the right with the Minkowski metric without significant loss of accuracy).
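A hedged reconstruction of the manipulation described above, in D spacetime dimensions: contracting the EFE with g^{μν} gives

$$-\frac{D-2}{2}\,R + D\,\Lambda = \kappa\, T \quad\Longrightarrow\quad R = \frac{2}{D-2}\,\big(D\,\Lambda - \kappa\, T\big),$$

and substituting this back yields the trace-reversed form

$$R_{\mu\nu} - \frac{2\Lambda}{D-2}\, g_{\mu\nu} = \kappa\!\left(T_{\mu\nu} - \frac{1}{D-2}\, T\, g_{\mu\nu}\right),$$

which for D = 4 reduces to

$$R_{\mu\nu} - \Lambda g_{\mu\nu} = \kappa\!\left(T_{\mu\nu} - \tfrac{1}{2}\, T\, g_{\mu\nu}\right).$$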
The cosmological constant
In the Einstein field equations
the term containing the cosmological constant was absent from the version in which he originally published them. Einstein then included the term with the cosmological constant to allow for a universe that is not expanding or contracting. This effort was unsuccessful because:
any desired steady state solution described by this equation is unstable, and
observations by Edwin Hubble showed that our universe is expanding.
Einstein then abandoned the cosmological constant, remarking to George Gamow "that the introduction of the cosmological term was the biggest blunder of his life".
The inclusion of this term does not create inconsistencies. For many years the cosmological constant was almost universally assumed to be zero. More recent astronomical observations have shown an accelerating expansion of the universe, and to explain this a positive value of the cosmological constant is needed. The effect of the cosmological constant is negligible at the scale of a galaxy or smaller.
Einstein thought of the cosmological constant as an independent parameter, but its term in the field equation can also be moved algebraically to the other side and incorporated as part of the stress–energy tensor:
This tensor describes a vacuum state with an energy density and isotropic pressure that are fixed constants and given by
where it is assumed that the cosmological constant has SI units of m⁻² and the Einstein gravitational constant is defined as above.
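A hedged reconstruction of the rearrangement described above: moving the cosmological term to the right-hand side defines an effective vacuum stress–energy tensor,

$$G_{\mu\nu} = \kappa\left(T_{\mu\nu} + T^{\mathrm{(vac)}}_{\mu\nu}\right), \qquad T^{\mathrm{(vac)}}_{\mu\nu} = -\frac{\Lambda}{\kappa}\, g_{\mu\nu},$$

corresponding to a constant energy density and isotropic pressure

$$\rho_{\mathrm{vac}} = \frac{\Lambda c^{2}}{8\pi G}, \qquad p_{\mathrm{vac}} = -\rho_{\mathrm{vac}}\, c^{2}.$$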
The existence of a cosmological constant is thus equivalent to the existence of a vacuum energy and a pressure of opposite sign. This has led to the terms "cosmological constant" and "vacuum energy" being used interchangeably in general relativity.
Features
Conservation of energy and momentum
General relativity is consistent with the local conservation of energy and momentum expressed as
which expresses the local conservation of stress–energy. This conservation law is a physical requirement. With his field equations Einstein ensured that general relativity is consistent with this conservation condition.
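The conservation law itself is not reproduced above; in standard notation it is

$$\nabla_{\nu} T^{\mu\nu} = 0,$$

which the field equations enforce automatically through the contracted Bianchi identity ∇_ν G^{μν} = 0 together with ∇_ν g^{μν} = 0.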
Nonlinearity
The nonlinearity of the EFE distinguishes general relativity from many other fundamental physical theories. For example, Maxwell's equations of electromagnetism are linear in the electric and magnetic fields, and charge and current distributions (i.e. the sum of two solutions is also a solution); another example is Schrödinger's equation of quantum mechanics, which is linear in the wavefunction.
The correspondence principle
The EFE reduce to Newton's law of gravity by using both the weak-field approximation and the slow-motion approximation. In fact, the constant appearing in the EFE is determined by making these two approximations.
Vacuum field equations
If the energy–momentum tensor is zero in the region under consideration, then the field equations are also referred to as the vacuum field equations. By setting the stress–energy tensor to zero in the trace-reversed field equations, the vacuum field equations, also known as 'Einstein vacuum equations' (EVE), can be written as
In the case of nonzero cosmological constant, the equations are
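A hedged reconstruction of the vacuum equations stated above: with vanishing stress–energy the trace-reversed equations give, for zero and nonzero cosmological constant respectively (the latter written for four dimensions),

$$R_{\mu\nu} = 0 \qquad\text{and}\qquad R_{\mu\nu} = \Lambda\, g_{\mu\nu}.$$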
The solutions to the vacuum field equations are called vacuum solutions. Flat Minkowski space is the simplest example of a vacuum solution. Nontrivial examples include the Schwarzschild solution and the Kerr solution.
Manifolds with a vanishing Ricci tensor are referred to as Ricci-flat manifolds, and manifolds with a Ricci tensor proportional to the metric as Einstein manifolds.
Einstein–Maxwell equations
If the energy–momentum tensor is that of an electromagnetic field in free space, i.e. if the electromagnetic stress–energy tensor
is used, then the Einstein field equations are called the Einstein–Maxwell equations (with the cosmological constant taken to be zero in conventional relativity theory):
Additionally, the covariant Maxwell equations are also applicable in free space:
where the semicolon represents a covariant derivative, and the brackets denote anti-symmetrization. The first equation asserts that the 4-divergence of the 2-form is zero, and the second that its exterior derivative is zero. From the latter, it follows by the Poincaré lemma that in a coordinate chart it is possible to introduce an electromagnetic field potential such that
in which the comma denotes a partial derivative. This is often taken as equivalent to the covariant Maxwell equation from which it is derived. However, there are global solutions of the equation that may lack a globally defined potential.
Solutions
The solutions of the Einstein field equations are metrics of spacetime. These metrics describe the structure of the spacetime including the inertial motion of objects in the spacetime. As the field equations are non-linear, they cannot always be completely solved (i.e. without making approximations). For example, there is no known complete solution for a spacetime with two massive bodies in it (which is a theoretical model of a binary star system, for example). However, approximations are usually made in these cases. These are commonly referred to as post-Newtonian approximations. Even so, there are several cases where the field equations have been solved completely, and those are called exact solutions.
The study of exact solutions of Einstein's field equations is one of the activities of cosmology. It leads to the prediction of black holes and to different models of evolution of the universe.
One can also discover new solutions of the Einstein field equations via the method of orthonormal frames as pioneered by Ellis and MacCallum. In this approach, the Einstein field equations are reduced to a set of coupled, nonlinear, ordinary differential equations. As discussed by Hsu and Wainwright, self-similar solutions to the Einstein field equations are fixed points of the resulting dynamical system. New solutions have been discovered using these methods by LeBlanc and Kohli and Haslam.
The linearized EFE
The nonlinearity of the EFE makes finding exact solutions difficult. One way of solving the field equations is to make an approximation, namely, that far from the source(s) of gravitating matter, the gravitational field is very weak and the spacetime approximates that of Minkowski space. The metric is then written as the sum of the Minkowski metric and a term representing the deviation of the true metric from the Minkowski metric, ignoring higher-power terms. This linearization procedure can be used to investigate the phenomena of gravitational radiation.
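In standard notation (assumed here), the decomposition described above is

$$g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1,$$

where η_{μν} is the Minkowski metric and terms of second and higher order in h_{μν} are dropped.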
Polynomial form
Despite the EFE as written containing the inverse of the metric tensor, they can be arranged in a form that contains the metric tensor in polynomial form and without its inverse. First, the determinant of the metric in 4 dimensions can be written
using the Levi-Civita symbol; and the inverse of the metric in 4 dimensions can be written as:
Substituting this expression of the inverse of the metric into the equations then multiplying both sides by a suitable power of to eliminate it from the denominator results in polynomial equations in the metric tensor and its first and second derivatives. The Einstein-Hilbert action from which the equations are derived can also be written in polynomial form by suitable redefinitions of the fields.
See also
Conformastatic spacetimes
Einstein–Hilbert action
Equivalence principle
Exact solutions in general relativity
General relativity resources
History of general relativity
Hamilton–Jacobi–Einstein equation
Mathematics of general relativity
Numerical relativity
Ricci calculus
Notes
References
See General relativity resources.
External links
Caltech Tutorial on Relativity — A simple introduction to Einstein's Field Equations.
The Meaning of Einstein's Equation — An explanation of Einstein's field equation, its derivation, and some of its consequences
Video Lecture on Einstein's Field Equations by MIT Physics Professor Edmund Bertschinger.
Arch and scaffold: How Einstein found his field equations Physics Today November 2015, History of the Development of the Field Equations
External images
The Einstein field equation on the wall of the Museum Boerhaave in downtown Leiden
Suzanne Imber, "The impact of general relativity on the Atacama Desert", Einstein field equation on the side of a train in Bolivia.
Albert Einstein
Equations of physics
General relativity
Partial differential equations | Einstein field equations | [
"Physics",
"Mathematics"
] | 2,743 | [
"Equations of physics",
"Mathematical objects",
"Equations",
"General relativity",
"Theory of relativity"
] |
425,225 | https://en.wikipedia.org/wiki/Los%20Angeles%20Aqueduct | The Los Angeles Aqueduct system, comprising the Los Angeles Aqueduct (Owens Valley aqueduct) and the Second Los Angeles Aqueduct, is a water conveyance system, built and operated by the Los Angeles Department of Water and Power. The Owens Valley aqueduct was designed and built by the city's water department, at the time named The Bureau of Los Angeles Aqueduct, under the supervision of the department's Chief Engineer William Mulholland. The system delivers water from the Owens River in the eastern Sierra Nevada mountains to Los Angeles.
The aqueduct's construction was controversial from the start, as water diversions to Los Angeles eliminated the Owens Valley as a viable farming community. Clauses in the city's charter originally stated that the city could not sell or provide surplus water to any area outside the city, forcing adjacent communities to annex themselves into Los Angeles.
The aqueduct's infrastructure also included the completion of the St. Francis Dam in 1926 to provide storage in case of disruption to the system. The dam's collapse two years later killed at least 431 people, halted the rapid pace of annexation, and eventually led to the formation of the Metropolitan Water District of Southern California to build and operate the Colorado River Aqueduct to bring water from the Colorado River to Los Angeles County.
The continued operation of the Los Angeles Aqueduct has led to public debate, legislation, and court battles over its environmental impacts on Mono Lake and other ecosystems.
First Los Angeles Aqueduct
Construction
The aqueduct project began in 1905 when the voters of Los Angeles approved a bond for the 'purchase of lands and water and the inauguration of work on the aqueduct'. On June 12, 1907, a second bond was passed with a budget of to fund construction.
Construction began in 1908 and was divided into eleven divisions. The city acquired three limestone quarries and two tufa quarries, and it constructed and operated a cement plant in Monolith, California, which could produce 1,200 barrels of Portland cement per day. Regrinding mills were also built and operated by the city at the tufa quarries. To move 14 million ton-miles of freight, the city contracted with Southern Pacific to build a 118-mile-long rail system from the Monolith mills to Olancha.
The number of men who were on the payroll the first year was 2,629 and this number peaked at 6,060 in May 1909. In 1910, employment dropped to 1,150 due to financial reasons but rebounded later in the year. In 1911 and 1912, employment ranged from 2,800 to 3,800 workers. The number of laborers working on the aqueduct at its peak was 3,900. In 1913, the City of Los Angeles completed construction of the first Los Angeles Aqueduct.
Route
The aqueduct as originally constructed consisted of six storage reservoirs and of conduit. Beginning north of Blackrock (Inyo County), the aqueduct diverts the Owens River into an unlined canal to begin its journey south to the Lower San Fernando Reservoir. This reservoir was later renamed the Lower Van Norman Reservoir.
The original project consisted of of open unlined canal, of lined open canal, of covered concrete conduit, of concrete tunnels, steel siphons, of railroad track, two hydroelectric plants, three cement plants, of power lines, of telephone line, of roads and was later expanded with the construction of the Mono Extension and the Second Los Angeles Aqueduct.
The aqueduct uses gravity alone to move the water and also uses the water to generate electricity, which makes it cost-efficient to operate.
Reactions by impacted communities
The construction of the Los Angeles Aqueduct effectively eliminated the Owens Valley as a viable farming community and eventually devastated the Owens Lake ecosystem. The so-called "San Fernando Syndicate" – Fred Eaton, Mulholland, Harrison Gray Otis (the publisher of The Los Angeles Times), Henry Huntington (an executive of the Pacific Electric Railway), and other wealthy individuals – was a group of investors who bought land in the San Fernando Valley allegedly based on inside knowledge that the Los Angeles aqueduct would soon irrigate it and encourage development. Although there is disagreement over whether the "syndicate" was a "diabolical" cabal or only a group that united the Los Angeles business community behind supporting the aqueduct, Eaton, Mulholland and others connected with the project have long been accused of using deceptive tactics and underhanded methods to obtain water rights, of blocking the Bureau of Reclamation from building water infrastructure for the residents of Owens Valley, and of creating a false sense of urgency around the completion of the aqueduct for Los Angeles residents.
By the 1920s, the aggressive pursuit of water rights and the diversion of the Owens River precipitated the outbreak of violence known as the California water wars. Farmers in Owens Valley, following a series of unmet deadlines from LADWP, attacked infrastructure, dynamiting the aqueduct numerous times, and opened sluice gates to divert the flow of water back into Owens Lake. The lake has never been refilled, and is now maintained with a minimum level of surface water to prevent the introduction of dangerous, toxic lake-floor dust into the local community.
St. Francis Dam failure
In 1917, The Bureau of Los Angeles Aqueduct sought to build a holding reservoir to regulate flow and provide hydroelectric power and storage in case of disruption to the aqueduct system. The initial site chosen was in Long Valley along the Owens River, but Eaton, who had bought up much of the valley in anticipation of the need for a reservoir, refused to sell the land at the price offered by Los Angeles. Mulholland then made the decision to move the reservoir to San Francisquito Canyon above what is now Santa Clarita, California. The resulting St. Francis Dam was completed in 1926 and created a reservoir capacity of 38,000 acre-feet (47,000,000 m3). On March 12, 1928, the dam catastrophically failed, sending a wall of water down the canyon, ultimately reaching the Pacific Ocean near Ventura and Oxnard, and killing at least 431 people. The resulting investigation and trial led to the retirement of William Mulholland as the head of the Los Angeles Bureau of Water Works and Supply in 1929. The dam failure is the worst man-made flood disaster in the US in the 20th century and the second largest single-event loss of life in California history after the 1906 San Francisco earthquake.
Mono Basin Extension
In an effort to find more water, the city of Los Angeles reached farther north. In 1930, Los Angeles voters passed a third bond to buy land in the Mono Basin and fund the Mono Basin extension. The extension diverted flows from Rush Creek, Lee Vining Creek, Walker Creek, and Parker Creek that would have flowed into Mono Lake. The construction of the Mono extension consisted of an intake at Lee Vining Creek, the Lee Vining conduit to the Grant Reservoir on Rush Creek, which would have a capacity of , the Mono Craters Tunnel to the Owens River, and a second reservoir, later named Crowley Lake with a capacity of in Long Valley at the head of the Owens River Gorge.
Completed in 1940, diversions began in 1941. The Mono Extension has a design capacity of of flow to the aqueduct. However, the flow was limited to due to the limited downstream capacity of the Los Angeles Aqueduct. Full appropriation of the water could not be met until the second aqueduct was completed in 1970.
The Mono Extension's impact on Mono Basin and litigation
From 1940 to 1970, water exports through the Mono Extension averaged per year and peaked at in 1974. Export licenses granted by the State Water Resources Control Board (SWRCB) in 1974 increased exports to per year. These export levels severely impacted the region's fish habitat, lake level, and air quality, which led to a series of lawsuits. The results of the litigation culminated with a SWRCB decision to restore fishery protection (stream) flows to specified minimums, and raise Mono Lake to above sea level. The agreement limited further exports from the Mono Basin to or less per year during the transition period.
Second Los Angeles Aqueduct
In 1956, the State Department of Water Resources reported that Los Angeles was exporting only of water of the available in the Owens Valley and Mono Basin. Three years later, the State Water Rights Board warned Los Angeles that it could lose rights to the water it was permitted for but not appropriating. Faced with the possible loss of future water supply, Los Angeles began the five-year construction of the second aqueduct in 1965 at a cost of US$89 million. Once the city received diversion permits, water exports jumped in 1970, adding 110,000 AF that year into the aqueduct system. By 1974, exports climbed to per year. Unlike the First Aqueduct, which was built entirely by Public Works, the Second Los Angeles Aqueduct was primarily built on contract by various private construction firms including R.A. Wattson Co., Winston Bros., and the Griffith Co. The Los Angeles Department of Water and Power managed the project and performed some finishing construction on the Mojave conduit and the Jawbone & Dove Spring pipelines.
Route
The aqueduct was designed to flow and begins at the Merritt Diversion Structure at the junction of the North and South Haiwee Reservoirs, south of Owens Lake, and runs roughly parallel to the first aqueduct. Water flows entirely by gravity from an elevation of at the Haiwee Reservoir through two power drops to an elevation of at the Upper Van Norman Reservoir.
The Second Aqueduct was not built as a single contiguous conduit. For design and construction purposes the aqueduct was divided into Northern and Southern sections and the two are connected by the San Francisquito Tunnels, which are part of the First Aqueduct.
The Northern Section carries water starting at the North Haiwee Reservoir through the Haiwee Bypass passing around the South Haiwee Reservoir. The flow then continues south through a series of pressure pipelines and concrete conduits where it connects with the First Aqueduct at the North Portal of the Elizabeth Tunnel near the Fairmont Reservoir.
The San Francisquito Tunnels (which include the Elizabeth Tunnel) have a flow capacity of and are large enough to handle the flow of both aqueducts. Once the combined flow reaches the penstocks above Power Plant #2, water is diverted into the Southern Section of the second aqueduct through the Drinkwater Tunnel to the Drinkwater Reservoir.
The last segment of pipe, known as the Saugus Pipeline, carries water south past Bouquet Canyon, Soledad Canyon and Placerita Canyon in the city of Santa Clarita. From there it roughly parallels Sierra Highway before it enters Magazine Canyon near the Terminal structure and Cascades. Water from the Terminal structure can then flow to either the Cascade or penstock to the Foothill Power Plant and into the Upper Van Norman Reservoir.
In addition to the construction in the Northern and Southern sections, improvements were also made to the lined canal between the Alabama Gates and the North Haiwee Reservoir in the Northern Section that consisted of adding sidewalls to both sides of the canal and the raising of overcrossings. This work increased the capacity of the lined canal from to cfs.
Second aqueduct's impact on the water system
The increased flows provided by the second aqueduct lasted only from 1971 through 1988. In 1974 the environmental consequences of the higher exports were first being recognized in the Mono Basin and Owens Valley. This was followed by a series of court ordered restrictions imposed on water exports, which resulted in Los Angeles losing water. In 2005, the Los Angeles Urban Water Management Report reported that 40–50% of the aqueduct's historical supply is now devoted to ecological resources in Mono and Inyo counties.
Influence on Los Angeles and the county
From 1909 to 1928, the city of Los Angeles grew from to . This was due largely to the aqueduct, and the city's charter which stated that the City of Los Angeles could not sell or provide surplus water to any area outside the city.
Outlying areas relied on wells and creeks for water and, as they dried up, the people in those areas realized that if they were going to be able to continue irrigating their farms and provide themselves domestic water, they would have to annex themselves to the City of Los Angeles.
Growth was so rapid that it appeared as if the city of Los Angeles would eventually assume the size of the entire county. William Mulholland continued adding capacity to the aqueduct, building the St. Francis Dam that would impound water creating the San Francisquito Reservoir, filed for additional water from the Colorado River, and began sending engineers and miners to clear the heading at the San Jacinto Tunnel that he knew was key to the construction of the Colorado River Aqueduct.
The aqueduct's water provided developers with the resources to quickly develop the San Fernando Valley and Los Angeles through World War II. Mulholland's role in the vision and completion of the aqueduct and the growth of Los Angeles into a large metropolis is recognized with the William Mulholland Memorial Fountain, built in 1940 at Riverside Drive and Los Feliz Boulevard in Los Feliz. Mulholland Drive and Mulholland Dam are both named after him.
Many more cities and unincorporated areas would likely have annexed into the city of Los Angeles if the St. Francis Dam had not collapsed. The catastrophic failure of the St. Francis Dam in 1928 killed an estimated 431 people, flooded parts of Santa Clarita, and devastated much of the Santa Clara River Valley in Ventura County.
The failure of the dam raised the question in a number of people's minds whether the city had engineering competence and capability to manage such a large project as the Colorado River Aqueduct despite the fact that they had built the Los Angeles Aqueduct. After the collapse, the pace of annexation came to a rapid halt when eleven nearby cities including Burbank, Glendale, Pasadena, Beverly Hills, San Marino, Santa Monica, Anaheim, Colton, Santa Ana, and San Bernardino decided to form the Metropolitan Water District with Los Angeles. The city's growth following the formation of the MWD would be limited to 27.65 square miles.
Farmers
In 1905, the city of Los Angeles began the process of acquiring water and land rights in the Owens Valley region in preparation for the construction of the Los Angeles Aqueduct, initially misleading farmers who assumed that the purchases were intended for a local water project. Because each farmer held only a small portion of the total water in Owens Valley, no single farmer had the means to resist Los Angeles's drive to acquire water and land rights in the region. Accordingly, farmers formed collective groups to increase their bargaining power, the most notable being the Owens Valley Irrigation District. Nevertheless, the city of Los Angeles bypassed these collective efforts by engaging in checkerboarding – purchasing and acquiring the land surrounding an opposing farmer's holdings and thereby essentially circumventing the need to buy that farmer's land at all. By 1934, the city of Los Angeles had acquired 95% of the agricultural land in the Owens Valley.
Indigenous
According to archaeologists and anthropologists, the Paiute people, also known as Nüńwa Paya Hūp Ca’á Otūǔ’m (translates to “We are Water Ditch Coyote children"), settled in the Owens Valley region as early as 600 C.E., having long adopted and specialized in a hunting and harvesting economy.
As a result of mining and agricultural development in California in the mid-19th century following the annexation of the state as well as the subsequent influx of white settlers coming into the region, many Paiute people were relocated to what is now modern-day Porterville, California in 1863. The Paiute people that remained found it difficult to continue harvesting indigenous food sources as the ongoing siphoning of the Owens Valley water by the city of Los Angeles for the aqueduct project dramatically altered the supply of water in the surrounding region.
In 1937, the United States federal government signed into law the Bankhead-Jones Farm Tenant Act, commonly known as the Land Exchange Act, which allowed Paiute people to trade 2,914 acres of allotted land to the city of Los Angeles in exchange for 1,392 acres of hospitable lands, which became the Bishop, Big Pine, and Lone Pine Reservations located east of the Sierra Nevada. Nevertheless, the U.S. federal government was unable to secure water rights for the Paiutes from the city of Los Angeles, which insisted that a two-thirds vote of city residents was required to transfer water. This left the Paiute people without adequate amounts of water to accommodate their growing population.
In 1994, the Department of the Interior opened an ongoing investigation looking into the water rights issues between the city of Los Angeles and the Paiute people, led by the Owens Valley Indian Water Commission, a consortium comprising the Bishop, Big Pine, and Lone Pine Reservations.
Owens Valley Ecosystem and Agriculture
The impact of the Los Angeles Aqueduct project on the Owens Valley region was immediate and detrimental to the future agricultural work of local farmers. In 1923, in an effort to increase the water supply, the city of Los Angeles began purchasing vast parcels of land and commenced the drilling of new wells in the region, significantly lowering the level of groundwater in the Owens Valley and even affecting farmers who “did not sell to the city’s representatives.” By 1970, constant groundwater pumping by the city of Los Angeles had virtually dried up all the major springs in the Owens Valley, impacting the surrounding wetlands, springs, meadows, and marsh habitats.
Ecological Disruption
The consequent transfer of water out of Owens Lake and Mono Lake decimated the natural ecology of the region, transforming what was a “lush terrain into desert.” Furthermore, alkaline sediment from the receding shorelines of both lakes, exposed by the increased water diversions, was lifted by dust storms and carried into areas of human settlement, increasing the risk of respiratory illnesses and cancer. Some of the first victims of these dust storms were Japanese Americans interned at the Manzanar War Relocation Center during World War II.
Despite the ecological and environmental destruction of both Mono and Owens Lakes from the construction of the Los Angeles Aqueduct, the Owens Valley and adjoining Mono Lake remain a sanctuary for many bird species that migrate to the region. The legal notion of a “Public Trust Doctrine,” invoked by community members of the Owens Valley, has been successful in restoring regions of Mono Lake, the Mono Highlands, and the Owens Valley impacted by the Los Angeles Aqueduct, as evidenced by the re-watering projects that have spurred revitalization of local natural ecosystems.
In 1991, the City of Los Angeles signed the Inyo-LA Long Term Water Agreement despite objections from community members and stakeholders. Concessions of the deal included mitigation projects like the Lower Owens River Project that sought to reduce further air contamination and health risks associated with the dust storms as well as future preservation of the ecology of the Owens Valley.
In 2001, the Los Angeles Department of Water and Power commenced studies on the environmental impact of dust storms, putting forward proposals for re-watering both Mono Lake and Owens Lake.
In 2006, work commenced on the Lower Owens River Project 15 years after it was signed, helping boost the number of fish in the river, restoring marsh habitats, and promoting recreational activities such as fishing and canoeing by re-watering 62 miles of riverbed.
In 2009, the Los Angeles Department of Water and Power resumed work on a proposed master plan for preserving the natural habitat of Owens Lake as part of its Owens Lake Dust Mitigation Program (OLDMP).
Future Water Needs
Los Angeles faces significant challenges in securing its water supply for the future due to climate change, population growth, and increasing competition for resources. The city's reliance on imported water from the Los Angeles Aqueduct (LAA), the Colorado River Aqueduct, and the California State Water Project is becoming increasingly strained. These sources are threatened by reduced Sierra Nevada snowpack, prolonged droughts, and legal disputes over water rights.
Efforts to address these challenges include transitioning to more sustainable, local sources such as reclaimed water, desalination, and rainwater harvesting. The city is also exploring integrated water management strategies, including groundwater treatment and water reuse. Additionally, demand management measures, such as incentives for water conservation and public awareness campaigns, are being implemented to reduce dependency on imported water. Innovations in water pricing and regulation are expected to play a vital role in managing future demand.
Water Scarcity
Water scarcity is a growing concern in Los Angeles, driven by factors such as climate change, population growth, and environmental obligations. Reduced Sierra Nevada snowpack and changing precipitation patterns have jeopardized the reliability of the Los Angeles Aqueduct, while prolonged droughts strain access to water from the Colorado River and other sources.
Los Angeles must also balance its water needs with environmental restoration efforts. Legal requirements mandate water retention in Owens Valley and Mono Lake to address ecological damage caused by aqueduct operations. At the same time, agriculture consumes the majority of water statewide, leading to conflicts between urban and rural water use. Despite these challenges, urban water conservation measures, including rainwater harvesting and improved irrigation practices, offer promising strategies for mitigating scarcity. Increased public awareness of water scarcity's historical and environmental roots has also led to advocacy for more sustainable water management practices, such as preserving the Los Angeles River.
In popular culture
Saugus High School derives the name of its daily newsletter, The Pipeline, from an exposed portion of the first aqueduct that passes southwest of the school's property.
San Francisquito Canyon and DWP Power House #1 are featured in Visiting... with Huell Howser Episode 424.
California Historical Landmark – Cascades
The Cascades, which was completed on November 5, 1913, is located near the intersection of Foothill Boulevard and Balboa Boulevard, four miles northwest of San Fernando. It was designated as a California Historical Landmark on July 28, 1958.
Gallery
See also
American Water Landmark
California Aqueduct
Colorado River Aqueduct
State Water Project
Owensmouth
References
Notes
Further reading
External links
LADWP: official Los Angeles Aqueduct website
UCLA: Los Angeles Aqueduct Digital Platform
Los Angeles Aqueduct Landscape Atlas
Mono Lake Committee Website
LADWP: History page on William Mulholland
Los Angeles Aqueduct Slideshow
The William Mulholland Memorial Fountain
Image of workers making repairs on a damaged section of the Los Angeles Aqueduct in No-Name Canyon, Inyo County vicinity, [about 1927]. Los Angeles Times Photographic Archive (Collection 1429). UCLA Library Special Collections, Charles E. Young Research Library, University of California, Los Angeles.
Aqueducts in California
Interbasin transfer
Water in California
Aqueduct
Aqueduct
History of the San Fernando Valley
History of Inyo County, California
History of Mono County, California
Owens Valley
Sierra Nevada (United States)
Transportation buildings and structures in Inyo County, California
Transportation buildings and structures in Kern County, California
Transportation buildings and structures in Los Angeles County, California
Transportation buildings and structures in Mono County, California
Buildings and structures in the San Fernando Valley
Historic American Buildings Survey in California
Historic American Engineering Record in California
Historic Civil Engineering Landmarks
Los Angeles Historic-Cultural Monuments
1913 establishments in California
Hydroelectric power plants in California | Los Angeles Aqueduct | [
"Engineering",
"Environmental_science"
] | 4,680 | [
"Hydrology",
"Civil engineering",
"Interbasin transfer",
"Historic Civil Engineering Landmarks"
] |
425,290 | https://en.wikipedia.org/wiki/Kilogram-force | The kilogram-force (kgf or kgF), or kilopond (kp, from Latin pondus, 'weight'), is a non-standard gravitational metric unit of force. It is not accepted for use with the International System of Units (SI) and is deprecated for most uses. The kilogram-force is equal to the magnitude of the force exerted on one kilogram of mass in a gravitational field of standard gravity (9.80665 m/s2, a conventional value approximating the average magnitude of gravity on Earth). That is, it is the weight of a kilogram under standard gravity. One kilogram-force is therefore equal to 9.80665 N. Similarly, a gram-force is 9.80665 mN, and a milligram-force is 9.80665 μN.
History
The gram-force and kilogram-force were never well-defined units until the CGPM adopted a standard acceleration of gravity of 9.80665 m/s2 for this purpose in 1901, though they had been used in low-precision measurements of force before that time. Even then, the proposal to define kilogram-force as a standard unit of force was explicitly rejected. Instead, the newton was proposed in 1913 and accepted in 1948.
The kilogram-force has never been a part of the International System of Units (SI), which was introduced in 1960. The SI unit of force is the newton.
Prior to this, the units were widely used in much of the world. They are still in use for some purposes; for example, they are used to specify the tension of bicycle spokes, the draw weight of bows in archery, and the tensile strength of electronics bond wire. They also appear in informal references to pressure (as the technically incorrect kilogram per square centimetre, omitting -force; the kilogram-force per square centimetre is the technical atmosphere, the value of which is very near those of both the bar and the standard atmosphere), and in the definition of the "metric horsepower" (PS) as 75 metre-kiloponds per second. In addition, the kilogram-force was the standard unit used for Vickers hardness testing.
In 1940s Germany, the thrust of a rocket engine was measured in kilograms-force, and in the Soviet Union it remained the primary unit for thrust in the Russian space program until at least the late 1980s. Dividing the thrust in kilograms-force by the mass of an engine or a rocket in kilograms conveniently gives the thrust-to-weight ratio; dividing the thrust by the propellant consumption rate (mass flow rate) in kilograms per second gives the specific impulse in seconds.
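A short numerical illustration of this convenience (the thrust, engine mass, and flow-rate values below are hypothetical, not taken from the source):

    # Python sketch with made-up values; only the conversion factor is exact.
    G0 = 9.80665                 # standard gravity, m/s^2 (1 kgf = 9.80665 N)

    thrust_kgf = 100_000.0       # engine thrust quoted in kilograms-force
    engine_mass_kg = 1_500.0     # engine dry mass, kg
    mdot_kg_s = 350.0            # propellant mass flow rate, kg/s

    thrust_N = thrust_kgf * G0           # same thrust expressed in newtons
    twr = thrust_kgf / engine_mass_kg    # thrust-to-weight ratio (dimensionless)
    isp_s = thrust_kgf / mdot_kg_s       # specific impulse, seconds

    print(f"{thrust_N:.0f} N, TWR = {twr:.1f}, Isp = {isp_s:.1f} s")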
The term "kilopond" has been declared obsolete.
Related units
The tonne-force, metric ton-force, megagram-force, and megapond (Mp) are each 1000 kilograms-force.
The decanewton or dekanewton (daN), exactly 10 N, is used in some fields as an approximation to the kilogram-force, because it is close to the 9.80665 N of 1 kgf.
The gram-force is one thousandth of a kilogram-force.
See also
Metrology
Avoirdupois
References
Units of force
Non-SI metric units | Kilogram-force | [
"Physics",
"Mathematics"
] | 641 | [
"Force",
"Physical quantities",
"Non-SI metric units",
"Quantity",
"Units of force",
"Units of measurement"
] |
425,310 | https://en.wikipedia.org/wiki/Elastic%20modulus | An elastic modulus (also known as modulus of elasticity (MOE)) is a quantity that measures an object's or substance's resistance to being deformed elastically (i.e., non-permanently) when a stress is applied to it.
Definition
The elastic modulus of an object is defined as the slope of its stress–strain curve in the elastic deformation region: a stiffer material will have a higher elastic modulus. An elastic modulus has the general form: elastic modulus = stress / strain,
where stress is the force causing the deformation divided by the area to which the force is applied and strain is the ratio of the change in some parameter caused by the deformation to the original value of the parameter.
Since strain is a dimensionless quantity, the units of the elastic modulus will be the same as the units of stress.
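A short numerical illustration (all values hypothetical): for a bar pulled along its axis, the modulus is simply the measured stress divided by the measured strain.

    # Minimal sketch with made-up specimen data.
    force_N = 5_000.0          # axial force applied to the specimen
    area_m2 = 1.0e-4           # cross-sectional area (1 cm^2)
    length_m = 0.50            # original gauge length
    elongation_m = 2.5e-4      # measured change in length

    stress = force_N / area_m2           # Pa
    strain = elongation_m / length_m     # dimensionless
    modulus = stress / strain            # Pa, same units as stress

    print(f"stress = {stress:.3e} Pa, strain = {strain:.3e}, modulus = {modulus:.3e} Pa")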
Elastic constants and moduli
Elastic constants are specific parameters that quantify the stiffness of a material in response to applied stresses and are fundamental in defining the elastic properties of materials. These constants form the elements of the stiffness matrix in tensor notation, which relates stress to strain through linear equations in anisotropic materials. Commonly denoted as Cijkl, where i,j,k, and l are the coordinate directions, these constants are essential for understanding how materials deform under various loads.
Types of elastic modulus
Specifying how stress and strain are to be measured, including directions, allows for many types of elastic moduli to be defined. The four primary ones are:
Young's modulus (E) describes tensile and compressive elasticity, or the tendency of an object to deform along an axis when opposing forces are applied along that axis; it is defined as the ratio of tensile stress to tensile strain. It is often referred to simply as the elastic modulus.
The shear modulus or modulus of rigidity (G or Lamé second parameter) describes an object's tendency to shear (the deformation of shape at constant volume) when acted upon by opposing forces; it is defined as shear stress over shear strain. The shear modulus is part of the derivation of viscosity.
The bulk modulus (K) describes volumetric elasticity, or the tendency of an object to deform in all directions when uniformly loaded in all directions; it is defined as volumetric stress over volumetric strain, and is the inverse of compressibility. The bulk modulus is an extension of Young's modulus to three dimensions.
Flexural modulus (Eflex) describes the object's tendency to flex when acted upon by a moment.
Two other elastic moduli are Lamé's first parameter, λ, and the P-wave modulus, M, as used in the table of modulus comparisons given below the references. Homogeneous and isotropic (similar in all directions) materials (solids) have their (linear) elastic properties fully described by two elastic moduli, and one may choose any pair. Given a pair of elastic moduli, all other elastic moduli can be calculated according to formulas in the table at the end of the page.
Inviscid fluids are special in that they cannot support shear stress, meaning that the shear modulus is always zero. This also implies that Young's modulus for this group is always zero.
In some texts, the modulus of elasticity is referred to as the elastic constant, while the inverse quantity is referred to as elastic modulus.
Density functional theory calculation
Density functional theory (DFT) provides reliable methods for determining several forms of elastic moduli that characterise distinct features of a material's response to mechanical stresses. DFT software such as VASP, Quantum ESPRESSO, or ABINIT can be used. In all cases, convergence tests should be performed to ensure that results are independent of computational parameters such as the density of the k-point mesh, the plane-wave cutoff energy, and the size of the simulation cell.
Young's modulus (E) - apply small, incremental changes in the lattice parameter along a specific axis and compute the corresponding stress response using DFT. Young’s modulus is then calculated as E=σ/ϵ, where σ is the stress and ϵ is the strain.
Initial structure: Start with a relaxed structure of the material. All atoms should be in a state of minimum energy (i.e., minimum energy state with zero forces on atoms) before any deformations are applied.
Incremental uniaxial strain: Apply small, incremental strains to the crystal lattice along a particular axis. This strain is usually uniaxial, meaning it stretches or compresses the lattice in one direction while keeping other dimensions constant or periodic.
Calculate stresses: For each strained configuration, run a DFT calculation to compute the resulting stress tensor. This involves solving the Kohn-Sham equations to find the ground state electron density and energy under the strained conditions.
Stress-strain curve: Plot the calculated stress versus the applied strain to create a stress-strain curve. The slope of the initial, linear portion of this curve gives Young's modulus. Mathematically, Young's modulus E is calculated using the formula E=σ/ϵ, where σ is the stress and ϵ is the strain.
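A minimal post-processing sketch of this last step (the strain and stress values below are placeholders standing in for output from an external DFT code such as those named above):

    # Fit the linear region of the stress-strain data to extract E.
    import numpy as np

    strains = np.array([-0.010, -0.005, 0.000, 0.005, 0.010])   # applied uniaxial strains
    stresses_GPa = np.array([-1.95, -0.98, 0.00, 0.97, 1.96])   # computed axial stresses

    E_GPa = np.polyfit(strains, stresses_GPa, 1)[0]   # slope = Young's modulus
    print(f"Young's modulus ≈ {E_GPa:.1f} GPa")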
Shear modulus (G)
Initial structure: Start with a relaxed structure of the material. All atoms should be in a state of minimum energy, with zero residual forces on the atoms, before any deformations are applied.
Shear strain application: Apply small increments of shear strain to the material. Shear strains are typically off-diagonal components in the strain tensor, affecting the shape but not the volume of the crystal cell.
Stress calculation: For each configuration with applied shear strain, perform a DFT calculation to determine the resulting stress tensor.
Shear stress vs. shear strain curve: Plot the calculated shear stress against the applied shear strain for each increment. The slope of the stress-strain curve in its linear region provides the shear modulus, G=τ/γ, where τ is the shear stress and γ is the applied shear strain.
Bulk modulus (K)
Initial structure: Start with a relaxed structure of the material. It’s crucial that the material is fully optimized, ensuring that any changes in volume are purely due to applied pressure.
Volume changes: Incrementally change the volume of the crystal cell, either compressing or expanding it. This is typically done by uniformly scaling the lattice parameters.
Calculate pressure: For each altered volume, perform a DFT calculation to determine the pressure required to maintain that volume. DFT allows for the calculation of stress tensors which provide a direct measure of the internal pressure.
Pressure-volume curve: Plot the applied pressure against the resulting volume change. The bulk modulus can be calculated from the slope of this curve in the linear elastic region. The bulk modulus is defined as K = −V dP/dV, where V is the original volume, dP is the change in pressure, and dV is the change in volume.
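A minimal post-processing sketch (the volume and pressure values are placeholders standing in for DFT output) showing how K = −V dP/dV is obtained from the slope of the pressure–volume data near equilibrium:

    import numpy as np

    V0 = 40.0                                                   # equilibrium cell volume, Å^3
    volumes = V0 * np.array([0.97, 0.985, 1.0, 1.015, 1.03])    # scaled volumes
    pressures_GPa = np.array([5.1, 2.5, 0.0, -2.4, -4.7])       # computed pressures

    dP_dV = np.polyfit(volumes, pressures_GPa, 1)[0]   # slope of P(V) near V0
    K_GPa = -V0 * dP_dV                                # K = -V dP/dV
    print(f"Bulk modulus ≈ {K_GPa:.1f} GPa")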
See also
Bending stiffness
Dynamic modulus
Elastic limit
Elastic wave
Flexural modulus
Hooke's Law
Impulse excitation technique
Proportional limit
Stiffness
Tensile strength
Transverse isotropy
Elasticity tensor
References
Further reading
Elasticity (physics)
Deformation (mechanics)
Mechanical quantities | Elastic modulus | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,500 | [
"Physical phenomena",
"Mechanical quantities",
"Physical quantities",
"Elasticity (physics)",
"Deformation (mechanics)",
"Quantity",
"Materials science",
"Mechanics",
"Physical properties"
] |
425,779 | https://en.wikipedia.org/wiki/Compression%20%28physics%29 | In mechanics, compression is the application of balanced inward ("pushing") forces to different points on a material or structure, that is, forces with no net sum or torque directed so as to reduce its size in one or more directions. It is contrasted with tension or traction, the application of balanced outward ("pulling") forces; and with shearing forces, directed so as to displace layers of the material parallel to each other. The compressive strength of materials and structures is an important engineering consideration.
In uniaxial compression, the forces are directed along one direction only, so that they act towards decreasing the object's length along that direction. The compressive forces may also be applied in multiple directions; for example inwards along the edges of a plate or all over the side surface of a cylinder, so as to reduce its area (biaxial compression), or inwards over the entire surface of a body, so as to reduce its volume.
Technically, a material is under a state of compression, at some specific point and along a specific direction n, if the normal component of the stress vector across a surface with normal direction n is directed opposite to n. If the stress vector itself is opposite to n, the material is said to be under normal compression or pure compressive stress along n. In a solid, the amount of compression generally depends on the direction n, and the material may be under compression along some directions but under traction along others. If the stress vector is purely compressive and has the same magnitude for all directions, the material is said to be under isotropic compression, hydrostatic compression, or bulk compression. This is the only type of static compression that liquids and gases can bear. It affects the volume of the material, as quantified by the bulk modulus and the volumetric strain.
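A minimal sketch (the stress tensor below is hypothetical) of how the sign of the normal component n·(σn) distinguishes compression from traction, using the common convention that tensile stress is positive:

    import numpy as np

    sigma = np.array([[-50.0,   0.0,  0.0],    # MPa; negative normal entries
                      [  0.0, -50.0,  0.0],    # indicate compressive stress
                      [  0.0,   0.0, 20.0]])   # along the corresponding axes

    n = np.array([1.0, 0.0, 0.0])              # unit vector: direction of interest
    normal_stress = n @ sigma @ n              # normal component of the stress vector

    state = "compression" if normal_stress < 0 else "tension"
    print(f"normal stress along n = {normal_stress} MPa -> {state}")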
The inverse process of compression is called decompression, dilation, or expansion, in which the object enlarges or increases in volume.
In a mechanical wave, which is longitudinal, the medium is displaced in the wave's direction, resulting in areas of compression and rarefaction.
Effects
When put under compression (or any other type of stress), every material will suffer some deformation, even if imperceptible, that causes the average relative positions of its atoms and molecules to change. The deformation may be permanent, or may be reversed when the compression forces disappear. In the latter case, the deformation gives rise to reaction forces that oppose the compression forces, and may eventually balance them.
Liquids and gases cannot bear steady uniaxial or biaxial compression; they will deform promptly and permanently and will not offer any permanent reaction force. However they can bear isotropic compression, and may be compressed in other ways momentarily, for instance in a sound wave.
Every ordinary material will contract in volume when put under isotropic compression, contract in cross-section area when put under uniform biaxial compression, and contract in length when put into uniaxial compression. The deformation may not be uniform and may not be aligned with the compression forces. What happens in the directions where there is no compression depends on the material. Most materials will expand in those directions, but some special materials will remain unchanged or even contract. In general, the relation between the stress applied to a material and the resulting deformation is a central topic of continuum mechanics.
Uses
Compression of solids has many implications in materials science, physics and structural engineering, for compression yields noticeable amounts of stress and tension.
By inducing compression, mechanical properties such as compressive strength or modulus of elasticity, can be measured.
Compression machines range from very small table top systems to ones with over 53 MN capacity.
Gases are often stored and shipped in highly compressed form, to save space. Slightly compressed air or other gases are also used to fill balloons, rubber boats, and other inflatable structures. Compressed liquids are used in hydraulic equipment and in fracking.
In engines
Internal combustion engines
In internal combustion engines the explosive mixture gets compressed before it is ignited; the compression improves the efficiency of the engine. In the Otto cycle, for instance, the second stroke of the piston effects the compression of the charge which has been drawn into the cylinder by the first forward stroke.
Steam engines
The term is applied to the arrangement by which the exhaust valve of a steam engine is made to close, shutting a portion of the exhaust steam in the cylinder, before the stroke of the piston is quite complete. This steam being compressed as the stroke is completed, a cushion is formed against which the piston does work while its velocity is being rapidly reduced, and thus the stresses in the mechanism due to the inertia of the reciprocating parts are lessened. This compression, moreover, obviates the shock which would otherwise be caused by the admission of the fresh steam for the return stroke.
See also
Buckling
Container compression test
Compression member
Compressive strength
Longitudinal wave
P-wave
Rarefaction
Strength of materials
Résal effect
Plane strain compression test
References
Continuum mechanics
Mechanical engineering | Compression (physics) | [
"Physics",
"Engineering"
] | 1,022 | [
"Applied and interdisciplinary physics",
"Classical mechanics",
"Mechanical engineering",
"Continuum mechanics"
] |
425,850 | https://en.wikipedia.org/wiki/Thermodynamic%20system | A thermodynamic system is a body of matter and/or radiation separate from its surroundings that can be studied using the laws of thermodynamics.
According to their internal processes, thermodynamic systems are classified as passive or active: passive systems, in which available energy is merely redistributed, and active systems, in which one type of energy is converted into another.
Depending on its interaction with the environment, a thermodynamic system may be an isolated system, a closed system, or an open system. An isolated system does not exchange matter or energy with its surroundings. A closed system may exchange heat, experience forces, and exert forces, but does not exchange matter. An open system can interact with its surroundings by exchanging both matter and energy.
The physical condition of a thermodynamic system at a given time is described by its state, which can be specified by the values of a set of thermodynamic state variables. A thermodynamic system is in thermodynamic equilibrium when there are no macroscopically apparent flows of matter or energy within it or between it and other systems.
Overview
Thermodynamic equilibrium is characterized not only by the absence of any flow of mass or energy, but by “the absence of any tendency toward change on a macroscopic scale.”
Equilibrium thermodynamics, as a subject in physics, considers macroscopic bodies of matter and energy in states of internal thermodynamic equilibrium. It uses the concept of thermodynamic processes, by which bodies pass from one equilibrium state to another by transfer of matter and energy between them. The term 'thermodynamic system' is used to refer to bodies of matter and energy in the special context of thermodynamics. The possible equilibria between bodies are determined by the physical properties of the walls that separate the bodies. Equilibrium thermodynamics in general does not measure time. Equilibrium thermodynamics is a relatively simple and well settled subject. One reason for this is the existence of a well defined physical quantity called 'the entropy of a body'.
Non-equilibrium thermodynamics, as a subject in physics, considers bodies of matter and energy that are not in states of internal thermodynamic equilibrium, but are usually participating in processes of transfer that are slow enough to allow description in terms of quantities that are closely related to thermodynamic state variables. It is characterized by presence of flows of matter and energy. For this topic, very often the bodies considered have smooth spatial inhomogeneities, so that spatial gradients, for example a temperature gradient, are well enough defined. Thus the description of non-equilibrium thermodynamic systems is a field theory, more complicated than the theory of equilibrium thermodynamics. Non-equilibrium thermodynamics is a growing subject, not an established edifice. Example theories and modeling approaches include the GENERIC formalism for complex fluids, viscoelasticity, and soft materials. In general, it is not possible to find an exactly defined entropy for non-equilibrium problems. For many non-equilibrium thermodynamical problems, an approximately defined quantity called 'time rate of entropy production' is very useful. Non-equilibrium thermodynamics is mostly beyond the scope of the present article.
Another kind of thermodynamic system is considered in most engineering. It takes part in a flow process. The account is in terms that approximate, well enough in practice in many cases, equilibrium thermodynamical concepts. This is mostly beyond the scope of the present article, and is set out in other articles, for example the article Flow process.
History
The classification of thermodynamic systems arose with the development of thermodynamics as a science.
Theoretical studies of thermodynamic processes in the period from the first theory of heat engines (Sadi Carnot, France, 1824) to the theory of dissipative structures (Ilya Prigogine, Belgium, 1971) mainly concerned the patterns of interaction of thermodynamic systems with the environment.
At the same time, thermodynamic systems were mainly classified as isolated, closed and open, with corresponding properties in various thermodynamic states, for example, in states close to equilibrium, nonequilibrium and strongly nonequilibrium.
In 2010, Boris Dobroborsky (Israel, Russia) proposed a classification of thermodynamic systems according to internal processes consisting in energy redistribution (passive systems) and energy conversion (active systems).
Passive systems
If there is a temperature difference inside the thermodynamic system, for example in a rod, one end of which is warmer than the other, then thermal energy transfer processes occur in it, in which the temperature of the colder part rises and the warmer part decreases. As a result, after some time, the temperature in the rod will equalize – the rod will come to a state of thermodynamic equilibrium.
Active systems
If the process of converting one type of energy into another takes place inside a thermodynamic system, for example, in chemical reactions, in electric or pneumatic motors, when one solid body rubs against another, then the processes of energy release or absorption will occur, and the thermodynamic system will always tend to a non-equilibrium state with respect to the environment.
Systems in equilibrium
In isolated systems it is consistently observed that as time goes on internal rearrangements diminish and stable conditions are approached. Pressures and temperatures tend to equalize, and matter arranges itself into one or a few relatively homogeneous phases. A system in which all processes of change have gone practically to completion is considered in a state of thermodynamic equilibrium. The thermodynamic properties of a system in equilibrium are unchanging in time. Equilibrium system states are much easier to describe in a deterministic manner than non-equilibrium states. In some cases, when analyzing a thermodynamic process, one can assume that each intermediate state in the process is at equilibrium. Such a process is called quasistatic.
For a process to be reversible, each step in the process must be reversible. For a step in a process to be reversible, the system must be in equilibrium throughout the step. That ideal cannot be accomplished in practice because no step can be taken without perturbing the system from equilibrium, but the ideal can be approached by making changes slowly.
The very existence of thermodynamic equilibrium, defining states of thermodynamic systems, is the essential, characteristic, and most fundamental postulate of thermodynamics, though it is only rarely cited as a numbered law. According to Bailyn, the commonly rehearsed statement of the zeroth law of thermodynamics is a consequence of this fundamental postulate. In reality, practically nothing in nature is in strict thermodynamic equilibrium, but the postulate of thermodynamic equilibrium often provides very useful idealizations or approximations, both theoretically and experimentally; experiments can provide scenarios of practical thermodynamic equilibrium.
In equilibrium thermodynamics the state variables do not include fluxes because in a state of thermodynamic equilibrium all fluxes have zero values by definition. Equilibrium thermodynamic processes may involve fluxes but these must have ceased by the time a thermodynamic process or operation is complete bringing a system to its eventual thermodynamic state. Non-equilibrium thermodynamics allows its state variables to include non-zero fluxes, which describe transfers of mass or energy or entropy between a system and its surroundings.
Walls
A system is enclosed by walls that bound it and connect it to its surroundings. Often a wall restricts passage across it by some form of matter or energy, making the connection indirect. Sometimes a wall is no more than an imaginary two-dimensional closed surface through which the connection to the surroundings is direct.
A wall can be fixed (e.g. a constant volume reactor) or moveable (e.g. a piston). For example, in a reciprocating engine, a fixed wall means the piston is locked at its position; then, a constant volume process may occur. In that same engine, a piston may be unlocked and allowed to move in and out. Ideally, a wall may be declared adiabatic, diathermal, impermeable, permeable, or semi-permeable. Actual physical materials that provide walls with such idealized properties are not always readily available.
The system is delimited by walls or boundaries, either actual or notional, across which conserved (such as matter and energy) or unconserved (such as entropy) quantities can pass into and out of the system. The space outside the thermodynamic system is known as the surroundings, a reservoir, or the environment. The properties of the walls determine what transfers can occur. A wall that allows transfer of a quantity is said to be permeable to it, and a thermodynamic system is classified by the permeabilities of its several walls. A transfer between system and surroundings can arise by contact, such as conduction of heat, or by long-range forces such as an electric field in the surroundings.
A system with walls that prevent all transfers is said to be isolated. This is an idealized conception, because in practice some transfer is always possible, for example by gravitational forces. It is an axiom of thermodynamics that an isolated system eventually reaches internal thermodynamic equilibrium, when its state no longer changes with time.
The walls of a closed system allow transfer of energy as heat and as work, but not of matter, between it and its surroundings. The walls of an open system allow transfer both of matter and of energy. This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is here used.
Anything that passes across the boundary and effects a change in the contents of the system must be accounted for in an appropriate balance equation. The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. It could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics.
Surroundings
The system is the part of the universe being studied, while the surroundings is the remainder of the universe that lies outside the boundaries of the system. It is also known as the environment or the reservoir. Depending on the type of system, it may interact with the system by exchanging mass, energy (including heat and work), momentum, electric charge, or other conserved properties. The environment is ignored in the analysis of the system, except in regards to these interactions.
Closed system
In a closed system, no mass may be transferred in or out of the system boundaries. The system always contains the same amount of matter, but (sensible) heat and (boundary) work can be exchanged across the boundary of the system. Whether a system can exchange heat, work, or both is dependent on the property of its boundary.
Adiabatic boundary – not allowing any heat exchange: A thermally isolated system
Rigid boundary – not allowing exchange of work: A mechanically isolated system
One example is fluid being compressed by a piston in a cylinder. Another example of a closed system is a bomb calorimeter, a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Electrical energy travels across the boundary to produce a spark between the electrodes and initiates combustion. Heat transfer occurs across the boundary after combustion but no mass transfer takes place either way.
The first law of thermodynamics for energy transfers in a closed system may be stated as ΔU = Q − W,
where U denotes the internal energy of the system, Q the heat added to the system, and W the work done by the system. For infinitesimal changes, the first law for closed systems may be stated as dU = δQ − δW.
If the work is due to a volume expansion by dV at a pressure P, then δW = P dV.
For a quasi-reversible heat transfer, the second law of thermodynamics reads δQ = T dS,
where T denotes the thermodynamic temperature and S the entropy of the system. With these relations the fundamental thermodynamic relation, used to compute changes in internal energy, is expressed as dU = T dS − P dV.
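A trivial worked example of the first law (illustrative numbers, not from the source): a closed system that absorbs 500 J of heat and does 200 J of work on its surroundings gains ΔU = 300 J of internal energy.

    # Minimal sketch of the first-law bookkeeping for a closed system.
    Q = 500.0        # heat added to the system, J
    W = 200.0        # work done by the system on the surroundings, J
    delta_U = Q - W  # change in internal energy, J
    print(delta_U)   # 300.0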
For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. For systems undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically:
Σj aij Nj = bi (a constant), where Nj denotes the number of molecules of type j, aij the number of atoms of element i in molecule j, and bi the total number of atoms of element i in the system, which remains constant since the system is closed. There is one such equation for each element i in the system.
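A minimal sketch of this bookkeeping for a hypothetical reaction (2 H2 + O2 → 2 H2O): the total number of atoms of each element is unchanged by the reaction, as the closure condition requires.

    # Count atoms of each element before and after a reaction step.
    species_atoms = {"H2": {"H": 2}, "O2": {"O": 2}, "H2O": {"H": 2, "O": 1}}

    def element_totals(amounts):
        totals = {}
        for species, n in amounts.items():
            for element, a in species_atoms[species].items():
                totals[element] = totals.get(element, 0) + a * n
        return totals

    before = element_totals({"H2": 2, "O2": 1, "H2O": 0})
    after  = element_totals({"H2": 0, "O2": 0, "H2O": 2})
    print(before, after, before == after)   # elemental totals are conserved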
Isolated system
An isolated system is more restrictive than a closed system as it does not interact with its surroundings in any way. Mass and energy remains constant within the system, and no energy or mass transfer takes place across the boundary. As time passes in an isolated system, internal differences in the system tend to even out and pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone practically to completion is in a state of thermodynamic equilibrium.
Truly isolated physical systems do not exist in reality (except perhaps for the universe as a whole), because, for example, there is always gravity between a system with mass and masses elsewhere. However, real systems may behave nearly as an isolated system for finite (possibly very long) times. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena.
In the attempt to justify the postulate of entropy increase in the second law of thermodynamics, Boltzmann's H-theorem used equations, which assumed that a system (for example, a gas) was isolated. That is all the mechanical degrees of freedom could be specified, treating the walls simply as mirror boundary conditions. This inevitably led to Loschmidt's paradox. However, if the stochastic behavior of the molecules in actual walls is considered, along with the randomizing effect of the ambient, background thermal radiation, Boltzmann's assumption of molecular chaos can be justified.
The second law of thermodynamics for isolated systems states that the entropy of an isolated system not in equilibrium tends to increase over time, approaching maximum value at equilibrium. Overall, in an isolated system, the internal energy is constant and the entropy can never decrease. A closed system's entropy can decrease e.g. when heat is extracted from the system.
Isolated systems are not equivalent to closed systems. Closed systems cannot exchange matter with the surroundings, but can exchange energy. Isolated systems can exchange neither matter nor energy with their surroundings, and as such are only theoretical and do not exist in reality (except, possibly, the entire universe).
'Closed system' is often used in thermodynamics discussions when 'isolated system' would be correct – i.e. there is an assumption that energy does not enter or leave the system.
Selective transfer of matter
For a thermodynamic process, the precise physical properties of the walls and surroundings of the system are important, because they determine the possible processes.
An open system has one or several walls that allow transfer of matter. To account for the internal energy of the open system, this requires energy transfer terms in addition to those for heat and work. It also leads to the idea of the chemical potential.
A wall selectively permeable only to a pure substance can put the system in diffusive contact with a reservoir of that pure substance in the surroundings. Then a process is possible in which that pure substance is transferred between system and surroundings. Also, across that wall a contact equilibrium with respect to that substance is possible. By suitable thermodynamic operations, the pure substance reservoir can be dealt with as a closed system. Its internal energy and its entropy can be determined as functions of its temperature, pressure, and mole number.
A thermodynamic operation can render impermeable to matter all system walls other than the contact equilibrium wall for that substance. This allows the definition of an intensive state variable, with respect to a reference state of the surroundings, for that substance. The intensive variable is called the chemical potential; for component substance it is usually denoted . The corresponding extensive variable can be the number of moles of the component substance in the system.
For a contact equilibrium across a wall permeable to a substance, the chemical potentials of the substance must be same on either side of the wall. This is part of the nature of thermodynamic equilibrium, and may be regarded as related to the zeroth law of thermodynamics.
Open system
In an open system, there is an exchange of energy and matter between the system and the surroundings. The presence of reactants in an open beaker is an example of an open system. Here the boundary is an imaginary surface enclosing the beaker and reactants. A system is called closed if its borders are impenetrable to substance but allow transit of energy in the form of heat, and isolated if there is no exchange of heat and substances. An open system cannot exist in the equilibrium state. To describe the deviation of the thermodynamic system from equilibrium, in addition to the constitutive variables described above, a set of internal variables ξi has been introduced. The equilibrium state is considered to be stable, and the main property of the internal variables, as measures of non-equilibrium of the system, is their tendency to disappear; the local law of disappearance can be written as a relaxation equation for each internal variable: dξi/dt = −(ξi − ξi(0))/τi,
where τi is a relaxation time of the corresponding variable and ξi(0) is its reference (equilibrium) value. It is convenient to consider this reference value equal to zero.
The specific contribution to the thermodynamics of open non-equilibrium systems was made by Ilya Prigogine, who investigated a system of chemically reacting substances. In this case the internal variables appear to be measures of incompleteness of chemical reactions, that is measures of how much the considered system with chemical reactions is out of equilibrium. The theory can be generalized, to consider any deviations from the equilibrium state, such as structure of the system, gradients of temperature, difference of concentrations of substances and so on, to say nothing of degrees of completeness of all chemical reactions, to be internal variables.
The increments of Gibbs free energy and entropy, at given temperature and pressure, are determined as
The stationary states of the system exist due to exchange of both thermal energy and a stream of particles. The sum of the last terms in the equations represents the total energy coming into the system with the stream of particles of substances, which can be positive or negative; the coefficient of each such term is the chemical potential of the corresponding substance. The middle terms in equations (2) and (3) depict energy dissipation (entropy production) due to the relaxation of the internal variables, while the corresponding coefficients are the thermodynamic forces.
This approach to the open system allows describing the growth and development of living objects in thermodynamic terms.
See also
Dynamical system
Energy system
Isolated system
Mechanical system
Physical system
Quantum system
Thermodynamic cycle
Thermodynamic process
Two-state quantum system
GENERIC formalism
References
Sources
Carnot, Sadi (1824). Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance (in French). Paris: Bachelier.
Dobroborsky B.S. Machine safety and the human factor / Edited by Doctor of Technical Sciences, prof. S.A. Volkov. — St. Petersburg: SPbGASU, 2011. — pp. 33–35. — 114 p. — ISBN 978-5-9227-0276-8. (Ru)
Thermodynamic systems
Equilibrium chemistry
Thermodynamic cycles
Thermodynamic processes | Thermodynamic system | [
"Physics",
"Chemistry",
"Mathematics"
] | 4,253 | [
"Thermodynamic systems",
"Thermodynamic processes",
"Physical systems",
"Equilibrium chemistry",
"Thermodynamics",
"Dynamical systems"
] |
426,184 | https://en.wikipedia.org/wiki/Chrysotile | Chrysotile, or white asbestos, is the most commonly encountered form of asbestos, accounting for approximately 95% of the asbestos in the United States and a similar proportion in other countries. It is a soft, fibrous silicate mineral in the serpentine subgroup of phyllosilicates; as such, it is distinct from other asbestiform minerals in the amphibole group. Its idealized chemical formula is Mg3(Si2O5)(OH)4. The material has physical properties which make it desirable for inclusion in building materials, but poses serious health risks when dispersed into air and inhaled.
Polytypes
Three polytypes of chrysotile are known. These are very difficult to distinguish in hand specimens, and polarized light microscopy must normally be used. Some older publications refer to chrysotile as a group of minerals—the three polytypes listed below, and sometimes pecoraite as well—but the 2006 recommendations of the International Mineralogical Association prefer to treat it as a single mineral with a certain variation in its naturally occurring forms.
Clinochrysotile is the most common of the three forms, found notably at Val-des-Sources, Quebec, Canada. Its two measurable refractive indices tend to be lower than those of the other two forms. The orthorhombic paratypes may be distinguished by the fact that, for orthochrysotile, the higher of the two observable refractive indices is measured parallel to the long axis of the fibres (as for clinochrysotile); whereas for parachrysotile the higher refractive index is measured perpendicular to the long axis of the fibres.
Physical properties
Bulk chrysotile has a hardness similar to a human fingernail and is easily crumbled to fibrous strands composed of smaller bundles of fibrils. Naturally-occurring fibre bundles range in length from several millimetres to more than ten centimetres, although industrially-processed chrysotile usually has shorter fibre bundles. The diameter of the fibre bundles is 0.1–1 μm, and the individual fibrils are even finer, 0.02–0.03 μm, each fibre bundle containing tens or hundreds of fibrils.
Chrysotile fibres have considerable tensile strength, and may be spun into thread and woven into cloth. They are also resistant to heat and are excellent thermal, electrical and acoustic insulators.
Chemical properties
The idealized chemical formula of chrysotile is Mg3(Si2O5)(OH)4, although some of the magnesium ions may be replaced by iron or other cations. Substitution of the hydroxide ions by fluoride, oxide or chloride is also known, but rarer. A related, but much rarer, mineral is pecoraite, in which all the magnesium cations of chrysotile are substituted by nickel cations.
Chrysotile is resistant even to strong bases (asbestos is thus stable in the high-pH pore water of Portland cement), but when the fibres are attacked by acids, the magnesium ions are selectively dissolved, leaving a silica skeleton. It is thermally stable up to several hundred degrees Celsius, at which point it starts to dehydrate. Dehydration is complete at about 750 °C, with the final products being forsterite (magnesium silicate), silica and water.
The global mass balance reaction of the chrysotile dehydration can be written as follows:
2 Mg3Si2O5(OH)4 (chrysotile/serpentine) → 3 Mg2SiO4 (forsterite) + SiO2 (silica) + 4 H2O (water)   (dehydration at about 750 °C)
The chrysotile (serpentine) dehydration reaction corresponds to the reverse of the forsterite (Mg-olivine) hydrolysis in the presence of dissolved silica (silicic acid).
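As a simple sanity check, the atom counts on both sides of the dehydration reaction above can be tallied programmatically (a minimal sketch; the element counts follow directly from the formulas shown):

    from collections import Counter

    def atoms(formula_counts, coeff):
        return Counter({el: n * coeff for el, n in formula_counts.items()})

    chrysotile = {"Mg": 3, "Si": 2, "O": 9, "H": 4}   # Mg3Si2O5(OH)4
    forsterite = {"Mg": 2, "Si": 1, "O": 4}           # Mg2SiO4
    silica     = {"Si": 1, "O": 2}
    water      = {"H": 2, "O": 1}

    left  = atoms(chrysotile, 2)
    right = atoms(forsterite, 3) + atoms(silica, 1) + atoms(water, 4)
    print(left == right)   # True -> the reaction is mass-balanced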
Applications
Previously, into the 1990s, it was used in asbestos-cement products (such as pipes and sheets).
Magnesium sulfate (MgSO4) may be produced by treating chrysotile with sulfuric acid (H2SO4).
Safety concerns
Chrysotile has been included with other forms of asbestos in being classified as a human carcinogen by the International Agency for Research on Cancer (IARC) and by the U.S. Department of Health and Human Services. These state that "Asbestos exposure is associated with parenchymal asbestosis, asbestos-related pleural abnormalities, peritoneal mesothelioma, and lung cancer, and it may be associated with cancer at some extra-thoracic sites". In other scientific publications, epidemiologists have published peer-reviewed scientific papers establishing that chrysotile is the main cause of pleural mesothelioma.
Chrysotile has been recommended for inclusion in the Rotterdam Convention on Prior Informed Consent, an international treaty that restricts the global trade in hazardous materials. If listed, exports of chrysotile would only be permitted to countries that explicitly consent to imports. Canada, a major producer of the mineral, has been harshly criticized by the Canadian Medical Association for its opposition to including chrysotile in the convention.
According to EU Regulation 1907/2006 (REACH) the marketing and use of chrysotile, and of products containing chrysotile, are prohibited.
As of March 2024, the U.S. Environmental Protection Agency finalized regulations banning imports of chrysotile asbestos (effective immediately) due to its link to lung cancer and mesothelioma. However, the new rules can allow up to a dozen years to phase out the use of chrysotile asbestos in some manufacturing facilities. The long phase-out period was a result of a strong lobby by Olin Corporation, a major chemical manufacturer, as well as trade groups like the U.S. Chamber of Commerce and the American Chemistry Council. Chrysotile asbestos is now banned in more than 50 other countries.
Critics of safety regulations
1990s: Canada–European Communities GATT dispute
In May 1998, Canada requested consultations before the WTO and the European Commission concerning France's 1996 prohibition of the importation and sale of all forms of asbestos. Canada said that the French measures contravened provisions of the Agreements on Sanitary and Phytosanitary Measures and on Technical Barriers to Trade, and the GATT 1994. The EC claimed that safer substitute materials existed to take the place of asbestos. It stressed that the French measures were not discriminatory under the terms of international trade treaties, and were fully justified for public health reasons. The EC further claimed that in the July consultations, it had tried to convince Canada that the measures were justified, and that just as Canada broke off consultations, it (the EC) was in the process of submitting substantial scientific data in favour of the asbestos ban.
2000s: Canadian exports face mounting global criticism
In the late 1990s and early 2000s, the Government of Canada continued to claim that chrysotile was much less dangerous than other types of asbestos. Chrysotile continued to be used in new construction across Canada, in ways that are very similar to those for which chrysotile was exported. Similarly, Natural Resources Canada once stated that chrysotile, one of the fibres that make up asbestos, was not as dangerous as once thought. According to a fact sheet from 2003, "current knowledge and modern technology can successfully control the potential for health and environmental harm posed by chrysotile". The Chrysotile Institute, an association partially funded by the Canadian government, also prominently asserted that the use of chrysotile did not pose an environmental problem and the inherent risks in its use were limited to the workplace.
However, under increasing criticism by environmental groups, in May, 2012, the Canadian government stopped funding the Chrysotile Institute. As a result, the Chrysotile Institute has now closed.
The Canadian government continues to draw both domestic and international criticism for its stance on chrysotile, most recently in international meetings about the Rotterdam Convention hearings regarding chrysotile. The CFMEU pointed out that most exports go to developing countries. Canada has pressured countries, including Chile, and other UN member states to avoid chrysotile bans.
In September 2012, governments in Quebec and Canada ended official support for Canada's last asbestos mine in Asbestos, Quebec, now renamed as Val-des-Sources.
See also
Erionite
Antigorite
References
External links
The Chrysotile Institute
"Asbestos-containing Floor Tile and Mastic Abatement: Is there Enough Exposure to Cause Asbestos-related Disease?"
Deer William Alexander, Howie Robert Andrew, Zussman Jack, An introduction to the rock-forming minerals, , OCLC 183009096 pp. 344–352, 1992
Ledoux, RL (ed), Short course in mineralogical techniques of asbestos determination, Mineralogical Association of Canada, pp. 35–73, 185, 1979.
http://www.microlabgallery.com/ChrysotileFile.aspx Photomicrographs of parachrysotile and clinochrysotile
Nolan, RP, Langer AM, Ross M, Wicks FJ, Martin RF (eds), "The health effects of chrysotile asbestos", The Canadian Mineralogist, Special Publication 5, 2001.
Serpentine group
Magnesium minerals
Luminescent minerals
Asbestos
IARC Group 1 carcinogens
Monoclinic minerals
Minerals in space group 12
Orthorhombic minerals
Minerals in space group 36 | Chrysotile | [
"Chemistry",
"Environmental_science"
] | 2,020 | [
"Luminescence",
"Toxicology",
"Asbestos",
"Luminescent minerals"
] |
426,219 | https://en.wikipedia.org/wiki/Classical%20electromagnetism | Classical electromagnetism or classical electrodynamics is a branch of physics focused on the study of interactions between electric charges and currents using an extension of the classical Newtonian model. It is, therefore, a classical field theory. The theory provides a description of electromagnetic phenomena whenever the relevant length scales and field strengths are large enough that quantum mechanical effects are negligible. For small distances and low field strengths, such interactions are better described by quantum electrodynamics which is a quantum field theory.
History
The physical phenomena that electromagnetism describes have been studied as separate fields since antiquity. For example, there were many advances in the field of optics centuries before light was understood to be an electromagnetic wave. However, the theory of electromagnetism, as it is currently understood, grew out of Michael Faraday's experiments suggesting the existence of an electromagnetic field and James Clerk Maxwell's use of differential equations to describe it in his A Treatise on Electricity and Magnetism (1873). The development of electromagnetism in Europe included the development of methods to measure voltage, current, capacitance, and resistance. Detailed historical accounts are given by Wolfgang Pauli, E. T. Whittaker, Abraham Pais, and Bruce J. Hunt.
Lorentz force
The electromagnetic field exerts the following force (often called the Lorentz force) on charged particles:
F = qE + qv × B
where all boldfaced quantities are vectors: F is the force that a particle with charge q experiences, E is the electric field at the location of the particle, v is the velocity of the particle, and B is the magnetic field at the location of the particle.
The above equation illustrates that the Lorentz force is the sum of two vectors. One is the cross product of the velocity and magnetic field vectors. Based on the properties of the cross product, this produces a vector that is perpendicular to both the velocity and magnetic field vectors. The other vector is in the same direction as the electric field. The sum of these two vectors is the Lorentz force.
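As a numerical illustration of the two terms just described, the following sketch (not part of the original article; the charge, field, and velocity values are made-up examples) evaluates the electric and magnetic contributions separately.

```python
import numpy as np

def lorentz_force(q, E, v, B):
    """Lorentz force F = qE + q v x B (SI units)."""
    electric_term = q * E                 # along the electric field
    magnetic_term = q * np.cross(v, B)    # perpendicular to both v and B
    return electric_term + magnetic_term

# Made-up example values: a small positive charge in crossed E and B fields
q = 1.6e-19                        # coulombs
E = np.array([0.0, 1.0e3, 0.0])    # V/m
v = np.array([2.0e5, 0.0, 0.0])    # m/s
B = np.array([0.0, 0.0, 1.0e-2])   # tesla

print(lorentz_force(q, E, v, B))   # the electric and magnetic terms here happen to oppose
```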
Although the equation appears to suggest that the electric and magnetic fields are independent, the equation can be rewritten in terms of the four-current (instead of charge) and a single electromagnetic tensor F that represents the combined field:
f_α = F_{αβ} J^β
where f is the force per unit volume.
Electric field
The electric field E is defined such that, on a stationary charge:
F = q0 E
where q0 is what is known as a test charge and F is the force on that charge. The size of the charge does not really matter, as long as it is small enough not to influence the electric field by its mere presence. What is plain from this definition, though, is that the unit of E is N/C (newtons per coulomb). This unit is equal to V/m (volts per meter); see below.
In electrostatics, where charges are not moving, around a distribution of point charges, the forces determined from Coulomb's law may be summed. The result after dividing by q0 is:
E(r) = (1 / 4πε0) Σi=1..n qi (r − ri) / |r − ri|3
where n is the number of charges, qi is the amount of charge associated with the ith charge, ri is the position of the ith charge, r is the position where the electric field is being determined, and ε0 is the electric constant.
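The discrete sum above translates directly into code. The following sketch is illustrative only: the charges and positions are invented values, and the function simply performs the superposition just described.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # electric constant epsilon_0, in F/m

def electric_field(r, charges, positions):
    """Superpose Coulomb contributions from point charges at field point r."""
    E = np.zeros(3)
    for q_i, r_i in zip(charges, positions):
        sep = r - r_i                              # vector from charge i to the field point
        E += q_i * sep / np.linalg.norm(sep) ** 3
    return E / (4.0 * np.pi * EPS0)

# Invented example: a +1 nC / -1 nC pair 10 cm apart
charges = [1e-9, -1e-9]
positions = [np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])]
print(electric_field(np.array([0.05, 0.05, 0.0]), charges, positions))
```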
If the field is instead produced by a continuous distribution of charge, the summation becomes an integral:
E(r) = (1 / 4πε0) ∫ ρ(r′) (r − r′) / |r − r′|3 dV′
where ρ(r′) is the charge density and (r − r′) is the vector that points from the volume element dV′ to the point in space where E is being determined.
Both of the above equations are cumbersome, especially if one wants to determine E as a function of position. A scalar function called the electric potential can help. Electric potential, also called voltage (the units for which are the volt), is defined by the line integral
φ(r) = −∫C E · dℓ
where φ is the electric potential, and C is the path over which the integral is being taken.
Unfortunately, this definition has a caveat. From Maxwell's equations, it is clear that ∇ × E is not always zero, and hence the scalar potential alone is insufficient to define the electric field exactly. As a result, one must add a correction factor, which is generally done by subtracting the time derivative of the A vector potential described below. Whenever the charges are quasistatic, however, this condition will be essentially met.
From the definition of charge, one can easily show that the electric potential of a point charge as a function of position is:
φ(r) = (1 / 4πε0) q / |r − ri|
where q is the point charge's charge, r is the position at which the potential is being determined, and ri is the position of the point charge. The potential for a continuous distribution of charge is:
φ(r) = (1 / 4πε0) ∫ ρ(r′) / |r − r′| dV′
where ρ(r′) is the charge density, and |r − r′| is the distance from the volume element dV′ to the point in space where φ is being determined.
The scalar φ will add to other potentials as a scalar. This makes it relatively easy to break complex problems down into simple parts and add their potentials. Taking the definition of φ backwards, we see that the electric field is just the negative gradient (the del operator ∇) of the potential. Or:
E = −∇φ
From this formula it is clear that E can be expressed in V/m (volts per meter).
Electromagnetic waves
A changing electromagnetic field propagates away from its origin in the form of a wave. These waves travel in vacuum at the speed of light and exist in a wide spectrum of wavelengths. Examples of the dynamic fields of electromagnetic radiation (in order of increasing frequency): radio waves, microwaves, light (infrared, visible light and ultraviolet), x-rays and gamma rays. In the field of particle physics this electromagnetic radiation is the manifestation of the electromagnetic interaction between charged particles.
General field equations
As simple and satisfying as Coulomb's equation may be, it is not entirely correct in the context of classical electromagnetism. Problems arise because changes in charge distributions require a non-zero amount of time to be "felt" elsewhere (required by special relativity).
For the fields of general charge distributions, the retarded potentials can be computed and differentiated accordingly to yield Jefimenko's equations.
Retarded potentials can also be derived for point charges, and the equations are known as the Liénard–Wiechert potentials. The scalar potential is:
φ(r, t) = (1 / 4πε0) q / (|r − rq| − (r − rq) · vq / c)
where q is the point charge's charge and r is the position at which the potential is evaluated; rq and vq are the position and velocity of the charge, respectively, as a function of retarded time. The vector potential is similar:
A(r, t) = (μ0 / 4π) q vq / (|r − rq| − (r − rq) · vq / c)
These can then be differentiated accordingly to obtain the complete field equations for a moving point particle.
Models
Branches of classical electromagnetism such as optics, electrical and electronic engineering consist of a collection of relevant mathematical models of different degrees of simplification and idealization to enhance the understanding of specific electrodynamics phenomena. An electrodynamics phenomenon is determined by the particular fields, specific densities of electric charges and currents, and the particular transmission medium. Since there are infinitely many of them, in modeling there is a need for some typical, representative
(a) electrical charges and currents, e.g. moving pointlike charges and electric and magnetic dipoles, electric currents in a conductor etc.;
(b) electromagnetic fields, e.g. voltages, the Liénard–Wiechert potentials, the monochromatic plane waves, optical rays, radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays, gamma rays etc.;
(c) transmission media, e.g. electronic components, antennas, electromagnetic waveguides, flat mirrors, mirrors with curved surfaces, convex lenses, concave lenses; resistors, inductors, capacitors, switches; wires, electric and optical cables, transmission lines, integrated circuits etc.; all of which have only a few variable characteristics.
See also
Mathematical descriptions of the electromagnetic field
Weber electrodynamics
Wheeler–Feynman absorber theory
Further reading
Fundamental physical aspects of classical electrodynamics are presented in many textbooks. For the undergraduate level, textbooks like The Feynman Lectures on Physics, Electricity and Magnetism, and Introduction to Electrodynamics are considered classic references, and for the graduate level, textbooks like Classical Electricity and Magnetism, Classical Electrodynamics, and Course of Theoretical Physics are considered classic references.
References
Electromagnetism
Electrodynamics | Classical electromagnetism | [
"Physics",
"Mathematics"
] | 1,685 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions",
"Electrodynamics",
"Dynamical systems"
] |
426,426 | https://en.wikipedia.org/wiki/Rain%20shadow | A rain shadow is an area of significantly reduced rainfall behind a mountainous region, on the side facing away from prevailing winds, known as its leeward side.
Evaporated moisture from water bodies (such as oceans and large lakes) is carried by the prevailing onshore breezes towards the drier and hotter inland areas. When encountering elevated landforms, the moist air is driven upslope towards the peak, where it expands, cools, and its moisture condenses and starts to precipitate. If the landforms are tall and wide enough, most of the humidity will be lost to precipitation over the windward side (also known as the rainward side) before ever making it past the top. As the air descends the leeward side of the landforms, it is compressed and heated, producing foehn winds that absorb moisture downslope and cast a broad "shadow" of dry climate region behind the mountain crests. This climate typically takes the form of shrub–steppe, xeric shrublands or even deserts.
The condition exists because warm moist air rises by orographic lifting to the top of a mountain range. As atmospheric pressure decreases with increasing altitude, the air has expanded and adiabatically cooled to the point that the air reaches its adiabatic dew point (which is not the same as its constant pressure dew point commonly reported in weather forecasts). At the adiabatic dew point, moisture condenses onto the mountain and it precipitates on the top and windward sides of the mountain. The air descends on the leeward side, but due to the precipitation it has lost much of its moisture. Typically, descending air also gets warmer because of adiabatic compression (as with foehn winds) down the leeward side of the mountain, which increases the amount of moisture that it can absorb and creates an arid region.
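A rough back-of-the-envelope sketch of this mechanism, using typical dry and moist adiabatic lapse rates. The lapse rates, cloud-base height, and crest height below are illustrative assumptions rather than figures from this article.

```python
# Rough sketch of lee-side warming with typical adiabatic lapse rates.
DRY_LAPSE = 9.8    # deg C of cooling per km for unsaturated air
MOIST_LAPSE = 5.0  # deg C per km once condensation begins (varies in reality)

def leeward_temperature(t_windward, cloud_base_km, crest_km):
    """Cool air up the windward slope, then warm it dry-adiabatically back down."""
    t_crest = (t_windward
               - DRY_LAPSE * cloud_base_km                  # dry ascent to the cloud base
               - MOIST_LAPSE * (crest_km - cloud_base_km))  # moist ascent to the crest
    return t_crest + DRY_LAPSE * crest_km                   # dry descent, moisture rained out

# With a 20 deg C air mass, a 1 km cloud base and a 3 km crest, the lee side
# ends up near 29-30 deg C, warmer and much drier than the windward side.
print(leeward_temperature(20.0, 1.0, 3.0))
```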
Notably affected regions
There are regular patterns of prevailing winds found in bands round Earth's equatorial region. The zone designated the trade winds is the zone between about 30° N and 30° S, blowing predominantly from the northeast in the Northern Hemisphere and from the southeast in the Southern Hemisphere. The westerlies are the prevailing winds in the middle latitudes between 30 and 60 degrees latitude, blowing predominantly from the southwest in the Northern Hemisphere and from the northwest in the Southern Hemisphere. Some of the strongest westerly winds in the middle latitudes can come in the Roaring Forties of the Southern Hemisphere, between 30 and 50 degrees latitude.
Examples of notable rain shadowing include:
Africa
Northern Africa
The Sahara is made even drier by strong rain shadow effects caused by major mountain ranges, whose highest points rise to more than 4,000 meters (about 2.5 miles). To the northwest lie the Atlas Mountains, covering the Mediterranean coast of Morocco, Algeria and Tunisia. On the windward side of the Atlas Mountains, warm, moist winds blowing from the northwest off the Atlantic Ocean, which carry a great deal of water vapor, are forced to rise and expand over the mountain range. This cools the air, so excess moisture condenses into clouds and falls as heavy precipitation over the mountain range. This is known as orographic rainfall, and after this process the air is dry because it has lost most of its moisture over the Atlas Mountains. On the leeward side, the cold, dry air descends, sinks and compresses, which warms the winds. This warming causes remaining moisture to evaporate, making clouds disappear, preventing rainfall formation and creating desert conditions in the Sahara.
Desert regions in the Horn of Africa (Ethiopia, Eritrea, Somalia and Djibouti) such as the Danakil Desert are all influenced by the air heating and drying produced by rain shadow effect of the Ethiopian Highlands.
Southern Africa
The windward side of the island of Madagascar, which sees easterly on-shore winds, is wet tropical, while the western and southern sides of the island lie in the rain shadow of the central highlands and are home to thorn forests and deserts. The same is true for the island of Réunion.
On Tristan da Cunha, Sandy Point on the east coast is warmer and drier than the rainy, windswept settlement of Edinburgh of the Seven Seas in the west.
In Western Cape Province, the Breede River Valley and the Karoo region lie in the rain shadow of the Cape Fold Mountains and are arid; whereas the wettest parts of the Cape Mountains can receive , Worcester receives only around and is useful only for grazing.
Asia
Central and Northern Asia
The Himalaya and connecting ranges also contribute to arid conditions in Central Asia including Mongolia's Gobi desert, as well as the semi-arid steppes of Mongolia and north-central to north western China.
The Verkhoyansk Range in eastern Siberia is the coldest place in the Northern Hemisphere, because the moist southeasterly winds from the Pacific Ocean lose their moisture over the coastal mountains well before reaching the Lena River valley, due to the intense Siberian High forming around the very cold continental air during the winter. One effect in the Sakha Republic (Yakutia) is that, in Yakutsk, Verkhoyansk, and Oymyakon, the average temperature in the coldest month is below . These regions are synonymous with extreme cold.
Eastern Asia
The Ordos Desert is rain shadowed by mountain chains including the Kara-naryn-ula, the Sheitenula, and the Yin Mountains, which link on to the south end of the Great Khingan Mountains.
The central region of Myanmar is in the rain shadow of the Arakan Mountains and is almost semi-arid with only of rain, versus up to on the Rakhine State coast.
The plains around Tokyo, Japan, known as the Kanto Plain, experience significantly less precipitation in winter than the rest of the country because surrounding mountain ranges, including the Japanese Alps, block the prevailing northwesterly winds originating in Siberia.
Southern Asia
The eastern side of the Sahyadri ranges on the Deccan Plateau including: Vidarbha, North Karnataka, Rayalaseema and western Tamil Nadu.
Gilgit and Chitral, Pakistan, are rainshadow areas.
The Thar Desert is bounded and rain shadowed by the Aravalli ranges to the southeast, the Himalaya to the northeast, and the Kirthar and Sulaiman ranges to the west.
The Central Highlands of Sri Lanka rain shadow the northeastern parts of the island, which experience much less severe summer monsoon rains and instead have precipitation peaks in autumn and winter.
Western Asia
The peaks of the Caucasus Mountains to the west and Hindukush and Pamir to the east rain shadow the Karakum and Kyzyl Kum deserts east of the Caspian Sea, as well as the semi-arid Kazakh Steppe. They also cause vast rainfall differences between coastal areas on the Black Sea such as Rize, Batumi and Sochi contrasted with the dry lowlands of Azerbaijan facing the Caspian Sea.
The semi-arid Anatolian Plateau is rain shadowed by mountain chains, including the Pontic Mountains in the north and the Taurus Mountains in the south.
The High Peaks of Mount Lebanon rain-shadow the northern parts of the Beqaa Valley and Anti-Lebanon Mountains.
The Judaean Desert, the Dead Sea and the western slopes of the Moab Mountains on the opposite (Jordanian) side are rain-shadowed by the Judaean Mountains.
The Dasht-i-Lut in Iran is in the rain shadow of the Elburz and Zagros Mountains and is one of the most lifeless areas on Earth.
The peaks of the Zagros Mountains rain-shadow the northern half of the West Azerbaijan province in Iranian Azerbaijan (above Urmia), as manifested by the province's dry winters relative to those in the windward part of the region (i.e. Kurdistan Region and Hakkâri Province in Turkey).
Much of the Mesaoria Plain of Cyprus is in the rain shadow of the Troodos Mountains and is semi-arid.
Europe
Central Europe
The Plains of Limagne and Forez in the northern Massif Central, France, are also relatively rain-shadowed (mostly the plain of Limagne, shadowed by the Chaîne des Puys): the summits receive up to 2,000 mm (80 in) of rain a year, while Clermont-Ferrand, one of the driest places in the country, receives below 600 mm (20 in).
The Piedmont wine region of northern Italy is rainshadowed by the mountains that surround it on nearly every side: Asti receives only 527 mm (20¾") of precipitation per year, making it one of the driest places in mainland Italy.
Some valleys in the inner Alps are also strongly rainshadowed by the high surrounding mountains: the areas of Gap and Briançon in France, the district of Zernez in Switzerland.
The Kuyavia and the eastern part of the Greater Poland has an average rainfall of about 450 mm (18") because of rainshadowing by the slopes of the Kashubian Switzerland, making it one of the driest places in the North European Plain.
Northern Europe
The Pennines of Northern England, the mountains of Wales, the Lake District and the Highlands of Scotland create a rain shadow that includes most of the eastern United Kingdom, due to the prevailing south-westerly winds. Manchester and Glasgow, for example, receive around double the rainfall of Leeds and Edinburgh respectively (although there are no mountains between Edinburgh and Glasgow). The contrast is even stronger further north, where Aberdeen gets around a third of the rainfall of Fort William or Skye. In Devon, rainfall at Princetown on Dartmoor is almost three times the amount received to the east at locations such as Exeter and Teignmouth. The Fens of East Anglia receive similar rainfall amounts to Seville.
Iceland has plenty of microclimates courtesy of the mountainous terrain. Akureyri on a northerly fiord receives about a third of the precipitation that the island of Vestmannaeyjar off the south coast gets. The smaller island is in the pathway of Gulf Stream rain fronts with mountains lining the southern coast of the mainland.
The Scandinavian Mountains create a rain shadow for lowland areas east of the mountain chain and prevents the Oceanic climate from penetrating further east; thus Bergen and a place like Brekke in Sogn, west of the mountains, receive an annual precipitation of and , respectively, while Oslo receives only , and Skjåk Municipality, a municipality situated in a deep valley, receives only . Further east, the partial influence of the Scandinavian Mountains contribute to areas in east-central Sweden around Stockholm only receiving annually. In the north, the mountain range extending to the coast in around Narvik and Tromsø cause a lot higher precipitation there than in coastal areas further east facing north such as Alta or inland areas like Kiruna across the Swedish border.
The South Swedish highlands, although not rising more than , reduce precipitation and increase summer temperatures on the eastern side. Combined with the high pressure of the Baltic Sea, this leads to some of the driest climates in the humid zones of Northern Europe being found in the triangle between the coastal areas in the counties of Kalmar, Östergötland and Södermanland along with the offshore island of Gotland on the leeward side of the slopes. Coastal areas in this part of Sweden usually receive less precipitation than windward locations in Andalusia in the south of Spain.
Southern Europe
The Cantabrian Mountains form a sharp division between "Green Spain" to the north and the dry central plateau. The northern-facing slopes receive heavy rainfall from the Bay of Biscay, but the southern slopes are in rain shadow. The other most evident effect on the Iberian Peninsula occurs in the Almería, Murcia and Alicante areas, each with an average rainfall of 300 mm (12"), which are the driest spots in Europe (see Cabo de Gata) mostly a result of the mountain range running through their western side, which blocks the westerlies.
The Norte Region in Portugal has extreme differences in precipitation with values surpassing in the Peneda-Gerês National Park to values close to in the Douro Valley. Despite being only apart, Chaves has less than half the precipitation of Montalegre.
The eastern part of the Pyrenean mountains in the south of France (Cerdagne).
In the Northern Apennines of Italy, Mediterranean city La Spezia receives twice the rainfall of Adriatic city Rimini on the eastern side. This is also extended to the southern end of the Apennines that see vast rainfall differences between Naples with above on the Mediterranean side and Bari with about on the Adriatic side.
The valley of the Vardar River and south from Skopje to Athens is in the rain shadow of the Accursed Mountains and Pindus Mountains. On their windward side the Accursed Mountains have the highest rainfall in Europe at around , with small glaciers even at mean annual temperatures well above , but the leeward side receives as little as .
Caribbean
Throughout the Greater Antilles, the southwestern sides are in the rain shadow of the trade winds and can receive as little as per year as against over on the northeastern, windward sides and over over some highland areas. This is most apparent in Cuba, where this phenomenon leads to the Cuban cactus scrub ecoregion, and the island of Hispaniola (which contains the Caribbean's highest mountain ranges), which results in xeric semi-arid shrublands throughout the Dominican Republic and Haiti.
North American mainland
On the largest scale, the entirety of the North American Interior Plains are shielded from the prevailing Westerlies carrying moist Pacific weather by the North American Cordillera. More pronounced effects are observed, however, in particular valley regions within the Cordillera, in the direct lee of specific mountain ranges. This includes much of the Basin and Range Province in the United States and Mexico.
The Pacific Coast Ranges create rain shadows near the West Coast:
The Dungeness Valley around Sequim and Port Angeles, Washington lies in the rain shadow of the Olympic Mountains. The area averages of rain per year. The rain shadow extends to the eastern Olympic Peninsula, Whidbey Island, parts of the San Juan Islands, and Victoria, British Columbia which receive between of precipitation each year. Seattle is also affected by the rain shadow, albeit to a much lesser effect. By contrast, Aberdeen, which is situated southwest of the Olympics, receives nearly of rain per year
The east slopes of the Coast Ranges in central and southern California cut off the southern San Joaquin Valley from enough precipitation to ensure desert-like conditions in areas around Bakersfield.
San Jose, and adjacent cities are usually drier than the rest of the San Francisco Bay Area because of the rain shadow cast by the highest part of the Santa Cruz Mountains.
The Sonoran Desert is bounded to the west by the Peninsular Ranges, but extends even along part of the east coast of the Gulf of California.
The Sierra Madre Occidental in Mexico are west of the Chihuahuan Desert.
Most rain shadows in the western United States are due to the Sierra Nevada mountains in California and Cascade Mountains, mostly in Oregon and Washington.
The Cascades create a rain-shadowed Columbia Basin area of Eastern Washington and valleys in British Columbia, Canada - most notably the Thompson and Nicola Valleys which can receive less than of rain in parts, and the Okanagan Valley (particularly the south, nearest to the US border) which receives anywhere from 12–17 inches of rain annually.
The endorheic Great Basin of Utah and Nevada is in the rain shadows of the Cascades and Sierra Nevada.
The Mojave Desert is rain-shadowed by the Sierra Nevada and the Transverse Ranges of southern California.
The Black Rock Desert is in the rain shadows of the Cascades and Sierra Nevada.
California's Owens Valley is rain-shadowed by the Sierra Nevada.
Death Valley in the United States, behind both the Pacific Coast Ranges of California and the Sierra Nevada range, is the driest place in North America and one of the driest places on the planet. This is also due to its location well below sea level which tends to cause high pressure and dry conditions to dominate due to the greater weight of the atmosphere above.
The Colorado Front Range is limited to precipitation that crosses over the Continental Divide. While many locations west of the Divide may receive as much as of precipitation per year, some places on the eastern side, notably the cities of Denver and Pueblo, Colorado, typically receive only about 12 to 19 inches. Thus, the Continental Divide acts as a barrier for precipitation. This effect applies only to storms traveling west-to-east. When low pressure systems skirt the Rocky Mountains and approach from the south, they can generate high precipitation on the eastern side and little or none on the western slope.
Further east:
The Shenandoah Valley of Virginia, wedged between the Ridge-and-Valley Appalachians and the Blue Ridge Mountains and partially shielded from moisture from the west and southeast, is much drier than the very humid remainder of Virginia and the American Southeast.
Asheville, North Carolina sits in the rain shadow of the Balsam, Smoky, and Blue Ridge Mountains. While the mountains surrounding Asheville contain the Appalachian Temperate Rainforests, with areas receiving over an annual average precipitation of , the city itself is the driest location in North Carolina, with an annual average precipitation of only .
Ashcroft, British Columbia, the only true desert in Canada, sits in the rain shadow of the Coast Mountains of Canada.
Yellowknife, the capital and most populous city in the Northwest Territories of Canada, is located in the rain shadow of the mountain ranges to the west of the city.
Oceania
Australia
In New South Wales and the Australian Capital Territory, Monaro is shielded by both the Snowy Mountains to the northwest and coastal ranges to the southeast. Consequently, parts of it are as dry as the wheat-growing lands of those states. For comparison, Cooma receives of rain annually, whereas Batlow, on the western side of the ranges, receives of precipitation. Furthermore, Australia's capital Canberra is also protected from the west by the Brindabellas which create a strong rain shadow in Canberra's valleys, where it receives an annual rainfall of , compared to Adjungbilly's . In the cool season, the Great Dividing Range also shields much of the southeast coast (i.e. Sydney, the Central Coast, the Hunter Valley, Illawarra, the South Coast) from south-westerly polar blasts that originate from the Southern Ocean.
In Queensland, the land west of Atherton Tableland in the Tablelands Region lies on a rain shadow and therefore would feature significantly lower annual rainfall averages than those in the Cairns Region. For comparison, Tully, which is on the eastern side of the tablelands, towards the coast, receives annual rainfall that exceeds , whereas Mareeba, which lies on the rain shadow of the Atherton Tableland, receives of rainfall annually.
In Tasmania, the central Midlands region is in a strong rain shadow and receives only about a fifth as much rainfall as the highlands to the west.
In Victoria, the western side of Port Phillip Bay is in the rain shadow of the Otway Ranges. The area between Geelong and Werribee is the driest part of southern Victoria: the crest of the Otway Ranges receives of rain per year and has myrtle beech rainforests much further west than anywhere else, whilst the area around Little River receives as little as annually, which is as little as Nhill or Longreach and supports only grassland. Also in Victoria, Omeo is shielded by the surrounding Victorian Alps, where it receives around of annual rain, whereas other places nearby exceed .
Western Australia's Wheatbelt and Great Southern regions are shielded by the Darling Range to the west: Mandurah, near the coast, receives about annually. Dwellingup, 40 km (25 miles) inland and in the heart of the ranges, receives over a year while Narrogin, further east, receives less than a year.
Pacific Islands
Hawaii also has rain shadows, with some areas being desert. Orographic lifting produces the world's second-highest annual precipitation record, , on the island of Kauai; the leeward side is understandably rain-shadowed. The entire island of Kahoolawe lies in the rain shadow of Maui's East Maui Volcano.
New Caledonia lies astride the Tropic of Capricorn, between 19° and 23° south latitude. The climate of the islands is tropical, and rainfall is brought by trade winds from the east. The western side of the Grande Terre lies in the rain shadow of the central mountains, and rainfall averages are significantly lower.
On the South Island of New Zealand is found one of the most remarkable rain shadows anywhere on Earth. The Southern Alps intercept moisture coming off the Tasman Sea, precipitating about 6,300 mm (250 in) to 8,900 mm (350 in) liquid water equivalent per year and creating large glaciers on the western side. To the east of the Southern Alps, scarcely 50 km (30 mi) from the snowy peaks, yearly rainfall drops to less than 760 mm (30 in) and some areas less than 380 mm (15 in). (see Nor'west arch for more on this subject).
South America
The Atacama Desert in Chile is the driest non-polar desert on Earth because it is blocked from moisture by the Andes Mountains to the east while the Humboldt Current causes persistent atmospheric stability.
Cuyo and Eastern Patagonia is rain shadowed from the prevailing westerly winds by the Andes range and is arid. The aridity of the lands next to eastern piedmont of the Andes decreases to the south due to a decrease in the height of the Andes with the consequence that the Patagonian Desert develop more fully at the Atlantic coast contributing to shaping the climatic pattern known as the Arid Diagonal. The Argentinian wine region of Cuyo and Northern Patagonia is almost completely dependent on irrigation, using water drawn from the many rivers that drain glacial ice from the Andes.
The Guajira Peninsula in northern Colombia is in the rain shadow of the Sierra Nevada de Santa Marta and despite its tropical latitude is almost arid, receiving almost no rainfall for seven to eight months of the year and being incapable of cultivation without irrigation.
See also
Lake-effect snow
Orographic precipitation
Wind shadow
References
External links
USA Today on rain shadows
Weather pages on rain shadows
Land surface effects on climate
Mountain meteorology
Hydrology | Rain shadow | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 4,589 | [
"Hydrology",
"Environmental engineering"
] |
4,556,850 | https://en.wikipedia.org/wiki/Propane%20torch | A propane torch is a tool normally used for the application of flame or heat which uses propane, a hydrocarbon gas, for its fuel and ambient air as its combustion medium. Propane is one of a group of by-products of the natural gas and petroleum industries known as liquefied petroleum gas (LPG). Propane and other fuel torches are most commonly used in the manufacturing, construction and metal-working industries.
Fuels
Propane is often the fuel of choice because of its low price, ease of storage and availability, hence the name "propane torch". The gases MAPP gas and MAP-Pro are similar to propane, but burn hotter. They are usually found in a yellow canister, as opposed to propane's blue, black, or green. Alternative fuel gases can be harder to store and more dangerous for the user. For example, acetylene needs a porous material mixed with acetone in the tank for safety reasons and cannot be used above a certain pressure and withdrawal rate. Natural gas is a common fuel for household cooking and heating but cannot be stored in liquid form without cryogenic refrigeration.
Mechanism
Small air-only torches normally use the Venturi effect to create a pressure differential which causes air to enter the gas stream through precisely sized inlet holes or intakes, similar to how a car's carburetor works. The fuel and air mix sufficiently, but imperfectly, in the burner's tube before the flame front is reached. The flame also receives some further oxygen from the air surrounding it. Oxygen-fed torches use the high pressure of the stored oxygen to push the oxygen into a common tube with the fuel.
Uses
Propane torches are frequently employed to solder copper water pipes. They can also be used for some low temperature welding applications, as well as for brazing dissimilar metals together. They can also be used for annealing, for heating metals up in order to bend them more easily, bending glass, and for doing flame tests.
Complete and incomplete combustion
With oxygen/propane torches, the air/fuel ratio can be much lower. The stoichiometric equation for complete combustion of propane with 100% oxygen is:
C3H8 + 5 (O2) → 4 (H2O) + 3 (CO2)
In this case, the only products are CO2 and water. The balanced equation shows that 1 mole of propane requires 5 moles of oxygen.
With air/fuel torches, since air contains about 21% oxygen, a very large ratio of air to fuel must be used to obtain the maximum flame temperature with air. If the propane does not receive enough oxygen, some of the carbon from the propane is left unburned. An example of incomplete combustion that uses 1 mole of propane for every 4 moles of oxygen:
C3H8 + 4 (O2) → 4 (H2O) + 2 (CO2) + 1 C
The extra carbon product will cause soot to form, and the less oxygen used, the more soot will form. There are other unbalanced ratios where incomplete combustion products such as carbon monoxide (CO) are formed, such as:
6 (C3H8) + 29 (O2) → 24 (H2O) + 16 (CO2) + 2 CO
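Since air is only about 21% oxygen, the stoichiometric ratio above can be turned into an approximate air requirement. The sketch below uses rounded molar masses and is only a back-of-the-envelope illustration.

```python
# Air requirement for complete combustion, following C3H8 + 5 O2 -> 3 CO2 + 4 H2O.
O2_PER_PROPANE = 5.0         # moles of O2 per mole of C3H8
O2_FRACTION_IN_AIR = 0.21    # approximate mole fraction of O2 in dry air

moles_air = O2_PER_PROPANE / O2_FRACTION_IN_AIR
print(f"~{moles_air:.1f} mol of air per mol of propane")   # about 23.8

# Mass-based air/fuel ratio from rounded molar masses (g/mol)
M_PROPANE = 44.1
M_AIR = 28.97
print(f"mass air/fuel ratio ~ {moles_air * M_AIR / M_PROPANE:.1f} : 1")  # roughly 15.6 : 1
```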
Flame temperature
An air-fed torch's maximum adiabatic flame temperature is assumed to be around . However, a typical primary flame will only achieve to . Oxygen-fed torches can be much hotter at up to .
See also
Butane torch
Blowtorch
Thermal lance
References
Bibliography
External links
How to Silver Solder Steel with a Propane Torch
How To properly Heat Up Copper Pipe Using A Propane Torch
Burners
Metalworking tools
Welding
Torch | Propane torch | [
"Engineering"
] | 780 | [
"Welding",
"Mechanical engineering"
] |
4,557,352 | https://en.wikipedia.org/wiki/Dicarbon%20monoxide | Dicarbon monoxide () is a molecule that contains two carbon atoms and one oxygen atom. It is a linear molecule that, because of its simplicity, is of interest in a variety of areas. It is, however, so extremely reactive that it is not encountered in everyday life. It is classified as a carbene, cumulene and an oxocarbon.
Occurrence
Dicarbon monoxide is a product of the photolysis of carbon suboxide:
C3O2 → CO + C2O
It is stable enough to observe reactions with NO and NO2.
Called ketenylidene in organometallic chemistry, it is a ligand observed in metal carbonyl clusters, e.g. [OC2Co3(CO)9]+. Ketenylidenes are proposed as intermediates in the chain growth mechanism of the Fischer-Tropsch Process, which converts carbon monoxide and hydrogen to hydrocarbon fuels.
The organophosphorus compound (C6H5)3PCCO (CAS# 15596-07-3) contains the C2O functionality. Sometimes called Bestmann's Ylide, it is a yellow solid.
References
Carbenes
Oxocarbons | Dicarbon monoxide | [
"Chemistry"
] | 250 | [
"Organic compounds",
"Carbenes",
"Inorganic compounds",
"Inorganic compound stubs"
] |
4,557,961 | https://en.wikipedia.org/wiki/Alignment%20level | The alignment level in an audio signal chain or on an audio recording is a defined anchor point that represents a reasonable or typical level.
Analogue
In analogue systems, alignment level in broadcast chains is commonly 0 dBu (0.775 volts RMS) and in professional audio is commonly 0 VU (4 dBu, 1.228 volts RMS). Under normal situations, the 0 VU reference allows for a headroom of 18 dB or more above the reference level without significant distortion. This is largely due to the use of slow-responding VU meters in almost all analogue professional audio equipment, which, by their design and by specification, respond to an average level, not peak levels.
Digital
In digital systems alignment level commonly is at −18 dBFS (18 dB below digital full scale), in accordance with EBU recommendations. Digital equipment must use peak-reading metering systems to avoid severe digital distortion caused by the signal going beyond digital full scale. 24-bit original or master recordings commonly have an alignment level at −24 dBFS to allow extra headroom, which can then be reduced to match the available headroom of the final medium by audio level compression. FM broadcasts usually have only 9 dB of headroom, as recommended by the EBU, but digital broadcasts, which could operate with 18 dB of headroom, given their low noise floor even in difficult reception areas, currently operate in a state of confusion, with some transmitting at maximum level, while others operate at a much lower level, even though they carry material that has been compressed for compatibility with the lower dynamic range of FM transmissions.
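The relationships between dBu, volts and dBFS headroom mentioned above are simple logarithmic conversions. The following sketch is illustrative only and is not an EBU reference implementation.

```python
DBU_REF_VOLTS = 0.775  # 0 dBu reference level in volts RMS

def dbu_to_volts(dbu):
    """Convert a level in dBu to volts RMS."""
    return DBU_REF_VOLTS * 10 ** (dbu / 20.0)

def headroom_db(alignment_dbfs):
    """Headroom between an alignment level (dBFS) and digital full scale (0 dBFS)."""
    return 0.0 - alignment_dbfs

print(round(dbu_to_volts(0.0), 3))   # 0.775 V RMS, the broadcast alignment level
print(round(dbu_to_volts(4.0), 3))   # 1.228 V RMS, i.e. 0 VU in professional audio
print(headroom_db(-18.0))            # 18 dB of headroom at the EBU alignment level
print(headroom_db(-24.0))            # 24 dB for a 24-bit master aligned at -24 dBFS
```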
EBU
In EBU documents, alignment level is defined as −18 dBFS, the level of the alignment signal: a 1 kHz sine tone for analogue applications and 997 Hz in digital applications.
Motivation
Using alignment level rather than maximum permitted level as the reference point allows more sensible headroom management throughout the audio signal chain; compression happens only where intended.
Loudness wars have resulted in increasing playback loudness. Loudness normalisation to a fixed alignment level can improve the experience when listening to mixed material.
See also
Audio normalization
Full scale
Nominal level
Transmission-level point
External links
EBU Recommendation R128 - Loudness normalisation and permitted maximum level of audio levels (2010)
EBU Recommendation R68-2000
EBU Recommendation R117-2006 (against loudness war)
AES Convention Paper 5538 On Levelling and Loudness Problems at Broadcast Studios
EBU Tech 3282-E on EBU RDAT Tape Levels
EBU R89-1997 on CD-R levels
Distortion to the People — TC Electronics
EBU Loudness Group
Audio engineering
Broadcast engineering
Sound production technology
Sound recording
Sound | Alignment level | [
"Engineering"
] | 544 | [
"Electronic engineering",
"Broadcast engineering",
"Audio engineering",
"Electrical engineering"
] |
4,558,674 | https://en.wikipedia.org/wiki/Bitext%20word%20alignment | Bitext word alignment or simply word alignment is the natural language processing task of identifying translation relationships among the words (or more rarely multiword units) in a bitext, resulting in a bipartite graph between the two sides of the bitext, with an arc between two words if and only if they are translations of one another. Word alignment is typically done after sentence alignment has already identified pairs of sentences that are translations of one another.
Bitext word alignment is an important supporting task for most methods of statistical machine translation. The parameters of statistical machine translation models are typically estimated by observing word-aligned bitexts, and conversely automatic word alignment is typically done by choosing that alignment which best fits a statistical machine translation model. Circular application of these two ideas results in an instance of the expectation-maximization algorithm.
This approach to training is an instance of unsupervised learning, in that the system is not given examples of the kind of output desired, but is trying to find values for the unobserved model and alignments which best explain the observed bitext. Recent work has begun to explore supervised methods which rely on presenting the system with a (usually small) number of manually aligned sentences. In addition to the benefit of the additional information provided by supervision, these models are typically also able to more easily take advantage of combining many features of the data, such as context, syntactic structure, part-of-speech, or translation lexicon information, which are difficult to integrate into the generative statistical models traditionally used.
Besides the training of machine translation systems, other applications of word alignment include translation lexicon induction, word sense discovery, word sense disambiguation and the cross-lingual projection of linguistic information.
Training
IBM Models
The IBM models are used in statistical machine translation to train a translation model and an alignment model. They are an instance of the Expectation–maximization algorithm: in the expectation step the translation probabilities within each sentence are computed; in the maximization step they are accumulated to global translation probabilities. A minimal training sketch is given after the feature list below.
Features:
IBM Model 1: lexical alignment probabilities
IBM Model 2: absolute positions
IBM Model 3: fertilities (supports insertions)
IBM Model 4: relative positions
IBM Model 5: fixes deficiencies (ensures that no two words can be aligned to the same position)
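The following is a minimal sketch of IBM Model 1 training by expectation–maximization on a toy bitext. The corpus, variable names and convergence behaviour are illustrative only and do not correspond to any particular toolkit such as GIZA++.

```python
from collections import defaultdict

def train_ibm_model1(bitext, iterations=10):
    """Estimate lexical translation probabilities t(f|e) with EM (IBM Model 1)."""
    # Uniform start: every co-occurring word pair gets the same initial score.
    t = defaultdict(float)
    for e_sent, f_sent in bitext:
        for e in e_sent:
            for f in f_sent:
                t[(f, e)] = 1.0

    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(f, e)  (E-step)
        total = defaultdict(float)   # normalisers per source word e
        for e_sent, f_sent in bitext:
            for f in f_sent:
                z = sum(t[(f, e)] for e in e_sent)
                for e in e_sent:
                    c = t[(f, e)] / z
                    count[(f, e)] += c
                    total[e] += c
        for (f, e) in count:         # M-step: renormalise into probabilities
            t[(f, e)] = count[(f, e)] / total[e]
    return t

toy_bitext = [(["the", "house"], ["das", "haus"]),
              (["the", "book"], ["das", "buch"]),
              (["a", "book"], ["ein", "buch"])]
t = train_ibm_model1(toy_bitext)
print(round(t[("das", "the")], 3))   # climbs towards 1.0 on this toy corpus
```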
HMM
Vogel et al. developed an approach featuring lexical translation probabilities and relative alignment by mapping the problem to a Hidden Markov model. The states and observations represent the source and target words respectively. The transition probabilities model the alignment probabilities. In training, the translation and alignment probabilities can be obtained from the posterior probabilities computed by the Forward-backward algorithm.
Software
GIZA++ (free software under GPL)
The most widely used alignment toolkit, implementing the famous IBM models with a variety of improvements
The Berkeley Word Aligner (free software under GPL)
Another widely used aligner implementing alignment by agreement, and discriminative models for alignment
Nile (free software under GPL)
A supervised word aligner that is able to use syntactic information on the source and target side
pialign (free software under the Common Public License)
An aligner that aligns both words and phrases using Bayesian learning and inversion transduction grammars
Natura Alignment Tools (NATools, free software under GPL)
UNL aligner (free software under Creative Commons Attribution 3.0 Unported License)
Geometric Mapping and Alignment (GMA) (free software under GPL)
HunAlign (free software under LGPL-2.1)
Anymalign (free software under GPL)
References
Machine translation | Bitext word alignment | [
"Technology"
] | 747 | [
"Machine translation",
"Natural language and computing"
] |
4,562,380 | https://en.wikipedia.org/wiki/Hydrogen%20telluride | Hydrogen telluride is the inorganic compound with the formula H2Te. A hydrogen chalcogenide and the simplest hydride of tellurium, it is a colorless gas. Although unstable in ambient air, the gas can exist long enough to be readily detected by the odour of rotting garlic at extremely low concentrations; or by the revolting odour of rotting leeks at somewhat higher concentrations. Most compounds with Te–H bonds (tellurols) are unstable with respect to loss of H2. H2Te is chemically and structurally similar to hydrogen selenide, both are acidic. The H–Te–H angle is about 90°. Volatile tellurium compounds often have unpleasant odours, reminiscent of decayed leeks or garlic.
Synthesis
Electrolytic methods have been developed.
H2Te can also be prepared by hydrolysis of the telluride derivatives of electropositive metals. The typical hydrolysis is that of aluminium telluride:
Al2Te3 + 6 H2O → 2 Al(OH)3 + 3 H2Te
Other salts of Te2− such as MgTe and sodium telluride can also be used. Na2Te can be made by the reaction of Na and Te in anhydrous ammonia. The intermediate in the hydrolysis, the hydrogen telluride anion HTe−, can be isolated as salts as well. NaHTe can be made by reducing tellurium with NaBH4.
Hydrogen telluride cannot be efficiently prepared from its constituent elements, in contrast to H2Se.
Properties
H2Te is an endothermic compound, degrading to the elements at room temperature:
H2Te → H2 + Te
Light accelerates the decomposition. It is unstable in air, being oxidized to water and elemental tellurium:
2 H2Te + O2 → 2 H2O + 2 Te
It is almost as acidic as phosphoric acid (Ka = 8.1×10−3), having a Ka value of about 2.3×10−3. It reacts with many metals to form tellurides.
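For illustration, the Ka values above can be converted to pKa, and a rough pH estimated with the usual weak-acid approximation (first dissociation only; the 0.1 M concentration is an invented example, and the approximation is crude at this Ka).

```python
import math

def pka(ka):
    """pKa = -log10(Ka)."""
    return -math.log10(ka)

def approx_ph(ka, molarity):
    """Crude weak-acid estimate, first dissociation only: [H+] ~ sqrt(Ka * C)."""
    return -math.log10(math.sqrt(ka * molarity))

print(round(pka(2.3e-3), 2))             # H2Te first dissociation, about 2.64
print(round(pka(8.1e-3), 2))             # phosphoric acid first dissociation, about 2.09
print(round(approx_ph(2.3e-3, 0.1), 2))  # a hypothetical 0.1 M solution, roughly pH 1.8
```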
See also
Dimethyl telluride
References
Hydrogen compounds
Triatomic molecules
Tellurides | Hydrogen telluride | [
"Physics",
"Chemistry"
] | 433 | [
"Molecules",
"Triatomic molecules",
"Matter"
] |
4,562,815 | https://en.wikipedia.org/wiki/Applied%20mechanics | Applied mechanics is the branch of science concerned with the motion of any substance that can be experienced or perceived by humans without the help of instruments. In short, when mechanics concepts surpass being theoretical and are applied and executed, general mechanics becomes applied mechanics. It is this stark difference that makes applied mechanics an essential understanding for practical everyday life. It has numerous applications in a wide variety of fields and disciplines, including but not limited to structural engineering, astronomy, oceanography, meteorology, hydraulics, mechanical engineering, aerospace engineering, nanotechnology, structural design, earthquake engineering, fluid dynamics, planetary sciences, and other life sciences. Connecting research between numerous disciplines, applied mechanics plays an important role in both science and engineering.
Pure mechanics describes the response of bodies (solids and fluids) or systems of bodies to external behavior of a body, in either a beginning state of rest or of motion, subjected to the action of forces. Applied mechanics bridges the gap between physical theory and its application to technology.
Applied mechanics is composed of two main categories: classical mechanics, the study of the mechanics of macroscopic solids, and fluid mechanics, the study of the mechanics of macroscopic fluids. Each branch of applied mechanics contains subcategories formed through their own subsections as well. Classical mechanics is divided into statics and dynamics, which are further subdivided: statics into the study of rigid bodies and rigid structures, and dynamics into kinematics and kinetics. Like classical mechanics, fluid mechanics is also divided into two sections: statics and dynamics.
Within the practical sciences, applied mechanics is useful in formulating new ideas and theories, discovering and interpreting phenomena, and developing experimental and computational tools. In the application of the natural sciences, mechanics was said to be complemented by thermodynamics, the study of heat and more generally energy, and electromechanics, the study of electricity and magnetism.
Overview
Engineering problems are generally tackled with applied mechanics through the application of theories of classical mechanics and fluid mechanics. Because applied mechanics can be applied in engineering disciplines like civil engineering, mechanical engineering, aerospace engineering, materials engineering, and biomedical engineering, it is sometimes referred to as engineering mechanics.
Science and engineering are interconnected with respect to applied mechanics, as researches in science are linked to research processes in civil, mechanical, aerospace, materials and biomedical engineering disciplines. In civil engineering, applied mechanics’ concepts can be applied to structural design and a variety of engineering sub-topics like structural, coastal, geotechnical, construction, and earthquake engineering. In mechanical engineering, it can be applied in mechatronics and robotics, design and drafting, nanotechnology, machine elements, structural analysis, friction stir welding, and acoustical engineering. In aerospace engineering, applied mechanics is used in aerodynamics, aerospace structural mechanics and propulsion, aircraft design and flight mechanics. In materials engineering, applied mechanics’ concepts are used in thermoelasticity, elasticity theory, fracture and failure mechanisms, structural design optimisation, fracture and fatigue, active materials and composites, and computational mechanics. Research in applied mechanics can be directly linked to biomedical engineering areas of interest like orthopaedics; biomechanics; human body motion analysis; soft tissue modelling of muscles, tendons, ligaments, and cartilage; biofluid mechanics; and dynamic systems, performance enhancement, and optimal control.
Brief history
The first science with a theoretical foundation based in mathematics was mechanics; the underlying principles of mechanics were first delineated by Isaac Newton in his 1687 book Philosophiæ Naturalis Principia Mathematica. One of the earliest works to define applied mechanics as its own discipline was the three volume Handbuch der Mechanik written by German physicist and engineer Franz Josef Gerstner. The first seminal work on applied mechanics to be published in English was A Manual of Applied Mechanics in 1858 by English mechanical engineer William Rankine. August Föppl, a German mechanical engineer and professor, published Vorlesungen über technische Mechanik in 1898 in which he introduced calculus to the study of applied mechanics.
Applied mechanics was established as a discipline separate from classical mechanics in the early 1920s with the publication of Journal of Applied Mathematics and Mechanics, the creation of the Society of Applied Mathematics and Mechanics, and the first meeting of the International Congress of Applied Mechanics. In 1921 Austrian scientist Richard von Mises started the Journal of Applied Mathematics and Mechanics (Zeitschrift für Angewandte Mathematik und Mechanik) and in 1922, with German scientist Ludwig Prandtl, founded the Society of Applied Mathematics and Mechanics (Gesellschaft für Angewandte Mathematik und Mechanik). During a 1922 conference on hydrodynamics and aerodynamics in Innsbruck, Austria, Theodore von Kármán, a Hungarian engineer, and Tullio Levi-Civita, an Italian mathematician, met and decided to organize a conference on applied mechanics. In 1924 the first meeting of the International Congress of Applied Mechanics was held in Delft, the Netherlands, attended by more than 200 scientists from around the world. Since this first meeting the congress has been held every four years, except during World War II; the name of the meeting was changed to International Congress of Theoretical and Applied Mechanics in 1960.
Due to the unpredictable political landscape in Europe after the First World War and the upheaval of World War II, many European scientists and engineers emigrated to the United States. Ukrainian engineer Stephan Timoshenko fled the Bolshevik Red Army in 1918 and eventually emigrated to the U.S. in 1922; over the next twenty-two years he taught applied mechanics at the University of Michigan and Stanford University. Timoshenko authored thirteen textbooks in applied mechanics, many considered the gold standard in their fields; he also founded the Applied Mechanics Division of the American Society of Mechanical Engineers in 1927 and is considered “America’s Father of Engineering Mechanics.” In 1930 Theodore von Kármán left Germany and became the first director of the Aeronautical Laboratory at the California Institute of Technology; von Kármán would later co-found the Jet Propulsion Laboratory in 1944. With the leadership of Timoshenko and von Kármán, the influx of talent from Europe, and the rapid growth of the aeronautical and defense industries, applied mechanics became a mature discipline in the U.S. by 1950.
Branches
Dynamics
Dynamics, the study of the motion of various objects, can be further divided into two branches: kinematics and kinetics. In classical mechanics, kinematics is the analysis of moving bodies in terms of time, velocity, displacement, and acceleration, while kinetics is the study of moving bodies through the lens of the effects of forces and masses. In the context of fluid mechanics, fluid dynamics pertains to the flow and the description of the motion of various fluids.
Statics
Statics is the study and description of bodies at rest. Static analysis in classical mechanics can be broken down into two categories: non-deformable bodies and deformable bodies. When studying non-deformable bodies, the forces acting on the rigid structures are analyzed. When studying deformable bodies, the structure and material strength are examined. In the context of fluid mechanics, the resting state of the fluid and the pressures within it are considered.
Relationship to classical mechanics
Applied mechanics is the result of the practical application of various engineering and mechanical disciplines.
Examples
Newtonian foundation
Being one of the first sciences for which a systematic theoretical framework was developed, mechanics was spearheaded by Sir Isaac Newton's Principia (published in 1687). It is the "divide and rule" strategy developed by Newton that helped to govern motion and split it into dynamics and statics. The type of force, the type of matter, and the external forces acting on that matter dictate how the "divide and rule" strategy is applied within dynamic and static studies.
Archimedes' principle
Archimedes' principle is a major one that contains many defining propositions pertaining to fluid mechanics. As stated by proposition 7 of Archimedes' principle, a solid that is heavier than the fluid it is placed in will descend to the bottom of the fluid. If the solid is weighed within the fluid, it will measure lighter than its true weight by the weight of the fluid it displaced. Proposition 5 develops this further: if the solid is lighter than the fluid it is placed in, the solid will have to be forcibly immersed to be fully covered by the liquid. The weight of the displaced fluid will then be equal to the weight of the solid.
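A small numerical illustration of these propositions: the apparent weight of a submerged solid is its true weight minus the weight of the displaced fluid. The densities and volume below are illustrative assumptions.

```python
# Apparent weight of a submerged solid = true weight - weight of displaced fluid.
G = 9.81             # m/s^2
RHO_WATER = 1000.0   # kg/m^3
RHO_STEEL = 7850.0   # kg/m^3, denser than the fluid, so the block sinks (proposition 7)

volume = 0.001       # m^3, a one-litre block (invented example)
true_weight = RHO_STEEL * volume * G
buoyant_force = RHO_WATER * volume * G        # weight of the displaced water
apparent_weight = true_weight - buoyant_force

print(round(true_weight, 1), round(buoyant_force, 1), round(apparent_weight, 1))
# -> 77.0 9.8 67.2 (newtons): the block "loses" exactly the displaced fluid's weight
```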
Major topics
This section is based on the "AMR Subject Classification Scheme" from the journal Applied Mechanics Reviews.
Foundations and basic methods
Continuum mechanics
Finite element method
Finite difference method
Other computational methods
Experimental system analysis
Dynamics and vibration
Dynamics (mechanics)
Kinematics
Vibrations of solids (basic)
Vibrations (structural elements)
Vibrations (structures)
Wave motion in solids
Impact on solids
Waves in incompressible fluids
Waves in compressible fluids
Solid fluid interactions
Astronautics (celestial and orbital mechanics)
Explosions and ballistics
Acoustics
Automatic control
System theory and design
Optimal control system
System and control applications
Robotics
Manufacturing
Mechanics of solids
Elasticity
Viscoelasticity
Plasticity and viscoplasticity
Composite material mechanics
Cables, rope, beams, etc
Plates, shells, membranes, etc
Structural stability (buckling, postbuckling)
Electromagneto solid mechanics
Soil mechanics (basic)
Soil mechanics (applied)
Rock mechanics
Material processing
Fracture and damage processes
Fracture and damage mechanics
Experimental stress analysis
Material Testing
Structures (basic)
Structures (ground)
Structures (ocean and coastal)
Structures (mobile)
Structures (containment)
Friction and wear
Machine elements
Machine design
Fastening and joining
Mechanics of fluids
Rheology
Hydraulics
Incompressible flow
Compressible flow
Rarefied flow
Multiphase flow
Wall Layers (incl boundary layers)
Internal flow (pipe, channel, and couette)
Internal flow (inlets, nozzles, diffusers, and cascades)
Free shear layers (mixing layers, jets, wakes, cavities, and plumes)
Flow stability
Turbulence
Electromagneto fluid and plasma dynamics
Hydromechanics
Aerodynamics
Machinery fluid dynamics
Lubrication
Flow measurements and visualization
Thermal sciences
Thermodynamics
Heat transfer (one phase convection)
Heat transfer (two phase convection)
Heat transfer (conduction)
Heat transfer (radiation and combined modes)
Heat transfer (devices and systems)
Thermodynamics of solids
Mass transfer (with and without heat transfer)
Combustion
Prime movers and propulsion systems
Earth sciences
Micromeritics
Porous media
Geomechanics
Earthquake mechanics
Hydrology, oceanology, and meteorology
Energy systems and environment
Fossil fuel systems
Nuclear systems
Geothermal systems
Solar energy systems
Wind energy systems
Ocean energy system
Energy distribution and storage
Environmental fluid mechanics
Hazardous waste containment and disposal
Biosciences
Biomechanics
Human factor engineering
Rehabilitation engineering
Sports mechanics
Applications
Electrical Engineering
Civil engineering
Mechanical Engineering
Nuclear engineering
Architectural engineering
Chemical engineering
Petroleum engineering
Publications
Journal of Applied Mathematics and Mechanics
Newsletters of the Applied Mechanics Division
Journal of Applied Mechanics
Applied Mechanics Reviews
Applied Mechanics
Quarterly Journal of Mechanics and Applied Mathematics
Journal of Applied Mathematics and Mechanics (PMM)
Gesellschaft für Angewandte Mathematik und Mechanik
Acta Mechanica Sinica
See also
Biomechanics
Geomechanics
Mechanicians
Mechanics
Physics
Principle of moments
Structural analysis
Kinetics (physics)
Kinematics
Dynamics (physics)
Statics
References
Further reading
J.P. Den Hartog, Strength of Materials, Dover, New York, 1949.
F.P. Beer, E.R. Johnston, J.T. DeWolf, Mechanics of Materials, McGraw-Hill, New York, 1981.
S.P. Timoshenko, History of Strength of Materials, Dover, New York, 1953.
J.E. Gordon, The New Science of Strong Materials, Princeton, 1984.
H. Petroski, To Engineer Is Human, St. Martins, 1985.
T.A. McMahon and J.T. Bonner, On Size and Life, Scientific American Library, W.H. Freeman, 1983.
M. F. Ashby, Materials Selection in Design, Pergamon, 1992.
A.H. Cottrell, Mechanical Properties of Matter, Wiley, New York, 1964.
S.A. Wainwright, W.D. Biggs, J.D. Currey, J.M. Gosline, Mechanical Design in Organisms, Edward Arnold, 1976.
S. Vogel, Comparative Biomechanics, Princeton, 2003.
J. Howard, Mechanics of Motor Proteins and the Cytoskeleton, Sinauer Associates, 2001.
J.L. Meriam, L.G. Kraige. Engineering Mechanics Volume 2: Dynamics, John Wiley & Sons., New York, 1986.
J.L. Meriam, L.G. Kraige. Engineering Mechanics Volume 1: Statics, John Wiley & Sons., New York, 1986.
External links
Video and web lectures
Engineering Mechanics Video Lectures and Web Notes
Applied Mechanics Video Lectures by Prof. S.K. Gupta, Department of Applied Mechanics, IIT Delhi
Mechanics
Structural engineering | Applied mechanics | [
"Physics",
"Engineering"
] | 2,684 | [
"Structural engineering",
"Construction",
"Civil engineering",
"Mechanics",
"Mechanical engineering",
"Engineering mechanics"
] |
4,562,875 | https://en.wikipedia.org/wiki/Motion%20planning | Motion planning, also called path planning (and also known as the navigation problem or the piano mover's problem), is the computational problem of finding a sequence of valid configurations that moves an object from a source to a destination. The term is used in computational geometry, computer animation, robotics and computer games.
For example, consider navigating a mobile robot inside a building to a distant waypoint. It should execute this task while avoiding walls and not falling down stairs. A motion planning algorithm would take a description of these tasks as input, and produce the speed and turning commands sent to the robot's wheels. Motion planning algorithms might address robots with a larger number of joints (e.g., industrial manipulators), more complex tasks (e.g. manipulation of objects), different constraints (e.g., a car that can only drive forward), and uncertainty (e.g. imperfect models of the environment or robot).
Motion planning has several robotics applications, such as autonomy, automation, and robot design in CAD software, as well as applications in other fields, such as animating digital characters, video games, architectural design, robotic surgery, and the study of biological molecules.
Concepts
A basic motion planning problem is to compute a continuous path that connects a start configuration S and a goal configuration G, while avoiding collision with known obstacles. The robot and obstacle geometry is described in a 2D or 3D workspace, while the motion is represented as a path in (possibly higher-dimensional) configuration space.
Configuration space
A configuration describes the pose of the robot, and the configuration space C is the set of all possible configurations. For example:
If the robot is a single point (zero-sized) translating in a 2-dimensional plane (the workspace), C is a plane, and a configuration can be represented using two parameters (x, y).
If the robot is a 2D shape that can translate and rotate, the workspace is still 2-dimensional. However, C is the special Euclidean group SE(2) = R2 ⋊ SO(2) (where SO(2) is the special orthogonal group of 2D rotations), and a configuration can be represented using 3 parameters (x, y, θ).
If the robot is a solid 3D shape that can translate and rotate, the workspace is 3-dimensional, but C is the special Euclidean group SE(3) = R3 ⋊ SO(3), and a configuration requires 6 parameters: (x, y, z) for translation, and Euler angles (α, β, γ).
If the robot is a fixed-base manipulator with N revolute joints (and no closed-loops), C is N-dimensional.
Free space
The set of configurations that avoids collision with obstacles is called the free space Cfree. The complement of Cfree in C is called the obstacle or forbidden region.
Often, it is prohibitively difficult to explicitly compute the shape of Cfree. However, testing whether a given configuration is in Cfree is efficient. First, forward kinematics determine the position of the robot's geometry, and collision detection tests if the robot's geometry collides with the environment's geometry.
Target space
Target space is a subspace of free space which denotes where we want the robot to move to. In global motion planning, target space is observable by the robot's sensors. However, in local motion planning, the robot cannot observe the target space in some states. To solve this problem, the robot goes through several virtual target spaces, each of which is located within the observable area (around the robot). A virtual target space is called a sub-goal.
Obstacle space
Obstacle space is the space that the robot cannot move to. Obstacle space is not the opposite of free space.
Algorithms
Low-dimensional problems can be solved with grid-based algorithms that overlay a grid on top of configuration space, or geometric algorithms that compute the shape and connectivity of Cfree.
Exact motion planning for high-dimensional systems under complex constraints is computationally intractable. Potential-field algorithms are efficient, but fall prey to local minima (an exception is the harmonic potential fields). Sampling-based algorithms avoid the problem of local minima, and solve many problems quite quickly.
They are unable to determine that no path exists, but they have a probability of failure that decreases to zero as more time is spent.
Sampling-based algorithms are currently considered state-of-the-art for motion planning in high-dimensional spaces, and have been applied to problems which have dozens or even hundreds of dimensions (robotic manipulators, biological molecules, animated digital characters, and legged robots).
Grid-based search
Grid-based approaches overlay a grid on configuration space and assume each configuration is identified with a grid point. At each grid point, the robot is allowed to move to adjacent grid points as long as the line between them is completely contained within Cfree (this is tested with collision detection). This discretizes the set of actions, and search algorithms (like A*) are used to find a path from the start to the goal.
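To make the grid-based scheme above concrete, the following minimal Python sketch runs A* over a small 2D occupancy grid. It is only an illustration: the grid, start, and goal are hypothetical, edge costs are uniform, and only 4-connected moves are allowed.

```python
import heapq

def astar_grid(grid, start, goal):
    """A* over a 2D occupancy grid. grid[r][c] == 0 means the cell lies in Cfree."""
    rows, cols = len(grid), len(grid[0])
    def h(cell):  # Manhattan-distance heuristic (admissible for 4-connected moves)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    frontier = [(h(start), 0, start)]
    came_from, g_cost = {start: None}, {start: 0}
    while frontier:
        _, g, current = heapq.heappop(frontier)
        if current == goal:                      # reconstruct the path back to the start
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (current[0] + dr, current[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    came_from[nxt] = current
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None  # goal unreachable at this resolution

# Hypothetical 4x4 grid: 1 marks an obstacle cell.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar_grid(grid, (0, 0), (3, 3)))
```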
These approaches require setting a grid resolution. Search is faster with coarser grids, but the algorithm will fail to find paths through narrow portions of Cfree. Furthermore, the number of points on the grid grows exponentially in the configuration space dimension, which makes them inappropriate for high-dimensional problems.
Traditional grid-based approaches produce paths whose heading changes are constrained to multiples of a given base angle, often resulting in suboptimal paths. Any-angle path planning approaches find shorter paths by propagating information along grid edges (to search fast) without constraining their paths to grid edges (to find short paths).
Grid-based approaches often need to search repeatedly, for example, when the knowledge of the robot about the configuration space changes or the configuration space itself changes during path following. Incremental heuristic search algorithms replan fast by using experience with the previous similar path-planning problems to speed up their search for the current one.
Interval-based search
These approaches are similar to grid-based search approaches except that they generate a paving covering entirely the configuration space instead of a grid. The paving is decomposed into two subpavings X−,X+ made with boxes such that X− ⊂ Cfree ⊂ X+. Characterizing Cfree amounts to solving a set inversion problem. Interval analysis could thus be used when Cfree cannot be described by linear inequalities in order to have a guaranteed enclosure.
The robot is thus allowed to move freely in X−, and cannot go outside X+. To both subpavings, a neighbor graph is built and paths can be found using algorithms such as Dijkstra or A*. When a path is feasible in X−, it is also feasible in Cfree. When no path exists in X+ from one initial configuration to the goal, we have the guarantee that no feasible path exists in Cfree. As for the grid-based approach, the interval approach is inappropriate for high-dimensional problems, due to the fact that the number of boxes to be generated grows exponentially with respect to the dimension of configuration space.
An illustration is provided by the three figures on the right where a hook with two degrees of freedom has to move from the left to the right, avoiding two horizontal small segments.
Nicolas Delanoue has shown that the decomposition with subpavings using interval analysis also makes it possible to characterize the topology of Cfree such as counting its number of connected components.
Geometric algorithms
Point robots among polygonal obstacles
Visibility graph
Cell decomposition
Voronoi diagram
Translating objects among obstacles
Minkowski sum
Finding the way out of a building
farthest ray trace
Given a bundle of rays cast from the current position, each labelled with the distance at which it hits a wall, the robot moves in the direction of the longest ray unless a door is identified. Such an algorithm was used for modeling emergency egress from buildings.
Artificial potential fields
One approach is to treat the robot's configuration as a point in a potential field that combines attraction to the goal, and repulsion from obstacles. The resulting trajectory is output as the path. This approach has advantages in that the trajectory is produced with little computation. However, such planners can become trapped in local minima of the potential field and fail to find a path, or can find a non-optimal path. The artificial potential fields can be treated as continuum equations similar to electrostatic potential fields (treating the robot like a point charge), or motion through the field can be discretized using a set of linguistic rules. A navigation function or a probabilistic navigation function are sorts of artificial potential functions which have the quality of not having minimum points except the target point.
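As a rough illustration of the potential-field idea (not a method attributed to any source cited here), the sketch below performs gradient descent on an attractive-plus-repulsive potential for a point robot among circular obstacles; all gains, step sizes and geometry are made-up values.

```python
import numpy as np

def potential_step(q, goal, obstacles, k_att=1.0, k_rep=100.0, influence=2.0, step=0.05):
    """One gradient-descent step on a combined attractive/repulsive potential field."""
    grad = k_att * (q - goal)                       # attractive term pulls toward the goal
    for center, radius in obstacles:
        d = np.linalg.norm(q - center) - radius     # distance to the obstacle surface
        if 0 < d < influence:                       # repulsion only inside the influence zone
            direction = (q - center) / np.linalg.norm(q - center)
            grad += k_rep * (1.0 / influence - 1.0 / d) / d**2 * direction
    return q - step * grad                          # descend the potential

q, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacles = [(np.array([2.5, 2.4]), 0.5)]           # hypothetical circular obstacle
for _ in range(500):
    q = potential_step(q, goal, obstacles)
print(q)  # may end near the goal, or stall in a local minimum of the field
```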
Sampling-based algorithms
Sampling-based algorithms represent the configuration space with a roadmap of sampled configurations.
A basic algorithm samples N configurations in C, and retains those in Cfree to use as milestones. A roadmap is then constructed that connects two milestones P and Q if the line segment PQ is completely in Cfree. Again, collision detection is used to test inclusion in Cfree. To find a path that connects S and G, they are added to the roadmap. If a path in the roadmap links S and G, the planner succeeds, and returns that path. If not, the reason is not definitive: either there is no path in Cfree, or the planner did not sample enough milestones.
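The basic sampling scheme described above can be sketched as follows; the 2D workspace, circular-obstacle collision model, number of milestones and connection radius are all hypothetical choices made only for illustration.

```python
import math, random, heapq

def in_cfree(q, obstacles):
    """Collision test: configuration q is free if it lies outside every circular obstacle."""
    return all(math.dist(q, c) > r for c, r in obstacles)

def segment_free(p, q, obstacles, steps=20):
    """Approximate test that the straight segment pq stays inside Cfree."""
    return all(in_cfree((p[0] + (q[0] - p[0]) * t / steps, p[1] + (q[1] - p[1]) * t / steps), obstacles)
               for t in range(steps + 1))

def prm(start, goal, obstacles, n=200, radius=1.5, seed=0):
    random.seed(seed)
    # Milestones: start, goal, and random free configurations in a 10 x 10 workspace.
    milestones = [start, goal] + [
        q for q in ((random.uniform(0, 10), random.uniform(0, 10)) for _ in range(n))
        if in_cfree(q, obstacles)]
    edges = {i: [] for i in range(len(milestones))}
    for i, p in enumerate(milestones):              # connect nearby, mutually visible milestones
        for j, q in enumerate(milestones[i + 1:], i + 1):
            if math.dist(p, q) < radius and segment_free(p, q, obstacles):
                edges[i].append(j); edges[j].append(i)
    # Dijkstra over the roadmap from milestone 0 (start) to milestone 1 (goal).
    dist, frontier = {0: 0.0}, [(0.0, 0)]
    while frontier:
        d, i = heapq.heappop(frontier)
        if i == 1:
            return d                                # length of the path found in the roadmap
        for j in edges[i]:
            nd = d + math.dist(milestones[i], milestones[j])
            if nd < dist.get(j, float("inf")):
                dist[j] = nd; heapq.heappush(frontier, (nd, j))
    return None  # no path found with these milestones

obstacles = [((5.0, 5.0), 2.0)]                     # hypothetical circular obstacle
print(prm((1.0, 1.0), (9.0, 9.0), obstacles))
```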
These algorithms work well for high-dimensional configuration spaces, because unlike combinatorial algorithms, their running time is not (explicitly) exponentially dependent on the dimension of C. They are also (generally) substantially easier to implement. They are probabilistically complete, meaning the probability that they will produce a solution approaches 1 as more time is spent. However, they cannot determine if no solution exists.
Given basic visibility conditions on Cfree, it has been proven that as the number of configurations N grows higher, the probability that the above algorithm finds a solution approaches 1 exponentially. Visibility is not explicitly dependent on the dimension of C; it is possible to have a high-dimensional space with "good" visibility or a low-dimensional space with "poor" visibility. The experimental success of sample-based methods suggests that most commonly seen spaces have good visibility.
There are many variants of this basic scheme:
It is typically much faster to only test segments between nearby pairs of milestones, rather than all pairs.
Nonuniform sampling distributions attempt to place more milestones in areas that improve the connectivity of the roadmap.
Quasirandom samples typically produce a better covering of configuration space than pseudorandom ones, though some recent work argues that the effect of the source of randomness is minimal compared to the effect of the sampling distribution.
Some variants employ local sampling by performing a directional Markov chain Monte Carlo random walk with some local proposal distribution.
It is possible to substantially reduce the number of milestones needed to solve a given problem by allowing curved eye sights (for example by crawling on the obstacles that block the way between two milestones).
If only one or a few planning queries are needed, it is not always necessary to construct a roadmap of the entire space. Tree-growing variants are typically faster for this case (single-query planning). Roadmaps are still useful if many queries are to be made on the same space (multi-query planning).
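A minimal sketch of a tree-growing (single-query) planner of the rapidly-exploring random tree type is shown below; the step size, goal bias and obstacle model are arbitrary assumptions, and the straight-line edge between a node and its successor is not separately collision-checked.

```python
import math, random

def rrt(start, goal, obstacles, step=0.5, iters=2000, goal_tol=0.5, seed=1):
    """Minimal 2D rapidly-exploring random tree; returns a path as a list of points."""
    random.seed(seed)
    nodes, parent = [start], {0: None}
    def free(q):                                    # point outside every circular obstacle
        return all(math.dist(q, c) > r for c, r in obstacles)
    for _ in range(iters):
        # Sample a configuration, with a small bias toward the goal.
        sample = goal if random.random() < 0.05 else (random.uniform(0, 10), random.uniform(0, 10))
        near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        p = nodes[near]
        d = math.dist(p, sample)
        if d == 0:
            continue
        new = (p[0] + step * (sample[0] - p[0]) / d, p[1] + step * (sample[1] - p[1]) / d)
        if not free(new):
            continue
        nodes.append(new); parent[len(nodes) - 1] = near
        if math.dist(new, goal) < goal_tol:         # close enough: read the path off the tree
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i]); i = parent[i]
            return path[::-1]
    return None

obstacles = [((5.0, 5.0), 1.5)]                     # hypothetical obstacle
print(rrt((1.0, 1.0), (9.0, 9.0), obstacles))
```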
List of notable algorithms
A*
D*
Rapidly-exploring random tree
Probabilistic roadmap
Completeness and performance
A motion planner is said to be complete if the planner in finite time either produces a solution or correctly reports that there is none. Most complete algorithms are geometry-based. The performance of a complete planner is assessed by its computational complexity. When proving this property mathematically, one has to make sure that it happens in finite time and not just in the asymptotic limit. This is especially problematic if infinite sequences (that converge only in the limiting case) occur during a specific proving technique, since then, theoretically, the algorithm will never stop. Intuitive "tricks" (often based on induction) are typically mistakenly thought to converge, which they do only in the infinite limit. In other words, the solution exists, but the planner will never report it. This property is therefore related to Turing completeness and serves in most cases as a theoretical underpinning/guidance. Planners based on a brute force approach are always complete, but are only realizable for finite and discrete setups.
In practice, the termination of the algorithm can always be guaranteed by using a counter that allows only a maximum number of iterations and then always stops, with or without a solution. For realtime systems, this is typically achieved by using a watchdog timer that will simply kill the process. The watchdog has to be independent of all processes (typically realized by low-level interrupt routines). The asymptotic case described in the previous paragraph, however, will not be reached in this way. The planner will report the best solution it has found so far (which is better than nothing) or none, but it cannot correctly report that there is none. All realizations including a watchdog are always incomplete (except when all cases can be evaluated in finite time).
Completeness can only be provided by a very rigorous mathematical correctness proof (often aided by tools and graph based methods) and should only be done by specialized experts if the application includes safety content. On the other hand, disproving completeness is easy, since one just needs to find one infinite loop or one wrong result returned. Formal Verification/Correctness of algorithms is a research field on its own. The correct setup of these test cases is a highly sophisticated task.
Resolution completeness is the property that the planner is guaranteed to find a path if the resolution of an underlying grid is fine enough. Most resolution complete planners are grid-based or interval-based. The computational complexity of resolution complete planners is dependent on the number of points in the underlying grid, which is O(1/h^d), where h is the resolution (the length of one side of a grid cell) and d is the configuration space dimension.
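To illustrate the O(1/h^d) growth, a short calculation for a hypothetical unit-volume configuration space:

```python
# Number of grid points needed for a unit-volume configuration space,
# illustrating the O(1/h^d) growth quoted above (h = cell side, d = dimension).
for d in (2, 3, 6, 12):
    for h in (0.1, 0.01):
        print(f"d={d:2d}, h={h}: about {(1.0 / h) ** d:.3g} grid points")
```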
Probabilistic completeness is the property that as more "work" is performed, the probability that the planner fails to find a path, if one exists, asymptotically approaches zero. Several sample-based methods are probabilistically complete. The performance of a probabilistically complete planner is measured by the rate of convergence. For practical applications, one usually uses this property, since it allows setting up the time-out for the watchdog based on an average convergence time.
Incomplete planners do not always produce a feasible path when one exists (see first paragraph). Sometimes incomplete planners do work well in practice, since they always stop after a guaranteed time and allow other routines to take over.
Problem variants
Many algorithms have been developed to handle variants of this basic problem.
Differential constraints
Holonomic
Manipulator arms (with dynamics)
Nonholonomic
Drones
Cars
Unicycles
Planes
Acceleration bounded systems
Moving obstacles (time cannot go backward)
Bevel-tip steerable needle
Differential drive robots
Optimality constraints
Hybrid systems
Hybrid systems are those that mix discrete and continuous behavior. Examples of such systems are:
Robotic manipulation
Mechanical assembly
Legged robot locomotion
Reconfigurable robots
Uncertainty
Motion uncertainty
Missing information
Active sensing
Sensorless planning
Networked control systems
Environmental constraints
Maps of dynamics
Applications
Robot navigation
Automation
The driverless car
Robotic surgery
Digital character animation
Protein folding
Safety and accessibility in computer-aided architectural design
See also
Moving sofa problem - mathematical problem of finding the largest two-dimensional shape that can be maneuvered around a corner
Gimbal lock – similar traditional issue in mechanical engineering
Kinodynamic planning
Mountain climbing problem
OMPL - The Open Motion Planning Library
Pathfinding
Pebble motion problems – multi-robot motion planning
Shortest path problem
Velocity obstacle
References
Further reading
Planning Algorithms, Steven M. LaValle, 2006, Cambridge University Press.
Principles of Robot Motion: Theory, Algorithms, and Implementation, H. Choset, W. Burgard, S. Hutchinson, G. Kantor, L. E. Kavraki, K. Lynch, and S. Thrun, MIT Press, April 2005.
Chapter 13: Robot Motion Planning: pp. 267–290.
External links
"Open Robotics Automation Virtual Environment", http://openrave.org/
Jean-Claude Latombe talks about his work with robots and motion planning, 5 April 2000
"Open Motion Planning Library (OMPL)", http://ompl.kavrakilab.org
"Motion Strategy Library", http://msl.cs.uiuc.edu/msl/
"Motion Planning Kit", https://ai.stanford.edu/~mitul/mpk
"Simox", http://simox.sourceforge.net
"Robot Motion Planning and Control", http://www.laas.fr/%7Ejpl/book.html
Robot kinematics
Theoretical computer science
Automated planning and scheduling | Motion planning | [
"Mathematics",
"Engineering"
] | 3,521 | [
"Theoretical computer science",
"Applied mathematics",
"Robotics engineering",
"Robot kinematics"
] |
6,000,466 | https://en.wikipedia.org/wiki/Tisserand%27s%20parameter | Tisserand's parameter (or Tisserand's invariant) is a number calculated from several orbital elements (semi-major axis, orbital eccentricity, and inclination) of a relatively small object and a larger "perturbing body". It is used to distinguish different kinds of orbits. The term is named after French astronomer Félix Tisserand who derived it and applies to restricted three-body problems in which the three objects all differ greatly in mass.
Definition
For a small body with semi-major axis a, orbital eccentricity e, and orbital inclination i, relative to the orbit of a perturbing larger body with semimajor axis aP, the parameter is defined as follows:
TP = aP/a + 2 cos i √[(a/aP)(1 − e²)]
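A small sketch of the calculation, using the symbols defined above (the orbital elements in the example below are hypothetical):

```python
import math

def tisserand(a, e, i_deg, a_p):
    """Tisserand's parameter of a small body (a, e, i) with respect to a perturber of semi-major axis a_p."""
    i = math.radians(i_deg)
    return a_p / a + 2.0 * math.cos(i) * math.sqrt((a / a_p) * (1.0 - e**2))

# Hypothetical orbit evaluated with respect to Jupiter (a_p = 5.2 AU):
print(tisserand(a=3.5, e=0.3, i_deg=10.0, a_p=5.2))
```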
Tisserand invariant conservation
In the three-body problem, the quasi-conservation of Tisserand's invariant is derived as the limit of the Jacobi integral away from the main two bodies (usually the star and planet). Numerical simulations show that the Tisserand invariant of orbit-crossing bodies is conserved in the three-body problem on Gigayear timescales.
Applications
The Tisserand parameter's conservation was originally used by Tisserand to determine whether or not an observed orbiting body is the same as one previously observed. This is usually known as the Tisserand's criterion.
Orbit classification
The value of the Tisserand parameter with respect to the planet that most perturbs a small body in the solar system can be used to delineate groups of objects that may have similar origins.
TJ, Tisserand's parameter with respect to Jupiter as perturbing body, is frequently used to distinguish asteroids (typically TJ > 3) from Jupiter-family comets (typically 2 < TJ < 3).
The minor planet group of damocloids are defined by a Jupiter Tisserand's parameter of 2 or less (TJ ≤ 2).
TN, Tisserand's parameter with respect to Neptune, has been suggested to distinguish near-scattered (affected by Neptune) from extended-scattered trans-Neptunian objects (not affected by Neptune; e.g. 90377 Sedna).
TN, Tisserand's parameter with respect to Neptune may also be used to distinguish Neptune-crossing trans-neptunian objects that may be injected onto retrograde and polar Centaur orbits () and those that may be injected onto prograde Centaur orbits ().
Other uses
The quasi-conservation of Tisserand's parameter constrains the orbits attainable using gravity assist for outer Solar System exploration.
Tisserand's parameter could be used to infer the presence of an intermediate-mass black hole at the center of the Milky Way using the motions of orbiting stars.
Related notions
The parameter is derived from one of the so-called Delaunay standard variables, used to study the perturbed Hamiltonian in a three-body system. Ignoring higher-order perturbation terms, the following value is conserved:
√[a(1 − e²)] cos i
Consequently, perturbations may lead to the resonance between the orbital inclination and eccentricity, known as Kozai resonance. Near-circular, highly inclined orbits can thus become very eccentric in exchange for lower inclination. For example, such a mechanism can produce sungrazing comets, because a large eccentricity with a constant semimajor axis results in a small perihelion.
See also
Tisserand's relation for the derivation and the detailed assumptions
References
External links
David Jewitt's page on Tisserand's parameter
Tisserand criterion
Orbits
Equations of astronomy | Tisserand's parameter | [
"Physics",
"Astronomy"
] | 712 | [
"Concepts in astronomy",
"Equations of astronomy"
] |
6,005,402 | https://en.wikipedia.org/wiki/Amorphous%20magnet | In physics, amorphous magnet refers to a magnet made from amorphous solids. Below a certain temperature, these magnets present permanent magnetic phases produced by randomly located magnetic moments. Three common types of amorphous magnetic phases are asperomagnetism, speromagnetism and sperimagnetism, which correspond to ferromagnetism, antiferromagnetism and ferrimagnetism, respectively, of crystalline solids. Spin glass models can present these amorphous types of magnetism. Due to random frustration, amorphous magnets possess many nearly degenerate ground states.
The terms for the amorphous magnetic phases were coined by Michael Coey in the 1970s. The Greek root spero/speri means 'to scatter'.
Phases
Single species
Asperomagnetism
Asperomagnetism is the equivalent of ferromagnetism for a disordered system with random magnetic moments. It is defined by short range correlations of locked magnetic moments within small noncrystalline regions, with average long range correlations. Asperomagnets possess a permanent net magnetic moment.
Examples of asperomagnets are amorphous YFe3 and DyNi3.
Speromagnetism
Speromagnetism is the equivalent of antiferromagnetism for a disordered system with random magnetic moments. It is defined by short range correlations of locked magnetic moments within small noncrystalline regions, without average long range correlations. Speromagnets do not have a net magnetic moment.
An example of a solid presenting speromagnetism is amorphous YFe2; the phase can be detected using Mössbauer spectroscopy.
Multiple species
Sperimagnetism
Sperimagnetism is the equivalent of ferrimagnetism for a disordered system with two or more species of magnetic moments, with at least one species locked in random magnetic moments. Sperimagnets possess a permanent net magnetic moment. When all species are the same, this phase is equivalent to asperomagnetism.
Notes
References
Quantum phases
Magnetic ordering
Amorphous solids | Amorphous magnet | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 440 | [
"Quantum phases",
"Unsolved problems in physics",
"Quantum mechanics",
"Phases of matter",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic ordering",
"Condensed matter physics",
"Amorphous solids",
"Matter"
] |
6,005,413 | https://en.wikipedia.org/wiki/Helimagnetism | Helimagnetism is a form of magnetic ordering where spins of neighbouring magnetic moments arrange themselves in a spiral or helical pattern, with a characteristic turn angle of somewhere between 0 and 180 degrees. It results from the competition between ferromagnetic and antiferromagnetic exchange interactions. It is possible to view ferromagnetism and antiferromagnetism as helimagnetic structures with characteristic turn angles of 0 and 180 degrees respectively. Helimagnetic order breaks spatial inversion symmetry, as it can be either left-handed or right-handed in nature.
Strictly speaking, helimagnets have no permanent magnetic moment, and as such are sometimes considered a complicated type of antiferromagnet. This distinguishes helimagnets from conical magnets (e.g. holmium below 20 K), which have spiral modulation in addition to a permanent magnetic moment. Helimagnets can be characterized by the distance it takes for the spiral to complete one turn. In analogy to the pitch of screw thread, the period of repetition is known as the "pitch" of the helimagnet. If the spiral's period is some rational multiple of the crystal's unit cell, the structure is commensurate, like the structure originally proposed for MnO2. On the other hand, if the multiple is irrational, the magnetism is incommensurate, like the updated MnO2 structure.
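As a toy illustration of the turn angle and pitch discussed above (not tied to any specific material named here), the snippet below generates in-plane spins that advance by a fixed turn angle from site to site; a hypothetical 30° turn angle gives a pitch of 12 sites.

```python
import math

def helical_spins(n_sites, turn_angle_deg):
    """Unit spins rotating in the plane by a fixed turn angle per site; pitch = 360 / turn_angle sites."""
    theta = math.radians(turn_angle_deg)
    return [(math.cos(k * theta), math.sin(k * theta), 0.0) for k in range(n_sites)]

spins = helical_spins(12, turn_angle_deg=30.0)   # hypothetical 30-degree turn angle
print(spins[0], spins[6])                        # site 6 points (almost exactly) opposite to site 0
```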
Helimagnetism was first proposed in 1959, as an explanation of the magnetic structure of manganese dioxide. Initially applied to neutron diffraction, it has since been observed more directly by Lorentz electron microscopy. Some helimagnetic structures are reported to be stable up to room temperature. Like how ordinary ferromagnets have domain walls that separate individual magnetic domains, helimagnets have their own classes of domain walls which are characterized by topological charge.
Many helimagnets have a chiral cubic structure, such as the FeSi (B20) crystal structure type. In these materials, the combination of ferromagnetic exchange and the Dzyaloshinskii–Moriya interaction leads to helixes with relatively long periods. Since the crystal structure is noncentrosymmetric even in the paramagnetic state, the magnetic transition to a helimagnetic state does not break inversion symmetry, and the direction of the spiral is locked to the crystal structure.
On the other hand, helimagnetism in other materials can also be based on frustrated magnetism or the RKKY interaction. The result is that centrosymmetric structures like the MnP-type (B31) compounds can also exhibit double-helix type helimagnetism where both left and right handed spirals coexist. For these itinerant helimagnets, the direction of the helicity can be controlled by applied electric currents and magnetic fields.
See also
Antisymmetric exchange
Magnetic skyrmion
Ferromagnetic resonance
References
Magnetic ordering
Liquid helium | Helimagnetism | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 616 | [
"Magnetic ordering",
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
6,006,525 | https://en.wikipedia.org/wiki/Niobium%20nitride | Niobium nitride is a compound of niobium and nitrogen (nitride) with the chemical formula NbN. At low temperatures (about 16 K) NbN becomes a superconductor, and is used in detectors for infrared light.
Uses
Niobium nitride's main use is as a superconductor.
Detectors based on it can detect a single photon in the 1-10 micrometer section of the infrared spectrum, which is important for astronomy and telecommunications. It can detect changes up to 25 gigahertz.
Superconducting NbN nanowires can be used in particle detectors with high magnetic fields.
Niobium nitride is also used in absorbing anti-reflective coatings.
In 2015, it was reported that Panasonic Corp. has developed a photocatalyst based on niobium nitride that can absorb 57% of sunlight to support the decomposition of water to produce hydrogen gas as fuel for electrochemical fuel cells.
References
Niobium(III) compounds
Nitrides
Superconductors
Rock salt crystal structure | Niobium nitride | [
"Chemistry",
"Materials_science"
] | 224 | [
"Superconductivity",
"Superconductors"
] |
18,250,710 | https://en.wikipedia.org/wiki/Benexate | Benexate (BEX) is an anti-ulcer agent used in the treatment of acid-related disorders. It is unique in its inability to form salts that are both non-bitter and soluble.
Medical uses
Benexate is approved for the treatment of gastric ulcer in Japan.
Mechanism of action
The mechanism of action of benexate involves promotion of prostaglandin synthesis, protein secretion, and blood flow stimulation in the gastrointestinal tract.
See also
Famotidine
Powder diffraction
Sugar substitute
Crystal engineering
References
Further reading
Drugs acting on the gastrointestinal system and metabolism
Guanidines
Salicylate esters
Salicylyl esters
Cyclohexanes | Benexate | [
"Chemistry"
] | 148 | [
"Guanidines",
"Functional groups"
] |
18,250,901 | https://en.wikipedia.org/wiki/Mepitiostane | Mepitiostane, sold under the brand name Thioderon, is an orally active antiestrogen and anabolic–androgenic steroid (AAS) of the dihydrotestosterone (DHT) group which is marketed in Japan as an antineoplastic agent for the treatment of breast cancer. It is a prodrug of epitiostanol. The drug was patented and described in 1968.
Medical uses
Mepitiostane is used as an antiestrogen and antineoplastic agent in the treatment of breast cancer. It is also used as an AAS in the treatment of anemia of renal failure. A series of case reports have found it to be effective in the treatment of estrogen receptor (ER)-dependent meningiomas as well.
Side effects
Mepitiostane shows a high rate of virilizing side effects such as acne, hirsutism, and voice changes in women.
Pharmacology
Pharmacodynamics
Mepitiostane is described as similar to tamoxifen as an antiestrogen, and through its active form epitiostanol, binds directly to and antagonizes the ER. It is also an AAS.
Pharmacokinetics
Mepitiostane is converted into epitiostanol in the body.
Chemistry
Mepitiostane, also known as epitiostanol 17β-(1-methoxy)cyclopentyl ether, is a synthetic androstane steroid and a derivative of DHT. It is the C17β (1-methoxy)cyclopentyl ether of epitiostanol, which itself is 2α,3α-epithio-DHT or 2α,3α-epithio-5α-androstan-17β-ol. A related AAS is methylepitiostanol (17α-methylepitiostanol), which is an orally active variant of epitiostanol similarly to mepitiostane, though also has a risk of hepatotoxicity.
Society and culture
Generic names
Mepitiostane is the generic name of the drug and its and .
References
Androgen ethers
Anabolic–androgenic steroids
Androstanes
Antiestrogens
Cyclopentanes
Hormonal antineoplastic drugs
Prodrugs
Episulfides | Mepitiostane | [
"Chemistry"
] | 517 | [
"Chemicals in medicine",
"Prodrugs"
] |
18,250,963 | https://en.wikipedia.org/wiki/Sterilant%20gas%20monitoring | Sterilant gas monitoring is the detection of hazardous gases used by health care and other facilities to sterilize medical supplies that cannot be sterilized by heat or steam methods. The current FDA approved sterilant gases are ethylene oxide, hydrogen peroxide and ozone. Other liquid sterilants, such as peracetic acid, may also be used for sterilization and may raise similar occupational health issues. Sterilization means the complete destruction of all biological life (including viruses and sporoidal forms of bacteria), and sterilization efficacy is typically considered adequate if less than one in a million microbes remain viable.
Hazards of sterilant gases
Since sterilant gases are selected to destroy a wide range of biological life forms, any gas which is suitable for sterilization will present a hazard to personnel exposed to it. NIOSH's IDLH (immediately dangerous to life and health) values for the three sterilant gases are 800 ppm (ethylene oxide), 75 ppm (hydrogen peroxide) and 5 ppm (ozone). For comparison, the IDLH of cyanide gas (hydrogen cyanide) is 50 ppm. The OSHA PEL (permissible exposure limit) will be considerably lower than this; 1 ppm for ethylene oxide, or 5 ppm for a 15 minute short-term exposure limit. Thus exposure to even low levels of sterilant gas should not be treated casually and most facilities go to great lengths to adequately protect their employees. In addition to toxicity, ethylene oxide is flammable (at concentrations above 3%) and ozone is damaging to equipment not designed to resist it.
Sterilizer manufacturers go to great lengths to make their products as safe as possible but, as with any mechanical device, they can and sometimes do fail, and leaks have been reported. The odor thresholds for these gases are above the PELs; for ethylene oxide the threshold is 500 ppm, approaching the IDLH. Odor is thus inadequate as a monitoring technique. Continuous gas monitors are used as part of an overall safety program to provide a prompt alert to nearby workers in the event that there is a leak of the sterilant gas.
Monitoring equipment
The monitor alarms are typically set to warn if the concentrations exceed the OSHA permissible exposure limits (PELs), 1.0 ppm for ethylene oxide and 1.0 and 0.1 ppm for hydrogen peroxide and ozone respectively. The PELs are calculated as 8 hour time weighted average values (i.e. the average exposure over a typical shift).
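Since the PELs are 8 hour time-weighted averages, the calculation can be sketched as follows; the exposure intervals in the example are hypothetical.

```python
def eight_hour_twa(intervals):
    """Time-weighted average over an 8-hour shift.
    intervals: list of (duration_hours, concentration_ppm); unlisted time counts as zero exposure."""
    return sum(t * c for t, c in intervals) / 8.0

# Hypothetical day: 0.5 h at 3 ppm near a leaking sterilizer door, 7.5 h at 0.05 ppm background.
print(eight_hour_twa([(0.5, 3.0), (7.5, 0.05)]))   # about 0.23 ppm, below the 1 ppm ethylene oxide PEL
```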
References
Sterilization (microbiology)
Occupational safety and health | Sterilant gas monitoring | [
"Chemistry",
"Biology"
] | 550 | [
"Microbiology techniques",
"Sterilization (microbiology)"
] |
18,253,221 | https://en.wikipedia.org/wiki/Brun%E2%80%93Titchmarsh%20theorem | In analytic number theory, the Brun–Titchmarsh theorem, named after Viggo Brun and Edward Charles Titchmarsh, is an upper bound on the distribution of prime numbers in arithmetic progression.
Statement
Let π(x; q, a) count the number of primes p congruent to a modulo q with p ≤ x. Then
π(x; q, a) ≤ 2x / (φ(q) log(x/q))
for all q < x.
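A quick numerical sanity check of the bound for small parameters can be sketched as follows; the choice of x, q and a below is arbitrary (with gcd(a, q) = 1).

```python
from math import log

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [p for p in range(2, n + 1) if sieve[p]]

def euler_phi(q):
    """Euler's totient function by trial factorisation."""
    result, n, p = q, q, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

x, q, a = 10_000, 7, 3                                   # hypothetical parameters
pi_x_q_a = sum(1 for p in primes_up_to(x) if p % q == a)
bound = 2 * x / (euler_phi(q) * log(x / q))
print(pi_x_q_a, bound)                                    # the count should not exceed the bound
```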
History
The result was proven by sieve methods by Montgomery and Vaughan; an earlier result of Brun and Titchmarsh obtained a weaker version of this inequality with an additional multiplicative factor of 1 + o(1).
Improvements
If q is relatively small, e.g., , then there exists a better bound:
This is due to Y. Motohashi (1973). He used a bilinear structure in the error term in the Selberg sieve, discovered by himself. Later this idea of exploiting structures in sieving errors developed into a major method in analytic number theory, due to H. Iwaniec's extension to the combinatorial sieve.
Comparison with Dirichlet's theorem
By contrast, Dirichlet's theorem on arithmetic progressions gives an asymptotic result, which may be expressed in the form
π(x; q, a) ~ x / (φ(q) log x),
but this can only be proved to hold for the more restricted range q < (log x)^c for constant c: this is the Siegel–Walfisz theorem.
References
Theorems in analytic number theory
Theorems about prime numbers | Brun–Titchmarsh theorem | [
"Mathematics"
] | 293 | [
"Theorems in mathematical analysis",
"Theorems in number theory",
"Theorems in analytic number theory",
"Theorems about prime numbers"
] |
18,253,454 | https://en.wikipedia.org/wiki/Theory%20of%20conjoint%20measurement | The theory of conjoint measurement (also known as conjoint measurement or additive conjoint measurement) is a general, formal theory of continuous quantity. It was independently discovered by the French economist Gérard Debreu (1960) and by the American mathematical psychologist R. Duncan Luce and statistician John Tukey .
The theory concerns the situation where at least two natural attributes, A and X, non-interactively relate to a third attribute, P. It is not required that A, X or P are known to be quantities. Via specific relations between the levels of P, it can be established that P, A and X are continuous quantities. Hence the theory of conjoint measurement can be used to quantify attributes in empirical circumstances where it is not possible to combine the levels of the attributes using a side-by-side operation or concatenation. The quantification of psychological attributes such as attitudes, cognitive abilities and utility is therefore logically plausible. This means that the scientific measurement of psychological attributes is possible. That is, like physical quantities, a magnitude of a psychological quantity may possibly be expressed as the product of a real number and a unit magnitude.
Application of the theory of conjoint measurement in psychology, however, has been limited. It has been argued that this is due to the high level of formal mathematics involved and to the theory's inability to account for the "noisy" data typically discovered in psychological research. It has been argued that the Rasch model is a stochastic variant of the theory of conjoint measurement, but this has been disputed (e.g., Karabatsos, 2001; Kyngdon, 2008). Order restricted methods for conducting probabilistic tests of the cancellation axioms of conjoint measurement have been developed in the past decade (e.g., Karabatsos, 2001; Davis-Stober, 2009).
The theory of conjoint measurement is (different but) related to conjoint analysis, which is a statistical-experiments methodology employed in marketing to estimate the parameters of additive utility functions. Different multi-attribute stimuli are presented to respondents, and different methods are used to measure their preferences about the presented stimuli. The coefficients of the utility function are estimated using alternative regression-based tools.
Historical overview
In the 1930s, the British Association for the Advancement of Science established the Ferguson Committee to investigate the possibility of psychological attributes being measured scientifically. The British physicist and measurement theorist Norman Robert Campbell was an influential member of the committee. In its Final Report (Ferguson, et al., 1940), Campbell and the Committee concluded that because psychological attributes were not capable of sustaining concatenation operations, such attributes could not be continuous quantities. Therefore, they could not be measured scientifically. This had important ramifications for psychology, the most significant of these being the creation in 1946 of the operational theory of measurement by Harvard psychologist Stanley Smith Stevens. Stevens' non-scientific theory of measurement is widely held as definitive in psychology and the behavioural sciences generally .
Whilst the German mathematician Otto Hölder (1901) anticipated features of the theory of conjoint measurement, it was not until the publication of Luce & Tukey's seminal 1964 paper that the theory received its first complete exposition. Luce & Tukey's presentation was algebraic and is therefore considered more general than Debreu's (1960) topological work, the latter being a special case of the former. In the first article of the inaugural issue of the Journal of Mathematical Psychology, Luce & Tukey proved that via the theory of conjoint measurement, attributes not capable of concatenation could be quantified. N.R. Campbell and the Ferguson Committee were thus proven wrong. That a given psychological attribute is a continuous quantity is a logically coherent and empirically testable hypothesis.
Appearing in the next issue of the same journal were important papers by Dana Scott (1964), who proposed a hierarchy of cancellation conditions for the indirect testing of the solvability and Archimedean axioms, and David Krantz (1964) who connected the Luce & Tukey work to that of Hölder (1901).
Work soon focused on extending the theory of conjoint measurement to involve more than just two attributes. and Amos Tversky (1967) developed what became known as polynomial conjoint measurement, with providing a schema with which to construct conjoint measurement structures of three or more attributes. Later, the theory of conjoint measurement (in its two variable, polynomial and n-component forms) received a thorough and highly technical treatment with the publication of the first volume of Foundations of Measurement, which Krantz, Luce, Tversky and philosopher Patrick Suppes cowrote .
Shortly after the publication of Krantz, et al., (1971), work focused upon developing an "error theory" for the theory of conjoint measurement. Studies were conducted into the number of conjoint arrays that supported only single cancellation and both single and double cancellation. Later enumeration studies focused on polynomial conjoint measurement. These studies found that it is highly unlikely that the axioms of the theory of conjoint measurement are satisfied at random, provided that more than three levels of at least one of the component attributes has been identified.
Joel Michell (1988) later identified that the "no test" class of tests of the double cancellation axiom was empty. Any instance of double cancellation is thus either an acceptance or a rejection of the axiom. Michell also wrote at this time a non-technical introduction to the theory of conjoint measurement which also contained a schema for deriving higher order cancellation conditions based upon Scott's (1964) work. Using Michell's schema, Ben Richards (Kyngdon & Richards, 2007) discovered that some instances of the triple cancellation axiom are "incoherent" as they contradict the single cancellation axiom. Moreover, he identified many instances of the triple cancellation which are trivially true if double cancellation is supported.
The axioms of the theory of conjoint measurement are not stochastic, and given the ordinal constraints placed on data by the cancellation axioms, order restricted inference methodology must be used. George Karabatsos and his associates (Karabatsos, 2001) developed a Bayesian Markov chain Monte Carlo methodology for psychometric applications. Karabatsos & Ullrich 2002 demonstrated how this framework could be extended to polynomial conjoint structures. Karabatsos (2005) generalised this work with his multinomial Dirichlet framework, which enabled the probabilistic testing of many non-stochastic theories of mathematical psychology. More recently, Clintin Davis-Stober (2009) developed a frequentist framework for order restricted inference that can also be used to test the cancellation axioms.
Perhaps the most notable (Kyngdon, 2011) use of the theory of conjoint measurement was in the prospect theory proposed by the Israeli – American psychologists Daniel Kahneman and Amos Tversky (Kahneman & Tversky, 1979). Prospect theory was a theory of decision making under risk and uncertainty which accounted for choice behaviour such as the Allais Paradox. David Krantz wrote the formal proof to prospect theory using the theory of conjoint measurement. In 2002, Kahneman received the Nobel Memorial Prize in Economics for prospect theory (Birnbaum, 2008).
Measurement and quantification
The classical / standard definition of measurement
In physics and metrology, the standard definition of measurement is the estimation of the ratio between a magnitude of a continuous quantity and a unit magnitude of the same kind (de Boer, 1994/95; Emerson, 2008). For example, the statement "Peter's hallway is 4 m long" expresses a measurement of a hitherto unknown length magnitude (the hallway's length) as the ratio of the length of the hallway to the unit (the metre in this case). The number 4 is a real number in the strict mathematical sense of this term.
For some other quantities, it is ratios between attribute differences that are invariant. Consider temperature, for example. In the familiar everyday instances, temperature is measured using instruments calibrated in either the Fahrenheit or Celsius scales. What are really being measured with such instruments are the magnitudes of temperature differences. For example, Anders Celsius defined the unit of the Celsius scale to be 1/100 of the difference in temperature between the freezing and boiling points of water at sea level. A midday temperature measurement of 20 degrees Celsius is simply the difference of the midday temperature and the temperature of the freezing water divided by the difference of the Celsius unit and the temperature of the freezing water.
Formally expressed, a scientific measurement is:
Q = r × [Q]
where Q is the magnitude of the quantity, r is a real number and [Q] is a unit magnitude of the same kind.
Extensive and intensive quantity
Length is a quantity for which natural concatenation operations exist. That is, we can combine in a side-by-side fashion lengths of rigid steel rods, for example, such that the additive relations between lengths is readily observed. If we have four 1 m lengths of such rods, we can place them end to end to produce a length of 4 m. Quantities capable of concatenation are known as extensive quantities and include mass, time, electrical resistance and plane angle. These are known as base quantities in physics and metrology.
Temperature is a quantity for which there is an absence of concatenation operations. We cannot pour a volume of water of temperature 40 °C into another bucket of water at 20 °C and expect to have a volume of water with a temperature of 60 °C. Temperature is therefore an intensive quantity.
Psychological attributes, like temperature, are considered to be intensive as no way of concatenating such attributes has been found. But this is not to say that such attributes are not quantifiable. The theory of conjoint measurement provides a theoretical means of doing this.
Theory
Consider two natural attributes A, and X. It is not known that either A or X is a continuous quantity, or that both of them are. Let a, b, and c represent three independent, identifiable levels of A; and let x, y and z represent three independent, identifiable levels of X. A third attribute, P, consists of the nine ordered pairs of levels of A and X. That is, (a, x), (b, y),..., (c, z) (see Figure 1). The quantification of A, X and P depends upon the behaviour of the relation holding upon the levels of P. These relations are presented as axioms in the theory of conjoint measurement.
Single cancellation or independence axiom
The single cancellation axiom is as follows. The relation upon P satisfies single cancellation if and only if for all a and b in A, and x in X, (a, x) > (b, x) is implied for every w in X such that (a, w) > (b, w). Similarly, for all x and y in X and a in A, (a, x) > (a, y) is implied for every d in A such that (d, x) > (d, y). What this means is that if any two levels, a, b, are ordered, then this order holds irrespective of each and every level of X. The same holds for any two levels, x and y of X with respect to each and every level of A.
Single cancellation is so-called because a single common factor of two levels of P cancel out to leave the same ordinal relationship holding on the remaining elements. For example, a cancels out of the inequality (a, x) > (a, y) as it is common to both sides, leaving x > y. Krantz, et al., (1971) originally called this axiom independence, as the ordinal relation between two levels of an attribute is independent of any and all levels of the other attribute. However, given that the term independence causes confusion with statistical concepts of independence, single cancellation is the preferable term. Figure One is a graphical representation of one instance of single cancellation.
Satisfaction of the single cancellation axiom is necessary, but not sufficient, for the quantification of attributes A and X. It only demonstrates that the levels of A, X and P are ordered. Informally, single cancellation does not sufficiently constrain the order upon the levels of P to quantify A and X. For example, consider the ordered pairs (a, x), (b, x) and (b, y). If single cancellation holds then (a, x) > (b, x) and (b, x) > (b, y). Hence via transitivity (a, x) > (b, y). The relation between these latter two ordered pairs, informally a left-leaning diagonal, is determined by the satisfaction of the single cancellation axiom, as are all the "left leaning diagonal" relations upon P.
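A minimal sketch of how single cancellation can be checked on a finite conjoint array is given below; the 3 × 3 array of P values is hypothetical, with rows as levels of A and columns as levels of X, and ties are allowed.

```python
from itertools import combinations

def single_cancellation(P):
    """True if, for every pair of rows, the order of P is the same in every column,
    and, for every pair of columns, the order of P is the same in every row."""
    rows, cols = len(P), len(P[0])
    for r1, r2 in combinations(range(rows), 2):
        signs = {(P[r1][c] > P[r2][c]) - (P[r1][c] < P[r2][c]) for c in range(cols)}
        if 1 in signs and -1 in signs:      # the two rows swap order across columns
            return False
    for c1, c2 in combinations(range(cols), 2):
        signs = {(P[r][c1] > P[r][c2]) - (P[r][c1] < P[r][c2]) for r in range(rows)}
        if 1 in signs and -1 in signs:      # the two columns swap order across rows
            return False
    return True

P = [[2, 5, 8],    # hypothetical levels of P for (a, x), (a, y), (a, z)
     [3, 6, 9],    # (b, x), (b, y), (b, z)
     [4, 7, 10]]   # (c, x), (c, y), (c, z)
print(single_cancellation(P))   # True for this additive array
```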
Double cancellation axiom
Single cancellation does not determine the order of the "right-leaning diagonal" relations upon P. Even though by transitivity and single cancellation it was established that (a, x) > (b, y), the relationship between (a, y) and (b, x) remains undetermined. It could be that either (b, x) > (a, y) or (a, y) > (b, x) and such ambiguity cannot remain unresolved.
The double cancellation axiom concerns a class of such relations upon P in which the common terms of two antecedent inequalities cancel out to produce a third inequality. Consider the instance of double cancellation graphically represented by Figure Two. The antecedent inequalities of this particular instance of double cancellation are:
(a, y) ≥ (b, x)
and
(b, z) ≥ (c, y).
Given that:
a + y ≥ b + x is true if and only if (a, y) ≥ (b, x), and
b + z ≥ c + y is true if and only if (b, z) ≥ (c, y), it follows that:
a + y + b + z ≥ b + x + c + y.
Cancelling the common terms results in:
a + z ≥ c + x, that is, (a, z) ≥ (c, x).
Hence double cancellation can only obtain when A and X are quantities.
Double cancellation is satisfied if and only if the consequent inequality does not contradict the antecedent inequalities. For example, if the consequent inequality above was:
(c, x) > (a, z),
or alternatively,
c + x > a + z,
then double cancellation would be violated and it could not be concluded that A and X are quantities.
Double cancellation concerns the behaviour of the "right leaning diagonal" relations on P as these are not logically entailed by single cancellation. It has been shown that when the levels of A and X approach infinity, the number of right leaning diagonal relations is half of the number of total relations upon P. Hence if A and X are quantities, half of the number of relations upon P are due to ordinal relations upon A and X and half are due to additive relations upon A and X.
The number of instances of double cancellation is contingent upon the number of levels identified for both A and X. If there are n levels of A and m of X, then the number of instances of double cancellation is n! × m!. Therefore, if n = m = 3, then 3! × 3! = 6 × 6 = 36 instances in total of double cancellation. However, all but 6 of these instances are trivially true if single cancellation is true, and if any one of these 6 instances is true, then all of them are true. One such instance is that shown in Figure Two; it is known as a Luce–Tukey instance of double cancellation.
If single cancellation has been tested upon a set of data first and is established, then only the Luce–Tukey instances of double cancellation need to be tested. For n levels of A and m of X, the number of Luce–Tukey double cancellation instances is . For example, if n = m = 4, then there are 16 such instances. If n = m = 5 then there are 100. The greater the number of levels in both A and X, the less probable it is that the cancellation axioms are satisfied at random (; ) and the more stringent test of quantity the application of conjoint measurement becomes.
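Double cancellation can likewise be tested directly on a finite array; the sketch below checks every instance on a hypothetical 3 × 3 array (rows as levels of A, columns as levels of X), not only the Luce–Tukey instances.

```python
from itertools import permutations

def double_cancellation(P):
    """Check double cancellation on a 3 x 3 conjoint array.
    For every assignment of rows to (a, b, c) and columns to (x, y, z):
    if P[a][y] >= P[b][x] and P[b][z] >= P[c][y], then P[a][z] >= P[c][x] must hold."""
    for a, b, c in permutations(range(3)):
        for x, y, z in permutations(range(3)):
            if P[a][y] >= P[b][x] and P[b][z] >= P[c][y] and not P[a][z] >= P[c][x]:
                return False
    return True

P = [[2, 5, 8],    # hypothetical additive array, same conventions as above
     [3, 6, 9],
     [4, 7, 10]]
print(double_cancellation(P))   # True: an additive array satisfies double cancellation
```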
Solvability and Archimedean axioms
The single and double cancellation axioms by themselves are not sufficient to establish continuous quantity. Other conditions must also be introduced to ensure continuity. These are the solvability and Archimedean conditions.
Solvability means that given any three of the elements a, b, x and y, the fourth exists such that the equation (a, x) = (b, y) is solved, hence the name of the condition. Solvability essentially is the requirement that each level of P has an element in A and an element in X. Solvability reveals something about the levels of A and X — they are either dense like the real numbers or equally spaced like the integers.
The Archimedean condition is as follows. Let I be a set of consecutive integers, either finite or infinite, positive or negative. The levels of A form a standard sequence if and only if there exists x and y in X where x ≠ y and for all integers i and i + 1 in I:
(ai, x) = (ai+1, y).
What this basically means is that if x is greater than y, for example, there are levels of A which can be found which makes two relevant ordered pairs, the levels of P, equal.
The Archimedean condition argues that there is no infinitely greatest level of P and hence there is no greatest level of either A or X. This condition is a definition of continuity given by the ancient Greek mathematician Archimedes who wrote that "Further, of unequal lines, unequal surfaces, and unequal solids, the greater exceeds the less by such a magnitude as, when added to itself, can be made to exceed any assigned magnitude among those which are comparable with one another" (On the Sphere and Cylinder, Book I, Assumption 5). Archimedes recognised that for any two magnitudes of a continuous quantity, one being lesser than the other, the lesser could be multiplied by a whole number such that it equalled the greater magnitude. Euclid stated the Archimedean condition as an axiom in Book V of the Elements, in which Euclid presented his theory of continuous quantity and measurement.
As they involve infinitistic concepts, the solvability and Archimedean axioms are not amenable to direct testing in any finite empirical situation. But this does not entail that these axioms cannot be empirically tested at all. Scott's (1964) finite set of cancellation conditions can be used to indirectly test these axioms; the extent of such testing being empirically determined. For example, if both A and X possess three levels, the highest order cancellation axiom within Scott's (1964) hierarchy that indirectly tests solvability and Archimedeaness is double cancellation. With four levels it is triple cancellation (Figure 3). If such tests are satisfied, the construction of standard sequences in differences upon A and X are possible. Hence these attributes may be dense as per the real numbers or equally spaced as per the integers . In other words, A and X are continuous quantities.
Relation to the scientific definition of measurement
Satisfaction of the conditions of conjoint measurement means that measurements of the levels of A and X can be expressed as either ratios between magnitudes or ratios between magnitude differences. It is most commonly interpreted as the latter, given that most behavioural scientists consider that their tests and surveys "measure" attributes on so-called "interval scales" . That is, they believe tests do not identify absolute zero levels of psychological attributes.
Formally, if P, A and X form an additive conjoint structure, then there exist functions φA from A and φX from X into the real numbers such that for a and b in A and x and y in X:
(a, x) > (b, y) if and only if φA(a) + φX(x) > φA(b) + φX(y).
If φ′A and φ′X are two other real valued functions satisfying the above expression, there exist α > 0 and real valued constants β1 and β2 satisfying:
φ′A = αφA + β1 and φ′X = αφX + β2.
That is, φA and φX are measurements of A and X unique up to affine transformation (i.e. each is an interval scale in Stevens' (1946) parlance). The mathematical proof of this result is given in Krantz, et al. (1971).
This means that the levels of A and X are magnitude differences measured relative to some kind of unit difference. Each level of P is a difference between the levels of A and X. However, it is not clear from the literature as to how a unit could be defined within an additive conjoint context. A scaling method for conjoint structures has been proposed, but it also did not discuss the unit.
The theory of conjoint measurement, however, is not restricted to the quantification of differences. If each level of P is a product of a level of A and a level of X, then P is another different quantity whose measurement is expressed as a magnitude of A per unit magnitude of X. For example, A consists of masses and X consists of volumes, then P consists of densities measured as mass per unit of volume. In such cases, it would appear that one level of A and one level of X must be identified as a tentative unit prior to the application of conjoint measurement.
If each level of P is the sum of a level of A and a level of X, then P is the same quantity as A and X. For example, A and X are lengths so hence must be P. All three must therefore be expressed in the same unit. In such cases, it would appear that a level of either A or X must be tentatively identified as the unit. Hence it would seem that application of conjoint measurement requires some prior descriptive theory of the relevant natural system.
Applications of conjoint measurement
Empirical applications of the theory of conjoint measurement have been sparse.
Several empirical evaluations of double cancellation have been conducted. Among these, Levelt, et al. (1972) evaluated the axiom in the psychophysics of binaural loudness and found that the double cancellation axiom was rejected. A later investigation replicated Levelt, et al.'s (1972) findings. It was subsequently observed that the evaluation of double cancellation involves considerable redundancy that complicates its empirical testing; the equivalent Thomsen condition axiom, which avoids this redundancy, was therefore evaluated instead, and the property was found to be supported in binaural loudness. A later summary of the literature noted that the evaluation of the Thomsen condition also involves an empirical challenge, remedied by the conjoint commutativity axiom, which is shown to be equivalent to the Thomsen condition. Conjoint commutativity was subsequently found to be supported for binaural loudness and brightness.
Michell (1990) applied the theory to L. L. Thurstone's (1927) theory of paired comparisons, multidimensional scaling, and Coombs' (1964) theory of unidimensional unfolding. He found support for the cancellation axioms only with Coombs' (1964) theory. However, the statistical techniques employed by Michell (1990) in testing Thurstone's theory and multidimensional scaling did not take into consideration the ordinal constraints imposed by the cancellation axioms.
Kyngdon (2006), Michell (1994) and a third study tested the cancellation axioms upon the interstimulus midpoint orders obtained by the use of Coombs' (1964) theory of unidimensional unfolding. In all three studies Coombs' theory was applied to a set of six statements. These authors found that the axioms were satisfied; however, these were applications biased towards a positive result. With six stimuli, the probability of an interstimulus midpoint order satisfying the double cancellation axioms at random is .5874 (Michell, 1994), so this is not an unlikely event. Kyngdon & Richards (2007) employed eight statements and found that the interstimulus midpoint orders rejected the double cancellation condition.
One study applied conjoint measurement to item response data from a convict parole questionnaire and to intelligence test data gathered from Danish troops. It found considerable violation of the cancellation axioms in the parole questionnaire data, but not in the intelligence test data. Moreover, the authors recorded the supposed "no test" instances of double cancellation. Interpreted correctly as instances in support of double cancellation (Michell, 1988), the results are better than the authors believed.
Another study applied conjoint measurement to performance on sequence completion tasks. The columns of the conjoint arrays (X) were defined by the demand placed upon working memory capacity through increasing numbers of working memory place keepers in letter series completion tasks. The rows were defined by levels of motivation (A), which consisted of different amounts of time available for completing the test. The data (P) consisted of completion times and the average number of series correct. Support was found for the cancellation axioms; however, the study was biased by the small size of the conjoint arrays (3 × 3 in size) and by statistical techniques that did not take into consideration the ordinal restrictions imposed by the cancellation axioms.
Kyngdon (2011) used Karabatsos's (2001) order-restricted inference framework to test a conjoint matrix of reading item response proportions (P), where examinee reading ability comprised the rows of the conjoint array (A) and the difficulty of the reading items formed the columns of the array (X). The levels of reading ability were identified via raw total test score and the levels of reading item difficulty were identified by the Lexile Framework for Reading. Kyngdon found that satisfaction of the cancellation axioms was obtained only through permutation of the matrix in a manner inconsistent with the putative Lexile measures of item difficulty. Kyngdon also tested simulated ability test response data using polynomial conjoint measurement. The data were generated using Humphry's extended frame of reference Rasch model. He found support for distributive, single and double cancellation consistent with a distributive polynomial conjoint structure in three variables.
See also
References
External links
Karabatsos' S-Plus programs for testing conjoint axioms
Birnbaum's FORTRAN MONANOVA program for testing additivity
Kyngdon's R programs for enumerating cancellation tests, testing axioms and prospect theory
R statistical computing software
Psychometrics
Latent variable models
Mathematical psychology | Theory of conjoint measurement | [
"Mathematics"
] | 5,465 | [
"Applied mathematics",
"Mathematical psychology"
] |
18,254,249 | https://en.wikipedia.org/wiki/Load%20factor%20%28electrical%29 | In electrical engineering the load factor is defined as the average load divided by the peak load in a specified time period. It is a measure of the utilization rate, or efficiency of electrical energy usage; a high load factor indicates that load is using the electric system more efficiently, whereas consumers or generators that underutilize the electric distribution will have a low load factor.
An example, using a large commercial electrical bill that states the peak demand (in kW), the energy use (in kWh), and the number of days in the billing cycle:

load factor = ( [ energy use / { peak demand × 24 hr/day } ] / number of days in billing cycle ) × 100%

For the bill in this example, this works out to 18.22%.
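As a rough illustration of the same calculation, the following Python sketch computes the load factor for a hypothetical bill; the figures (500 kW peak, 80,000 kWh, 30 days) are made-up example values, not the ones from the bill above.

```python
# Hypothetical example values (not from the article).
peak_demand_kw = 500.0     # highest demand recorded in the billing cycle, kW
energy_use_kwh = 80_000.0  # total energy consumed in the billing cycle, kWh
days_in_cycle = 30         # length of the billing cycle, days

# Average load is total energy divided by total hours in the period.
average_load_kw = energy_use_kwh / (days_in_cycle * 24)

# Load factor = average load / peak load, expressed as a percentage.
load_factor_pct = 100.0 * average_load_kw / peak_demand_kw
print(f"load factor = {load_factor_pct:.2f}%")  # about 22.22% for these numbers
```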
The load factor can be derived from the load profile of the specific device or system of devices. Its value is always less than one because maximum demand is never lower than average demand, since facilities rarely operate at full capacity for the duration of an entire 24-hour day. A high load factor means power usage is relatively constant. A low load factor shows that a high demand is set occasionally; to service that peak, capacity sits idle for long periods, thereby imposing higher costs on the system. Electrical rates are designed so that customers with a high load factor are charged less overall per kWh. This practice, along with others, is called load balancing or peak shaving.
The load factor is closely related to and often confused with the demand factor.
The major difference to note is that the denominator in the demand factor is fixed depending on the system. Because of this, the demand factor cannot be derived from the load profile but needs the addition of the full load of the system in question.
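To make the contrast concrete, here is a small Python sketch using the usual definition of demand factor (peak demand divided by the total connected load); all three figures are made-up example values, not taken from the article.

```python
# Hypothetical values (not from the article).
connected_load_kw = 1_000.0  # full (connected) load of the system -- the fixed denominator
peak_demand_kw = 500.0       # observed peak demand over the period
average_load_kw = 111.1      # observed average load over the period

# Demand factor needs the fixed connected-load figure in addition to the load profile.
demand_factor = peak_demand_kw / connected_load_kw

# Load factor can be derived from the load profile alone.
load_factor = average_load_kw / peak_demand_kw

print(f"demand factor = {demand_factor:.2f}, load factor = {load_factor:.2f}")
```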
See also
Availability factor
Capacity factor
Demand factor
Diversity factor
Utilization factor
References
Power engineering | Load factor (electrical) | [
"Engineering"
] | 329 | [
"Power engineering",
"Electrical engineering",
"Energy engineering"
] |
18,255,021 | https://en.wikipedia.org/wiki/Singularity%20%28DeSmedt%20novel%29 | Singularity is a novel by Bill DeSmedt published by Per Aspera Press in 2004. It is DeSmedt's debut novel and explores the theory that the Tunguska event was caused by a micro black hole.
Synopsis and publication
Released in 2004, Singularity is both DeSmedt's and the publishing house's debut novel. Singularity is the first novel in the Archon Sequence series about the Tunguska event. DeSmedt's second and third novels in the series are Dualism (2014) and Triploidy (2022).
On Barnes & Noble's science fiction and fantasy list, Singularity ranked fifth. On Mysterious Galaxy's bestsellers rankings, it ranked seventh.
Plot summary
The novel is based on the theory that the Tunguska event was caused by a micro black hole. Trying to locate weapons of mass destruction, Marianna Bonaventure, an American in the United States Department of Energy's CROM (Critical Resources Oversight Mandate), has to work together with the outstanding analyst Jonathan Knox.
Reception
The Seattle Times's Nisi Shawl wrote, "DeSmedt's clear descriptions of everything from the core of a typical star to the sinister device an assassin uses to mimic a wolf's bite make it easy to follow his swiftly swooping story line". Robert Folsom praised the book in The Kansas City Star, writing, "The dialogue would be another matter; it's very scientific. But DeSmedt has managed a neat trick: Conversations are lively even though they're peppered with accurate physicist's jargon. The thriller aspect of the book helps."
The San Diego Union-Tribune's Jim Hopper called the novel "a stylish technothriller". The Fayetteville Observer said the novel was "a science fiction thriller [that] will appeal to readers who enjoy Michael Crichton". Danica McKellar praised the book in an interview with the New York Post, stating, "It's my favorite science fiction thriller. It's got everything - great characters, suspense, action, romance, and you just might learn something about black holes along the way."
Referring to how Earth's gravity could have sucked in a black hole, John R. Alden wrote in The Plain Dealer, "Singularity takes this bizarre possibility, adds a cast of exotic characters, whips in a blitzkrieg plot and bakes it all into a hugely entertaining near-future thriller. James Bond would have loved to star in a story such as this." In a mixed review, Publishers Weekly said, "The sexual chemistry between Marianna and Jonathan adds spice. Exotic hardware, lifestyles of the rich and notorious, double- and triple-crosses and a slightly rushed and facile conclusion all make a respectable if not outstanding first effort."
Awards
The novel was awarded the "Gold Medal for Science Fiction" as part of Foreword Magazine's "Book of the Year Awards". It received the Independent Publisher Book Awards' "Ippy prize for Best Fantasy/Science Fiction novel of 2004".
About the author
Bill DeSmedt is an American author and software engineer.
References
External links
2004 American novels
2004 science fiction novels
American science fiction novels
Black holes
Debut novels | Singularity (DeSmedt novel) | [
"Physics",
"Astronomy"
] | 671 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects"
] |
18,255,533 | https://en.wikipedia.org/wiki/PBLU | pBLU is a commercially produced bacterial plasmid that contains genes for ampicillin resistance (beta lactamase and beta galactosidase). It is often used in conjunction with an ampicillin-susceptible E. coli strain to teach students about transformation of eubacteria. It is 5,437 base pairs long. There is a multiple cloning site in the lacZ gene.
References
Molecular biology techniques
Plasmids | PBLU | [
"Chemistry",
"Biology"
] | 95 | [
"Plasmids",
"Biotechnology stubs",
"Molecular biology techniques",
"Bacteria",
"Molecular biology"
] |
14,204,445 | https://en.wikipedia.org/wiki/Mazur%20manifold | In differential topology, a branch of mathematics, a Mazur manifold is a contractible, compact, smooth four-dimensional manifold-with-boundary which is not diffeomorphic to the standard 4-ball. Usually these manifolds are further required to have a handle decomposition with a single -handle, and a single -handle; otherwise, they would simply be called contractible manifolds. The boundary of a Mazur manifold is necessarily a homology 3-sphere.
History
Barry Mazur and Valentin Poenaru discovered these manifolds simultaneously. Akbulut and Kirby showed that several Brieskorn homology spheres are boundaries of Mazur manifolds, effectively coining the term "Mazur manifold". These results were later generalized to other contractible manifolds by Casson, Harer and Stern. One of the Mazur manifolds is also an example of an Akbulut cork which can be used to construct exotic 4-manifolds.
Mazur manifolds have been used by Fintushel and Stern to construct exotic actions of a group of order 2 on the 4-sphere.
Mazur's discovery was surprising for several reasons:
Every smooth homology sphere in dimension is homeomorphic to the boundary of a compact contractible smooth manifold. This follows from the work of Kervaire and the h-cobordism theorem. Slightly more strongly, every smooth homology 4-sphere is diffeomorphic to the boundary of a compact contractible smooth 5-manifold (also by the work of Kervaire). But not every homology 3-sphere is diffeomorphic to the boundary of a contractible compact smooth 4-manifold. For example, the Poincaré homology sphere does not bound such a 4-manifold because the Rochlin invariant provides an obstruction.
The h-cobordism theorem implies that, at least in dimensions n ≥ 6, there is a unique contractible n-manifold with simply-connected boundary, where uniqueness is up to diffeomorphism. This manifold is the unit ball B^n. It is an open problem as to whether or not B^5 admits an exotic smooth structure, but by the h-cobordism theorem, such an exotic smooth structure, if it exists, must restrict to an exotic smooth structure on S^4. Whether or not S^4 admits an exotic smooth structure is equivalent to another open problem, the smooth Poincaré conjecture in dimension four. Whether or not B^4 admits an exotic smooth structure is another open problem, closely linked to the Schoenflies problem in dimension four.
Mazur's observation
Let W be a Mazur manifold that is constructed as S^1 × B^3 union a 2-handle. Here is a sketch of Mazur's argument that the double of such a Mazur manifold is S^4. W × [0,1] is a contractible 5-manifold constructed as S^1 × B^4 union a 2-handle. The 2-handle can be unknotted since the attaching map is a framed knot in the 4-manifold S^1 × S^3. So S^1 × B^4 union the 2-handle is diffeomorphic to B^5. The boundary of B^5 is S^4. But the boundary of W × [0,1] is the double of W.
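The chain of identifications can be summarized in the following LaTeX fragment; it is an illustrative sketch of the argument above, with W denoting the Mazur manifold and h^2 the attached 2-handle (notation chosen here, not fixed by the article).

```latex
% Sketch of Mazur's argument (notation W and h^2 chosen for this summary).
\[
  W \cong (S^1 \times B^3) \cup h^2
  \;\Longrightarrow\;
  W \times [0,1] \cong (S^1 \times B^4) \cup h^2 \cong B^5 ,
\]
\[
  \partial\bigl(W \times [0,1]\bigr) = W \cup_{\partial W} W
  \quad\text{and}\quad
  \partial B^5 = S^4
  \;\Longrightarrow\;
  W \cup_{\partial W} W \cong S^4 .
\]
```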
References
Differential topology
Manifolds | Mazur manifold | [
"Mathematics"
] | 640 | [
"Space (mathematics)",
"Topological spaces",
"Topology",
"Differential topology",
"Manifolds"
] |
14,205,946 | https://en.wikipedia.org/wiki/Algae%20fuel | Algae fuel, algal biofuel, or algal oil is an alternative to liquid fossil fuels that uses algae as its source of energy-rich oils. Also, algae fuels are an alternative to commonly known biofuel sources, such as corn and sugarcane. When made from seaweed (macroalgae) it can be known as seaweed fuel or seaweed oil. These fuels have no practical significance but remain an aspirational target in the biofuels research area.
History
In 1942 Harder and Von Witsch were the first to propose that microalgae be grown as a source of lipids for food or fuel. Following World War II, research began in the US, Germany, Japan, England, and Israel on culturing techniques and engineering systems for growing microalgae on larger scales, particularly species in the genus Chlorella. Meanwhile, H. G. Aach showed that Chlorella pyrenoidosa could be induced via nitrogen starvation to accumulate as much as 70% of its dry weight as lipids. Since the need for alternative transportation fuel had subsided after World War II, research at this time focused on culturing algae as a food source or, in some cases, for wastewater treatment.
Interest in the application of algae for biofuels was rekindled during the oil embargo and oil price surges of the 1970s, leading the US Department of Energy to initiate the Aquatic Species Program in 1978. The Aquatic Species Program spent $25 million over 18 years with the goal of developing liquid transportation fuel from algae that would be price competitive with petroleum-derived fuels. The research program focused on the cultivation of microalgae in open outdoor ponds, systems which are low in cost but vulnerable to environmental disturbances like temperature swings and biological invasions. 3,000 algal strains were collected from around the country and screened for desirable properties such as high productivity, lipid content, and thermal tolerance, and the most promising strains were included in the SERI microalgae collection at the Solar Energy Research Institute (SERI) in Golden, Colorado and used for further research. Among the program's most significant findings were that rapid growth and high lipid production were "mutually exclusive", since the former required high nutrients and the latter required low nutrients. The final report suggested that genetic engineering may be necessary to be able to overcome this and other natural limitations of algal strains, and that the ideal species might vary with place and season. Although it was successfully demonstrated that large-scale production of algae for fuel in outdoor ponds was feasible, the program failed to do so at a cost that would be competitive with petroleum, especially as oil prices sank in the 1990s. Even in the best case scenario, it was estimated that unextracted algal oil would cost $59–186 per barrel, while petroleum cost less than $20 per barrel in 1995. Therefore, under budget pressure in 1996, the Aquatic Species Program was abandoned.
Other contributions to algal biofuels research have come indirectly from projects focusing on different applications of algal cultures. For example, in the 1990s Japan's Research Institute of Innovative Technology for the Earth (RITE) implemented a research program with the goal of developing systems to fix CO2 using microalgae. Although the goal was not energy production, several studies produced by RITE demonstrated that algae could be grown using flue gas from power plants as a CO2 source, an important development for algal biofuel research. Other work focusing on harvesting hydrogen gas, methane, or ethanol from algae, as well as nutritional supplements and pharmaceutical compounds, has also helped inform research on biofuel production from algae.
Following the disbanding of the Aquatic Species Program in 1996, there was a relative lull in algal biofuel research. Still, various projects were funded in the US by the Department of Energy, Department of Defense, National Science Foundation, Department of Agriculture, National Laboratories, state funding, and private funding, as well as in other countries. More recently, rising oil prices in the 2000s spurred a revival of interest in algal biofuels: US federal funding has increased, and numerous research projects are being funded in Australia, New Zealand, Europe, the Middle East, and other parts of the world.
In December 2022, ExxonMobil, the last large oil company to invest in algae biofuels, ended its research funding.
In March 2023, researchers said that the commercialization of biofuels would require several billion dollars of funding, plus a long-term dedication to overcoming what appear to be fundamental biological limitations of wild organisms. Most researchers believe that large scale production of biofuels is either "a decade, and more likely two decades, away."
Food supplementation
Algal oil is used as a source of fatty acid supplementation in food products, as it contains mono- and polyunsaturated fats, in particular EPA and DHA. Its DHA content is roughly equivalent to that of salmon based fish oil.
Fuels
Algae can be converted into various types of fuels, depending on the production technologies and the part of the cells used. The lipid, or oily part of the algae biomass can be extracted and converted into biodiesel through a process similar to that used for any other vegetable oil, or converted in a refinery into "drop-in" replacements for petroleum-based fuels. Alternatively or following lipid extraction, the carbohydrate content of algae can be fermented into bioethanol or butanol fuel.
Biodiesel
Biodiesel is a diesel fuel derived from animal or plant lipids (oils and fats). Studies have shown that some species of algae can produce 60% or more of their dry weight in the form of oil. Because the cells grow in aqueous suspension, where they have more efficient access to water, CO2 and dissolved nutrients, microalgae are capable of producing large amounts of biomass and usable oil in either high-rate algal ponds or photobioreactors. This oil can then be turned into biodiesel which could be sold for use in automobiles. Regional production of microalgae and processing into biofuels could provide economic benefits to rural communities.
As they do not have to produce structural compounds such as cellulose for leaves, stems, or roots, and because they can be grown floating in a rich nutritional medium, microalgae can have faster growth rates than terrestrial crops. Also, they can convert a much higher fraction of their biomass to oil than conventional crops, e.g. 60% versus 2–3% for soybeans. The per unit area yield of oil from algae is estimated to be from 58,700 to 136,900 L/ha/year, depending on lipid content, which is 10 to 23 times as high as the next highest yielding crop, oil palm, at 5,950 L/ha/year.
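As a quick arithmetic check on the quoted range, the following Python sketch computes the ratio of the reported algal oil yields to the oil palm figure given above.

```python
# Yield figures as quoted above (L/ha/year).
algae_low, algae_high = 58_700, 136_900
oil_palm = 5_950

# Ratio of algal oil yield to the next highest yielding crop (oil palm).
print(f"{algae_low / oil_palm:.1f}x to {algae_high / oil_palm:.1f}x")  # roughly 10x to 23x
```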
The U.S. Department of Energy's Aquatic Species Program, 1978–1996, focused on biodiesel from microalgae. The final report suggested that biodiesel could be the only viable method by which to produce enough fuel to replace current world diesel usage. If algae-derived biodiesel were to replace the annual global production of 1.1bn tons of conventional diesel then a land mass of 57.3 million hectares would be required, which would be highly favorable compared to other biofuels.
Biobutanol
Butanol can be made from algae or diatoms using only a solar powered biorefinery. This fuel has an energy density 10% less than gasoline, and greater than that of either ethanol or methanol. In most gasoline engines, butanol can be used in place of gasoline with no modifications. In several tests, butanol consumption is similar to that of gasoline, and when blended with gasoline, provides better performance and corrosion resistance than that of ethanol or E85.
The green waste left over from the algae oil extraction can be used to produce butanol. In addition, it has been shown that macroalgae (seaweeds) can be fermented by bacteria of genus Clostridia to butanol and other solvents. Transesterification of seaweed oil (into biodiesel) is also possible with species such as Chaetomorpha linum, Ulva lactuca, and Enteromorpha compressa (Ulva).
The following species are being investigated as suitable species from which to produce ethanol and/or butanol:
Alaria esculenta
Laminaria saccharina
Palmaria palmata
Biogasoline
Biogasoline is gasoline produced from biomass. Like traditionally produced gasoline, it contains between 6 (hexane) and 12 (dodecane) carbon atoms per molecule and can be used in internal-combustion engines.
Biogas
Biogas is composed mainly of methane (CH4) and carbon dioxide (CO2), with some traces of hydrogen sulphide, oxygen, nitrogen, and hydrogen. Macroalgae have a high methane production rate compared to plant biomass. Biogas production from macroalgae is more technically viable compared to other fuels, but it is not economically viable due to the high cost of macroalgae feedstock. Carbohydrate and protein in microalgae can be converted into biogas through anaerobic digestion, which includes hydrolysis, fermentation, and methanogenesis steps. The conversion of algal biomass into methane can potentially recover as much energy as it obtains, but it is more profitable when the algal lipid content is lower than 40%. Biogas production from microalgae is relatively low because of the high ratio of protein in microalgae, but microalgae can be co-digested with high C/N ratio products such as wastepaper. Another method to produce biogas is through gasification, where hydrocarbon is converted to syngas through a partial oxidation reaction at high temperature (typically 800 °C to 1000 °C). Gasification is usually performed with catalysts; uncatalyzed gasification requires the temperature to be about 1300 °C. Syngas can be burnt directly to produce energy or used as a fuel in turbine engines. It can also be used as a feedstock for other chemical products.
Methane
Methane, the main constituent of natural gas, can be produced from algae by various methods, namely gasification, pyrolysis and anaerobic digestion. In the gasification and pyrolysis methods, methane is extracted under high temperature and pressure. Anaerobic digestion is a straightforward method that decomposes algae into simple components, transforms these into fatty acids using microbes such as acidogenic bacteria, removes any solid particles, and finally uses methanogenic archaea to release a gas mixture containing methane. A number of studies have successfully shown that biomass from microalgae can be converted into biogas via anaerobic digestion. Therefore, in order to improve the overall energy balance of microalgae cultivation operations, it has been proposed to recover the energy contained in waste biomass via anaerobic digestion to methane for generating electricity.
Ethanol
The Algenol system, which is being commercialized by BioFields in Puerto Libertad, Sonora, Mexico, utilizes seawater and industrial exhaust to produce ethanol. Porphyridium cruentum has also been shown to be potentially suitable for ethanol production due to its capacity for accumulating large amounts of carbohydrates.
Green diesel
Algae can be used to produce 'green diesel' (also known as renewable diesel, hydrotreating vegetable oil or hydrogen-derived renewable diesel) through a hydrotreating refinery process that breaks molecules down into shorter hydrocarbon chains used in diesel engines. It has the same chemical properties as petroleum-based diesel meaning that it does not require new engines, pipelines or infrastructure to distribute and use. It has yet to be produced at a cost that is competitive with petroleum. While hydrotreating is currently the most common pathway to produce fuel-like hydrocarbons via decarboxylation/decarbonylation, there is an alternative process offering a number of important advantages over hydrotreating. In this regard, the work of Crocker et al. and Lercher et al. is particularly noteworthy. For oil refining, research is underway for catalytic conversion of renewable fuels by decarboxylation. As the oxygen is present in crude oil at rather low levels, of the order of 0.5%, deoxygenation in petroleum refining is not of much concern, and no catalysts are specifically formulated for oxygenates hydrotreating. Hence, one of the critical technical challenges to make the hydrodeoxygenation of algae oil process economically feasible is related to the research and development of effective catalysts.
Jet fuel
Trials of using algae as biofuel were carried out by Lufthansa and Virgin Atlantic as early as 2008, although there is little evidence that using algae is a reasonable source for jet biofuels. By 2015, cultivation of fatty acid methyl esters and alkenones from the algae, Isochrysis, was under research as a possible jet biofuel feedstock.
Algae-based energy harvester
In May 2022, scientists at the University of Cambridge announced they had created an algae-based energy harvester that uses natural sunlight to power a small microprocessor; it initially powered the processor for six months and then kept going for a full year. The device, which is about the size of an AA battery, is a small container holding water and blue-green algae. The device does not generate a large amount of power, but it can be used for Internet of Things devices, eliminating the need for traditional batteries such as lithium-ion batteries. The goal is to have a more environmentally friendly power source that can be used in remote areas.
Species
Research into algae for the mass-production of oil focuses mainly on microalgae (organisms capable of photosynthesis that are less than 0.4 mm in diameter, including the diatoms and cyanobacteria) as opposed to macroalgae, such as seaweed. The preference for microalgae has come about due largely to their less complex structure, fast growth rates, and high oil-content (for some species). However, some research is being done into using seaweeds for biofuels, probably due to the high availability of this resource.
Researchers across various locations worldwide have started investigating the following species for their suitability as mass oil producers:
Botryococcus braunii
Chlorella
Dunaliella tertiolecta
Gracilaria
Pleurochrysis carterae (also called CCMP647).
Sargassum, with 10 times the output volume of Gracilaria.
The amount of oil each strain of algae produces varies widely. Note the following microalgae and their various oil yields:
Ankistrodesmus TR-87: 28–40% dry weight
Botryococcus braunii: 29–75% dw
Chlorella sp.: 29%dw
Chlorella protothecoides(autotrophic/heterotrophic): 15–55% dw
Crypthecodinium cohnii: 20%dw
Cyclotella DI- 35: 42%dw
Dunaliella tertiolecta : 36–42%dw
Hantzschia DI-160: 66%dw
Nannochloris: 31(6–63)%dw
Nannochloropsis : 46(31–68)%dw
Neochloris oleoabundans: 35–54%dw
Nitzschia TR-114: 28–50%dw
Phaeodactylum tricornutum: 31%dw
Scenedesmus TR-84: 45%dw
Schizochytrium 50–77%dw
Stichococcus: 33(9–59)%dw
Tetraselmis suecica: 15–32%dw
Thalassiosira pseudonana: (21–31)%dw
In addition, due to its high growth-rate, Ulva has been investigated as a fuel for use in the SOFT cycle, (SOFT stands for Solar Oxygen Fuel Turbine), a closed-cycle power-generation system suitable for use in arid, subtropical regions.
Other species used include Clostridium saccharoperbutylacetonicum, Sargassum, Gracilaria, Prymnesium parvum, and Euglena gracilis.
Nutrients and growth inputs
Light is what algae primarily need for growth, as it is the most limiting factor. Many companies are investing in developing systems and technologies for providing artificial light. One of them is OriginOil, which has developed the Helix BioReactor, featuring a rotating vertical shaft with low-energy lights arranged in a helix pattern. Water temperature also influences the metabolic and reproductive rates of algae. Although most algae grow at lower rates when the water temperature drops, the biomass of algal communities can become large due to the absence of grazing organisms. Modest increases in water current velocity may also affect rates of algae growth, since the rates of nutrient uptake and boundary-layer diffusion increase with current velocity.
Other than light and water, phosphorus, nitrogen, and certain micronutrients are also useful and essential in growing algae. Nitrogen and phosphorus are the two most significant nutrients required for algal productivity, but other nutrients such as carbon and silica are additionally required. Of the nutrients required, phosphorus is one of the most essential as it is used in numerous metabolic processes. The microalga D. tertiolecta was analyzed to see which nutrient affects its growth the most. The concentrations of phosphorus (P), iron (Fe), cobalt (Co), zinc (Zn), manganese (Mn), molybdenum (Mo), magnesium (Mg), calcium (Ca), silicon (Si) and sulfur (S) were measured daily using inductively coupled plasma (ICP) analysis. Among all the elements measured, phosphorus showed the most dramatic decrease, with a reduction of 84% over the course of the culture. This result indicates that phosphorus, in the form of phosphate, is required in high amounts by all organisms for metabolism.
There are two enrichment media that have been extensively used to grow most species of algae: Walne medium and the Guillard's F/2 medium. These commercially available nutrient solutions may reduce time for preparing all the nutrients required to grow algae. However, due to their complexity in the process of generation and high cost, they are not used for large-scale culture operations. Therefore, enrichment media used for mass production of algae contain only the most important nutrients with agriculture-grade fertilizers rather than laboratory-grade fertilizers.
Cultivation
Algae grow much faster than food crops, and can produce hundreds of times more oil per unit area than conventional crops such as rapeseed, palms, soybeans, or jatropha. As algae have a harvesting cycle of 1–10 days, their cultivation permits several harvests in a very short time-frame, a strategy differing from that associated with annual crops. In addition, algae can be grown on land unsuitable for terrestrial crops, including arid land and land with excessively saline soil, minimizing competition with agriculture. Most research on algae cultivation has focused on growing algae in clean but expensive photobioreactors, or in open ponds, which are cheap to maintain but prone to contamination.
Closed-loop system
The lack of equipment and structures needed to begin growing algae in large quantities has inhibited widespread mass-production of algae for biofuel production. Maximum use of existing agriculture processes and hardware is the goal.
Closed systems (not exposed to open air) avoid the problem of contamination by other organisms blown in by the air. The problem of a closed system is finding a cheap source of sterile CO2. Several experimenters have found the CO2 from a smokestack works well for growing algae.
For reasons of economy, some experts think that algae farming for biofuels will have to be done as part of cogeneration, where it can make use of waste heat and help soak up pollution.
To produce micro-algae at large-scale under controlled environment using PBR system, strategies such as light guides, sparger, and PBR construction materials required should be well considered.
Photobioreactors
Most companies pursuing algae as a source of biofuels pump nutrient-rich water through plastic or borosilicate glass tubes (called "bioreactors") that are exposed to sunlight, so-called photobioreactors (PBR).
Running a PBR is more difficult than using an open pond, and costlier, but may provide a higher level of control and productivity. In addition, a photobioreactor can be integrated into a closed loop cogeneration system much more easily than ponds or other methods.
Open pond
Open pond systems consist of simple in-ground ponds, which are often mixed by a paddle wheel. These systems have low power requirements, operating costs, and capital costs when compared to closed-loop photobioreactor systems. Nearly all commercial algae producers for high-value algal products utilize open pond systems.
Turf scrubber
The Algae scrubber is a system designed primarily for cleaning nutrients and pollutants out of water using algal turfs. An algal turf scrubber (ATS) mimics the algal turfs of a natural coral reef by taking in nutrient rich water from waste streams or natural water sources, and pulsing it over a sloped surface. This surface is coated with a rough plastic membrane or a screen, which allows naturally occurring algal spores to settle and colonize the surface. Once the algae has been established, it can be harvested every 5–15 days, and can produce 18 metric tons of algal biomass per hectare per year. In contrast to other methods, which focus primarily on a single high yielding species of algae, this method focuses on naturally occurring polycultures of algae. As such, the lipid content of the algae in an ATS system is usually lower, which makes it more suitable for a fermented fuel product, such as ethanol, methane, or butanol. Conversely, the harvested algae could be treated with a hydrothermal liquefaction process, which would make possible biodiesel, gasoline, and jet fuel production.
There are three major advantages of ATS over other systems. The first advantage is documented higher productivity over open pond systems. The second is lower operating and fuel production costs. The third is the elimination of contamination issues due to the reliance on naturally occurring algae species. The projected costs for energy production in an ATS system are $0.75/kg, compared to a photobioreactor which would cost $3.50/kg. Furthermore, due to the fact that the primary purpose of ATS is removing nutrients and pollutants out of water, and these costs have been shown to be lower than other methods of nutrient removal, this may incentivize the use of this technology for nutrient removal as the primary function, with biofuel production as an added benefit.
Fuel production
After harvesting the algae, the biomass is typically processed in a series of steps, which can differ based on the species and desired product; this is an active area of research and also the bottleneck of this technology: the cost of extraction is higher than the value of what is obtained. One of the solutions is to use filter feeders to "eat" the algae. Improved animals can provide both foods and fuels. An alternative method to extract the algae is to grow the algae with specific types of fungi. This causes bio-flocculation of the algae, which allows for easier extraction.
Dehydration
Often, the algae is dehydrated, and then a solvent such as hexane is used to extract energy-rich compounds like triglycerides from the dried material. Then, the extracted compounds can be processed into fuel using standard industrial procedures. For example, the extracted triglycerides are reacted with methanol to create biodiesel via transesterification. The unique composition of fatty acids of each species influences the quality of the resulting biodiesel and thus must be taken into account when selecting algal species for feedstock.
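A schematic form of the transesterification step is shown below in LaTeX; it is an illustrative sketch of the general reaction rather than a process specification taken from the article.

```latex
% Schematic transesterification of an extracted triglyceride with methanol,
% yielding fatty acid methyl esters (biodiesel) and glycerol.
\[
  \text{triglyceride} \;+\; 3\,\mathrm{CH_3OH}
  \;\xrightarrow{\;\text{catalyst}\;}\;
  3\;\text{fatty acid methyl esters} \;+\; \text{glycerol}
\]
```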
Hydrothermal liquefaction
An alternative approach called hydrothermal liquefaction employs a continuous process that subjects harvested wet algae to high temperatures and pressures.
Products include crude oil, which can be further refined into aviation fuel, gasoline, or diesel fuel using one or many upgrading processes. The test process converted between 50 and 70 percent of the algae's carbon into fuel. Other outputs include clean water, fuel gas and nutrients such as nitrogen, phosphorus, and potassium.
Nutrients
Nutrients like nitrogen (N), phosphorus (P), and potassium (K), are important for plant growth and are essential parts of fertilizer. Silica and iron, as well as several trace elements, may also be considered important marine nutrients as the lack of one can limit the growth of, or productivity in, an area.
Carbon dioxide
Bubbling CO2 through algal cultivation systems can greatly increase productivity and yield (up to a saturation point). Typically, about 1.8 tonnes of CO2 will be utilised per tonne of algal biomass (dry) produced, though this varies with algae species. The Glenturret Distillery in Perthshire percolates the CO2 made during whisky distillation through a microalgae bioreactor. Each tonne of microalgae absorbs two tonnes of CO2. Scottish Bioenergy, who run the project, sell the microalgae as high-value, protein-rich food for fisheries. In the future, they will use the algae residues to produce renewable energy through anaerobic digestion.
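As a rough illustration of the figure quoted above, this Python sketch estimates the CO2 taken up for a given dry-biomass output; the 100-tonne production figure is a made-up example, not from the article.

```python
# Hypothetical example (not from the article): annual dry algal biomass produced.
biomass_tonnes = 100.0

# Roughly 1.8 tonnes of CO2 are utilised per tonne of dry algal biomass,
# though the exact ratio varies with the algae species.
co2_per_tonne_biomass = 1.8

co2_utilised_tonnes = biomass_tonnes * co2_per_tonne_biomass
print(f"~{co2_utilised_tonnes:.0f} tonnes of CO2 utilised")  # ~180 tonnes
```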
Nitrogen
Nitrogen is a valuable substrate that can be utilized in algal growth. Various sources of nitrogen can be used as a nutrient for algae, with varying capacities. Nitrate was found to be the preferred source of nitrogen with regard to the amount of biomass grown. Urea is a readily available source that shows comparable results, making it an economical substitute for the nitrogen source in large-scale culturing of algae. Despite the clear increase in growth in comparison to a nitrogen-less medium, it has been shown that alterations in nitrogen levels affect lipid content within the algal cells. In one study, nitrogen deprivation for 72 hours caused the total fatty acid content (on a per-cell basis) to increase 2.4-fold. 65% of the total fatty acids were esterified to triacylglycerides in oil bodies, when compared to the initial culture, indicating that the algal cells utilized de novo synthesis of fatty acids. It is vital for the lipid content in algal cells to be high enough, while maintaining adequate cell division times, so parameters that can maximize both are under investigation.
Wastewater
A possible nutrient source is wastewater from the treatment of sewage, agricultural, or flood plain run-off, all currently major pollutants and health risks. However, this waste water cannot feed algae directly and must first be processed by bacteria, through anaerobic digestion. If waste water is not processed before it reaches the algae, it will contaminate the algae in the reactor, and at the very least, kill much of the desired algae strain. In biogas facilities, organic waste is often converted to a mixture of carbon dioxide, methane, and organic fertilizer. Organic fertilizer that comes out of the digester is liquid, and nearly suitable for algae growth, but it must first be cleaned and sterilized.
The utilization of wastewater and ocean water instead of freshwater is strongly advocated due to the continuing depletion of freshwater resources. However, heavy metals, trace metals, and other contaminants in wastewater can decrease the ability of cells to produce lipids biosynthetically and also impact various other workings in the machinery of cells. The same is true for ocean water, but the contaminants are found in different concentrations. Thus, agricultural-grade fertilizer is the preferred source of nutrients, but heavy metals are again a problem, especially for strains of algae that are susceptible to these metals. In open pond systems the use of strains of algae that can deal with high concentrations of heavy metals could prevent other organisms from infesting these systems. In some instances it has even been shown that strains of algae can remove over 90% of nickel and zinc from industrial wastewater in relatively short periods of time.
Environmental impact
In comparison with terrestrial-based biofuel crops such as corn or soybeans, microalgal production results in a much less significant land footprint due to the higher oil productivity from the microalgae than all other oil crops. Algae can also be grown on marginal lands useless for ordinary crops and with low conservation value, and can use water from salt aquifers that is not useful for agriculture or drinking. Algae can also grow on the surface of the ocean in bags or floating screens. Thus microalgae could provide a source of clean energy with little impact on the provisioning of adequate food and water or the conservation of biodiversity. Algae cultivation also requires no external subsidies of insecticides or herbicides, removing any risk of generating associated pesticide waste streams. In addition, algal biofuels are much less toxic, and degrade far more readily than petroleum-based fuels. However, due to the flammable nature of any combustible fuel, there is potential for some environmental hazards if ignited or spilled, as may occur in a train derailment or a pipeline leak. This hazard is reduced compared to fossil fuels, due to the ability for algal biofuels to be produced in a much more localized manner, and due to the lower toxicity overall, but the hazard is still there nonetheless. Therefore, algal biofuels should be treated in a similar manner to petroleum fuels in transportation and use, with sufficient safety measures in place at all times.
Studies have determined that replacing fossil fuels with renewable energy sources, such as biofuels, has the capability of reducing emissions by up to 80%. An algae-based system could capture approximately 80% of the CO2 emitted from a power plant when sunlight is available. Although this CO2 will later be released into the atmosphere when the fuel is burned, it would have entered the atmosphere regardless. The possibility of reducing total emissions therefore lies in the prevention of the release of CO2 from fossil fuels. Furthermore, compared to fuels like diesel and petroleum, and even compared to other sources of biofuels, the production and combustion of algal biofuel does not produce any sulfur oxides or nitrous oxides, and produces a reduced amount of carbon monoxide, unburned hydrocarbons, and other harmful pollutants. Since terrestrial plant sources of biofuel production simply do not have the production capacity to meet current energy requirements, microalgae may be one of the only options to approach complete replacement of fossil fuels.
Microalgae production also includes the ability to use saline waste or waste streams as an energy source. This opens a new strategy to produce biofuel in conjunction with waste water treatment, while being able to produce clean water as a byproduct. When used in a microalgal bioreactor, harvested microalgae will capture significant quantities of organic compounds as well as heavy metal contaminants absorbed from wastewater streams that would otherwise be directly discharged into surface and ground-water. Moreover, this process also allows the recovery of phosphorus from waste, which is an essential but scarce element in nature – the reserves of which are estimated to have depleted in the last 50 years. Another possibility is the use of algae production systems to clean up non-point source pollution, in a system known as an algal turf scrubber (ATS). This has been demonstrated to reduce nitrogen and phosphorus levels in rivers and other large bodies of water affected by eutrophication, and systems are being built that will be capable of processing up to 110 million liters of water per day. ATS can also be used for treating point source pollution, such as the waste water mentioned above, or in treating livestock effluent.
Polycultures
Nearly all research in algal biofuels has focused on culturing single species, or monocultures, of microalgae. However, ecological theory and empirical studies have demonstrated that plant and algae polycultures, i.e. groups of multiple species, tend to produce larger yields than monocultures. Experiments have also shown that more diverse aquatic microbial communities tend to be more stable through time than less diverse communities. Recent studies found that polycultures of microalgae produced significantly higher lipid yields than monocultures. Polycultures also tend to be more resistant to pest and disease outbreaks, as well as invasion by other plants or algae. Thus culturing microalgae in polyculture may not only increase yields and stability of yields of biofuel, but also reduce the environmental impact of an algal biofuel industry.
Economic viability
There is clearly a demand for sustainable biofuel production, but whether a particular biofuel will be used ultimately depends not on sustainability but cost efficiency. Therefore, research is focusing on cutting the cost of algal biofuel production to the point where it can compete with conventional petroleum. The production of several products from algae has been mentioned as the most important factor for making algae production economically viable. Other factors are the improving of the solar energy to biomass conversion efficiency (currently 3%, but 5 to 7% is theoretically attainable) and making the oil extraction from the algae easier.
In a 2007 report a formula was derived estimating the cost of algal oil in order for it to be a viable substitute to petroleum diesel:
C(algal oil) = 25.9 × 10⁻³ C(petroleum)
where: C(algal oil) is the price of microalgal oil in dollars per gallon and C(petroleum) is the price of crude oil in dollars per barrel. This equation assumes that algal oil has roughly 80% of the caloric energy value of crude petroleum.
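To illustrate how the formula is applied, here is a small Python sketch; the $100-per-barrel crude price is an arbitrary example value, not a figure from the report.

```python
def competitive_algal_oil_price(crude_usd_per_barrel: float) -> float:
    """Break-even algal oil price in $/gallon for a given crude price in $/barrel,
    using the rule of thumb C(algal oil) = 25.9e-3 * C(petroleum) quoted above."""
    return 25.9e-3 * crude_usd_per_barrel

# Hypothetical example: crude oil at $100 per barrel.
print(f"${competitive_algal_oil_price(100.0):.2f}/gal")  # about $2.59 per gallon
```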
The IEA estimated in 2017 that algal biomass can be produced for as little as $0.54/kg in open ponds in a warm climate to $10.20/kg in photobioreactors in cooler climates. Assuming that the biomass contains 30% oil by weight, the cost of biomass for providing a liter of oil would be approximately $1.40 ($5.30/gal) and $1.81 ($6.85/gal) for photobioreactors and raceways, respectively. Oil recovered from the lower cost biomass produced in photobioreactors is estimated to cost $2.80/L, assuming the recovery process contributes 50% to the cost of the final recovered oil. If existing algae projects can achieve biodiesel production price targets of less than $1 per gallon, the United States may realize its goal of replacing up to 20% of transport fuels by 2020 by using environmentally and economically sustainable fuels from algae production.
Whereas technical problems, such as harvesting, are being addressed successfully by the industry, the high up-front investment of algae-to-biofuels facilities is seen by many as a major obstacle to the success of this technology. As of 2007, only a few studies on the economic viability were publicly available, and these often had to rely on the little data (often only engineering estimates) available in the public domain. Dmitrov examined the GreenFuel photobioreactor and estimated that algae oil would only be competitive at an oil price of $800 per barrel. A study by Alabi et al. examined raceways, photobioreactors and anaerobic fermenters for making biofuels from algae and found that photobioreactors are too expensive to make biofuels. Raceways might be cost-effective in warm climates with very low labor costs, and fermenters may become cost-effective subsequent to significant process improvements. The group found that capital costs, labor costs and operational costs (fertilizer, electricity, etc.) by themselves are too high for algae biofuels to be cost-competitive with conventional fuels. Similar results were found by others, suggesting that unless new, cheaper ways of harnessing algae for biofuels production are found, their great technical potential may never become economically accessible. In 2012, Rodrigo E. Teixeira demonstrated a new reaction and proposed a process for harvesting and extracting raw materials for biofuel and chemical production that requires a fraction of the energy of current methods, while extracting all cell constituents.
Use of byproducts
Many of the byproducts produced in the processing of microalgae can be used in various applications, many of which have a longer history of production than algal biofuel. Some of the products not used in the production of biofuel include natural dyes and pigments, antioxidants, and other high-value bio-active compounds. These chemicals and excess biomass have found numerous use in other industries. For example, the dyes and oils have found a place in cosmetics, commonly as thickening and water-binding agents. Discoveries within the pharmaceutical industry include antibiotics and antifungals derived from microalgae, as well as natural health products, which have been growing in popularity over the past few decades. For instance Spirulina contains numerous polyunsaturated fats (Omega 3 and 6), amino acids, and vitamins, as well as pigments that may be beneficial, such as beta-carotene and chlorophyll.
Advantages
Ease of growth
One of the main advantages of using microalgae as a feedstock, compared to more traditional crops, is that they can be grown much more easily. Algae can be grown on land that would not be considered suitable for the growth of regularly used crops. In addition, wastewater that would normally hinder plant growth has been shown to be very effective in growing algae. Because of this, algae can be grown without taking up arable land that would otherwise be used for producing food crops, and the better resources can be reserved for normal crop production. Microalgae also require fewer resources to grow and little attention is needed, allowing the growth and cultivation of algae to be a very passive process.
Impact on food
Many traditional feedstocks for biodiesel, such as corn and palm, are also used as feed for livestock on farms, as well as a valuable source of food for humans. Because of this, using them as biofuel reduces the amount of food available for both, resulting in an increased cost for both the food and the fuel produced. Using algae as a source of biodiesel can alleviate this problem in a number of ways. First, algae is not used as a primary food source for humans, meaning that it can be used solely for fuel and there would be little impact in the food industry. Second, many of the waste-product extracts produced during the processing of algae for biofuel can be used as a sufficient animal feed. This is an effective way to minimize waste and a much cheaper alternative to the more traditional corn- or grain-based feeds.
Minimalisation of waste
Growing algae as a source of biofuel has also been shown to have numerous environmental benefits, and has presented itself as a much more environmentally friendly alternative to current biofuels. For one, it is able to utilize run-off, water contaminated with fertilizers and other nutrients that are a by-product of farming, as its primary source of water and nutrients. Because of this, it prevents this contaminated water from mixing with the lakes and rivers that currently supply our drinking water. In addition to this, the ammonia, nitrates, and phosphates that would normally render the water unsafe actually serve as excellent nutrients for the algae, meaning that fewer resources are needed to grow the algae. Many algae species used in biodiesel production are excellent bio-fixers, meaning they are able to remove carbon dioxide from the atmosphere to use as a form of energy for themselves. Because of this, they have found use in industry as a way to treat flue gases and reduce GHG emissions.
Disadvantage
High water requirement
The process of microalgae cultivation is highly water-intensive. Life cycle studies estimated that the production of 1 liter of microalgae based biodiesel requires between 607 and 1944 liters of water. That said, abundant wastewater and/or seawater, which also contain various nutrients, can theoretically be used for this purpose instead of freshwater.
Commercial viability
Algae biodiesel is still a fairly new technology. Despite the fact that research began over 30 years ago, it was put on hold during the mid-1990s, mainly due to a lack of funding and a relatively low petroleum cost. For the next few years algae biofuels saw little attention; it was not until the spike in fuel prices of the early 2000s that interest was revitalized amid the search for alternative fuel sources.
Increasing interest in seaweed farming for carbon sequestration, eutrophication reduction and production of food has resulted in the creation of commercial seaweed cultivation since 2017. Reductions in the cost of cultivation and harvesting as well as the development of commercial industry will improve the economics of macroalgae biofuels. Climate change has created a proliferation of brown macroalgae mats, which wash up on the shores of the Caribbean. Currently these mats are disposed of but there is interest in developing them into a feedstock for biofuel production.
Stability
The biodiesel produced from the processing of microalgae differs from other forms of biodiesel in the content of polyunsaturated fats. Polyunsaturated fats are known for their ability to retain fluidity at lower temperatures. While this may seem like an advantage in production during the colder temperatures of the winter, the polyunsaturated fats result in lower stability during regular seasonal temperatures.
International policies
Canada
Numerous policies have been put in place since the 1975 oil crisis in order to promote the use of renewable fuels in the United States, Canada and Europe. In Canada, these included excise tax exemptions for propane and natural gas, which were extended in 1992 to ethanol made from biomass and to methanol. The federal government also announced its renewable fuels strategy in 2006, which proposed four components: increasing availability of renewable fuels through regulation, supporting the expansion of Canadian production of renewable fuels, assisting farmers to seize new opportunities in this sector, and accelerating the commercialization of new technologies. These mandates were quickly followed by the Canadian provinces.
United States
Policies in the United States have included a decrease in the subsidies provided by the federal and state governments to the oil industry, which have usually amounted to $2.84 billion; this is more than what is actually set aside for the biofuel industry. The measure was discussed at the G20 in Pittsburgh, where leaders agreed that "inefficient fossil fuel subsidies encourage wasteful consumption, reduce our energy security, impede investment in clean sources and undermine efforts to deal with the threat of climate change". If this commitment is followed through and subsidies are removed, a fairer market in which algae biofuels can compete will be created. In 2010, the U.S. House of Representatives passed legislation seeking to give algae-based biofuels parity with cellulosic biofuels in federal tax credit programs. The Algae-based Renewable Fuel Promotion Act (HR 4168) was implemented to give biofuel projects access to a $1.01 per gallon production tax credit and 50% bonus depreciation for biofuel plant property. The U.S. Government also introduced the Domestic Fuel for Enhancing National Security Act, implemented in 2011. This policy constitutes an amendment to the Federal Property and Administrative Services Act of 1949 and federal defense provisions in order to extend to 15 the number of years for which the Department of Defense (DOD) may enter into a multiyear contract in the case of the purchase of advanced biofuel. Federal and DOD programs are usually limited to a 5-year period.
Other
The European Union (EU) has also responded by quadrupling the credits for second-generation algae biofuels, which was established as an amendment to the Biofuels and Fuel Quality Directives.
See also
References
Further reading
External links
A Report on Commercial Usage and Production of Algal Oil
A Sober Look at Biofuels from Algae (Biodiesel Magazine)
US National Renewable Energy Laboratory Publications
Current Status and Potential for Algal Biofuels Production
Bioreactors
High lipid content microalgae
Renewable energy
Sustainable energy
Renewable fuels
Biochemical engineering | Algae fuel | [
"Chemistry",
"Engineering",
"Biology"
] | 9,182 | [
"Bioreactors",
"Biological engineering",
"Chemical reactors",
"Chemical engineering",
"Biochemical engineering",
"Microbiology equipment",
"Biochemistry"
] |
14,210,112 | https://en.wikipedia.org/wiki/Extended%20finite%20element%20method | The extended finite element method (XFEM) is a numerical technique based on the generalized finite element method (GFEM) and the partition of unity method (PUM). It extends the classical finite element method (FEM) approach by enriching the solution space for solutions to differential equations with discontinuous functions.
History
The extended finite element method (XFEM) was developed in 1999 by Ted Belytschko and collaborators,
to help alleviate shortcomings of the finite element method and has been used to model the propagation of various discontinuities: strong (cracks) and weak (material interfaces). The idea behind XFEM is to retain most advantages of meshfree methods while alleviating their negative sides.
Rationale
The extended finite element method was developed to ease difficulties in solving problems with localized features that are not efficiently resolved by mesh refinement. One of the initial applications was the modelling of fractures in a material. In this original implementation, discontinuous basis functions are added to standard polynomial basis functions for nodes that belonged to elements that are intersected by a crack to provide a basis that included crack opening displacements. A key advantage of XFEM is that in such problems the finite element mesh does not need to be updated to track the crack path. Subsequent research has illustrated the more general use of the method for problems involving singularities, material interfaces, regular meshing of microstructural features such as voids, and other problems where a localized feature can be described by an appropriate set of basis functions.
Principle
Enriched finite element methods extend, or enrich, the approximation space so that it is able to naturally reproduce the challenging feature associated with the problem of interest: the discontinuity, singularity, boundary layer, etc. It was shown that for some problems, such an embedding of the problem's feature into the approximation space can significantly improve convergence rates and accuracy. Moreover, treating problems with discontinuities with eXtended Finite Element Methods suppresses the need to mesh and remesh the discontinuity surfaces, thus alleviating the computational costs and projection errors associated with conventional finite element methods, at the cost of restricting the discontinuities to mesh edges.
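To make the enrichment idea concrete, the following one-dimensional sketch (in Python) augments standard piecewise-linear shape functions with a Heaviside enrichment so that the approximated field can jump inside an element without the mesh conforming to the crack. The mesh, the degree-of-freedom values and the crack position x_c are made-up illustrative numbers, and the unshifted Heaviside used here is a simplification of the shifted enrichment that production XFEM codes typically use.

```python
import numpy as np

def hat(x, nodes, i):
    """Standard piecewise-linear (hat) shape function N_i on a 1-D mesh."""
    y = np.zeros_like(x, dtype=float)
    xi = nodes[i]
    if i > 0:
        left = nodes[i - 1]
        m = (x >= left) & (x <= xi)
        y[m] = (x[m] - left) / (xi - left)
    if i < len(nodes) - 1:
        right = nodes[i + 1]
        m = (x >= xi) & (x <= right)
        y[m] = (right - x[m]) / (right - xi)
    return y

def xfem_field(x, nodes, u, a, x_c):
    """Enriched approximation u_h(x) = sum_i N_i(x) u_i + sum_j N_j(x) H(x - x_c) a_j,
    where H is a Heaviside enrichment that lets the field jump at the crack x_c."""
    x = np.asarray(x, dtype=float)
    H = (x >= x_c).astype(float)          # discontinuous enrichment function
    u_h = np.zeros_like(x)
    for i in range(len(nodes)):
        N = hat(x, nodes, i)
        u_h += N * u[i] + N * H * a[i]    # standard + enriched contributions
    return u_h

nodes = np.linspace(0.0, 1.0, 5)          # 4 elements; the crack sits inside one element
u = np.array([0.0, 0.1, 0.2, 0.3, 0.4])   # illustrative "smooth" degrees of freedom
a = np.array([0.0, 0.0, 0.05, 0.05, 0.0]) # enriched dofs: nonzero only at the cracked element's nodes
eps = 1e-9
jump = xfem_field([0.55 + eps], nodes, u, a, 0.55) - xfem_field([0.55 - eps], nodes, u, a, 0.55)
print("displacement jump across the crack:", float(jump[0]))   # about 0.05
```

Only the nodes of the element cut by the crack carry nonzero enriched degrees of freedom, which keeps the extra computational cost local to the feature.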
Existing XFEM codes
Several research codes implement this technique to various degrees.
GetFEM++
xfem++
openxfem++
Dynaflow
eXlibris
ngsxfem
XFEM has also been implemented in codes such as Altair Radioss, ASTER, Morfeo, and Abaqus. It is increasingly being adopted by other commercial finite element software, with a few plugins and actual core implementations available (ANSYS, SAMCEF, OOFELIE, etc.).
References
Numerical differential equations
Partial differential equations
Continuum mechanics
Finite element method
Mechanics | Extended finite element method | [
"Physics",
"Engineering"
] | 584 | [
"Mechanics",
"Classical mechanics",
"Mechanical engineering",
"Continuum mechanics"
] |
14,212,068 | https://en.wikipedia.org/wiki/Mercury%28IV%29%20fluoride | Mercury(IV) fluoride, HgF4, is a purported compound, the first to be reported with mercury in the +4 oxidation state. Mercury, like the other group 12 elements (cadmium and zinc), has an s²d¹⁰ electron configuration and generally only forms bonds involving its 6s orbital. This means that the highest oxidation state mercury normally attains is +2, and for this reason it is sometimes considered a post-transition metal instead of a transition metal. HgF4 was first reported from experiments in 2007, but its existence remains disputed; experiments conducted in 2008 could not replicate the compound.
History
Speculation about higher oxidation states for mercury had existed since the 1970s, and theoretical calculations in the 1990s predicted that it should be stable in the gas phase, with a square-planar geometry consistent with a formal d8 configuration. However, experimental proof remained elusive until 2007, when HgF4 was first prepared using solid neon and argon for matrix isolation at a temperature of 4 K. The compound was detected using infrared spectroscopy.
However, the compound's synthesis has not been replicated in other labs, and more recent theoretical studies cast doubt on the possible existence of mercury(IV) (and copernicium(IV)) fluoride. Dirac-Hartree-Fock computations including both relativistic effects and electron correlation suggest that an HgF4 compound would be unbound by about 2 eV (and CnF4 by 14 eV).
Explanation
Theoretical studies suggest that mercury is unique among the natural elements of group 12 in forming a tetrafluoride, and attribute this observation to relativistic effects. According to calculations, the tetrafluorides of the "less relativistic" elements cadmium and zinc are unstable and eliminate a fluorine molecule, F2, to form the metal difluoride complex. On the other hand, the tetrafluoride of the "more relativistic" synthetic element 112, copernicium, is predicted to be more stable.
Subsequent density functional theory and coupled cluster calculations indicated that bonding in HgF4 (if it really exists) involves d orbitals. This has led to the suggestion that mercury should be considered a transition metal (the group 12 metals are sometimes excluded from the transition metals because they do not oxidize beyond +2). Chemical historian William B. Jensen has argued that the compound alone is insufficient to reclassify the metal, because HgF4 represents at best a non-equilibrium transient state.
Synthesis and properties
HgF4 is produced by the reaction of elemental mercury with fluorine:
Hg + 2 F2 → HgF4
HgF4 is only stable in matrix isolation at cryogenic temperatures; upon heating, or if the HgF4 molecules touch each other, it decomposes to mercury(II) fluoride and fluorine:
HgF4 → HgF2 + F2
HgF4 is a diamagnetic, square planar molecule. The mercury atom has a formal 6s²5d⁸6p⁶ electron configuration, and as such obeys the octet rule but not the 18-electron rule. HgF4 is isoelectronic with the tetrafluoroaurate anion, AuF₄⁻, and is valence isoelectronic with the tetrachloroaurate (AuCl₄⁻), tetrabromoaurate (AuBr₄⁻), and tetrachloroplatinate (PtCl₄²⁻) anions.
References
Mercury compounds
Fluorides
Metal halides
Hypothetical chemical compounds
Substances discovered in the 2000s | Mercury(IV) fluoride | [
"Chemistry"
] | 734 | [
"Inorganic compounds",
"Hypotheses in chemistry",
"Salts",
"Theoretical chemistry",
"Metal halides",
"Hypothetical chemical compounds",
"Fluorides"
] |
14,212,831 | https://en.wikipedia.org/wiki/Alpha-methylacyl-CoA%20racemase | α-Methylacyl-CoA racemase (AMACR, EC 5.1.99.4) is an enzyme that in humans is encoded by the AMACR gene. AMACR catalyzes the following chemical reaction:
(2R)-2-methylacyl-CoA ⇌ (2S)-2-methylacyl-CoA
In mammalian cells, the enzyme is responsible for converting (2R)-methylacyl-CoA esters to their (2S)-methylacyl-CoA epimers; known substrates include coenzyme A esters of pristanic acid (mostly derived from phytanic acid, a 3-methyl branched-chain fatty acid that is abundant in the diet) and bile acids derived from cholesterol. This transformation is required in order to degrade (2R)-methylacyl-CoA esters by β-oxidation, which process requires the (2S)-epimer. The enzyme is known to be localised in peroxisomes and mitochondria, both of which are known to β-oxidize 2-methylacyl-CoA esters.
Nomenclature
This enzyme belongs to the family of isomerases, specifically the racemases and epimerases which act on other compounds. The systematic name of this enzyme class is 2-methylacyl-CoA 2-epimerase. In vitro experiments with the human enzyme AMACR 1A show that both (2S)- and (2R)-methyldecanoyl-CoA esters are substrates and are converted by the enzyme with very similar efficiency. Prolonged incubation of either substrate with the enzyme establishes an equilibrium with both substrates or products present in a near 1:1 ratio. The mechanism of the enzyme requires removal of the α-proton of the 2-methylacyl-CoA to form a deprotonated intermediate (which is probably the enol or enolate) followed by non-stereospecific reprotonation. Thus either epimer is converted into a near 1:1 mixture of both isomers upon full conversion of the substrate.
Clinical significance
Both decreased and increased levels of the enzyme in humans are linked with diseases.
Neurological diseases
Reduction of the protein level or activity results in the accumulation of (2R)-methyl fatty acids such as bile acids which causes neurological symptoms. The symptoms are similar to those of adult Refsum disease and usually appear in the late teens or early twenties.
The first documented cases of AMACR deficiency in adults were reported in 2000. This deficiency falls within a class of disorders called peroxisome biogenesis disorders (PBDs), although it is quite different from other peroxisomal disorders and does not share classic Refsum disorder symptoms. The deficiency causes an accumulation of pristanic acid, dihydroxycholestanoic acid (DHCA) and trihydroxycholestanoic acid (THCA) and to a lesser extent phytanic acid. This phenomenon was verified in 2002, when researchers remarked of one case, "His condition would have been missed if they hadn't measured the pristanic acid concentration."
AMACR deficiency can cause mental impairment, confusion, learning difficulties, and liver damage. It can be treated by dietary elimination of pristanic and phytanic acid through reduced intake of dairy products and meats such as beef, lamb, and chicken. Compliance to the diet is low, however, because of eating habits and loss of weight.
Cancer
Increased levels of AMACR protein concentration and activity are associated with prostate cancer, and the enzyme is used widely as a biomarker (known in cancer literature as P504S) in biopsy tissues. Around 10 different variants of human AMACR have been identified from prostate cancer tissues, which variants arise from alternative mRNA splicing. Some of these splice variants lack catalytic residues in the active site or have changes in the C-terminus, which is required for dimerisation. Increased levels of AMACR are also associated with some breast, colon, and other cancers, but it is unclear exactly what the role of AMACR is in these cancers.
Antibodies to AMACR are used in immunohistochemistry to demonstrate prostate carcinoma, since the enzyme is greatly overexpressed in this type of tumour.
Ibuprofen metabolism
The enzyme is also involved in a chiral inversion pathway which converts ibuprofen, a member of the 2-arylpropionic acid (2-APA) non-steroidal anti-inflammatory drug family (NSAIDs), from the R-enantiomer to the S-enantiomer. The pathway is uni-directional because only R-ibuprofen can be converted into ibuprofenoyl-CoA, which is then epimerized by AMACR. Conversion of S-ibuprofenoyl-CoA to S-ibuprofen is assumed to be performed by one of the many human acyl-CoA thioesterase enzymes (ACOTs). The reaction is of pharmacological importance because ibuprofen is typically used as a racemic mixture, and the drug is converted to the S-isomer upon uptake, which inhibits the activity of the cyclo-oxygenase enzymes and induces an anti-inflammatory effect. Human AMACR 1A has been demonstrated to epimerise other 2-APA-CoA esters, suggesting a common chiral inversion pathway for this class of drugs.
References
External links
N.S. man thought he'd never find anyone else with his condition. Then he got a text from Oklahoma. CBC News. Feb 7, 2022
Further reading
EC 5.1.99
Enzymes of known structure
Genes on human chromosome 5
Tumor markers | Alpha-methylacyl-CoA racemase | [
"Chemistry",
"Biology"
] | 1,191 | [
"Chemical pathology",
"Tumor markers",
"Biomarkers"
] |
19,262,028 | https://en.wikipedia.org/wiki/ZZ%20diboson | ZZ dibosons are rare pairs of Z bosons. They were first observed by the experiments at the Large Electron–Positron Collider (ALEPH, DELPHI, L3 and OPAL). The first observation in a hadron collider was made by the scientists of DØ collaboration at Fermilab.
Discussion
ZZ dibosons are force-carrying particles observed as products of proton–antiproton collisions at the Tevatron, the world's second highest-energy particle accelerator (after the CERN Large Hadron Collider). The first observation of the ZZ dibosons was announced at a Fermilab seminar on 30 July 2008.
The rarest diboson processes after ZZ dibosons are those involving the Higgs boson, so seeing ZZ diboson is an essential step in demonstrating the ability to see the Higgs boson. ZZ dibosons are the latest in a series of observations of pairs of gauge bosons (force-carrying particles) by DØ and its sister experiment CDF (also at Tevatron).
Final analysis of the data for this discovery was done by a team of international researchers including scientists of American, Belgian, British, Georgian, Italian, and Russian nationalities. The observations began with the study of the already-rare production of W bosons plus photons (Wγ); then Z bosons plus photons (Zγ); then observation of W pairs (WW); then a mix of W and Z bosons (WZ). The ZZ pair is the combination which has the lowest predicted likelihood of production in the Standard Model due to the smaller couplings.
See also
Dineutron
Diproton
Pauli exclusion principle
Higgs boson
List of particles
References
External links
Bosons
Electroweak theory | ZZ diboson | [
"Physics"
] | 371 | [
"Physical phenomena",
"Matter",
"Electroweak theory",
"Bosons",
"Fundamental interactions",
"Particle physics",
"Particle physics stubs",
"Subatomic particles"
] |
19,264,415 | https://en.wikipedia.org/wiki/Norwegian%20Contractors | Norwegian Contractors AS was a concrete gravity base (GBS) structure supplier from 1974 to 1994. Aker Marine Contractors AS (AMC) was established in 1995 and is a continuance of the marine activities in Norwegian Contractors AS.
Norwegian Contractors AS have worked on following offshore platforms:
Ekofisk tank
Frigg 3 offshore platforms
Statfjord A
Statfjord B
Statfjord C
Gullfaks A
Gullfaks B
Oseberg A
Gullfaks C (heaviest object ever moved by mankind)
Draugen
Heidrun
Hibernia drilling platform (1997)
Nordhordland Bridge (1994)
Sleipner A (1993)
Snorre
Troll A platform (1995)
See also
Offshore concrete structure
References
Engineering companies of Norway
Oil platforms | Norwegian Contractors | [
"Chemistry",
"Engineering"
] | 165 | [
"Oil platforms",
"Petroleum technology",
"Natural gas technology",
"Structural engineering"
] |
10,188,326 | https://en.wikipedia.org/wiki/B-theory%20of%20time | The B-theory of time, also called the "tenseless theory of time", is one of two positions regarding the temporal ordering of events in the philosophy of time. B-theorists argue that the flow of time is only a subjective illusion of human consciousness, that the past, present, and future are equally real, and that time is tenseless: temporal becoming is not an objective feature of reality. Therefore, there is nothing privileged about the present, ontologically speaking.
The B-theory is derived from a distinction drawn by J. M. E. McTaggart between A series and B series. The B-theory is often drawn upon in theoretical physics, and is seen in theories such as eternalism.
Origin of terms
The terms A-theory and B-theory, first coined by Richard M. Gale in 1966, derive from Cambridge philosopher J. M. E. McTaggart's analysis of time and change in "The Unreality of Time" (1908), in which events are ordered via a tensed A-series or a tenseless B-series. It is popularly assumed that the A theory represents time like an A-series, while the B theory represents time like a B-series.
Events (or "times"), McTaggart observed, may be characterized in two distinct but related ways. On the one hand they can be characterized as past, present or future, normally indicated in natural languages such as English by the verbal inflection of tenses or auxiliary adverbial modifiers. Alternatively, events may be described as earlier than, simultaneous with, or later than others. Philosophers are divided as to whether the tensed or tenseless mode of expressing temporal fact is fundamental. Some philosophers have criticised hybrid theories, where one holds a tenseless view of time but asserts that the present has special properties, as falling foul of McTaggart's paradox. For a thorough discussion of McTaggart's paradox, see R. D. Ingthorsson (2016).
The debate between A-theorists and B-theorists is a continuation of a metaphysical dispute reaching back to the ancient Greek philosophers Heraclitus and Parmenides. Parmenides thought that reality is timeless and unchanging. Heraclitus, in contrast, believed that the world is a process of ceaseless change or flux. Reality for Heraclitus is dynamic and ephemeral. Indeed, the world is so fleeting, according to Heraclitus, that it is impossible to step twice into the same river. The metaphysical issues that continue to divide A-theorists and B-theorists concern the reality of the past, the reality of the future, and the ontological status of the present.
B-theory in metaphysics
The difference between A-theorists and B-theorists is often described as a dispute about temporal passage or 'becoming' and 'progressing'. B-theorists argue that this notion is purely psychological. Many A-theorists argue that in rejecting temporal 'becoming', B-theorists reject time's most vital and distinctive characteristic. It is common (though not universal) to identify A-theorists' views with belief in temporal passage. Another way to characterise the distinction revolves around what is known as the principle of temporal parity, the thesis that contrary to what appears to be the case, all times really exist in parity. A-theory (and especially presentism) denies that all times exist in parity, while B-theory insists all times exist in parity.
B-theorists such as D. H. Mellor and J. J. C. Smart wish to eliminate all talk of past, present and future in favour of a tenseless ordering of events, believing the past, present, and future to be equally real, opposing the idea that they are irreducible foundations of temporality. B-theorists also argue that the past, present, and future feature very differently in deliberation and reflection. For example, we remember the past and anticipate the future, but not vice versa. B-theorists maintain that the fact that we know much less about the future simply reflects an epistemological difference between the future and the past: the future is no less real than the past; we just know less about it.
Opposition
Irreducibility of tense
Earlier B-theorists argued that one could paraphrase tensed sentences (such as "the sun is now shining", uttered on September 28) into tenseless sentences (such as "on September 28, the sun shines") without loss of meaning. Later B-theorists argued that tenseless sentences could give the truth conditions of tensed sentences or their tokens. Quentin Smith argues that "now" cannot be reduced to descriptions of dates and times, because all date and time descriptions, and therefore truth conditionals, are relative to certain events. Tensed sentences, on the other hand, do not have such truth conditionals. The B-theorist could argue that "now" is reducible to a token-reflexive phrase such as "simultaneous with this utterance", yet Smith states that even such an argument fails to eliminate tense. One can think the statement "I am not uttering anything now", and such a statement would be true. The statement "I am not uttering anything simultaneous with this utterance" is self-contradictory, and cannot be true even when one thinks the statement. Finally, while tensed statements can express token-independent truth values, no token-reflexive statement can do so (by definition of the term "token-reflexive"). Smith claims that proponents of the B-theory argue that the inability to translate tensed sentences into tenseless sentences does not prove A-theory.
Logician and philosopher Arthur Prior has also drawn a distinction between what he calls A-facts and B-facts. The latter are facts about tenseless relations, such as the fact that the year 2025 is 25 years later than the year 2000. The former are tensed facts, such as that the Jurassic age is in the past, or that the end of the universe is in the future. Prior asks the reader to imagine having a headache, and after the headache subsides, saying "thank goodness that's over." Prior argues that the B-theory cannot make sense of this sentence. It seems bizarre to be thankful that a headache is earlier than one's utterance, any more than being thankful that the headache is later than one's utterance. Indeed, most people who say "thank goodness that's over" are not even thinking of their own utterance. Therefore, when people say "thank goodness that's over," they are thankful for an A-fact, and not a B-fact. Yet, A-facts are only possible on the A-theory of time. (See also: Further facts.)
Endurantism and perdurantism
Opponents also charge the B-theory with being unable to explain persistence of objects. The two leading explanations for this phenomenon are endurantism and perdurantism. According to the former, an object is wholly present at every moment of its existence. According to the latter, objects are extended in time and therefore have temporal parts. Hales and Johnson explain endurantism as follows: "something is an enduring object only if it is wholly present at each time in which it exists. An object is wholly present at a time if all of its parts co-exist at that time." Under endurantism, all objects must exist as wholes at each point in time, but an object such as a rotting fruit will have the property of being not rotten one day and being rotten on another. On eternalism, and hence the B-theory, it seems that one is committed to two conflicting states for the same object. The spacetime (Minkowskian) interpretation of relativity adds an additional problem for endurantism under B-theory. On the spacetime interpretation, an object may appear as a whole at its rest frame, but on an inertial frame, it will have proper parts at different positions, and therefore different parts at different times. Hence it will not exist as a whole at any time, contradicting endurantism.
Opponents will then charge perdurantism with numerous difficulties of its own. First, it is controversial whether perdurantism can be formulated coherently. An object is defined as a collection of spatiotemporal parts, defined as pieces of a perduring object. If objects have temporal parts, this leads to difficulties. For example, the rotating discs argument asks the reader to imagine a world containing nothing more than a homogeneous spinning disk. Under endurantism, the same disc endures despite its rotations. The perdurantist supposedly has a difficult time explaining what it means for such a disc to have a determinate state of rotation. Temporal parts also seem to act unlike physical parts. A piece of chalk can be broken into two physical halves, but it seems nonsensical to talk about breaking it into two temporal halves. American epistemologist Roderick Chisholm argued that someone who hears the bird call "Bob White" knows "that his experience of hearing 'Bob' and his experience of hearing 'White' were not also had by two other things, each distinct from himself and from each other. The endurantist can explain the experience as "There exists an x such that x hears 'Bob' and then x hears 'White'" but the perdurantist cannot give such an account. Peter van Inwagen asks the reader to consider Descartes as a four-dimensional object that extends from 1596 to 1650. If Descartes had lived a much shorter life, he would have had a radically different set of temporal parts. This diminished Descartes, he argues, could not have been the same person on perdurantism, since their temporal extents and parts are so different.
Notes
References
Craig, W.L. (2001) The Tensed Theory of Time: A Critical Examination. Synthese Library.
Craig, W.L. (2000) The Tenseless Theory of Time: A Critical Examination. Synthese Library.
Davies, Paul (1980) Other Worlds. Harmondsworth: Penguin.
Michael Lockwood The Labyrinth of Time, Oxford University Press, 2005, ISBN 9780199249954.
McTaggart, J.M.E. (1927) The Nature of Existence, Vol II. Cambridge: Cambridge University Press.
Mellor, D.H. (1998) Real Time II. London: Routledge.
Prior, A.N. (2003) Papers on Time and Tense. New Edition by Per Hasle, Peter Øhrstrøm, Torben Braüner & Jack Copeland. Oxford: Clarendon.
Quine, W. V. O. (1960) Word and Object, Cambridge, MA: M.I.T. Press.
External links
Markosian, Ned, 2002, "Time", Stanford Encyclopedia of Philosophy
Arthur Prior, Stanford Encyclopedia of Philosophy
Concepts in metaphysics
Concepts in the philosophy of science
Ontology
Time
Theories of time | B-theory of time | [
"Physics",
"Mathematics"
] | 2,284 | [
"Physical quantities",
"Time",
"Quantity",
"Philosophy of time",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
10,195,662 | https://en.wikipedia.org/wiki/Sarett%20oxidation | The Sarett oxidation is an organic reaction that oxidizes primary and secondary alcohols to aldehydes and ketones, respectively, using chromium trioxide and pyridine. Unlike the similar Jones oxidation, the Sarett oxidation will not further oxidize primary alcohols to their carboxylic acid form, neither will it affect carbon-carbon double bonds. Use of the original Sarett oxidation has become largely antiquated however, in favor of other modified oxidation techniques. The unadulterated reaction is still occasionally used in teaching settings and in small scale laboratory research.
History
First appearance
The reaction is named after the American chemist Lewis Hastings Sarett (1917–1999). The first description of its use appears in a 1953 article co-authored by Sarett that relates to the synthesis of adrenal steroids. The paper proposes the use of the pyridine chromium complex CrO3-2C5H5N to oxidize primary and secondary alcohols. The complex would later become known as the "Sarett Reagent".
Modifications and improvements
Although the Sarett reagent gives good yields of ketones, its conversion of primary alcohols is less efficient. Furthermore, the isolation of products from the reaction solution can be difficult. These limitations were partially addressed with the introduction of the Collins oxidation. The active ingredient in the Sarett reagent is identical to that in the so-called "Collins reagent", i.e. the pyridine complex CrO3(C5H5N)2. The Collins oxidation varies from the Sarett oxidation only in that it uses methylene chloride as solvent instead of neat pyridine. The initially proposed methods of executing the Collins and Sarett oxidations were still not ideal, however, as the Sarett reagent's hygroscopic and pyrophoric properties make it difficult to prepare. These issues led to an improvement of the Collins oxidation protocol known as the Ratcliffe variant.
Preparation of the Sarett reagent
Techniques
The Sarett reagent was originally prepared in 1953 by addition of chromium trioxide to pyridine. The pyridine must be cooled because the reaction is dangerously exothermic. Slowly, the brick-red CrO3 transforms into the bis(pyridine) adduct. Once formed, the Sarett reagent is used immediately.
Safety
The specific methods of the reagent's preparation are critical, as improper technique can cause the explosion of the materials. Some technical improvements to the original methodology have reduced the risks associated with preparation. One such recent improvement reduced the likelihood of explosion by using chromic anhydride granules that would immediately sink below the surface of the cooled pyridine upon addition. It should also be mentioned that chromium trioxide is a corrosive carcinogen and therefore must be handled with extreme care.
Collins technique
The original Collins oxidation calls for the Sarett reagent to be removed from the excess pyridine and dissolved in the less basic methylene chloride. While the new solvent improves the overall yield of the reaction, it also requires the dangerous transfer of the pyrophoric reagent. The 1970 Ratcliffe variation reduced the risk of explosion by calling for the Sarett reagent to be made in situ. This was achieved by creating the Sarett reagent according to the original protocol using a stirred mixture of pyridine and methylene chloride.
Specific applications
The Sarett oxidation efficiently oxidizes primary alcohols to aldehydes without further oxidizing them to carboxylic acids. This key difference from the Jones oxidation is that the Jones oxidation occurs in the presence of water, which adds to the alcohol following oxidation to an aldehyde. The Sarett and Collins oxidations occur in the absence of water. The Sarett oxidation also proceeds under basic conditions, which allows for the use of acid sensitive substrates, such as those containing certain protecting groups. This is dissimilar to other common acidic oxidation reactions such as the Baeyer-Villiger oxidation, which would remove or alter such groups. Additionally, the Sarett reagent is relatively inert towards double bonds and thioether groups. These groups cannot effectively interact with the chromium of the Sarett reagent, as compared to the chromium in oxidizing complexes used prior to 1953.
See also
Pyridinium chlorochromate
Jones oxidation
References
External links
Sarett oxidation
Organic oxidation reactions
Name reactions | Sarett oxidation | [
"Chemistry"
] | 953 | [
"Name reactions",
"Organic oxidation reactions",
"Organic redox reactions",
"Organic reactions"
] |
10,195,736 | https://en.wikipedia.org/wiki/Collins%20reagent | Collins reagent is the complex of chromium(VI) oxide with pyridine in dichloromethane. This metal-pyridine complex, a red solid, is used to oxidize primary alcohols to the corresponding aldehydes and secondary alcohols to the corresponding ketones.
This complex is a hygroscopic orange solid.
Synthesis and structure
The complex is produced by treating chromium trioxide with pyridine. The complex is diamagnetic. According to X-ray crystallography, the complex is 5-coordinate with mutually trans pyridine ligands. The Cr-O and Cr-N distances are respectively 163 and 215 picometers.
In terms of history, the complex was first produced by Sisler et al.
Reactions
Collins reagent is especially useful for oxidations of acid sensitive compounds. Primary and secondary alcohols are oxidized respectively to aldehydes and ketones in yields of 87-98%.
Like other oxidations by Cr(VI), the stoichiometry of the oxidations is complex because the metal undergoes a three-electron reduction while the substrate is oxidized by two electrons:
3 RCH2OH + 2 CrO3(pyridine)2 → 3 RCHO + 3 H2O + Cr2O3 + 4 pyridine
The reagent is typically used in a sixfold excess. Methylene chloride is the typical solvent, in which the reagent has a solubility of 12.5 g/100 mL.
The application of this reagent to oxidations was discovered by G. I. Poos, G. E. Arth, R. E. Beyler and L.H. Sarett in 1953. It was popularized by J. C. Collins several years later.
Other reagents
Sarett oxidation
Oxidation with chromium(VI)-amine complexes
Collins reagent can be used as an alternative to the Jones reagent and pyridinium chlorochromate (PCC) when oxidizing secondary alcohols to ketones. PCC and pyridinium dichromate (PDC) oxidations have largely supplanted Collins oxidation.
Safety and environmental aspects
The solid is flammable. Generally speaking chromium (VI) compounds are carcinogenic.
References
Oxidizing agents
Chromium(VI) compounds | Collins reagent | [
"Chemistry"
] | 501 | [
"Redox",
"Oxidizing agents"
] |
10,196,392 | https://en.wikipedia.org/wiki/Gaisser%E2%80%93Hillas%20function | The Gaisser–Hillas function is used in astroparticle physics. It parameterizes the longitudinal particle density in a cosmic ray air shower. The function was proposed in 1977 by Thomas K. Gaisser and Anthony Michael Hillas.
The number of particles $N$ as a function of traversed atmospheric depth $X$ is expressed as
$$N(X) = N_\text{max} \left(\frac{X - X_0}{X_\text{max} - X_0}\right)^{\frac{X_\text{max} - X_0}{\lambda}} \exp\left(\frac{X_\text{max} - X}{\lambda}\right),$$
where $N_\text{max}$ is the maximum number of particles observed at depth $X_\text{max}$, and $X_0$ and $\lambda$ are primary mass and energy dependent parameters.
Using the substitutions
$$x = \frac{X - X_0}{\lambda} \quad\text{and}\quad m = \frac{X_\text{max} - X_0}{\lambda},$$
the function can be written in an alternative one-parametric ($m$) form as
$$N(x) = N_\text{max} \left(\frac{x}{m}\right)^{m} e^{\,m - x}.$$
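A direct implementation of the profile is straightforward; the sketch below (Python/NumPy) simply evaluates the parameterization given above. The parameter values in the example call are illustrative placeholders, not fitted values from any measured shower.

```python
import numpy as np

def gaisser_hillas(X, N_max, X_max, X_0, lam):
    """Longitudinal shower profile N(X) at slant depth X (g/cm^2)."""
    ratio = (X - X_0) / (X_max - X_0)
    return N_max * ratio ** ((X_max - X_0) / lam) * np.exp((X_max - X) / lam)

# Illustrative parameters only (N_max, X_max, X_0, lambda are placeholders).
X = np.linspace(100.0, 1500.0, 8)
N = gaisser_hillas(X, N_max=1e9, X_max=750.0, X_0=5.0, lam=70.0)
print(np.round(N, 1))
# The profile peaks at X = X_max by construction:
print("peak reproduced:", np.isclose(gaisser_hillas(750.0, 1e9, 750.0, 5.0, 70.0), 1e9))
```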
References
Cosmic rays | Gaisser–Hillas function | [
"Physics",
"Astronomy"
] | 111 | [
"Physical phenomena",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Radiation",
"Particle physics",
"Particle physics stubs",
"Cosmic rays"
] |
10,197,065 | https://en.wikipedia.org/wiki/Aeration%20turbine | Aeration turbines are designed to aerate and mix fluids industrially. They are primarily used in brewing, pond aeration and sewage treatment plants.
Aeration turbines are designed for mixing gases, usually air, with a liquid, usually water. They can serve additional purposes such as destratification, agitation or pumping.
There are numerous design variations in use or newly entering the market. Most are centrifugal, where fluid enters at the axis and exits around the perimeter of the rotor. Aeration turbines can run open or in a housing. Some designs have bladed rotors, which lead to more splashing and need to run close to the surface, an obvious sign of lower efficiency.
Generally, aeration turbines offer high performance efficiency, a compact footprint and high reliability, all of which reduce operational cost.
The use of aeration turbines in industry is still underdeveloped, especially in wastewater treatment. With a rising population and the growing strain on clean water supplies, such environmentally friendly solutions are becoming more important.
The technology allows a more decentralized approach to waste water treatment as sewage can be oxygenised at pumping stations for bacteria to start breaking down the sewage before it even arrives at centralized processing plants.
Aeration turbines are increasingly entering industry because of their significantly higher efficiency and the smaller equipment required compared to other methods of aeration. This means less investment in process infrastructure and substantial savings in overall cost, above all in running costs for electricity.
Reference List
Centrifuges | Aeration turbine | [
"Chemistry",
"Engineering"
] | 310 | [
"Chemical equipment",
"Centrifugation",
"Centrifuges"
] |
10,197,275 | https://en.wikipedia.org/wiki/Dry%20matter | The dry matter or dry weight is a measure of the mass of a completely dried substance.
Analysis of food
The dry matter of plant and animal material consists of all its constituents excluding water. The dry matter of food includes carbohydrates, fats, proteins, vitamins, minerals, and antioxidants (e.g., thiocyanate, anthocyanin, and quercetin). Carbohydrates, fats, and proteins, which provide the energy in foods (measured in kilocalories or kilojoules), make up ninety percent of the dry weight of a diet.
Water composition
Water content in foods varies widely. A large number of foods are more than half water by weight, including boiled oatmeal (84.5%), cooked macaroni (78.4%), boiled eggs (73.2%), boiled rice (72.5%), white meat chicken (70.3%) and sirloin steak (61.9%). Fruits and vegetables are 70 to 95% water. Most meats are on average about 70% water. Breads are approximately 36% water. Some foods have a water content of less than 5%, e.g., peanut butter, crackers, and chocolate cake.
Water content of dairy products is quite variable. Butter is 15% water. Cow's milk ranges between 86 and 88% water. Swiss cheese is 37% water. The water content of milk and dairy products varies with the percentage of butterfat so that whole milk has the lowest percentage of water and skimmed milk has the highest.
Dry matter basis
The nutrient or mineral content of foods, animal feeds or plant tissues is often expressed on a dry matter basis, i.e. as a proportion of the total dry matter in the material. For example, a 138-gram apple contains 84% water (116 g water and 22 g dry matter per apple). The potassium content is 0.72% on a dry matter basis, i.e. 0.72% of the dry matter is potassium. The apple, therefore, contains 158 mg potassium (0.72/100 × 22 g). Dried apple contains the same concentration of potassium on a dry matter basis (0.72%), but is only 32% water (68% dry matter). So 138 g of dried apple contains 93.8 g dry matter and 675 mg potassium (0.72/100 × 93.8 g).
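The apple arithmetic above can be reproduced with a short calculation. The helper function below is only an illustrative sketch (its name and signature are not from any standard food-composition library), and its outputs differ from the 158 mg and 675 mg quoted above only because the text rounds the intermediate dry-matter masses to 22 g and 93.8 g.

```python
def nutrient_mass_mg(total_mass_g, water_fraction, nutrient_pct_dm):
    """Mass of a nutrient (mg) given the sample mass, its water fraction and the
    nutrient concentration expressed on a dry matter basis (percent of dry matter)."""
    dry_matter_g = total_mass_g * (1.0 - water_fraction)
    return dry_matter_g * nutrient_pct_dm / 100.0 * 1000.0

# Fresh apple: 138 g, 84% water, potassium at 0.72% of dry matter.
print(round(nutrient_mass_mg(138, 0.84, 0.72)))   # ~159 mg
# Dried apple: 138 g, 32% water, same 0.72% potassium on a dry matter basis.
print(round(nutrient_mass_mg(138, 0.32, 0.72)))   # ~676 mg
```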
When formulating a diet or mixed animal feed, nutrient or mineral concentrations are generally given on a dry matter basis; it is therefore important to consider the moisture content of each constituent when calculating total quantities of the different nutrients supplied.
Fat in dry matter (FDM)
Cheese contains both dry matter and water. The dry matter in cheese contains proteins, butterfat, minerals, and lactose (milk sugar), although little lactose survives fermentation when the cheese is made. A cheese's fat content is expressed as the percentage of fat in the cheese's dry matter (abbreviated FDM or FiDM), which excludes the cheese's water content. For example, if a cheese is 50% water (and, therefore, 50% dry matter) and has 25% fat, its fat content would be 50% fat in dry matter.
Techniques
In the sugar industry the dry matter content is an important parameter to control the crystallization process and is often measured on-line by means of microwave density meters.
Animal feed
Dry matter can refer to the dry portion of animal feed. A substance in the feed, such as a nutrient or toxin, can be referred to on a dry matter basis (abbreviated DMB) to show its level in the feed (e.g., ppm). Considering nutrient levels in different feeds on a dry matter basis (rather than an as-is basis) makes a comparison easier because feeds contain different percentages of water. This also allows a comparison between the level of a given nutrient in dry matter and the level needed in an animal's diet. Dry matter intake (DMI) refers to feed intake excluding its water content. The percentage of water is frequently determined by heating the feed on a paper plate in a microwave oven or using the Koster Tester to dry the feed. Ascertaining DMI can be useful for low-energy feeds with a high percentage of water in order to ensure adequate energy intake. Animals eating these kinds of feeds have been shown to consume less dry matter and food energy. A problem called dry matter loss can result from heat generation, as caused by microbial respiration. It decreases the content of nonstructural carbohydrate, protein, and food energy.
See also
Body water
Moisture
References
Solids
Measurement
Food analysis | Dry matter | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 989 | [
"Physical quantities",
"Quantity",
"Phases of matter",
"Measurement",
"Size",
"Condensed matter physics",
"Solids",
"Food analysis",
"Food chemistry",
"Matter"
] |
12,529,812 | https://en.wikipedia.org/wiki/Gowdy%20solution | Gowdy universes or, alternatively, Gowdy solutions of Einstein's equations are simple model spacetimes in general relativity which represent an expanding universe filled with a regular pattern of gravitational waves.
External links
– a description of the different types of Gowdy universes suitable for a general audience
General relativity | Gowdy solution | [
"Physics"
] | 65 | [
"General relativity",
"Relativity stubs",
"Theory of relativity"
] |
12,531,400 | https://en.wikipedia.org/wiki/Contention-based%20protocol | A contention-based protocol (CBP) is a communications protocol for operating wireless telecommunication equipment that allows many users to use the same radio channel without pre-coordination. The "listen before talk" operating procedure in IEEE 802.11 is the most well known contention-based protocol.
Section 90.7 of Part 90 of the United States Federal Communications Commission rules define CBP as:
A protocol that allows multiple users to share the same spectrum by defining the events that must occur when two or more transmitters attempt to simultaneously access the same channel and establishing rules by which a transmitter provides reasonable opportunities for other transmitters to operate. Such a protocol may consist of procedures for initiating new transmissions, procedures for determining the state of the channel (available or unavailable), and procedures for managing retransmissions in the event of a busy channel.
This definition was added as part of the Rules for Wireless Broadband Services in the 3650-3700 MHz Band.
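As a rough illustration of the "listen before talk" idea, the sketch below implements a highly simplified CSMA-style procedure with random exponential backoff. It is not the IEEE 802.11 DCF state machine; the function name, the channel_idle callable and parameters such as cw_min and max_attempts are assumptions made purely for the example.

```python
import random

def listen_before_talk(channel_idle, max_attempts=5, cw_min=4, cw_max=64):
    """Minimal CSMA-style sketch: sense the channel, transmit if it is idle,
    otherwise wait a random backoff and try again.
    `channel_idle()` is an assumed callable returning True when the channel is free."""
    cw = cw_min
    for attempt in range(1, max_attempts + 1):
        if channel_idle():
            return f"transmitted on attempt {attempt}"
        slots = random.randint(0, cw - 1)       # random backoff defers the retry
        print(f"attempt {attempt}: channel busy, backing off {slots} slots")
        cw = min(cw * 2, cw_max)                # exponential growth of the contention window
    return "gave up: channel stayed busy"

# Toy channel occupancy: busy for the first two sensing operations, then idle.
samples = iter([False, False] + [True] * 10)
print(listen_before_talk(lambda: next(samples)))
```

The random backoff is what gives other transmitters a "reasonable opportunity to operate" in the sense of the FCC definition: contending stations are unlikely to retry at exactly the same moment.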
References
Wireless networking | Contention-based protocol | [
"Technology",
"Engineering"
] | 193 | [
"Wireless networking",
"Computer networks engineering"
] |
12,532,834 | https://en.wikipedia.org/wiki/Building%20services%20engineering | Building services engineering (BSE) is a professional engineering discipline that strives to achieve a safe and comfortable indoor environment while minimizing the environmental impact of a building.
Professional bodies
The two most notable professional bodies are:
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) was founded in 1894.
The British Chartered Institution of Building Services Engineers (CIBSE) was founded in 1976 and received a Royal Charter in the United Kingdom, formally recognising building services engineering as a profession.
Education
Building services engineers typically possess an academic degree in civil engineering, architectural engineering, building services engineering, mechanical engineering or electrical engineering. The length of study for such a degree is usually 3–4 years for a Bachelor of Engineering (BEng) or Bachelor of Science (BSc) and 4–5 years for a Master of Engineering (MEng).
In the United Kingdom, the Chartered Institution of Building Services Engineers (CIBSE) accredits university degrees in Building Services Engineering. In the United States, ABET accredits degrees.
Building services engineering software
Many tasks in building services engineering involve the use of engineering software, for example to design/model or draw solutions. The most common types of tool are whole building energy simulation and CAD (traditionally 2D) or the increasingly popular Building Information Modeling (BIM), which is 3D. 3D BIM software can have integrated tools for building services calculations such as sizing ventilation ducts or estimating noise levels. Another use of 3D/4D BIM is that it empowers more informed decision making and better coordination between different disciplines, such as 'collision testing'.
See also
American Society of Heating, Refrigerating and Air-Conditioning Engineers
Architectural engineering
Building engineer
Building Engineering Services Association
References
External links
ASHRAE American Society of Heating, Refrigerating and Air-Conditioning Engineers
BESA Building Engineering Services Association
BSRIA The Building Services Research and Information Association
CIBSE Chartered Institution of Building Services Engineers
ECA ECA - Excellence in Electrotechnical and Engineering Services
Modern Building Services journal
Online Building Services Engineering Lecture Notes
India
School of Planning and Architecture, JNA & FAU, Hyderabad, India
Building engineering
Heating, ventilation, and air conditioning | Building services engineering | [
"Engineering"
] | 448 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
12,533,877 | https://en.wikipedia.org/wiki/Revenue%20equivalence | Revenue equivalence is a concept in auction theory that states that given certain conditions, any mechanism that results in the same outcomes (i.e. allocates items to the same bidders) also has the same expected revenue.
Notation
There is a set $X$ of possible outcomes.
There are $n$ agents which have different valuations for each outcome. The valuation of agent $i$ (also called its "type") is represented as a function:
$$v_i : X \to \mathbb{R}_+$$
which expresses the value it has for each alternative, in monetary terms.
The agents have quasilinear utility functions; this means that, if the outcome is $x$ and in addition the agent receives a payment $p_i$ (positive or negative), then the total utility of agent $i$ is:
$$u_i := v_i(x) + p_i$$
The vector of all value-functions is denoted by $v$.
For every agent $i$, the vector of all value-functions of the other agents is denoted by $v_{-i}$. So $v \equiv (v_i, v_{-i})$.
A mechanism is a pair of functions:
An outcome function, that takes as input the value-vector $v$ and returns an outcome $x \in X$ (it is also called a social choice function);
A payment function, that takes as input the value-vector $v$ and returns a vector of payments, $(p_1, \dots, p_n)$, determining how much each player should receive (a negative payment means that the player should pay a positive amount).
The agents' types are independent identically-distributed random variables. Thus, a mechanism induces a Bayesian game in which a player's strategy is his reported type as a function of his true type. A mechanism is said to be Bayesian-Nash incentive compatible if there is a Bayesian Nash equilibrium in which all players report their true type.
Statement
Under these assumptions, the revenue equivalence theorem then says the following.
For any two Bayesian-Nash incentive compatible mechanisms, if:
The outcome function is the same in both mechanisms, and:
For some type $v_i^0$ of each player $i$, the expected payment of player $i$ (averaged over the types of the other players) is the same in both mechanisms;
The valuation of each player is drawn from a path-connected set,
then:
The expected payments of all types are the same in both mechanisms, and hence:
The expected revenue (the negative of the total payments to the players) is the same in both mechanisms.
Example
A classic example is the pair of auction mechanisms: first price auction and second price auction. First-price auction has a variant which is Bayesian-Nash incentive compatible; second-price auction is dominant-strategy-incentive-compatible, which is even stronger than Bayesian-Nash incentive compatible. The two mechanisms fulfill the conditions of the theorem because:
The outcome function is the same in both mechanisms: the highest bidder wins the item; and:
A player who values the item as 0 always pays 0 in both mechanisms.
Indeed, the expected payment for each player is the same in both auctions, and the auctioneer's revenue is the same; see the page on first-price sealed-bid auction for details.
Equivalence of auction mechanisms in single item auctions
In fact, we can use revenue equivalence to prove that many types of auctions are revenue equivalent. For example, the first price auction, second price auction, and the all-pay auction are all revenue equivalent when the bidders are symmetric (that is, their valuations are independent and identically distributed).
Second price auction
Consider the second price single item auction, in which the player with the highest bid pays the second highest bid. It is optimal for each player to bid its own value $v_i$.
Suppose player $i$ has the highest value and wins the auction, paying the second-highest bid, $\max_{j \neq i} v_j$. The revenue from this auction is simply this second-highest value.
First price auction
In the first price auction, where the player with the highest bid simply pays its bid, if all players bid using the bidding function $b(v_i) = \mathbb{E}\left[\max_{j \neq i} v_j \,\middle|\, \max_{j \neq i} v_j \le v_i\right]$, this is a Nash equilibrium.
In other words, if each player bids the expected value of the second-highest value, conditional on their own value being the highest, then no player has any incentive to deviate. If this is true, then it is easy to see that the expected revenue from this auction is also the expected second-highest value, as in the second price auction.
Proof
To prove this, suppose that a player 1 bids where , effectively bluffing that its value is rather than . We want to find a value of such that the player's expected payoff is maximized.
The probability of winning is then . The expected cost of this bid is . Then a player's expected payoff is
Let , a random variable. Then we can rewrite the above as
.
Using the general fact that , we can rewrite the above as
.
Taking derivatives with respect to , we obtain
.
Thus bidding with your value maximizes the player's expected payoff. Since is monotone increasing, we verify that this is indeed a maximum point.
English auction
In the open ascending price auction (aka English auction), a buyer's dominant strategy is to remain in the auction until the asking price is equal to his value. Then, if he is the last one remaining in the arena, he wins and pays the second-highest bid.
Consider the case of two buyers, each with a value that is an independent draw from a distribution with support [0,1], cumulative distribution function F(v) and probability density function f(v). If buyers behave according to their dominant strategies, then a buyer with value v wins if his opponent's value x is lower. Thus his win probability is
and his expected payment is
The expected payment conditional upon winning is therefore
Multiplying both sides by F(v) and differentiating by v yields the following differential equation for e(v).
.
Rearranging this equation,
Let B(v) be the equilibrium bid function in the sealed first-price auction. We establish revenue equivalence by showing that B(v)=e(v), that is, the equilibrium payment by the winner in one auction is equal to the equilibrium expected payment by the winner in the other.
Suppose that a buyer has value v and bids b. His opponent bids according to the equilibrium bidding strategy. The support of the opponent's bid distribution is [0,B(1)]. Thus any bid of at least B(1) wins with probability 1. Therefore, the best bid b lies in the interval [0,B(1)] and so we can write this bid as b = B(x) where x lies in [0,1]. If the opponent has value y he bids B(y). Therefore, the win probability is
$$\Pr\bigl[B(y) \le B(x)\bigr] = \Pr[y \le x] = F(x).$$
The buyer's expected payoff is his win probability times his net gain if he wins, that is,
$$U(x) = F(x)\,\bigl(v - B(x)\bigr).$$
Differentiating, the necessary condition for a maximum is
$$U'(x) = f(x)\,\bigl(v - B(x)\bigr) - F(x)\,B'(x) = 0.$$
That is if B(x) is the buyer's best response it must satisfy this first order condition. Finally we note that for B(v) to be the equilibrium bid function, the buyer's best response must be B(v). Thus x=v.
Substituting for x in the necessary condition,
$$f(v)\,\bigl(v - B(v)\bigr) - F(v)\,B'(v) = 0, \qquad\text{i.e.}\qquad B'(v) = \frac{f(v)}{F(v)}\,\bigl(v - B(v)\bigr).$$
Note that this differential equation is identical to that for e(v). Since e(0)=B(0)=0 it follows that $B(v) = e(v)$.
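For the special case of the uniform distribution F(v) = v on [0, 1] (an illustrative assumption, not required by the argument above), the identity B(v) = e(v) can be checked symbolically. The SymPy snippet below computes e(v) from its definition and recovers the familiar two-bidder first-price equilibrium bid of v/2.

```python
import sympy as sp

v, x = sp.symbols("v x", positive=True)
F = x                   # uniform distribution on [0, 1]: F(x) = x
f = sp.diff(F, x)       # density f(x) = 1

# Expected payment of the winner in the English/second-price auction,
# conditional on winning with value v (two symmetric bidders):
e = sp.integrate(x * f, (x, 0, v)) / F.subs(x, v)
print(sp.simplify(e))   # -> v/2, which equals the first-price equilibrium bid B(v)
```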
Using revenue equivalence to predict bidding functions
We can use revenue equivalence to predict the bidding function of a player in a game. Consider the two player version of the second price auction and the first price auction, where each player's value is drawn uniformly from $[0, 1]$.
Second price auction
The expected payment of the first player in the second price auction can be computed as follows.
Since players bid truthfully in a second price auction, we can replace all prices with players' values. If player 1 wins, he pays what player 2 bids, which is $v_2$; player 1 himself bids $v_1$. Since the payment is zero when player 1 loses, the expected payment of player 1 is the expected value of $v_2$ taken over the cases where $v_2 < v_1$.
Since $v_1$ and $v_2$ come from a uniform distribution on $[0, 1]$, we can simplify this to
$$e(v_1) = \Pr[v_2 < v_1] \cdot \mathbb{E}[v_2 \mid v_2 < v_1] = v_1 \cdot \frac{v_1}{2} = \frac{v_1^2}{2}.$$
First price auction
We can use revenue equivalence to generate the correct symmetric bidding function in the first price auction. Suppose that in the first price auction, each player has the bidding function $b(v)$, where this function is unknown at this point.
The expected payment of player 1 in this game is then, as above, the price he pays times the probability of winning.
Now, a player simply pays what the player bids, and let's assume that players with higher values still win, so that the probability of winning is simply a player's value, as in the second price auction. We will later show that this assumption was correct. Again, a player pays nothing if he loses the auction. We then obtain
$$\text{expected payment of player 1} = v_1 \cdot b(v_1).$$
By the Revenue Equivalence principle, we can equate this expression to the revenue of the second-price auction that we calculated above:
From this, we can infer the bidding function:
$$v_1 \cdot b(v_1) = \frac{v_1^2}{2} \quad\Longrightarrow\quad b(v_1) = \frac{v_1}{2}.$$
Note that with this bidding function, the player with the higher value still wins. We can show that this is the correct equilibrium bidding function in an additional way, by thinking about how a player should maximize his bid given that all other players are bidding using this bidding function. See the page on first-price sealed-bid auction.
All-pay auctions
Similarly, we know that the expected payment of player 1 in the second price auction is $\tfrac{v_1^2}{2}$, and this must be equal to the expected payment in the all-pay auction, i.e.
$$b_{\text{all-pay}}(v_1) = \frac{v_1^2}{2},$$
since in an all-pay auction a player pays his bid regardless of whether he wins.
Thus, the bidding function for each player in the all-pay auction is
$$b(v) = \frac{v^2}{2}.$$
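A small Monte Carlo check makes the equivalence tangible. The sketch below assumes two bidders with values drawn independently and uniformly from [0, 1] and uses the equilibrium bids derived above (truthful bidding in the second-price auction, b(v) = v/2 in the first-price auction, and b(v) = v²/2 in the all-pay auction); the sample size and random seed are arbitrary. All three formats come out with an expected revenue of about 1/3.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000
v = rng.uniform(0.0, 1.0, size=(n_trials, 2))    # two bidders, values iid U[0, 1]

second_price = np.min(v, axis=1)                  # winner pays the other bidder's value
first_price  = np.max(v / 2.0, axis=1)            # winner pays their bid b(v) = v/2
all_pay      = np.sum(v ** 2 / 2.0, axis=1)       # everyone pays their bid b(v) = v^2/2

for name, revenue in [("second-price", second_price),
                      ("first-price", first_price),
                      ("all-pay", all_pay)]:
    print(f"{name:13s} expected revenue ~ {revenue.mean():.4f}")   # all close to 1/3
```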
Implications
An important implication of the theorem is that any single-item auction which unconditionally gives the item to the highest bidder is going to have the same expected revenue. This means that, if we want to increase the auctioneer's revenue, the outcome function must be changed. One way to do this is to set a Reservation price on the item. This changes the Outcome function since now the item is not always given to the highest bidder. By carefully selecting the reservation price, an auctioneer can get a substantially higher expected revenue.
Limitations
The revenue-equivalence theorem breaks in some important cases:
When the players are risk-averse rather than risk-neutral as assumed above. In this case, it is known that first-price auctions generate more revenue than second-price auctions.
When the players' valuations are inter-dependent, e.g., if the valuations depend on some state of the world that is only partially known to the bidders (this is related to the Winner's curse). In this scenario, English auction generates more revenue than second-price auction, as it lets the bidders learn information from the bids of other players.
References
Auction theory
Mechanism design | Revenue equivalence | [
"Mathematics"
] | 2,103 | [
"Game theory",
"Mechanism design",
"Auction theory"
] |
12,534,519 | https://en.wikipedia.org/wiki/Amplitude%20adjusting | Amplitude adjusting (also referred to as amplitude control) enables the power control of electric loads operated with AC voltage. A representative application is the heating control of industrial high-temperature furnaces.
Functionality
In contrast to conventional phase-angle or full-wave control, amplitude control changes only the amplitude of the sinusoidal supply current. The level of the amplitude depends only on the power consumed; the sinusoidal waveform itself is not distorted.
Because current and voltage are in phase, only real power is drawn from the mains under amplitude control, so the current drawn from the mains is considerably lower than in phase-angle operation.
Advantages
The continuous current flow results in gentler operation of the heater elements, and consequently significantly longer lifetimes are achieved. Depending on the ambient conditions, the lifetime can be twice as long.
In particular, the surface damage that heater elements suffer at switching thresholds can be reduced.
Amplitude control eliminates the flicker effects and harmonics that are typical of thyristor units, so that the standard specifications of EN 61000-3-2 and EN 61000-3-3 are met.
Reactive power compensation is not required, reducing equipment costs.
Applications
Sinus units or IGBT power converters for power control of:
Resistance heatings
Silicon carbide (SC) - heater elements
Molybdenum disilicide (MoSi2) - heater elements
Infralight radiators
Literature
Manfred Schleicher, Winfried Schneider: Electronic power units. (Download as PDF)
Electric power | Amplitude adjusting | [
"Physics",
"Engineering"
] | 322 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
1,777,422 | https://en.wikipedia.org/wiki/Faraday%20cup | A Faraday cup is a metal (conductive) cup designed to catch charged particles. The resulting current can be measured and used to determine the number of ions or electrons hitting the cup. The Faraday cup was named after Michael Faraday who first theorized ions around 1830.
Examples of devices which use Faraday cups include space probes (Voyager 1, & 2, Parker Solar Probe, etc.) and mass spectrometers. Faraday cups can also be used to measure charged aerosol particles.
Principle of operation
When a beam or packet of ions or electrons (e.g. from an electron beam) hits the metallic body of the cup, the apparatus gains a small net charge. The cup can then be discharged to measure a small current proportional to the charge carried by the impinging ions or electrons. By measuring the electric current (the number of electrons flowing through the circuit per second) in the cup, the number of charges can be determined. For a continuous beam of ions (assumed to be singly charged) or electrons, the total number N hitting the cup per unit time (in seconds) is
N = I / e,
where I is the measured current (in amperes) and e is the elementary charge (1.60 × 10⁻¹⁹ C). Thus, a measured current of one nanoamp (10⁻⁹ A) corresponds to about 6 billion singly charged particles striking the Faraday cup each second.
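In code, the conversion from measured current to particle rate is a one-liner; the following minimal Python sketch reproduces the 1 nA example above.

```python
E_CHARGE = 1.602e-19  # elementary charge in coulombs

def particles_per_second(current_amps, charge_state=1):
    """Number of ions or electrons striking the cup per second, N = I / (z e)."""
    return current_amps / (charge_state * E_CHARGE)

print(f"{particles_per_second(1e-9):.2e} singly charged particles/s at 1 nA")
# -> about 6.2e9, i.e. roughly 6 billion particles per second
```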
Faraday cups are not as sensitive as electron multiplier detectors, but are highly regarded for accuracy because of the direct relation between the measured current and number of ions.
In plasma diagnostics
The Faraday cup uses a physical principle according to which the electrical charges delivered to the inner surface of a hollow conductor are redistributed around its outer surface due to mutual self-repelling of charges of the same sign – a phenomenon discovered by Faraday.
The conventional Faraday cup is applied for measurements of ion (or electron) flows from plasma boundaries. It comprises a metallic cylindrical receiver-cup – 1 (Fig. 1) closed with, and insulated from, a washer-type metallic electron-suppressor lid – 2 provided with a round axial through-hole (aperture) with a surface area . Both the receiver cup and the electron-suppressor lid are enveloped in, and insulated from, a grounded cylindrical shield – 3 having an axial round hole coinciding with the hole in the electron-suppressor lid – 2. The electron-suppressor lid is connected by a 50 Ω RF cable to a source of variable DC voltage . The receiver-cup is connected by a 50 Ω RF cable through the load resistor to a sweep generator producing saw-tooth pulses . The electric capacitance is formed by the capacitance of the receiver-cup – 1 to the grounded shield – 3 and the capacitance of the RF cable. The signal taken across the load resistor enables an observer to acquire the I-V characteristic of the Faraday cup, which can be observed and recorded with an oscilloscope. Proper operating conditions are (due to possible potential sag) and , where is the ion free path.
In Fig. 1: 1 – cup-receiver, metal (stainless steel). 2 – electron-suppressor lid, metal (stainless steel). 3 – grounded shield, metal (stainless steel). 4 – insulator (teflon, ceramic). – capacity of Faraday cup. – load resistor.
Thus we measure the sum of the electric currents through the load resistor : (Faraday cup current) plus the current induced through the capacitor by the saw-type voltage of the sweep-generator: The current component can be measured at the absence of the ion flow and can be subtracted further from the total current measured with plasma to obtain the actual Faraday cup I-V characteristic for processing. All of the Faraday cup elements and their assembly that interact with plasma are fabricated usually of temperature-resistant materials (often these are stainless steel and teflon or ceramic for insulators). For processing of the Faraday cup I-V characteristic, we are going to assume that the Faraday cup is installed far enough away from an investigated plasma source where the flow of ions could be considered as the flow of particles with parallel velocities directed exactly along the Faraday cup axis. In this case, the elementary particle current corresponding to the ion density differential in the range of velocities between and of ions flowing in through operating aperture of the electron-suppressor can be written in the form
where
is elementary charge, is the ion charge state, and is the one-dimensional ion velocity distribution function. Therefore, the ion current at the ion-decelerating voltage of the Faraday cup can be calculated by integrating Eq. () after substituting Eq. (),
where the lower integration limit is defined from the equation where is the velocity of the ion stopped by the decelerating potential , and is the ion mass. Thus Eq. () represents the I-V characteristic of the Faraday cup. Differentiating Eq. () with respect to , one can obtain the relation
where the value is an invariable constant for each measurement. Therefore, the average velocity of ions arriving into the Faraday cup and their average energy can be calculated (under the assumption that we operate with a single type of ion) by the expressions
where is the ion mass in atomic units. The ion concentration in the ion flow at the Faraday cup vicinity can be calculated by the formula
which follows from Eq. () at ,
and from the conventional condition for distribution function normalizing
Fig. 2 illustrates the I-V characteristic and its first derivative of the Faraday cup with installed at output of the Inductively coupled plasma source powered with RF 13.56 MHz and operating at 6 mTorr of H2. The value of the electron-suppressor voltage (accelerating the ions) was set experimentally at , near the point of suppression of the secondary electron emission from the inner surface of the Faraday cup.
Error sources
The counting of charges collected per unit time is impacted by two error sources: 1) the emission of low-energy secondary electrons from the surface struck by the incident charge and 2) backscattering (~180 degree scattering) of the incident particle, which causes it to leave the collecting surface, at least temporarily. Especially with electrons, it is fundamentally impossible to distinguish between a fresh new incident electron and one that has been backscattered or even a fast secondary electron.
See also
Nanocoulombmeter
Electron multiplier
Microchannel plate detector
Daly detector
Faraday cup electrometer
Faraday cage
Faraday constant
SWEAP
References
External links
Detecting Ions in Mass Spectrometers with the Faraday Cup By Kenneth L. Busch
Mass spectrometry
Measuring instruments
Plasma diagnostics | Faraday cup | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,401 | [
"Spectrum (physical sciences)",
"Plasma physics",
"Instrumental analysis",
"Mass",
"Measuring instruments",
"Plasma diagnostics",
"Mass spectrometry",
"Matter"
] |
1,777,481 | https://en.wikipedia.org/wiki/Clapboard | Clapboard (), also called bevel siding, lap siding, and weatherboard, with regional variation in the definition of those terms, is wooden siding of a building in the form of horizontal boards, often overlapping.
Clapboard, in modern American usage, is a word for long, thin boards used to cover walls and (formerly) roofs of buildings. Historically, it has also been called clawboard and cloboard. In the United Kingdom, Australia and New Zealand, the term weatherboard is always used.
An older meaning of "clapboard" is small split pieces of oak imported from Germany for use as barrel staves, and the name is a partial translation (from , "to fit") of Middle Dutch and related to German .
Types
Riven
Clapboards were originally riven radially by hand producing triangular or "feather-edged" sections, attached thin side up and overlapped thick over thin to shed water.
Radially sawn
Later, the boards were radially sawn in a type of sawmill called a clapboard mill, producing vertical-grain clapboards. The more commonly used boards in New England are vertical-grain boards. Depending on the diameter of the log, cuts are made from deep along the full length of the log. Each time the log turns for the next cut, it is rotated until it has turned 360°. This gives the radially sawn clapboard its taper and true vertical grain.
Flat-sawn
Flat-grain clapboards are cut tangent to the annual growth rings of the tree. As this technique was common in most parts of the British Isles, it was carried by immigrants to their colonies in the Americas and in Australia and New Zealand. Flat-sawn wood cups more and does not hold paint as well as radially sawn wood.
Chamferboard
Chamferboards are an Australian form of weatherboarding using tongue-and-groove joints to link the boards together to give a flatter external appearance than regular angled weatherboards.
Finger jointed
Some modern clapboards are made up of shorter pieces of wood finger jointed together with an adhesive.
Wood species
In North America clapboards were historically made of split oak, pine and spruce. Modern clapboards are available in red cedar and pine.
In some areas, clapboards were traditionally left as raw wood, relying upon good air circulation and the use of 'semi-hardwoods' to keep the boards from rotting. These boards eventually go grey as the tannins are washed out from the wood. More recently clapboard has been tarred or painted—traditionally black or white due to locally occurring minerals or pigments. In modern clapboard these colors remain popular, but with a hugely wider variety due to chemical pigments and stains.
Clapboard houses may be found in most parts of the British Isles, and the style may be part of all types of traditional building, from cottages to windmills, shops to workshops, as well as many others.
In New Zealand, clapboard housing dominates buildings before 1960. Clapboard, with a corrugated iron roof, was found to be a cost-effective building style. After the big earthquakes of 1855 and 1931, wooden buildings were perceived as being less vulnerable to damage. Clapboard is always referred to as weatherboard in New Zealand.
Newer, cheaper designs often imitate the form of clapboard construction as siding made of vinyl (uPVC), aluminum, fiber cement, or other man-made materials. These materials can provide a lightweight alternative to wooden cladding.
See also
Clinker (boat building)
Shiplap
Siding § Wood siding
Tongue and groove
References
External links
Research report containing photos of a clapboard roof in Virginia, U.S.A.
Building materials
House styles
Wood products | Clapboard | [
"Physics",
"Engineering"
] | 760 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
1,778,123 | https://en.wikipedia.org/wiki/Vapour%20density | Vapour density is the density of a vapour in relation to that of hydrogen. It may be defined as mass of a certain volume of a substance divided by mass of same volume of hydrogen.
vapour density = mass of n molecules of gas / mass of n molecules of hydrogen gas .
vapour density = molar mass of gas / molar mass of H2
vapour density = molar mass of gas / 2.01568
vapour density ≈ ½ × molar mass
(and thus: molar mass = ~2 × vapour density)
For example, the vapour density of a mixture of NO2 and N2O4 is 38.3. Vapour density is a dimensionless quantity.
Vapour density = density of gas / density of hydrogen (H2)
Alternative definition
In many web sources, particularly in relation to safety considerations at commercial and industrial facilities in the U.S., vapour density is defined with respect to air, not hydrogen. Air is given a vapour density of one. For this use, air has a molecular weight of 28.97 atomic mass units, and all other gas and vapour molecular weights are divided by this number to derive their vapour density. For example, acetone has a vapour density of 2 in relation to air, meaning that acetone vapour is twice as heavy as air. This can be seen by dividing the molecular weight of acetone, 58.1, by that of air, 28.97, which gives approximately 2.
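Both definitions are simple ratios of molar masses; the following short Python sketch computes them (the CO2 value is an additional illustrative example, not from the text).

```python
M_H2 = 2.016    # molar mass of hydrogen gas, g/mol
M_AIR = 28.97   # mean molar mass of air, g/mol

def vapour_density_h2(molar_mass):
    """Classical definition: density relative to hydrogen (roughly molar_mass / 2)."""
    return molar_mass / M_H2

def vapour_density_air(molar_mass):
    """Safety-data definition: density relative to air (air itself = 1)."""
    return molar_mass / M_AIR

print(f"acetone vs air: {vapour_density_air(58.08):.2f}")   # ~2.0, heavier than air
print(f"CO2 vs air:     {vapour_density_air(44.01):.2f}")   # ~1.5, also sinks in air
print(f"CO2 vs H2:      {vapour_density_h2(44.01):.1f}")    # ~21.8
```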
With this definition, the vapour density would indicate whether a gas is denser (greater than one) or less dense (less than one) than air. The density has implications for container storage and personnel safety—if a container can release a dense gas, its vapour could sink and, if flammable, collect until it is at a concentration sufficient for ignition. Even if not flammable, it could collect in the lower floor or level of a confined space and displace air, possibly presenting an asphyxiation hazard to individuals entering the lower part of that space.
See also
Relative density (also known as specific gravity)
Victor Meyer apparatus
References
Density
Gases | Vapour density | [
"Physics",
"Chemistry",
"Mathematics"
] | 440 | [
"Fluid dynamics stubs",
"Gases",
"Physical quantities",
"Quantity",
"Mass",
"Phases of matter",
"Statistical mechanics",
"Density",
"Wikipedia categories named after physical quantities",
"Matter",
"Fluid dynamics"
] |
1,780,425 | https://en.wikipedia.org/wiki/Clausius%E2%80%93Clapeyron%20relation | The Clausius–Clapeyron relation, in chemical thermodynamics, specifies the temperature dependence of pressure, most importantly vapor pressure, at a discontinuous phase transition between two phases of matter of a single constituent. It is named after Rudolf Clausius and Benoît Paul Émile Clapeyron. However, this relation was in fact originally derived by Sadi Carnot in his Reflections on the Motive Power of Fire, which was published in 1824 but largely ignored until it was rediscovered by Clausius, Clapeyron, and Lord Kelvin decades later. Kelvin said of Carnot's argument that "nothing in the whole range of Natural Philosophy is more remarkable than the establishment of general laws by such a process of reasoning."
Kelvin and his brother James Thomson confirmed the relation experimentally in 1849–50, and it was historically important as a very early successful application of theoretical thermodynamics. Its relevance to meteorology and climatology is the increase of the water-holding capacity of the atmosphere by about 7% for every 1 °C (1.8 °F) rise in temperature.
Definition
Exact Clapeyron equation
On a pressure–temperature (P–T) diagram, for any phase change the line separating the two phases is known as the coexistence curve. The Clapeyron relation gives the slope of the tangents to this curve. Mathematically,
dP/dT = L / (T Δv) = Δs / Δv,
where dP/dT is the slope of the tangent to the coexistence curve at any point, L is the molar change in enthalpy (latent heat, the amount of energy absorbed in the transformation), T is the temperature, Δv is the molar volume change of the phase transition, and Δs is the molar entropy change of the phase transition. Alternatively, the specific values may be used instead of the molar ones.
Clausius–Clapeyron equation
The Clausius–Clapeyron equation applies to vaporization of liquids where the vapor follows the ideal gas law (with ideal gas constant R) and the liquid volume is neglected as being much smaller than the vapor volume V. It is often used to calculate the vapor pressure of a liquid.
The equation expresses this in a more convenient form just in terms of the latent heat, valid for moderate temperatures and pressures:
dP/dT = L P / (R T²).
Derivations
Derivation from state postulate
Using the state postulate, take the molar entropy for a homogeneous substance to be a function of molar volume and temperature .
The Clausius–Clapeyron relation describes a phase transition in a closed system composed of two contiguous phases, condensed matter and ideal gas, of a single substance, in mutual thermodynamic equilibrium, at constant temperature and pressure. Therefore,
Using the appropriate Maxwell relation gives
where is the pressure. Since pressure and temperature are constant, the derivative of pressure with respect to temperature does not change. Therefore, the partial derivative of molar entropy may be changed into a total derivative
and the total derivative of pressure with respect to temperature may be factored out when integrating from an initial phase to a final phase , to obtain
where and are respectively the change in molar entropy and molar volume. Given that a phase change is an internally reversible process, and that our system is closed, the first law of thermodynamics holds:
where is the internal energy of the system. Given constant pressure and temperature (during a phase change) and the definition of molar enthalpy , we obtain
Given constant pressure and temperature (during a phase change), we obtain
Substituting the definition of molar latent heat gives
Substituting this result into the pressure derivative given above (), we obtain
This result (also known as the Clapeyron equation) equates the slope of the coexistence curve to the function of the molar latent heat , the temperature , and the change in molar volume . Instead of the molar values, corresponding specific values may also be used.
Derivation from Gibbs–Duhem relation
Suppose two phases, and , are in contact and at equilibrium with each other. Their chemical potentials are related by
Furthermore, along the coexistence curve,
One may therefore use the Gibbs–Duhem relation
dμ = M (−s dT + v dP)
(where s is the specific entropy, v is the specific volume, and M is the molar mass) to obtain
Rearrangement gives
from which the derivation of the Clapeyron equation continues as in the previous section.
Ideal gas approximation at low temperatures
When the phase transition of a substance is between a gas phase and a condensed phase (liquid or solid), and occurs at temperatures much lower than the critical temperature of that substance, the specific volume of the gas phase v_g greatly exceeds that of the condensed phase v_c. Therefore, one may approximate
Δv ≈ v_g
at low temperatures. If the pressure is also low, the gas may be approximated by the ideal gas law, so that
v_g = R T / P,
where P is the pressure, R is the specific gas constant, and T is the temperature. Substituting into the Clapeyron equation
dP/dT = L / (T Δv),
we can obtain the Clausius–Clapeyron equation
dP/dT = L P / (R T²)
for low temperatures and pressures, where L is the specific latent heat of the substance. Instead of the specific, corresponding molar values (i.e. L in kJ/mol and R = 8.31 J/(mol⋅K)) may also be used.
Let (P₁, T₁) and (P₂, T₂) be any two points along the coexistence curve between two phases 1 and 2. In general, L varies between any two such points, as a function of temperature. But if L is approximated as constant,
ln(P₂/P₁) = −(L/R) (1/T₂ − 1/T₁),
or
P₂ = P₁ exp(−(L/R) (1/T₂ − 1/T₁)).
These last equations are useful because they relate equilibrium or saturation vapor pressure and temperature to the latent heat of the phase change without requiring specific-volume data. For instance, for water near its normal boiling point, with a molar enthalpy of vaporization of 40.7 kJ/mol and R = 8.31 J/(mol⋅K), the slope of the vapor-pressure curve is dP/dT ≈ 3.5 kPa/K.
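A minimal Python sketch of the integrated equation applied to water near its boiling point, treating the 40.7 kJ/mol enthalpy quoted above as constant (the stated approximation); the temperatures chosen are illustrative.

```python
import math

R = 8.314          # J/(mol*K), molar gas constant
L_VAP = 40_700.0   # J/mol, molar enthalpy of vaporization of water (from the text)

def vapor_pressure(T, T_ref=373.15, P_ref=101_325.0):
    """Saturation pressure from the integrated Clausius-Clapeyron equation,
    P2 = P1 * exp(-(L/R) * (1/T2 - 1/T1)), with L assumed constant."""
    return P_ref * math.exp(-(L_VAP / R) * (1.0 / T - 1.0 / T_ref))

for T_c in (80, 90, 100, 110):
    print(f"{T_c:3d} C: {vapor_pressure(T_c + 273.15) / 1000:6.1f} kPa")
# Values agree with tabulated saturation pressures to within a few percent.
```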
Clapeyron's derivation
In the original work by Clapeyron, the following argument is advanced.
Clapeyron considered a Carnot process of saturated water vapor with horizontal isobars. As the pressure is a function of temperature alone, the isobars are also isotherms. If the process involves an infinitesimal amount of water, , and an infinitesimal difference in temperature , the heat absorbed is
and the corresponding work is
where is the difference between the volumes of in the liquid phase and vapor phases.
The ratio is the efficiency of the Carnot engine, . Substituting and rearranging gives
where lowercase denotes the change in specific volume during the transition.
Applications
Chemistry and chemical engineering
For transitions between a gas and a condensed phase with the approximations described above, the expression may be rewritten as
ln(P₂/P₁) = −(L/R) (1/T₂ − 1/T₁),
where P₁ and P₂ are the pressures at temperatures T₁ and T₂ respectively and R is the ideal gas constant. For a liquid–gas transition, L is the molar latent heat (or molar enthalpy) of vaporization; for a solid–gas transition, L is the molar latent heat of sublimation. If the latent heat is known, then knowledge of one point on the coexistence curve, for instance (1 bar, 373 K) for water, determines the rest of the curve. Conversely, the relationship between ln P and 1/T is linear, and so linear regression is used to estimate the latent heat.
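A short Python sketch of that regression; the (T, P) data below are synthetic points generated from the same relation, purely to illustrate how the latent heat is recovered from the slope of ln P against 1/T.

```python
import numpy as np

R = 8.314  # J/(mol*K)

# Synthetic (temperature K, vapour pressure Pa) pairs; in practice these would
# come from tabulated measurements along the coexistence curve.
T = np.array([343.15, 353.15, 363.15, 373.15])
P = 101_325.0 * np.exp(-(40_700.0 / R) * (1.0 / T - 1.0 / 373.15))

# ln P is linear in 1/T with slope -L/R, so a least-squares fit recovers L.
slope, intercept = np.polyfit(1.0 / T, np.log(P), 1)
latent_heat = -slope * R
print(f"estimated molar latent heat: {latent_heat / 1000:.1f} kJ/mol")  # ~40.7
```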
Meteorology and climatology
Atmospheric water vapor drives many important meteorologic phenomena (notably, precipitation), motivating interest in its dynamics. The Clausius–Clapeyron equation for water vapor under typical atmospheric conditions (near standard temperature and pressure) is
de_s/dT = L_v(T) e_s / (R_v T²),
where e_s is the saturation vapour pressure, T is the temperature, L_v is the specific latent heat of evaporation of water, and R_v is the gas constant of water vapour.
The temperature dependence of the latent heat can be neglected in this application. The August–Roche–Magnus formula provides a solution under that approximation:
e_s(T) = 6.1094 exp(17.625 T / (T + 243.04)),
where e_s is in hPa, and T is in degrees Celsius (whereas everywhere else on this page, T is an absolute temperature, e.g. in kelvins).
This is also sometimes called the Magnus or Magnus–Tetens approximation, though this attribution is historically inaccurate. But see also the discussion of the accuracy of different approximating formulae for saturation vapour pressure of water.
Under typical atmospheric conditions, the denominator of the exponent depends weakly on T (for which the unit is degree Celsius). Therefore, the August–Roche–Magnus equation implies that saturation water vapor pressure changes approximately exponentially with temperature under typical atmospheric conditions, and hence the water-holding capacity of the atmosphere increases by about 7% for every 1 °C rise in temperature.
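A small Python check of the quoted 7% figure; the Magnus-type constants below are one commonly used parameter set and should be treated as an assumption of this sketch.

```python
import math

def saturation_vapour_pressure_hpa(t_celsius):
    """August-Roche-Magnus approximation; the constants are one commonly used
    parameter set and are an assumption of this sketch."""
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

for t in (0, 10, 20, 30):
    growth = saturation_vapour_pressure_hpa(t + 1) / saturation_vapour_pressure_hpa(t) - 1
    print(f"{t:2d} C -> {t + 1:2d} C: +{growth:.1%} saturation vapour pressure")
# Each 1 degree Celsius step raises the saturation pressure by roughly 6-7 %.
```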
Example
One of the uses of this equation is to determine if a phase transition will occur in a given situation. Consider the question of how much pressure is needed to melt ice at a temperature below 0 °C. Note that water is unusual in that its change in volume upon melting is negative. We can assume
L = 334 kJ/kg, T = 273 K, and Δv = −9.05 × 10⁻⁵ m³/kg,
and substituting in
dP/dT = L / (T Δv),
we obtain
dP/dT ≈ −13.5 MPa/K.
To provide a rough example of how much pressure this is, to melt ice at −7 °C (the temperature many ice skating rinks are set at) would require balancing a small car (mass ~ 1000 kg) on a thimble (area ~ 1 cm2). This shows that ice skating cannot be simply explained by pressure-caused melting point depression, and in fact the mechanism is quite complex.
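A rough Python estimate of the numbers behind this example, using standard handbook values (assumed here) for the latent heat of fusion and the specific volumes of ice and water.

```python
# Rough estimate of the pressure needed to melt ice at -7 C via the Clapeyron
# slope dP/dT = L / (T * dv). Property values below are standard handbook
# figures and are assumptions of this sketch.
L_FUSION = 334_000.0          # J/kg, specific latent heat of fusion of ice
DV = 1.0002e-3 - 1.0907e-3    # m^3/kg, v_water - v_ice (negative: ice shrinks on melting)
T = 273.15                    # K

slope = L_FUSION / (T * DV)   # Pa per kelvin, about -13.5 MPa/K
delta_p = slope * (-7.0)      # pressure rise needed to shift the melting point to -7 C
print(f"dP/dT = {slope / 1e6:.1f} MPa/K, required overpressure = {delta_p / 1e6:.0f} MPa")

# Sanity check against the thimble example: roughly 1000 kg resting on 1 cm^2.
print(f"equivalent load on 1 cm^2: {delta_p * 1e-4 / 9.81:.0f} kg")
```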
Second derivative
While the Clausius–Clapeyron relation gives the slope of the coexistence curve, it does not provide any information about its curvature or second derivative. The second derivative of the coexistence curve of phases 1 and 2 is given by
where subscripts 1 and 2 denote the different phases, is the specific heat capacity at constant pressure, is the thermal expansion coefficient, and is the isothermal compressibility.
See also
Van 't Hoff equation
Antoine equation
Lee–Kesler method
References
Bibliography
Notes
1849 in science
1850 in science
Thermodynamic equations
Atmospheric thermodynamics
Engineering thermodynamics | Clausius–Clapeyron relation | [
"Physics",
"Chemistry",
"Engineering"
] | 1,979 | [
"Thermodynamic equations",
"Equations of physics",
"Engineering thermodynamics",
"Thermodynamics",
"Mechanical engineering"
] |
1,780,823 | https://en.wikipedia.org/wiki/AC%20power | In an electric circuit, instantaneous power is the time rate of flow of energy past a given point of the circuit. In alternating current circuits, energy storage elements such as inductors and capacitors may result in periodic reversals of the direction of energy flow. Its SI unit is the watt.
The portion of instantaneous power that, averaged over a complete cycle of the AC waveform, results in net transfer of energy in one direction is known as instantaneous active power, and its time average is known as active power or real power. The portion of instantaneous power that results in no net transfer of energy but instead oscillates between the source and load in each cycle due to stored energy is known as instantaneous reactive power, and its amplitude is the absolute value of reactive power.
Active, reactive, apparent, and complex power in sinusoidal steady-state
In a simple alternating current (AC) circuit consisting of a source and a linear time-invariant load, both the current and voltage are sinusoidal at the same frequency. If the load is purely resistive, the two quantities reverse their polarity at the same time. Hence, the instantaneous power, given by the product of voltage and current, is always positive, such that the direction of energy flow does not reverse and always is toward the resistor. In this case, only active power is transferred.
If the load is purely reactive, then the voltage and current are 90 degrees out of phase. For two quarters of each cycle, the product of voltage and current is positive, but for the other two quarters, the product is negative, indicating that on average, exactly as much energy flows into the load as flows back out. There is no net energy flow over each half cycle. In this case, only reactive power flows: There is no net transfer of energy to the load; however, electrical power does flow along the wires and returns by flowing in reverse along the same wires. The current required for this reactive power flow dissipates energy in the line resistance, even if the ideal load device consumes no energy itself. Practical loads have resistance as well as inductance, or capacitance, so both active and reactive powers will flow to normal loads.
Apparent power is the product of the RMS values of voltage and current. Apparent power is taken into account when designing and operating power systems, because although the current associated with reactive power does no work at the load, it still must be supplied by the power source. Conductors, transformers and generators must be sized to carry the total current, not just the current that does useful work. Insufficient reactive power can depress voltage levels on an electrical grid and, under certain operating conditions, collapse the network (a blackout). Another consequence is that adding the apparent power for two loads will not accurately give the total power unless they have the same phase difference between current and voltage (the same power factor).
Conventionally, capacitors are treated as if they generate reactive power, and inductors are treated as if they consume it. If a capacitor and an inductor are placed in parallel, then the currents flowing through the capacitor and the inductor tend to cancel rather than add. This is the fundamental mechanism for controlling the power factor in electric power transmission; capacitors (or inductors) are inserted in a circuit to partially compensate for reactive power 'consumed' ('generated') by the load. Purely capacitive circuits supply reactive power with the current waveform leading the voltage waveform by 90 degrees, while purely inductive circuits absorb reactive power with the current waveform lagging the voltage waveform by 90 degrees. The result of this is that capacitive and inductive circuit elements tend to cancel each other out.
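A small numerical illustration of this cancellation using complex impedances; the component values are illustrative, and the capacitor is deliberately sized so that the two reactive currents cancel.

```python
import cmath, math

f = 50.0                       # Hz, illustrative mains frequency
w = 2 * math.pi * f
V = 230 + 0j                   # RMS voltage phasor, taken as the phase reference

L = 0.1                        # H, illustrative inductor
C = 1 / (w ** 2 * L)           # F, chosen so the reactive currents cancel exactly

Z_L = complex(0, w * L)        # inductive impedance, +j*omega*L
Z_C = 1 / complex(0, w * C)    # capacitive impedance, -j/(omega*C)

I_L = V / Z_L                  # lags V by 90 degrees
I_C = V / Z_C                  # leads V by 90 degrees

print(f"inductor current : {abs(I_L):.2f} A at {math.degrees(cmath.phase(I_L)):+.0f} deg")
print(f"capacitor current: {abs(I_C):.2f} A at {math.degrees(cmath.phase(I_C)):+.0f} deg")
print(f"total from source: {abs(I_L + I_C):.6f} A")   # ~0: the reactive currents cancel
```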
Engineers use the following terms to describe energy flow in a system (and assign each of them a different unit to differentiate between them):
Active power, P, or real power: watt (W);
Reactive power, Q: volt-ampere reactive (var);
Complex power, S: volt-ampere (VA);
Apparent power, |S|: the magnitude of complex power S: volt-ampere (VA);
Phase of voltage relative to current, φ: the angle of difference (in degrees) between current and voltage; current lagging voltage corresponds to a quadrant I vector, and current leading voltage to a quadrant IV vector.
These are all denoted in the adjacent diagram (called a power triangle).
In the diagram, P is the active power, Q is the reactive power (in this case positive), S is the complex power and the length of S is the apparent power. Reactive power does not do any work, so it is represented as the imaginary axis of the vector diagram. Active power does do work, so it is the real axis.
The unit for power is the watt (symbol: W). Apparent power is often expressed in volt-amperes (VA) since it is the product of RMS voltage and RMS current. The unit for reactive power is var, which stands for volt-ampere reactive. Since reactive power transfers no net energy to the load, it is sometimes called "wattless" power. It does, however, serve an important function in electrical grids and its lack has been cited as a significant factor in the Northeast blackout of 2003. Understanding the relationship among these three quantities lies at the heart of understanding power engineering. The mathematical relationship among them can be represented by vectors or expressed using complex numbers, S = P + j Q (where j is the imaginary unit).
Calculations and equations in sinusoidal steady-state
The formula for complex power (units: VA) in phasor form is:
S = V I*,
where V denotes voltage in phasor form, with the amplitude as RMS, and I denotes current in phasor form, with the amplitude as RMS. Also by convention, the complex conjugate of I is used, which is denoted I* (or Ī), rather than I itself. This is done because otherwise using the product V I to define S would result in a quantity that depends on the reference angle chosen for V or I, but defining S as V I* results in a quantity that doesn't depend on the reference angle and allows to relate S to P and Q.
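A minimal Python sketch of this phasor arithmetic; the 230 V and 10 A RMS phasors and the 30° lag are illustrative values, not from the article.

```python
import cmath, math

# Illustrative RMS phasors: 230 V reference voltage, 10 A current lagging by 30 degrees.
V = cmath.rect(230.0, 0.0)
I = cmath.rect(10.0, math.radians(-30.0))

S = V * I.conjugate()          # complex power, S = V I*
P = S.real                     # active power (W)
Q = S.imag                     # reactive power (var), positive for a lagging (inductive) load
apparent = abs(S)              # apparent power (VA)
power_factor = P / apparent

print(f"P = {P:.0f} W, Q = {Q:.0f} var, |S| = {apparent:.0f} VA, pf = {power_factor:.3f}")
```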
Other forms of complex power (units in volt-amps, VA) are derived from Z, the load impedance (units in ohms, Ω).
S = |I|² Z = |V|² / Z*.
Consequentially, with reference to the power triangle, real power (units in watts, W) is derived as:
P = |S| cos φ.
For a purely resistive load, real power can be simplified to:
P = |V|² / R = |I|² R.
R denotes resistance (units in ohms, Ω) of the load.
Reactive power (units in volts-amps-reactive, var) is derived as:
Q = |S| sin φ.
For a purely reactive load, reactive power can be simplified to:
Q = |V|² / X = |I|² X,
where X denotes reactance (units in ohms, Ω) of the load.
Combining, the complex power (units in volt-amps, VA) is back-derived as
S = P + jQ,
and the apparent power (units in volt-amps, VA) as
|S| = √(P² + Q²).
These are simplified diagrammatically by the power triangle.
Power factor
The ratio of active power to apparent power in a circuit is called the power factor. For two systems transmitting the same amount of active power, the system with the lower power factor will have higher circulating currents due to energy that returns to the source from energy storage in the load. These higher currents produce higher losses and reduce overall transmission efficiency. A lower power factor circuit will have a higher apparent power and higher losses for the same amount of active power. The power factor is 1.0 when the voltage and current are in phase. It is zero when the current leads or lags the voltage by 90 degrees. When the voltage and current are 180 degrees out of phase, the power factor is negative one, and the load is feeding energy into the source (an example would be a home with solar cells on the roof that feed power into the power grid when the sun is shining). Power factors are usually stated as "leading" or "lagging" to show the sign of the phase angle of current with respect to voltage. Voltage is designated as the base to which current angle is compared, meaning that current is thought of as either "leading" or "lagging" voltage. Where the waveforms are purely sinusoidal, the power factor is the cosine of the phase angle (φ) between the current and voltage sinusoidal waveforms. Equipment data sheets and nameplates will often abbreviate power factor as "cos φ" for this reason.
Example: The phase angle between voltage and current is 45.6°. The power factor is then cos 45.6° ≈ 0.700, and the apparent power is the active power divided by this factor, i.e. about 1.43 times the active power. This example illustrates the concept of power dissipation in an AC circuit.
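A quick check of the example above in Python; the 700 W active power used here is purely an illustrative value, since the text does not state the actual figure.

```python
import math

phi_deg = 45.6   # phase angle from the example above
P = 700.0        # W, illustrative value (the text does not give the actual figure)

power_factor = math.cos(math.radians(phi_deg))
apparent_power = P / power_factor

print(f"power factor   = {power_factor:.3f}")       # ~0.700
print(f"apparent power = {apparent_power:.0f} VA")  # ~1000 VA for a 700 W load
```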
For instance, a power factor of 0.68 means that only 68 percent of the total current supplied (in magnitude) is actually doing work; the remaining current does no work at the load. Power factor is also important at power-sector substations: grid operators typically require connected substations to maintain a minimum power factor, usually around 0.90 to 0.96 or higher, since the higher the power factor, the lower the losses.
Reactive power
In a direct current circuit, the power flowing to the load is proportional to the product of the current through the load and the potential drop across the load, and energy flows in one direction only, from the source to the load. The power associated with a capacitor or inductor is called reactive power; it arises from the AC behaviour of these elements. In AC power, the voltage and current both vary approximately sinusoidally. When there is inductance or capacitance in the circuit, the voltage and current waveforms do not line up perfectly. The power flow has two components – one component flows from source to load and can perform work at the load; the other portion, known as "reactive power", is due to the delay between voltage and current, known as the phase angle, and cannot do useful work at the load. It can be thought of as current that is arriving at the wrong time (too late or too early). To distinguish reactive power from active power, it is measured in units of "volt-amperes reactive", or var. These units can simplify to watts but are left as var to denote that they represent no actual work output.
Energy stored in capacitive or inductive elements of the network gives rise to reactive power flow. Reactive power flow strongly influences the voltage levels across the network. Voltage levels and reactive power flow must be carefully controlled to allow a power system to be operated within acceptable limits. A technique known as reactive compensation is used to reduce apparent power flow to a load by reducing reactive power supplied from transmission lines and providing it locally. For example, to compensate an inductive load, a shunt capacitor is installed close to the load itself. This allows all reactive power needed by the load to be supplied by the capacitor and not have to be transferred over the transmission lines. This practice saves energy because it reduces the amount of energy that is required to be produced by the utility to do the same amount of work. Additionally, it allows for more efficient transmission line designs using smaller conductors or fewer bundled conductors and optimizing the design of transmission towers.
Capacitive vs. inductive loads
Stored energy in the magnetic or electric field of a load device, such as a motor or capacitor, causes an offset between the current and the voltage waveforms. A capacitor is a device that stores energy in the form of an electric field. As current is driven through the capacitor, charge build-up causes an opposing voltage to develop across the capacitor. This voltage increases until some maximum dictated by the capacitor structure. In an AC network, the voltage across a capacitor is constantly changing. The capacitor opposes this change, causing the current to lead the voltage in phase. Capacitors are said to "source" reactive power, and thus to cause a leading power factor.
Induction machines are some of the most common types of loads in the electric power system today. These machines use inductors, or large coils of wire to store energy in the form of a magnetic field. When a voltage is initially placed across the coil, the inductor strongly resists this change in a current and magnetic field, which causes a time delay for the current to reach its maximum value. This causes the current to lag behind the voltage in phase. Inductors are said to "sink" reactive power, and thus to cause a lagging power factor. Induction generators can source or sink reactive power, and provide a measure of control to system operators over reactive power flow and thus voltage. Because these devices have opposite effects on the phase angle between voltage and current, they can be used to "cancel out" each other's effects. This usually takes the form of capacitor banks being used to counteract the lagging power factor caused by induction motors.
Reactive power control
Transmission connected generators are generally required to support reactive power flow. For example, on the United Kingdom transmission system, generators are required by the Grid Code Requirements to supply their rated power between the limits of 0.85 power factor lagging and 0.90 power factor leading at the designated terminals. The system operator will perform switching actions to maintain a secure and economical voltage profile while maintaining a reactive power balance equation:
The "system gain" is an important source of reactive power in the above power balance equation, which is generated by the capacitative nature of the transmission network itself. By making decisive switching actions in the early morning before the demand increases, the system gain can be maximized early on, helping to secure the system for the whole day. To balance the equation some pre-fault reactive generator use will be required. Other sources of reactive power that will also be used include shunt capacitors, shunt reactors, static VAR compensators and voltage control circuits.
Unbalanced sinusoidal polyphase systems
While active power and reactive power are well defined in any system, the definition of apparent power for unbalanced polyphase systems is considered to be one of the most controversial topics in power engineering. Originally, apparent power arose merely as a figure of merit. Major delineations of the concept are attributed to Stanley's Phenomena of Retardation in the Induction Coil (1888) and Steinmetz's Theoretical Elements of Engineering (1915). However, with the development of three phase power distribution, it became clear that the definition of apparent power and the power factor could not be applied to unbalanced polyphase systems. In 1920, a "Special Joint Committee of the AIEE and the National Electric Light Association" met to resolve the issue. They considered two definitions.
S = |S_a| + |S_b| + |S_c|,
that is, the arithmetic sum of the phase apparent powers; and
S = |S_a + S_b + S_c|,
that is, the magnitude of total three-phase complex power.
The 1920 committee found no consensus and the topic continued to dominate discussions. In 1932, another committee formed and once again failed to resolve the question. The transcripts of their discussions are the lengthiest and most controversial ever published by the AIEE. Further resolution of this debate did not come until the late 1990s.
A new definition based on symmetrical components theory was proposed in 1993 by Alexander Emanuel for unbalanced linear load supplied with asymmetrical sinusoidal voltages:
,
that is, the root of squared sums of line voltages multiplied by the root of squared sums of line currents.
denotes the positive sequence power:
denotes the positive sequence voltage phasor, and
denotes the positive sequence current phasor.
Real number formulas
A perfect resistor stores no energy; so current and voltage are in phase. Therefore, there is no reactive power and P = S (using the passive sign convention). Therefore, for a perfect resistor
P = S = V_rms I_rms = I_rms² R = V_rms² / R.
For a perfect capacitor or inductor, there is no net power transfer; so all power is reactive. Therefore, for a perfect capacitor or inductor:
P = 0 and Q = |S| = V_rms I_rms = I_rms² |X| = V_rms² / |X|,
where X is the reactance of the capacitor or inductor.
If X is defined as being positive for an inductor and negative for a capacitor, then the modulus signs can be removed from S and X to give
Q = I_rms² X = V_rms² / X.
Instantaneous power is defined as:
p(t) = v(t) · i(t),
where v(t) and i(t) are the time-varying voltage and current waveforms.
This definition is useful because it applies to all waveforms, whether they are sinusoidal or not. This is particularly useful in power electronics, where non-sinusoidal waveforms are common.
In general, engineers are interested in the active power averaged over a period of time, whether it is a low frequency line cycle or a high frequency power converter switching period. The simplest way to get that result is to take the integral of the instantaneous calculation over the desired period:
P = (1/T) ∫₀ᵀ v(t) · i(t) dt.
This method of calculating the average power gives the active power regardless of harmonic content of the waveform. In practical applications, this would be done in the digital domain, where the calculation becomes trivial when compared to the use of rms and phase to determine active power:
P = (1/n) Σ v[k] · i[k], summing over the n samples in the period.
Multiple frequency systems
Since an RMS value can be calculated for any waveform, apparent power can be calculated from this. For active power it would at first appear that it would be necessary to calculate many product terms and average all of them. However, looking at one of these product terms in more detail produces a very interesting result.
However, the time average of a function of the form cos(ωt + θ) is zero provided that ω is nonzero. Therefore, the only product terms that have a nonzero average are those where the frequency of voltage and current match. In other words, it is possible to calculate active (average) power by simply treating each frequency separately and adding up the answers. Furthermore, if the voltage of the mains supply is assumed to be a single frequency (which it usually is), this shows that harmonic currents are a bad thing. They will increase the RMS current (since there will be non-zero terms added) and therefore apparent power, but they will have no effect on the active power transferred. Hence, harmonic currents will reduce the power factor. Harmonic currents can be reduced by a filter placed at the input of the device. Typically this will consist of either just a capacitor (relying on parasitic resistance and inductance in the supply) or a capacitor-inductor network. An active power factor correction circuit at the input would generally reduce the harmonic currents further and maintain the power factor closer to unity.
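A numerical sketch of this point: adding a third-harmonic component to the current leaves the active power unchanged but raises the RMS current, and hence the apparent power, lowering the power factor. The voltage, current, and harmonic amplitudes are illustrative.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20_000, endpoint=False)   # one period of a 1 Hz "mains" cycle
v = np.sqrt(2) * 230 * np.sin(2 * np.pi * t)         # sinusoidal supply voltage

i_fund = np.sqrt(2) * 10 * np.sin(2 * np.pi * t)     # fundamental current, in phase
i_harm = np.sqrt(2) * 4 * np.sin(2 * np.pi * 3 * t)  # added third-harmonic current

for label, i in (("fundamental only", i_fund), ("with 3rd harmonic", i_fund + i_harm)):
    p_active = np.mean(v * i)                        # average of instantaneous power
    s_apparent = np.sqrt(np.mean(v ** 2)) * np.sqrt(np.mean(i ** 2))
    print(f"{label:18s}: P = {p_active:7.1f} W, S = {s_apparent:7.1f} VA, "
          f"pf = {p_active / s_apparent:.3f}")
```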
See also
War of the currents
Electric power transmission
Transformer
Mains electricity
Deformed power
References
External links
"AC Power Java Applet"
Electric power | AC power | [
"Physics",
"Engineering"
] | 3,889 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
1,781,347 | https://en.wikipedia.org/wiki/Fusible%20alloy | A fusible alloy is a metal alloy capable of being easily fused, i.e. easily meltable, at relatively low temperatures. Fusible alloys are commonly, but not necessarily, eutectic alloys.
Sometimes the term "fusible alloy" is used to describe alloys with a melting point below . Fusible alloys in this sense are used for solder.
Introduction
Fusible alloys are typically made from low melting metals.
There are 14 low melting metallic elements that are stable for practical handling. These are in 2 distinct groups:
The 5 alkali metals have a single s electron and melt between +181 °C (Li) and +28 °C (Cs);
The 9 poor metals have 10 d electrons and from none (Zn, Cd, Hg) to three (Bi) p electrons; they melt between −38 °C (Hg) and +419 °C (Zn).
From a practical view, low-melting alloys can be divided into the following categories:
Mercury-containing alloys
Only alkali metal-containing alloys
Gallium-containing alloys (but neither alkali metal nor mercury)
Only bismuth, lead, tin, cadmium, zinc, indium, and sometimes thallium-containing alloys
Other alloys (rarely used)
A practical reason here is that the chemical behaviour of the alkali metals is very distinct from that of the poor metals. Of the 9 poor metals, Hg (mp −38 °C) and Ga (mp +29 °C) each have their own distinct practical issues, and the remaining 7 poor metals, from In (mp +156 °C) to Zn (mp +419 °C), can be viewed together.
Of elements which might be viewed as related but do not share the distinct properties of poor metals:
Po is estimated to melt at 254 °C and might be a poor metal by its properties, but it is too radioactive (longest half-life about 125 years) for practical use;
At: the same reasoning applies as for Po;
Sb melts at 630 °C and is regarded as a semimetal rather than a poor metal;
Te is also regarded as a semimetal, not a poor metal;
of the other metals, the next lowest melting point is that of Pu, but its melting point of 640 °C leaves a 220-degree gap between Zn and Pu, thus making the "poor metals" from In to Zn a natural group.
Some reasonably well-known fusible alloys are Wood's metal, Field's metal, Rose metal, Galinstan, and NaK.
Applications
Melted fusible alloys can be used as coolants as they are stable under heating and can give much higher thermal conductivity than most other coolants; particularly with alloys made with a high thermal conductivity metal such as indium or sodium. Metals with low neutron cross-section are used for cooling nuclear reactors.
Such alloys are used for making the fusible plugs inserted in the furnace crowns of steam boilers, as a safeguard in the event of the water level being allowed to fall too low. When this happens, the plug, being no longer covered with water, is heated to such a temperature that it melts and allows the contents of the boiler to escape into the furnace. In automatic fire sprinklers, the orifice of each sprinkler is closed with a plug that is held in place by fusible metal, which melts and liberates the water when, owing to an outbreak of fire in the room, the temperature rises above a predetermined limit.
Bismuth on solidification expands by about 3.3% by volume. Alloys with at least half of bismuth display this property too. This can be used for mounting of small parts, e.g. for machining, as they will be tightly held.
Low-melting alloys and metallic elements
Well-known alloys
Other alloys
Starting with a table of component elements and selected binary and multiple systems ordered by melting point:
Then organized by practical group and alphabetic symbols of components:
Most of the pairwise phase diagrams of 2 component metal systems have data available for analysis, like at https://himikatus.ru/art/phase-diagr1/diagrams.php
Taking the pairwise alloys of the 7 poor metals other than Hg and Ga, and ordering the pairs (21 in total) alphabetically by the symbols of these elements – Bi, Cd, In, Pb, Sn, Tl, Zn – the systems are as follows:
Bi-Cd https://himikatus.ru/art/phase-diagr1/Bi-Cd.php simple eutectic (Bi at 271 C, Cd at 321, eutectic at 146)
Bi-In https://himikatus.ru/art/phase-diagr1/Bi-In.php has ordered phases, eutectic at +72 - in table above
Bi-Pb https://himikatus.ru/art/phase-diagr1/Bi-Pb.php eutectic at +125 - in table above
Bi-Sn https://himikatus.ru/art/phase-diagr1/Bi-Sn.php eutectic at +139 - in table above
Bi-Tl https://himikatus.ru/art/phase-diagr1/Bi-Tl.php an intermetallic alloy and the lower melting eutectic at +188
Bi-Zn https://himikatus.ru/art/phase-diagr1/Bi-Zn.php eutectic at +255
Cd-In https://himikatus.ru/art/phase-diagr1/Cd-In.php eutectic at +128
Cd-Pb https://himikatus.ru/art/phase-diagr1/Cd-Pb.php eutectic at +248
Cd-Sn https://himikatus.ru/art/phase-diagr1/Cd-Sn.php eutectic at +176
Cd-Tl https://himikatus.ru/art/phase-diagr1/Cd-Tl.php eutectic at +204
Cd-Zn https://himikatus.ru/art/phase-diagr1/Cd-Zn.php eutectic at +266
In-Pb https://himikatus.ru/art/phase-diagr1/In-Pb.php is NOT eutectic because Pb solid solution in In only raises melting point
In-Sn https://himikatus.ru/art/phase-diagr1/In-Sn.php eutectic at +120
In-Tl https://himikatus.ru/art/phase-diagr1/In-Tl.php also NOT eutectic because Tl solid solution in In raises melting point
In-Zn https://himikatus.ru/art/phase-diagr1/In-Zn.php eutectic at +143
Pb-Sn https://himikatus.ru/art/phase-diagr1/Pb-Sn.php eutectic at +183 - in table above
Pb-Tl https://himikatus.ru/art/phase-diagr1/Pb-Tl.php also NOT eutectic because the solid solution is higher melting than components
Pb-Zn https://himikatus.ru/art/phase-diagr1/Pb-Zn.php eutectic at +318
Sn-Tl https://himikatus.ru/art/phase-diagr1/Sn-Tl.php eutectic at +168
Sn-Zn https://himikatus.ru/art/phase-diagr1/Sn-Zn.php eutectic at +198 - in table above
Tl-Zn https://himikatus.ru/art/phase-diagr1/Tl-Zn.php eutectic at +292
Considering the binary systems between alkali metals: Li has appreciable solubility only in the pair
Li-Na https://himikatus.ru/art/phase-diagr1/Li-Na.php eutectic at +92
The other three alkali metals:
K-Li https://himikatus.ru/art/phase-diagr1/K-Li.php
Li-Rb https://himikatus.ru/art/phase-diagr1/Li-Rb.php
Cs-Li https://himikatus.ru/art/phase-diagr1/Cs-Li.php
practically do not dissolve Li even when liquid, and therefore their melting points are not lowered by the presence of Li.
Na is in liquid phase miscible with all three heavier alkali metals, but on freezing forms intermetallic compounds and eutectics:
K-Na https://himikatus.ru/art/phase-diagr1/K-Na.php eutectic at -12.6 - in table above
Na-Rb https://himikatus.ru/art/phase-diagr1/Na-Rb.php eutectic at -4.5
Cs-Na https://himikatus.ru/art/phase-diagr1/Cs-Na.php eutectic at -31.8
The 3 binary systems between the three heavier alkali metals are all miscible in the solid at the melting point, but all form poor solid solutions that have melting-point minima. This is distinct from a eutectic: at the eutectic point two solid phases coexist, and close to the eutectic point the liquidus temperature rises rapidly as just one solid phase separates, whereas at the melting-point minimum of a poor solid solution there is a single solid phase, and away from the minimum the liquidus temperature rises only slowly.
K-Rb https://himikatus.ru/art/phase-diagr1/K-Rb.php solid solution minimum mp +34
Cs-K https://himikatus.ru/art/phase-diagr1/Cs-K.php solid solution minimum mp -38 - in table above
Cs-Rb https://himikatus.ru/art/phase-diagr1/Cs-Rb.php solid solution minimum mp +10
See also
Liquid metal
List of elements by melting point
References
Further reading
Weast, R.C., "CRC Handbook of Chemistry and Physics", 55th ed, CRC Press, Cleveland, 1974, p. F-22
External links
Fusible (Low Temp) Alloys
Fusible Alloys. Archived from the original on 2012-10-12.
Jenson, W.B. "Ask the Historian - Onion's fusible alloy"
Coolants | Fusible alloy | [
"Chemistry",
"Materials_science"
] | 2,272 | [
"Metallurgy",
"Alloys",
"Fusible alloys"
] |
1,781,678 | https://en.wikipedia.org/wiki/Cocktail%20party%20effect | The cocktail party effect refers to a phenomenon wherein the brain focuses a person's attention on a particular stimulus, usually auditory. This focus excludes a range of other stimuli from conscious awareness, as when a partygoer follows a single conversation in a noisy room. This ability is widely distributed among humans, with most listeners more or less easily able to portion the totality of sound detected by the ears into distinct streams, and subsequently to decide which streams are most pertinent, excluding all or most others.
It has been proposed that a person's sensory memory subconsciously parses all stimuli and identifies discrete portions of these sensations according to their salience. This allows most people to tune effortlessly into a single voice while tuning out all others. The phenomenon is often described as a "selective attention" or "selective hearing". It may also describe a similar phenomenon that occurs when one may immediately detect words of importance originating from unattended stimuli, for instance hearing one's name among a wide range of auditory input.
A person who lacks the ability to segregate stimuli in this way is often said to display the cocktail party problem or cocktail party deafness. This may also be described as auditory processing disorder or King-Kopetzky syndrome.
Neurological basis (and binaural processing)
Auditory attention in regards to the cocktail party effect primarily occurs in the left hemisphere of the superior temporal gyrus, a non-primary region of auditory cortex; a fronto-parietal network involving the inferior frontal gyrus, superior parietal sulcus, and intraparietal sulcus also accounts for the acts of attention-shifting, speech processing, and attention control. Both the target stream (the more important information being attended to) and competing/interfering streams are processed in the same pathway within the left hemisphere, but fMRI scans show that target streams are treated with more attention than competing streams.
Furthermore, activity in the superior temporal gyrus (STG) toward the target stream is decreased/interfered with when competing stimuli streams (that typically hold significant value) arise. The "cocktail party effect" – the ability to detect significant stimuli in multi-talker situations – has also been labeled the "cocktail party problem", because the ability to selectively attend simultaneously interferes with the effectiveness of attention at a neurological level.
The cocktail party effect works best as a binaural effect, which requires hearing with both ears. People with only one functioning ear seem much more distracted by interfering noise than people with two typical ears. The benefit of using two ears may be partially related to the localization of sound sources. The auditory system is able to localize at least two sound sources and assign the correct characteristics to these sources simultaneously. As soon as the auditory system has localized a sound source, it can extract the signals of this sound source out of a mixture of interfering sound sources. However, much of this binaural benefit can be attributed to two other processes, better-ear listening and binaural unmasking. Better-ear listening is the process of exploiting the better of the two signal-to-noise ratios available at the ears. Binaural unmasking is a process that involves a combination of information from the two ears in order to extract signals from noise.
Early work
In the early 1950s much of the early attention research can be traced to problems faced by air traffic controllers. At that time, controllers received messages from pilots over loudspeakers in the control tower. Hearing the intermixed voices of many pilots over a single loudspeaker made the controller's task very difficult. The effect was first defined and named "the cocktail party problem" by Colin Cherry in 1953.
Cherry conducted attention experiments in which participants listened to two different messages from a single loudspeaker at the same time and tried to separate them; this was later termed a dichotic listening task. His work reveals that the ability to separate sounds from background noise is affected by many variables, such as the sex of the speaker, the direction from which the sound is coming, the pitch, and the rate of speech.
Cherry developed the shadowing task in order to further study how people selectively attend to one message amid other voices and noises. In a shadowing task participants wear a special headset that presents a different message to each ear. The participant is asked to repeat aloud the message (called shadowing) that is heard in a specified ear (called a channel). Cherry found that participants were able to detect their name from the unattended channel, the channel they were not shadowing. Later research using Cherry's shadowing task was done by Neville Moray in 1959. He was able to conclude that almost none of the rejected message is able to penetrate the block set up, except subjectively "important" messages.
More recent work
Selective attention shows up across all ages. Starting with infancy, babies begin to turn their heads toward a sound that is familiar to them, such as their parents' voices. This shows that infants selectively attend to specific stimuli in their environment. Reviews of selective attention indicate that infants favor "baby" talk over speech with an adult tone. This preference indicates that infants can recognize physical changes in the tone of speech. The accuracy in noticing these physical differences, like tone, amid background noise improves over time. Infants may simply ignore stimuli because something like their name, while familiar, holds no higher meaning to them at such a young age; research suggests that infants do not understand that the noise being presented to them amidst distracting noise is their own name, and thus do not respond. The ability to filter out unattended stimuli reaches its prime in young adulthood. In reference to the cocktail party phenomenon, older adults have a harder time than younger adults focusing in on one conversation if competing stimuli, like "subjectively" important messages, make up the background noise.
Examples of messages that catch people's attention include personal names and taboo words. The ability to selectively attend to one's own name has been found in infants as young as 5 months of age and appears to be fully developed by 13 months. Along with multiple experts in the field, Anne Treisman states that people are permanently primed to detect personally significant words, like names, and theorizes that they may require less perceptual information than other words to trigger identification. Taboo words often contain sexually explicit material that cause an alert system in people that leads to decreased performance in shadowing tasks. Taboo words do not affect children in selective attention until they develop a strong vocabulary with an understanding of language.
Selective attention begins to waver as we get older. Older adults have longer latency periods in discriminating between conversation streams, typically attributed to the fact that general cognitive ability begins to decay with old age (as exemplified with memory, visual perception, higher order functioning, etc.).
Even more recently, modern neuroscience techniques are being applied to study the cocktail party problem. Some notable examples of researchers doing such work include Edward Chang, Nima Mesgarani, and Charles Schroeder using electrocorticography; Jonathan Simon, Mounya Elhilali, Adrian KC Lee, Shihab Shamma, Barbara Shinn-Cunningham, Daniel Baldauf, and Jyrki Ahveninen using magnetoencephalography; Jyrki Ahveninen, Edmund Lalor, and Barbara Shinn-Cunningham using electroencephalography; and Jyrki Ahveninen and Lee M. Miller using functional magnetic resonance imaging.
Models of attention
Not all the information presented to us can be processed. In theory, the selection of what to pay attention to can be random or nonrandom. For example, when driving, drivers are able to focus on the traffic lights rather than on other stimuli present in the scene. In such cases it is mandatory to select which portion of presented stimuli is important. A basic question in psychology is when this selection occurs. This issue has developed into the early versus late selection controversy. The basis for this controversy can be found in the Cherry dichotic listening experiments. Participants were able to notice physical changes, like pitch or change in gender of the speaker, and stimuli, like their own name, in the unattended channel. This brought about the question of whether the meaning, semantics, of the unattended message was processed before selection. In an early selection attention model very little information is processed before selection occurs. In late selection attention models more information, like semantics, is processed before selection occurs.
Broadbent
The earliest work in exploring mechanisms of early selective attention was performed by Donald Broadbent, who proposed a theory that came to be known as the filter model. This model was established using the dichotic listening task. His research showed that most participants were accurate in recalling information that they actively attended to, but were far less accurate in recalling information that they had not attended to. This led Broadbent to the conclusion that there must be a "filter" mechanism in the brain that could block out information that was not selectively attended to. The filter model was hypothesized to work in the following way: as information enters the brain through sensory organs (in this case, the ears) it is stored in sensory memory, a buffer memory system that holds an incoming stream of information long enough for us to pay attention to it. Before information is processed further, the filter mechanism allows only attended information to pass through. The attended information is then passed into working memory, the set of mechanisms that underlies short-term memory and communicates with long-term memory. In this model, auditory information can be selectively attended to on the basis of its physical characteristics, such as location and volume. Others suggest that information can be attended to on the basis of Gestalt features, including continuity and closure. For Broadbent, this explained the mechanism by which people can choose to attend to only one source of information at a time while excluding others. However, Broadbent's model failed to account for the observation that words of semantic importance, for example the individual's own name, can be instantly attended to despite having been in an unattended channel.
Shortly after Broadbent's experiments, Oxford undergraduates Gray and Wedderburn repeated his dichotic listening tasks, altered with monosyllabic words that could form meaningful phrases, except that the words were divided across ears. For example, the words, "Dear, one, Jane," were sometimes presented in sequence to the right ear, while the words, "three, Aunt, six," were presented in a simultaneous, competing sequence to the left ear. Participants were more likely to remember, "Dear Aunt Jane," than to remember the numbers; they were also more likely to remember the words in the phrase order than to remember the numbers in the order they were presented. This finding goes against Broadbent's theory of complete filtration because the filter mechanism would not have time to switch between channels. This suggests that meaning may be processed first.
Treisman
In a later addition to this existing theory of selective attention, Anne Treisman developed the attenuation model. In this model, information, when processed through a filter mechanism, is not completely blocked out as Broadbent might suggest. Instead, the information is weakened (attenuated), allowing it to pass through all stages of processing at an unconscious level. Treisman also suggested a threshold mechanism whereby some words, on the basis of semantic importance, may grab one's attention from the unattended stream. One's own name, according to Treisman, has a low threshold value (i.e. it has a high level of meaning) and thus is recognized more easily. The same principle applies to words like fire, directing our attention to situations that may immediately require it. The only way this can happen, Treisman argued, is if information was being processed continuously in the unattended stream.
Deutsch and Deutsch
Diana Deutsch, best known for her work in music perception and auditory illusions, has also made important contributions to models of attention. In order to explain in more detail how words can be attended to on the basis of semantic importance, Deutsch & Deutsch and Norman proposed a model of attention which includes a second selection mechanism based on meaning. In what came to be known as the Deutsch-Norman model, information in the unattended stream is not processed all the way into working memory, as Treisman's model would imply. Instead, information on the unattended stream is passed through a secondary filter after pattern recognition. If the unattended information is recognized and deemed unimportant by the secondary filter, it is prevented from entering working memory. In this way, only immediately important information from the unattended channel can come to awareness.
Kahneman
Daniel Kahneman also proposed a model of attention, but it differs from previous models in that he describes attention not in terms of selection, but in terms of capacity. For Kahneman, attention is a resource to be distributed among various stimuli, a proposition which has received some support. This model describes not when attention is focused, but how it is focused. According to Kahneman, attention is generally determined by arousal; a general state of physiological activity. The Yerkes-Dodson law predicts that arousal will be optimal at moderate levels - performance will be poor when one is over- or under-aroused. Of particular relevance, Narayan et al. discovered a sharp decline in the ability to discriminate between auditory stimuli when background noises were too numerous and complex - this is evidence of the negative effect of overarousal on attention. Thus, arousal determines our available capacity for attention. Then, an allocation policy acts to distribute our available attention among a variety of possible activities. Those deemed most important by the allocation policy will have the most attention given to them. The allocation policy is affected by enduring dispositions (automatic influences on attention) and momentary intentions (a conscious decision to attend to something). Momentary intentions requiring a focused direction of attention rely on substantially more attention resources than enduring dispositions. Additionally, there is an ongoing evaluation of the particular demands of certain activities on attention capacity. That is to say, activities that are particularly taxing on attention resources will lower attention capacity and will influence the allocation policy - in this case, if an activity is too draining on capacity, the allocation policy will likely cease directing resources to it and instead focus on less taxing tasks. Kahneman's model explains the cocktail party phenomenon in that momentary intentions might allow one to expressly focus on a particular auditory stimulus, but that enduring dispositions (which can include new events, and perhaps words of particular semantic importance) can capture our attention. It is important to note that Kahneman's model doesn't necessarily contradict selection models, and thus can be used to supplement them.
Visual correlates
Some research has demonstrated that the cocktail party effect may not be simply an auditory phenomenon, and that relevant effects can be obtained when testing visual information as well. For example, Shapiro et al. were able to demonstrate an "own name effect" with visual tasks, where subjects were able to easily recognize their own names when presented as unattended stimuli. They adopted a position in line with late selection models of attention such as the Treisman or Deutsch-Norman models, suggesting that early selection would not account for such a phenomenon. The mechanisms by which this effect might occur were left unexplained.
Effect in animals
Animals that communicate in choruses, such as frogs, insects, and songbirds, and other animals that communicate acoustically can experience the cocktail party effect when multiple signals or calls occur concurrently. Like their human counterparts, these animals use acoustic mediation to listen for what they need within their environments. For bank swallows, cliff swallows, and king penguins, acoustic mediation allows for parent/offspring recognition in noisy environments. Amphibians also demonstrate this effect, as evidenced in frogs: female frogs can listen for and differentiate male mating calls, while males can attend to other males' aggression calls. There are two leading theories as to why acoustic signaling evolved among different species. Receiver psychology holds that the development of acoustic signaling can be traced back to the nervous system and the processing strategies it uses, specifically how the physiology of auditory scene analysis affects how a species interprets and gains meaning from sound. Communication network theory states that animals can gain information by eavesdropping on signals exchanged between others of their species; this is especially true among songbirds.
Hearables for the cocktail party effect
Hearable devices like noise-canceling headphones have been designed to address the cocktail party problem. These types of devices could provide wearers with a degree of control over the sound sources around them.
Deep learning headphone systems like target speech hearing have been proposed to give wearers the ability to hear a target person in a crowded room with multiple speakers and background noise. This technology uses real-time neural networks to learn the voice characteristics of an enrolled target speaker, which are later used to focus on their speech while suppressing other speakers and noise. Semantic hearing headsets also use neural networks to enable wearers to hear specific sounds, such as birds tweeting or alarms ringing, based on their semantic description, while suppressing other ambient sounds in the environment. Real-time neural networks have also been used to create programmable sound bubbles on headsets, allowing all speakers within the bubble to be audible while suppressing speakers and noise outside the bubble.
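The cited systems are research prototypes whose implementations are not described here, but the general shape of a target-speaker extraction network can be sketched. The following is a minimal illustration, assuming a speaker encoder that turns an enrollment clip into an embedding and a masking network conditioned on that embedding; all class names, layer sizes, and feature shapes are hypothetical rather than taken from any published system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEncoder(nn.Module):
    """Maps a log-mel spectrogram of an enrollment clip to a fixed-length speaker embedding."""
    def __init__(self, n_mels=64, emb_dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, emb_dim, batch_first=True)

    def forward(self, mel):                      # mel: (batch, time, n_mels)
        _, h = self.rnn(mel)                     # h: (num_layers, batch, emb_dim)
        return F.normalize(h[-1], dim=-1)        # unit-norm embedding per utterance

class TargetSpeechExtractor(nn.Module):
    """Predicts a time-frequency mask for the target speaker, conditioned on the embedding."""
    def __init__(self, n_freq=257, emb_dim=128, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_freq + emb_dim, hidden, batch_first=True)
        self.mask = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, mixture_spec, speaker_emb):
        # mixture_spec: (batch, time, n_freq); speaker_emb: (batch, emb_dim)
        emb = speaker_emb.unsqueeze(1).expand(-1, mixture_spec.size(1), -1)
        h, _ = self.rnn(torch.cat([mixture_spec, emb], dim=-1))
        return self.mask(h) * mixture_spec       # masked spectrogram of the target speaker
```

In an actual hearable, such a model would additionally need to run causally and with very low latency per audio frame, which is what "real-time" refers to above.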
These devices could benefit individuals with hearing loss, sensory processing disorders, and misophonia, as well as people whose jobs require focused listening, such as health-care, military, factory, or construction workers.
See also
References
Acoustics
Hearing
Attention
Audiology
Psychological effects | Cocktail party effect | [
"Physics"
] | 3,627 | [
"Classical mechanics",
"Acoustics"
] |
1,782,065 | https://en.wikipedia.org/wiki/1%2C8-Diazabicyclo%285.4.0%29undec-7-ene | 1,8-Diazabicyclo[5.4.0]undec-7-ene, or more commonly DBU, is a chemical compound and belongs to the class of amidine compounds. It is used in organic synthesis as a catalyst, a complexing ligand, and a non-nucleophilic base.
Occurrence
Although all commercially available DBU is produced synthetically, it may also be isolated from the sea sponge Niphates digitalis. The biosynthesis of DBU has been proposed to begin with adipaldehyde and 1,3-diaminopropane.
Uses
As a reagent in organic chemistry, DBU is used as a ligand and base. When DBU acts as a base, protonation occurs at the imine nitrogen; Lewis acids also attach to the same nitrogen.
These properties recommend DBU for use as a catalyst, for example as a curing agent for epoxy resins and polyurethane.
It is used in the separation of fullerenes in conjunction with trimethylbenzene. It reacts with C70 and higher fullerenes, but not with C60.
It is useful for dehydrohalogenations.
See also
1,5-Diazabicyclo[4.3.0]non-5-ene
DABCO
References
Amidines
Reagents for organic chemistry
Non-nucleophilic bases | 1,8-Diazabicyclo(5.4.0)undec-7-ene | [
"Chemistry"
] | 290 | [
"Non-nucleophilic bases",
"Amidines",
"Functional groups",
"Reagents for organic chemistry",
"Bases (chemistry)"
] |
11,618,922 | https://en.wikipedia.org/wiki/Abrasive%20saw |
An abrasive saw, also known as a cut-off saw or chop saw, is a circular saw (a kind of power tool) which is typically used to cut hard materials, such as metals, tile, and concrete. The cutting action is performed by an abrasive disc, similar to a thin grinding wheel. Technically speaking this is not a saw, as it does not use regularly shaped edges (teeth) for cutting.
These saws are available in a number of configurations, including table top, free hand, and walk behind models. In the table top models, which are commonly used to cut tile and metal, the cutting wheel and motor are mounted on a pivoting arm attached to a fixed base plate. Table top saws are often electrically powered and generally have a built-in vise or other clamping arrangement. The free hand designs are typically used to cut concrete, asphalt, and pipe on construction sites. They are designed with the handles and motor near the operator, with the blade at the far end of the saw. Free hand saws do not feature a vise, because the materials being cut are larger and heavier. Walk-behind models, sometimes called flat saws, are larger saws which use a stand or cart to cut into concrete floors as well as asphalt and concrete paving materials.
Abrasive saws typically use composite friction disk blades to abrasively cut through the steel. The disks are consumable items as they wear throughout the cut. The abrasive disks for these saws are typically in diameter and thick. Larger saws use diameter blades. Disks are available for steel and stainless steel. Abrasive saws can also use superabrasive (i.e., diamond and cubic boron nitride or CBN) blades, which last longer than conventional abrasive materials and do not generate as hazardous particulate matter. Superabrasive materials are more commonly used when cutting concrete, asphalt, and tile; however, they are also suitable for cutting ferrous metals.
Since their introduction, portable cut-off saws have made many building site jobs easier. With these saws, lightweight steel fabrication previously performed in workshops using stationary power bandsaws or cold saws can be done on-site. Abrasive saws have replaced more expensive and hazardous acetylene torches in many applications, such as cutting rebar. In addition, these saws allow construction workers to cut through concrete, asphalt, and pipe on job sites in a more precise manner than is possible with heavy equipment.
See also
Angle grinder
Cold saw
Miter box
Ring saw
References
Sources
External links
Saw Blade Troubleshooting
Cutting machines
Metalworking cutting tools
Saws | Abrasive saw | [
"Physics",
"Technology"
] | 551 | [
"Physical systems",
"Machines",
"Cutting machines"
] |
11,619,257 | https://en.wikipedia.org/wiki/Lofting | Lofting is a drafting technique to generate curved lines. It is used in plans for streamlined objects such as aircraft and boats. The lines may be drawn on wood and the wood then cut for advanced woodworking. The technique can be as simple as bending a flexible object, such as a long strip of thin wood or thin plastic, so that it passes over three non-linear points, and scribing the resultant curved line; or as elaborate as plotting the line using computers or mathematical tables.
Lofting is particularly useful in boat building, when it is used to draw and cut pieces for hulls and keels. These are usually curved, often in three dimensions. Loftsmen at the mould lofts of shipyards were responsible for taking the dimensions and details from drawings and plans, and translating this information into templates, battens, ordinates, cutting sketches, profiles, margins and other data. From the early 1970s onward computer-aided design (CAD) became normal for the shipbuilding design and lofting process.
Lofting was also commonly used in aircraft design before the widespread adoption of computer-generated shaping programs.
Basic lofting
As ship design evolved from craft to science, designers learned various ways to produce long curves on a flat surface. Generating and drawing such curves became a part of ship lofting; "lofting" means drawing full-sized patterns, so-called because it was often done in large, lightly constructed mezzanines or lofts above the factory floor. When aircraft design progressed beyond the stick-and-fabric boxes of its first decade of existence, the practice of lofting moved naturally into the aeronautical realm. As the storm clouds of World War II gathered in Europe, a US aircraft company, North American Aviation, took the practice into the purely mathematical realm. One of that war's outstanding warplanes, the North American P-51 Mustang, was designed using mathematical charts and tables rather than lofting tables.
Lofting is the transfer of a lines plan to a full-sized plan. This helps to ensure that the boat will be accurate in its layout and pleasing in appearance. There are many methods to loft a set of plans.
Generally, boat building books give a detailed description of the lofting process, which is beyond the scope of this article. Plans can be lofted on a level wooden floor, on heavy paper such as red rosin paper, or directly on plywood sheets.
The first step is to lay out the grid: mark the base line along the length of the paper or plywood sheet. Then nail battens every 12 inches (or more in some cases) where the station lines are to be set, as a guide for each perpendicular line, which is drawn with a T-square. These steps are followed in turn by marking the top line and the water line. Before continuing, check the lines using the Pythagorean theorem to make sure the grid is square.
The second step is to mark the points from the table of offsets. All measurements in the table of offsets are listed either in millimeters or in feet, inches, and eighths. The points are plotted at each station; small nails and a batten are then used to fair (draw with a fair curve) the boat's lines.
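A rough software analogue of these two steps (not part of traditional lofting practice) is sketched below: a Pythagorean check confirms that the grid is square, and a cubic spline plays the role of the batten when fairing the plotted offsets. The station spacings and offsets are invented sample values.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def grid_is_square(width, height, measured_diagonal, tol=1.0):
    """Pythagorean check: the measured diagonal must match sqrt(width^2 + height^2)."""
    return abs(np.hypot(width, height) - measured_diagonal) < tol

# Invented station positions (mm) and half-breadth offsets (mm) from a table of offsets.
stations = np.array([0.0, 300.0, 600.0, 900.0, 1200.0, 1500.0])
offsets = np.array([120.0, 410.0, 560.0, 600.0, 540.0, 380.0])

batten = CubicSpline(stations, offsets)          # smooth curve through the plotted points
fine_x = np.linspace(stations[0], stations[-1], 200)
faired_line = batten(fine_x)                     # the faired line, analogous to tracing along a batten

print(grid_is_square(2400.0, 1200.0, 2683.3))    # True: diagonals agree, so the grid is square
print(faired_line.max())                         # widest half-breadth along the faired curve
```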
Definitions
Full sized plan: A 1:1 scale construction drawing of a boat and its parts
Lines plan: A scaled-down version of a full-sized drawing, often including the body, plan, profile, and section views
Body plan: A view of the boat from both dead ahead and dead astern, split in half
Plan view: A view looking down on the boat from above
Profile view: A view of the boat from the side
Section view: A cross-section of the boat's width
Batten: A long stick to help draw fair lines
See also
Lofting coordinates, used in aircraft design
Loft (3D) for the etymologically derived process used in computer-based 3D modeling
References
Books
"The Evolution of the Wooden Ship", Basil Greenhill, Sam Manning, 1988
"The Boatbuilder's Apprentice", Greg Rossel, 2007
"Lofting a Boat: A Step by Step Manual" Roger Kopanycia, Adlard Coles, 2011
Woodworking
Technical drawing
Shipbuilding
Aeronautics | Lofting | [
"Engineering"
] | 865 | [
"Design engineering",
"Shipbuilding",
"Civil engineering",
"Marine engineering",
"Technical drawing"
] |
11,628,729 | https://en.wikipedia.org/wiki/Berendsen%20thermostat | The Berendsen thermostat is an algorithm to re-scale the velocities of particles in molecular dynamics simulations to control the simulation temperature. It is named after Herman Berendsen.
Description
In this scheme, the system is weakly coupled to a heat bath held at some target temperature T0. The thermostat suppresses fluctuations of the kinetic energy of the system and therefore cannot produce trajectories consistent with the canonical ensemble. The deviation of the system temperature T from T0 is corrected so that it decays exponentially with some time constant τ, i.e. dT/dt = (T0 - T)/τ.
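A minimal sketch of the corresponding velocity-rescaling step is shown below, assuming the standard Berendsen scaling factor λ = sqrt(1 + (Δt/τ)(T0/T − 1)); the function and variable names are illustrative and not taken from any particular molecular dynamics package.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant in J/K

def berendsen_rescale(velocities, masses, target_T, dt, tau):
    """Rescale velocities toward the bath temperature target_T (Berendsen weak coupling).

    velocities: (N, 3) array in m/s; masses: (N,) array in kg;
    dt and tau: integration time step and coupling time constant (same units).
    """
    n_dof = 3 * len(masses)                                   # translational degrees of freedom
    kinetic = 0.5 * np.sum(masses[:, None] * velocities**2)   # total kinetic energy in J
    current_T = 2.0 * kinetic / (n_dof * K_B)                 # instantaneous temperature
    lam = np.sqrt(1.0 + (dt / tau) * (target_T / current_T - 1.0))
    return velocities * lam                                   # velocities scaled toward target_T
```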
Though the thermostat does not generate a correct canonical ensemble (especially for small systems), for large systems on the order of hundreds or thousands of atoms/molecules, the approximation yields roughly correct results for most calculated properties. The scheme is widely used due to the efficiency with which it relaxes a system to some target (bath) temperature. In many instances, systems are initially equilibrated using the Berendsen scheme, while properties are calculated using the widely known Nosé–Hoover thermostat, which correctly generates trajectories consistent with a canonical ensemble. However, the Berendsen thermostat can result in the flying ice cube effect, an artifact which can be eliminated by using the more rigorous Bussi–Donadio–Parrinello thermostat; for this reason, it has been recommended that usage of the Berendsen thermostat be discontinued in almost all cases except for replication of prior studies.
See also
Molecular mechanics
Software for molecular mechanics modeling
References
Molecular dynamics | Berendsen thermostat | [
"Physics",
"Chemistry"
] | 320 | [
" and optical physics stubs",
"Molecular physics",
"Computational physics",
"Molecular dynamics",
"Computational chemistry",
" molecular",
"Atomic",
"Physical chemistry stubs",
" and optical physics"
] |
7,892,663 | https://en.wikipedia.org/wiki/Stanley%E2%80%93Wilf%20conjecture | The Stanley–Wilf conjecture, formulated independently by Richard P. Stanley and Herbert Wilf in the late 1980s, states that the growth rate of every proper permutation class is singly exponential. It was proved by Adam Marcus and Gábor Tardos and is no longer a conjecture. Marcus and Tardos actually proved a different conjecture, due to Füredi and Hajnal, which had been shown to imply the Stanley–Wilf conjecture by Klazar.
Statement
The Stanley–Wilf conjecture states that for every permutation β, there is a constant C such that the number |S_n(β)| of permutations of length n which avoid β as a permutation pattern is at most C^n. As observed, this is equivalent to the convergence of the limit lim_{n→∞} |S_n(β)|^(1/n).
The upper bound given by Marcus and Tardos for C is exponential in the length of β. A stronger conjecture of Arratia had stated that one could take C to be (k - 1)^2, where k denotes the length of β, but this conjecture was disproved for the permutation 1324. Indeed, it has since been shown that C is, in fact, exponential in k for almost all permutations.
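For small lengths the statement can be explored by brute force; the sketch below counts β-avoiding permutations directly and prints the n-th roots |S_n(β)|^(1/n) whose convergence is asserted above. The pattern β = 132 is chosen only as an example (its limit is known to be 4), and the enumeration is only feasible for small n.

```python
from itertools import combinations, permutations

def contains_pattern(perm, pattern):
    """True if perm contains pattern as a classical permutation pattern."""
    k = len(pattern)
    order = sorted(range(k), key=lambda i: pattern[i])
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        if sorted(range(k), key=lambda i: sub[i]) == order:   # order-isomorphic subsequence
            return True
    return False

def count_avoiders(n, pattern):
    """|S_n(pattern)|: permutations of length n avoiding the pattern."""
    return sum(1 for p in permutations(range(1, n + 1))
               if not contains_pattern(p, pattern))

beta = (1, 3, 2)
for n in range(1, 8):
    a_n = count_avoiders(n, beta)                             # Catalan numbers for beta = 132
    print(n, a_n, round(a_n ** (1.0 / n), 3))                 # n-th roots tend (slowly) toward 4
```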
Allowable growth rates
The growth rate (or Stanley–Wilf limit) of a permutation class is defined as
limsup_{n→∞} (a_n)^(1/n),
where a_n denotes the number of permutations of length n in the class. Clearly not every positive real number can be a growth rate of a permutation class, regardless of whether it is defined by a single forbidden pattern or a set of forbidden patterns. For example, numbers strictly between 0 and 1 cannot be growth rates of permutation classes.
Kaiser and Klazar proved that if the number of permutations in a class of length n is ever less than the nth Fibonacci number then the enumeration of the class is eventually polynomial. Therefore, numbers strictly between 1 and the golden ratio also cannot be growth rates of permutation classes. Kaiser and Klazar went on to establish every possible growth constant of a permutation class below 2; these are the largest real roots of the polynomials
x^(k+1) - 2x^k + 1
for an integer k ≥ 2. This shows that 2 is the least accumulation point of growth rates of permutation classes.
This characterization of growth rates of permutation classes was later extended up to a specific algebraic number κ ≈ 2.20. From this characterization, it follows that κ is the least accumulation point of accumulation points of growth rates and that all growth rates up to κ are algebraic numbers. It was further established that there is an algebraic number ξ ≈ 2.31 such that there are uncountably many growth rates in every neighborhood of ξ, but only countably many growth rates below it. The (countably many) growth rates below ξ, all of which are also algebraic numbers, have since been characterized. These results also imply that in the set of all growth rates of permutation classes, ξ is the least accumulation point from above.
In the other direction, it has been proved that every real number at least 2.49 is the growth rate of a permutation class. That result was later improved to show that every real number at least 2.36 is the growth rate of a permutation class.
See also
Enumerations of specific permutation classes for the growth rates of specific permutation classes.
Notes
References
External links
How Adam Marcus and Gabor Tardos divided and conquered the Stanley–Wilf conjecture – by Doron Zeilberger.
Enumerative combinatorics
Theorems in discrete mathematics
Permutation patterns | Stanley–Wilf conjecture | [
"Mathematics"
] | 707 | [
"Discrete mathematics",
"Enumerative combinatorics",
"Combinatorics",
"Theorems in discrete mathematics",
"Mathematical problems",
"Mathematical theorems"
] |
7,895,973 | https://en.wikipedia.org/wiki/ERAM | The En Route Automation Modernization (ERAM) system architecture replaces the En Route Host computer system and its backup. ERAM provides all of today's functionality and:
Adds new capabilities needed to support the evolution of US National Airspace System.
Improves information security and streamlines traffic flow at US international borders.
Processes flight radar data.
Provides communications support.
Generates display data to air traffic controllers.
The display system provides real-time electronic aeronautical information and efficient data management.
Provides a fully functional backup system, precluding the need to restrict operations in the event of a primary failure.
The backup system provides the National Transportation Safety Board-recommended safety alerts, altitude warnings and conflict alerts.
Improves surveillance by using a greater number and variety of surveillance sources.
Detects and alerts air traffic controllers when aircraft are flying too close together for both safety and long term planning.
ERAM simultaneously supports many operating modes and complex airspace configurations, driven by thousands of users who want to use the airspace differently.
Allows more radars and flights than the old Host Computer System which ERAM replaces.
The open system architecture enables the use of future capabilities to efficiently handle traffic growth, and ensure a more stable and supportable system.
Implementation
The FAA is deploying ERAM at 20 Air Route Traffic Control Centers (ARTCCs), the Williams J. Hughes Technical Center, and the FAA Academy.
Step 1 (2006): Replace the current En Route computer backup system with Enhanced Backup Surveillance.
Step 2 (2007): Provide controllers real-time electronic access to weather data, aeronautical data, air traffic control procedures documents, Notices to Airmen (NOTAMs), Pilot Reports (PIREPs) and other information with the En Route Information Display System (ERIDS).
Step 3 (2009): Replace the current En Route Host computer air traffic control system with a fully redundant, state-of-the-art system that enables new capabilities and requires no stand-alone backup system.
Nationwide adoption
By the end of September 2011, ERAM was in continuous use at two relatively low-traffic centers, the Salt Lake City (ZLC) and Seattle (ZSE) ARTCCs. The project was over budget and behind schedule, and the original deployment dates were pushed back several times. While the system was deemed suitable for operational use, many workarounds were in place while awaiting software updates. Testing and dry runs continued while software bugs and requirements changes were worked out.
As of March 2015, the Operational Readiness Decision (ORD) for ERAM has been declared at the Salt Lake City, Seattle, Denver (ZDV), Minneapolis (ZMP), Albuquerque (ZAB), Chicago (ZAU), Los Angeles (ZLA), Kansas City (ZKC), Houston (ZHU), Indianapolis (ZID), Oakland (ZOA), Boston (ZBW), Miami (ZMA), Cleveland (ZOB), Fort Worth (ZFW), Memphis (ZME), Atlanta (ZTL), Jacksonville (ZJX) and New York (ZNY) ARTCCs. ORD marks the point after which the legacy HOST Computer System can be decommissioned. In addition to the ORD sites, continuous operations have been declared at the Washington (ZDC) ARTCC, meaning all 20 ARTCCs in the CONUS are now using ERAM 24/7 to control en route air traffic over an area covering more than 3 million square miles.
In April 2014, the ERAM system at the Los Angeles ARTCC failed, causing a ground stop that propagated throughout the western United States and lasted as long as 2.5 hours.
All ARTCCs operational under ERAM are running with software that includes the NextGen capabilities of Automatic Dependent Surveillance-Broadcast (ADS-B) and System Wide Information Management (SWIM).
References
Air traffic control systems | ERAM | [
"Technology",
"Engineering"
] | 780 | [
"Information systems",
"Air traffic control systems",
"Control engineering"
] |
7,897,442 | https://en.wikipedia.org/wiki/Subsurface%20engineer | Subsurface engineers (also known as "completion engineers") are a specialization within petroleum engineering and typically work closely with drilling engineers. The job of a subsurface engineer is to select the equipment best suited to the subsurface environment in order to produce the hydrocarbon reserves effectively. Once the hardware has been selected, a subsurface engineer will monitor and adjust the equipment to ensure the well and reservoir produce under ideal circumstances.
Overview
Subsurface engineers must design a successful well completion system by selecting equipment that is adequate for both downhole environments and applications. Considerations must be given to the various functions under which the completion equipment must operate and the effects any changes in temperatures or differential pressure will have on the equipment. The completion system must also be efficient and cost effective to achieve maximum production and financial goals. Another factor in the selection of specific completion equipment is the production rates of the well. The typical job duties of a Subsurface engineer include managing the interface between the reservoir and the well, including perforations, sand control, artificial lift, downhole flow control, and downhole monitoring equipment. Additional responsibilities of a Subsurface engineer include: performing a cost and risk analysis on the design, contacting vendors for the rental, purchase, and shipment of equipment, and working closely with fellow employees (geologists, reservoir engineers, drilling engineers, and production engineers).
The Society of Petroleum Engineers (SPE) has technical disciplines which allow SPE members to focus their attention on the technical activities that most interest them. Drilling and Completions historically have been intertwined work within Petroleum Engineering. In 2016, SPE split the Drilling and Completions technical disciplines so SPE members would be able to focus more on Drilling or Completions. SPE continues to publish the SPE Drilling & Completions journal, it has been publishing the journal since 1993. SPE illustrates the technical activities of Drilling and Completions on its website and also hosts a page about SPE offerings related to Completions engineering. SPE also has many on demand webinars on Completions topics.
Design Components
The design components considered to perform a well completion may include:
Cost and risk analysis
Determining the specifications for the wellbore clean-out
Use of specific Packer assemblies
Determining specific tool selection to operate equipment within the well
Assess possible equipment load specifications and incorporation of safety factors
Best use of flow control accessories (sliding sleeves and safety valves)
Determining the appropriate perforating shots per foot and charges based on the target formations
Acidizing the formation to stimulate the flow of hydrocarbons
Sand Control operations to increase production
Prevention of formation sand production with the use of wire screens
Review Well logs to determine equipment placement within the well
Determination of specific production pipe regarding well flow rates
Selection of equipment to maintain well stability
Oversee completion operations
Suggested Reading
Clegg, Joe Dunn. Production Operations Engineering. Richardson, TX: SPE, 2007. Print.
References
Engineering occupations
Petroleum engineering | Subsurface engineer | [
"Engineering"
] | 590 | [
"Petroleum engineering",
"Energy engineering"
] |
17,037,006 | https://en.wikipedia.org/wiki/Naloxazone | Naloxazone is an irreversible μ-opioid receptor antagonist which is selective for the μ1 receptor subtype. Naloxazone produces very long lasting antagonist effects as it forms a covalent bond to the active site of the μ-opioid receptor, thus making it impossible for the molecule to unbind and blocking the receptor permanently until the receptor is recycled by endocytosis.
Naloxazone is the hydrazone analog of naloxone. It has been reported that naloxazone is unstable in acidic solution, dimerizing into the more stable and much more potent antagonist naloxonazine via the free NH2 of the hydrazone to form an azine linkage. Under conditions in which no naloxonazine formation could be detected, naloxazone did not display irreversible μ opioid receptor binding.
See also
Chlornaltrexamine, an irreversible mixed agonist-antagonist
Oxymorphazone, an irreversible μ-opioid full agonist
References
Mu-opioid receptor antagonists
4,5-Epoxymorphinans
Hydroxyarenes
Tertiary alcohols
Cyclohexanols
Allylamines
Ethers
Semisynthetic opioids
Hydrazones
Alkylating agents
Irreversible antagonists | Naloxazone | [
"Chemistry"
] | 280 | [
"Alkylating agents",
"Functional groups",
"Organic compounds",
"Hydrazones",
"Ethers",
"Reagents for organic chemistry"
] |
1,171,044 | https://en.wikipedia.org/wiki/High-energy%20nuclear%20physics | High-energy nuclear physics studies the behavior of nuclear matter in energy regimes typical of high-energy physics. The primary focus of this field is the study of heavy-ion collisions, as compared to lighter atoms in other particle accelerators. At sufficient collision energies, these types of collisions are theorized to produce the quark–gluon plasma. In peripheral nuclear collisions at high energies one expects to obtain information on the electromagnetic production of leptons and mesons that are not accessible in electron–positron colliders due to their much smaller luminosities.
Previous high-energy nuclear accelerator experiments have studied heavy-ion collisions using projectile energies of 1 GeV/nucleon at JINR and LBNL-Bevalac up to 158 GeV/nucleon at CERN-SPS. Experiments of this type, called "fixed-target" experiments, primarily accelerate a "bunch" of ions (typically around 10^6 to 10^8 ions per bunch) to speeds approaching the speed of light (0.999c) and smash them into a target of similar heavy ions. While all collision systems are interesting, great focus was applied in the late 1990s to symmetric collision systems of gold beams on gold targets at Brookhaven National Laboratory's Alternating Gradient Synchrotron (AGS) and uranium beams on uranium targets at CERN's Super Proton Synchrotron.
High-energy nuclear physics experiments are continued at the Brookhaven National Laboratory's Relativistic Heavy Ion Collider (RHIC) and at the CERN Large Hadron Collider. At RHIC the programme began with four experiments (PHENIX, STAR, PHOBOS, and BRAHMS), all dedicated to studying collisions of highly relativistic nuclei. Unlike fixed-target experiments, collider experiments steer two accelerated beams of ions toward each other at (in the case of RHIC) six interaction regions. At RHIC, ions can be accelerated (depending on the ion size) from 100 GeV/nucleon to 250 GeV/nucleon. Since each colliding ion possesses this energy moving in opposite directions, the collisions can achieve a center-of-mass energy of 200 GeV/nucleon for gold and 500 GeV/nucleon for protons.
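As a back-of-the-envelope comparison of the collider and fixed-target numbers quoted above (using the standard relativistic expressions for the invariant collision energy per nucleon pair; this is an illustrative calculation, not part of any experiment's software):

```python
import math

M_N = 0.938  # nucleon rest mass in GeV

def sqrt_s_nn_collider(beam_energy):
    """Center-of-mass energy per nucleon pair for two identical head-on beams (GeV)."""
    return 2.0 * beam_energy

def sqrt_s_nn_fixed_target(beam_energy):
    """Center-of-mass energy per nucleon pair for a beam striking a stationary nucleon (GeV)."""
    return math.sqrt(2.0 * M_N * beam_energy + 2.0 * M_N**2)

print(sqrt_s_nn_collider(100.0))        # RHIC gold beams at 100 GeV/nucleon each: 200 GeV per pair
print(sqrt_s_nn_fixed_target(158.0))    # CERN-SPS 158 GeV/nucleon on a fixed target: about 17 GeV per pair
```

The comparison makes concrete why collider experiments reach far higher center-of-mass energies than fixed-target experiments at comparable beam energies.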
The ALICE (A Large Ion Collider Experiment) detector at the LHC at CERN is specialized in studying Pb–Pb nuclei collisions at a center-of-mass energy of 2.76 TeV per nucleon pair. All major LHC detectors—ALICE, ATLAS, CMS and LHCb—participate in the heavy-ion programme.
History
The exploration of hot hadron matter and of multiparticle production has a long history initiated by theoretical work on multiparticle production by Enrico Fermi in the US and Lev Landau in the USSR. These efforts paved the way for the development in the early 1960s of the thermal description of multiparticle production and the statistical bootstrap model by Rolf Hagedorn. These developments led to the search for and discovery of the quark–gluon plasma. The onset of the production of this new form of matter remains under active investigation.
First collisions
The first heavy-ion collisions at modestly relativistic conditions were undertaken at the Lawrence Berkeley National Laboratory (LBNL, formerly LBL) in Berkeley, California, U.S.A., and at the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, USSR. At the LBL, a transport line was built to carry heavy ions from the heavy-ion accelerator HILAC to the Bevatron. The energy scale of 1–2 GeV per nucleon attained initially yields compressed nuclear matter at a few times normal nuclear density. The demonstration of the possibility of studying the properties of compressed and excited nuclear matter motivated research programs at much higher energies at accelerators available at BNL and CERN, with relativistic beams striking fixed laboratory targets. The first collider experiments started in 1999 at RHIC, and the LHC began colliding heavy ions at an order of magnitude higher energy in 2010.
CERN operation
The LHC collider at CERN operates one month a year in the nuclear-collision mode, with Pb nuclei colliding at 2.76 TeV per nucleon pair, about 1500 times the energy equivalent of the rest mass. Overall 1250 valence quarks collide, generating a hot quark–gluon soup. Heavy atomic nuclei stripped of their electron cloud are called heavy ions, and one speaks of (ultra)relativistic heavy ions when the kinetic energy significantly exceeds the rest energy, as is the case at the LHC. The outcome of such collisions is the production of very many strongly interacting particles.
In August 2012 ALICE scientists announced that their experiments produced quark–gluon plasma with temperature at around 5.5 trillion kelvins, the highest temperature achieved in any physical experiments thus far. This temperature is about 38% higher than the previous record of about 4 trillion kelvins, achieved in the 2010 experiments at the Brookhaven National Laboratory. The ALICE results were announced at the August 13 Quark Matter 2012 conference in Washington, D.C. The quark–gluon plasma produced by these experiments approximates the conditions in the universe that existed microseconds after the Big Bang, before the matter coalesced into atoms.
Objectives
There are several scientific objectives of this international research program:
The formation and investigation of a new state of matter made of quarks and gluons, the quark–gluon plasma (QGP), which prevailed in the early universe during its first 30 microseconds.
The study of color confinement and of the transformation of the color-confining (quark-confining) vacuum state into the excited state physicists call the perturbative vacuum, in which quarks and gluons can roam freely; this transformation occurs at the Hagedorn temperature;
The study of the origin of the mass of hadronic (proton, neutron, etc.) matter, believed to be related to the phenomenon of quark confinement and the structure of the vacuum.
Experimental program
This experimental program follows on a decade of research at the RHIC collider at BNL and almost two decades of studies using fixed targets at the SPS at CERN and the AGS at BNL. This experimental program has already confirmed that the extreme conditions of matter necessary to form the QGP phase can be reached. A typical temperature achieved in the QGP created is far greater than in the center of the Sun, and it corresponds to extreme values of the energy density and of the relativistic-matter pressure.
More information
Rutgers University Nuclear Physics Home Page
Publications - High Energy Nuclear Physics (HENP)
https://web.archive.org/web/20101212105542/http://www.er.doe.gov/np/
References
Nuclear physics
Quantum chromodynamics | High-energy nuclear physics | [
"Physics"
] | 1,429 | [
"Nuclear physics"
] |
1,171,980 | https://en.wikipedia.org/wiki/Bilge%20pump | A bilge pump is a water pump used to remove bilge water. Since fuel can be present in the bilge, electric bilge pumps are designed to not cause sparks. Electric bilge pumps are often fitted with float switches which turn on the pump when the bilge fills to a set level. Since bilge pumps can fail, use of a backup pump is often advised. The primary pump is normally located at the lowest point of the bilge, while the secondary pump would be located somewhat higher. This ensures that the secondary pump activates only when the primary pump is overwhelmed or fails, and keeps the secondary pump free of the debris in the bilge that tends to clog the primary pump.
Ancient bilge force pumps had a number of common uses. Depending on where the pump was located in the hull of the ship, it could be used to suck in sea water into a live fish tank to preserve fish until the ship was docked and the fish ready to be sold. Another use of the force pump was to combat fires. Water would again be sucked in through the bottom of the hull, and then pumped onto the blaze. Yet another suggested use for a force pump was to dispel water from a ship. The pump would be placed near the bottom of the hull so as to suck water out of the ship. Force pumps were used on land as well. They could be used to bring water up from a well or to fill high placed tanks so that water could be pressure pumped from these tanks. These tanks were for household use and/or small-scale irrigation. The force pump was portable and could therefore, as on ships, be used to fight fire.
Force pumps could be made of either wood or bronze. Based on ancient texts, it seems that bronze was the preferred material since it lasted longer and was more easily transported. Wood was easier to build, put together, and repair but was not as durable as bronze. Because these were high-value objects, few are found in shipwrecks; they were often recovered after the ship sank. Force pumps were fairly simple in their construction consisting of a cylinder, a piston, and a few valves. Water would fill the cylinder after which the piston would descend into the cylinder, causing the water to move to a higher placed pipe. The valve would close, locking the water into the higher pipe, and then propelling it in a jet stream.
Archimedes' screw
The Greek writer Athenaeus of Naucratis described how King Hieron II commissioned Archimedes to design a huge ship, Syracusia, which could be used for luxury travel, carrying supplies, and as a naval warship. Syracusia is said to have been the largest ship built in classical antiquity. According to Athenaeus, she was capable of carrying 600 people and included garden decorations, a gymnasium and a temple dedicated to the goddess Aphrodite among her facilities. Since a ship of this size would leak a considerable amount of water through the hull, the Archimedes' screw was purportedly developed in order to remove the bilge water. Archimedes' machine was a device with a revolving screw-shaped blade inside a cylinder. It was turned by hand, and could also be used to transfer water from a body of water into irrigation canals. The Archimedes' screw is still in use today for pumping liquids and granulated solids such as coal and grain. The Archimedes' screw described in Roman times by Vitruvius may have been an improvement on a screw pump that was used to irrigate the Hanging Gardens of Babylon, but this is disputed due to a lack of actual evidence.
References
Pumps
Ship design | Bilge pump | [
"Physics",
"Chemistry"
] | 746 | [
"Physical systems",
"Hydraulics",
"Turbomachinery",
"Pumps"
] |
1,172,094 | https://en.wikipedia.org/wiki/Radiosurgery | Radiosurgery is surgery using radiation, that is, the destruction of precisely selected areas of tissue using ionizing radiation rather than excision with a blade. Like other forms of radiation therapy (also called radiotherapy), it is usually used to treat cancer. Radiosurgery was originally defined by the Swedish neurosurgeon Lars Leksell as "a single high dose fraction of radiation, stereotactically directed to an intracranial region of interest".
In stereotactic radiosurgery (SRS), the word "stereotactic" refers to a three-dimensional coordinate system that enables accurate correlation of a virtual target seen in the patient's diagnostic images with the actual target position in the patient. Stereotactic radiosurgery may also be called stereotactic body radiation therapy (SBRT) or stereotactic ablative radiotherapy (SABR) when used outside the central nervous system (CNS).
History
Stereotactic radiosurgery was first developed in 1949 by the Swedish neurosurgeon Lars Leksell to treat small targets in the brain that were not amenable to conventional surgery. The initial stereotactic instrument he conceived used probes and electrodes. The first attempt to supplant the electrodes with radiation was made in the early fifties, with x-rays. The principle of this instrument was to hit the intra-cranial target with narrow beams of radiation from multiple directions. The beam paths converge in the target volume, delivering a lethal cumulative dose of radiation there, while limiting the dose to the adjacent healthy tissue. Ten years later significant progress had been made, due in considerable measure to the contribution of the physicists Kurt Liden and Börje Larsson. At this time, stereotactic proton beams had replaced the x-rays. The heavy particle beam presented as an excellent replacement for the surgical knife, but the synchrocyclotron was too clumsy. Leksell proceeded to develop a practical, compact, precise and simple tool which could be handled by the surgeon himself. In 1968 this resulted in the Gamma Knife, which was installed at the Karolinska Institute and consisted of several cobalt-60 radioactive sources placed in a kind of helmet with central channels for irradiation with gamma rays. This prototype was designed to produce slit-like radiation lesions for functional neurosurgical procedures to treat pain, movement disorders, or behavioral disorders that did not respond to conventional treatment. The success of this first unit led to the construction of a second device, containing 179 cobalt-60 sources. This second Gamma Knife unit was designed to produce spherical lesions to treat brain tumors and intracranial arteriovenous malformations (AVMs). Additional units were installed in the 1980s all with 201 cobalt-60 sources.
In parallel to these developments, a similar approach was designed for a linear particle accelerator or Linac. Installation of the first 4 MeV clinical linear accelerator began in June 1952 in the Medical Research Council (MRC) Radiotherapeutic Research Unit at the Hammersmith Hospital, London. The system was handed over for physics and other testing in February 1953 and began to treat patients on 7 September that year. Meanwhile, work at the Stanford Microwave Laboratory led to the development of a 6 MeV accelerator, which was installed at Stanford University Hospital, California, in 1956. Linac units quickly became favored devices for conventional fractionated radiotherapy but it lasted until the 1980s before dedicated Linac radiosurgery became a reality. In 1982, the Spanish neurosurgeon J. Barcia-Salorio began to evaluate the role of cobalt-generated and then Linac-based photon radiosurgery for the treatment of AVMs and epilepsy. In 1984, Betti and Derechinsky described a Linac-based radiosurgical system. Winston and Lutz further advanced Linac-based radiosurgical prototype technologies by incorporating an improved stereotactic positioning device and a method to measure the accuracy of various components. Using a modified Linac, the first patient in the United States was treated in Boston Brigham and Women's Hospital in February 1986.
21st century
Technological improvements in medical imaging and computing have led to increased clinical adoption of stereotactic radiosurgery and have broadened its scope in the 21st century. The localization accuracy and precision that are implicit in the word "stereotactic" remain of utmost importance for radiosurgical interventions and are significantly improved via image-guidance technologies such as the N-localizer and Sturm-Pastyr localizer that were originally developed for stereotactic surgery.
In the 21st century the original concept of radiosurgery expanded to include treatments comprising up to five fractions, and stereotactic radiosurgery has been redefined as a distinct neurosurgical discipline that utilizes externally generated ionizing radiation to inactivate or eradicate defined targets, typically in the head or spine, without the need for a surgical incision. Irrespective of the similarities between the concepts of stereotactic radiosurgery and fractionated radiotherapy the mechanism to achieve treatment is subtly different, although both treatment modalities are reported to have identical outcomes for certain indications. Stereotactic radiosurgery has a greater emphasis on delivering precise, high doses to small areas, to destroy target tissue while preserving adjacent normal tissue. The same principle is followed in conventional radiotherapy although lower dose rates spread over larger areas are more likely to be used (for example as in VMAT treatments). Fractionated radiotherapy relies more heavily on the different radiosensitivity of the target and the surrounding normal tissue to the total accumulated radiation dose. Historically, the field of fractionated radiotherapy evolved from the original concept of stereotactic radiosurgery following discovery of the principles of radiobiology: repair, reassortment, repopulation, and reoxygenation. Today, both treatment techniques are complementary, as tumors that may be resistant to fractionated radiotherapy may respond well to radiosurgery, and tumors that are too large or too close to critical organs for safe radiosurgery may be suitable candidates for fractionated radiotherapy.
Today, both Gamma Knife and Linac radiosurgery programs are commercially available worldwide. While the Gamma Knife is dedicated to radiosurgery, many Linacs are built for conventional fractionated radiotherapy and require additional technology and expertise to become dedicated radiosurgery tools. There is not a clear difference in efficacy between these different approaches. The major manufacturers, Varian and Elekta, offer dedicated radiosurgery Linacs as well as machines designed for conventional treatment with radiosurgery capabilities. Systems that complement conventional Linacs with beam-shaping technology, treatment planning, and image-guidance tools are also available. An example of a dedicated radiosurgery Linac is the CyberKnife, a compact Linac mounted onto a robotic arm that moves around the patient and irradiates the tumor from a large set of fixed positions, thereby mimicking the Gamma Knife concept.
Mechanism of action
The fundamental principle of radiosurgery is that of selective ionization of tissue by means of high-energy beams of radiation. Ionization is the production of ions and free radicals which are damaging to the cells. These ions and radicals, which may be formed from the water in the cell or biological materials, can produce irreparable damage to DNA, proteins, and lipids, resulting in the cell's death. Thus, biological inactivation is carried out in a volume of tissue to be treated, with a precise destructive effect. The radiation dose is usually measured in grays (one gray (Gy) is the absorption of one joule of energy per kilogram of mass). The sievert is a related unit that attempts to take into account both the type of radiation and the organs irradiated, describing the biological effectiveness of the deposited energy.
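Two small arithmetic illustrations of the quantities involved are given below: absorbed dose as energy per unit mass, and the way many weak, convergent beams deliver their summed dose only at the target. The beam count and dose values are invented for illustration and are not clinical parameters.

```python
def absorbed_dose_gray(energy_joules, mass_kg):
    """Absorbed dose in gray: deposited energy per unit mass (1 Gy = 1 J/kg)."""
    return energy_joules / mass_kg

# 0.1 J deposited in a 5 g (0.005 kg) target volume corresponds to 20 Gy.
print(absorbed_dose_gray(0.1, 0.005))   # 20.0

# Many weak beams converge on the target: tissue along any single beam path sees
# only that beam's dose, while the target at the intersection receives the sum.
n_beams = 200
dose_per_beam_gy = 0.1                  # illustrative dose along one beam path
print(dose_per_beam_gy)                 # healthy tissue on one path: 0.1 Gy
print(n_beams * dose_per_beam_gy)       # target at the intersection: 20.0 Gy
```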
Clinical applications
When used outside the CNS it may be called stereotactic body radiation therapy (SBRT) or stereotactic ablative radiotherapy (SABR).
Brain and spine
Radiosurgery is performed by a multidisciplinary team of neurosurgeons, radiation oncologists and medical physicists to operate and maintain highly sophisticated, highly precise and complex instruments, including medical linear accelerators, the Gamma Knife unit and the Cyberknife unit. The highly precise irradiation of targets within the brain and spine is planned using information from medical images that are obtained via computed tomography, magnetic resonance imaging, and angiography.
Radiosurgery is indicated primarily for the therapy of tumors, vascular lesions and functional disorders. Significant clinical judgment must be used with this technique and considerations must include lesion type, pathology if available, size, location and age and general health of the patient. General contraindications to radiosurgery include excessively large size of the target lesion, or lesions too numerous for practical treatment. Patients can be treated within one to five days as outpatients. By comparison, the average hospital stay for a craniotomy (conventional neurosurgery, requiring the opening of the skull) is about 15 days. The radiosurgery outcome may not be evident until months after the treatment. Since radiosurgery does not remove the tumor but inactivates it biologically, lack of growth of the lesion is normally considered to be treatment success. General indications for radiosurgery include many kinds of brain tumors, such as acoustic neuromas, germinomas, meningiomas, metastases, trigeminal neuralgia, arteriovenous malformations, and skull base tumors, among others.
Stereotactic radiosurgery of spinal metastases is effective in controlling pain in up to 90% of cases and ensures stability of the tumours on imaging evaluation in 95% of cases, and it is more effective for spinal metastases involving one or two segments. Meanwhile, conventional external beam radiotherapy is more suitable for multiple spinal involvement.
Combination therapy
SRS may be administered alone or in combination with other therapies. For brain metastases, these treatment options include whole brain radiation therapy (WBRT), surgery, and systemic therapies. However, a recent systematic review found no difference in the effects on overall survival or deaths due to brain metastases when comparing SRS treatment alone to SRS plus WBRT treatment or WBRT alone.
Other bodily organs
Expansion of stereotactic radiotherapy to other lesions is increasing, and includes liver cancer, lung cancer, pancreatic cancer, etc.
Risks
The New York Times reported in December 2010 that radiation overdoses had occurred with the linear accelerator method of radiosurgery, due in large part to inadequate safeguards in equipment retrofitted for stereotactic radiosurgery. In the U.S. the Food and Drug Administration (FDA) regulates these devices, whereas the Gamma Knife is regulated by the Nuclear Regulatory Commission.
There is evidence that immunotherapy may be useful for treatment of radiation necrosis following stereotactic radiotherapy.
Types of radiation source
The selection of the proper kind of radiation and device depends on many factors including lesion type, size, and location in relation to critical structures. Data suggest that similar clinical outcomes are possible with all of the various techniques. More important than the device used are issues regarding indications for treatment, total dose delivered, fractionation schedule and conformity of the treatment plan.
Gamma Knife
A Gamma Knife (also known as the Leksell Gamma Knife) is used to treat brain tumors by administering high-intensity gamma radiation therapy in a manner that concentrates the radiation over a small volume. The device was invented in 1967 at the Karolinska Institute in Stockholm, Sweden, by Lars Leksell, Romanian-born neurosurgeon Ladislau Steiner, and radiobiologist Börje Larsson from Uppsala University, Sweden.
A Gamma Knife typically contains 201 cobalt-60 sources of approximately 30 curies each (1.1 TBq), placed in a hemispheric array in a heavily shielded assembly. The device aims gamma radiation through a target point in the patient's brain. The patient wears a specialized helmet that is surgically fixed to the skull, so that the brain tumor remains stationary at the target point of the gamma rays. An ablative dose of radiation is thereby sent through the tumor in one treatment session, while surrounding brain tissues are relatively spared.
Gamma Knife therapy, like all radiosurgery, uses doses of radiation to kill cancer cells and shrink tumors, delivered precisely to avoid damaging healthy brain tissue. Gamma Knife radiosurgery is able to accurately focus many beams of gamma radiation on one or more tumors. Each individual beam is of relatively low intensity, so the radiation has little effect on intervening brain tissue and is concentrated only at the tumor itself.
Gamma Knife radiosurgery has proven effective for patients with benign or malignant brain tumors up to in size, vascular malformations such as an arteriovenous malformation (AVM), pain, and other functional problems. For treatment of trigeminal neuralgia the procedure may be used repeatedly on patients.
Acute complications following Gamma Knife radiosurgery are rare, and complications are related to the condition being treated.
Linear accelerator-based therapies
A linear accelerator (linac) produces x-rays from the impact of accelerated electrons striking a high-Z target, usually tungsten. The process is also referred to as "x-ray therapy" or "photon therapy." The emission head, or "gantry", is mechanically rotated around the patient in a full or partial circle. The table where the patient is lying, the "couch", can also be moved in small linear or angular steps. The combination of the movements of the gantry and of the couch allows the computerized planning of the volume of tissue that is going to be irradiated. Devices with a high energy of 6 MeV are commonly used for the treatment of the brain, due to the depth of the target. The diameter of the energy beam leaving the emission head can be adjusted to the size of the lesion by means of collimators. They may be interchangeable orifices with different diameters, typically varying from 5 to 40 mm in 5 mm steps, or multileaf collimators, which consist of a number of metal leaflets that can be moved dynamically during treatment in order to shape the radiation beam to conform to the mass to be ablated. Linacs are capable of achieving extremely narrow beam geometries, such as 0.15 to 0.3 mm. Therefore, they can be used for several kinds of surgeries which hitherto had been carried out by open or endoscopic surgery, such as for trigeminal neuralgia. Long-term follow-up data have shown it to be as effective as radiofrequency ablation, but inferior to surgery in preventing the recurrence of pain.
The first such systems were developed by John R. Adler, a Stanford University professor of neurosurgery and radiation oncology, and Russell and Peter Schonberg at Schonberg Research, and commercialized under the brand name CyberKnife.
Proton beam therapy
Protons may also be used in radiosurgery in a procedure called Proton Beam Therapy (PBT) or proton therapy. Protons are extracted from proton donor materials by a medical synchrotron or cyclotron, and accelerated in successive transits through a circular, evacuated conduit or cavity, using powerful magnets to shape their path, until they reach the energy required to just traverse a human body, usually about 200 MeV. They are then released toward the region to be treated in the patient's body, the irradiation target. In some machines, which deliver protons of only a specific energy, a custom mask made of plastic is interposed between the beam source and the patient to adjust the beam energy to provide the appropriate degree of penetration. The phenomenon of the Bragg peak of ejected protons gives proton therapy advantages over other forms of radiation, since most of the proton's energy is deposited within a limited distance, so tissue beyond this range (and to some extent also tissue inside this range) is spared from the effects of radiation. This property of protons, which has been called the "depth charge effect" by analogy to the explosive weapons used in anti-submarine warfare, allows for conformal dose distributions to be created around even very irregularly shaped targets, and for higher doses to targets surrounded or backstopped by radiation-sensitive structures such as the optic chiasm or brainstem. The development of "intensity modulated" techniques allowed similar conformities to be attained using linear accelerator radiosurgery.
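To put the quoted ~200 MeV beam energy in perspective, a short relativistic-kinematics calculation (standard physics, independent of any particular machine) gives the corresponding proton speed:

```python
import math

# Relativistic kinematics for a proton with ~200 MeV kinetic energy,
# the figure quoted above for traversing a human body.
m_p = 938.272          # proton rest energy in MeV (physical constant)
T = 200.0              # kinetic energy in MeV

gamma = 1.0 + T / m_p                      # Lorentz factor
beta = math.sqrt(1.0 - 1.0 / gamma**2)     # speed as a fraction of c

print(f"gamma = {gamma:.3f}")   # ~1.213
print(f"beta  = {beta:.3f} c")  # ~0.566 -> roughly 57% of the speed of light
```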
There was no evidence that proton beam therapy is better than other types of treatment in most cases, except for a "handful of rare pediatric cancers". Critics, responding to the increasing number of very expensive PBT installations, spoke of a "medical arms race" and "crazy medicine and unsustainable public policy".
References
External links
Treating Tumors that Move with Respiration Book on Radiosurgery to moving targets (July 2007)
Shaped Beam Radiosurgery Book on LINAC-based radiosurgery using multileaf collimation (March 2011)
Neurology procedures
Radiobiology
Radiation therapy procedures
Neurosurgery | Radiosurgery | [
"Chemistry",
"Biology"
] | 3,559 | [
"Radiobiology",
"Radioactivity"
] |
1,172,161 | https://en.wikipedia.org/wiki/Biaxial%20nematic | A biaxial nematic is a spatially homogeneous liquid crystal with three distinct optical axes. This is to be contrasted with a simple nematic, which has a single preferred axis, around which the system is rotationally symmetric. The symmetry group of a biaxial nematic is that of a rectangular right parallelepiped, having three orthogonal axes and three orthogonal mirror planes. In a frame co-aligned with the optical axes, the second-rank order parameter tensor, the so-called Q tensor, of a biaxial nematic has the form
where is the standard nematic scalar order parameter and is a measure of the biaxiality.
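For reference, one common explicit parametrization in that frame (writing S for the scalar order parameter and P for the biaxiality; normalization conventions vary between authors) is Q = diag( -(S - P)/2, -(S + P)/2, S ), which is traceless and reduces to the familiar uniaxial form when P = 0.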
The first report of a thermotropic biaxial nematic appeared in 2004, based on a boomerang-shaped oxadiazole bent-core mesogen. The biaxial nematic phase for this particular compound only occurs at temperatures around 200 °C and is preceded by as yet unidentified smectic phases.
It is also found that this material can segregate into chiral domains of opposite handedness. For this to happen, the boomerang-shaped molecules adopt a helical superstructure.
In one azo bent-core mesogen, a thermal transition is found from a uniaxial Nu to a biaxial nematic Nb mesophase, as predicted by theory and simulation. This transition is observed on heating from the Nu phase with polarizing optical microscopy, as a change in Schlieren texture and increased light transmittance, and with x-ray diffraction, as the splitting of the nematic reflection. The transition is a second-order transition with low energy content and is therefore not observed in differential scanning calorimetry. The positional order correlation length is 0.75 to 1.5 times the mesogen length in the uniaxial nematic phase and 2 to 3.3 times the mesogen length in the biaxial nematic phase.
Another strategy towards biaxial nematics is the use of mixtures of classical rodlike mesogens and disklike discotic mesogens. The biaxial nematic phase is expected to be located below the minimum in the rod-disk phase diagram. In one study a miscible system of rods and disks is actually found although the biaxial nematic phase remains elusive.
See also
Liquid crystal
Liquid crystal display
Liquid crystal polymer
Lyotropic liquid crystal
Plastic crystallinity
Smart glass
Thermochromics
References
Phases of matter
Crystallography
Liquid crystals | Biaxial nematic | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 515 | [
"Phases of matter",
"Materials science",
"Crystallography",
"Condensed matter physics",
"Matter"
] |
1,174,281 | https://en.wikipedia.org/wiki/Sclerometer | The sclerometer, also known as the Turner-sclerometer (from the Greek for "hard"), is an instrument used by metallurgists, materials scientists and mineralogists to measure the scratch hardness of materials. It was invented in 1896 by Thomas Turner (1861–1951), the first Professor of metallurgy in Britain, at the University of Birmingham.
The Turner-Sclerometer test consists of measuring the amount of load required to make a scratch.
In the test, a weighted diamond point is drawn, once forward and once backward, over the smooth surface of the material to be tested. The hardness number is the weight in grams required to produce a standard scratch. The scratch selected is one which is just visible to the naked eye as a dark line on a bright reflecting surface. It is also the scratch which can just be felt with the edge of a quill when the latter is drawn over the smooth surface at right angles to a series of such scratches produced by regularly increasing weights.
See also
References
External links
Testing the Hardness of Metals
Concrete
Hardness instruments
Metallurgy
Mineralogy | Sclerometer | [
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 221 | [
"Structural engineering",
"Metallurgy",
"Hardness instruments",
"Materials science",
"Measuring instruments",
"nan",
"Concrete"
] |
1,174,316 | https://en.wikipedia.org/wiki/Science%20by%20press%20conference | Science by press conference or science by press release is the practice by which scientists put an unusual focus on publicizing results of research in the news media via press conferences or press releases. The term is usually used disparagingly, to suggest that the seekers of publicity are promoting claims of questionable scientific merit, using the media for attention as they are unlikely to win the approval of the scientific community.
Premature publicity violates a cultural value of most of the scientific community, which is that findings should be subjected to independent review with a "thorough examination by the scientific community" before they are widely publicized. The standard practice is to publish a paper in a peer-reviewed scientific journal. This idea has many merits, including that the scientific community has a responsibility to conduct itself in a deliberative, non-attention seeking way; and that its members should be oriented more towards the pursuit of insight than fame. Science by press conference in its most egregious forms can be undertaken on behalf of an individual researcher seeking fame, a corporation seeking to sway public opinion or investor perception, or a political or ideological movement.
Etymology
The phrase was coined by Spyros Andreopoulos, a public affairs officer at Stanford University Medical School, in a 1980 letter which appeared in the New England Journal of Medicine. Andreopoulos was commenting specifically on the publicity practices of biotechnology startups, including Biogen and Genentech. The journal in which it appeared had implemented a long-standing policy under editor Franz J. Ingelfinger which prohibited seeking publicity for research prior to its submission or publication, informally called the Ingelfinger Rule.
Notable examples
In 1989, chemists Stanley Pons and Martin Fleischmann held a press conference to claim they had successfully achieved cold fusion. (Highlighting the complexity of defining the term, Pons and Fleischmann technically had an accepted paper in press at a peer-reviewed journal at the time of their press conference, though that was not widely acknowledged at the time, and the quality of the paper and its review were later criticized.)
In 1998, Andrew Wakefield held a press conference to claim that the MMR vaccine caused autism. In January 2011, an article by Brian Deer and its accompanying editorial in BMJ identified Wakefield's work as an "elaborate fraud".
In 2002, a group called Clonaid held a press conference to announce they had successfully achieved human cloning.
In 2005, the European Ramazzini Foundation of Oncology and Environmental Sciences (ERF) reported their findings from testing aspartame on rats. Their studies were widely criticized and later discounted.
In September 2012, Gilles-Éric Séralini held a press conference to claim that genetically modified food caused terrible cancers in rats, on the eve of the publication of a scientific paper, a book publication, and a movie release, and in the runup to the vote on California Proposition 37, a GM food-labeling initiative. As the Séralini affair unfolded, it was revealed that Séralini required journalists to sign confidentiality agreements in order to receive pre-prints of the paper, to prevent them from discussing the paper with independent scientists. The scientific paper was retracted in 2013.
These cases became notorious examples of "science by press conference" precisely because they were widely reported in the press, but were later rebuffed, debunked, or found to be outright fraud.
Motivations
Competition for publicity, between scientific institutions or just individual researchers, is considered a driving force behind premature press conferences. Pressure to announce research findings quickly enough to "avoid losing credit" for any scientific advances may be enhanced by limited or highly competitive funding.
Science by press conference does not have to involve a groundbreaking announcement. A manufacturer may desire to publicize results of research that suggest their product is safe. Science by press conference does not necessarily have to be directed at the general public. In some cases, it may be directed at a target market such as opinion leaders, a specific industry, potential investors, or a specific group of consumers. Biotechnology companies, for example, have financial incentives to utilize premature press conferences to gain favorable media coverage.
In recent years, sociologists of science have recast discussion about "science by press conference". They point to the increasing presence of media conversation across all aspects of culture, and argue that science is subject to many of the same social forces as other aspects of culture. They have described the increased "medialization" of science, and suggest that both science and society are changed by this process.
Responsibility
While the phrase tends to criticize scientists involved in creating the publicity, it has also been used to assert that the media bear responsibility in many instances. Even well-intentioned scientists can sometimes unintentionally create truth-distorting media firestorms because of journalists' difficulty in remaining critical and balanced, the media's interest in controversy, and the general tendency of science reporting to focus on apparent "groundbreaking findings" rather than on the larger context of a research field. Further, when results are released with great fanfare and limited peer review, basic journalism skills require skepticism and further investigation, the frequent lack of which can be seen as a problem with the media as much as with scientists who seek to exploit their power.
Common examples of science by press conference are media reports that a certain product or activity affects health or safety. For instance, the media frequently report findings that a certain food causes or prevents a disease. These reports sometimes contradict earlier reports. In some cases, it is later learned that a group interested in influencing opinion had a hand in publicizing a specific report.
The phrase also condemns different behavior in different fields. For instance, scientists working in fields that put an emphasis on the value of fast dissemination of research, such as HIV treatment research, often first and most visibly disseminate research results via conferences or talks rather than through printed publication. In these areas of science, printed publication occurs later in the process of dissemination of results than in some other fields. In the case of HIV, this is partly the result of AIDS activism in which people with AIDS and their allies criticized the slow pace of research. In particular, they characterized researchers who kept quiet before publication as being more interested in their careers than in the well-being of people with AIDS. On the other hand, over-hyped early findings can inspire activists' ire and even their direct and critical use of the phrase "science by press conference". AIDS denialist groups have claimed that press conferences announcing findings in HIV and AIDS research, particularly Robert Gallo's April 23, 1984, announcement of the discovery of the probable AIDS virus, inhibited research into non-HIV etiologies of AIDS.
Similarly, clinical trials and other kinds of important medical research may release preliminary results to the media before a journal article is printed. In this case, the justification can be that clinicians and patients will benefit from the information even knowing that the data are preliminary and require further review. For instance, researchers did not wait to publish journal articles about the SARS outbreak before notifying the media about many of their findings, for obvious reasons.
Another example might be the termination of a clinical trial because it has yielded early benefit. Publicizing this kind of result has obvious value; a delay of a few months might have terrible consequences when the results concern life-threatening conditions. On the other hand, the latter practice is especially vulnerable to abuse for self-serving ends and thus has drawn criticism similar to that implied by the phrase "science by press conference".
These examples illustrate that the derision in the term "science by press conference" does not necessarily reflect an absolute rule to publish before publicizing. Rather, it illustrates the value that publicity should be a byproduct of science rather than its objective.
See also
Fringe science
Medical journalism
Science journalism
Predatory publishing
References
Science in society
Science writing
Medical journalism
Scientific controversies
Cold fusion | Science by press conference | [
"Physics",
"Chemistry"
] | 1,596 | [
"Nuclear fusion",
"Cold fusion",
"Nuclear physics"
] |
1,174,501 | https://en.wikipedia.org/wiki/Narrow-spectrum%20antibiotic | A narrow-spectrum antibiotic is an antibiotic that is only able to kill or inhibit limited species of bacteria. Examples of narrow-spectrum antibiotics include fidaxomicin and sarecycline.
Advantages
Narrow-spectrum antibiotics kill or inhibit only a limited range of unwanted (i.e., disease-causing) bacterial species. As such, they leave most of the beneficial bacteria unaffected, minimizing collateral damage to the microbiota.
Low propensity for bacterial resistance development.
Disadvantages
Often, the exact species of bacteria causing the illness is unknown, in which case narrow-spectrum antibiotics cannot be used and broad-spectrum antibiotics are used instead. Identifying the exact species requires clinical specimens to be taken for antimicrobial susceptibility testing in a clinical microbiology laboratory.
See also
Antimicrobial spectrum
Broad-spectrum antibiotics
References
Further reading
Repurposing CRISPR-Cas systems as DNA-based smart antimicrobials
Antibiotics | Narrow-spectrum antibiotic | [
"Biology"
] | 210 | [
"Antibiotics",
"Biocides",
"Biotechnology products"
] |
1,174,560 | https://en.wikipedia.org/wiki/Open%20reading%20frame | In molecular biology, reading frames are defined as spans of DNA sequence between the start and stop codons. Usually, this is considered within a studied region of a prokaryotic DNA sequence, where only one of the six possible reading frames will be "open" (the "reading", however, refers to the RNA produced by transcription of the DNA and its subsequent interaction with the ribosome in translation). Such an open reading frame (ORF) may contain a start codon (usually AUG in terms of RNA) and by definition cannot extend beyond a stop codon (usually UAA, UAG or UGA in RNA). That start codon (not necessarily the first) indicates where translation may start. The transcription termination site is located after the ORF, beyond the translation stop codon. If transcription were to cease before the stop codon, an incomplete protein would be made during translation.
In eukaryotic genes with multiple exons, introns are removed and exons are then joined together after transcription to yield the final mRNA for protein translation. In the context of gene finding, the start-stop definition of an ORF therefore only applies to spliced mRNAs, not genomic DNA, since introns may contain stop codons and/or cause shifts between reading frames. An alternative definition says that an ORF is a sequence that has a length divisible by three and is bounded by stop codons. This more general definition can be useful in the context of transcriptomics and metagenomics, where a start or stop codon may not be present in the obtained sequences. Such an ORF corresponds to parts of a gene rather than the complete gene.
Biological significance
One common use of open reading frames (ORFs) is as one piece of evidence to assist in gene prediction. Long ORFs are often used, along with other evidence, to initially identify candidate protein-coding regions or functional RNA-coding regions in a DNA sequence. The presence of an ORF does not necessarily mean that the region is always translated. For example, in a randomly generated DNA sequence with an equal percentage of each nucleotide, a stop codon would be expected roughly once every 21 codons, since 3 of the 64 possible codons are stop codons. A simple gene prediction algorithm for prokaryotes might look for a start codon followed by an open reading frame that is long enough to encode a typical protein, where the codon usage of that region matches the frequency characteristic for the given organism's coding regions. Therefore, some authors say that an ORF should have a minimum length, e.g. 100 codons or 150 codons. By itself, even a long open reading frame is not conclusive evidence for the presence of a gene.
Short open reading frames
Some short open reading frames, also named small open reading frames, abbreviated as sORFs or smORFs, usually < 100 codons in length, that lack the classical hallmarks of protein-coding genes (both from ncRNAs and mRNAs) can produce functional peptides. They encode microproteins or sORF‐encoded proteins (SEPs). The 5’-UTR of about 50% of mammal mRNAs are known to contain one or several sORFs, also called upstream ORFs or uORFs. However, less than 10% of the vertebrate mRNAs surveyed in an older study contained AUG codons in front of the major ORF. Interestingly, uORFs were found in two thirds of proto-oncogenes and related proteins. 64–75% of experimentally found translation initiation sites of sORFs are conserved in the genomes of human and mouse and may indicate that these elements have function. However, sORFs can often be found only in the minor forms of mRNAs and avoid selection; the high conservation of initiation sites may be connected with their location inside promoters of the relevant genes. This is characteristic of SLAMF1 gene, for example.
Six-frame translation
Since DNA is interpreted in groups of three nucleotides (codons), a DNA strand has three distinct reading frames. The double helix of a DNA molecule has two anti-parallel strands; with the two strands having three reading frames each, there are six possible frame translations.
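As a rough illustration of the start–stop definition applied to all six frames, the sketch below scans a DNA string for ATG...stop stretches on both strands; the example sequence and the 30-nucleotide minimum length are arbitrary, and real gene finders apply many additional criteria (codon usage, splice signals, etc.).

```python
# Minimal six-frame ORF scan: report ATG..stop stretches on both strands.
# The sequence and the 30-nt minimum length are purely illustrative.
STOPS = {"TAA", "TAG", "TGA"}

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def orfs_in_frame(seq, frame):
    """Yield (start, end) of ATG..stop stretches in one reading frame (0, 1, or 2)."""
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    start = None
    for i, codon in enumerate(codons):
        if codon == "ATG" and start is None:
            start = i                                   # first ATG opens the frame
        elif codon in STOPS and start is not None:
            yield frame + 3 * start, frame + 3 * (i + 1)  # end includes the stop codon
            start = None

def six_frame_orfs(seq, min_len=30):
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        for frame in range(3):
            for start, end in orfs_in_frame(s, frame):
                if end - start >= min_len:
                    yield strand, frame, s[start:end]

dna = "TTATGAAACCCGGGTTTAAACATATGCCCAAATTTGGGTAGCC"
for strand, frame, orf in six_frame_orfs(dna):
    print(strand, frame, orf)
```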
Software
Finder
The ORF Finder (Open Reading Frame Finder) is a graphical analysis tool which finds all open reading frames of a selectable minimum size in a user's sequence or in a sequence already in the database. This tool identifies all open reading frames using the standard or alternative genetic codes. The deduced amino acid sequence can be saved in various formats and searched against the sequence database using the basic local alignment search tool (BLAST) server. The ORF Finder should be helpful in preparing complete and accurate sequence submissions. It is also packaged with the Sequin sequence submission software (sequence analyser).
Investigator
ORF Investigator is a program which not only gives information about the coding and non-coding sequences but can also perform pairwise global alignment of sequences from different gene/DNA regions. The tool efficiently finds the ORFs for corresponding amino acid sequences, converts them into their single-letter amino acid code, and provides their locations in the sequence. The pairwise global alignment between the sequences makes it convenient to detect mutations, including single nucleotide polymorphisms. The Needleman–Wunsch algorithm is used for the gene alignment. ORF Investigator is written in the portable Perl programming language, and is therefore available to users of all common operating systems.
Predictor
OrfPredictor is a web server designed for identifying protein-coding regions in expressed sequence tag (EST)-derived sequences. For query sequences with a hit in BLASTX, the program predicts the coding regions based on the translation reading frames identified in BLASTX alignments, otherwise, it predicts the most probable coding region based on the intrinsic signals of the query sequences. The output is the predicted peptide sequences in the FASTA format, and a definition line that includes the query ID, the translation reading frame and the nucleotide positions where the coding region begins and ends. OrfPredictor facilitates the annotation of EST-derived sequences, particularly, for large-scale EST projects.
ORF Predictor uses a combination of the two different ORF definitions mentioned above. It searches stretches starting with a start codon and ending at a stop codon. As an additional criterion, it searches for a stop codon in the 5' untranslated region (UTR or NTR, nontranslated region).
ORFik
ORFik is a R-package in Bioconductor for finding open reading frames and using Next generation sequencing technologies for justification of ORFs.
orfipy
orfipy is a tool written in Python / Cython to extract ORFs in an extremely fast and flexible manner. orfipy can work with plain or gzipped FASTA and FASTQ sequences, and provides several options to fine-tune ORF searches; these include specifying the start and stop codons, reporting partial ORFs, and using custom translation tables. The results can be saved in multiple formats, including the space-efficient BED format. orfipy is particularly fast for data containing multiple smaller FASTA sequences, such as de-novo transcriptome assemblies.
See also
Coding region
Putative gene
Sequerome – A sequence profiling tool that links each BLAST record to the NCBI ORF enabling complete ORF analysis of a BLAST report.
Micropeptide
References
External links
Translation and Open Reading Frames
hORFeome V5.1 - A web-based interactive tool for CCSB Human ORFeome Collection
ORF Marker - A free, fast and multi-platform desktop GUI tool for predicting and analyzing ORFs
StarORF - A multi-platform, java-based, GUI tool for predicting and analyzing ORFs and obtaining reverse complement sequence
ORFPredictor - A webserver designed for ORF prediction and translation of a batch of EST or cDNA sequences
Molecular genetics
Bioinformatics
"Chemistry",
"Engineering",
"Biology"
] | 1,673 | [
"Bioinformatics",
"Biological engineering",
"Molecular genetics",
"Molecular biology"
] |
1,174,850 | https://en.wikipedia.org/wiki/Log-space%20reduction | In computational complexity theory, a log-space reduction is a reduction computable by a deterministic Turing machine using logarithmic space. Conceptually, this means it can keep a constant number of pointers into the input, along with a logarithmic number of fixed-size integers. It is possible that such a machine may not have space to write down its own output, so the only requirement is that any given bit of the output be computable in log-space. Formally, this reduction is executed via a log-space transducer.
Such a machine has polynomially-many configurations, so log-space reductions are also polynomial-time reductions. However, log-space reductions are probably weaker than polynomial-time reductions; while any non-empty, non-full language in P is polynomial-time reducible to any other non-empty, non-full language in P, a log-space reduction from an NL-complete language to a language in L, both of which would be languages in P, would imply the unlikely L = NL. It is an open question if the NP-complete problems are different with respect to log-space and polynomial-time reductions.
Log-space reductions are normally used on languages in P, in which case it usually does not matter whether many-one reductions or Turing reductions are used, since it has been verified that L, SL, NL, and P are all closed under Turing reductions, meaning that Turing reductions can be used to show a problem is in any of these classes. However, other subclasses of P such as NC may not be closed under Turing reductions, and so many-one reductions must be used.
Just as polynomial-time reductions are useless within P and its subclasses, log-space reductions are useless to distinguish problems in L and its subclasses; in particular, every non-empty, non-full problem in L is trivially L-complete under log-space reductions. While even weaker reductions exist, they are not often used in practice, because complexity classes smaller than L (that is, strictly contained or thought to be strictly contained in L) receive relatively little attention.
The tools available to designers of log-space reductions have been greatly expanded by the result that L = SL; see SL for a list of some SL-complete problems that can now be used as subroutines in log-space reductions.
Notes
References
Further reading
Reduction (complexity) | Log-space reduction | [
"Mathematics"
] | 501 | [
"Reduction (complexity)",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
3,378,256 | https://en.wikipedia.org/wiki/Batch%20reactor | A batch reactor is a chemical reactor in which a non-continuous reaction is conducted, i.e., one where the reactants, products and solvent do not flow in or out of the vessel during the reaction until the target reaction conversion is achieved. By extension, the expression is sometimes used, somewhat loosely, for other batch fluid processing operations that do not involve a chemical reaction, such as solids dissolution, product mixing, batch distillation, crystallization, and liquid/liquid extraction. In such cases, however, they may not be referred to as reactors but rather with a term specific to the function they perform (such as crystallizer, bioreactor, etc.).
Many batch processes are designed on the basis of a scale-up from the laboratory, particularly for the manufacture of specialty chemicals and pharmaceuticals. If this is the case, the process development will produce a recipe for the manufacturing process, which has many similarities to a recipe used in cookery. A typical batch reactor consists of a pressure vessel with an agitator and integral heating/cooling system. The vessels may vary in size from less than 1 L to more than 15,000 L. They are usually fabricated in steel, stainless steel, glass-lined steel, glass or exotic alloys. Liquids and solids are usually charged via connections in the top cover of the reactor. Vapors and gases also discharge through connections in the top. Liquids are usually discharged out of the bottom.
The advantages of the batch reactor lie with its versatility. A single vessel can carry out a sequence of different operations without the need to break containment. This is particularly useful when processing toxic or highly potent compounds.
Agitation
The usual agitator arrangement is a centrally mounted driveshaft with an overhead drive unit. Impeller blades are mounted on the shaft. A wide variety of blade designs are used and typically the blades cover about two thirds of the diameter of the reactor. Where viscous products are handled, anchor shaped paddles are often used which have a close clearance between the blade and the vessel walls.
Most batch reactors also use baffles. These are stationary blades which break up flow caused by the rotating agitator. These may be fixed to the vessel cover or mounted on the interior of the side walls.
Despite significant improvements in agitator blade and baffle design, mixing in large batch reactors is ultimately constrained by the amount of energy that can be applied. On large vessels, mixing energies of more than 5 W/L can put an unacceptable burden on the cooling system. High agitator loads can also create shaft stability problems. Where mixing is a critical parameter, the batch reactor is not the ideal solution. Much higher mixing rates can be achieved by using smaller flowing systems with high-speed agitators, ultrasonic mixing or static mixers.
Heating and cooling systems
Products within batch reactors usually liberate or absorb heat during processing. Even the action of stirring stored liquids generates heat. In order to hold the reactor contents at the desired temperature, heat has to be added or removed by a cooling jacket or cooling pipe. Heating/cooling coils or external jackets are used for heating and cooling batch reactors. Heat transfer fluid passes through the jacket or coils to add or remove heat.
Within the chemical and pharmaceutical industries, external cooling jackets are generally preferred as they make the vessel easier to clean. The performance of these jackets can be defined by three parameters:
Response time to modify the jacket temperature.
Uniformity of jacket temperature.
Stability of jacket temperature.
It can be argued that the heat transfer coefficient is also an important parameter. It has to be recognized, however, that large batch reactors with external cooling jackets have severe heat transfer constraints by virtue of their design. It is difficult to achieve better than 100 W/L even with ideal heat transfer conditions. By contrast, continuous reactors can deliver cooling capacities in excess of 10,000 W/L. For processes with very high heat loads, there are better solutions than batch reactors.
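The surface-to-volume argument behind this constraint can be illustrated with a rough estimate; the overall heat transfer coefficient, temperature difference, and vessel proportions below are assumed, typical-order-of-magnitude values rather than data for any particular reactor.

```python
import math

# Rough jacket cooling capacity per litre for a jacketed cylindrical vessel.
# U, delta_T and the H = 1.5*D aspect ratio are illustrative assumptions.
U = 500.0        # overall heat transfer coefficient, W/(m^2 K)
delta_T = 40.0   # jacket-to-process temperature difference, K

def cooling_per_litre(volume_m3, aspect=1.5):
    # V = (pi/4) * D^2 * H with H = aspect * D  ->  solve for the diameter D
    D = (4.0 * volume_m3 / (math.pi * aspect)) ** (1.0 / 3.0)
    H = aspect * D
    area = math.pi * D * H + math.pi * D**2 / 4.0   # jacketed wall + bottom dish (approx.)
    q_watts = U * area * delta_T
    return q_watts / (volume_m3 * 1000.0)           # W per litre of reactor contents

for litres in (100, 1000, 10000):
    print(f"{litres:>6} L : ~{cooling_per_litre(litres / 1000.0):.0f} W/L")
```

With these assumptions the estimate falls from roughly 200 W/L at 100 L to below 50 W/L at 10,000 L, simply because jacket area grows more slowly than volume.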
Fast temperature control response and uniform jacket heating and cooling is particularly important for crystallization processes or operations where the product or process is very temperature sensitive. There are several types of batch reactor cooling jackets, including single external jacket, half-coil jacket, and constant flux heat jacket.
Single external jacket
The single jacket design consists of an outer jacket which surrounds the vessel. Heat transfer fluid flows around the jacket and is injected at high velocity via nozzles. The temperature in the jacket is regulated to control heating or cooling.
The single jacket is probably the oldest design of external cooling jacket. Despite being a tried and tested solution, it has some limitations. On large vessels, it can take many minutes to adjust the temperature of the fluid in the cooling jacket. This results in sluggish temperature control. The distribution of heat transfer fluid is also far from ideal and the heating or cooling tends to vary between the side walls and bottom dish. Another issue to consider is the inlet temperature of the heat transfer fluid which can oscillate (in response to the temperature control valve) over a wide temperature range to cause hot or cold spots at the jacket inlet points.
Half-coil jacket
The half-coil jacket is made by welding a half pipe around the outside of the vessel to create a semi circular flow channel. The heat transfer fluid passes through the channel in a plug flow fashion. A large reactor may use several coils to deliver the heat transfer fluid. Like the single jacket, the temperature in the jacket is regulated to control heating or cooling.
The plug flow characteristics of a half-coil jacket permit faster displacement of the heat transfer fluid in the jacket (typically less than 60 s). This is desirable for good temperature control. It also provides good distribution of heat transfer fluid, which avoids the problems of non-uniform heating or cooling between the side walls and bottom dish. Like the single jacket design, however, the inlet heat transfer fluid temperature is vulnerable to large oscillations (in response to the temperature control valve).
Constant flux cooling jacket
The constant flux cooling jacket is a relatively recent development. It is not a single jacket but has a series of 20 or more small jacket elements. The temperature control valve operates by opening and closing these channels as required. By varying the heat transfer area in this way, the process temperature can be regulated without altering the jacket temperature.
The constant flux jacket has a very fast temperature control response (typically less than 5 s) due to the short length of the flow channels and the high velocity of the heat transfer fluid. Like the half-coil jacket, the heating/cooling flux is uniform. Because the jacket operates at a substantially constant temperature, however, the inlet temperature oscillations seen in other jackets are absent. An unusual feature of this type of jacket is that process heat can be measured very sensitively. This allows the user to monitor the rate of reaction for detecting end points, controlling addition rates, controlling crystallization, etc.
Applications
Batch reactors are often used in the process industry; in wastewater treatment, as they are effective in reducing biological oxygen demand (BOD) of influent untreated water; in the pharmaceutical industry; in laboratory applications, such as small-scale production, inducing fermentation for beverage products, and for experiments of reaction kinetics and thermodynamics; etc. Common issues ascribed to batch reactors are their relatively high cost and unreliability in terms of product quality.
See also
Continuous reactor
References
External links
Batch Reactor
Jacketed Vessel Design
Chemical reactors | Batch reactor | [
"Chemistry",
"Engineering"
] | 1,499 | [
"Chemical reactors",
"Chemical reaction engineering",
"Chemical equipment"
] |
3,378,468 | https://en.wikipedia.org/wiki/Tunnel%20of%20Eupalinos | The Tunnel of Eupalinos or Eupalinian aqueduct () is a tunnel of length running through Mount Kastro in Samos, Greece, built in the 6th century BC to serve as an aqueduct. The tunnel is the second known tunnel in history to have been excavated from both ends ("having two openings"), and the first with a geometry-based approach in doing so. Today it is a popular tourist attraction. The tunnel is inscribed on the UNESCO World Heritage List along with the nearby Pythagoreion and Heraion of Samos, and it was designated as an International Historic Civil Engineering Landmark in 2017.
Early history
The Eupalinian aqueduct is described by Herodotus (Histories 3.60):
I have dwelt longer upon the history of the Samians than I should otherwise have done, because they are responsible for three of the greatest building and engineering feats in the Greek world: the first is a tunnel nearly a mile long, eight feet wide and eight feet high, driven clean through the base of a hill nine hundred feet in height. The whole length of it carries a second cutting thirty feet deep and three broad, along which water from an abundant source is led through pipes into the town. This was the work of a Megarian named Eupalinus, son of Naustrophus.
The tunnel might also be referred to in the Homeric Hymn to Apollo, which mentions "watered Samos." The tunnel was dug in the mid-6th century BC, by two groups working under the direction of the engineer Eupalinos from Megara, in order to supply the ancient capital of Samos (today called Pythagoreion) with fresh water. This was necessary for demographic reasons: the city of Samos had outgrown the capacity of the wells and cisterns within the city's limits, but the main source of fresh water on the island was on the other side of Mount Kastro from the city. It was of the utmost defensive importance; because the aqueduct ran underground, it could not easily be found by an enemy, who might otherwise cut off the water supply. The date of construction is not entirely clear. Herodotus mentions the tunnel in the context of his account of the tyrant Polycrates, who ruled c. 540–522 BC, but he does not explicitly say that Polycrates was responsible for its construction. Aideen Carty suggests that it should be connected with the regime that overthrew the Geomori in the early sixth century BC, which granted citizenship to a large number of Megarians, perhaps including Eupalinos. The Eupalinian aqueduct was used as an aqueduct for 1100 years, before it began to silt up. In the seventh century AD, the south end was used as a defensive refuge.
Description
Spring and reservoir
The tunnel took water from an inland spring, located about above sea level near the modern village of Ayiades. It discharges about 400 m3 of water per day. This spring was covered over. Two rectangular openings, each measuring , feed the water into a large reservoir with a roughly elliptical ground plan. Fifteen large stone pillars support a roof of massive stone slabs. The spring was thus completely concealed from enemies. The construction of this reservoir seems to have caused the outlet of the spring to subside by several metres. At some point before the nineteenth century, a church dedicated to St John was built over the top of this reservoir, further hiding the spring's location.
North channel
From the spring, a buried channel winds along the hillside to the northern tunnel mouth. The channel is long, although the distance from the spring to the tunnel mouth as the crow flies is only . The channel is wide and about deep. After it had been cut out of the bedrock, it was covered over with stone slabs and then buried. There are inspection shafts at regular intervals along the channel's course. The last of this channel pass under a small hill. Vertical shafts were dug from the surface at intervals of and then linked up to create a short tunnel, which brings the water.
Tunnel of Eupalinos
The tunnel through Mount Kastro carried the water for a distance of . The tunnel is generally . The southern half of the tunnel was dug to larger dimensions than the northern half, which in places is just wide enough for one person to squeeze through. The southern half, by contrast, benefits from being dug through a more stable rock stratum. In three sections, a pointed roof of stone slabs was installed to prevent rockfalls. Two of these sections, covering , are near the north end of the tunnel; the third section is at the southern end of the tunnel. The walls of the tunnel were also faced with masonry in these sections, using polygonal masonry at the south end and large slabs at the northern end. In the Roman Imperial period, barrel vaults were built with small stones and plaster to reinforce other sections of the tunnel.
The width of the tunnel means that there would have been space for only two diggers to work at a time. To speed up the process, the tunnel was dug from both ends simultaneously. H. J. Kienast calculates that such workers would have been able to dig out of stone per day, meaning that the entire tunnel took at least eight years to dig.
The floor of the tunnel is nearly horizontal and roughly above the level of the water at its source. Apparently, the subsidence at the spring lowered the level of the water after work had begun, leaving the tunnel too high. A separate channel had to be dug below the east half of the tunnel to carry the water itself. It increases in depth over the course of the tunnel, from m deep at the north end to at the southern end. Vertical shafts link this channel to the main tunnel roughly every ten metres. These were dug from the tunnel and then linked together to create the channel; once construction was finished, they served as inspection shafts. Debris from this channel was simply dumped in the main tunnel.
A number of symbols and letters painted on the wall testify to a wide range of measurements. Three of them (Κ, Ε, and ΚΒ on the east wall), clearly mark the points where vertical shafts were cut. On the west wall, there are letters in alphabetical order at a regular interval of , which indicate that this was the basic unit of measurement used by Eupalinos (it is one fiftieth of the planned course through the mountain). The meanings of the other symbols have not yet been determined.
Within the channel, the water was transported in a pipe made from terracotta sections, which were long and in diameter. The full pipe must have required around 5,000 of these sections. They were joined to one another with lime mortar. The top quarter of the pipes was cut open to allow sediment and other detritus to be removed, so that the aqueduct did not silt up. A break in the pipe near the north entrance of the tunnel led to large amounts of mud entering the pipe, which had to be cleared out regularly.
In the seventh century AD, when the aqueduct had ceased to operate, the southern section of the tunnel was converted to serve as refuge. This included the construction of a cistern from the southern entrance to collect water dripping from a vein in the rock.
Southern channel
Shortly before the southern mouth of the tunnel, the water channel diverges from the main tunnel and heads through the rock in a hidden channel like that to the north of the tunnel, which is buried just below the surface of the ground. It carries the water eastwards to the town of Pythagoreion. Only about of this channel have been excavated, but its total length must have been around . Two monumental fountains on the hillside inside the city seem to be on the line of this channel. They contained a reservoir and basins from which people could collect the water and carry it to their homes.
Surveying techniques and construction
In order to align the two tunnels, Eupalinos first constructed a "mountain line", running over the top of the mountain at the easiest part of the summit even though this gave a non-optimal position both for feeding water into the tunnel and for water delivery to the city. He connected a “south line” to the mountain line at the south side going straight into the mountain, which formed the south tunnel. At the north side a “north line” is connected to the mountain line, guiding the cut into the mountain from the north side. As the workers dug, they checked that their course remained straight by making sightings back towards the entrance of the tunnel. This is shown by a point in the southern half of the tunnel where the course accidentally diverged to the west and had to be corrected; a notch has been cut out of the rock on the inside of the curve, in order to restore the sight line.
After from the northern end, an area full of water, weak rock and mud forced Eupalinos to modify his plan and direct the tunnel to the west. When leaving the line Eupalinos planned his diversion as an isosceles triangle, with angles 22.5, 45, and 22.5 degrees. Measuring errors occurred and Eupalinos slightly overshot. When this was realised, the north tunnel was redirected to the east once more. The cutting of the south tunnel was completely straight, but stopped after .
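Reading the quoted 22.5/45/22.5-degree figures as successive turning angles, a little vector arithmetic shows why such an isosceles detour returns the heading to the planned line with no net lateral offset (in the ideal, error-free case); the leg length below is arbitrary.

```python
import math

# Net displacement of an isosceles detour: turn +22.5 deg off the planned line,
# walk one leg, turn -45 deg, walk an equal leg (a final +22.5 deg turn then
# restores the original heading). Leg length is arbitrary.
leg = 100.0
a = math.radians(22.5)

dx = leg * math.cos(a) + leg * math.cos(-a)   # component along the planned line
dy = leg * math.sin(a) + leg * math.sin(-a)   # component perpendicular to it

print(f"along the line:  {dx:.2f}")   # ~184.78 (= 2 * leg * cos 22.5 deg)
print(f"lateral offset:  {dy:.2f}")   # ~0.00 -> back on the original course
```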
Eupalinos used a unit of for distance measurements and a unit of 7.5 degrees (1/12 of a right angle) for setting out directions.
Meeting point
The north and south halves of the tunnel meet in the middle of the mountain at a dog-leg, a technique used to ensure they did not miss each other (this method is documented by Hermann J. Kienast and other researchers). In planning the dig, Eupalinos used now well-known principles of geometry, codified by Euclid several centuries later. With a length of , the Eupalinian subterranean aqueduct is famous today as one of the masterpieces of ancient engineering. Once the two tunnels were within earshot, which can be estimated for this type of rock to approximate , they could be directed towards each other, but a high level of accuracy was required to reach that point. Errors in measurement and staking could cause Eupalinos to miss the meeting point of the two teams, either horizontally or vertically. He therefore employed the following techniques.
In the horizontal plane
Eupalinos calculated the expected position of the meeting point in the mountain. Since two parallel lines never meet, an error of more than horizontally meant that the north and south tunnels would never meet. Therefore, Eupalinos changed the direction of both tunnels, as shown in the picture (the north tunnel to its left and the south tunnel to its right). This gave a catching width that was wider by , so that a crossing point would be guaranteed, even if the tunnels were previously parallel and far away. They thus meet at nearly a right angle.
In the vertical plane
At the start of work, Eupalinos levelled around the mountain probably following a contour line in order to ensure that both tunnels were started at the same altitude. The possibility of vertical deviations in the process of excavation remained, however. He increased the possibility of the two tunnels meeting each other, by increasing the height of both tunnels at the point near the join. In the north tunnel he kept the floor horizontal and increased the height of the roof by , while in the south tunnel he kept the roof horizontal and lowered the level of the floor by . His precautions as to vertical deviation proved unnecessary, however, since measurements show that there was very little error. At the rendezvous, the closing error in altitude for the two tunnels was a few decimeters.
Rediscovery and excavation
Scholars began searching for the tunnel in the 19th century, inspired by the reference to it in Herodotus. The French archaeologist Victor Guérin identified the spring that feeds the aqueduct in 1853, along with the beginnings of the channel. In 1882, work began on clearing the tunnel with the goal of bringing it back into use. This proved too difficult and the effort was called off, but it allowed Ernst Fabricius to investigate the tunnel on behalf of the German Archaeological Institute. He published the results in 1884. Full excavations of the tunnel were carried out by Ulf Jantzen from 1971 to 1973, finally clearing the full length of the tunnel, which had become filled with silt. A full survey of the tunnel with detailed geodetic measurements was carried out by Hermann J. Kienast. Portions of the tunnel are open to the public.
References
Literature
External links
Olson, Åke: (2012). "How Eupalinos navigated his way through the mountain-An empirical approach to the geometry of Eupalinos"
Dan Hughes: The Tunnel of Eupalinos
Michael Lahanas: The Eupalinos Tunnel of Samos
The Tunnel of Eupalinos - Samos Explore
Tunnel of Eupalinos - Hellenic Ministry of Culture and Tourism
Tom M. Apostol: The Tunnel of Samos
Buildings and structures completed in the 6th century BC
Aqueducts in Greece
Ancient Greek buildings and structures
Ancient Greek technology
Ancient Samos
Water tunnels
Tunnels in Greece
Historic Civil Engineering Landmarks | Tunnel of Eupalinos | [
"Engineering"
] | 2,714 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
3,378,540 | https://en.wikipedia.org/wiki/Resolvent%20formalism | In mathematics, the resolvent formalism is a technique for applying concepts from complex analysis to the study of the spectrum of operators on Banach spaces and more general spaces. Formal justification for the manipulations can be found in the framework of holomorphic functional calculus.
The resolvent captures the spectral properties of an operator in the analytic structure of the functional. Given an operator , the resolvent may be defined as
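A standard choice, and the sign convention assumed in the identities quoted below (Dunford and Schwartz use the opposite sign), is R(z; A) = (A - zI)^{-1}, defined for every complex number z at which this inverse exists as a bounded operator.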
Among other uses, the resolvent may be used to solve the inhomogeneous Fredholm integral equations; a commonly used approach is a series solution, the Liouville–Neumann series.
The resolvent of can be used to directly obtain information about the spectral decomposition of . For example, suppose is an isolated eigenvalue in the spectrum of . That is, suppose there exists a simple closed curve in the complex plane that separates from the rest of the spectrum of . Then the residue defines a projection operator onto the eigenspace of .
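With the convention R(z; A) = (A - zI)^{-1}, this projection can be written P = -(1/(2 pi i)) ∮_Γ R(z; A) dz, equivalently (1/(2 pi i)) ∮_Γ (zI - A)^{-1} dz, where Γ is the curve just described; this is the standard contour-integral form rather than a formula specific to any one reference.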
The Hille–Yosida theorem relates the resolvent through a Laplace transform to an integral over the one-parameter group of transformations generated by . Thus, for example, if is a skew-Hermitian matrix, then is a one-parameter group of unitary operators. Whenever , the resolvent of A at z can be expressed as the Laplace transform
where the integral is taken along the ray .
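As a small numerical sanity check of the unitary-group statement (this illustrates only that part of the paragraph, not the Laplace-transform representation), one can exponentiate an arbitrary skew-Hermitian matrix:

```python
import numpy as np
from scipy.linalg import expm

# For a skew-Hermitian A (A^H = -A), exp(t*A) should be unitary for every real t.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M - M.conj().T                      # make A skew-Hermitian

U = expm(0.7 * A)                       # one member of the one-parameter group
print(np.allclose(U.conj().T @ U, np.eye(4)))   # True, up to rounding error
```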
History
The first major use of the resolvent operator as a series in (cf. Liouville–Neumann series) was by Ivar Fredholm, in a landmark 1903 paper in Acta Mathematica that helped establish modern operator theory.
The name resolvent was given by David Hilbert.
Resolvent identity
For all in , the resolvent set of an operator , we have that the first resolvent identity (also called Hilbert's identity) holds:
(Note that Dunford and Schwartz, cited, define the resolvent as , instead, so that the formula above differs in sign from theirs.)
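In the convention R(z; A) = (A - zI)^{-1}, a standard statement of the first identity is R(z; A) - R(w; A) = (z - w) R(z; A) R(w; A), which follows by expanding R(z; A)[(A - wI) - (A - zI)]R(w; A).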
The second resolvent identity is a generalization of the first resolvent identity, above, useful for comparing the resolvents of two distinct operators. Given operators and , both defined on the same linear space, and in the following identity holds,
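Under the same sign convention, a standard statement of the second identity is R(z; A) - R(z; B) = R(z; A) (B - A) R(z; B), valid for all z in the resolvent sets of both A and B.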
A one-line proof goes as follows:
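In that notation, the usual argument inserts B - A = (B - zI) - (A - zI) between the two resolvents: R(z; A)(B - A)R(z; B) = R(z; A)[(B - zI) - (A - zI)]R(z; B) = R(z; A) - R(z; B).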
Compact resolvent
When studying a closed unbounded operator : → on a Hilbert space , if there exists such that is a compact operator, we say that has compact resolvent. The spectrum of such is a discrete subset of . If furthermore is self-adjoint, then and there exists an orthonormal basis of eigenvectors of with eigenvalues respectively. Also, has no finite accumulation point.
See also
Resolvent set
Stone's theorem on one-parameter unitary groups
Holomorphic functional calculus
Spectral theory
Compact operator
Laplace transform
Fredholm theory
Liouville–Neumann series
Decomposition of spectrum (functional analysis)
Limiting absorption principle
References
Fredholm theory
Formalism (deductive)
Mathematical physics | Resolvent formalism | [
"Physics",
"Mathematics"
] | 605 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
3,382,377 | https://en.wikipedia.org/wiki/Smart%20environment | Smart environments link computers and other smart devices to everyday settings and tasks. Smart environments include smart homes, smart cities, and smart manufacturing.
Introduction
Smart environments are an extension of pervasive computing. According to Mark Weiser, pervasive computing promotes the idea of a world that is connected to sensors and computers. These sensors and computers are integrated with everyday objects in peoples' lives and are connected through networks.
Definition
Cook and Das define a smart environment as "a small world where different kinds of smart devices are continuously working to make inhabitants' lives more comfortable." Smart environments aim to satisfy the experience of individuals from every environment, by replacing hazardous work, physical labor, and repetitive tasks with automated agents.
Poslad differentiates three different kinds of smart environments for systems, services, and devices: virtual (or distributed) computing environments, physical environments, and human environments, or a hybrid combination of these:
Virtual computing environments enable smart devices to access pertinent services anywhere and anytime.
Physical environments may be embedded with various smart devices of different types including tags, sensors, and controllers, and have different form factors ranging from nano- to micro- to macro-sized.
Human environments: humans, either individually or collectively, inherently form a smart environment for devices. However, humans themselves may be accompanied by smart devices such as mobile phones, use surface-mounted devices (wearable computing), and contain embedded devices (e.g., pacemakers to maintain a healthy heart operation or AR contact lenses)
Features
Smart environments encompass a range of features and services across various domains, including smart homes, smart cities, smart health, and smart factories. Some of the key features of smart environments are:
Sensors and Actuators: Smart environments are equipped with an assembly of sensors and actuators that collect data and initiate actions to provide services for the betterment of human life.
Interconnected Systems: These environments consist of interconnected systems that enable seamless communication and coordination among various devices and components.
Data-Driven Technologies: Smart environments leverage data-driven technologies, such as the Internet of Things (IoT), to obtain information from the physical world, process it, and perform actions accordingly.
Efficiency and Sustainability: They are designed to improve efficiency, sustainable practices, and resource management across different settings, such as energy efficiency in smart homes and environmental quality management in smart cities.
Diverse Requirements: Different types of smart environments have diverse requirements and technology choices, influencing the processing and utilization of data within a specific environment.
Technologies
Building a smart environment involves technologies of
Wireless communication
Algorithm design, signal prediction & classification, information theory
Multilayered software architecture, Corba, middleware
Speech recognition
Image processing, image recognition
Sensors design, calibration, motion detection, temperature, pressure sensors, accelerometers
Semantic Web and knowledge graphs
Adaptive control, Kalman filters
Computer networking
Parallel processing
Operating systems
Existing projects
The Aware Home Research Initiative at Georgia Tech "is devoted to the multidisciplinary exploration of emerging technologies and services based in the home" and was launched in 1998 as one of the first "living laboratories."
The MavHome (Managing an Adaptive Versatile Home) project at UT Arlington is a smart-environment lab that uses state-of-the-art algorithms and protocols to provide a customized, personal environment to the users of the space. In addition to providing a safe environment, the project aims to reduce the energy consumption of its inhabitants.
Other projects include House at the MIT Media Lab and many others.
See also
Building automation
Device ecology
Home robot
Intelligent building
List of home automation topics
Smart, connected products
Ubiquitous computing
References
Automation
Building engineering
Ubiquitous computing | Smart environment | [
"Engineering"
] | 740 | [
"Building engineering",
"Automation",
"Control engineering",
"Civil engineering",
"Architecture"
] |
97,574 | https://en.wikipedia.org/wiki/Coenzyme%20Q10 | Coenzyme Q10 (CoQ10 ), also known as ubiquinone, is a naturally occurring biochemical cofactor (coenzyme) and an antioxidant produced by the human body. It can also be obtained from dietary sources, such as meat, fish, seed oils, vegetables, and dietary supplements. CoQ10 is found in many organisms, including animals and bacteria.
CoQ10 plays a role in mitochondrial oxidative phosphorylation, aiding in the production of adenosine triphosphate (ATP), which is involved in energy transfer within cells. The structure of CoQ10 consists of a benzoquinone moiety and an isoprenoid side chain, with the "10" referring to the number of isoprenyl chemical subunits in its tail.
Although a ubiquitous molecule in human tissues, CoQ10 is not a dietary nutrient and does not have a recommended intake level, and its use as a supplement is not approved in the United States for any health or anti-disease effect.
Biological functions
CoQ10 is a component of the mitochondrial electron transport chain (ETC), where it plays a role in oxidative phosphorylation, a process required for the biosynthesis of adenosine triphosphate, the primary energy source of cells.
CoQ10 is a lipophilic molecule that is located in all biological membranes of the human body, serves as a component for the synthesis of ATP, and is a life-sustaining cofactor for the three complexes (complex I, complex II, and complex III) of the ETC in the mitochondria. CoQ10 also has a role in the transport of protons across lysosomal membranes to regulate pH in lysosome function.
The mitochondrial oxidative phosphorylation process takes place in the inner mitochondrial membrane of eukaryotic cells. This membrane is highly folded into structures called cristae, which increase the surface area available for oxidative phosphorylation. CoQ10 plays a role in this process as an essential cofactor of the ETC located in the inner mitochondrial membrane and serves the following functions:
electron transport in the mitochondrial ETC, shuttling electrons from NADH:ubiquinone reductase (complex I), succinate:ubiquinone reductase (complex II), and the oxidation of fatty acids and branched-chain amino acids (through flavin-linked dehydrogenases) to ubiquinol–cytochrome-c reductase (complex III) of the ETC; in this way CoQ10 participates in fatty acid and glucose metabolism by transferring electrons generated from their breakdown to downstream electron acceptors;
antioxidant activity: as a lipid-soluble antioxidant acting together with vitamin E, it scavenges reactive oxygen species, protects cells against oxidative stress, inhibits the oxidation of proteins and DNA, and helps regenerate vitamin E.
Biochemistry
Coenzyme Q is a family of coenzymes that is ubiquitous in animals and many Pseudomonadota, a group of gram-negative bacteria. The coenzyme's ubiquity is the origin of its other name, ubiquinone. In humans, the most common form is coenzyme Q10 (CoQ10), also called ubiquinone-10.
Coenzyme Q10 is a 1,4-benzoquinone, in which "Q" refers to the quinone chemical group and "10" refers to the number of isoprenyl chemical subunits in its tail. In natural ubiquinones, there are from six to ten subunits in the tail; humans have a tail of 10 isoprene units (50 carbon atoms) connected to the benzoquinone "head".
This family of fat-soluble substances is present in all respiring eukaryotic cells, primarily in the mitochondria. Ninety-five percent of the human body's energy is generated this way. Organs with the highest energy requirements—such as the heart, liver, and kidney—have the highest CoQ10 concentrations.
There are three redox states of CoQ: fully oxidized (ubiquinone), semiquinone (ubisemiquinone), and fully reduced (ubiquinol). The capacity of this molecule to act as a two-electron carrier (moving between the quinone and quinol form) and a one-electron carrier (moving between the semiquinone and one of these other forms) is central to its role in the electron transport chain due to the iron–sulfur clusters that can only accept one electron at a time, and as a free radical–scavenging antioxidant.
Deficiency
There are two major pathways of deficiency of CoQ10 in humans: reduced biosynthesis, and increased use by the body. Biosynthesis is the major source of CoQ10. Biosynthesis requires at least 15 genes, and mutations in any of them can cause CoQ deficiency. CoQ10 levels also may be affected by other genetic defects (such as mutations of mitochondrial DNA, ETFDH, APTX, FXN, and BRAF, genes that are not directly related to the CoQ10 biosynthetic process). Some of these, such as mutations in COQ6, can lead to serious diseases such as steroid-resistant nephrotic syndrome with sensorineural deafness.
Assessment
Although CoQ10 may be measured in blood plasma, these measurements reflect dietary intake rather than tissue status. Currently, most clinical centers measure CoQ10 levels in cultured skin fibroblasts, muscle biopsies, and blood mononuclear cells. Cultured fibroblasts can also be used to evaluate the rate of endogenous CoQ10 biosynthesis, by measuring the uptake of 14C-labeled p-hydroxybenzoate.
Statins
Although statins may reduce CoQ10 in the blood, it is unclear whether they reduce CoQ10 in muscle. Evidence does not support the claim that supplementation improves side effects from statins.
Chemical properties
The various kinds of coenzyme Q can be distinguished by the number of isoprenoid subunits in their side chains. The most common coenzyme Q in human mitochondria is CoQ10: "Q" refers to the quinone head and "10" refers to the number of isoprene repeats in the tail. A molecule with only three isoprenoid units, for example, would be called Q3.
In its pure state, it is an orange-colored lipophilic powder with no taste or odor.
Biosynthesis
Biosynthesis occurs in most human tissue. There are three major steps:
Creation of the benzoquinone structure (using phenylalanine or tyrosine, via 4-hydroxybenzoate)
Creation of the isoprene side chain (using acetyl-CoA)
The joining or condensation of the above two structures
The initial two reactions occur in mitochondria, the endoplasmic reticulum, and peroxisomes, indicating multiple sites of synthesis in animal cells.
An important enzyme in this pathway is HMG-CoA reductase, usually a target for intervention in cardiovascular complications. The "statin" family of cholesterol-reducing medications inhibits HMG-CoA reductase. One possible side effect of statins is decreased production of CoQ10, which may be connected to the development of myopathy and rhabdomyolysis. However, the role statins play in CoQ deficiency is controversial. Although statins reduce blood levels of CoQ, studies of their effect on muscle levels of CoQ are lacking. CoQ supplementation also does not reduce the side effects of statin medications.
Genes involved include PDSS1, PDSS2, COQ2, and ADCK3 (COQ8, CABC1).
Organisms other than humans produce the benzoquinone and isoprene structures from somewhat different source chemicals. For example, the bacterium E. coli produces the former from chorismate and the latter from a non-mevalonate source. The common yeast S. cerevisiae, however, derives the former from either chorismate or tyrosine and the latter from mevalonate. Most organisms share the common 4-hydroxybenzoate intermediate, yet use different steps to arrive at the "Q" structure.
Dietary supplement
Although neither a prescription drug nor an essential nutrient, CoQ10 is commonly used as a dietary supplement with the intent to prevent or improve disease conditions, such as cardiovascular disorders. CoQ10 is naturally produced by the body and plays a crucial role in cell growth and protection. Despite its significant role in the body, it is not used as a drug for the treatment of any specific disease.
Nevertheless, CoQ10 is widely available as an over-the-counter dietary supplement and is recommended by some healthcare professionals, despite a lack of definitive scientific evidence supporting these recommendations, especially when it comes to cardiovascular diseases.
Regulation and composition
CoQ10 is not approved by the U.S. Food and Drug Administration (FDA) for the treatment of any medical condition. However, it is sold as a dietary supplement not subject to the same regulations as medicinal drugs, and is an ingredient in some cosmetics. The manufacture of CoQ10 is not regulated, and different batches and brands may vary significantly.
Research
A 2014 Cochrane review found insufficient evidence to make a conclusion about its use for the prevention of heart disease. A 2016 Cochrane review concluded that CoQ10 had no effect on blood pressure. A 2021 Cochrane review found "no convincing evidence to support or refute" the use of CoQ10 for the treatment of heart failure.
A 2017 meta-analysis of people with heart failure taking 30–100 mg/d of CoQ10 found a 31% lower mortality and increased exercise capacity, with no significant difference in left ventricular ejection fraction. A 2021 meta-analysis found that coenzyme Q10 was associated with a 31% lower all-cause mortality in heart failure patients. In a 2023 meta-analysis of older people, ubiquinone showed evidence of a cardiovascular effect, but ubiquinol did not.
CoQ10 has been studied as a potential remedy for purported muscle-related side effects of statin medications, with mixed results. A 2018 meta-analysis concluded that there was preliminary evidence for oral CoQ10 reducing statin-associated muscle symptoms, including muscle pain, weakness, cramps, and tiredness, but 2015 and 2024 meta-analyses found that CoQ10 had no effect on statin myopathy.
CoQ10 is studied as an adjunctive therapy to reduce inflammation in periodontitis.
Pharmacology
Absorption
CoQ10 in the pure form is a crystalline powder insoluble in water. Absorption as a pharmacological substance follows the same process as that of lipids; the uptake mechanism appears to be similar to that of vitamin E, another lipid-soluble nutrient. This process in the human body involves the secretion of pancreatic enzymes and bile into the small intestine, which facilitates the emulsification and micelle formation required for absorption of lipophilic substances. Food intake (and the presence of lipids) stimulates biliary secretion of bile acids and greatly enhances the absorption of CoQ10. Exogenous CoQ10 is absorbed from the small intestine and is best absorbed if taken with a meal. Serum concentration of CoQ10 in the fed state is higher than in the fasting state.
Metabolism
CoQ10 is metabolized in all tissues, with the metabolites being phosphorylated in cells. CoQ10 is reduced to ubiquinol during or after absorption in the small intestine, incorporated into chylomicrons, and redistributed in the blood within lipoproteins. Its elimination occurs via biliary and fecal excretion.
Pharmacokinetics
Some reports have been published on the pharmacokinetics of CoQ10. The plasma peak can be observed 6–8 hours after oral administration when taken as a pharmacological substance. In some studies, a second plasma peak also was observed at approximately 24 hours after administration, probably due to both enterohepatic recycling and redistribution from the liver to circulation.
Deuterium-labeled crystalline CoQ10 was used to investigate pharmacokinetics in humans to determine an elimination half-time of 33 hours.
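Assuming simple first-order elimination with the 33-hour half-time reported above, a rough sense of how plasma levels decline after the peak can be sketched as follows; the single-compartment assumption and the chosen time points are illustrative only, not a published pharmacokinetic model.

```python
HALF_LIFE_H = 33.0  # elimination half-time of CoQ10 reported above, in hours

def fraction_remaining(t_hours: float, half_life_h: float = HALF_LIFE_H) -> float:
    """Fraction of the absorbed amount still present after t hours,
    assuming simple first-order (single-compartment) elimination."""
    return 0.5 ** (t_hours / half_life_h)

for t in (8, 24, 33, 72):
    print(f"{t:>3} h: {fraction_remaining(t):.2f} of the peak level remains")
# roughly 0.85 at 8 h, 0.60 at 24 h, 0.50 at 33 h, and 0.22 at 72 h
```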
Bioavailability
In contrast to intake of CoQ10 as a constituent of food, such as nuts or meat, from which CoQ10 is normally absorbed, there is a concern about CoQ10 bioavailability when it is taken as a dietary supplement. Bioavailability of CoQ10 supplements may be reduced due to the lipophilic nature of its molecule and large molecular weight.
Reduction of particle size
Nanoparticles have been explored as a delivery system for various drugs, such as improving the oral bioavailability of drugs with poor absorption characteristics. However, this has not proved successful with CoQ10, although reports have differed widely. The use of aqueous suspension of finely powdered CoQ10 in pure water also reveals only a minor effect.
Water-solubility
Facilitating drug absorption by increasing its solubility in water is a common pharmaceutical strategy and also has been shown to be successful for CoQ10. Various approaches have been developed to achieve this goal, with many of them producing significantly better results over oil-based softgel capsules in spite of the many attempts to optimize their composition. Examples of such approaches are use of the aqueous dispersion of solid CoQ10 with the polymer tyloxapol, formulations based on various solubilising agents, such as hydrogenated lecithin, and complexation with cyclodextrins; among the latter, the complex with β-cyclodextrin has been found to have highly increased bioavailability and also is used in pharmaceutical and food industries for CoQ10-fortification.
Adverse effects and precautions
Generally, oral CoQ10 supplementation is well tolerated. The most common side effects are gastrointestinal symptoms (nausea, vomiting, appetite suppression, and abdominal pain), rashes, and headaches. Some adverse effects, largely gastrointestinal, are reported with high intakes. Doses of 100–300 mg per day may induce insomnia or elevate liver enzymes. The observed-safe-level risk assessment method indicates that the evidence of safety is acceptable at intakes up to 1200 mg per day.
Caution should be observed in the use of CoQ10 supplementation in people with bile duct obstruction, and during pregnancy or breastfeeding.
Potential drug interactions
CoQ10 taken as a pharmacological substance has potential to inhibit the effects of theophylline as well as the anticoagulant warfarin; CoQ10 may interfere with warfarin's actions by interacting with cytochrome p450 enzymes thereby reducing the INR, a measure of blood clotting. The structure of CoQ10 is similar to that of vitamin K, which competes with and counteracts warfarin's anticoagulation effects. CoQ10 is not recommended in people taking warfarin due to the increased risk of clotting.
Dietary concentrations
Detailed reviews on the occurrence of CoQ10 and dietary intake were published in 2010. Besides endogenous synthesis within organisms, CoQ10 is also supplied by various foods, in widely varying concentrations.
Vegetable oils, meat and fish are quite rich in CoQ10 levels. Dairy products are much poorer sources of CoQ10 than animal tissues. Among vegetables, broccoli and cauliflower are good sources of CoQ10. Most fruit and berries are poor sources of CoQ10, with the exception of avocados, which have a relatively high oil and CoQ10 content.
Intake
In the developed world, the estimated daily intake of CoQ10 has been determined at 3–6 mg per day, derived primarily from meat.
South Koreans have an estimated average daily CoQ (Q9 + Q10) intake of 11.6 mg/d, derived primarily from kimchi.
Effect of heat and processing
Cooking by frying reduces CoQ10 content by 14–32%.
History
In 1950, a small amount of CoQ10 was isolated from the lining of a horse's gut, a compound initially called substance SA but later determined to be a quinone found in many animal tissues. In 1957, the same compound was isolated from mitochondrial membranes of beef heart, with research showing that it transported electrons within mitochondria. It was called Q-275, as a quinone. The Q-275/substance SA was later renamed ubiquinone, as it was a ubiquitous quinone found in all animal tissues. In 1958, its full chemical structure was reported. Ubiquinone was later called either mitoquinone or coenzyme Q due to its participation in the mitochondrial electron transport chain. In 1966, a study reported that reduced CoQ6 was an effective antioxidant in cells.
See also
Idebenone – synthetic analog with reduced oxidant generating properties
Mitoquinone mesylate – synthetic analog with improved mitochondrial permeability
References
Antioxidants
1,4-Benzoquinones
Cellular respiration
Coenzymes
Glycolysis
Meroterpenoids
Phenol ethers
Polyenes | Coenzyme Q10 | [
"Chemistry",
"Biology"
] | 3,742 | [
"Carbohydrate metabolism",
"Cellular respiration",
"Coenzymes",
"Glycolysis",
"Organic compounds",
"Biochemistry",
"Metabolism"
] |
97,644 | https://en.wikipedia.org/wiki/Xanthine%20oxidase | Xanthine oxidase (XO or XAO) is a form of xanthine oxidoreductase, a type of enzyme that generates reactive oxygen species. These enzymes catalyze the oxidation of hypoxanthine to xanthine and can further catalyze the oxidation of xanthine to uric acid. These enzymes play an important role in the catabolism of purines in some species, including humans.
Xanthine oxidase is defined as an enzyme activity (EC 1.17.3.2). The same protein, which in humans has the HGNC approved gene symbol XDH, can also have xanthine dehydrogenase activity (EC 1.17.1.4). Most of the protein in the liver exists in a form with xanthine dehydrogenase activity, but it can be converted to xanthine oxidase by reversible sulfhydryl oxidation or by irreversible proteolytic modification.
Reaction
The following chemical reactions are catalyzed by xanthine oxidase:
hypoxanthine + H2O + O2 → xanthine + H2O2
xanthine + H2O + O2 → uric acid + H2O2
Xanthine oxidase can also act on certain other purines, pterins, and aldehydes. For example, it efficiently converts 1-methylxanthine (a metabolite of caffeine) to 1-methyluric acid, but has little activity on 3-methylxanthine.
Under some circumstances it can produce superoxide ions: RH + H2O + 2 O2 → ROH + 2 O2•− + 2 H+.
Other reactions
Because XO is a superoxide-producing enzyme with generally low specificity, it can combine with other compounds and enzymes to create reactive oxidants, as well as oxidize other substrates.
Bovine xanthine oxidase (from milk) was originally thought to have a binding site at which it reduces cytochrome c, but it has since been found that it reduces this protein through its superoxide anion byproduct, with competitive inhibition by carbonic anhydrase.
Another reaction catalyzed by xanthine oxidase is the decomposition of S-nitrosothiols (RSNO), a class of reactive nitrogen species, to nitric oxide (NO), which reacts with a superoxide anion to form peroxynitrite under aerobic conditions.
XO has also been found to produce the strong one-electron oxidant carbonate radical anion from the oxidation of acetaldehyde in the presence of catalase and bicarbonate. It was suggested that the carbonate radical is likely produced in one of the enzyme's redox centers via a peroxymonocarbonate intermediate.
It is suggested that xanthine oxidoreductase, along with other enzymes, participates in the conversion of nitrate to nitrite in mammalian tissues.
Protein structure
The protein is large, having a molecular weight of 270 kDa, and has two flavin molecules (bound as FAD), 2 molybdenum atoms, and 8 iron atoms bound per enzymatic unit. The molybdenum atoms are contained as molybdopterin cofactors and are the active sites of the enzyme. The iron atoms are part of [2Fe-2S] ferredoxin iron-sulfur clusters and participate in electron transfer reactions.
Catalytic mechanism
The active site of XO is composed of a molybdopterin unit with the molybdenum atom also coordinated by terminal oxygen (oxo), sulfur atoms and a terminal hydroxide. In the reaction with xanthine to form uric acid, the S=MoVIO-H group ionizes and the resulting MoVI-O− attacks carbon concomitant with transfer of H− to Mo=S. The resulting HS-MoIV-O-C center then undergoes 2e oxidation with hydrolysis of the MoVI-O-C group, giving back S=MoVI-OH, together with xanthine. Like other known molybdenum-containing oxidoreductases, the oxygen atom introduced to the substrate by XO originates from water rather than from dioxygen (O2).
Clinical significance
Xanthine oxidase is a superoxide-producing enzyme found normally in serum and the lungs, and its activity is increased during influenza A infection.
During severe liver damage, xanthine oxidase is released into the blood, so a blood assay for XO is a way to determine if liver damage has happened.
Because xanthine oxidase is a metabolic pathway for uric acid formation, the xanthine oxidase inhibitor allopurinol is used in the treatment of gout. Since xanthine oxidase is involved in the metabolism of 6-mercaptopurine, caution should be taken before administering allopurinol and 6-mercaptopurine, or its prodrug azathioprine, in conjunction.
Xanthinuria is a rare genetic disorder where the lack of xanthine oxidase leads to high concentration of xanthine in blood and can cause health problems such as renal failure. There is no specific treatment, affected people are advised by doctors to avoid foods high in purine and to maintain a high fluid intake. Type I xanthinuria has been traced directly to mutations of the XDH gene which mediates xanthine oxidase activity. Type II xanthinuria may result from a failure of the mechanism which inserts sulfur into the active sites of xanthine oxidase and aldehyde oxidase, a related enzyme with some overlapping activities (such as conversion of allopurinol to oxypurinol).
Inhibition of xanthine oxidase has been proposed as a mechanism for improving cardiovascular health. A study found that patients with chronic obstructive pulmonary disease (COPD) had a decrease in oxidative stress, including glutathione oxidation and lipid peroxidation, when xanthine oxidase was inhibited using allopurinol. Oxidative stress can be caused by hydroxyl free radicals and hydrogen peroxide, both of which are byproducts of XO activity.
Increased concentration of serum uric acid has been under research as an indicator of cardiovascular health and has been used as a strong predictor of mortality, heart transplantation, and other outcomes in patients. However, it is not clear whether there is a direct or causal link between serum uric acid concentration (and, by proxy, xanthine oxidase activity) and cardiovascular health. States of high cell turnover and alcohol ingestion are among the most prominent causes of high serum uric acid concentrations.
Reactive nitrogen species, such as peroxynitrite that xanthine oxidase can form, have been found to react with DNA, proteins, and cells, causing cellular damage or even toxicity. Reactive nitrogen signaling, coupled with reactive oxygen species, have been found to be a central part of myocardial and vascular function, explaining why xanthine oxidase is being researched for links to cardiovascular health.
Both xanthine oxidase and xanthine oxidoreductase are also present in corneal epithelium and endothelium and may be involved in oxidative eye injury.
Inhibitors
Inhibitors of XO include allopurinol, oxypurinol, and phytic acid. It has also been found to be inhibited by flavonoids, including those found in Bougainvillea spectabilis (Nyctaginaceae) leaves (with an IC50 of 7.23 μM), typically used in folk medicine.
See also
Xanthine dehydrogenase
Sodium molybdate
References
External links
EC 1.17.3
Metalloproteins
Molybdenum enzymes
Iron-sulfur enzymes
Superoxide generating substances | Xanthine oxidase | [
"Chemistry"
] | 1,710 | [
"Superoxide generating substances",
"Metalloproteins",
"Bioinorganic chemistry"
] |
97,830 | https://en.wikipedia.org/wiki/Nuclear%20technology | Nuclear technology is technology that involves the nuclear reactions of atomic nuclei. Among the notable nuclear technologies are nuclear reactors, nuclear medicine and nuclear weapons. It is also used, among other things, in smoke detectors and gun sights.
History and scientific background
Discovery
The vast majority of common, natural phenomena on Earth involve only gravity and electromagnetism, and not nuclear reactions. This is because atomic nuclei generally stay apart: they carry positive electrical charges and therefore repel each other.
In 1896, Henri Becquerel was investigating phosphorescence in uranium salts when he discovered a new phenomenon which came to be called radioactivity. He, Pierre Curie and Marie Curie began investigating the phenomenon. In the process, they isolated the element radium, which is highly radioactive. They discovered that radioactive materials produce intense, penetrating rays of three distinct sorts, which they labeled alpha, beta, and gamma after the first three Greek letters. Some of these kinds of radiation could pass through ordinary matter, and all of them could be harmful in large amounts. All of the early researchers received various radiation burns, much like sunburn, and thought little of it.
The new phenomenon of radioactivity was seized upon by the manufacturers of quack medicine (as had the discoveries of electricity and magnetism, earlier), and a number of patent medicines and treatments involving radioactivity were put forward.
Gradually it was realized that the radiation produced by radioactive decay was ionizing radiation, and that even quantities too small to burn could pose a severe long-term hazard. Many of the scientists working on radioactivity died of cancer as a result of their exposure. Radioactive patent medicines mostly disappeared, but other applications of radioactive materials persisted, such as the use of radium salts to produce glowing dials on meters.
As the atom came to be better understood, the nature of radioactivity became clearer. Some larger atomic nuclei are unstable, and so decay (release matter or energy) after a random interval. The three forms of radiation that Becquerel and the Curies discovered are also more fully understood. Alpha decay is when a nucleus releases an alpha particle, which is two protons and two neutrons, equivalent to a helium nucleus. Beta decay is the release of a beta particle, a high-energy electron. Gamma decay releases gamma rays, which unlike alpha and beta radiation are not matter but electromagnetic radiation of very high frequency, and therefore energy. This type of radiation is the most dangerous and most difficult to block. All three types of radiation occur naturally in certain elements.
It has also become clear that the ultimate source of most terrestrial energy is nuclear, either through radiation from the Sun caused by stellar thermonuclear reactions or by radioactive decay of uranium within the Earth, the principal source of geothermal energy.
Nuclear fission
In natural nuclear radiation, the byproducts are very small compared to the nuclei from which they originate. Nuclear fission is the process of splitting a nucleus into roughly equal parts, and releasing energy and neutrons in the process. If these neutrons are captured by another unstable nucleus, they can fission as well, leading to a chain reaction. The average number of neutrons released per nucleus that go on to fission another nucleus is referred to as k. Values of k larger than 1 mean that the fission reaction is releasing more neutrons than it absorbs, and therefore is referred to as a self-sustaining chain reaction. A mass of fissile material large enough (and in a suitable configuration) to induce a self-sustaining chain reaction is called a critical mass.
When a neutron is captured by a suitable nucleus, fission may occur immediately, or the nucleus may persist in an unstable state for a short time. If there are enough immediate decays to carry on the chain reaction, the mass is said to be prompt critical, and the energy release will grow rapidly and uncontrollably, usually leading to an explosion.
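As a rough illustration of the multiplication factor k described above, the sketch below tracks a neutron population generation by generation. The starting population, generation count, and k values are arbitrary, and real reactor kinetics (delayed-neutron fractions, generation times) are not modelled.

```python
def neutron_population(n0: float, k: float, generations: int) -> float:
    """Neutron count after a number of fission generations, assuming each
    neutron causes k further fissions on average (no delayed neutrons)."""
    return n0 * (k ** generations)

for k in (0.98, 1.00, 1.02):
    n = neutron_population(1000, k, generations=100)
    print(f"k = {k:.2f}: about {n:,.0f} neutrons after 100 generations")
# k < 1 dies away, k = 1 stays steady (critical), k > 1 grows exponentially
```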
When discovered on the eve of World War II, this insight led multiple countries to begin programs investigating the possibility of constructing an atomic bomb — a weapon which utilized fission reactions to generate far more energy than could be created with chemical explosives. The Manhattan Project, run by the United States with the help of the United Kingdom and Canada, developed multiple fission weapons which were used against Japan in 1945 at Hiroshima and Nagasaki. During the project, the first fission reactors were developed as well, though they were primarily for weapons manufacture and did not generate electricity.
In 1951, a nuclear reactor produced usable electricity for the first time at Experimental Breeder Reactor No. 1 (EBR-I) near Arco, Idaho, ushering in the "Atomic Age" of more intensive human energy use.
However, if the mass is critical only when the delayed neutrons are included, then the reaction can be controlled, for example by the introduction or removal of neutron absorbers. This is what allows nuclear reactors to be built. Fast neutrons are not easily captured by nuclei; they must be slowed (slow neutrons), generally by collision with the nuclei of a neutron moderator, before they can be easily captured. Today, this type of fission is commonly used to generate electricity.
Nuclear fusion
If nuclei are forced to collide, they can undergo nuclear fusion. This process may release or absorb energy. When the resulting nucleus is lighter than that of iron, energy is normally released; when the nucleus is heavier than that of iron, energy is generally absorbed. This process of fusion occurs in stars, which derive their energy from hydrogen and helium. They form, through stellar nucleosynthesis, the light elements (lithium to calcium) as well as some of the heavy elements (beyond iron and nickel, via the S-process). The remaining abundance of heavy elements, from nickel to uranium and beyond, is due to supernova nucleosynthesis, the R-process.
Of course, these natural processes of astrophysics are not examples of nuclear "technology". Because of the very strong repulsion of nuclei, fusion is difficult to achieve in a controlled fashion. Hydrogen bombs obtain their enormous destructive power from fusion, but their energy cannot be controlled. Controlled fusion is achieved in particle accelerators; this is how many synthetic elements are produced. A fusor can also produce controlled fusion and is a useful neutron source. However, both of these devices operate at a net energy loss. Controlled, viable fusion power has proven elusive, despite the occasional hoax. Technical and theoretical difficulties have hindered the development of working civilian fusion technology, though research continues to this day around the world.
Nuclear fusion was initially pursued only in theoretical stages during World War II, when scientists on the Manhattan Project (led by Edward Teller) investigated it as a method to build a bomb. The project abandoned fusion after concluding that it would require a fission reaction to detonate. It took until 1952 for the first full hydrogen bomb to be detonated, so-called because it used reactions between deuterium and tritium. Fusion reactions are much more energetic per unit mass of fuel than fission reactions, but starting the fusion chain reaction is much more difficult.
Nuclear weapons
A nuclear weapon is an explosive device that derives its destructive force from nuclear reactions, either fission or a combination of fission and fusion. Both reactions release vast quantities of energy from relatively small amounts of matter. Even small nuclear devices can devastate a city by blast, fire and radiation. Nuclear weapons are considered weapons of mass destruction, and their use and control has been a major aspect of international policy since their debut.
The design of a nuclear weapon is more complicated than it might seem. Such a weapon must hold one or more subcritical fissile masses stable for deployment, then induce criticality (create a critical mass) for detonation. It also is quite difficult to ensure that such a chain reaction consumes a significant fraction of the fuel before the device flies apart. The procurement of a nuclear fuel is also more difficult than it might seem, since sufficiently unstable substances for this process do not currently occur naturally on Earth in suitable amounts.
One isotope of uranium, namely uranium-235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium-238. The latter accounts for more than 99% of the weight of natural uranium. Therefore, some method of isotope separation based on the weight of three neutrons must be performed to enrich (isolate) uranium-235.
Alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. Terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor.
Ultimately, the Manhattan Project manufactured nuclear weapons based on each of these elements. They detonated the first nuclear weapon in a test code-named "Trinity", near Alamogordo, New Mexico, on July 16, 1945. The test was conducted to ensure that the implosion method of detonation would work, which it did. A uranium bomb, Little Boy, was dropped on the Japanese city Hiroshima on August 6, 1945, followed three days later by the plutonium-based Fat Man on Nagasaki. In the wake of unprecedented devastation and casualties from a single weapon, the Japanese government soon surrendered, ending World War II.
Since these bombings, no nuclear weapons have been deployed offensively. Nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. Just over four years later, on August 29, 1949, the Soviet Union detonated its first fission weapon. The United Kingdom followed on October 2, 1952; France, on February 13, 1960; and China, on October 16, 1964. A radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. Such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. A radiological weapon has never been deployed. While considered of little use by a conventional military, such a weapon raises concerns over nuclear terrorism.
There have been over 2,000 nuclear tests conducted since 1945. In 1963, all nuclear and many non-nuclear states signed the Limited Test Ban Treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. The treaty permitted underground nuclear testing. France continued atmospheric testing until 1974, while China continued until 1980. The United States conducted its last underground test in 1992, the Soviet Union in 1990, and the United Kingdom in 1991; France and China continued testing until 1996. After signing the Comprehensive Test Ban Treaty in 1996 (which as of 2011 had not entered into force), all of these states pledged to discontinue all nuclear testing. Non-signatories India and Pakistan last tested nuclear weapons in 1998.
Nuclear weapons are the most destructive weapons known - the archetypal weapons of mass destruction. Throughout the Cold War, the opposing powers had huge nuclear arsenals, sufficient to kill hundreds of millions of people. Generations of people grew up under the shadow of nuclear devastation, portrayed in films such as Dr. Strangelove and The Atomic Cafe.
However, the tremendous energy release in the detonation of a nuclear weapon also suggested the possibility of a new energy source.
Civilian uses
Nuclear power
Nuclear power is a type of nuclear technology involving the controlled use of nuclear fission to release energy for work including propulsion, heat, and the generation of electricity. Nuclear energy is produced by a controlled nuclear chain reaction which creates heat—and which is used to boil water, produce steam, and drive a steam turbine. The turbine is used to generate electricity and/or to do mechanical work.
Nuclear power provided approximately 15.7% of the world's electricity in 2004 and is used to propel aircraft carriers, icebreakers, and submarines (so far, economics and fears in some ports have prevented the use of nuclear power in transport ships). All nuclear power plants use fission. No man-made fusion reaction has resulted in a viable source of electricity.
Medical applications
The medical applications of nuclear technology are divided into diagnostics and radiation treatment.
Imaging - The largest use of ionizing radiation in medicine is in medical radiography to make images of the inside of the human body using x-rays. This is the largest artificial source of radiation exposure for humans. Medical and dental x-ray imagers make use of cobalt-60 or other x-ray sources. A number of radiopharmaceuticals are used, sometimes attached to organic molecules, to act as radioactive tracers or contrast agents in the human body. Positron-emitting radionuclides are used for high-resolution, short-time-span imaging in applications known as positron emission tomography.
Radiation is also used to treat diseases in radiation therapy.
Industrial applications
Since some forms of ionizing radiation can penetrate matter, they are used for a variety of measuring methods. X-rays and gamma rays are used in industrial radiography to make images of the inside of solid products, as a means of nondestructive testing and inspection. The piece to be radiographed is placed between the source and a photographic film in a cassette. After a certain exposure time, the film is developed and it shows any internal defects of the material.
Gauges - Nuclear gauges use the exponential absorption law of gamma rays (a numerical sketch of this law follows at the end of this list).
Level indicators: Source and detector are placed at opposite sides of a container, indicating the presence or absence of material in the horizontal radiation path. Beta or gamma sources are used, depending on the thickness and the density of the material to be measured. The method is used for containers of liquids or of grainy substances.
Thickness gauges: if the material is of constant density, the signal measured by the radiation detector depends on the thickness of the material. This is useful for continuous production of sheet materials such as paper, rubber, etc.
Electrostatic control - To avoid the build-up of static electricity in production of paper, plastics, synthetic textiles, etc., a ribbon-shaped source of the alpha emitter 241Am can be placed close to the material at the end of the production line. The source ionizes the air to remove electric charges on the material.
Radioactive tracers - Since radioactive isotopes behave, chemically, mostly like the inactive element, the behavior of a certain chemical substance can be followed by tracing the radioactivity. Examples:
Adding a gamma tracer to a gas or liquid in a closed system makes it possible to find a hole in a tube.
Adding a tracer to the surface of the component of a motor makes it possible to measure wear by measuring the activity of the lubricating oil.
Oil and Gas Exploration - Nuclear well logging is used to help predict the commercial viability of new or existing wells. The technology involves the use of a neutron or gamma-ray source and a radiation detector, which are lowered into boreholes to determine the properties of the surrounding rock such as porosity and lithology.
Road Construction - Nuclear moisture/density gauges are used to determine the density of soils, asphalt, and concrete. Typically a cesium-137 source is used.
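The thickness and level gauges above rely on the exponential absorption law I = I0·exp(−μx); the sketch below inverts it to estimate material thickness from a measured count rate. The attenuation coefficient and count rates are made-up illustrative numbers, not data for any particular material or source.

```python
import math

def thickness_from_attenuation(i0: float, i: float, mu_per_cm: float) -> float:
    """Invert the absorption law I = I0 * exp(-mu * x) to recover
    the material thickness x, in centimetres."""
    return math.log(i0 / i) / mu_per_cm

# Hypothetical gauge: 10,000 counts/s leave the source side, 6,500 counts/s
# reach the detector, and the linear attenuation coefficient is 0.2 per cm.
x = thickness_from_attenuation(10_000, 6_500, mu_per_cm=0.2)
print(f"estimated thickness: {x:.2f} cm")   # about 2.15 cm
```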
Commercial applications
radioluminescence
tritium illumination: Tritium is used with phosphor in rifle sights to increase nighttime firing accuracy. Some runway markers and building exit signs use the same technology, to remain illuminated during blackouts.
Betavoltaics.
Smoke detector: An ionization smoke detector includes a tiny mass of radioactive americium-241, which is a source of alpha radiation. Two ionisation chambers are placed next to each other. Both contain a small source of 241Am that gives rise to a small constant current. One is closed and serves for comparison, the other is open to ambient air; it has a gridded electrode. When smoke enters the open chamber, the current is disrupted as the smoke particles attach to the charged ions and restore them to a neutral electrical state. This reduces the current in the open chamber. When the current drops below a certain threshold, the alarm is triggered.
Food processing and agriculture
In biology and agriculture, radiation is used to induce mutations to produce new or improved species, such as in atomic gardening. Another use in insect control is the sterile insect technique, where male insects are sterilized by radiation and released, so they have no offspring, to reduce the population.
In industrial and food applications, radiation is used for sterilization of tools and equipment. An advantage is that the object may be sealed in plastic before sterilization. An emerging use in food production is the sterilization of food using food irradiation.
Food irradiation is the process of exposing food to ionizing radiation in order to destroy microorganisms, bacteria, viruses, or insects that might be present in the food. The radiation sources used include radioisotope gamma ray sources, X-ray generators and electron accelerators. Further applications include sprout inhibition, delay of ripening, increase of juice yield, and improvement of re-hydration. Irradiation is a more general term of deliberate exposure of materials to radiation to achieve a technical goal (in this context 'ionizing radiation' is implied). As such it is also used on non-food items, such as medical hardware, plastics, tubes for gas-pipelines, hoses for floor-heating, shrink-foils for food packaging, automobile parts, wires and cables (isolation), tires, and even gemstones. Compared to the amount of food irradiated, the volume of those every-day applications is huge but not noticed by the consumer.
The genuine effect of processing food by ionizing radiation relates to damage to DNA, the basic genetic information for life. Microorganisms can no longer proliferate or continue their pathogenic activities, spoilage-causing microorganisms cannot continue their activities, insects do not survive or become incapable of procreation, and plants cannot continue the natural ripening or aging process. All of these effects are beneficial to the consumer and the food industry alike.
The amount of energy imparted for effective food irradiation is low compared to cooking the same food; even at a typical dose of 10 kGy most food, which is (with regard to warming) physically equivalent to water, would warm by only about 2.5 °C (4.5 °F).
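As a check of the figure quoted above: absorbed dose is energy per unit mass (1 gray = 1 J/kg), so the warming of a water-like food follows directly from its specific heat. The sketch below simply restates that arithmetic.

```python
def temperature_rise_c(dose_gray: float, specific_heat_j_per_kg_k: float = 4186.0) -> float:
    """Temperature rise of a water-like food from an absorbed dose:
    1 gray is 1 joule deposited per kilogram, so dT = dose / c."""
    return dose_gray / specific_heat_j_per_kg_k

print(f"{temperature_rise_c(10_000):.1f} degC rise at 10 kGy")
# about 2.4 degC, consistent with the roughly 2.5 degC figure above
```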
What distinguishes the processing of food by ionizing radiation is that the energy density per atomic transition is very high: it can cleave molecules and induce ionization (hence the name), which cannot be achieved by mere heating. This is the reason for new beneficial effects, but at the same time for new concerns. The treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. However, the use of the term "cold pasteurization" to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar.
Detractors of food irradiation have concerns about the health hazards of induced radioactivity. A report for the industry advocacy group American Council on Science and Health entitled "Irradiated Foods" states: "The types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. Food undergoing irradiation does not become any more radioactive than luggage passing through an airport X-ray scanner or teeth that have been X-rayed."
Food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed annually worldwide.
Food irradiation is essentially a non-nuclear technology; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may use also gamma-rays from nuclear decay. There is a worldwide industry for processing by ionizing radiation, the majority by number and by processing power using accelerators. Food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc.
Accidents
Nuclear accidents, because of the powerful forces involved, are often very dangerous. Historically, the first incidents involved fatal radiation exposure. Marie Curie died from aplastic anemia, which resulted from her high levels of exposure. Two scientists, the American Harry Daghlian and the Canadian Louis Slotin, died after mishandling the same mass of plutonium. Unlike conventional weapons, the intense light, heat, and explosive force are not the only deadly components of a nuclear weapon; approximately half of the deaths at Hiroshima and Nagasaki occurred two to five years after the bombings, from radiation exposure.
Civilian nuclear and radiological accidents primarily involve nuclear power plants. Most common are nuclear leaks that expose workers to hazardous material. A nuclear meltdown refers to the more serious hazard of releasing nuclear material into the surrounding environment. The most significant meltdowns occurred at Three Mile Island in Pennsylvania and Chernobyl in the Soviet Ukraine. The earthquake and tsunami on March 11, 2011 caused serious damage to three nuclear reactors and a spent fuel storage pond at the Fukushima Daiichi nuclear power plant in Japan. Military reactors that experienced similar accidents were Windscale in the United Kingdom and SL-1 in the United States.
Military accidents usually involve the loss or unexpected detonation of nuclear weapons. The Castle Bravo test in 1954 produced a larger yield than expected, which contaminated nearby islands, a Japanese fishing boat (with one fatality), and raised concerns about contaminated fish in Japan. In the 1950s through 1970s, several nuclear bombs were lost from submarines and aircraft, some of which have never been recovered. The last twenty years have seen a marked decline in such accidents.
Examples of environmental benefits
Proponents of nuclear energy note that nuclear-generated electricity avoids roughly 470 million metric tons of carbon dioxide emissions annually that would otherwise come from fossil fuels. Additionally, the comparatively small amount of waste that nuclear energy does create is safely disposed of by large-scale nuclear energy production facilities or is repurposed/recycled for other energy uses. Proponents also point to the opportunity cost of other forms of electricity generation: for example, the Environmental Protection Agency estimates that coal kills 30,000 people a year as a result of its environmental impact, while 60 people died in the Chernobyl disaster. A real-world example cited by proponents is the 650,000-ton increase in carbon emissions in the two months following the closure of the Vermont Yankee nuclear plant.
See also
Atomic age
Lists of nuclear disasters and radioactive incidents
Nuclear power debate
Outline of nuclear technology
Radiology
References
External links
Nuclear Energy Institute – Beneficial Uses of Radiation
Nuclear Technology
National Isotope Development Center – U.S. Government source of isotopes for basic and applied nuclear science and nuclear technology – production, research, development, distribution, and information | Nuclear technology | [
"Physics"
] | 4,663 | [
"Nuclear technology",
"Nuclear physics"
] |
97,835 | https://en.wikipedia.org/wiki/Zener%20diode | A Zener diode is a special type of diode designed to reliably allow current to flow "backwards" (inverted polarity) when a certain set reverse voltage, known as the Zener voltage, is reached.
Zener diodes are manufactured with a great variety of Zener voltages and some are even variable. Some Zener diodes have an abrupt, heavily doped p–n junction with a low Zener voltage, in which case the reverse conduction occurs due to electron quantum tunnelling in the short distance between p and n regions − this is known as the Zener effect, after Clarence Zener. Diodes with a higher Zener voltage have lighter doped junctions which causes their mode of operation to involve avalanche breakdown. Both breakdown types are present in Zener diodes with the Zener effect predominating at lower voltages and avalanche breakdown at higher voltages.
They are used to generate low-power stabilized supply rails from a higher voltage and to provide reference voltages for circuits, especially stabilized power supplies. They are also used to protect circuits from overvoltage, especially electrostatic discharge.
History
The device is named after American physicist Clarence Zener, who first described the Zener effect in 1934 in his primarily theoretical studies of the breakdown of electrical insulator properties. Later, his work led to the Bell Labs implementation of the effect in the form of an electronic device, the Zener diode.
Operation
A conventional solid-state diode allows significant current if it is reverse biased above its reverse-breakdown voltage. When the reverse-bias breakdown voltage is exceeded, a conventional diode will conduct a high current due to avalanche breakdown. Unless this current is limited by external circuits, the diode may be permanently damaged due to overheating at the small (localized) areas of the semiconductor junction where avalanche breakdown conduction is occurring. A Zener diode exhibits almost the same properties, except the device is specially designed so as to have a reduced breakdown voltage, the Zener voltage. By contrast with the conventional device, a reverse-biased Zener diode exhibits a controlled breakdown and allows the current to keep the voltage across the Zener diode close to the Zener breakdown voltage. For example, a diode with a Zener breakdown voltage of 3.2 V exhibits a voltage drop of very nearly 3.2 V across a wide range of reverse currents. The Zener diode is therefore well suited for applications such as the generation of a reference voltage (e.g. for an amplifier stage), or as a voltage stabilizer for low-current applications.
Another mechanism that produces a similar effect is the avalanche effect as in the avalanche diode. The two types of diode are in fact constructed in similar ways and both effects are present in diodes of this type. In silicon diodes up to about 5.6 volts, the Zener effect is the predominant effect and shows a marked negative temperature coefficient. Above 5.6 volts, the avalanche effect dominates and exhibits a positive temperature coefficient.
In a 5.6 V diode, the two effects occur together, and their temperature coefficients nearly cancel each other out, thus the 5.6 V diode is useful in temperature-critical applications. An alternative, which is used for voltage references that need to be highly stable over long periods of time, is to use a Zener diode with a temperature coefficient (TC) of +2 mV/°C (breakdown voltage 6.2–6.3 V) connected in series with a forward-biased silicon diode (or a transistor B–E junction) manufactured on the same chip. The forward-biased diode has a temperature coefficient of −2 mV/°C, causing the TCs to cancel out for a near-zero net temperature coefficient.
It is also worth noting that the temperature coefficient of a 4.7 V Zener diode is close to that of the emitter-base junction of a silicon transistor at around −2 mV/°C, so in a simple regulating circuit where the 4.7 V diode sets the voltage at the base of an NPN transistor (i.e. their coefficients are acting in parallel), the emitter will be at around 4 V and quite stable with temperature.
Modern designs have produced devices with voltages lower than 5.6 V with negligible temperature coefficients. Higher-voltage devices have temperature coefficients that are approximately proportional to the amount by which the breakdown voltage exceeds 5 V; thus a 75 V diode has about ten times the coefficient of a 12 V diode.
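The "about ten times" figure follows directly from the stated rule of thumb that the avalanche-mode temperature coefficient is roughly proportional to the breakdown voltage in excess of about 5 V; the short check below only restates that arithmetic and assumes nothing about absolute coefficient values.

```python
def tc_ratio(v1: float, v2: float, reference_v: float = 5.0) -> float:
    """Ratio of avalanche-mode temperature coefficients under the stated
    rule of thumb that TC is proportional to (breakdown voltage - 5 V)."""
    return (v1 - reference_v) / (v2 - reference_v)

print(tc_ratio(75, 12))   # 10.0, matching the "about ten times" figure
```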
Zener and avalanche diodes, regardless of breakdown voltage, are usually marketed under the umbrella term of "Zener diode".
Under 5.6 V, where the Zener effect dominates, the IV curve near breakdown is much more rounded, which calls for more care in choosing its biasing conditions. The IV curve for Zeners above 5.6 V (being dominated by avalanche) is much more precise at breakdown.
Construction
The Zener diode's operation depends on the heavy doping of its p–n junction. The depletion region formed in the diode is very thin (< 1 μm) and the electric field is consequently very high (about 500 kV/m) even for a small reverse bias voltage of about 5 V, allowing electrons to tunnel from the valence band of the p-type material to the conduction band of the n-type material.
At the atomic scale, this tunneling corresponds to the transport of valence-band electrons into the empty conduction-band states, as a result of the reduced barrier between these bands and high electric fields that are induced due to the high levels of doping on both sides. The breakdown voltage can be controlled quite accurately by the doping process. Adding impurities, or doping, changes the behaviour of the semiconductor material in the diode. In the case of Zener diodes, this heavy doping creates a situation where the diode can operate in the breakdown region. While tolerances within 0.07% are available, commonly available tolerances are 5% and 10%. Breakdown voltage for commonly available Zener diodes can vary from 1.2 V to 200 V.
For diodes that are lightly doped, the breakdown is dominated by the avalanche effect rather than the Zener effect. Consequently, the breakdown voltage is higher (over 5.6 V) for these devices.
Surface Zeners
The emitter–base junction of a bipolar NPN transistor behaves as a Zener diode, with breakdown voltage at about 6.8 V for common bipolar processes and about 10 V for lightly doped base regions in BiCMOS processes. Older processes with poor control of doping characteristics had the variation of Zener voltage up to ±1 V, newer processes using ion implantation can achieve no more than ±0.25 V. The NPN transistor structure can be employed as a surface Zener diode, with collector and emitter connected together as its cathode and base region as anode. In this approach the base doping profile usually narrows towards the surface, creating a region with intensified electric field where the avalanche breakdown occurs. Hot carriers produced by acceleration in the intense field can inject into the oxide layer above the junction and become trapped there. The accumulation of trapped charges can then cause 'Zener walkout', a corresponding change of the Zener voltage of the junction. The same effect can be achieved by radiation damage.
The emitter–base Zener diodes can handle only low currents as the energy is dissipated in the base depletion region which is very small. Higher amounts of dissipated energy (higher current for longer time, or a short very high current spike) causes thermal damage to the junction and/or its contacts. Partial damage of the junction can shift its Zener voltage. Total destruction of the Zener junction by overheating it and causing migration of metallization across the junction ("spiking") can be used intentionally as a 'Zener zap' antifuse.
Subsurface Zeners
A subsurface Zener diode, also called a buried Zener, is a device similar to the surface Zener, but the doping and design is such that the avalanche region is located deeper in the structure, typically several micrometers below the oxide. Hot carriers then lose energy by collisions with the semiconductor lattice before reaching the oxide layer and cannot be trapped there. The Zener walkout phenomenon therefore does not occur here, and the buried Zeners have stable voltage over their entire lifetime. Most buried Zeners have breakdown voltage of 5–7 volts. Several different junction structures are used.
Uses
Zener diodes are widely used as voltage references and as shunt regulators to regulate the voltage across small circuits. When connected in parallel with a variable voltage source so that it is reverse biased, a Zener diode conducts when the voltage reaches the diode's reverse breakdown voltage. From that point on, the low impedance of the diode keeps the voltage across the diode at that value.
In this circuit, a typical voltage reference or regulator, an input voltage, Uin (with + on the top), is regulated down to a stable output voltage Uout. The breakdown voltage of diode D is stable over a wide current range and holds Uout approximately constant even though the input voltage may fluctuate over a wide range. Because of the low impedance of the diode when operated like this, resistor R is used to limit current through the circuit.
In the case of this simple reference, the current flowing in the diode is determined using Ohm's law and the known voltage drop across the resistor R: I_diode = (Uin − Uout) / R.
The value of R must satisfy two conditions:
R must be small enough that the current through D keeps D in reverse breakdown. The value of this current is given in the data sheet for D. For example, the common BZX79C5V6 device, a 5.6 V 0.5 W Zener diode, has a recommended reverse current of 5 mA. If insufficient current exists through D, then Uout is unregulated and less than the nominal breakdown voltage (this differs from voltage-regulator tubes where the output voltage is higher than nominal and could rise as high as Uin). When calculating R, allowance must be made for any current through the external load, not shown in this diagram, connected across Uout.
R must be large enough that the current through D does not destroy the device. If the current through D is ID, the breakdown voltage VB and the maximum power dissipation Pmax must satisfy ID × VB < Pmax.
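As a rough numerical illustration of these two constraints, the following Python sketch brackets the usable range of R; all component values (supply range, Zener voltage, minimum Zener current, load current, power rating) are invented for the example and are not taken from any datasheet.

# Shunt-regulator sizing sketch (illustrative values only).
U_in_min, U_in_max = 9.0, 12.0   # supply voltage range (V)
V_z = 5.6                        # nominal Zener breakdown voltage (V)
I_z_min = 5e-3                   # minimum Zener current to stay in breakdown (A)
I_load_max = 10e-3               # worst-case external load current (A)
P_max = 0.5                      # Zener power rating (W)

# R must be small enough: at the lowest input and highest load the diode
# must still receive at least I_z_min.
R_max = (U_in_min - V_z) / (I_z_min + I_load_max)

# R must be large enough: at the highest input with no load, all of the
# resistor current flows in the diode, and I_z * V_z must stay below P_max.
R_min = (U_in_max - V_z) / (P_max / V_z)

print(f"Choose R between {R_min:.0f} ohm and {R_max:.0f} ohm")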
A load may be placed across the diode in this reference circuit, and as long as the Zener stays in reverse breakdown, the diode provides a stable voltage source to the load. Zener diodes in this configuration are often used as stable references for more advanced voltage regulator circuits.
Shunt regulators are simple, but the requirement that the ballast resistor be small enough to avoid excessive voltage drop during worst-case operation (low input voltage concurrent with high load current) tends to leave a lot of current flowing in the diode much of the time, making for a fairly wasteful regulator with high quiescent power dissipation, suitable only for smaller loads.
These devices are also encountered, typically in series with a base–emitter junction, in transistor stages where selective choice of a device centered on the avalanche or Zener point can be used to introduce a compensating temperature coefficient that balances that of the transistor's p–n junction. An example of this kind of use would be a DC error amplifier used in the feedback loop of a regulated power supply circuit.
Zener diodes are also used in surge protectors to limit transient voltage spikes.
Noise generator
Another application of the Zener diode is the use of its avalanche breakdown noise, which can, for instance, be used for dithering in an analog-to-digital converter when applied at an rms level equivalent to 1 LSB, or to create a random number generator.
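A minimal sketch of the dithering idea, with the avalanche noise modelled simply as Gaussian noise of one-LSB rms (the converter resolution and test signal are invented for the example):

import numpy as np

# Dithering sketch: add noise with an rms of 1 LSB before quantization.
rng = np.random.default_rng(0)
lsb = 1.0 / 256                                    # step of a hypothetical 8-bit converter (full scale 1.0)
signal = 0.5 + 0.4 * np.sin(2 * np.pi * np.arange(10_000) / 500.0)
dither = rng.normal(0.0, lsb, signal.shape)        # stand-in for Zener avalanche noise
quantized = np.round((signal + dither) / lsb) * lsb

# The mean of the dithered, quantized signal tracks the true mean to well
# below one LSB, which plain quantization cannot guarantee.
print(abs(quantized.mean() - signal.mean()), lsb)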
Waveform clipper
Two Zener diodes facing each other in series clip both halves of an input signal. Waveform clippers can be used not only to reshape a signal, but also to prevent voltage spikes from affecting circuits that are connected to the power supply.
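Numerically, such a clipper can be approximated as a symmetric clamp at about ±(Vz + Vf), since one diode conducts forward while the other is in breakdown; the breakdown and forward-drop values below are illustrative assumptions:

import numpy as np

# Back-to-back Zener clipper sketch.
V_z, V_f = 5.6, 0.7                          # illustrative breakdown and forward drops (V)
t = np.linspace(0.0, 2e-3, 1000)
v_in = 12.0 * np.sin(2 * np.pi * 1e3 * t)    # 12 V peak, 1 kHz input
v_out = np.clip(v_in, -(V_z + V_f), V_z + V_f)
print(v_out.min(), v_out.max())              # roughly -6.3 V and +6.3 V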
Voltage shifter
A Zener diode can be applied to a circuit with a resistor to act as a voltage shifter. This circuit lowers the output voltage by a quantity that is equal to the Zener diode's breakdown voltage.
Voltage regulator
A Zener diode can be applied in a voltage regulator circuit to regulate the voltage applied to a load, such as in a linear regulator.
See also
Backward diode
E series of preferred numbers
Transient voltage suppression diode
BZX79 voltage regulator diodes
References
Further reading
TVS/Zener Theory and Design Considerations; ON Semiconductor; 127 pages; 2005; HBD854/D. (Free PDF download)
External links
Zener Diode Axial Part Number Table
Patent US4138280A
Diodes
Voltage stability | Zener diode | [
"Physics"
] | 2,699 | [
"Voltage",
"Voltage stability",
"Physical quantities"
] |
97,911 | https://en.wikipedia.org/wiki/Size-exclusion%20chromatography | Size-exclusion chromatography, also known as molecular sieve chromatography, is a chromatographic method in which molecules in solution are separated by their size and, in some cases, their shape. It is usually applied to large molecules or macromolecular complexes such as proteins and industrial polymers. Typically, when an aqueous solution is used to transport the sample through the column, the technique is known as gel-filtration chromatography, versus the name gel permeation chromatography, which is used when an organic solvent is used as a mobile phase. The chromatography column is packed with fine, porous beads which are commonly composed of dextran, agarose, or polyacrylamide polymers. The pore sizes of these beads are used to estimate the dimensions of macromolecules. SEC is a widely used polymer characterization method because of its ability to provide good molar mass distribution (Mw) results for polymers.
Size exclusion chromatography (SEC) is fundamentally different from all other chromatographic techniques in that separation is based on a simple procedure of classifying molecule sizes rather than any type of interaction.
Applications
The main application of size-exclusion chromatography is the fractionation of proteins and other water-soluble polymers, while gel permeation chromatography is used to analyze the molecular weight distribution of organic-soluble polymers. Neither technique should be confused with gel electrophoresis, where an electric field is used to "pull" molecules through the gel depending on their electrical charges. The amount of time a solute remains within a pore is dependent on the size of the pore. Larger solutes will have access to a smaller volume and vice versa. Therefore, a smaller solute will remain within the pore for a longer period of time compared to a larger solute.
Even though size exclusion chromatography is widely utilized to study natural organic material, there are limitations. One of these limitations is that there is no standard molecular weight marker; thus, there is nothing against which to compare the results. If a precise molecular weight is required, other methods should be used.
Advantages
The advantages of this method include good separation of large molecules from the small molecules with a minimal volume of eluate, and that various solutions can be applied without interfering with the filtration process, all while preserving the biological activity of the particles to separate. The technique is generally combined with others that further separate molecules by other characteristics, such as acidity, basicity, charge, and affinity for certain compounds. With size exclusion chromatography, there are short and well-defined separation times and narrow bands, which lead to good sensitivity. There is also no sample loss because solutes do not interact with the stationary phase.
The other advantage to this experimental method is that in certain cases, it is feasible to determine the approximate molecular weight of a compound. The shape and size of the compound (eluent) determine how the compound interacts with the gel (stationary phase). To determine approximate molecular weight, the elution volumes of compounds with their corresponding molecular weights are obtained and then a plot of “Kav” vs “log(Mw)” is made, where Kav = (Ve − Vo)/(Vt − Vo) and Mw is the molecular mass. This plot acts as a calibration curve, which is used to approximate the desired compound's molecular weight. The Ve component represents the volume at which the intermediate molecules elute, such as molecules that have partial access to the beads of the column. In addition, Vt is the sum of the total volume between the beads and the volume within the beads. The Vo component represents the volume at which the larger molecules elute, which elute in the beginning. Disadvantages are, for example, that only a limited number of bands can be accommodated because the time scale of the chromatogram is short, and, in general, there must be a 10% difference in molecular mass to have a good resolution.
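A minimal sketch of this calibration procedure, using invented column volumes and standards rather than measured data, might look like the following:

import numpy as np

# SEC calibration sketch (all volumes and molecular weights are invented).
Vo, Vt = 8.0, 24.0                                 # void and total volume (mL), assumed
Ve_std = np.array([10.5, 13.0, 16.0, 19.0])        # elution volumes of standards (mL)
Mw_std = np.array([150e3, 44e3, 13.7e3, 1.4e3])    # their molecular weights (Da)

Kav = (Ve_std - Vo) / (Vt - Vo)
slope, intercept = np.polyfit(np.log10(Mw_std), Kav, 1)   # Kav vs log(Mw) calibration line

# Invert the fitted line to estimate the molecular weight of an unknown.
Ve_unknown = 14.5
Kav_unknown = (Ve_unknown - Vo) / (Vt - Vo)
Mw_estimate = 10 ** ((Kav_unknown - intercept) / slope)
print(f"Estimated Mw of the unknown: about {Mw_estimate:.0f} Da")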
Discovery
The technique was invented in 1955 by Grant Henry Lathe and Colin R Ruthven, working at Queen Charlotte's Hospital, London. They later received the John Scott Award for this invention. While Lathe and Ruthven used starch gels as the matrix, Jerker Porath and Per Flodin later introduced dextran gels; other gels with size fractionation properties include agarose and polyacrylamide. A short review of these developments has appeared.
There were also attempts to fractionate synthetic high polymers; however, it was not until 1964, when J. C. Moore of the Dow Chemical Company published his work on the preparation of gel permeation chromatography (GPC) columns based on cross-linked polystyrene with controlled pore size, that a rapid increase of research activity in this field began. It was recognized almost immediately that with proper calibration, GPC was capable of providing molar mass and molar mass distribution information for synthetic polymers. Because the latter information was difficult to obtain by other methods, GPC came rapidly into extensive use.
Theory and method
SEC is used primarily for the analysis of large molecules such as proteins or polymers. SEC works by trapping smaller molecules in the pores of the adsorbent ("stationary phase"). This process is usually performed within a column, which typically consists of a hollow tube tightly packed with micron-scale polymer beads containing pores of different sizes. These pores may be depressions on the surface or channels through the bead. As the solution travels down the column some particles enter into the pores. Larger particles cannot enter into as many pores. The larger the particles, the faster the elution. The larger molecules simply pass by the pores because those molecules are too large to enter the pores. Larger molecules therefore flow through the column more quickly than smaller molecules, that is, the smaller the molecule, the longer the retention time.
One requirement for SEC is that the analyte does not interact with the surface of the stationary phases, with differences in elution time between analytes ideally being based solely on the solute volume the analytes can enter, rather than chemical or electrostatic interactions with the stationary phases. Thus, a small molecule that can penetrate every region of the stationary phase pore system can enter a total volume equal to the sum of the entire pore volume and the interparticle volume. This small molecule elutes late (after the molecule has penetrated all of the pore and interparticle volume, approximately 80% of the column volume). At the other extreme, a very large molecule that cannot penetrate any of the smaller pores can enter only the interparticle volume (~35% of the column volume) and elutes earlier when this volume of mobile phase has passed through the column. The underlying principle of SEC is that particles of different sizes elute (filter) through a stationary phase at different rates. This results in the separation of a solution of particles based on size. Provided that all the particles are loaded simultaneously or near-simultaneously, particles of the same size should elute together.
However, as there are various measures of the size of a macromolecule (for instance, the radius of gyration and the hydrodynamic radius), a fundamental problem in the theory of SEC has been the choice of a proper molecular size parameter by which molecules of different kinds are separated. Experimentally, Benoit and co-workers found an excellent correlation between elution volume and a dynamically based molecular size, the hydrodynamic volume, for several different chain architectures and chemical compositions. The observed correlation based on the hydrodynamic volume became accepted as the basis of universal SEC calibration.
Still, the use of the hydrodynamic volume, a size based on dynamical properties, in the interpretation of SEC data is not fully understood. This is because SEC is typically run under low flow rate conditions where hydrodynamic factor should have little effect on the separation. In fact, both theory and computer simulations assume a thermodynamic separation principle: the separation process is determined by the equilibrium distribution (partitioning) of solute macromolecules between two phases: a dilute bulk solution phase located at the interstitial space and confined solution phases within the pores of column packing material. Based on this theory, it has been shown that the relevant size parameter to the partitioning of polymers in pores is the mean span dimension (mean maximal projection onto a line). Although this issue has not been fully resolved, it is likely that the mean span dimension and the hydrodynamic volume are strongly correlated.
Each size exclusion column has a range of molecular weights that can be separated. The exclusion limit defines the molecular weight at the upper end of the column 'working' range and is where molecules are too large to get trapped in the stationary phase. The lower end of the range is defined by the permeation limit, which defines the molecular weight of a molecule that is small enough to penetrate all pores of the stationary phase. All molecules below this molecular mass are so small that they elute as a single band.
The filtered solution that is collected at the end is known as the eluate. The void volume includes any particles too large to enter the medium, and the solvent volume is known as the column volume.
Commonly used materials for the porous gel beads in size-exclusion chromatography include dextran, agarose, and polyacrylamide polymers.
Factors affecting filtration
In real-life situations, particles in solution do not have a fixed size, resulting in the probability that a particle that would otherwise be hampered by a pore will pass right by it. Also, the stationary-phase particles are not ideally defined; both particles and pores may vary in size. Elution curves, therefore, resemble Gaussian distributions. The stationary phase may also interact in undesirable ways with a particle and influence retention times, though great care is taken by column manufacturers to use stationary phases that are inert and minimize this issue.
As in other forms of chromatography, increasing the column length enhances resolution, and increasing the column diameter increases column capacity. Proper column packing is important for maximum resolution: an over-packed column can collapse the pores in the beads, resulting in a loss of resolution. An under-packed column can reduce the relative surface area of the stationary phase accessible to smaller species, resulting in those species spending less time trapped in pores. Unlike with affinity chromatography techniques, a solvent head at the top of the column can drastically diminish resolution as the sample diffuses prior to loading, broadening the downstream elution.
Analysis
In simple manual columns, the eluent is collected in constant volumes, known as fractions. The more similar the particles are in size, the more likely they are to end up in the same fraction and not be detected separately. More advanced columns overcome this problem by constantly monitoring the eluent.
The collected fractions are often examined by spectroscopic techniques to determine the concentration of the particles eluted. Common spectroscopy detection techniques are refractive index (RI) and ultraviolet (UV). When eluting spectroscopically similar species (such as during biological purification), other techniques may be necessary to identify the contents of each fraction. It is also possible to analyze the eluent flow continuously with RI, LALLS, Multi-Angle Laser Light Scattering MALS, UV, and/or viscosity measurements.
The elution volume (Ve) decreases roughly linearly with the logarithm of the molecular hydrodynamic volume. Columns are often calibrated using 4–5 standard samples (e.g., folded proteins of known molecular weight), and a sample containing a very large molecule such as thyroglobulin to determine the void volume. (Blue dextran is not recommended for Vo determination because it is heterogeneous and may give variable results.) The elution volumes of the standards are divided by the elution volume of the thyroglobulin (Ve/Vo) and plotted against the log of the standards' molecular weights.
Applications
Biochemical applications
In general, SEC is considered a low-resolution chromatography as it does not discern similar species very well, and is therefore often reserved for the final step of a purification. The technique can determine the quaternary structure of purified proteins that have slow exchange times, since it can be carried out under native solution conditions, preserving macromolecular interactions. SEC can also assay protein tertiary structure, as it measures the hydrodynamic volume (not molecular weight), allowing folded and unfolded versions of the same protein to be distinguished. For example, the apparent hydrodynamic radius of a typical protein domain might be 14 Å and 36 Å for the folded and unfolded forms, respectively. SEC allows the separation of these two forms, as the folded form elutes much later due to its smaller size.
Polymer synthesis
SEC can be used as a measure of both the size and the polydispersity of a synthesized polymer, that is, the ability to find the distribution of the sizes of polymer molecules. If standards of a known size are run previously, then a calibration curve can be created to determine the sizes of polymer molecules of interest in the solvent chosen for analysis (often THF). In alternative fashion, techniques such as light scattering and/or viscometry can be used online with SEC to yield absolute molecular weights that do not rely on calibration with standards of known molecular weight. Due to the difference in size of two polymers with identical molecular weights, the absolute determination methods are, in general, more desirable. A typical SEC system can quickly (in about half an hour) give polymer chemists information on the size and polydispersity of the sample. The preparative SEC can be used for polymer fractionation on an analytical scale.
Drawbacks
In SEC, mass is not measured so much as the hydrodynamic volume of the polymer molecules, that is, how much space a particular polymer molecule takes up when it is in solution. However, the approximate molecular weight can be calculated from SEC data because the exact relationship between molecular weight and hydrodynamic volume for polystyrene can be found. For this, polystyrene is used as a standard. But the relationship between hydrodynamic volume and molecular weight is not the same for all polymers, so only an approximate measurement can be obtained.
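One common way this polystyrene-relative limitation is handled is universal calibration combined with the Mark-Houwink relation [η] = K·M^a; the sketch below illustrates the conversion, with K and a values chosen purely for illustration rather than taken from tabulated data:

import math

# Universal-calibration sketch: convert a polystyrene-equivalent molecular
# weight into that of another polymer, assuming equal hydrodynamic volume,
# i.e. K_ps * M_ps**(1 + a_ps) = K_x * M_x**(1 + a_x).
K_ps, a_ps = 1.4e-4, 0.70   # assumed Mark-Houwink parameters for polystyrene in THF
K_x, a_x = 2.0e-4, 0.66     # assumed parameters for the polymer of interest

def true_molar_mass(M_ps_equivalent):
    log_Mx = (math.log10(K_ps / K_x) + (1 + a_ps) * math.log10(M_ps_equivalent)) / (1 + a_x)
    return 10 ** log_Mx

print(true_molar_mass(100_000))   # polystyrene-equivalent 100 kDa -> corrected value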
Another drawback is the possibility of interaction between the stationary phase and the analyte. Any interaction leads to a later elution time and thus mimics a smaller analyte size.
When performing this method, the bands of the eluting molecules may be broadened. This can occur by turbulence caused by the flow of the mobile phase molecules passing through the molecules of the stationary phase. In addition, molecular thermal diffusion and friction between the molecules of the glass walls and the molecules of the eluent contribute to the broadening of the bands. Besides broadening, the bands also overlap with each other. As a result, the eluent usually gets considerably diluted. A few precautions can be taken to prevent the likelihood of the bands broadening. For instance, one can apply the sample in a narrow, highly concentrated band on the top of the column. The more concentrated the eluent is, the more efficient the procedure would be. However, it is not always possible to concentrate the eluent, which can be considered as one more disadvantage.
Absolute size-exclusion chromatography
Absolute size-exclusion chromatography (ASEC) is a technique that couples a light scattering instrument, most commonly multi-angle light scattering (MALS) or another form of static light scattering (SLS), but possibly a dynamic light scattering (DLS) instrument, to a size-exclusion chromatography system for absolute molar mass and/or size measurements of proteins and macromolecules as they elute from the chromatography system.
The definition of “absolute” in this case is that calibration of retention time on the column with a set of reference standards is not required to obtain molar mass or the hydrodynamic size, often referred to as hydrodynamic diameter (DH in units of nm). Non-ideal column interactions, such as electrostatic or hydrophobic surface interactions that modulate retention time relative to standards, do not impact the final result. Likewise, differences between conformation of the analyte and the standard have no effect on an absolute measurement; for example, with MALS analysis, the molar masses of intrinsically disordered proteins are characterized accurately even though they elute at much earlier times than globular proteins with the same molar mass, and the same is true of branched polymers which elute late compared to linear reference standards with the same molar mass. Another benefit of ASEC is that the molar mass and/or size is determined at each point in an eluting peak, and therefore indicates homogeneity or polydispersity within the peak. For example, SEC-MALS analysis of a monodisperse protein will show that the entire peak consists of molecules with the same molar mass, something that is not possible with standard SEC analysis.
Determination of molar mass with SLS requires combining the light scattering measurements with concentration measurements. Therefore SEC-MALS typically includes the light scattering detector and either a differential refractometer or UV/Vis absorbance detector. In addition, MALS determines the rms radius Rg of molecules above a certain size limit, typically 10 nm. SEC-MALS can therefore analyze the conformation of polymers via the relationship of molar mass to Rg. For smaller molecules, either DLS or, more commonly, a differential viscometer is added to determine hydrodynamic radius and evaluate molecular conformation in the same manner.
In SEC-DLS, the sizes of the macromolecules are measured as they elute into the flow cell of the DLS instrument from the size exclusion column set. The hydrodynamic size of the molecules or particles is measured, not their molecular weights. For proteins, a Mark-Houwink type of calculation can be used to estimate the molecular weight from the hydrodynamic size.
A major advantage of DLS coupled with SEC is the ability to obtain enhanced DLS resolution. Batch DLS is quick and simple and provides a direct measure of the average size, but the baseline resolution of DLS is a ratio of 3:1 in diameter. Using SEC, the proteins and protein oligomers are separated, allowing oligomeric resolution. Aggregation studies can also be done using ASEC. Though the aggregate concentration may not be calculated with light scattering (an online concentration detector such as that used in SEC-MALS for molar mass measurement also determines aggregate concentration), the size of the aggregate can be measured, only limited by the maximum size eluting from the SEC columns.
Limitations of ASEC with DLS detection include flow-rate, concentration, and precision. Because a correlation function requires anywhere from 3–7 seconds to properly build, a limited number of data points can be collected across the peak. ASEC with SLS detection is not limited by flow rate and measurement time is essentially instantaneous, and the range of concentration is several orders of magnitude larger than for DLS. However, molar mass analysis with SEC-MALS does require accurate concentration measurements. MALS and DLS detectors are often combined in a single instrument for more comprehensive absolute analysis following separation by SEC.
See also
PEGylation
Gel permeation chromatography
Protein purification
References
External links
Chromatography
Biochemistry methods
Polymers
Polyolefins | Size-exclusion chromatography | [
"Chemistry",
"Materials_science",
"Biology"
] | 4,046 | [
"Biochemistry methods",
"Chromatography",
"Separation processes",
"Polymer chemistry",
"Biochemistry",
"Polymers"
] |
97,914 | https://en.wikipedia.org/wiki/Differential%20scanning%20calorimetry | Differential scanning calorimetry (DSC) is a thermoanalytical technique in which the difference in the amount of heat required to increase the temperature of a sample and reference is measured as a function of temperature. Both the sample and reference are maintained at nearly the same temperature throughout the experiment.
Generally, the temperature program for a DSC analysis is designed such that the sample holder temperature increases linearly as a function of time. The reference sample should have a well-defined heat capacity over the range of temperatures to be scanned.
Additionally, the reference sample must be stable, of high purity, and must not experience much change across the temperature scan. Typically, reference standards have been metals such as indium, tin, bismuth, and lead, but other standards such as polyethylene and fatty acids have been proposed to study polymers and organic compounds, respectively.
The technique was developed by E. S. Watson and M. J. O'Neill in 1962, and introduced commercially at the 1963 Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy.
The first adiabatic differential scanning calorimeter that could be used in biochemistry was developed by P. L. Privalov and D. R. Monaselidze in 1964 at Institute of Physics in Tbilisi, Georgia. The term DSC was coined to describe this instrument, which measures energy directly and allows precise measurements of heat capacity.
Types
There are two main types of DSC: heat-flux DSC, which measures the difference in heat flux between the sample and a reference (which gives it the alternative name Multi-Cell DSC), and power differential DSC, which measures the difference in power supplied to the sample and a reference.
Heat-flux DSC
With heat-flux DSC, the changes in heat flow are calculated by integrating the ΔT curve, where ΔT is the temperature difference between the sample and reference crucibles. For this kind of experiment, a sample and a reference crucible are placed on a sample holder with integrated temperature sensors for temperature measurement of the crucibles. This arrangement is located in a temperature-controlled oven. A special variant of heat-flux DSC uses flat temperature sensors placed vertically around a flat heater; this setup makes it possible to have a small, light, and low-heat-capacity structure while still working like a regular DSC oven.
Power differential DSC
For this kind of setup, also known as Power compensating DSC, the sample and reference crucible are placed in thermally insulated furnaces and not next to each other in the same furnace as in heat-flux-DSC experiments. Then the temperature of both chambers is controlled so that the same temperature is always present on both sides. The electrical power that is required to obtain and maintain this state is then recorded rather than the temperature difference between the two crucibles.
Fast-scan DSC
The 2000s have witnessed the rapid development of Fast-scan DSC (FSC), a novel calorimetric technique that employs micromachined sensors. The key advances of this technique are the ultrahigh scanning rate, which can be as high as 10^6 K/s, and the ultrahigh sensitivity, with a heat capacity resolution typically better than 1 nJ/K.
Nanocalorimetry has attracted much attention in materials science, where it is applied to perform quantitative analysis of rapid phase transitions, particularly on fast cooling. Another emerging area of application of FSC is physical chemistry, with a focus on the thermophysical properties of thermally labile compounds. Quantities like the fusion temperature, fusion enthalpy, sublimation and vaporization pressures, and the corresponding enthalpies of such molecules have become available.
Temperature Modulated DSC
When performing Temperature Modulated DSC (TMDSC, MDSC), the underlying linear heating rate is superimposed by a sinusoidal temperature variation. The benefit of this procedure is the ability to separate overlapping DSC effects by calculating the reversing and the non-reversing signals. The reversing heat flow is related to the changes in specific heat capacity (→ glass transition) while the non-reversing heat flow corresponds to time-dependent phenomena such as curing, dehydration and relaxation.
Detection of phase transitions
The basic principle underlying this technique is that when the sample undergoes a physical transformation such as phase transitions, more or less heat will need to flow to it than the reference to maintain both at the same temperature. Whether less or more heat must flow to the sample depends on whether the process is exothermic or endothermic.
For example, as a solid sample melts to a liquid, it will require more heat flowing to the sample to increase its temperature at the same rate as the reference. This is due to the absorption of heat by the sample as it undergoes the endothermic phase transition from solid to liquid. Likewise, as the sample undergoes exothermic processes (such as crystallization) less heat is required to raise the sample temperature. By observing the difference in heat flow between the sample and reference, differential scanning calorimeters are able to measure the amount of heat absorbed or released during such transitions. DSC may also be used to observe more subtle physical changes, such as glass transitions. It is widely used in industrial settings as a quality control instrument due to its applicability in evaluating sample purity and for studying polymer curing.
DTA
An alternative technique, which shares much in common with DSC, is differential thermal analysis (DTA). In this technique it is the heat flow to the sample and reference that remains the same rather than the temperature. When the sample and reference are heated identically, phase changes and other thermal processes cause a difference in temperature between the sample and reference. Both DSC and DTA provide similar information. DSC measures the energy required to keep both the reference and the sample at the same temperature whereas DTA measures the difference in temperature between the sample and the reference when the same amount of energy has been introduced into both.
DSC curves
The result of a DSC experiment is a curve of heat flux versus temperature or versus time. There are two different conventions: exothermic reactions in the sample are shown with either a positive or a negative peak, depending on the kind of technology used in the experiment. This curve can be used to calculate enthalpies of transitions. This is done by integrating the peak corresponding to a given transition. It can be shown that the enthalpy of transition can be expressed using the following equation: ΔH = K·A,
where ΔH is the enthalpy of transition, K is the calorimetric constant, and A is the area under the curve. The calorimetric constant will vary from instrument to instrument, and can be determined by analyzing a well-characterized sample with known enthalpies of transition.
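A minimal numerical illustration of this peak integration, using a synthetic Gaussian peak and an assumed flat baseline (a real analysis would construct the baseline more carefully):

import numpy as np

# Integrate a synthetic, baseline-corrected DSC peak to obtain an enthalpy.
time = np.linspace(0.0, 120.0, 1201)                    # s
baseline = 0.5e-3                                       # W, assumed flat baseline
peak = 2.0e-3 * np.exp(-((time - 60.0) / 8.0) ** 2)     # W, synthetic melting peak
heat_flow = baseline + peak

corrected = heat_flow - baseline
area = np.sum(0.5 * (corrected[1:] + corrected[:-1]) * np.diff(time))   # J, trapezoid rule
sample_mass = 10e-3                                     # g (10 mg)
print(f"Transition enthalpy: about {area / sample_mass:.1f} J/g")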
Applications
Differential scanning calorimetry can be used to measure a number of characteristic properties of a sample. Using this technique it is possible to observe fusion and crystallization events as well as glass transition temperatures Tg. DSC can also be used to study oxidation, as well as other chemical reactions.
Glass transitions may occur as the temperature of an amorphous solid is increased. These transitions appear as a step in the baseline of the recorded DSC signal. This is due to the sample undergoing a change in heat capacity; no formal phase change occurs.
As the temperature increases, an amorphous solid will become less viscous. At some point the molecules may obtain enough freedom of motion to spontaneously arrange themselves into a crystalline form. This is known as the crystallization temperature (Tc). This transition from amorphous solid to crystalline solid is an exothermic process, and results in a peak in the DSC signal. As the temperature increases the sample eventually reaches its melting temperature (Tm). The melting process results in an endothermic peak in the DSC curve. The ability to determine transition temperatures and enthalpies makes DSC a valuable tool in producing phase diagrams for various chemical systems.
Differential scanning calorimetry can also be used to obtain valuable thermodynamics information about proteins. The thermodynamics analysis of proteins can reveal important information about the global structure of proteins, and protein/ligand interaction. For example, many mutations lower the stability of proteins, while ligand binding usually increases protein stability. Using DSC, this stability can be measured by obtaining Gibbs Free Energy values at any given temperature. This allows researchers to compare the free energy of unfolding between ligand-free protein and protein-ligand complex, or wild type and mutant proteins. DSC can also be used in studying protein/lipid interactions, nucleotides, drug-lipid interactions. In studying protein denaturation using DSC, the thermal melt should be at least to some degree reversible, as the thermodynamics calculations rely on chemical equilibrium.
Experimental considerations
There are various experimental and environmental parameters to consider during DSC measurements. Exemplary potential issues are briefly discussed in the following sections. All statements in these paragraphs are based on the books of Gabbott and Brown.
Crucibles
DSC measurements without crucibles promote the thermal transfer towards the sample and are possible if the DSC is designed for this purpose. Measurements without crucible should only be conducted with chemically stable materials at low temperatures, as otherwise there may be contamination or damage of the calorimeter. The safer way is to use a crucible, which is specified for the desired temperatures and does not react with the sample material (e.g. alumina, gold or platinum crucibles). If the sample is likely to evolve volatiles or is in the liquid state, the crucible should be sealed to prevent contamination. However, if the crucible is sealed, increasing pressure and possible measurement artefacts due to deformation of the crucible must be considered. In this case, crucibles with very small holes (∅~50 μm) or crucibles that can withstand very high pressures should be used.
Sample condition
The sample should be in good contact with the crucible surface. Therefore, the contact surface of a solid bulk sample should be plane parallel. For DSC measurements with powders, a stronger signal might be observed for finer powders due to the enlarged contact surface. The minimum sample mass depends on the transformation to be analyzed. A small sample mass (~10 mg) is sufficient if the heat released or consumed during the transformation is high enough. Heavier samples could be used to detect transformations associated with low heat release or consumption, as larger samples also enlarge the obtained peaks. However, increasing sample size might worsen the resolution due to thermal gradients which may evolve during heating.
Temperature and scan rates
If the peaks are very small, it is possible to enlarge them by increasing the scan rate. Due to the faster scan rate, more energy is released or consumed in a shorter time which leads to higher and therefore more distinct peaks. However, faster scan rates lead to poor temperature resolution because of thermal lag. Due to this thermal lag, two phase transformations (or chemical reactions) occurring in a narrow temperature range might overlap. Generally, heating or cooling rates are too high to detect equilibrium transitions, so there is always a shift to higher or lower temperatures compared to phase diagrams representing equilibrium conditions.
Purge gas
Purge gas is used to control the sample environment, in order to reduce signal noise and to prevent contamination. Mostly nitrogen is used; for temperatures above 600 °C, argon can be utilized to minimize heat loss thanks to its low thermal conductivity. Air or pure oxygen can be used for oxidative tests like oxidative induction time, and helium is used for very low temperatures due to its low boiling point (~4.2 K at 101.325 kPa).
Examples
The technique is widely used across a range of applications, both as a routine quality test and as a research tool. The equipment is easy to calibrate, using low-melting indium (melting point 156.5985 °C), for example, and is a rapid and reliable method of thermal analysis.
Polymers
DSC is used widely for examining polymeric materials to determine their thermal transitions. Important thermal transitions include the glass transition temperature (Tg), crystallization temperature (Tc), and melting temperature (Tm). The observed thermal transitions can be utilized to compare materials, although the transitions alone do not uniquely identify composition. Identification of the composition of unknown materials may be completed using complementary techniques such as IR spectroscopy. Melting points and glass transition temperatures for most polymers are available from standard compilations, and the method can show polymer degradation by the lowering of the expected melting temperature. Tm depends on the molecular weight of the polymer and its thermal history.
The percent crystalline content of a polymer can be estimated from the crystallization/melting peaks of the DSC graph using reference heats of fusion found in the literature. DSC can also be used to study thermal degradation of polymers using an approach such as Oxidative Onset Temperature/Time (OOT); however, the user risks contamination of the DSC cell, which can be problematic. Thermogravimetric Analysis (TGA) may be more useful for decomposition behavior determination. Impurities in polymers can be determined by examining thermograms for anomalous peaks, and plasticisers can be detected at their characteristic boiling points. In addition, examination of minor events in first heat thermal analysis data can be useful as these apparently "anomalous peaks" can in fact also be representative of process or storage thermal history of the material or polymer physical aging. Comparison of first and second heat data collected at consistent heating rates can allow the analyst to learn about both polymer processing history and material properties. (see J.H.Flynn.(1993) Analysis of DSC results by integration. Thermochimica Acta, 217, 129-149.)
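In its simplest form, the crystallinity estimate mentioned above is just a ratio of measured to literature heats of fusion, with any cold-crystallisation enthalpy observed on heating subtracted first; the numbers below are illustrative only:

# Percent-crystallinity sketch (illustrative numbers, not literature values).
delta_H_melt = 45.0          # J/g, measured melting enthalpy from the DSC peak
delta_H_cold_cryst = 10.0    # J/g, measured cold-crystallisation enthalpy, if any
delta_H_100 = 140.0          # J/g, assumed heat of fusion of the fully crystalline polymer

crystallinity = 100.0 * (delta_H_melt - delta_H_cold_cryst) / delta_H_100
print(f"Estimated crystallinity: {crystallinity:.1f} %")   # 25.0 %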
Liquid crystals
DSC is used in the study of liquid crystals. As some forms of matter go from solid to liquid they go through a third state, which displays properties of both phases. This anisotropic liquid is known as a liquid crystalline or mesomorphous state. Using DSC, it is possible to observe the small energy changes that occur as matter transitions from a solid to a liquid crystal and from a liquid crystal to an isotropic liquid.
Oxidative stability
Using differential scanning calorimetry to study the stability to oxidation of samples generally requires an airtight sample chamber. It can be used to determine the oxidative-induction time (OIT) of a sample. Such tests are usually done isothermally (at constant temperature) by changing the atmosphere of the sample. First, the sample is brought to the desired test temperature under an inert atmosphere, usually nitrogen. Oxygen is then added to the system. Any oxidation that occurs is observed as a deviation in the baseline. Such analysis can be used to determine the stability and optimum storage conditions for a material or compound. DSC equipment can also be used to determine the Oxidative-Onset Temperature (OOT) of a material. In this test a sample (and a reference) are exposed to an oxygen atmosphere and subjected to a constant rate of heating (typically from 50 to 300 °C). The DSC heat flow curve will deviate when the reaction with oxygen begins (the reaction being either exothermic or endothermic). Both OIT and OOT tests are used as tools for determining the activity of antioxidants.
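One simple way to extract an oxidative-induction time from such an isothermal record is to find where the heat-flow signal first departs from its pre-oxidation baseline by more than a chosen threshold; the data, switch time, and threshold below are synthetic:

import numpy as np

# OIT sketch: detect when heat flow departs from the baseline after the
# switch to oxygen at t_oxygen (all numbers are synthetic).
t = np.linspace(0.0, 60.0, 6001)                                       # minutes
t_oxygen = 5.0
heat_flow = np.where(t > 25.0, 0.05 * (t - 25.0) ** 2, 0.0)            # mW, oxidation exotherm
heat_flow = heat_flow + np.random.default_rng(1).normal(0.0, 0.01, t.shape)  # detector noise

baseline = heat_flow[(t > t_oxygen) & (t < t_oxygen + 5.0)].mean()
threshold = 0.1                                                        # mW above baseline
onset_index = np.argmax((heat_flow > baseline + threshold) & (t > t_oxygen))
print(f"Oxidative induction time: about {t[onset_index] - t_oxygen:.1f} min")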
Safety screening
DSC makes a reasonable initial safety screening tool. In this mode the sample is housed in a non-reactive crucible (often gold or gold-plated steel) that can withstand pressure (typically up to 100 bar). The presence of an exothermic event can then be used to assess the stability of a substance to heat. However, due to a combination of relatively poor sensitivity, slower than normal scan rates (typically 2–3 °C/min, due to a much heavier crucible) and unknown activation energy, it is necessary to deduct about 75–100 °C from the initial start of the observed exotherm to suggest a maximal temperature for the material. A much more accurate data set can be obtained from an adiabatic calorimeter, but such a test may take 2–3 days from ambient temperature, stepping in 3 °C increments every half-hour.
Drug analysis
DSC is widely used in the pharmaceutical and polymer industries. For the polymer chemist, DSC is a handy tool for studying curing processes, which allows the fine tuning of polymer properties. The cross-linking of polymer molecules that occurs in the curing process is exothermic, resulting in a negative peak in the DSC curve that usually appears soon after the glass transition.
In the pharmaceutical industry it is necessary to have well-characterized drug compounds in order to define processing parameters. For instance, if it is necessary to deliver a drug in the amorphous form, it is desirable to process the drug at temperatures below those at which crystallization can occur.
General chemical analysis
Freezing-point depression can be used as a purity analysis tool when analysed by differential scanning calorimetry. This is possible because the temperature range over which a mixture of compounds melts is dependent on their relative amounts. Consequently, less pure compounds will exhibit a broadened melting peak that begins at lower temperature than a pure compound.
See also
Chemical thermodynamics
Calorimetry
Endothermic
Exothermic
Forensic engineering
Forensic polymer engineering
Glass transition temperature
Phase transitions
Polymer
Pressure perturbation calorimetry
Thermal and Evolved Gas Analyzer
References
Further reading
External links
Materials science
Biophysics
Scientific techniques
Calorimetry | Differential scanning calorimetry | [
"Physics",
"Materials_science",
"Engineering",
"Biology"
] | 3,628 | [
"nan",
"Applied and interdisciplinary physics",
"Materials science",
"Biophysics"
] |
98,093 | https://en.wikipedia.org/wiki/Transcranial%20magnetic%20stimulation | Transcranial magnetic stimulation (TMS) is a noninvasive neurotherapy, a form of brain stimulation in which a changing magnetic field is used to induce an electric current at a specific area of the brain through electromagnetic induction. An electric pulse generator, or stimulator, is connected to a magnetic coil connected to the scalp. The stimulator generates a changing electric current within the coil which creates a varying magnetic field, inducing a current within a region in the brain itself.
TMS has shown diagnostic and therapeutic potential in the central nervous system across a wide variety of disease states in neurology and mental health, but clinical worth has not been demonstrated for most of these conditions.
Adverse effects of TMS appear rare and include fainting and seizure.
Medical uses
TMS does not require surgery or electrode implantation.
Its use can be diagnostic and/or therapeutic. Effects vary based on frequency and intensity of the magnetic pulses as well as the length of treatment, which dictates the total number of pulses given. TMS treatments are approved by the FDA in the US and by NICE in the UK for the treatment of depression and are provided by private clinics and some VA medical centers. TMS stimulates cortical tissue without the pain sensations produced in transcranial electrical stimulation.
Diagnosis
TMS can be used clinically to measure activity and function of specific brain circuits in humans, most commonly with single or paired magnetic pulses. The most widely accepted use is in measuring the connection between the primary motor cortex of the central nervous system and the peripheral nervous system to evaluate damage related to past or progressive neurologic insult. TMS has utility as a diagnostic instrument for myelopathy, amyotrophic lateral sclerosis, and multiple sclerosis.
Treatment
There is some evidence that TMS may have applications for a number of conditions including depression, fibromyalgia and neuropathic pain, and TMS treatment is covered by most private insurance plans as well as by traditional Medicare, but for no condition does the evidence rise to the level of showing clinical relevance.
Adverse effects
TMS is generally advertised as a safe alternative to medications such as SSRIs. The greatest immediate risk from TMS is fainting, though this is uncommon. Seizures have been reported, but are rare.
Risks are higher for therapeutic repetitive TMS (rTMS) than for single or paired diagnostic TMS. Adverse effects generally increase with higher frequency stimulation.
Procedure
During the procedure, a magnetic coil is positioned at the head of the person receiving the treatment using anatomical landmarks on the skull, in particular the inion and nasion. The coil is then connected to a pulse generator, or stimulator, that delivers electric current to the coil.
Physics
TMS uses electromagnetic induction to generate an electric current across the scalp and skull. A plastic-enclosed coil of wire is held next to the skull and when activated, produces a varying magnetic field oriented orthogonally to the plane of the coil. The changing magnetic field then induces an electric current in the brain that activates nearby nerve cells in a manner similar to a current applied superficially at the cortical surface.
The magnetic field is about the same strength as that used in magnetic resonance imaging (MRI), and the pulse generally reaches no more than 5 centimeters into the brain unless a modified coil and technique are used for deeper stimulation.
Transcranial magnetic stimulation is achieved by quickly discharging current from a large capacitor into a coil to produce pulsed magnetic fields between 2 and 3 teslas in strength. Directing the magnetic field pulse at a targeted area in the brain causes a localized electrical current which can then either depolarize or hyperpolarize neurons at that site.
The induced electric field inside the brain tissue causes a change in transmembrane potentials resulting in depolarization or hyperpolarization of neurons, causing them to be more or less excitable, respectively.
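As a rough order-of-magnitude sketch of this induction, the stimulated region can be treated as a circular loop of radius r in a spatially uniform field that rises to its peak value over the pulse rise time; Faraday's law then gives E = (r/2)·dB/dt on that circle. The numbers below are illustrative and do not model any particular coil:

# Order-of-magnitude estimate of the electric field induced under a TMS coil.
B_peak = 2.5      # T, peak field (within the 2-3 T range quoted above)
t_rise = 100e-6   # s, assumed rise time of the pulse
r = 0.01          # m, assumed radius of the stimulated cortical region

dB_dt = B_peak / t_rise          # about 2.5e4 T/s
E = 0.5 * r * dB_dt              # V/m, from Faraday's law for a uniform field
print(f"Induced electric field: about {E:.0f} V/m")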
TMS usually stimulates to a depth from 2 to 4 cm below the surface, depending on the coil and intensity used. Consequently, only superficial brain areas can be affected. Deep TMS can reach up to 6 cm into the brain to stimulate deeper layers of the motor cortex, such as that which controls leg motion. The path of this current can be difficult to model because the brain is irregularly shaped with variable internal density and water content, leading to a nonuniform magnetic field strength and conduction throughout its tissues.
Frequency and duration
The effects of TMS can be divided based on frequency, duration and intensity (amplitude) of stimulation:
Single or paired pulse TMS causes neurons in the neocortex under the site of stimulation to depolarize and discharge an action potential. If used in the primary motor cortex, it produces muscle activity referred to as a motor evoked potential (MEP) which can be recorded on electromyography. If used on the occipital cortex, 'phosphenes' (flashes of light) might be perceived by the subject. In most other areas of the cortex, there is no conscious effect, but behaviour may be altered (e.g., slower reaction time on a cognitive task), or changes in brain activity may be detected using diagnostic equipment.
Repetitive TMS (rTMS) produces longer-lasting effects which persist past the period of stimulation. rTMS can increase or decrease the excitability of the corticospinal tract depending on the intensity of stimulation, coil orientation, and frequency. Low frequency rTMS with a stimulus frequency less than 1 Hz is believed to inhibit cortical firing, while a stimulus frequency greater than 1 Hz, referred to as high frequency, is believed to provoke it. Though its mechanism is not clear, it has been suggested as being due to a change in synaptic efficacy related to long-term potentiation (LTP) and long-term depression like plasticity (LTD-like plasticity).
Coil types
Most devices use a coil shaped like a figure-eight to deliver a shallow magnetic field that affects more superficial neurons in the brain. Differences in magnetic coil design are considered when comparing results, with important elements including the type of material, geometry and specific characteristics of the associated magnetic pulse.
The core material may be either a magnetically inert substrate ('air core'), or a solid, ferromagnetically active material ('solid core'). Solid cores result in more efficient transfer of electrical energy to a magnetic field and reduce energy loss to heat, and so can be operated with the higher volume of therapy protocols without interruption due to overheating. Varying the geometric shape of the coil itself can cause variations in focality, shape, and depth of penetration. Differences in coil material and its power supply also affect magnetic pulse width and duration.
A number of different types of coils exist, each of which produce different magnetic fields. The round coil is the original used in TMS. Later, the figure-eight (butterfly) coil was developed to provide a more focal pattern of activation in the brain, and the four-leaf coil for focal stimulation of peripheral nerves. The double-cone coil conforms more to the shape of the head. The Hesed (H-core), circular crown and double cone coils allow more widespread activation and a deeper magnetic penetration. They are supposed to impact deeper areas in the motor cortex and cerebellum controlling the legs and pelvic floor, for example, though the increased depth comes at the cost of a less focused magnetic pulse.
Research directions
For Parkinson's disease, early results suggest that low frequency stimulation may have an effect on medication associated dyskinesia, and that high frequency stimulation improves motor function.
History
Luigi Galvani (1737–1798) undertook research on the effects of electricity on the body in the late-eighteenth century and laid the foundations for the field of electrophysiology. In the 1830s Michael Faraday (1791–1867) discovered that an electrical current had a corresponding magnetic field, and that changing one could induce its counterpart.
Work to directly stimulate the human brain with electricity started in the late 1800s, and by the 1930s the Italian physicians Cerletti and Bini had developed electroconvulsive therapy (ECT). ECT became widely used to treat mental illness, and ultimately overused, as it began to be seen as a panacea. This led to a backlash in the 1970s.
In 1980 Merton and Morton successfully used transcranial electrical stimulation (TES) to stimulate the motor cortex. However, this process was very uncomfortable, and subsequently Anthony T. Barker began to search for an alternative to TES. He began exploring the use of magnetic fields to alter electrical signaling within the brain, and the first stable TMS devices were developed in 1985. They were originally intended as diagnostic and research devices, with evaluation of their therapeutic potential being a later development. The United States' FDA first approved TMS devices in October 2008.
Regulatory status
Speech mapping prior to neurosurgery
Nexstim obtained Section 510(k) clearance under the United States Federal Food, Drug, and Cosmetic Act for the assessment of the primary motor cortex for pre-procedural planning in December 2009 and for neurosurgical planning in June 2011.
Depression
TMS is approved as a Class II medical device under the "de novo pathway".
Obsessive–compulsive disorder (OCD)
In August 2018, the US Food and Drug Administration (US FDA) authorized the use of TMS developed by the Israeli company Brainsway in the treatment of obsessive–compulsive disorder (OCD).
In 2020, US FDA authorized the use of TMS developed by the U.S. company MagVenture Inc. in the treatment of OCD.
In 2023, US FDA authorized the use of TMS developed by the U.S. company Neuronetics Inc. in the treatment of OCD.
Other neurological areas
In the European Economic Area, various versions of deep TMS H-coils have CE marking for
Alzheimer's disease,
autism,
bipolar disorder,
epilepsy,
chronic pain,
major depressive disorder,
Parkinson's disease,
post-traumatic stress disorder (PTSD),
schizophrenia (negative symptoms)
and to aid smoking cessation.
One review found tentative benefit for cognitive enhancement in healthy people.
Coverage by health services and insurers
United Kingdom
The United Kingdom's National Institute for Health and Care Excellence (NICE) issues guidance to the National Health Service (NHS) in England, Wales, Scotland and Northern Ireland (UK). NICE guidance does not cover whether or not the NHS should fund a procedure. Local NHS bodies (primary care trusts and hospital trusts) make decisions about funding after considering the clinical effectiveness of the procedure and whether the procedure represents value for money for the NHS.
NICE evaluated TMS for severe depression in 2007, finding that TMS was safe, but with insufficient evidence for its efficacy. Guidance was updated and replaced in 2015, concluding that evidence for short‑term efficacy of repetitive transcranial magnetic stimulation (rTMS) for depression was adequate, although the clinical response is variable, and ruling that rTMS for depression may be used with arrangements for clinical governance and audit.
In January 2014, NICE reported the results of an evaluation of TMS for treating and preventing migraine (IPG 477). NICE found that short-term TMS is safe but there is insufficient evidence to evaluate safety for long-term and frequent uses. It found that evidence on the efficacy of TMS for the treatment of migraine is limited in quantity, and that evidence for the prevention of migraine is limited in both quality and quantity.
Use of rTMS in the UK was reported to have remained limited due to the cost of equipment and of establishing treatment centres. Camilla Nord, head of the Mental Health Neuroscience Lab at the University of Cambridge, said: "The NHS has unfortunately been far behind the US and Canada on rTMS, which is at least as effective as antidepressants, if not more".
United States
Commercial health insurance
In 2013, several commercial health insurance plans in the United States, including Anthem, Health Net, Kaiser Permanente, and Blue Cross Blue Shield of Nebraska and of Rhode Island, covered TMS for the treatment of depression for the first time. In contrast, UnitedHealthcare issued a medical policy for TMS in 2013 that stated there is insufficient evidence that the procedure is beneficial for health outcomes in patients with depression. UnitedHealthcare noted that methodological concerns raised about the scientific evidence studying TMS for depression include small sample size, lack of a validated sham comparison in randomized controlled studies, and variable uses of outcome measures. Other commercial insurance plans whose 2013 medical coverage policies stated that the role of TMS in the treatment of depression and other disorders had not been clearly established or remained investigational included Aetna, Cigna and Regence.
Medicare
Policies for Medicare coverage vary among local jurisdictions within the Medicare system, and Medicare coverage for TMS has varied among jurisdictions and with time. For example:
In early 2012 in New England, Medicare covered TMS for the first time in the United States. However, that jurisdiction later decided to end coverage after October, 2013.
In August 2012, the jurisdiction covering Arkansas, Louisiana, Mississippi, Colorado, Texas, Oklahoma, and New Mexico determined that there was insufficient evidence to cover the treatment, but the same jurisdiction subsequently determined that Medicare would cover TMS for the treatment of depression after December 2013.
Limitations
Stimulating brain tissue with non-invasive magnetic field methods raises serious concerns, such as uncertainty in the dose and in the localisation of the stimulation effect.
See also
Cortical stimulation mapping
Cranial electrotherapy stimulation
Electrical brain stimulation
Electroconvulsive therapy
Low field magnetic stimulation
My Beautiful Broken Brain
Neuromodulation
Neurostimulation
Neurotechnology
Neurotherapy
Non-invasive cerebellar stimulation
Transcranial alternating current stimulation
Transcranial direct-current stimulation
Transcranial random noise stimulation
Vagus nerve stimulation
References
Diagnostic neurology
Physical psychiatric treatments
Electrotherapy
Magnetic devices
Neurophysiology
Neuropsychology
Neurotechnology
Treatment of bipolar disorder
Treatment of depression
Medical devices
1985 introductions
2008 introductions
Bioelectromagnetics | Transcranial magnetic stimulation | [
"Biology"
] | 2,896 | [
"Medical devices",
"Medical technology"
] |
98,292 | https://en.wikipedia.org/wiki/Impossible%20cube | The impossible cube or irrational cube is an impossible object invented by M.C. Escher for his print Belvedere. It is a two-dimensional figure that superficially resembles a perspective drawing of a three-dimensional cube, with its features drawn inconsistently from the way they would appear in an actual cube.
Usage in art
In Escher's Belvedere a boy seated at the foot of a building holds an impossible cube. A drawing of the related Necker cube (with its crossings circled) lies at his feet, while the building itself shares some of the same impossible features as the cube.
Artists other than Escher, including Jos De Mey, have also made artworks featuring the impossible cube.
A doctored photograph purporting to be of an impossible cube was published in the June 1966 issue of Scientific American, where it was called a "Freemish crate". An impossible cube has also been featured on an Austrian postage stamp.
Explanation
The impossible cube draws upon the ambiguity present in a Necker cube illustration, in which a cube is drawn with its edges as line segments, and can be interpreted as being in either of two different three-dimensional orientations.
An impossible cube is usually rendered as a Necker cube in which the line segments representing the edges have been replaced by what are apparently solid beams.
In Escher's print, the top four joints of the cube, and the upper of the two crossings between its beams, match one of the two interpretations of the Necker cube, while the bottom four joints and the bottom crossing match the other interpretation. Other variations of the impossible cube combine these features in different ways; for instance, the one shown in Escher's painting draws all eight joints according to one interpretation of the Necker cube and both crossings according to the other interpretation.
The apparent solidity of the beams gives the impossible cube greater visual ambiguity than the Necker cube, which is less likely to be perceived as an impossible object. The illusion plays on the human eye's interpretation of two-dimensional pictures as three-dimensional objects. It is possible for three-dimensional objects to have the visual appearance of the impossible cube when seen from certain angles, either by making carefully placed cuts in the supposedly solid beams or by using forced perspective, but human experience with right-angled objects makes the impossible appearance seem more likely than the reality.
See also
Penrose triangle
Blivet
References
Optical illusions
Impossible objects
Cubes
M. C. Escher | Impossible cube | [
"Physics"
] | 503 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
99,293 | https://en.wikipedia.org/wiki/Shape%20of%20the%20universe | In physical cosmology, the shape of the universe refers to both its local and global geometry. Local geometry is defined primarily by its curvature, while the global geometry is characterised by its topology (which itself is constrained by curvature). General relativity explains how spatial curvature (local geometry) is constrained by gravity. The global topology of the universe cannot be deduced from measurements of curvature inferred from observations within the family of homogeneous general relativistic models alone, due to the existence of locally indistinguishable spaces with varying global topological characteristics. For example, a multiply connected space like a 3-torus has zero curvature everywhere but is finite in extent, whereas a flat simply connected space (such as Euclidean space) is infinite in extent.
Current observational evidence (WMAP, BOOMERanG, and Planck, for example) implies that the observable universe is spatially flat to within a 0.4% margin of error of the curvature density parameter, with an unknown global topology. It is currently unknown whether the universe is simply connected like Euclidean space or multiply connected like a torus. To date, no compelling evidence has been found suggesting the topology of the universe is not simply connected, though it has not been ruled out by astronomical observations.
Shape of the observable universe
The universe's structure can be examined from two angles:
Local geometry: This relates to the curvature of the universe, primarily concerning what we can observe.
Global geometry: This pertains to the universe's overall shape and structure.
The observable universe (of a given observer) is a roughly spherical region extending about 46 billion light-years in all directions from that observer (taken to be on the present-day Earth unless specified otherwise). It appears older and more redshifted the deeper we look into space. In theory, we could look all the way back to the Big Bang, but in practice we can only see as far as the cosmic microwave background (CMB), emitted roughly 380,000 years after the Big Bang, as anything beyond that is opaque. Studies show that the observable universe is isotropic and homogeneous on the largest scales.
If the observable universe encompasses the entire universe, we might determine its structure through observation. However, if the observable universe is smaller, we can only grasp a portion of it, making it impossible to deduce the global geometry through observation. Different mathematical models of the universe's global geometry can be constructed, all consistent with current observations and general relativity. Hence, it is unclear whether the observable universe matches the entire universe or is significantly smaller, though it is generally accepted that the universe is larger than the observable universe.
The universe may be compact in some dimensions and not in others, similar to how a cuboid is longer in one dimension than the others. Scientists test these models by looking for novel implications – phenomena not yet observed but necessary if the model is accurate. For instance, a small closed universe would produce multiple images of the same object in the sky, though not necessarily of the same age. As of 2024, current observational evidence suggests that the observable universe is spatially flat with an unknown global structure.
Curvature of the universe
The curvature is a quantity describing how the geometry of a space differs locally from flat space. The curvature of any locally isotropic space (and hence of a locally isotropic universe) falls into one of the three following cases:
Zero curvature (flat): a drawn triangle's angles add up to 180° and the Pythagorean theorem holds; such 3-dimensional space is locally modeled by Euclidean space E³.
Positive curvature: a drawn triangle's angles add up to more than 180°; such 3-dimensional space is locally modeled by a region of a 3-sphere S³.
Negative curvature: a drawn triangle's angles add up to less than 180°; such 3-dimensional space is locally modeled by a region of a hyperbolic space H³.
Curved geometries are in the domain of non-Euclidean geometry. An example of a positively curved space would be the surface of a sphere such as the Earth. A triangle drawn from the equator to a pole will have at least two angles equal to 90°, which makes the sum of the 3 angles greater than 180°. An example of a negatively curved surface would be the shape of a saddle or mountain pass. A triangle drawn on a saddle surface will have the sum of its angles adding up to less than 180°.
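As a worked check of the positive-curvature case (a standard spherical-geometry exercise, not taken from this article), consider the triangle bounded on a sphere of radius R by the equator and two meridians 90° of longitude apart:

```latex
% Each meridian meets the equator at a right angle, and the two meridians meet
% at the pole with an angle equal to their longitude difference (90 degrees).
\text{angle sum} = 90^\circ + 90^\circ + 90^\circ = 270^\circ > 180^\circ
% The 90-degree excess matches the Gauss--Bonnet relation: the triangle covers
% one eighth of the sphere, so
\text{excess} = K \cdot A = \frac{1}{R^2} \cdot \frac{4\pi R^2}{8} = \frac{\pi}{2} \;(= 90^\circ)
```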
General relativity explains that mass and energy bend the curvature of spacetime, and this is used to determine what curvature the universe has via a value called the density parameter, Omega (Ω). The density parameter is the average density of the universe divided by the critical energy density, that is, the mass-energy density needed for a universe to be flat. Put another way (a symbolic summary follows the list below),
If Ω = 1, the universe is flat.
If Ω > 1, there is positive curvature.
If Ω < 1, there is negative curvature.
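The symbolic form of this relation, as it follows from the Friedmann equation (a textbook summary rather than a quotation from this article), is:

```latex
\Omega \equiv \frac{\rho}{\rho_{\text{crit}}},
\qquad
\rho_{\text{crit}} = \frac{3H^2}{8\pi G},
\qquad
\Omega - 1 = \frac{k c^2}{a^2 H^2}
```

so the sign of Ω − 1 is the sign of the curvature constant k: positive for a closed universe, zero for a flat one, and negative for an open one.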
Scientists could experimentally calculate to determine the curvature two ways. One is to count all the mass–energy in the universe and take its average density, then divide that average by the critical energy density. Data from the Wilkinson Microwave Anisotropy Probe (WMAP) as well as the Planck spacecraft give values for the three constituents of all the mass–energy in the universe – normal mass (baryonic matter and dark matter), relativistic particles (predominantly photons and neutrinos), and dark energy or the cosmological constant:
Ωmass ≈
Ωrelativistic ≈
ΩΛ ≈
Ωtotal = Ωmass + Ωrelativistic + ΩΛ =
The actual value of the critical density is measured as ρcritical = . From these values, within experimental error, the universe seems to be spatially flat.
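A minimal sketch of this bookkeeping in Python, using purely illustrative placeholder values (the component names mirror the list above, but the numbers are hypothetical rather than the measured values discussed in the text):

```python
# Classify the curvature of a model universe from its density parameters.
# All numerical inputs below are placeholders for illustration only.

def classify_curvature(omega_mass, omega_relativistic, omega_lambda, tol=1e-2):
    """Sum the density parameters and report the implied spatial curvature."""
    omega_total = omega_mass + omega_relativistic + omega_lambda
    if abs(omega_total - 1.0) <= tol:
        return omega_total, "flat (zero curvature)"
    if omega_total > 1.0:
        return omega_total, "closed (positive curvature)"
    return omega_total, "open (negative curvature)"

if __name__ == "__main__":
    # Hypothetical component values, roughly of the kind such surveys report.
    total, shape = classify_curvature(omega_mass=0.31,
                                      omega_relativistic=1e-4,
                                      omega_lambda=0.69)
    print(f"Omega_total = {total:.3f} -> {shape}")
```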
Another way to measure Ω is to do so geometrically, by measuring an angle across the observable universe. This can be done by using the CMB and measuring the power spectrum and temperature anisotropy. For instance, one can imagine finding a gas cloud that is not in thermal equilibrium because it is so large that light cannot propagate the thermal information across it. Knowing the propagation speed gives the size of the gas cloud; combined with the distance to the gas cloud, this gives two sides of a triangle, from which the angles can be determined. Using a method similar to this, the BOOMERanG experiment determined that the sum of the angles is 180° within experimental error, corresponding to Ωtotal ≈ 1.
These and other astronomical measurements constrain the spatial curvature to be very close to zero, although they do not constrain its sign. This means that although the local geometries of spacetime are generated by the theory of relativity based on spacetime intervals, we can approximate 3-space by the familiar Euclidean geometry.
The Friedmann–Lemaître–Robertson–Walker (FLRW) model using Friedmann equations is commonly used to model the universe. The FLRW model provides a curvature of the universe based on the mathematics of fluid dynamics, that is, modeling the matter within the universe as a perfect fluid. Although stars and structures of mass can be introduced into an "almost FLRW" model, a strictly FLRW model is used to approximate the local geometry of the observable universe. Another way of saying this is that, if all forms of dark energy are ignored, then the curvature of the universe can be determined by measuring the average density of matter within it, assuming that all matter is evenly distributed (rather than the distortions caused by 'dense' objects such as galaxies). This assumption is justified by the observations that, while the universe is "weakly" inhomogeneous and anisotropic (see the large-scale structure of the cosmos), it is on average homogeneous and isotropic when analyzed at a sufficiently large spatial scale.
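For reference, the spatial curvature enters the FLRW model through its line element, whose standard textbook form (not quoted from this article) is:

```latex
ds^2 = -c^2\,dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - k r^2} + r^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) \right]
```

where a(t) is the scale factor and k ∈ {+1, 0, −1} selects positive, zero, or negative spatial curvature.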
Global universal structure
Global structure covers the geometry and the topology of the whole universe—both the observable universe and beyond. While the local geometry does not determine the global geometry completely, it does limit the possibilities, particularly a geometry of a constant curvature. The universe is often taken to be a geodesic manifold, free of topological defects; relaxing either of these complicates the analysis considerably. A global geometry is a local geometry plus a topology. It follows that a topology alone does not give a global geometry: for instance, Euclidean 3-space and hyperbolic 3-space have the same topology but different global geometries.
As stated in the introduction, investigations within the study of the global structure of the universe include:
whether the universe is infinite or finite in extent,
whether the geometry of the global universe is flat, positively curved, or negatively curved, and,
whether the topology is simply connected (for example, like a sphere) or else multiply connected (for example, like a torus).
Infinite or finite
One of the unanswered questions about the universe is whether it is infinite or finite in extent. For intuition, a finite universe has a finite volume that, for example, could in theory be filled with a finite amount of material, while an infinite universe is unbounded and no numerical volume could possibly fill it. Mathematically, the question of whether the universe is infinite or finite is referred to as boundedness. An infinite universe (unbounded metric space) means that there are points arbitrarily far apart: for any distance d, there are points that are at least a distance d apart. A finite universe is a bounded metric space, where there is some distance D such that all points are within distance D of each other. The smallest such D is called the diameter of the universe, in which case the universe has a well-defined "volume" or "scale".
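Restated in metric-space notation (a direct formalisation of the definitions just given):

```latex
\text{unbounded (infinite): } \forall d > 0 \;\, \exists\, x, y \;:\; \operatorname{dist}(x, y) \ge d
\qquad
\text{bounded (finite): } \exists D \;\, \forall\, x, y \;:\; \operatorname{dist}(x, y) \le D
```

The smallest such D, namely the supremum of dist(x, y) over all point pairs, is the diameter referred to above.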
With or without boundary
Assuming a finite universe, the universe can either have an edge or no edge. Many finite mathematical spaces, e.g., a disc, have an edge or boundary. Spaces that have an edge are difficult to treat, both conceptually and mathematically. Namely, it is difficult to state what would happen at the edge of such a universe. For this reason, spaces that have an edge are typically excluded from consideration.
However, there exist many finite spaces, such as the 3-sphere and 3-torus, that have no edges. Mathematically, these spaces are referred to as being compact without boundary. The term compact means that it is finite in extent ("bounded") and complete. The term "without boundary" means that the space has no edges. Moreover, so that calculus can be applied, the universe is typically assumed to be a differentiable manifold. A mathematical object that possesses all these properties, compact without boundary and differentiable, is termed a closed manifold. The 3-sphere and 3-torus are both closed manifolds.
Observational methods
In the 1990s and early 2000s, empirical methods for determining the global topology using measurements on scales that would show multiple imaging were proposed and applied to cosmological observations.
In the 2000s and 2010s, it was shown that, since the universe is inhomogeneous as shown in the cosmic web of large-scale structure, acceleration effects measured on local scales in the patterns of the movements of galaxies should, in principle, reveal the global topology of the universe.
Curvature
The curvature of the universe places constraints on the topology. If the spatial geometry is spherical, i.e., possess positive curvature, the topology is compact. For a flat (zero curvature) or a hyperbolic (negative curvature) spatial geometry, the topology can be either compact or infinite. Many textbooks erroneously state that a flat or hyperbolic universe implies an infinite universe; however, the correct statement is that a flat universe that is also simply connected implies an infinite universe. For example, Euclidean space is flat, simply connected, and infinite, but there are tori that are flat, multiply connected, finite, and compact (see flat torus).
In general, local to global theorems in Riemannian geometry relate the local geometry to the global geometry. If the local geometry has constant curvature, the global geometry is very constrained, as described in Thurston geometries.
The latest research shows that even the most powerful future experiments (like the SKA) will not be able to distinguish between a flat, open and closed universe if the true value of the cosmological curvature parameter is smaller than 10⁻⁴. If the true value of the cosmological curvature parameter is larger than 10⁻³, we will be able to distinguish between these three models even now.
Final results of the Planck mission, released in 2018, show the cosmological curvature parameter, ΩK, to be consistent with a flat universe (positive curvature corresponds to ΩK < 0 and Ω > 1, negative curvature to ΩK > 0 and Ω < 1, and zero curvature to ΩK = 0 and Ω = 1).
Universe with zero curvature
In a universe with zero curvature, the local geometry is flat. The most familiar such global structure is that of Euclidean space, which is infinite in extent. Flat universes that are finite in extent include the torus and Klein bottle. Moreover, in three dimensions, there are 10 finite closed flat 3-manifolds, of which 6 are orientable and 4 are non-orientable. These are the Bieberbach manifolds. The most familiar is the aforementioned 3-torus universe.
In the absence of dark energy, a flat universe expands forever but at a continually decelerating rate, with expansion asymptotically approaching zero. With dark energy, the expansion rate of the universe initially slows down, due to the effect of gravity, but eventually increases. The ultimate fate of the universe is the same as that of an open universe in the sense that space will continue expanding forever.
A flat universe can have zero total energy.
Universe with positive curvature
A positively curved universe is described by elliptic geometry, and can be thought of as a three-dimensional hypersphere, or some other spherical 3-manifold (such as the Poincaré dodecahedral space), all of which are quotients of the 3-sphere.
Poincaré dodecahedral space is a positively curved space, colloquially described as "soccerball-shaped", as it is the quotient of the 3-sphere by the binary icosahedral group, which is very close to icosahedral symmetry, the symmetry of a soccer ball. This was proposed by Jean-Pierre Luminet and colleagues in 2003 and an optimal orientation on the sky for the model was estimated in 2008.
Universe with negative curvature
A hyperbolic universe, one of a negative spatial curvature, is described by hyperbolic geometry, and can be thought of locally as a three-dimensional analog of an infinitely extended saddle shape. There are a great variety of hyperbolic 3-manifolds, and their classification is not completely understood. Those of finite volume can be understood via the Mostow rigidity theorem. For hyperbolic local geometry, many of the possible three-dimensional spaces are informally called "horn topologies", so called because of the shape of the pseudosphere, a canonical model of hyperbolic geometry. An example is the Picard horn, a negatively curved space, colloquially described as "funnel-shaped".
Curvature: open or closed
When cosmologists speak of the universe as being "open" or "closed", they most commonly are referring to whether the curvature is negative or positive, respectively. These meanings of open and closed are different from the mathematical meaning of open and closed used for sets in topological spaces and for the mathematical meaning of open and closed manifolds, which gives rise to ambiguity and confusion. In mathematics, there are definitions for a closed manifold (i.e., compact without boundary) and open manifold (i.e., one that is not compact and without boundary). A "closed universe" is necessarily a closed manifold. An "open universe" can be either a closed or open manifold. For example, in the Friedmann–Lemaître–Robertson–Walker (FLRW) model, the universe is considered to be without boundaries, in which case "compact universe" could describe a universe that is a closed manifold.
See also
Ekpyrotic universe—A string-theory-related model depicting a five-dimensional, membrane-shaped universe; an alternative to the Hot Big Bang Model, whereby the universe is described to have originated when two membranes collided at the fifth dimension
for 6 or 7 extra space-like dimensions all with a compact topology
Theorema Egregium—The "remarkable theorem" discovered by Gauss, which showed there is an intrinsic notion of curvature for surfaces. This is used by Riemann to generalize the (intrinsic) notion of curvature to higher-dimensional spaces
References
External links
Geometry of the Universe at icosmos.co.uk
Possible wrap-around dodecahedral shape of the universe
Classification of possible universes in the Lambda-CDM model.
What do you mean the universe is flat? Scientific American Blog explanation of a flat universe and the curved spacetime in the universe.
Differential geometry
General relativity
Physical cosmological concepts
Unsolved problems in astronomy
Big Bang | Shape of the universe | [
"Physics",
"Astronomy"
] | 3,489 | [
"Physical cosmological concepts",
"Cosmogony",
"Unsolved problems in astronomy",
"Concepts in astrophysics",
"Concepts in astronomy",
"Big Bang",
"General relativity",
"Astronomical controversies",
"Theory of relativity"
] |
99,358 | https://en.wikipedia.org/wiki/Biogeography | Biogeography is the study of the distribution of species and ecosystems in geographic space and through geological time. Organisms and biological communities often vary in a regular fashion along geographic gradients of latitude, elevation, isolation and habitat area. Phytogeography is the branch of biogeography that studies the distribution of plants. Zoogeography is the branch that studies distribution of animals. Mycogeography is the branch that studies distribution of fungi, such as mushrooms.
Knowledge of spatial variation in the numbers and types of organisms is as vital to us today as it was to our early human ancestors, as we adapt to heterogeneous but geographically predictable environments. Biogeography is an integrative field of inquiry that unites concepts and information from ecology, evolutionary biology, taxonomy, geology, physical geography, palaeontology, and climatology.
Modern biogeographic research combines information and ideas from many fields, from the physiological and ecological constraints on organismal dispersal to geological and climatological phenomena operating at global spatial scales and evolutionary time frames.
The short-term interactions within a habitat and species of organisms describe the ecological application of biogeography. Historical biogeography describes the long-term, evolutionary periods of time for broader classifications of organisms. Early scientists, beginning with Carl Linnaeus, contributed to the development of biogeography as a science.
The scientific theory of biogeography grows out of the work of Alexander von Humboldt (1769–1859), Francisco Jose de Caldas (1768–1816), Hewett Cottrell Watson (1804–1881), Alphonse de Candolle (1806–1893), Alfred Russel Wallace (1823–1913), Philip Lutley Sclater (1829–1913) and other biologists and explorers.
Introduction
The patterns of species distribution across geographical areas can usually be explained through a combination of historical factors such as: speciation, extinction, continental drift, and glaciation. Through observing the geographic distribution of species, we can see associated variations in sea level, river routes, habitat, and river capture. Additionally, this science considers the geographic constraints of landmass areas and isolation, as well as the available ecosystem energy supplies.
Over periods of ecological changes, biogeography includes the study of plant and animal species in: their past and/or present living refugium habitat; their interim living sites; and/or their survival locales. As writer David Quammen put it, "...biogeography does more than ask Which species? and Where. It also asks Why? and, what is sometimes more crucial, Why not?."
Modern biogeography often employs the use of Geographic Information Systems (GIS), to understand the factors affecting organism distribution, and to predict future trends in organism distribution.
Often mathematical models and GIS are employed to solve ecological problems that have a spatial aspect to them.
Biogeography is most keenly observed on the world's islands. These habitats are often much more manageable areas of study because they are more condensed than larger ecosystems on the mainland. Islands are also ideal locations because they allow scientists to look at habitats that new invasive species have only recently colonized and can observe how they disperse throughout the island and change it. They can then apply their understanding to similar but more complex mainland habitats. Islands are very diverse in their biomes, ranging from the tropical to arctic climates. This diversity in habitat allows for a wide range of species study in different parts of the world.
One scientist who recognized the importance of these geographic locations was Charles Darwin, who remarked in his journal "The Zoology of Archipelagoes will be well worth examination". Two chapters in On the Origin of Species were devoted to geographical distribution.
History
18th century
The first discoveries that contributed to the development of biogeography as a science began in the mid-18th century, as Europeans explored the world and described the biodiversity of life. During the 18th century, most views on the world were shaped around religion and, for many natural theologists, the Bible. Carl Linnaeus, in the mid-18th century, improved our classifications of organisms through the exploration of undiscovered territories by his students and disciples. When he noticed that species were not as perpetual as he believed, he developed the Mountain Explanation to explain the distribution of biodiversity; when Noah's ark landed on Mount Ararat and the waters receded, the animals dispersed throughout different elevations on the mountain. This showed different species in different climates, proving species were not constant. Linnaeus' findings set a basis for ecological biogeography. Through his strong beliefs in Christianity, he was inspired to classify the living world, which then gave way to additional accounts of secular views on geographical distribution. He argued that the structure of an animal was very closely related to its physical surroundings. This was important to Georges-Louis Buffon's rival theory of distribution.
Closely after Linnaeus, Georges-Louis Leclerc, Comte de Buffon observed shifts in climate and how species spread across the globe as a result. He was the first to see different groups of organisms in different regions of the world. Buffon saw similarities between some regions which led him to believe that at one point continents were connected and then water separated them and caused differences in species. His hypotheses were described in his work, the 36 volume Histoire Naturelle, générale et particulière, in which he argued that varying geographical regions would have different forms of life. This was inspired by his observations comparing the Old and New World, as he determined distinct variations of species from the two regions. Buffon believed there was a single species creation event, and that different regions of the world were homes for varying species, which is an alternate view than that of Linnaeus. Buffon's law eventually became a principle of biogeography by explaining how similar environments were habitats for comparable types of organisms. Buffon also studied fossils which led him to believe that the Earth was over tens of thousands of years old, and that humans had not lived there long in comparison to the age of the Earth.
19th century
Following the period of exploration came the Age of Enlightenment in Europe, which attempted to explain the patterns of biodiversity observed by Buffon and Linnaeus. At the birth of the 19th century, Alexander von Humboldt, known as the "founder of plant geography", developed the concept of physique generale to demonstrate the unity of science and how species fit together. As one of the first to contribute empirical data to the science of biogeography through his travel as an explorer, he observed differences in climate and vegetation. The Earth was divided into regions which he defined as tropical, temperate, and arctic and within these regions there were similar forms of vegetation. This ultimately enabled him to create the isotherm, which allowed scientists to see patterns of life within different climates. He contributed his observations to findings of botanical geography by previous scientists, and sketched this description of both the biotic and abiotic features of the Earth in his book, Cosmos.
Augustin de Candolle contributed to the field of biogeography as he observed species competition and the several differences that influenced the discovery of the diversity of life. He was a Swiss botanist and created the first Laws of Botanical Nomenclature in his work, Prodromus. He discussed plant distribution and his theories eventually had a great impact on Charles Darwin, who was inspired to consider species adaptations and evolution after learning about botanical geography. De Candolle was the first to describe the differences between the small-scale and large-scale distribution patterns of organisms around the globe.
Several additional scientists contributed new theories to further develop the concept of biogeography. Charles Lyell developed the Theory of Uniformitarianism after studying fossils. This theory explained how the world was not created by one sole catastrophic event, but instead from numerous creation events and locations. Uniformitarianism also introduced the idea that the Earth was actually significantly older than was previously accepted. Using this knowledge, Lyell concluded that it was possible for species to go extinct. Since he noted that Earth's climate changes, he realized that species distribution must also change accordingly. Lyell argued that climate changes complemented vegetation changes, thus connecting the environmental surroundings to varying species. This largely influenced Charles Darwin in his development of the theory of evolution.
Charles Darwin was a natural theologist who studied around the world, and most importantly in the Galapagos Islands. Darwin introduced the idea of natural selection, as he theorized against previously accepted ideas that species were static or unchanging. His contributions to biogeography and the theory of evolution were different from those of other explorers of his time, because he developed a mechanism to describe the ways that species changed. His influential ideas include the development of theories regarding the struggle for existence and natural selection. Darwin's theories started a biological segment to biogeography and empirical studies, which enabled future scientists to develop ideas about the geographical distribution of organisms around the globe.
Alfred Russel Wallace studied the distribution of flora and fauna in the Amazon Basin and the Malay Archipelago in the mid-19th century. His research was essential to the further development of biogeography, and he was later nicknamed the "father of Biogeography". Wallace conducted fieldwork researching the habits, breeding and migration tendencies, and feeding behavior of thousands of species. He studied butterfly and bird distributions in comparison to the presence or absence of geographical barriers. His observations led him to conclude that the number of organisms present in a community was dependent on the amount of food resources in the particular habitat. Wallace believed species were dynamic by responding to biotic and abiotic factors. He and Philip Sclater saw biogeography as a source of support for the theory of evolution as they used Darwin's conclusion to explain how biogeography was similar to a record of species inheritance. Key findings, such as the sharp difference in fauna either side of the Wallace Line, and the sharp difference that existed between North and South America prior to their relatively recent faunal interchange, can only be understood in this light. Otherwise, the field of biogeography would be seen as a purely descriptive one.
20th and 21st century
Moving on to the 20th century, Alfred Wegener introduced the Theory of Continental Drift in 1912, though it was not widely accepted until the 1960s. This theory was revolutionary because it changed the way that everyone thought about species and their distribution around the globe. The theory explained how continents were formerly joined in one large landmass, Pangea, and slowly drifted apart due to the movement of the plates below Earth's surface. The evidence for this theory lies in the geological similarities between varying locations around the globe, the geographic distribution of some fossils (including the mesosaurs) on various continents, and the jigsaw-puzzle shape of the landmasses on Earth. Though Wegener did not know the mechanism of continental drift, this contribution to the study of biogeography was significant in that it shed light on the importance of environmental and geographic similarities or differences as a result of climate and other pressures on the planet. Importantly, late in his career Wegener recognised that testing his theory required measurement of continental movement rather than inference from fossil species distributions.
In 1958 paleontologist Paul S. Martin published A Biogeography of Reptiles and Amphibians in the Gómez Farias Region, Tamaulipas, Mexico, which has been described as "ground-breaking" and "a classic treatise in historical biogeography". Martin applied several disciplines including ecology, botany, climatology, geology, and Pleistocene dispersal routes to examine the herpetofauna of a relatively small and largely undisturbed area, but ecologically complex, situated on the threshold of temperate – tropical (nearctic and neotropical) regions, including semiarid lowlands at 70 meters elevation and the northernmost cloud forest in the western hemisphere at over 2200 meters.
The publication of The Theory of Island Biogeography by Robert MacArthur and E.O. Wilson in 1967 showed that the species richness of an area could be predicted in terms of such factors as habitat area, immigration rate and extinction rate. This added to the long-standing interest in island biogeography. The application of island biogeography theory to habitat fragments spurred the development of the fields of conservation biology and landscape ecology.
Classic biogeography has been expanded by the development of molecular systematics, creating a new discipline known as phylogeography. This development allowed scientists to test theories about the origin and dispersal of populations, such as island endemics. For example, while classic biogeographers were able to speculate about the origins of species in the Hawaiian Islands, phylogeography allows them to test theories of relatedness between these populations and putative source populations on various continents, notably in Asia and North America.
Biogeography continues as a point of study for many life sciences and geography students worldwide, however it may be under different broader titles within institutions such as ecology or evolutionary biology.
In recent years, one of the most important and consequential developments in biogeography has been to show how multiple organisms, including mammals like monkeys and reptiles like squamates, overcame barriers such as large oceans that many biogeographers formerly believed were impossible to cross. See also Oceanic dispersal.
Modern applications
Biogeography now incorporates many different fields including but not limited to physical geography, geology, botany and plant biology, zoology, general biology, and modelling. A biogeographer's main focus is on how the environment and humans affect the distribution of species, as well as other manifestations of life such as species or genetic diversity. Biogeography is applied to biodiversity conservation and planning, projecting the effects of global environmental change on species and biomes, projecting the spread of infectious diseases and invasive species, and supporting planning for the establishment of crops. Technological advances have allowed the generation of a whole suite of predictor variables for biogeographic analysis, including satellite imaging and processing of the Earth. Two main types of satellite imaging that are important within modern biogeography are the Global Production Efficiency Model (GLO-PEM) and Geographic Information Systems (GIS). GLO-PEM uses satellite imaging to give "repetitive, spatially contiguous, and time specific observations of vegetation". These observations are on a global scale. GIS can show certain processes on the Earth's surface, such as whale locations, sea surface temperatures, and bathymetry. Scientists also use fossilized coral reefs to investigate the history of biogeography.
Two global information systems are either dedicated to, or have strong focus on, biogeography (in the form of the spatial location of observations of organisms), namely the Global Biodiversity Information Facility (GBIF: 2.57 billion species occurrence records reported as at August 2023) and, for marine species only, the Ocean Biodiversity Information System (OBIS, originally the Ocean Biogeographic Information System: 116 million species occurrence records reported as at August 2023), while at a national scale, similar compilations of species occurrence records also exist such as the U.K. National Biodiversity Network, the Atlas of Living Australia, and many others. In the case of the oceans, in 2017 Costello et al. analyzed the distribution of 65,000 species of marine animals and plants as then documented in OBIS, and used the results to distinguish 30 distinct marine realms, split between continental-shelf and offshore deep-sea areas.
Since it is self-evident that compilations of species occurrence records cannot cover, with any completeness, areas that have received either limited or no sampling, a number of methods have been developed to produce arguably more complete "predictive" or "modelled" distributions for species based on their associated environmental or other preferences (such as availability of food or other habitat requirements); this approach is known as either environmental niche modelling (ENM) or species distribution modelling (SDM). Depending on the reliability of the source data and the nature of the models employed (including the scales for which data are available), maps generated from such models may then provide better representations of the "real" biogeographic distributions of either individual species, groups of species, or biodiversity as a whole. However, it should also be borne in mind that historic or recent human activities (such as hunting of great whales, or other human-induced exterminations) may have altered present-day species distributions from their potential "full" ecological footprint. Examples of predictive maps produced by niche modelling methods based on either GBIF (terrestrial) or OBIS (marine, plus some freshwater) data are the former Lifemapper project at the University of Kansas (now continued as a part of BiotaPhy) and AquaMaps, which as of 2023 contain modelled distributions for around 200,000 terrestrial species and 33,000 species of teleosts, marine mammals and invertebrates, respectively. One advantage of ENM/SDM is that in addition to showing current (or even past) modelled distributions, changed parameters such as the anticipated effects of climate change can also be inserted to show potential changes in species distributions that may occur in the future under such scenarios.
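As a loose illustration of the ENM/SDM idea only (the data, variable names, and model choice below are invented for the example and do not reproduce any particular published workflow), one can fit a simple classifier that maps environmental predictors to occurrence probability:

```python
# A toy species distribution model: predict occurrence probability from
# environmental predictors. The data are synthetic; a real workflow would use
# occurrence records (e.g. from GBIF or OBIS) and gridded environmental layers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic environment: mean annual temperature (deg C) and precipitation (mm).
temperature = rng.uniform(0, 30, n)
precipitation = rng.uniform(200, 3000, n)

# Invented "true" preference: the species favours warm, wet sites.
suitability = 1 / (1 + np.exp(-(0.3 * (temperature - 20)
                                + 0.002 * (precipitation - 1500))))
occurrence = (rng.random(n) < suitability).astype(int)  # 1 = present, 0 = absent

X = np.column_stack([temperature, precipitation])
model = LogisticRegression().fit(X, occurrence)

# Predicted suitability for two hypothetical candidate sites.
candidates = np.array([[25.0, 2500.0], [5.0, 400.0]])
print(model.predict_proba(candidates)[:, 1])  # higher value = more suitable
```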
Paleobiogeography
Paleobiogeography goes one step further to include paleogeographic data and considerations of plate tectonics. Using molecular analyses and corroborated by fossils, it has been possible to demonstrate that perching birds evolved first in the region of Australia or the adjacent Antarctic (which at that time lay somewhat further north and had a temperate climate). From there, they spread to the other Gondwanan continents and Southeast Asia – the part of Laurasia then closest to their origin of dispersal – in the late Paleogene, before achieving a global distribution in the early Neogene. Not knowing that at the time of dispersal, the Indian Ocean was much narrower than it is today, and that South America was closer to the Antarctic, one would be hard pressed to explain the presence of many "ancient" lineages of perching birds in Africa, as well as the mainly South American distribution of the suboscines.
Paleobiogeography also helps constrain hypotheses on the timing of biogeographic events such as vicariance and geodispersal, and provides unique information on the formation of regional biotas. For example, data from species-level phylogenetic and biogeographic studies tell us that the Amazonian teleost fauna accumulated in increments over a period of tens of millions of years, principally by means of allopatric speciation, and in an arena extending over most of the area of tropical South America (Albert & Reis 2011). In other words, unlike some of the well-known insular faunas (Galapagos finches, Hawaiian drosophilid flies, African rift lake cichlids), the species-rich Amazonian ichthyofauna is not the result of recent adaptive radiations.
For freshwater organisms, landscapes are divided naturally into discrete drainage basins by watersheds, episodically isolated and reunited by erosional processes. In regions like the Amazon Basin (or more generally Greater Amazonia, the Amazon basin, Orinoco basin, and Guianas) with an exceptionally low (flat) topographic relief, the many waterways have had a highly reticulated history over geological time. In such a context, stream capture is an important factor affecting the evolution and distribution of freshwater organisms. Stream capture occurs when an upstream portion of one river drainage is diverted to the downstream portion of an adjacent basin. This can happen as a result of tectonic uplift (or subsidence), natural damming created by a landslide, or headward or lateral erosion of the watershed between adjacent basins.
Concepts and fields
Biogeography is a synthetic science, related to geography, biology, soil science, geology, climatology, ecology and evolution.
Some fundamental concepts in biogeography include:
allopatric speciation – the splitting of a species by evolution of geographically isolated populations
evolution – change in genetic composition of a population
extinction – disappearance of a species
dispersal – movement of populations away from their point of origin, related to migration
endemic areas
geodispersal – the erosion of barriers to biotic dispersal and gene flow, that permit range expansion and the merging of previously isolated biotas
range and distribution
vicariance – the formation of barriers to biotic dispersal and gene flow, that tend to subdivide species and biotas, leading to speciation and extinction; vicariance biogeography is the field that studies these patterns
Comparative biogeography
The study of comparative biogeography can follow two main lines of investigation:
Systematic biogeography, the study of biotic area relationships, their distribution, and hierarchical classification
Evolutionary biogeography, the proposal of evolutionary mechanisms responsible for organismal distributions. Possible mechanisms include widespread taxa disrupted by continental break-up or individual episodes of long-distance movement.
Biogeographic regionalisations
There are many types of biogeographic units used in biogeographic regionalisation schemes, as there are many criteria (species composition, physiognomy, ecological aspects) and hierarchization schemes: biogeographic realms (ecozones), bioregions (sensu stricto), ecoregions, zoogeographical regions, floristic regions, vegetation types, biomes, etc.
The terms biogeographic unit, biogeographic area can be used for these categories, regardless of rank.
In 2008, an International Code of Area Nomenclature was proposed for biogeography. It achieved limited success; some studies commented favorably on it, but others were much more critical, and it "has not yet gained a significant following". Similarly, a set of rules for paleobiogeography has achieved limited success. In 2000, Westermann suggested that the difficulties in getting formal nomenclatural rules established in this field might be related to "the curious fact that neither paleo- nor neobiogeographers are organized in any formal groupings or societies, nationally (so far as I know) or internationally — an exception among active disciplines."
See also
Allen's rule
Bergmann's rule
Biogeographic realm
Bibliography of biology
Biogeography-based optimization
Center of origin
Concepts and Techniques in Modern Geography
Distance decay
Ecological land classification
Geobiology
Macroecology
Marine ecoregions
Max Carl Wilhelm Weber
Miklos Udvardy
Phytochorion – Plant region
Sky island
Systematic and evolutionary biogeography association
Notes and references
Further reading
Albert, J. S., & R. E. Reis (2011). Historical Biogeography of Neotropical Freshwater Fishes. University of California Press, Berkeley. 424 pp.
Cox, C. B. (2001). The biogeographic regions reconsidered. Journal of Biogeography, 28: 511–523, .
Ebach, M.C. (2015). Origins of biogeography. The role of biological classification in early plant and animal geography. Dordrecht: Springer, xiv + 173 pp., .
Lieberman, B. S. (2001). "Paleobiogeography: using fossils to study global change, plate tectonics, and evolution". Kluwer Academic, Plenum Publishing, .
Lomolino, M. V., & Brown, J. H. (2004). Foundations of biogeography: classic papers with commentaries. University of Chicago Press, .
Millington, A., Blumler, M., & Schickhoff, U. (Eds.). (2011). The SAGE handbook of biogeography. Sage, London, .
Nelson, G.J. (1978). From Candolle to Croizat: Comments on the history of biogeography. Journal of the History of Biology, 11: 269–305.
Udvardy, M. D. F. (1975). A classification of the biogeographical provinces of the world. IUCN Occasional Paper no. 18. Morges, Switzerland: IUCN.
External links
The International Biogeography Society
Systematic & Evolutionary Biogeographical Society (archived 5 December 2008)
Early Classics in Biogeography, Distribution, and Diversity Studies: To 1950
Early Classics in Biogeography, Distribution, and Diversity Studies: 1951–1975
Some Biogeographers, Evolutionists and Ecologists: Chrono-Biographical Sketches
Major journals
Journal of Biogeography homepage (archived 15 December 2004)
Global Ecology and Biogeography homepage. .
Ecography homepage.
Landscape ecology
Physical oceanography
Physical geography
Environmental terminology
Habitat
Earth sciences | Biogeography | [
"Physics",
"Biology"
] | 5,022 | [
"Biogeography",
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
99,603 | https://en.wikipedia.org/wiki/Wrought%20iron | Wrought iron is an iron alloy with a very low carbon content (less than 0.05%) in contrast to that of cast iron (2.1% to 4.5%). It is a semi-fused mass of iron with fibrous slag inclusions (up to 2% by weight), which give it a wood-like "grain" that is visible when it is etched, rusted, or bent to failure. Wrought iron is tough, malleable, ductile, corrosion resistant, and easily forge welded, but is more difficult to weld electrically.
Before the development of effective methods of steelmaking and the availability of large quantities of steel, wrought iron was the most common form of malleable iron. It was given the name wrought because it was hammered, rolled, or otherwise worked while hot enough to expel molten slag. The modern functional equivalent of wrought iron is mild steel, also called low-carbon steel. Neither wrought iron nor mild steel contain enough carbon to be hardened by heating and quenching.
Wrought iron is highly refined, with a small amount of silicate slag forged out into fibers. It comprises around 99.4% iron by mass. The presence of slag can be beneficial for blacksmithing operations, such as forge welding, since the silicate inclusions act as a flux and give the material its unique, fibrous structure. The silicate filaments in the slag also protect the iron from corrosion and diminish the effect of fatigue caused by shock and vibration.
Historically, a modest amount of wrought iron was refined into steel, which was used mainly to produce swords, cutlery, chisels, axes, and other edged tools, as well as springs and files. The demand for wrought iron reached its peak in the 1860s, being in high demand for ironclad warships and railway use. However, as properties such as brittleness of mild steel improved with better ferrous metallurgy and as steel became less costly to make thanks to the Bessemer process and the Siemens–Martin process, the use of wrought iron declined.
Many items, before they came to be made of mild steel, were produced from wrought iron, including rivets, nails, wire, chains, rails, railway couplings, water and steam pipes, nuts, bolts, horseshoes, handrails, wagon tires, straps for timber roof trusses, and ornamental ironwork, among many other things.
Wrought iron is no longer produced on a commercial scale. Many products described as wrought iron, such as guard rails, garden furniture, and gates are made of mild steel. They are described as "wrought iron" only because they have been made to resemble objects which in the past were wrought (worked) by hand by a blacksmith (although many decorative iron objects, including fences and gates, were often cast rather than wrought).
Terminology
The word "wrought" is an archaic past participle of the verb "to work", and so "wrought iron" literally means "worked iron". Wrought iron is a general term for the commodity, but is also used more specifically for finished iron goods, as manufactured by a blacksmith. It was used in that narrower sense in British Customs records, where such manufactured iron was subject to a higher rate of duty than what might be called "unwrought" iron. Cast iron, unlike wrought iron, is brittle and cannot be worked either hot or cold.
In the 17th, 18th, and 19th centuries, wrought iron went by a wide variety of terms according to its form, origin, or quality.
While the bloomery process produced wrought iron directly from ore, cast iron or pig iron were the starting materials used in the finery forge and puddling furnace. Pig iron and cast iron have higher carbon content than wrought iron, but have a lower melting point than iron or steel. Cast and especially pig iron have excess slag which must be at least partially removed to produce quality wrought iron. At foundries it was common to blend scrap wrought iron with cast iron to improve the physical properties of castings.
For several years after the introduction of Bessemer and open hearth steel, there were different opinions as to what differentiated iron from steel; some believed it was the chemical composition, and others that it was whether the iron was heated sufficiently to melt and "fuse". Fusion eventually became generally accepted as relatively more important than composition below a given low carbon concentration. Another difference is that steel can be hardened by heat treating.
Historically, wrought iron was known as "commercially pure iron"; however, it no longer qualifies because current standards for commercially pure iron require a carbon content of less than 0.008 wt%.
Types and shapes
Bar iron is a generic term sometimes used to distinguish it from cast iron. It is the equivalent of an ingot of cast metal, in a convenient form for handling, storage, shipping and further working into a finished product.
The bars were the usual product of the finery forge, but not necessarily made by that process:
Rod iron—cut from flat bar iron in a slitting mill provided the raw material for spikes and nails.
Hoop iron—suitable for the hoops of barrels, made by passing rod iron through rolling dies.
Plate iron—sheets suitable for use as boiler plate.
Blackplate—sheets, perhaps thinner than plate iron, from the black rolling stage of tinplate production.
Voyage iron—narrow flat bar iron, made or cut into bars of a particular weight, a commodity for sale in Africa for the Atlantic slave trade. The number of bars per ton gradually increased from 70 per ton in the 1660s to 75–80 per ton in 1685 and "near 92 to the ton" in 1731.
Origin
Charcoal iron—until the end of the 18th century, wrought iron was smelted from ore using charcoal, by the bloomery process. Wrought iron was also produced from pig iron using a finery forge or in a Lancashire hearth. The resulting metal was highly variable, both in chemistry and slag content.
Puddled iron—the puddling process was the first large-scale process to produce wrought iron. In the puddling process, pig iron is refined in a reverberatory furnace to prevent contamination of the iron from the sulfur in the coal or coke. The molten pig iron is manually stirred, exposing the iron to atmospheric oxygen, which decarburizes the iron. As the iron is stirred, globs of wrought iron are collected into balls by the stirring rod (rabble arm or rod) and those are periodically removed by the puddler. Puddling was patented in 1784 and became widely used after 1800. By 1876, annual production of puddled iron in the UK alone was over 4 million tons. Around that time, the open hearth furnace was able to produce steel of suitable quality for structural purposes, and wrought iron production went into decline.
Oregrounds iron—a particularly pure grade of bar iron made ultimately from iron ore from the Dannemora mine in Sweden. Its most important use was as the raw material for the cementation process of steelmaking.
Danks iron—originally iron imported to Great Britain from Gdańsk, but in the 18th century more probably the kind of iron (from eastern Sweden) that once came from Gdańsk.
Forest iron—iron from the English Forest of Dean, where haematite ore enabled tough iron to be produced.
Lukes iron—iron imported from Liège, whose Dutch name is "Luik".
Ames iron or amys iron—another variety of iron imported to England from northern Europe. Its origin has been suggested to be Amiens, but it seems to have been imported from Flanders in the 15th century and Holland later, suggesting an origin in the Rhine valley. Its origins remain controversial.
Botolf iron or Boutall iron—from Bytów (Polish Pomerania) or Bytom (Polish Silesia).
Sable iron (or Old Sable)—iron bearing the mark (a sable) of the Demidov family of Russian ironmasters, one of the better brands of Russian iron.
Quality
Tough iron, also spelled "tuf", is not brittle and is strong enough to be used for tools.
Blend iron is made using a mixture of different types of pig iron.
Best iron is iron put through several stages of piling and rolling to reach the stage regarded (in the 19th century) as the best quality.
Marked bar iron was made by members of the Marked Bar Association and marked with the maker's brand mark as a sign of its quality.
Defects
Wrought iron is a form of commercial iron containing less than 0.10% of carbon, less than 0.25% of impurities total of sulfur, phosphorus, silicon and manganese, and less than 2% slag by weight.
Wrought iron is redshort or hot short if it contains sulfur in excess quantity. It has sufficient tenacity when cold, but cracks when bent or finished at a red heat. Hot short iron was considered unmarketable.
Cold short iron, also known as coldshear or colshire, contains excessive phosphorus. It is very brittle when cold and cracks if bent. It may, however, be worked at high temperature. Historically, coldshort iron was considered sufficient for nails.
Phosphorus is not necessarily detrimental to iron. Ancient Near Eastern smiths did not add lime to their furnaces. The absence of calcium oxide in the slag, and the deliberate use of wood with high phosphorus content during the smelting, induces a higher phosphorus content (typically <0.3%) than in modern iron (<0.02–0.03%). Analysis of the Iron Pillar of Delhi gives 0.11% in the iron. The included slag in wrought iron also imparts corrosion resistance.
Antique music wire, manufactured at a time when mass-produced carbon-steels were available, was found to have low carbon and high phosphorus; iron with high phosphorus content, normally causing brittleness when worked cold, was easily drawn into music wires. Although at the time phosphorus was not an easily identified component of iron, it was hypothesized that the type of iron had been rejected for conversion to steel but excelled when tested for drawing ability.
History
China
During the Han dynasty (202 BC – 220 AD), new iron smelting processes led to the manufacture of new wrought iron implements for use in agriculture, such as the multi-tube seed drill and iron plough. In addition to the accidental lumps of low-carbon wrought iron produced by excessive injected air in ancient Chinese cupola furnaces, the ancient Chinese also created wrought iron by using the finery forge at least by the 2nd century BC, the earliest specimens of cast and pig iron fined into wrought iron and steel having been found at the early Han dynasty site at Tieshengguo. Pigott speculates that the finery forge existed in the previous Warring States period (403–221 BC), because there are wrought iron items from China dating to that period and there is no documented evidence of the bloomery ever being used in China. The fining process involved liquifying cast iron in a fining hearth and removing carbon from the molten cast iron through oxidation. Wagner writes that in addition to the Han dynasty hearths believed to be fining hearths, there is also pictorial evidence of the fining hearth from a Shandong tomb mural dated 1st to 2nd century AD, as well as a hint of written evidence in the 4th century AD Daoist text Taiping Jing.
Western world
Wrought iron has been used for many centuries, and is the "iron" that is referred to throughout Western history. The other form of iron, cast iron, was in use in China since ancient times but was not introduced into Western Europe until the 15th century; even then, due to its brittleness, it could be used for only a limited number of purposes. Throughout much of the Middle Ages, iron was produced by the direct reduction of ore in manually operated bloomeries, although water power had begun to be employed by 1104.
The raw material produced by all indirect processes is pig iron. It has a high carbon content and as a consequence, it is brittle and cannot be used to make hardware. The osmond process was the first of the indirect processes, developed by 1203, but bloomery production continued in many places. The process depended on the development of the blast furnace, of which medieval examples have been discovered at Lapphyttan, Sweden and in Germany.
The bloomery and osmond processes were gradually replaced from the 15th century by finery processes, of which there were two versions, the German and Walloon. They were in turn replaced from the late 18th century by puddling, with certain variants such as the Swedish Lancashire process. Those, too, are now obsolete, and wrought iron is no longer manufactured commercially.
Bloomery process
Wrought iron was originally produced by a variety of smelting processes, all described today as "bloomeries". Different forms of bloomery were used at different places and times. The bloomery was charged with charcoal and iron ore and then lit. Air was blown in through a tuyere to heat the bloomery to a temperature somewhat below the melting point of iron. In the course of the smelt, slag would melt and run out, and carbon monoxide from the charcoal would reduce the ore to iron, which formed a spongy mass (called a "bloom") containing iron and also molten silicate minerals (slag) from the ore. The iron remained in the solid state. If the bloomery were allowed to become hot enough to melt the iron, carbon would dissolve into it and form pig or cast iron, but that was not the intention. However, the design of a bloomery made it difficult to reach the melting point of iron and also prevented the concentration of carbon monoxide from becoming high.
After smelting was complete, the bloom was removed, and the process could then be started again. It was thus a batch process, rather than a continuous one such as a blast furnace. The bloom had to be forged mechanically to consolidate it and shape it into a bar, expelling slag in the process.
During the Middle Ages, water-power was applied to the process, probably initially for powering bellows, and only later to hammers for forging the blooms. However, while it is certain that water-power was used, the details remain uncertain. That was the culmination of the direct process of ironmaking. It survived in Spain and southern France as Catalan forges until the mid-19th century, in Austria as the stuckofen until 1775, and near Garstang in England until about 1770; it was still in use with hot blast in New York in the 1880s. In Japan, the last of the old tatara bloomeries used in the production of traditional tamahagane steel, mainly used in swordmaking, was extinguished only in 1925, though in the late 20th century production resumed on a small scale to supply steel to artisan swordmakers.
Osmond process
Osmond iron consisted of balls of wrought iron, produced by melting pig iron and catching the droplets on a staff, which was spun in front of a blast of air so as to expose as much of it as possible to the air and oxidise its carbon content. The resultant ball was often forged into bar iron in a hammer mill.
Finery process
In the 15th century, the blast furnace spread into what is now Belgium where it was improved. From there, it spread via the Pays de Bray on the boundary of Normandy and then to the Weald in England. With it, the finery forge spread. These forges remelted the pig iron and (in effect) burnt out the carbon, producing a bloom, which was then forged into bar iron. If rod iron was required, a slitting mill was used.
The finery process existed in two slightly different forms. In Great Britain, France, and parts of Sweden, only the Walloon process was used. That employed two different hearths, a finery hearth for finishing the iron and a chafery hearth for reheating it in the course of drawing the bloom out into a bar. The finery always burnt charcoal, but the chafery could be fired with mineral coal, since its impurities would not harm the iron when it was in the solid state. The German process, on the other hand, used in Germany, Russia, and most of Sweden, employed a single hearth for all stages.
The introduction of coke for use in the blast furnace by Abraham Darby in 1709 (or perhaps others a little earlier) initially had little effect on wrought iron production. Only in the 1750s was coke pig iron used on any significant scale as the feedstock of finery forges. However, charcoal continued to be the fuel for the finery.
Potting and stamping
From the late 1750s, ironmasters began to develop processes for making bar iron without charcoal. There were a number of patented processes for that, which are referred to today as potting and stamping. The earliest were developed by John Wood of Wednesbury and his brother Charles Wood of Low Mill at Egremont, patented in 1763. Another was developed for the Coalbrookdale Company by the Cranage brothers. Another important one was that of John Wright and Joseph Jesson of West Bromwich.
Puddling process
A number of processes for making wrought iron without charcoal were devised as the Industrial Revolution began during the latter half of the 18th century. The most successful of those was puddling, using a puddling furnace (a variety of the reverberatory furnace), which was invented by Henry Cort in 1784. It was later improved by others including Joseph Hall, who was the first to add iron oxide to the charge. In that type of furnace, the metal does not come into contact with the fuel, and so is not contaminated by its impurities. The heat of the combustion products passes over the surface of the puddle and the roof of the furnace reverberates (reflects) the heat onto the metal puddle on the fire bridge of the furnace.
Unless the raw material used is white cast iron, the pig iron or other raw product of the puddling first had to be refined into refined iron, or finers metal. That would be done in a refinery where raw coal was used to remove silicon and convert carbon within the raw material, found in the form of graphite, to a combination with iron called cementite.
In the fully developed process (of Hall), this metal was placed into the hearth of the puddling furnace where it was melted. The hearth was lined with oxidizing agents such as haematite and iron oxide. The mixture was subjected to a strong current of air and stirred with long bars, called puddling bars or rabbles, through working doors. The air, the stirring, and the "boiling" action of the metal helped the oxidizing agents to oxidize the impurities and carbon out of the pig iron. As the impurities oxidized, they formed a molten slag or drifted off as gas, while the remaining iron solidified into spongy wrought iron that floated to the top of the puddle and was fished out of the melt as puddle balls, using puddle bars.
Shingling
There was still some slag left in the puddle balls, so while they were still hot they would be shingled to remove the remaining slag and cinder. That was achieved by forging the balls under a hammer, or by squeezing the bloom in a machine. The material obtained at the end of shingling is known as bloom. The blooms were not useful in that form, so they were rolled into a final product.
Sometimes European ironworks would skip the shingling process completely and roll the puddle balls. The only drawback to that was that the edges of the rough bars were not as well compressed. When the rough bar was reheated, the edges might separate and be lost into the furnace.
Rolling
The bloom was passed through rollers to produce bars. These bars of wrought iron were of poor quality and were called muck bars or puddle bars. To improve their quality, the bars were cut up, piled and tied together by wires, a process known as faggoting or piling. They were then reheated to a welding state, forge welded, and rolled again into bars. The process could be repeated several times to produce wrought iron of the desired quality. Wrought iron that has been rolled multiple times is called merchant bar or merchant iron.
Lancashire process
The advantage of puddling was that it used coal, not charcoal, as fuel. However, that was of little advantage in Sweden, which lacked coal. Gustaf Ekman observed charcoal fineries at Ulverston, which were quite different from any in Sweden. After his return to Sweden in the 1830s, he experimented with and developed a process similar to puddling but using firewood and charcoal, which was widely adopted in the Bergslagen in the following decades.
Aston process
In 1925, James Aston of the United States developed a process for manufacturing wrought iron quickly and economically. It involved taking molten steel from a Bessemer converter and pouring it into cooler liquid slag. The temperature of the steel was about 1500 °C and the liquid slag was maintained at approximately 1200 °C. The molten steel contained a large amount of dissolved gases, so when the liquid steel hit the cooler surface of the liquid slag the gases were liberated. The molten steel then froze to yield a spongy mass having a temperature of about 1370 °C. The spongy mass would then be finished by being shingled and rolled as described under puddling (above). Three to four tons could be converted per batch with the method.
Decline
Steel began to replace iron for railroad rails as soon as the Bessemer process for its manufacture was adopted (from 1865 on). Iron remained dominant for structural applications until the 1880s, because of problems with brittle steel, caused by introduced nitrogen, high carbon, excess phosphorus, excessive temperature during rolling, or too-rapid rolling. By 1890 steel had largely replaced iron for structural applications.
Sheet iron (Armco 99.97% pure iron) had good properties for use in appliances, being well-suited for enamelling and welding, and being rust-resistant.
In the 1960s, the price of steel production was dropping due to recycling, and even using the Aston process, wrought iron production was labor-intensive. It has been estimated that the production of wrought iron is approximately twice as expensive as that of low-carbon steel. In the United States, the last plant closed in 1969. The last in the world was the Atlas Forge of Thomas Walmsley and Sons in Bolton, Great Britain, which closed in 1973. Its 1860s-era equipment was moved to the Blists Hill site of Ironbridge Gorge Museum for preservation. Some wrought iron is still being produced for heritage restoration purposes, but only by recycling scrap.
Properties
The slag inclusions, or stringers, in wrought iron give it properties not found in other forms of ferrous metal. There are approximately 250,000 inclusions per square inch. A fresh fracture shows a clear bluish color with a high silky luster and fibrous appearance.
Wrought iron lacks the carbon content necessary for hardening through heat treatment, but in areas where steel was uncommon or unknown, tools were sometimes cold-worked (hence cold iron) to harden them. An advantage of its low carbon content is its excellent weldability. Furthermore, sheet wrought iron cannot bend as much as steel sheet metal when cold worked. Wrought iron can be melted and cast; however, the product is no longer wrought iron, since the slag stringers characteristic of wrought iron disappear on melting, so the product resembles impure, cast, Bessemer steel. There is no engineering advantage to melting and casting wrought iron, as compared to using cast iron or steel, both of which are cheaper.
Due to the variations in iron ore origin and iron manufacture, wrought iron can be inferior or superior in corrosion resistance, compared to other iron alloys. There are many mechanisms behind its corrosion resistance. Chilton and Evans found that nickel enrichment bands reduce corrosion. They also found that in puddled, forged, and piled iron, the working-over of the metal spread out copper, nickel, and tin impurities that produce electrochemical conditions that slow down corrosion. The slag inclusions have been shown to disperse corrosion to an even film, enabling the iron to resist pitting. Another study has shown that slag inclusions are pathways to corrosion. Other studies show that sulfur in the wrought iron decreases corrosion resistance, while phosphorus increases corrosion resistance. Chloride ions also decrease wrought iron's corrosion resistance.
Wrought iron may be welded in the same manner as mild steel, but the presence of oxide or inclusions will give defective results.
The material has a rough surface, so it can hold platings and coatings better than smooth steel. For instance, a galvanic zinc finish applied to wrought iron is approximately 25–40% thicker than the same finish on steel. The chemical composition of wrought iron can be compared with that of pig iron and carbon steel. Although wrought iron and plain carbon steel appear to have similar chemical compositions, that is deceptive: most of the manganese, sulfur, phosphorus, and silicon in wrought iron are incorporated into the slag fibers, making wrought iron purer than plain carbon steel.
Amongst its other properties, wrought iron becomes soft at red heat and can be easily forged and forge welded. It can be used to form temporary magnets, but it cannot be magnetized permanently, and is ductile, malleable, and tough.
Ductility
For most purposes, ductility rather than tensile strength is the more important measure of the quality of wrought iron. In tensile testing, the best irons are able to undergo considerable elongation before failure. Wrought iron with higher tensile strength tends to be brittle.
Because of the large number of boiler explosions on steamboats in the early 1800s, the U.S. Congress passed legislation in 1830 which approved funds for correcting the problem. The treasury awarded a $1500 contract to the Franklin Institute to conduct a study. As part of the study, Walter R. Johnson and Benjamin Reeves conducted strength tests on boiler iron using a tester they had built in 1832 based on a design by Lagerhjelm in Sweden. Because of misunderstandings about tensile strength and ductility, their work did little to reduce failures.
The importance of ductility was recognized by some engineers very early in the development of tube boilers, as evidenced by comments made by Thurston.
Various 19th century investigations of boiler explosions, especially those by insurance companies, found causes to be most commonly the result of operating boilers above the safe pressure range, either to get more power, or due to defective boiler pressure relief valves and difficulties of obtaining reliable indications of pressure and water levels. Poor fabrication was also a common problem. Also, the thickness of the iron in steam drums was low, by modern standards.
By the late 19th century, when metallurgists were able to better understand what properties and processes made good iron, iron in steam engines was being displaced by steel. Also, the old cylindrical boilers with fire tubes were displaced by water tube boilers, which are inherently safer.
Purity
In 2010, analysis by Gerry McDonnell in England demonstrated that a wrought iron bloom from a traditional smelt could be worked into 99.7% pure iron with no evidence of carbon. It was found that the stringers common to other wrought irons were not present, making it very malleable for the smith to work hot and cold. A commercial source of pure iron is available and is used by smiths as an alternative to traditional wrought iron and other new generation ferrous metals.
Applications
Wrought iron furniture has a long history, dating back to Roman times. There are 13th century wrought iron gates in Westminster Abbey in London, and wrought iron furniture seemed to reach its peak popularity in Britain in the 17th century, during the reign of William III and Mary II. However, cast iron and cheaper steel caused a gradual decline in wrought iron manufacture; the last wrought ironworks in Britain closed in 1973.
It is also used to make home decor items such as baker's racks, wine racks, pot racks, etageres, table bases, desks, gates, beds, candle holders, curtain rods, bars, and bar stools.
The vast majority of wrought iron available today is from reclaimed materials. Old bridges and anchor chains dredged from harbors are major sources. The greater corrosion resistance of wrought iron is due to the siliceous impurities (naturally occurring in iron ore), namely ferrous silicate.
Wrought iron has been used for decades as a generic term across the gate and fencing industry, even though mild steel is used for manufacturing these "wrought iron" gates. This is mainly because of the limited availability of true wrought iron. Steel can also be hot-dip galvanised to prevent corrosion, which cannot be done with wrought iron.
See also
Bronze and brass ornamental work
Cast iron
Semi-steel casting
Architectural elements
Building materials
Chinese inventions
Ferrous alloys
Han dynasty
Iron
Ironmongery
Metalworking | Wrought iron | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 5,967 | [
"Ferrous alloys",
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Architectural elements",
"Alloys",
"Components",
"Matter",
"Building materials"
] |
99,611 | https://en.wikipedia.org/wiki/Appendix%20%28anatomy%29 | The appendix (pl.: appendices or appendixes; also vermiform appendix; cecal (or caecal, cæcal) appendix; vermix; or vermiform process) is a finger-like, blind-ended tube connected to the cecum, from which it develops in the embryo.
The cecum is a pouch-like structure of the large intestine, located at the junction of the small and the large intestines. The term "vermiform" comes from Latin and means "worm-shaped". The appendix was once considered a vestigial organ, but this view has changed since the early 2000s. Research suggests that the appendix may serve an important purpose as a reservoir for beneficial gut bacteria.
Structure
The human appendix averages in length, ranging from . The diameter of the appendix is , and more than is considered a thickened or inflamed appendix. The longest appendix ever removed was long. The appendix is usually located in the lower right quadrant of the abdomen, near the right hip bone. The base of the appendix is located beneath the ileocecal valve that separates the large intestine from the small intestine. Its position within the abdomen corresponds to a point on the surface known as McBurney's point.
The appendix is connected to the mesentery in the lower region of the ileum, by a short region of the mesocolon known as the mesoappendix.
Variation
Some identical twins—known as mirror image twins—can have a mirror-imaged anatomy, a congenital condition with the appendix located in the lower left quadrant of the abdomen instead of the lower right. Intestinal malrotation may also cause displacement of the appendix to the left side.
While the base of the appendix is typically located below the ileocecal valve, the tip of the appendix can be variably located—in the pelvis, outside the peritoneum or behind the cecum. The prevalence of the different positions varies amongst populations with the retrocecal position being most common in Ghana and Sudan, with 67.3% and 58.3% occurrence respectively, in comparison to Iran and Bosnia where the pelvic position is most common, with 55.8% and 57.7% occurrence respectively.
In very rare cases, the appendix may not be present at all (laparotomies for suspected appendicitis have given a frequency of 1 in 100,000).
Sometimes there is a semi-circular fold of mucous membrane at the opening of the appendix. This valve of the vermiform appendix is also called Gerlach's valve.
Functions
Maintaining gut flora
Although it has been long accepted that the immune tissue surrounding the appendix and elsewhere in the gut—called gut-associated lymphoid tissue—carries out a number of important functions, explanations were lacking for the distinctive shape of the appendix and its apparent lack of specific importance and function as judged by an absence of side effects following its removal. Therefore, the notion that the appendix is only vestigial became widely held.
William Parker, Randy Bollinger, and colleagues at Duke University proposed in 2007 that the appendix serves as a haven for useful bacteria when illness flushes the bacteria from the rest of the intestines. This proposition is based on an understanding that emerged by the early 2000s of how the immune system supports the growth of beneficial intestinal bacteria, in combination with many well-known features of the appendix, including its architecture, its location just below the normal one-way flow of food and germs in the large intestine, and its association with copious amounts of immune tissue.
Research performed at Winthrop–University Hospital showed that individuals without an appendix were four times as likely to have a recurrence of Clostridioides difficile colitis. The appendix, therefore, may act as a "safe house" for beneficial bacteria. This reservoir of bacteria could then serve to repopulate the gut flora in the digestive system following a bout of dysentery or cholera or to boost it following a milder gastrointestinal illness.
Immune and lymphatic systems
The appendix has been identified as an important component of mammalian mucosal immune function, particularly B cell-mediated immune responses and extrathymically derived T cells. This structure helps in the proper movement and removal of waste matter in the digestive system, contains lymphatic vessels that regulate pathogens, and might even produce early defences that prevent deadly diseases. Additionally, the appendix is thought to provide further immune defence against invading pathogens by recruiting the lymphatic system's B and T cells to fight the viruses and bacteria that infect that portion of the bowel, and by training them so that immune responses are targeted and better able to fight off pathogens reliably and safely. In addition, there are different immune cells called innate lymphoid cells that function in the gut to help the appendix maintain digestive health.
Research also shows a positive correlation between the existence of the appendix and the concentration of cecal lymphoid tissue, which supports the suggestion that not only does the appendix evolve as a complex with the cecum but also has major immune benefits.
Clinical significance
Common diseases of the appendix (in humans) are appendicitis and carcinoid tumors (appendiceal carcinoid). Appendix cancer accounts for about 1 in 200 of all gastrointestinal malignancies. In rare cases, adenomas are also present.
Appendicitis
Appendicitis is a condition characterized by inflammation of the appendix. Pain often begins in the center of the abdomen, corresponding to the appendix's development as part of the embryonic midgut. This pain is typically a dull, poorly localized, visceral pain.
As the inflammation progresses, the pain begins to localize more clearly to the right lower quadrant, as the peritoneum becomes inflamed. This peritoneal inflammation, or peritonitis, results in rebound tenderness (pain upon removal of pressure rather than application of pressure). In particular, it presents at McBurney's point, 1/3 of the way along a line drawn from the anterior superior iliac spine to the umbilicus. Typically, point (skin) pain is not present until the parietal peritoneum is inflamed, as well. Fever and an immune system response are also characteristic of appendicitis. Other signs and symptoms may include nausea and vomiting, low-grade fever that may get worse, constipation or diarrhea, abdominal bloating, or flatulence.
Appendicitis usually requires the removal of the inflamed appendix, in an appendectomy either by laparotomy or laparoscopy. Untreated, the appendix may rupture, leading to peritonitis, followed by shock, and, if still untreated, death.
Surgery
The surgical removal of the appendix is called an appendectomy. This removal is normally performed as an emergency procedure when the patient is suffering from acute appendicitis. In the absence of surgical facilities, intravenous antibiotics are used to delay or avoid the onset of sepsis. In some cases, the appendicitis resolves completely; more often, an inflammatory mass forms around the appendix. This is a relative contraindication to surgery.
The appendix is also used for the construction of an efferent urinary conduit, in an operation known as the Mitrofanoff procedure, in people with a neurogenic bladder.
The appendix is also used as a means to access the colon in children with paralysed bowels or major rectal sphincter problems. The appendix is brought out to the skin surface and the child/parent can then attach a catheter and easily wash out the colon (via normal defaecation) using an appropriate solution.
History
Charles Darwin suggested that the appendix was mainly used by earlier hominids for digesting fibrous vegetation, then evolved to take on a new purpose over time. The very long cecum of some herbivorous animals, such as in the horse or the koala, appears to support this hypothesis. The koala's cecum enables it to host bacteria that specifically help to break down cellulose. Human ancestors may have also relied upon this system when they lived on a diet rich in foliage.
As people began to eat more easily digested foods, they may have become less reliant on cellulose-rich plants for energy. As the cecum became less necessary for digestion, mutations that were previously deleterious (and would have hindered evolutionary progress) were no longer important, so the mutations survived. It is suggested that these alleles became more frequent and the cecum continued to shrink. After millions of years, the once-necessary cecum degraded to be the appendix of modern humans.
Dr. Heather F. Smith of Midwestern University and colleagues explained:
Recently ... improved understanding of gut immunity has merged with current thinking in biological and medical science, pointing to an apparent function of the mammalian cecal appendix as a safe-house for symbiotic gut microbes, preserving the flora during times of gastrointestinal infection in societies without modern medicine. This function is potentially a selective force for the evolution and maintenance of the appendix.
Three morphotypes of cecal-appendices can be described among mammals based primarily on the shape of the cecum: a distinct appendix branching from a rounded or sac-like cecum (as in many primate species), an appendix located at the apex of a long and voluminous cecum (as in the rabbit, greater glider and Cape dune mole rat), and an appendix in the absence of a pronounced cecum (as in the wombat). In addition, long narrow appendix-like structures are found in mammals that either lack an apparent cecum (as in monotremes) or lack a distinct junction between the cecum and appendix-like structure (as in the koala). A cecal appendix has evolved independently at least twice, and apparently represents yet another example of convergence in morphology between Australian marsupials and placentals in the rest of the world. Although the appendix has apparently been lost by numerous species, it has also been maintained for more than 80 million years in at least one clade.
In a 2013 paper, the appendix was found to have independently evolved in different animals at least 32 times (and perhaps as many as 38 times) and to have been lost no more than six times over the course of history. A more recent study using similar methods on an updated database yielded similar, though less spectacular results, with at least 29 gains and at the most 12 losses (all of which were ambiguous), and this is still significantly asymmetrical.
This suggests that the cecal appendix has a selective advantage in many situations and argues strongly against its vestigial nature. Given that this organ may have a selective advantage in numerous situations, it appears to be associated with greater maximal longevity, for a given body mass. For example, in a 2023 study, the protective functions conferred against diarrhea were observed in young primates. This complex evolutionary history of the appendix, along with a great heterogeneity in its evolutionary rate in various taxa, suggests that it is a recurrent trait.
Such a function may be useful in a culture lacking modern sanitation and healthcare practice, where diarrhea may be prevalent. Current epidemiological data on the cause of death in developing countries collected by the World Health Organization in 2001 show that acute diarrhea is now the fourth leading cause of disease-related death in developing countries (data summarized by the Bill and Melinda Gates Foundation). Two of the other leading causes of death are expected to have exerted limited or no selection pressure.
See also
Meckel's diverticulum
Appendix of the epididymis, a detached efferent duct of the epididymis
Appendix testis, a vestigial remnant of the Müllerian duct
Epiploic appendix, one of several small pouches of fat on the peritoneum along the colon and rectum
Appendix of the laryngeal ventricle, a sac that extends from the laryngeal ventricle
Mesoappendix, the portion of the mesentery that connects the ileum to the vermiform appendix
Further reading
Appendix May Actually Have a Purpose—2007 WebMD article
—"Abdominal Cavity: The Cecum and the Vermiform Appendix"
"The vestigiality of the human vermiform appendix: A Modern Reappraisal"—evolutionary biology argument that the appendix is vestigial
Cho, Jinny (August 27, 2009). "Scientists refute Darwin's theory on appendix". The Chronicle (Duke University). (News article on the above journal article.)
Digestive system
Vestigial organs | Appendix (anatomy) | [
"Biology"
] | 2,684 | [
"Digestive system",
"Organ systems"
] |
99,645 | https://en.wikipedia.org/wiki/Early%20modern%20human | Early modern human (EMH), or anatomically modern human (AMH), are terms used to distinguish Homo sapiens (sometimes Homo sapiens sapiens) that are anatomically consistent with the range of phenotypes seen in contemporary humans, from extinct archaic human species (some of which are at times also identified with, but only one of which bears, the prefix sapiens). This distinction is especially useful for times and regions where anatomically modern and archaic humans co-existed, for example, in Paleolithic Europe. Among the oldest known remains of Homo sapiens are those found at the Omo-Kibish I archaeological site in south-western Ethiopia, dating to about 233,000 to 196,000 years ago, the Florisbad Skull found at the Florisbad archaeological and paleontological site in South Africa, dating to about 259,000 years ago, and the Jebel Irhoud site in Morocco, dated to about 315,000 years ago.
Extinct species of the genus Homo include Homo erectus (extant from roughly 2 to 0.1 million years ago) and a number of other species (by some authors considered subspecies of either H. sapiens or H. erectus). The divergence of the lineage leading to H. sapiens out of ancestral H. erectus (or an intermediate species such as Homo antecessor) is estimated to have occurred in Africa roughly 500,000 years ago. The earliest fossil evidence of early modern humans appears in Africa around 300,000 years ago, with the earliest genetic splits among modern people, according to some evidence, dating to around the same time. Sustained archaic human admixture with modern humans is known to have taken place both in Africa and (following the recent Out-Of-Africa expansion) in Eurasia, between about 100,000 and 30,000 years ago.
Name and taxonomy
The binomial name Homo sapiens was coined by Linnaeus, 1758. The Latin noun homō (genitive hominis) means "human being", while the participle sapiēns means "discerning, wise, sensible".
The species was initially thought to have emerged from a predecessor within the genus Homo around 300,000 to 200,000 years ago. A problem with the morphological classification of "anatomically modern" was that it would not have included certain extant populations. For this reason, a lineage-based (cladistic) definition of H. sapiens has been suggested, in which H. sapiens would by definition refer to the modern human lineage following the split from the Neanderthal lineage. Such a cladistic definition would extend the age of H. sapiens to over 500,000 years.
Estimates for the split between the Homo sapiens line and the combined Neanderthal/Denisovan line range from between 503,000 and 565,000 years ago, to between 550,000 and 765,000 years ago, and (based on rates of dental evolution) to possibly more than 800,000 years ago.
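Dates such as these are typically derived from molecular-clock reasoning: an observed amount of genetic (or, here, dental-morphological) divergence is divided by an assumed rate of change per year. The sketch below is a minimal illustration of that arithmetic using hypothetical round-number inputs; the divergence value and rates are assumptions chosen only for illustration, not the data behind the estimates cited above.

```python
# Minimal molecular-clock sketch (hypothetical round numbers, not the
# published data behind the estimates quoted above).
#
# If two lineages split t years ago, each accumulates substitutions at a
# rate mu per site per year, so their expected per-site divergence is
# d ~ 2 * mu * t, and the split time can be back-calculated as t ~ d / (2 * mu).

def split_time_years(divergence_per_site: float, rate_per_site_per_year: float) -> float:
    """Back-calculate a split time (in years) from per-site divergence."""
    return divergence_per_site / (2.0 * rate_per_site_per_year)

d = 0.0008          # assumed observed divergence: 0.08% of sites differ
mu = 0.5e-9         # assumed substitution rate: 0.5e-9 per site per year
print(f"central estimate: {split_time_years(d, mu):,.0f} years ago")   # 800,000

# Uncertainty in the assumed rate shifts the estimate proportionally,
# which is one reason published split dates span such wide ranges.
for mu_alt in (0.4e-9, 0.6e-9):
    print(f"rate {mu_alt:.1e}: {split_time_years(d, mu_alt):,.0f} years ago")
```

Because the estimate scales inversely with the assumed rate, even modest disagreement about the rate (here, plus or minus 20%) spreads the inferred split date across several hundred thousand years, which mirrors the spread of the published figures.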
Extant human populations have historically been divided into subspecies, but since around the 1980s all extant groups have tended to be subsumed into a single species, H. sapiens, avoiding division into subspecies altogether.
Some sources show Neanderthals (H. neanderthalensis) as a subspecies (H. sapiens neanderthalensis). Similarly, the discovered specimens of the H. rhodesiensis species have been classified by some as a subspecies (H. sapiens rhodesiensis), although it remains more common to treat these last two as separate species within the genus Homo rather than as subspecies within H. sapiens.
All humans are considered to be a part of the subspecies H. sapiens sapiens, a designation which has been a matter of debate since a species is usually not given a subspecies category unless there is evidence of multiple distinct subspecies.
Age and speciation process
Derivation from H. erectus
The divergence of the lineage that would lead to H. sapiens out of archaic human varieties derived from H. erectus, is estimated as having taken place over 500,000 years ago (marking the split of the H. sapiens lineage from ancestors shared with other known archaic hominins). But the oldest split among modern human populations (such as the Khoisan split from other groups) has been recently dated to between 350,000 and 260,000 years ago, and the earliest known examples of H. sapiens fossils also date to about that period, including the Jebel Irhoud remains from Morocco (ca. 300,000 or 350–280,000 years ago), the Florisbad Skull from South Africa (ca. 259,000 years ago), and the Omo remains from Ethiopia (ca. 195,000, or, as more recently dated, ca. 233,000 years ago).
An mtDNA study in 2019 proposed an origin of modern humans in Botswana (and a Khoisan split) of around 200,000 years. However, this proposal has been widely criticized by scholars, with the recent evidence overall (genetic, fossil, and archaeological) supporting an origin for H. sapiens approximately 100,000 years earlier and in a broader region of Africa than the study proposes.
In September 2019, scientists proposed that the earliest H. sapiens (and last common human ancestor to modern humans) arose between 350,000 and 260,000 years ago through a merging of populations in East and South Africa.
An alternative suggestion defines H. sapiens cladistically as including the lineage of modern humans since the split from the lineage of Neanderthals, roughly 500,000 to 800,000 years ago.
Rogers et al. (2017) dated the divergence between archaic H. sapiens and the ancestors of Neanderthals and Denisovans, marked by a genetic bottleneck in the latter, to 744,000 years ago, with repeated early admixture events, and calculated that Denisovans diverged from Neanderthals about 300 generations after their split from H. sapiens.
The derivation of a comparatively homogeneous single species of H. sapiens from more diverse varieties of archaic humans (all of which were descended from the early dispersal of H. erectus some 1.8 million years ago) was debated in terms of two competing models during the 1980s: "recent African origin" postulated the emergence of H. sapiens from a single source population in Africa, which expanded and led to the extinction of all other human varieties, while the "multiregional evolution" model postulated the survival of regional forms of archaic humans, gradually converging into the modern human varieties by the mechanism of clinal variation, via genetic drift, gene flow and selection throughout the Pleistocene.
Since the 2000s, the availability of data from archaeogenetics and population genetics has led to the emergence of a much more detailed picture, intermediate between the two competing scenarios outlined above: The recent Out-of-Africa expansion accounts for the predominant part of modern human ancestry, while there were also significant admixture events with regional archaic humans.
Since the 1970s, the Omo remains, originally dated to some 195,000 years ago, have often been taken as the conventional cut-off point for the emergence of "anatomically modern humans". Since the 2000s, the discovery of older remains with comparable characteristics, and the discovery of ongoing hybridization between "modern" and "archaic" populations after the time of the Omo remains, have opened up a renewed debate on the age of H. sapiens in journalistic publications. H. s. idaltu, dated to 160,000 years ago, has been postulated as an extinct subspecies of H. sapiens in 2003. H. neanderthalensis, which became extinct about 40,000 years ago, was also at one point considered to be a subspecies, H. s. neanderthalensis.
H. heidelbergensis, dated 600,000 to 300,000 years ago, has long been thought to be a likely candidate for the last common ancestor of the Neanderthal and modern human lineages. However, genetic evidence from the Sima de los Huesos fossils published in 2016 seems to suggest that H. heidelbergensis in its entirety should be included in the Neanderthal lineage, as "pre-Neanderthal" or "early Neanderthal", while the divergence time between the Neanderthal and modern lineages has been pushed back to before the emergence of H. heidelbergensis, to close to 800,000 years ago, the approximate time of disappearance of H. antecessor.
Early Homo sapiens
The term Middle Paleolithic is intended to cover the time between the first emergence of H. sapiens (roughly 300,000 years ago) and the period held by some to mark the emergence of full behavioral modernity (roughly by 50,000 years ago, corresponding to the start of the Upper Paleolithic).
Many of the early modern human finds, like those of Jebel Irhoud, Omo, Herto, Florisbad, Skhul, and Peștera cu Oase exhibit a mix of archaic and modern traits. Skhul V, for example, has prominent brow ridges and a projecting face. However, the brain case is quite rounded and distinct from that of the Neanderthals and is similar to the brain case of modern humans. It is uncertain whether the robust traits of some of the early modern humans like Skhul V reflects mixed ancestry or retention of older traits.
The "gracile" or lightly built skeleton of anatomically modern humans has been connected to a change in behavior, including increased cooperation and "resource transport".
There is evidence that the characteristic human brain development, especially the prefrontal cortex, was due to "an exceptional acceleration of metabolome evolution ... paralleled by a drastic reduction in muscle strength. The observed rapid metabolic changes in brain and muscle, together with the unique human cognitive skills and low muscle performance, might reflect parallel mechanisms in human evolution." The Schöningen spears and their correlation of finds are evidence that complex technological skills already existed 300,000 years ago, and are the first obvious proof of an active (big game) hunt. H. heidelbergensis already had intellectual and cognitive skills like anticipatory planning, thinking and acting that so far have only been attributed to modern man.
The ongoing admixture events within anatomically modern human populations make it difficult to estimate the age of the matrilinear and patrilinear most recent common ancestors of modern populations (Mitochondrial Eve and Y-chromosomal Adam). Estimates of the age of Y-chromosomal Adam have been pushed back significantly with the discovery of an ancient Y-chromosomal lineage in 2013, to likely beyond 300,000 years ago. There have, however, been no reports of the survival of Y-chromosomal or mitochondrial DNA clearly deriving from archaic humans (which would push back the age of the most recent patrilinear or matrilinear ancestor beyond 500,000 years).
Fossil teeth found at Qesem Cave (Israel) and dated to between 400,000 and 200,000 years ago have been compared to the dental material from the younger (120,000–80,000 years ago) Skhul and Qafzeh hominins.
Dispersal and archaic admixture
Dispersal of early H. sapiens begins soon after its emergence, as evidenced by the North African Jebel Irhoud finds (dated to around 315,000 years ago). There is indirect evidence for H. sapiens presence in West Asia around 270,000 years ago.
The Florisbad Skull from Florisbad, South Africa, dated to about 259,000 years ago, has also been classified as representing early H. sapiens.
Among extant populations, the Khoi-San (or "Capoid") hunter-gatherers of Southern Africa may represent the human population with the earliest possible divergence within the group Homo sapiens sapiens. Their separation time has been estimated in a 2017 study to be between 350,000 and 260,000 years ago, compatible with the estimated age of early H. sapiens. The study states that the deep split-time estimation of 350 to 260 thousand years ago is consistent with the archaeological estimate for the onset of the Middle Stone Age across sub-Saharan Africa and coincides with archaic H. sapiens in southern Africa represented by, for example, the Florisbad skull dating to 259 (± 35) thousand years ago.
H. s. idaltu, found at Middle Awash in Ethiopia, lived about 160,000 years ago, and H. sapiens lived at Omo Kibish in Ethiopia about 233,000-195,000 years ago. Two fossils from Guomde, Kenya, dated to at least (and likely more than) 180,000 years ago and (more precisely) to 300–270,000 years ago, have been tentatively assigned to H. sapiens and similarities have been noted between them and the Omo Kibbish remains. Fossil evidence for modern human presence in West Asia is ascertained for 177,000 years ago, and disputed fossil evidence suggests expansion as far as East Asia by 120,000 years ago.
In July 2019, anthropologists reported the discovery of 210,000 year old remains of a H. sapiens and 170,000 year old remains of a H. neanderthalensis in Apidima Cave, Peloponnese, Greece, more than 150,000 years older than previous H. sapiens finds in Europe.
A significant dispersal event, within Africa and to West Asia, is associated with the African megadroughts during MIS 5, beginning 130,000 years ago. A 2011 study located the origin of basal population of contemporary human populations at 130,000 years ago, with the Khoi-San representing an "ancestral population cluster" located in southwestern Africa (near the coastal border of Namibia and Angola).
While early modern human expansion in Sub-Saharan Africa before 130 kya persisted, early expansion to North Africa and Asia appears to have mostly disappeared by the end of MIS5 (75,000 years ago), and is known only from fossil evidence and from archaic admixture. Eurasia was re-populated by early modern humans in the so-called "recent out-of-Africa migration" post-dating MIS5, beginning around 70,000–50,000 years ago. In this expansion, bearers of mt-DNA haplogroup L3 left East Africa, likely reaching Arabia via the Bab-el-Mandeb, and in the Great Coastal Migration spread to South Asia, Maritime South Asia and Oceania between 65,000 and 50,000 years ago, while Europe, East and North Asia were reached by about 45,000 years ago. Some evidence suggests that an early wave of humans may have reached the Americas by about 40,000–25,000 years ago.
Evidence for the overwhelming contribution of this "recent" (L3-derived) expansion to all non-African populations was established based on mitochondrial DNA, combined with evidence based on physical anthropology of archaic specimens, during the 1990s and 2000s, and has also been supported by Y DNA and autosomal DNA. The assumption of complete replacement has been revised in the 2010s with the discovery of admixture events (introgression) of populations of H. sapiens with populations of archaic humans over the period between roughly 100,000 and 30,000 years ago, both in Eurasia and in Sub-Saharan Africa. Neanderthal admixture, in the range of 1–4%, is found in all modern populations outside of Africa, including in Europeans, Asians, Papua New Guineans, Australian Aboriginals, Native Americans, and other non-Africans. This suggests that interbreeding between Neanderthals and anatomically modern humans took place after the recent "out of Africa" migration, likely between 60,000 and 40,000 years ago. Recent admixture analyses have added to the complexity, finding that Eastern Neanderthals derive up to 2% of their ancestry from anatomically modern humans who left Africa some 100 kya. The extent of Neanderthal admixture (and introgression of genes acquired by admixture) varies significantly between contemporary racial groups, being absent in Africans, intermediate in Europeans and highest in East Asians. Certain genes related to UV-light adaptation introgressed from Neanderthals have been found to have been selected for in East Asians specifically from 45,000 years ago until around 5,000 years ago. The extent of archaic admixture is of the order of about 1% to 4% in Europeans and East Asians, and highest among Melanesians (the last also having Denisova hominin admixture at 4% to 6% in addition to Neanderthal admixture). Cumulatively, about 20% of the Neanderthal genome is estimated to remain present, spread across contemporary populations.
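The contrast between the 1–4% of Neanderthal ancestry carried by any one individual and the roughly 20% of the Neanderthal genome that survives cumulatively can be illustrated with a toy simulation: different people carry different archaic segments, so the union across many genomes is much larger than any individual's share, while purging of archaic ancestry from large regions keeps the cumulative total well below 100%. All numbers in the sketch below are made-up parameters chosen only to mimic the figures quoted above.

```python
# Toy model (made-up parameters) of individual vs. cumulative Neanderthal ancestry.
# Each person carries ~2% of an archaic genome as randomly placed segments, but
# only a "retainable" fraction of that genome is available to anyone, standing in
# for regions from which archaic ancestry has not been purged.
import random

random.seed(0)

GENOME_BINS = 10_000                 # archaic genome modelled as 10,000 bins
RETAINABLE = list(range(2_200))      # assume ~22% of bins escaped purging
SEGMENTS_PER_PERSON = 200            # each person keeps 200 bins = 2% of the genome

covered = set()                      # bins present in at least one sampled person
for person_count in range(1, 1001):
    covered.update(random.sample(RETAINABLE, SEGMENTS_PER_PERSON))
    if person_count in (1, 10, 100, 1000):
        share = len(covered) / GENOME_BINS
        print(f"{person_count:>4} people sampled: union covers {share:.1%} of the archaic genome")
```

Under these assumptions each sampled person still carries only 2%, while the union across many people plateaus near the retainable fraction, mirroring the individual versus population-wide figures described above.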
In September 2019, scientists reported the computerized determination, based on 260 CT scans, of a virtual skull shape of the last common human ancestor to modern humans/H. sapiens, representative of the earliest modern humans, and suggested that modern humans arose between 350,000 and 260,000 years ago through a merging of populations in East and South Africa, while North-African fossils may represent a population which introgressed into Neanderthals during the LMP.
According to a study published in 2020, there are indications that 2% to 19% (with point estimates of about 6.6% and 7.0%) of the DNA of four West African populations may have come from an unknown archaic hominin which split from the ancestor of humans and Neanderthals between 360 kya and 1.02 mya.
Anatomy
Generally, modern humans are more lightly built (or more "gracile") than the more "robust" archaic humans. Nevertheless, contemporary humans exhibit high variability in many physiological traits, and may exhibit remarkable "robustness". There are still a number of physiological details which can be taken as reliably differentiating the physiology of Neanderthals vs. anatomically modern humans.
Anatomical modernity
The term "anatomically modern humans" (AMH) is used with varying scope depending on context, to distinguish "anatomically modern" Homo sapiens from archaic humans such as Neanderthals and Middle and Lower Paleolithic hominins with transitional features intermediate between H. erectus, Neanderthals and early AMH called archaic Homo sapiens. In a convention popular in the 1990s, Neanderthals were classified as a subspecies of H. sapiens, as H. s. neanderthalensis, while AMH (or European early modern humans, EEMH) was taken to refer to "Cro-Magnon" or H. s. sapiens. Under this nomenclature (Neanderthals considered H. sapiens), the term "anatomically modern Homo sapiens" (AMHS) has also been used to refer to EEMH ("Cro-Magnons"). It has since become more common to designate Neanderthals as a separate species, H. neanderthalensis, so that AMH in the European context refers to H. sapiens, but the question is by no means resolved.
In this more narrow definition of H. sapiens, the subspecies Homo sapiens idaltu, discovered in 2003, also falls under the umbrella of "anatomically modern". The recognition of H. sapiens idaltu as a valid subspecies of the anatomically modern human lineage would justify the description of contemporary humans with the subspecies name Homo sapiens sapiens. However, biological anthropologist Chris Stringer does not consider idaltu distinct enough within H. sapiens to warrant its own subspecies designation.
A further division of AMH into "early" or "robust" vs. "post-glacial" or "gracile" subtypes has since been used for convenience. The emergence of "gracile AMH" is taken to reflect a process towards a smaller and more fine-boned skeleton beginning around 50,000–30,000 years ago.
Braincase anatomy
The cranium lacks a pronounced occipital bun in the neck, a bulge that anchored considerable neck muscles in Neanderthals. Modern humans, even the earlier ones, generally have a larger fore-brain than the archaic people, so that the brain sits above rather than behind the eyes. This will usually (though not always) give a higher forehead, and reduced brow ridge. Early modern people and some living people do however have quite pronounced brow ridges, but they differ from those of archaic forms by having both a supraorbital foramen or notch, forming a groove through the ridge above each eye. This splits the ridge into a central part and two distal parts. In current humans, often only the central section of the ridge is preserved (if it is preserved at all). This contrasts with archaic humans, where the brow ridge is pronounced and unbroken.
Modern humans commonly have a steep, even vertical forehead whereas their predecessors had foreheads that sloped strongly backwards. According to Desmond Morris, the vertical forehead in humans plays an important role in human communication through eyebrow movements and forehead skin wrinkling.
Brain size in both Neanderthals and AMH is significantly larger on average (but overlapping in range) than brain size in H. erectus. Neanderthal and AMH brain sizes are in the same range, but there are differences in the relative sizes of individual brain areas, with significantly larger visual systems in Neanderthals than in AMH.
Jaw anatomy
Compared to archaic people, anatomically modern humans have smaller, differently shaped teeth. This results in a smaller, more receded dentary, making the rest of the jaw-line stand out, giving an often quite prominent chin. The central part of the mandible forming the chin carries a triangularly shaped area forming the apex of the chin called the mental Trigon, not found in archaic humans. Particularly in living populations, the use of fire and tools requires fewer jaw muscles, giving slender, more gracile jaws. Compared to archaic people, modern humans have smaller, lower faces.
Body skeleton structure
The body skeletons of even the earliest and most robustly built modern humans were less robust than those of Neanderthals (and from what little we know from Denisovans), having essentially modern proportions. Particularly regarding the long bones of the limbs, the distal bones (the radius/ulna and tibia/fibula) are nearly the same size or slightly shorter than the proximal bones (the humerus and femur). In ancient people, particularly Neanderthals, the distal bones were shorter, usually thought to be an adaptation to cold climate. The same adaptation is found in some modern people living in the polar regions.
Height ranges overlap between Neanderthals and AMH, with Neanderthal averages cited as and for males and females, respectively, which is largely identical to pre-industrial average heights for AMH. Contemporary national averages range between in males and in females. Neanderthal ranges approximate the contemporary height distribution measured among Malay people, for one.
Recent evolution
Following the peopling of Africa some 130,000 years ago, and the recent Out-of-Africa expansion some 70,000 to 50,000 years ago, some sub-populations of H. sapiens had been essentially isolated for tens of thousands of years prior to the early modern Age of Discovery. Combined with archaic admixture this has resulted in significant genetic variation, which in some instances has been shown to be the result of directional selection taking place over the past 15,000 years, i.e., significantly later than possible archaic admixture events.
Some climatic adaptations, such as high-altitude adaptation in humans, are thought to have been acquired by archaic admixture. Introgression of genetic variants acquired by Neanderthal admixture have different distributions in European and East Asians, reflecting differences in recent selective pressures. A 2014 study reported that Neanderthal-derived variants found in East Asian populations showed clustering in functional groups related to immune and haematopoietic pathways, while European populations showed clustering in functional groups related to the lipid catabolic process. A 2017 study found correlation of Neanderthal admixture in phenotypic traits in modern European populations.
Physiological or phenotypical changes have been traced to Upper Paleolithic mutations, such as the East Asian variant of the EDAR gene, dated to c. 35,000 years ago.
Recent divergence of Eurasian lineages was sped up significantly during the Last Glacial Maximum (LGM), the Mesolithic and the Neolithic, due to increased selection pressures and due to founder effects associated with migration. Alleles predictive of light skin have been found in Neanderthals, but the alleles for light skin in Europeans and East Asians, associated with KITLG and ASIP, are thought to have been acquired not by archaic admixture but by recent mutations since the LGM. Phenotypes associated with the "white" or "Caucasian" populations of Western Eurasian stock emerge during the LGM, from about 19,000 years ago. Average cranial capacity in modern human populations varies in the range of 1,200 to 1,450 cm3 for adult males. Larger cranial volume is associated with climatic region, the largest averages being found in populations of Siberia and the Arctic. Both Neanderthal and EEMH had somewhat larger cranial volumes on average than modern Europeans, suggesting the relaxation of selection pressures for larger brain volume after the end of the LGM.
Examples of still later adaptations related to agriculture and animal domestication, including East Asian types of ADH1B associated with rice domestication and lactase persistence, are due to recent selection pressures.
An even more recent adaptation has been proposed for the Austronesian Sama-Bajau, developed under selection pressures associated with subsisting on freediving over the past thousand years or so.
Behavioral modernity
Behavioral modernity, involving the development of language, figurative art and early forms of religion (etc.) is taken to have arisen before 40,000 years ago, marking the beginning of the Upper Paleolithic (in African contexts also known as the Later Stone Age).
There is considerable debate regarding whether the earliest anatomically modern humans behaved similarly to recent or existing humans. Behavioral modernity is taken to include fully developed language (requiring the capacity for abstract thought), artistic expression, early forms of religious behavior, increased cooperation and the formation of early settlements, and the production of articulated tools from lithic cores, bone or antler. The term Upper Paleolithic is intended to cover the period since the rapid expansion of modern humans throughout Eurasia, which coincides with the first appearance of Paleolithic art such as cave paintings and the development of technological innovation such as the spear-thrower. The Upper Paleolithic begins around 50,000 to 40,000 years ago, and also coincides with the disappearance of archaic humans such as the Neanderthals.
The term "behavioral modernity" is somewhat disputed. It is most often used for the set of characteristics marking the Upper Paleolithic, but some scholars use "behavioral modernity" for the emergence of H. sapiens around 200,000 years ago, while others use the term for the rapid developments occurring around 50,000 years ago. It has been proposed that the emergence of behavioral modernity was a gradual process.
Examples of behavioural modernity
The equivalent of the Eurasian Upper Paleolithic in African archaeology is known as the Later Stone Age, also beginning roughly 40,000 years ago. While the clearest evidence for behavioral modernity uncovered since the later 19th century was from Europe, such as the Venus figurines and other artefacts from the Aurignacian, more recent archaeological research has shown that all essential elements of the kind of material culture typical of contemporary San hunter-gatherers in Southern Africa were also present by at least 40,000 years ago, including digging sticks of similar materials used today, ostrich egg shell beads, bone arrow heads with individual maker's marks etched and embedded with red ochre, and poison applicators. There is also a suggestion that "pressure flaking best explains the morphology of lithic artifacts recovered from the c. 75-ka Middle Stone Age levels at Blombos Cave, South Africa. The technique was used during the final shaping of Still Bay bifacial points made on heat‐treated silcrete." Both pressure flaking and heat treatment of materials were previously thought to have occurred much later in prehistory, and both indicate a behaviourally modern sophistication in the use of natural materials. Further reports of research on cave sites along the southern African coast indicate that "the debate as to when cultural and cognitive characteristics typical of modern humans first appeared" may be coming to an end, as "advanced technologies with elaborate chains of production" which "often demand high-fidelity transmission and thus language" have been found at the South African Pinnacle Point Site 5–6. These have been dated to approximately 71,000 years ago. The researchers suggest that their research "shows that microlithic technology originated early in South Africa by 71 kya, evolved over a vast time span (c. 11,000 years), and was typically coupled to complex heat treatment that persisted for nearly 100,000 years. Advanced technologies in Africa were early and enduring; a small sample of excavated sites in Africa is the best explanation for any perceived 'flickering' pattern." Increases in behavioral complexity have been speculated to have been linked to an earlier climatic change to much drier conditions between 135,000 and 75,000 years ago. This might have led human groups seeking refuge from the inland droughts to expand along the coastal marshes rich in shellfish and other resources. Since sea levels were low due to so much water tied up in glaciers, such marshlands would have occurred all along the southern coasts of Eurasia. The use of rafts and boats may well have facilitated exploration of offshore islands and travel along the coast, and eventually permitted expansion to New Guinea and then to Australia.
In addition, a variety of other evidence of abstract imagery, widened subsistence strategies, and other "modern" behaviors has been discovered in Africa, especially South, North, and East Africa, predating 50,000 years ago (with some predating 100,000 years ago). The Blombos Cave site in South Africa, for example, is famous for rectangular slabs of ochre engraved with geometric designs. Using multiple dating techniques, the site was confirmed to be around 77,000 and 100,000–75,000 years old. Ostrich egg shell containers engraved with geometric designs dating to 60,000 years ago were found at Diepkloof, South Africa. Beads and other personal ornamentation have been found in Morocco which might be as much as 130,000 years old; as well, the Cave of Hearths in South Africa has yielded a number of beads dating from significantly prior to 50,000 years ago, and shell beads dating to about 75,000 years ago have been found at Blombos Cave, South Africa. Specialized projectile weapons have also been found at various sites in Middle Stone Age Africa, including bone and stone arrowheads at South African sites such as Sibudu Cave (along with an early bone needle also found at Sibudu) dating to approximately 72,000–60,000 years ago, some of which may have been tipped with poisons, and bone harpoons at the Central African site of Katanda dating ca. 90,000 years ago. Evidence also exists for the systematic heat treating of silcrete stone to increase its flake-ability for the purpose of toolmaking, beginning approximately 164,000 years ago at the South African site of Pinnacle Point and becoming common there for the creation of microlithic tools at about 72,000 years ago.
In 2008, an ochre processing workshop likely for the production of paints was uncovered dating to ca. 100,000 years ago at Blombos Cave, South Africa. Analysis shows that a liquefied pigment-rich mixture was produced and stored in the two abalone shells, and that ochre, bone, charcoal, grindstones and hammer-stones also formed a composite part of the toolkits. Evidence for the complexity of the task includes procuring and combining raw materials from various sources (implying they had a mental template of the process they would follow), possibly using pyrotechnology to facilitate fat extraction from bone, using a probable recipe to produce the compound, and the use of shell containers for mixing and storage for later use. Modern behaviors, such as the making of shell beads, bone tools and arrows, and the use of ochre pigment, are evident at a Kenyan site by 78,000-67,000 years ago. Evidence of early stone-tipped projectile weapons (a characteristic tool of Homo sapiens), the stone tips of javelins or throwing spears, were discovered in 2013 at the Ethiopian site of Gademotta, and date to around 279,000 years ago.
Expanding subsistence strategies beyond big-game hunting and the consequential diversity in tool types have been noted as signs of behavioral modernity. A number of South African sites have shown an early reliance on aquatic resources from fish to shellfish. Pinnacle Point, in particular, shows exploitation of marine resources as early as 120,000 years ago, perhaps in response to more arid conditions inland. Establishing a reliance on predictable shellfish deposits, for example, could reduce mobility and facilitate complex social systems and symbolic behavior. Blombos Cave and Site 440 in Sudan both show evidence of fishing as well. Taphonomic changes in fish skeletons from Blombos Cave have been interpreted as evidence of the capture of live fish, clearly an intentional human behavior.
Humans in North Africa (Nazlet Sabaha, Egypt) are known to have dabbled in chert mining, as early as ≈100,000 years ago, for the construction of stone tools.
Evidence was found in 2018, dating to about 320,000 years ago at the site of Olorgesailie in Kenya, of the early emergence of modern behaviors including: the trade and long-distance transportation of resources (such as obsidian), the use of pigments, and the possible making of projectile points. The authors of three 2018 studies on the site observe that the evidence of these behaviors is roughly contemporary with the earliest known Homo sapiens fossil remains from Africa (such as at Jebel Irhoud and Florisbad), and they suggest that complex and modern behaviors began in Africa around the time of the emergence of Homo sapiens.
In 2019, further evidence of Middle Stone Age complex projectile weapons in Africa was found at Aduma, Ethiopia, dated 100,000–80,000 years ago, in the form of points considered likely to belong to darts delivered by spear throwers.
Pace of progress during Homo sapiens history
Technological and cultural progress by Homo sapiens appears to have been very much faster in recent millennia than in the species' early periods. The pace of development may indeed have accelerated, due to a massively larger population (so more humans extant to think of innovations), more communication and sharing of ideas among human populations, and the accumulation of thinking tools. However, it may also be that the pace of advancement always looks relatively faster to humans in the time they live, because previous advances go unrecognised.
Notes
References
Sources
Further reading
External links
Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016).
Humans
Mammals described in 1758
Taxa named by Carl Linnaeus
Tool-using mammals | Early modern human | [
"Biology"
] | 7,396 | [
"Biological hypotheses",
"Recent African origin of modern humans",
"Anatomically modern humans"
] |
100,034 | https://en.wikipedia.org/wiki/Military%20engineering | Military engineering is loosely defined as the art, science, and practice of designing and building military works and maintaining lines of military transport and military communications. Military engineers are also responsible for logistics behind military tactics. Modern military engineering differs from civil engineering. In the 20th and 21st centuries, military engineering also includes CBRN defense and other engineering disciplines such as mechanical and electrical engineering techniques.
According to NATO, "military engineering is that engineer activity undertaken, regardless of component or service, to shape the physical operating environment. Military engineering incorporates support to maneuver and to the force as a whole, including military engineering functions such as engineer support to force protection, counter-improvised explosive devices, environmental protection, engineer intelligence and military search. Military engineering does not encompass the activities undertaken by those 'engineers' who maintain, repair and operate vehicles, vessels, aircraft, weapon systems and equipment."
Military engineering is an academic subject taught in military academies or schools of military engineering. The construction and demolition tasks related to military engineering are usually performed by military engineers including soldiers trained as sappers or pioneers. In modern armies, soldiers trained to perform such tasks while well forward in battle and under fire are often called combat engineers.
In some countries, military engineers may also perform non-military construction tasks in peacetime such as flood control and river navigation works, but such activities do not fall within the scope of military engineering.
Etymology
The word engineer was initially used in the context of warfare, dating back to 1325 when engine’er (literally, one who operates an engine) referred to "a constructor of military engines". In this context, "engine" referred to a military machine, i.e., a mechanical contraption used in war (for example, a catapult).
As the design of civilian structures such as bridges and buildings developed as a technical discipline, the term civil engineering entered the lexicon as a way to distinguish between those specializing in the construction of such non-military projects and those involved in the older discipline. As the prevalence of civil engineering outstripped engineering in a military context and the number of disciplines expanded, the original military meaning of the word "engineering" is now largely obsolete. In its place, the term "military engineering" has come to be used.
History
In ancient times, military engineers were responsible for siege warfare and building field fortifications, temporary camps and roads. The most notable engineers of ancient times were the Romans and Chinese, who constructed huge siege-machines (catapults, battering rams and siege towers). The Romans were responsible for constructing fortified wooden camps and paved roads for their legions. Many of these Roman roads are still in use today.
The first civilization to have a dedicated force of military engineering specialists was the Romans, whose army contained a dedicated corps of military engineers known as architecti. This group was pre-eminent among its contemporaries. The scale of certain military engineering feats, such as the construction of a double-wall of fortifications long, in just 6 weeks to completely encircle the besieged city of Alesia in 52 B.C.E., is an example. Such military engineering feats would have been completely new, and probably bewildering and demoralizing, to the Gallic defenders. Vitruvius is the best known of these Roman army engineers, because his writings survived.
Examples of battles before the early modern period where military engineers played a decisive role include the Siege of Tyre under Alexander the Great, the Siege of Masada by Lucius Flavius Silva as well as the Battle of the Trench under the suggestion of Salman the Persian to dig a trench.
For about 600 years after the fall of the Roman empire, the practice of military engineering barely evolved in the west. In fact, many of the classic techniques and practices of Roman military engineering were lost. Through this period, the foot soldier (who was pivotal to much of the Roman military engineering capability) was largely replaced by mounted soldiers. It was not until later in the Middle Ages that military engineering saw a revival focused on siege warfare.
Military engineers planned castles and fortresses. When laying siege, they planned and oversaw efforts to penetrate castle defenses. When castles served a military purpose, one of the tasks of the sappers was to weaken the bases of walls to enable them to be breached before means of thwarting these activities were devised. Broadly speaking, sappers were experts at demolishing or otherwise overcoming or bypassing fortification systems.
With the 14th-century development of gunpowder, new siege engines in the form of cannons appeared. Initially military engineers were responsible for maintaining and operating these new weapons just as had been the case with previous siege engines. In England, the challenge of managing the new technology resulted in the creation of the Office of Ordnance around 1370 in order to administer the cannons, armaments and castles of the kingdom. Both military engineers and artillery formed the body of this organization and served together until the office's successor, the Board of Ordnance was disbanded in 1855.
In comparison to older weapons, the cannon was significantly more effective against traditional medieval fortifications. Military engineering significantly revised the way fortifications were built in order to be better protected from enemy direct and plunging shot. The new fortifications were also intended to increase the ability of defenders to bring fire onto attacking enemies. Fort construction proliferated in 16th-century Europe based on the trace italienne design.
By the 18th century, regiments of foot (infantry) in the British, French, Prussian and other armies included pioneer detachments. In peacetime these specialists constituted the regimental tradesmen, constructing and repairing buildings, transport wagons, etc. On active service they moved at the head of marching columns with axes, shovels, and pickaxes, clearing obstacles or building bridges to enable the main body of the regiment to move through difficult terrain. The modern Royal Welch Fusiliers and French Foreign Legion still maintain pioneer sections who march at the front of ceremonial parades, carrying chromium-plated tools intended for show only. Other historic distinctions include long work aprons and the right to wear beards. In West Africa, the Ashanti army was accompanied to war by carpenters who were responsible for constructing shelters and blacksmiths who repaired weapons. By the 18th century, sappers were deployed in the Dahomeyan army during assaults against fortifications.
The Peninsular War (1808–14) revealed deficiencies in the training and knowledge of officers and men of the British Army in the conduct of siege operations and bridging. During this war, low-ranking Royal Engineers officers carried out large-scale operations. They had under their command working parties of two or three battalions of infantry, two or three thousand men, who knew nothing of the art of siegeworks. Royal Engineers officers had to demonstrate the simplest tasks to the soldiers, often while under enemy fire. Several officers were lost and could not be replaced, and a better system of training for siege operations was required. On 23 April 1812 an establishment was authorised, by Royal Warrant, to teach "Sapping, Mining, and other Military Fieldworks" to the junior officers of the Corps of Royal Engineers and the Corps of Royal Military Artificers, Sappers and Miners.
The first courses at the Royal Engineers Establishment were done on an all ranks basis with the greatest regard to economy. To reduce staff the NCOs and officers were responsible for instructing and examining the soldiers. If the men could not read or write they were taught to do so, and those who could read and write were taught to draw and interpret simple plans. The Royal Engineers Establishment quickly became the centre of excellence for all fieldworks and bridging. Captain Charles Pasley, the director of the Establishment, was keen to confirm his teaching, and regular exercises were held as demonstrations or as experiments to improve the techniques and teaching of the Establishment. From 1833 bridging skills were demonstrated annually by the building of a pontoon bridge across the Medway which was tested by the infantry of the garrison and the cavalry from Maidstone. These demonstrations had become a popular spectacle for the local people by 1843, when 43,000 came to watch a field day laid on to test a method of assaulting earthworks for a report to the Inspector General of Fortifications. In 1869 the title of the Royal Engineers Establishment was changed to "The School of Military Engineering" (SME) as evidence of its status, not only as the font of engineer doctrine and training for the British Army, but also as the leading scientific military school in Europe.
The dawn of the internal combustion engine marked the beginning of a significant change in military engineering. With the arrival of the automobile at the end of the 19th century and heavier than air flight at the start of the 20th century, military engineers assumed a major new role in supporting the movement and deployment of these systems in war. Military engineers gained vast knowledge and experience in explosives. They were tasked with planting bombs, landmines and dynamite.
At the end of World War I, the standoff on the Western Front caused the Imperial German Army to gather experienced and particularly skilled soldiers to form "Assault Teams" which would break through the Allied trenches. With enhanced training and special weapons (such as flamethrowers), these squads achieved some success, but too late to change the outcome of the war. In early WWII, however, the Wehrmacht "Pioniere" battalions proved their efficiency in both attack and defense, somewhat inspiring other armies to develop their own combat engineers battalions. Notably, the attack on Fort Eben-Emael in Belgium was conducted by Luftwaffe glider-deployed combat engineers.
The need to defeat the German defensive positions of the "Atlantic wall" as part of the amphibious landings in Normandy in 1944 led to the development of specialist combat engineer vehicles. These, collectively known as Hobart's Funnies, included a specific vehicle to carry combat engineers, the Churchill AVRE. These and other dedicated assault vehicles were organised into the specialised 79th Armoured Division and deployed during Operation Overlord – 'D-Day'.
Other significant military engineering projects of World War II include Mulberry harbour and Operation Pluto.
Modern military engineering still retains the Roman role of building field fortifications, road paving and breaching terrain obstacles. A notable military engineering task was, for example, breaching the Suez Canal during the Yom Kippur War.
Education
Military engineers can come from a variety of engineering programs. They may be graduates of mechanical, electrical, civil, or industrial engineering.
Sub-discipline
Modern military engineering can be divided into three main tasks or fields: combat engineering, strategic support, and ancillary support. Combat engineering is associated with engineering on the battlefield. Combat engineers are responsible for increasing mobility on the front lines of war such as digging trenches and building temporary facilities in war zones. Strategic support is associated with providing service in communication zones such as the construction of airfields and the improvement and upgrade of ports, roads and railways communication. Ancillary support includes provision and distribution of maps as well as the disposal of unexploded warheads. Military engineers construct bases, airfields, roads, bridges, ports, and hospitals. During peacetime before modern warfare, military engineers took the role of civil engineers by participating in the construction of civil-works projects. Nowadays, military engineers are almost entirely engaged in war logistics and preparedness.
Explosives engineering
Explosives are defined as any system that produces rapidly expanding gases in a given volume in a short duration. Specific military engineering occupations also extend to the field of explosives and demolitions and their usage on the battlefield. Explosive devices have been used on the battlefield for several centuries, in numerous operations from combat to area clearance. The earliest known development of explosives can be traced back to 10th-century China, where the Chinese are credited with engineering the world's first known explosive, black powder. Initially developed for recreational purposes, black powder was later utilized for military application in bombs and projectile propulsion in firearms. Engineers in the military who specialize in this field formulate and design many explosive devices for use in varying operating conditions. Such explosive compounds range from black powder to modern plastic explosives. This particular field is commonly listed under the role of combat engineers, whose demolitions expertise also includes mine and IED detection and disposal. For more information, see Bomb disposal.
Military engineering by country
Military engineers are key in all armed forces of the world, and invariably found either closely integrated into the force structure, or even into the combat units of the national troops.
Brazil
Brazilian Army engineers can be part of the Quadro de Engenheiros Militares, with its members trained or professionalized by the traditional Instituto Militar de Engenharia (IME) (Military Institute of Engineering), or the Arma de Engenharia, with its members trained by the Academia Militar das Agulhas Negras (AMAN) (Agulhas Negras Military Academy).
In the Brazilian Navy, engineers can occupy the Corpo de Engenheiros da Marinha, the Quadro Complementar de Oficiais da Armada and the Quadro Complementar de Oficiais Fuzileiros Navais. Officers can come from the Centro de Instrução Almirante Wandenkolk (CIAW) (Admiral Wandenkolk Instruction Center) and the Escola Naval (EN) (Naval School) and, through internal selection by the Navy, complete their degrees at the Universidade de São Paulo (USP) (University of São Paulo).
The Quadro de Oficias Engenheiros of the Brazilian Air Force is occupied by engineers professionalized by Centro de Instrução e Adaptação da Aeronáutica (CIAAR) (Air Force Instruction and Adaptation Center) and trained, or specialized, by Instituto Tecnológico de Aeronáutica (ITA) (Aeronautics Institute of Technology).
Russia
Pososhniye lyudi
Engineer Troops (Soviet Union); Assault Engineering Brigades
Russian Engineer Troops
United Kingdom
The Royal School of Military Engineering is the main training establishment for the British Army's Royal Engineers. The RSME also provides training for the Royal Navy, Royal Air Force, other Arms and Services of the British Army, Other Government Departments, and Foreign and Commonwealth countries as required. These skills provide vital components in the Army's operational capability, and Royal Engineers are currently deployed in Afghanistan, Iraq, Cyprus, Bosnia, Kosovo, Kenya, Brunei, Falklands, Belize, Germany and Northern Ireland. Royal Engineers also take part in exercises in Saudi Arabia, Kuwait, Italy, Egypt, Jordan, Canada, Poland and the United States.
United States
The prevalence of military engineering in the United States dates back to the American Revolutionary War, when engineers would carry out tasks in the U.S. Army. During the war, they would map terrain and build fortifications to protect troops from opposing forces. The first military engineering organization in the United States was the Army Corps of Engineers. Engineers were responsible for protecting military troops, whether using fortifications or designing new technology and weaponry, throughout the United States' history of warfare. The Army originally claimed engineers exclusively, but as the U.S. military branches expanded to the sea and sky, the need for military engineering units in all branches increased. As each branch of the United States military expanded, technology adapted to fit their respective needs.
United States Army Corps of Engineers
Air Force Civil Engineer Support Agency, Rapid Engineer Deployable Heavy Operational Repair Squadron Engineers (RED HORSE), and Prime Base Engineer Emergency Force (Prime BEEF)
The United States Navy Construction Battalion Corps (better known as the Seabees) and Civil Engineer Corps
United States Marine Corps Combat Engineer Battalions
Other nations
Department of the Engineer Troops of the Armed Forces of Armenia
Royal Australian Engineers and the Royal Australian Air Force Airfield Engineers
Corps of Engineers and Military Engineer Services (MES), Bangladesh Army
Canadian Military Engineers
The Danish military engineering corps is almost entirely organized into one regiment, simply named "Ingeniørregimentet" ("The Engineering Regiment").
Engineering Arm, including the Paris Fire Brigade
Indian Army Corps of Engineers
Indonesian Army Corps of Engineers
Irish Army Engineer Corps
Combat Engineering Corps of the Israel Defense Forces
Engineer Regiment (Namibia)
Corps of Royal New Zealand Engineers
("The Engineer Battalion")
Rejimen Askar Jurutera DiRaja ("Royal Engineer Regiment")
Pakistan Army Corps of Engineers and the Military Engineering Service
10th Engineer Brigade
South African Army Engineer Formation
Sri Lanka Engineers and the Engineer Services Regiment
The Le Quy Don Technical University is the main training establishment for the Vietnamese Army's Corps of Engineers
See also
Related topics
Bailey bridge
Fortification
History of warfare
Military bridges
Military engineering vehicles
Military technology and equipment
Siege engine
Notable military engineers
Mozi
Gundulf of Rochester
Henri Alexis Brialmont
John Chard
Menno van Coehoorn
Pierre Charles L'Enfant
Giovanni Fontana
Leslie Groves
Cyril Gordon Martin
Coulson Norman Mitchell
John Rosworm
Charles Pasley
Vauban
Marc René, marquis de Montalembert
Charles George Gordon
Francis Fowke
Paul R. Smith
Vitruvius
Eugénio dos Santos
Tadeusz Kościuszko.
Leonardo da Vinci
Robert E. Lee
Herman Haupt
Douglas MacArthur
George Washington
Fritz Todt
References
External links
Headquarters U.S. Army Corps of Engineers
NATO Military Engineering Centre of Excellence
Engineering
Land warfare
Engineering disciplines
Engineering occupations | Military engineering | [
"Engineering"
] | 3,498 | [
"Construction",
"Military engineering",
"nan"
] |
4,564,673 | https://en.wikipedia.org/wiki/Colloid%20mill | A colloid mill is a machine that is used to reduce the particle size of a solid in suspension in a liquid, or to reduce the droplet size in emulsions. Colloid mills work on the rotor-stator principle: a rotor turns at high speeds (2000–18000 RPM). A high level of stress is applied to the fluid, which disrupts and breaks down the structure. Colloid mills are frequently used to increase the stability of suspensions and emulsions, but can also be used to reduce the particle size of solids in suspensions. Higher shear rates lead to smaller droplets, down to approximately 1 μm, which are more resistant to emulsion separation.
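The droplet-breaking action can be pictured through the nominal shear rate in the rotor–stator gap, roughly the rotor tip speed divided by the gap width. The following illustrative sketch is not from the original article; the rotor diameter, speed, and gap values are assumptions chosen only to show typical orders of magnitude, not specifications of any particular mill.

```python
import math

def nominal_shear_rate(rotor_diameter_m, speed_rpm, gap_m):
    """Approximate shear rate (1/s) in a rotor-stator gap: tip speed / gap width."""
    tip_speed = math.pi * rotor_diameter_m * speed_rpm / 60.0  # m/s
    return tip_speed / gap_m

# Assumed example values: 0.1 m rotor, 10,000 RPM, 100 micrometre gap.
rate = nominal_shear_rate(0.1, 10_000, 100e-6)
print(f"nominal shear rate ~ {rate:.2e} 1/s")  # ~5e5 1/s, a typical colloid-mill magnitude
```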
Application suitability
Colloid mills are used in the following industries:
Pharmaceutical
Cosmetic
Paint
Soap
Textile
Paper
Food
Grease
Rotor - stator construction
A colloid mill consists of a high-speed rotor and a stator with conical milling surfaces
1 stage toothed
3 stage toothed
Execution
fixed gap
adjustable gap
References
See also
Homogenization (chemistry)
Chemical equipment | Colloid mill | [
"Chemistry",
"Engineering"
] | 211 | [
"Chemical equipment",
"nan"
] |
4,567,548 | https://en.wikipedia.org/wiki/Anderson%27s%20rule | Anderson's rule is used for the construction of energy band diagrams of the heterojunction between two semiconductor materials. Anderson's rule states that when constructing an energy band diagram, the vacuum levels of the two semiconductors on either side of the heterojunction should be aligned (at the same energy).
It is also referred to as the electron affinity rule, and is closely related to the Schottky–Mott rule for metal–semiconductor junctions.
Anderson's rule was first described by R. L. Anderson in 1960.
Constructing energy band diagrams
Once the vacuum levels are aligned it is possible to use the electron affinity and band gap values for each semiconductor to calculate the conduction band and valence band offsets. The electron affinity (usually given by the symbol in solid state physics) gives the energy difference between the lower edge of the conduction band and the vacuum level of the semiconductor. The band gap (usually given the symbol ) gives the energy difference between the lower edge of the conduction band and the upper edge of the valence band. Each semiconductor has different electron affinity and band gap values. For semiconductor alloys it may be necessary to use Vegard's law to calculate these values.
Once the relative positions of the conduction and valence bands for both semiconductors are known, Anderson's rule allows the calculation of the band offsets of both the valence band and the conduction band.
After applying Anderson's rule and discovering the bands' alignment at the junction, Poisson’s equation can then be used to calculate the shape of the band bending in the two semiconductors.
Example: straddling gap
Consider a heterojunction between semiconductor 1 and semiconductor 2. Suppose the conduction band of semiconductor 2 is closer to the vacuum level than that of semiconductor 1. The conduction band offset would then be given by the difference in electron affinity (energy from upper conducting band to vacuum level) of the two semiconductors:
Next, suppose that the band gap of semiconductor 2 is large enough that the valence band of semiconductor 1 lies at a higher energy than that of semiconductor 2. Then the valence band offset is given by:
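The offset formulas themselves did not survive extraction. In standard notation (χ for electron affinity, E_g for band gap, subscripts 1 and 2 labeling the two semiconductors; the symbols are chosen here for illustration and may differ from the article's originals), Anderson's rule for the situation described above gives

\Delta E_C = \chi_1 - \chi_2, \qquad \Delta E_V = \left(\chi_2 + E_{g,2}\right) - \left(\chi_1 + E_{g,1}\right), \qquad \Delta E_C + \Delta E_V = E_{g,2} - E_{g,1}.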
Limitations of Anderson's rule
In real semiconductor heterojunctions, Anderson's rule fails to predict actual band offsets. In Anderson's idealized model the materials are assumed to behave as they would in the limit of a large vacuum separation, yet where the vacuum separation is taken to zero. It is that assumption that involves the use of the vacuum electron affinity parameter, even in a solidly filled junction where there is no vacuum. Much like with the Schottky–Mott rule, Anderson's rule ignores the real chemical bonding effects that occur with a small or nonexistent vacuum separation: interface states which may have a very large electrical polarization and defect states, dislocations and other perturbations caused by imperfect crystal lattice matches.
To try to improve the accuracy of Anderson's rule, various models have been proposed. The common anion rule guesses that, since the valence band is related to anionic states, materials with the same anions should have very small valence band offsets.
Tersoff proposed the presence of a dipole layer due to induced gap states, by analogy to the metal-induced gap states in a metal–semiconductor junction. Practically, heuristic corrections to Anderson's rule have found success in specific systems, such as the 60:40 rule used for the GaAs/AlGaAs system.
References
Semiconductor structures
Electronic band structures
Rules | Anderson's rule | [
"Physics",
"Chemistry",
"Materials_science"
] | 735 | [
"Electron",
"Electronic band structures",
"Condensed matter physics"
] |
4,572,458 | https://en.wikipedia.org/wiki/Schlenk%20equilibrium | The Schlenk equilibrium, named after its discoverer Wilhelm Schlenk, is a chemical equilibrium taking place in solutions of Grignard reagents and Hauser bases
2 RMgX ⇌ MgX2 + MgR2
The process described is an equilibrium between two equivalents of an alkyl or aryl magnesium halide on the left of the equation and one equivalent each of the dialkyl or diaryl magnesium compound and magnesium halide salt on the right. Organomagnesium halides in solution also form dimers and higher oligomers, especially at high concentration. Alkyl magnesium chlorides in ether are present as dimers.
The position of the equilibrium is influenced by solvent, temperature, and the nature of the various substituents. It is known that the magnesium center in Grignard reagents typically coordinates two molecules of an ether such as diethyl ether or tetrahydrofuran (THF). Thus they are more precisely described by the formula RMgXL2, where L is an ether. In the presence of monoethers, the equilibrium typically favors the alkyl- or arylmagnesium halide. Addition of dioxane to such solutions, however, leads to precipitation of the coordination polymers MgX2(μ-dioxane)2, driving the equilibrium completely to the right. The dialkylmagnesium compounds are popular in the synthesis of organometallic compounds.
References
Organometallic chemistry
Organic chemistry | Schlenk equilibrium | [
"Chemistry"
] | 311 | [
"Organometallic chemistry",
"nan"
] |
2,460,228 | https://en.wikipedia.org/wiki/Born%20approximation | Generally in scattering theory and in particular in quantum mechanics, the Born approximation consists of taking the incident field in place of the total field as the driving field at each point in the scatterer. The Born approximation is named after Max Born who proposed this approximation in the early days of quantum theory development.
It is the perturbation method applied to scattering by an extended body. It is accurate if the scattered field is small compared to the incident field on the scatterer.
For example, the scattering of radio waves by a light styrofoam column can be approximated by assuming that each part of the plastic is polarized by the same electric field that would be present at that point without the column, and then calculating the scattering as a radiation integral over that polarization distribution.
Born approximation to the Lippmann–Schwinger equation
The Lippmann–Schwinger equation for the scattering state with a momentum p and out-going (+) or in-going (−) boundary conditions is
where is the free particle Green's function, is a positive infinitesimal quantity, and the interaction potential. is the corresponding free scattering solution sometimes called the incident field. The factor on the right hand side is sometimes called the driving field.
Within the Born approximation, the above equation is expressed as
which is much easier to solve since the right hand side no longer depends on the unknown state .
The obtained solution is the starting point of the Born series.
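The equations referred to above were stripped during extraction. In standard operator notation (a reconstruction, with |φ_p⟩ the free incident state, G_0 the free Green's operator, H_0 the free Hamiltonian, and V the interaction), the Lippmann–Schwinger equation reads

|\psi_{\mathbf{p}}^{(\pm)}\rangle = |\phi_{\mathbf{p}}\rangle + G_0^{(\pm)} V\, |\psi_{\mathbf{p}}^{(\pm)}\rangle, \qquad G_0^{(\pm)} = \frac{1}{E - H_0 \pm i\epsilon},

and the Born approximation replaces the unknown state on the right-hand side by the free solution:

|\psi_{\mathbf{p}}^{(\pm)}\rangle \approx |\phi_{\mathbf{p}}\rangle + G_0^{(\pm)} V\, |\phi_{\mathbf{p}}\rangle.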
Born approximation to the scattering amplitude
Using the outgoing free Green's function for a particle with mass in coordinate space,
one can extract the Born approximation to the scattering amplitude from the Born approximation to the Lippmann–Schwinger equation above,
where is the angle between the incident wavevector and the scattered wavevector , is the transferred momentum. In the centrally symmetric potential , the scattering amplitude becomes
where . In the Born approximation for a centrally symmetric field, the scattering amplitude, and thus the cross section, depends on the momentum and the scattering angle only through the combination .
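Since the displayed formulas in this section were stripped, the following restates the standard textbook expressions they most likely contained (notation: incident wavevector k, scattered wavevector k′, transferred momentum q = k′ − k with q = 2k sin(θ/2); this is a reconstruction rather than a verbatim restoration):

f_B(\mathbf{q}) = -\frac{m}{2\pi\hbar^2} \int e^{-i\mathbf{q}\cdot\mathbf{r}}\, V(\mathbf{r})\, d^3r, \qquad f_B(\theta) = -\frac{2m}{\hbar^2 q} \int_0^\infty r\, V(r)\, \sin(qr)\, dr \quad \text{(central potential)}.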
Applications
The Born approximation is used in several different physical contexts.
In neutron scattering, the first-order Born approximation is almost always adequate, except for neutron optical phenomena like internal total reflection in a neutron guide, or grazing-incidence small-angle scattering. Using the first Born approximation, it has been shown that the scattering amplitude for a scattering potential is the same as the Fourier transform of the scattering potential. Using this concept, the electronic analogue of Fourier optics has been theoretically studied in monolayer graphene. The Born approximation has also been used to calculate conductivity in bilayer graphene and to approximate the propagation of long-wavelength waves in elastic media.
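As a numerical illustration of the amplitude-as-Fourier-transform statement, the sketch below evaluates the first Born amplitude for an assumed screened-Coulomb (Yukawa) potential in units with ħ = m = 1 and compares it with the known closed form; the potential and its parameters are chosen purely for demonstration and are not taken from the article.

```python
import numpy as np
from scipy.integrate import quad

# First Born approximation for a central potential, with hbar = m = 1:
#   f_B(q) = -(2 / q) * integral_0^inf  r * V(r) * sin(q r) dr
# Assumed test potential (Yukawa): V(r) = -V0 * exp(-mu * r) / r,
# whose Born amplitude is analytic: f_B(q) = 2 * V0 / (mu**2 + q**2).

V0, mu = 1.0, 0.5   # assumed strength and screening constant

def born_amplitude(q):
    # r * V(r) simplifies to -V0 * exp(-mu * r) for the Yukawa form.
    integrand = lambda r: -V0 * np.exp(-mu * r) * np.sin(q * r)
    val, _ = quad(integrand, 0.0, 50.0 / mu)  # integrand is negligible beyond ~50/mu
    return -(2.0 / q) * val

q = 1.2
print(born_amplitude(q), 2.0 * V0 / (mu**2 + q**2))  # the two values should agree closely
```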
The same ideas have also been applied to studying the movements of seismic waves through the Earth.
Distorted-wave Born approximation
The Born approximation is simplest when the incident waves are plane waves. That is, the scatterer is treated as a perturbation to free space or to a homogeneous medium.
In the distorted-wave Born approximation (DWBA), the incident waves are solutions to a part of the problem that is treated by some other method, either analytical or numerical. The interaction of interest is treated as a perturbation to some system that can be solved by some other method. For nuclear reactions, numerical optical model waves are used. For scattering of charged particles by charged particles, analytic solutions for coulomb scattering are used. This gives the non-Born preliminary equation
and the Born approximation
Other applications include bremsstrahlung and the photoelectric effect. For a charged-particle-induced direct nuclear reaction, the procedure is used twice. There are similar methods that do not use the Born approximations. In condensed-matter research, DWBA is used to analyze grazing-incidence small-angle scattering.
See also
Born series
Lippmann–Schwinger equation
Dyson series
Electromagnetic modeling
Rayleigh–Gans approximation
References
Wu and Ohmura, Quantum Theory of Scattering, Prentice Hall, 1962
Scattering theory
Max Born | Born approximation | [
"Chemistry"
] | 813 | [
"Scattering",
"Scattering theory"
] |
2,460,242 | https://en.wikipedia.org/wiki/Physical%20optics | In physics, physical optics, or wave optics, is the branch of optics that studies interference, diffraction, polarization, and other phenomena for which the ray approximation of geometric optics is not valid. This usage tends not to include effects such as quantum noise in optical communication, which is studied in the sub-branch of coherence theory.
Principle
Physical optics is also the name of an approximation commonly used in optics, electrical engineering and applied physics. In this context, it is an intermediate method between geometric optics, which ignores wave effects, and full wave electromagnetism, which is a precise theory. The word "physical" means that it is more physical than geometric or ray optics and not that it is an exact physical theory.
This approximation consists of using ray optics to estimate the field on a surface and then integrating that field over the surface to calculate the transmitted or scattered field. This resembles the Born approximation, in that the details of the problem are treated as a perturbation.
In optics, it is a standard way of estimating diffraction effects. In radio, this approximation is used to estimate some effects that resemble optical effects. It models several interference, diffraction and polarization effects but not the dependence of diffraction on polarization. Since this is a high-frequency approximation, it is often more accurate in optics than for radio.
In optics, it typically consists of integrating ray-estimated field over a lens, mirror or aperture to calculate the transmitted or scattered field.
In radar scattering it usually means taking the current that would be found on a tangent plane of similar material as the current at each point on the front, i.e., the geometrically illuminated part, of a scatterer. Current on the shadowed parts is taken as zero. The approximate scattered field is then obtained by an integral over these approximate currents. This is useful for bodies with large smooth convex shapes and for lossy (low-reflection) surfaces.
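For the common special case of a perfectly conducting scatterer, the tangent-plane prescription described above reduces to the familiar physical-optics surface current, stated here in standard notation as a general reminder rather than a quotation from this article:

\mathbf{J}_s(\mathbf{r}) \approx \begin{cases} 2\,\hat{\mathbf{n}}(\mathbf{r}) \times \mathbf{H}_{\mathrm{inc}}(\mathbf{r}), & \mathbf{r}\ \text{on the illuminated surface},\\ \mathbf{0}, & \mathbf{r}\ \text{in shadow}, \end{cases}

with the scattered field then obtained from the radiation integral over \mathbf{J}_s.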
The ray-optics field or current is generally not accurate near edges or shadow boundaries, unless supplemented by diffraction and creeping wave calculations.
The standard theory of physical optics has some defects in the evaluation of scattered fields, leading to decreased accuracy away from the specular direction. An improved theory introduced in 2004 gives exact solutions to problems involving wave diffraction by conducting scatterers.
See also
Optical physics
Electromagnetic modeling
Fourier optics
History of optics
Negative-index metamaterials
References
External links
Optics
Electrical engineering | Physical optics | [
"Physics",
"Chemistry",
"Engineering"
] | 498 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
"Electrical engineering",
" and optical physics"
] |
2,460,410 | https://en.wikipedia.org/wiki/High-frequency%20approximation | A high-frequency approximation (or "high energy approximation") for scattering or other wave propagation problems, in physics or engineering, is an approximation whose accuracy increases with the size of features on the scatterer or medium relative to the wavelength of the scattered particles.
Classical mechanics and geometric optics are the most common and extreme high-frequency approximations, in which the wave or field properties of, respectively, quantum mechanics and electromagnetism are neglected entirely.
Less extreme approximations include the WKB approximation, physical optics, the geometric theory of diffraction, the uniform theory of diffraction, and the physical theory of diffraction. When these are used to approximate quantum mechanics, they are called semiclassical approximations.
See also
Electromagnetic modeling
Scattering
Scattering, absorption and radiative transfer (optics) | High-frequency approximation | [
"Physics",
"Chemistry",
"Materials_science"
] | 165 | [
" absorption and radiative transfer (optics)",
"Scattering stubs",
"Scattering",
"Condensed matter physics",
"Particle physics",
"Nuclear physics"
] |
2,461,914 | https://en.wikipedia.org/wiki/Fundamental%20lemma%20of%20the%20calculus%20of%20variations | In mathematics, specifically in the calculus of variations, a variation of a function can be concentrated on an arbitrarily small interval, but not a single point.
Accordingly, the necessary condition of extremum (functional derivative equal zero) appears in a weak formulation (variational form) integrated with an arbitrary function . The fundamental lemma of the calculus of variations is typically used to transform this weak formulation into the strong formulation (differential equation), free of the integration with arbitrary function. The proof usually exploits the possibility to choose concentrated on an interval on which keeps sign (positive or negative). Several versions of the lemma are in use. Basic versions are easy to formulate and prove. More powerful versions are used when needed.
Basic version
If a continuous function on an open interval satisfies the equality
for all compactly supported smooth functions on , then is identically zero.
Here "smooth" may be interpreted as "infinitely differentiable", but often is interpreted as "twice continuously differentiable" or "continuously differentiable" or even just "continuous", since these weaker statements may be strong enough for a given task. "Compactly supported" means "vanishes outside for some , such that "; but often a weaker statement suffices, assuming only that (or and a number of its derivatives) vanishes at the endpoints , ; in this case the closed interval is used.
Proof
Suppose for some . Since is continuous, it is nonzero with the same sign for some such that . Without loss of generality, assume . Then take an that is positive on and zero elsewhere, for example
.
Note this bump function satisfies the properties in the statement, including . Since
we reach a contradiction.
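Because the displayed formulas above were lost, here is one standard reconstruction of the statement and of a suitable bump function (the particular bump is illustrative; other choices work equally well): if f is continuous on (a,b) and \int_a^b f(x)\,h(x)\,dx = 0 for every compactly supported smooth h, and f > 0 on some subinterval (c,d) with a < c < d < b, take

h(x) = \begin{cases} \exp\!\left(-\dfrac{1}{(x-c)(d-x)}\right), & c < x < d,\\ 0, & \text{otherwise}, \end{cases}

which is smooth, compactly supported in (a,b), and makes \int_a^b f\,h\,dx > 0, the desired contradiction.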
Version for two given functions
If a pair of continuous functions f, g on an interval (a,b) satisfies the equality
for all compactly supported smooth functions h on (a,b), then g is differentiable, and g′ = f everywhere.
The special case for g = 0 is just the basic version.
Here is the special case for f = 0 (often sufficient).
If a continuous function g on an interval (a,b) satisfies the equality
for all smooth functions h on (a,b) such that , then g is constant.
If, in addition, continuous differentiability of g is assumed, then integration by parts reduces both statements to the basic version; this case is attributed to Joseph-Louis Lagrange, while the proof of differentiability of g is due to Paul du Bois-Reymond.
Versions for discontinuous functions
The given functions (f, g) may be discontinuous, provided that they are locally integrable (on the given interval). In this case, Lebesgue integration is meant, the conclusions hold almost everywhere (thus, in all continuity points), and differentiability of g is interpreted as local absolute continuity (rather than continuous differentiability). Sometimes the given functions are assumed to be piecewise continuous, in which case Riemann integration suffices, and the conclusions are stated everywhere except the finite set of discontinuity points.
Higher derivatives
If a tuple of continuous functions on an interval (a,b) satisfies the equality
for all compactly supported smooth functions h on (a,b), then there exist continuously differentiable functions on (a,b) such that
everywhere.
This necessary condition is also sufficient, since the integrand becomes
The case n = 1 is just the version for two given functions, since and thus,
In contrast, the case n=2 does not lead to the relation since the function need not be differentiable twice. The sufficient condition is not necessary. Rather, the necessary and sufficient condition may be written as for n=2, for n=3, and so on; in general, the brackets cannot be opened because of non-differentiability.
Vector-valued functions
Generalization to vector-valued functions is straightforward; one applies the results for scalar functions to each coordinate separately, or treats the vector-valued case from the beginning.
Multivariable functions
If a continuous multivariable function f on an open set satisfies the equality
for all compactly supported smooth functions h on Ω, then f is identically zero.
Similarly to the basic version, one may consider a continuous function f on the closure of Ω, assuming that h vanishes on the boundary of Ω (rather than compactly supported).
Here is a version for discontinuous multivariable functions.
Let be an open set, and satisfy the equality
for all compactly supported smooth functions h on Ω. Then f = 0 (in L2, that is, almost everywhere).
Applications
This lemma is used to prove that extrema of the functional
are weak solutions (for an appropriate vector space ) of the Euler–Lagrange equation
The Euler–Lagrange equation plays a prominent role in classical mechanics and differential geometry.
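For concreteness, the functional and equation referred to here are, in the usual one-dimensional notation (a standard restatement, supplied because the displayed formulas were stripped),

J[y] = \int_a^b L\bigl(x, y(x), y'(x)\bigr)\,dx, \qquad \frac{\partial L}{\partial y} - \frac{d}{dx}\,\frac{\partial L}{\partial y'} = 0.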
Notes
References
Classical mechanics
Calculus of variations
Smooth functions
Lemmas in analysis | Fundamental lemma of the calculus of variations | [
"Physics",
"Mathematics"
] | 1,054 | [
"Theorems in mathematical analysis",
"Classical mechanics",
"Mechanics",
"Lemmas in mathematical analysis",
"Lemmas"
] |
2,462,837 | https://en.wikipedia.org/wiki/Generalized%20Appell%20polynomials | In mathematics, a polynomial sequence has a generalized Appell representation if the generating function for the polynomials takes on a certain form:
where the generating function or kernel is composed of the series
with
and
and all
and
with
Given the above, it is not hard to show that is a polynomial of degree .
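The defining generating function did not survive extraction. In the notation of Boas and Buck, which this article appears to follow, the generalized Appell representation reads (offered as a standard restatement rather than a verbatim restoration)

K(z,w) = A(w)\,\Psi\bigl(z\,g(w)\bigr) = \sum_{n=0}^{\infty} p_n(z)\,w^n,

with A(w) = \sum_{n\ge 0} a_n w^n (a_0 \ne 0), \Psi(t) = \sum_{n\ge 0} \Psi_n t^n (all \Psi_n \ne 0), and g(w) = \sum_{n\ge 1} g_n w^n (g_1 \ne 0); the special cases listed below then correspond to g(w) = w (Brenke), \Psi(t) = e^t (Sheffer), and both together (Appell).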
Boas–Buck polynomials are a slightly more general class of polynomials.
Special cases
The choice of gives the class of Brenke polynomials.
The choice of results in the Sheffer sequence of polynomials, which include the general difference polynomials, such as the Newton polynomials.
The combined choice of and gives the Appell sequence of polynomials.
Explicit representation
The generalized Appell polynomials have the explicit representation
The constant is
where this sum extends over all compositions of into parts; that is, the sum extends over all such that
For the Appell polynomials, this becomes the formula
Recursion relation
Equivalently, a necessary and sufficient condition that the kernel can be written as with is that
where and have the power series
and
Substituting
immediately gives the recursion relation
For the special case of the Brenke polynomials, one has and thus all of the , simplifying the recursion relation significantly.
See also
q-difference polynomials
References
Ralph P. Boas, Jr. and R. Creighton Buck, Polynomial Expansions of Analytic Functions (Second Printing Corrected), (1964) Academic Press Inc., Publishers New York, Springer-Verlag, Berlin. Library of Congress Card Number 63-23263.
Polynomials | Generalized Appell polynomials | [
"Mathematics"
] | 304 | [
"Polynomials",
"Algebra"
] |
2,463,065 | https://en.wikipedia.org/wiki/Porkchop%20plot | In orbital mechanics, a porkchop plot (also pork-chop plot) is a chart that shows level curves of equal characteristic energy (C3) against combinations of launch date and arrival date for a particular interplanetary flight. The chart shows the characteristic energy ranges in zones around the local minima, which resembles the shape of a porkchop slice.
By examining the results of the porkchop plot, engineers can determine when a launch opportunity exists (a 'launch window') that is compatible with the capabilities of a particular spacecraft. A given contour, called a porkchop curve, represents constant C3, and the center of the porkchop marks the optimal minimum C3. The orbital elements of the solution, where the fixed values are the departure date, the arrival date, and the length of the flight, were first solved mathematically in 1761 by Johann Heinrich Lambert, and the equation is generally known as Lambert's problem (or theorem).
Math
The general form of characteristic energy can be computed as:
where is the orbital velocity when the orbital distance tends to infinity. Note that, since the kinetic energy is , C3 is in fact equal to twice the magnitude of the specific orbital energy, , of the escaping object.
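The stripped expression is, in standard notation, simply

C_3 = v_\infty^{2} = v^{2} - \frac{2\mu}{r} = 2\,\varepsilon,

where v_\infty is the hyperbolic excess speed, v and r are the speed and radial distance at any point on the escape trajectory, \mu is the gravitational parameter of the body being escaped, and \varepsilon is the (positive) specific orbital energy; these symbols are supplied here for illustration and may differ from those the article originally used.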
Use
For the Voyager program, engineers at JPL plotted around 10,000 potential trajectories using porkchop plots, from which they selected around 100 that were optimal for the mission objectives. The plots allowed them to reduce or eliminate planetary encounters taking place over the Thanksgiving or Christmas holidays, and to plan the completion of the mission's primary goals before the end of the fiscal year 1981.
See also
Orbit
Parabolic trajectory
Hyperbolic trajectory
References
External links
JPL Introduction to Porkchop plots
Pork-chop plot online computation tool
Plots (graphics)
Astrodynamics | Porkchop plot | [
"Engineering"
] | 369 | [
"Astrodynamics",
"Aerospace engineering"
] |
2,465,250 | https://en.wikipedia.org/wiki/Thermal%20energy%20storage | Thermal energy storage (TES) is the storage of thermal energy for later reuse. Employing widely different technologies, it allows surplus thermal energy to be stored for hours, days, or months. The scale of both storage and use varies from small to large – from individual processes to district, town, or region. Usage examples are the balancing of energy demand between daytime and nighttime, storing summer heat for winter heating, or winter cold for summer cooling (Seasonal thermal energy storage). Storage media include water or ice-slush tanks, masses of native earth or bedrock accessed with heat exchangers by means of boreholes, deep aquifers contained between impermeable strata; shallow, lined pits filled with gravel and water and insulated at the top, as well as eutectic solutions and phase-change materials.
Other sources of thermal energy for storage include heat or cold produced with heat pumps from off-peak, lower cost electric power, a practice called peak shaving; heat from combined heat and power (CHP) power plants; heat produced by renewable electrical energy that exceeds grid demand and waste heat from industrial processes. Heat storage, both seasonal and short term, is considered an important means for cheaply balancing high shares of variable renewable electricity production and integration of electricity and heating sectors in energy systems almost or completely fed by renewable energy.
Categories
The different kinds of thermal energy storage can be divided into three separate categories: sensible heat, latent heat, and thermo-chemical heat storage. Each of these has different advantages and disadvantages that determine their applications.
Sensible heat storage
Sensible heat storage (SHS) is the most straightforward method. It simply means the temperature of some medium is either increased or decreased. This type of storage is the most commercially available out of the three; other techniques are less developed.
The materials are generally inexpensive and safe. One of the cheapest, most commonly used options is a water tank, but materials such as molten salts or metals can be heated to higher temperatures and therefore offer a higher storage capacity. Energy can also be stored underground (UTES), either in an underground tank or in some kind of heat-transfer fluid (HTF) flowing through a system of pipes, either placed vertically in U-shapes (boreholes) or horizontally in trenches. Yet another system is known as a packed-bed (or pebble-bed) storage unit, in which some fluid, usually air, flows through a bed of loosely packed material (usually rock, pebbles or ceramic brick) to add or extract heat.
A disadvantage of SHS is its dependence on the properties of the storage medium. Storage capacities are limited by the specific heat capacity of the storage material, and the system needs to be properly designed to ensure energy extraction at a constant temperature.
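A sensible-heat store's capacity follows directly from Q = m·c·ΔT. The sketch below, with a tank size and temperature swing chosen only for illustration (they are assumptions, not values from the article), shows the kind of back-of-the-envelope sizing this implies for a domestic hot-water tank.

```python
def sensible_heat_kwh(volume_m3, density_kg_m3, cp_j_per_kg_k, delta_t_k):
    """Stored thermal energy Q = m * c * dT, converted from joules to kWh."""
    mass = volume_m3 * density_kg_m3
    q_joules = mass * cp_j_per_kg_k * delta_t_k
    return q_joules / 3.6e6

# Assumed example: a 1 m^3 water tank cycled over a 40 K temperature swing.
print(f"{sensible_heat_kwh(1.0, 1000.0, 4186.0, 40.0):.1f} kWh")  # roughly 46.5 kWh
```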
Molten salt technology
The sensible heat of molten salt is also used for storing solar energy at a high temperature, termed molten-salt technology or molten salt energy storage (MSES). Molten salts can be employed as a thermal energy storage method to retain thermal energy. Presently, this is a commercially used technology to store the heat collected by concentrated solar power (e.g., from a solar tower or solar trough). The heat can later be converted into superheated steam to power conventional steam turbines and generate electricity at a later time. It was demonstrated in the Solar Two project from 1995 to 1999. Estimates in 2006 predicted an annual efficiency of 99%, a reference to the energy retained by storing heat before turning it into electricity, versus converting heat directly into electricity. Various eutectic mixtures of different salts are used (e.g., sodium nitrate, potassium nitrate and calcium nitrate). Experience with such systems exists in non-solar applications in the chemical and metals industries as a heat-transport fluid.
The salt melts at . It is kept liquid at in an insulated "cold" storage tank. The liquid salt is pumped through panels in a solar collector where the focused sun heats it to . It is then sent to a hot storage tank. With proper insulation of the tank the thermal energy can be usefully stored for up to a week. When electricity is needed, the hot molten salt is pumped to a conventional steam generator to produce superheated steam for driving a conventional turbine/generator set as used in any coal, oil, or nuclear power plant. A 100-megawatt turbine would need a tank of about tall and in diameter to drive it for four hours by this design. A single tank with a divider plate to separate cold and hot molten salt is under development; because molten-salt storage tanks are costly owing to their complicated construction, a single tank achieving 100% more heat storage per unit volume than the dual-tank system would be more economical. Phase-change materials (PCMs) are also used in molten-salt energy storage, and research on obtaining shape-stabilized PCMs using high-porosity matrices is ongoing.
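The relation between tank size and run time quoted above follows from the same sensible-heat arithmetic. The sketch below estimates it for a cylindrical tank; the salt properties, the 40% heat-to-electricity efficiency and the tank dimensions are all illustrative assumptions, not figures from the original design.

```python
import math

# Illustrative estimate of electricity deliverable from a molten-salt tank.
# All values below are assumptions for the sketch, not data from a specific plant.
density_kg_m3 = 1800            # assumed molten "solar salt" density
cp_kj_kg_k = 1.5                # assumed specific heat
delta_t_k = 275                 # assumed hot/cold tank temperature difference
eta_thermal_to_electric = 0.40  # assumed steam-cycle efficiency

def tank_hours_at(power_mw, height_m, diameter_m):
    """Hours a cylindrical salt tank could drive a turbine of power_mw."""
    volume_m3 = math.pi * (diameter_m / 2) ** 2 * height_m
    heat_mj = volume_m3 * density_kg_m3 * cp_kj_kg_k * delta_t_k / 1000.0
    electric_mwh = heat_mj * eta_thermal_to_electric / 3600.0
    return electric_mwh / power_mw

# A tank roughly 9 m tall and 24 m in diameter (assumed dimensions)
print(f"{tank_hours_at(100, 9, 24):.1f} hours at 100 MW")   # ~3.4 hours
```

Under these assumptions a tank of that size runs a 100 MW turbine for roughly three to four hours, consistent with the design figure in the text.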
Most solar thermal power plants use this thermal energy storage concept. The Solana Generating Station in the U.S. can store 6 hours' worth of generating capacity in molten salt. During the summer of 2013 the Gemasolar Thermosolar power-tower/molten-salt plant in Spain achieved a first by continuously producing electricity 24 hours per day for 36 days. The Cerro Dominador Solar Thermal Plant, inaugurated in June 2021, has 17.5 hours of heat storage.
Heat storage in tanks, ponds or rock caverns
A steam accumulator consists of an insulated steel pressure tank containing hot water and steam under pressure. As a heat storage device, it is used to buffer heat production from a variable or steady source against a variable demand for heat. Steam accumulators may become significant for energy storage in solar thermal energy projects.
Large stores, mostly hot water storage tanks, are widely used in Nordic countries to store heat for several days, to decouple heat and power production and to help meet peak demands. Some towns use insulated ponds heated by solar power as a heat source for district heating pumps. Interseasonal storage in caverns has been investigated, appears to be economical, and plays a significant role in heating in Finland. Energy producer Helen Oy estimates an 11.6 GWh capacity and 120 MW thermal output for its water cistern under Mustikkamaa (fully charged or discharged in 4 days at capacity), operating from 2021 to offset days of peak production/demand, while the rock caverns under sea level in Kruunuvuorenranta (near Laajasalo) were designated in 2018 to store heat in summer from warm seawater and release it in winter for district heating. In 2024, it was announced that the municipal energy supplier of Vantaa had commissioned the construction of an underground heat storage facility of over in size and 90 GWh in capacity, expected to be operational in 2028.
Hot silicon technology
Solid or molten silicon offers much higher storage temperatures than salts, with consequently greater capacity and efficiency. It is being researched as a potentially more energy-efficient storage technology. Silicon is able to store more than 1 MWh of energy per cubic meter at 1400 °C. An additional advantage is the relative abundance of silicon compared to the salts used for the same purpose.
Molten aluminum
Another medium that can store thermal energy is molten (recycled) aluminum. This technology was developed by the Swedish company Azelio. The material is heated to 600 °C. When needed, the energy is transported to a Stirling engine using a heat-transfer fluid.
Heat storage using oils
Using oils as sensible heat storage materials is an effective approach for storing thermal energy, particularly in medium- to high-temperature applications. Different types of oils are used based on the temperature range and the specific requirements of the thermal energy storage system: mineral oils, synthetic oils and, more recently, vegetable oils, which are gaining interest because they are renewable and biodegradable. Numerous criteria are used to select an oil for a particular application: high energy storage capacity and specific heat capacity, high thermal conductivity, high chemical and physical stability, low coefficient of expansion, low cost, availability, low corrosivity and compatibility with containment materials, limited environmental issues, etc. Regarding the selection of a low-cost or cost-effective thermal oil, it is important to consider not only the acquisition or purchase cost, but also the operating and replacement costs, or even final disposal costs. An oil that is initially more expensive may prove to be more cost-effective in the long run if it offers higher thermal stability, thereby reducing the frequency of replacement.
Heat storage in hot rocks or concrete
Water has one of the highest thermal capacities at 4.2 kJ/(kg⋅K), whereas concrete has about one third of that. On the other hand, concrete can be heated to much higher temperatures (1200 °C), for example by electrical heating, and therefore has a much higher overall volumetric capacity. Thus, in the example below, an insulated cube of about would appear to provide sufficient storage for a single house to meet 50% of heating demand. This could, in principle, be used to store surplus wind or solar heat, since electrical heating can reach such high temperatures. At the neighborhood level, the Wiggenhausen-Süd solar development at Friedrichshafen in southern Germany has received international attention. This features a () reinforced concrete thermal store linked to () of solar collectors, which will supply the 570 houses with around 50% of their heating and hot water. Siemens Gamesa built a 130 MWh thermal storage near Hamburg with 750 °C in basalt and 1.5 MW electric output. A similar system is scheduled for Sorø, Denmark, with 41–58% of the stored 18 MWh heat returned for the town's district heating, and 30–41% returned as electricity.
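The trade-off described here, higher specific heat for water versus a much larger usable temperature range for concrete, can be made tangible with rough volumetric numbers. In the sketch below, the densities, heat capacities and temperature ranges are illustrative assumptions, not measured values for any particular store.

```python
# Rough volumetric comparison of water and concrete as sensible heat stores.
# Properties and temperature ranges are illustrative assumptions.

materials = {
    #  name        density kg/m3  cp kJ/(kg*K)  usable delta-T (K)
    "water":      (1000,          4.2,          60),    # e.g. 30 -> 90 degC
    "concrete":   (2200,          0.88,         1000),  # e.g. 200 -> 1200 degC
}

for name, (rho, cp, dt) in materials.items():
    kwh_per_m3 = rho * cp * dt / 3600.0
    print(f"{name:9s}: {kwh_per_m3:6.0f} kWh per cubic metre")

# Under these assumptions water stores roughly 70 kWh/m3 and concrete several
# hundred kWh/m3, which is why a modest, well-insulated concrete cube can cover
# a large share of a house's heating demand.
```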
“Brick toaster” is an innovative heat reservoir announced in August 2022, operating at up to 1,500 °C (2,732 °F), which its maker, Titan Cement/Rondo, claims should be able to cut global CO2 emissions by 15% over 15 years.
Latent heat storage
Because latent heat storage (LHS) is associated with a phase transition, the general term for the associated media is phase-change material (PCM). During these transitions, heat can be added or extracted without changing the material's temperature, giving it an advantage over SHS technologies. Storage capacities are often higher as well.
There are a multitude of PCMs available, including but not limited to salts, polymers, gels, paraffin waxes, metal alloys and semiconductor-metal alloys, each with different properties. This allows for a more target-oriented system design. As the process is isothermal at the PCM's melting point, the material can be picked to have the desired temperature range. Desirable qualities include high latent heat and thermal conductivity. Furthermore, the storage unit can be more compact if volume changes during the phase transition are small.
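To see why crossing a melting point raises storage density, compare a PCM cycled through its phase transition with the same mass used purely as sensible storage over the same narrow temperature band. The paraffin-like property values in the sketch below are rounded assumptions, not data for a specific product.

```python
# Illustrative comparison of latent vs. purely sensible storage for 100 kg
# of a paraffin-like PCM. Property values are rounded assumptions.

mass_kg = 100
cp_solid = 2.0        # kJ/(kg*K), assumed
cp_liquid = 2.2       # kJ/(kg*K), assumed
latent_heat = 200.0   # kJ/kg, assumed heat of fusion
dt_below = 10         # K of sensible heating below the melting point
dt_above = 10         # K of sensible heating above the melting point

sensible_only = mass_kg * cp_solid * (dt_below + dt_above)
with_phase_change = (mass_kg * cp_solid * dt_below
                     + mass_kg * latent_heat
                     + mass_kg * cp_liquid * dt_above)

print(f"Sensible only (20 K swing): {sensible_only / 3600:.1f} kWh")   # ~1.1 kWh
print(f"Across the melting point:   {with_phase_change / 3600:.1f} kWh")  # ~6.7 kWh
```

With these assumed figures the phase change stores roughly six times more energy over the same 20 K band, which is the compactness advantage the text describes.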
PCMs are further subdivided into organic, inorganic and eutectic materials. Compared to organic PCMs, inorganic materials are less flammable, cheaper and more widely available. They also have higher storage capacity and thermal conductivity. Organic PCMs, on the other hand, are less corrosive and not as prone to phase-separation. Eutectic materials, as they are mixtures, are more easily adjusted to obtain specific properties, but have low latent and specific heat capacities.
Another important factor in LHS is the encapsulation of the PCM. Some materials are more prone to erosion and leakage than others. The system must be carefully designed in order to avoid unnecessary loss of heat.
Miscibility gap alloy technology
Miscibility gap alloys rely on the phase change of a metallic material (see: latent heat) to store thermal energy.
Rather than pumping the liquid metal between tanks as in a molten-salt system, the metal is encapsulated in another metallic material that it cannot alloy with (immiscible). Depending on the two materials selected (the phase changing material and the encapsulating material) storage densities can be between 0.2 and 2 MJ/L.
A working fluid, typically water or steam, is used to transfer the heat into and out of the system. Thermal conductivity of miscibility gap alloys is often higher (up to 400 W/(m⋅K)) than competing technologies which means quicker "charge" and "discharge" of the thermal storage is possible. The technology has not yet been implemented on a large scale.
Ice-based technology
Several applications are being developed where ice is produced during off-peak periods and used for cooling at a later time. For example, air conditioning can be provided more economically by using low-cost electricity at night to freeze water into ice, then using the cooling capacity of ice in the afternoon to reduce the electricity needed to handle air conditioning demands. Thermal energy storage using ice makes use of the large heat of fusion of water. Historically, ice was transported from mountains to cities for use as a coolant. One metric ton of water (= one cubic meter) can store 334 million joules (MJ) or 317,000 BTUs (93 kWh). A relatively small storage facility can hold enough ice to cool a large building for a day or a week.
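The 334 MJ per tonne figure translates directly into sizing. The sketch below estimates the ice needed to carry an afternoon cooling load; the 500 kW load and 6-hour duration are assumed example values, not data for a real building.

```python
# How much ice is needed to cover an afternoon cooling load?
# The cooling load and duration below are assumed example values.

HEAT_OF_FUSION_MJ_PER_TONNE = 334.0   # latent heat of melting ice

def ice_tonnes_for_load(cooling_kw, hours):
    """Tonnes of ice that must melt to absorb a given cooling load."""
    energy_mj = cooling_kw * hours * 3.6   # 1 kWh = 3.6 MJ
    return energy_mj / HEAT_OF_FUSION_MJ_PER_TONNE

# A 500 kW cooling load carried for 6 afternoon hours
tonnes = ice_tonnes_for_load(500, 6)
print(f"{tonnes:.0f} tonnes of ice (roughly the same number of cubic metres)")
```

About 30 tonnes of ice suffices in this example, which is why a relatively small storage tank can cool a large building through a peak afternoon.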
In addition to using ice in direct cooling applications, it is also being used in heat pump-based heating systems. In these applications, the phase-change energy provides a very significant reserve of thermal capacity near the bottom of the temperature range in which water-source heat pumps can operate. This allows the system to ride out the heaviest heating load conditions and extends the timeframe over which the source energy elements can contribute heat back into the system.
Cryogenic energy storage
Cryogenic energy storage uses liquefaction of air or nitrogen as an energy store.
A pilot cryogenic energy system that uses liquid air as the energy store, and low-grade waste heat to drive the thermal re-expansion of the air, operated at a power station in Slough, UK in 2010.
Thermo-chemical heat storage
Thermo-chemical heat storage (TCS) involves some kind of reversible exothermic/endothermic chemical reaction with thermo-chemical materials (TCM). Depending on the reactants, this method can allow for an even higher storage capacity than LHS.
In one type of TCS, heat is applied to decompose certain molecules. The reaction products are then separated, and mixed again when required, resulting in a release of energy. Some examples are the decomposition of potassium oxide (over a range of 300–800 °C, with a heat of decomposition of 2.1 MJ/kg), lead oxide (300–350 °C, 0.26 MJ/kg) and calcium hydroxide (above 450 °C, where the reaction rates can be increased by adding zinc or aluminum). The photochemical decomposition of nitrosyl chloride can also be used and, since it needs photons to occur, works especially well when paired with solar energy.
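The heats of decomposition quoted above translate directly into the mass of reactant a store needs for a given duty, as the sketch below shows; the 10 kWh heat demand is an assumed example value.

```python
# Mass of thermochemical material needed to deliver a given amount of heat,
# using the heats of decomposition quoted in the text.
# The 10 kWh demand is an assumed example value.

heats_mj_per_kg = {
    "potassium oxide": 2.1,
    "lead oxide": 0.26,
}

demand_kwh = 10
demand_mj = demand_kwh * 3.6   # 1 kWh = 3.6 MJ

for material, h in heats_mj_per_kg.items():
    print(f"{material:16s}: {demand_mj / h:6.1f} kg for {demand_kwh} kWh of heat")
# potassium oxide : ~17 kg; lead oxide: ~138 kg
```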
Adsorption (or Sorption) solar heating and storage
Adsorption (or sorption) processes also fall into this category. They can be used not only to store thermal energy but also to control air humidity. Zeolites (microporous crystalline aluminosilicates) and silica gels are well suited for this purpose. In hot, humid environments, this technology is often used in combination with lithium chloride to cool water.
The low cost ($200/ton) and high cycle rate (2,000×) of synthetic zeolites such as Linde 13X with water adsorbate has garnered much academic and commercial interest recently for use for thermal energy storage (TES), specifically of low-grade solar and waste heat. Several pilot projects have been funded in the EU from 2000 to the present (2020). The basic concept is to store solar thermal energy as chemical latent energy in the zeolite. Typically, hot dry air from flat plate solar collectors is made to flow through a bed of zeolite such that any water adsorbate present is driven off. Storage can be diurnal, weekly, monthly, or even seasonal depending on the volume of the zeolite and the area of the solar thermal panels. When heat is called for during the night, or sunless hours, or winter, humidified air flows through the zeolite. As the humidity is adsorbed by the zeolite, heat is released to the air and subsequently to the building space. This form of TES, with specific use of zeolites, was first taught by Guerra in 1978. Advantages over molten salts and other high temperature TES include that (1) the temperature required is only the stagnation temperature typical of a solar flat plate thermal collector, and (2) as long as the zeolite is kept dry, the energy is stored indefinitely. Because of the low temperature, and because the energy is stored as latent heat of adsorption, thus eliminating the insulation requirements of a molten salt storage system, costs are significantly lower.
Salt hydrate technology
One example of an experimental storage system based on chemical reaction energy is the salt hydrate technology. The system uses the reaction energy created when salts are hydrated or dehydrated. It works by storing heat in a container holding a 50% sodium hydroxide (NaOH) solution. Heat (e.g. from a solar collector) is stored by evaporating the water in an endothermic reaction. When water is added again, heat is released in an exothermic reaction at 50 °C (122 °F). Current systems operate at 60% efficiency. The system is especially advantageous for seasonal thermal energy storage, because the dried salt can be stored at room temperature for prolonged times without energy loss. The containers with the dehydrated salt can even be transported to a different location. The system has a higher energy density than heat stored in water, and the capacity of the system can be designed to store energy from a few months to years.
In 2013 the Dutch technology developer TNO presented the results of the MERITS project to store heat in a salt container. The heat, which can be derived from a solar collector on a rooftop, expels the water contained in the salt. When the water is added again, the heat is released with almost no energy losses. A container with a few cubic meters of salt could store enough of this thermochemical energy to heat a house throughout the winter. In a temperate climate like that of the Netherlands, an average low-energy household requires about 6.7 GJ per winter. To store this energy in water (at a temperature difference of 70 °C), 23 m3 of insulated water storage would be needed, exceeding the storage abilities of most households. Using salt hydrate technology with a storage density of about 1 GJ/m3, 4–8 m3 could be sufficient.
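The volume comparison in this paragraph can be reproduced from the figures it quotes; the only value supplied from outside the text is the standard volumetric heat capacity of water, about 4.18 MJ/(m3·K).

```python
# Reproduce the storage-volume comparison for a 6.7 GJ winter heat demand.
# 4.18 MJ/(m3*K) is the standard volumetric heat capacity of water.

demand_gj = 6.7
water_vhc_gj_per_m3_k = 4.18e-3   # GJ per m3 per kelvin
delta_t_k = 70                    # usable temperature swing from the text
salt_hydrate_gj_per_m3 = 1.0      # storage density quoted in the text

water_volume = demand_gj / (water_vhc_gj_per_m3_k * delta_t_k)
salt_volume = demand_gj / salt_hydrate_gj_per_m3

print(f"Water store:        {water_volume:.0f} m3")   # ~23 m3
print(f"Salt-hydrate store: {salt_volume:.1f} m3")    # ~6.7 m3
```

The result, roughly 23 m3 of hot water versus well under 10 m3 of salt hydrate, matches the figures given in the paragraph.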
As of 2016, researchers in several countries are conducting experiments to determine the best type of salt, or salt mixture. Low pressure within the container seems favorable for the energy transport. Especially promising are organic salts, so-called ionic liquids. Compared to lithium halide-based sorbents they are less problematic in terms of limited global resources, and compared to most other halides and sodium hydroxide (NaOH) they are less corrosive and not negatively affected by CO2 contamination.
However, a recent meta-analysis on studies of thermochemical heat storage suggests that salt hydrates offer very low potential for thermochemical heat storage, that absorption processes have prohibitive performance for long-term heat storage, and that thermochemical storage may not be suitable for long-term solar heat storage in buildings.
Molecular bonds
Storing energy in molecular bonds is being investigated. Energy densities equivalent to lithium-ion batteries have been achieved with a DSPEC (dye-sensitized photoelectrosynthesis cell). This is a cell that can store energy acquired by solar panels during the day for night-time (or even later) use. Its design takes its cue from well-known natural photosynthesis.
The DSPEC generates hydrogen fuel by using the acquired solar energy to split water molecules into their elements. As the result of this split, the hydrogen is isolated and the oxygen is released into the air. This is more difficult than it sounds: four electrons of the water molecules need to be separated and transported elsewhere, and the separated hydrogen atoms then need to be combined into hydrogen molecules.
The DSPEC consists of two components: a molecule and a nanoparticle. The molecule is called a chromophore-catalyst assembly, which absorbs sunlight and kick-starts the catalyst. This catalyst separates the electrons from the water molecules. The nanoparticles are assembled into a thin layer, and a single nanoparticle carries many chromophore-catalyst assemblies. The function of this thin layer of nanoparticles is to transfer away the electrons that are separated from the water. The layer of nanoparticles is coated with titanium dioxide, so that the freed electrons can be transferred more quickly and hydrogen can be made. This is in turn covered by a protective coating that strengthens the connection between the chromophore-catalyst assembly and the nanoparticle.
Using this method, the solar energy acquired from the solar panels is converted into fuel (hydrogen) without releasing greenhouse gases. This fuel can be stored and, at a later time, used in a fuel cell to generate electricity.
Molecular Solar Thermal System (MOST)
Another promising way to store solar energy for electricity and heat production is a so-called molecular solar thermal system (MOST). With this approach a molecule is converted by photoisomerization into a higher-energy isomer. Photoisomerization is a process in which one (cis–trans) isomer is converted into another by light (solar energy). This isomer is capable of storing the solar energy until the energy is released by a heat trigger or a catalyst (the isomer then reverts to its original form). A promising candidate for such a MOST is norbornadiene (NBD), because there is a high energy difference, approximately 96 kJ/mol, between NBD and its quadricyclane (QC) photoisomer. It is also known that for such systems, donor-acceptor substitutions provide an effective means of red-shifting the longest-wavelength absorption. This improves the match with the solar spectrum.
A crucial challenge for a useful MOST system is to achieve a satisfactorily high energy storage density (if possible, higher than 300 kJ/kg). Another requirement is that light can be harvested in the visible region. Functionalizing the NBD with donor and acceptor units is used to tune this absorption maximum. However, the benefit to solar absorption is offset by a higher molecular weight, which implies a lower energy density. Red-shifting the absorption also has another downside: the energy storage time is reduced. A possible solution to this anti-correlation between energy density and red-shifting is to couple one chromophore unit to several photoswitches, forming so-called dimers or trimers in which the NBD units share a common donor and/or acceptor.
Kasper Moth-Poulsen and his team tried to engineer the stability of the high-energy photoisomer by having two electronically coupled photoswitches with separate barriers for thermal conversion. By doing so, a blue shift occurred after the first isomerization (NBD-NBD to QC-NBD), which led to a higher energy of isomerization for the second switching event (QC-NBD to QC-QC). Another advantage of this system, from sharing a donor, is that the molecular weight per norbornadiene unit is reduced, which increases the energy density.
Eventually, this system reached a quantum yield of photoconversion of up to 94% per NBD unit; the quantum yield here measures how efficiently absorbed photons drive the conversion. With this system the measured energy densities reached up to 559 kJ/kg (exceeding the target of 300 kJ/kg). So the potential of molecular photoswitches is enormous, not only for solar thermal energy storage but for other applications as well.
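The connection between the 96 kJ/mol storage energy and the energy density per kilogram is just a molar-mass conversion, which also shows why heavier donor/acceptor groups dilute the density. In the sketch below, only unsubstituted norbornadiene's molar mass (about 92 g/mol) is a standard figure; the molar mass used for the functionalized photoswitch is a placeholder assumption.

```python
# Energy density of a MOST photoswitch from its storage energy per mole.
# Unsubstituted norbornadiene: ~92.14 g/mol (standard value).
# The "functionalized" molar mass is a placeholder assumption, included only to
# show how heavier donor/acceptor groups dilute the energy density.

def energy_density_kj_per_kg(storage_kj_per_mol, molar_mass_g_per_mol):
    return storage_kj_per_mol / (molar_mass_g_per_mol / 1000.0)

print(f"Plain NBD (96 kJ/mol, 92 g/mol):           "
      f"{energy_density_kj_per_kg(96, 92.14):.0f} kJ/kg")    # ~1040 kJ/kg
print(f"Functionalized NBD (96 kJ/mol, 250 g/mol): "
      f"{energy_density_kj_per_kg(96, 250):.0f} kJ/kg")      # ~380 kJ/kg
```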
In 2022, researchers reported combining the MOST with a chip-sized thermoelectric generator to generate electricity from it. The system can reportedly store solar energy for up to 18 years and may be an option for renewable energy storage.
Thermal Battery
A thermal energy battery is a physical structure used for the purpose of storing and releasing thermal energy. Such a thermal battery (a.k.a. TBat) allows energy available at one time to be temporarily stored and then released at another time. The basic principles involved in a thermal battery occur at the atomic level of matter, with energy being added to or taken from either a solid mass or a liquid volume, causing the substance's temperature to change. Some thermal batteries also involve driving a substance through a phase transition, which stores and releases additional energy due to the enthalpy of fusion or enthalpy of vaporization.
Thermal batteries are very common, and include such familiar items as a hot water bottle. Early examples of thermal batteries include stone and mud cook stoves, rocks placed in fires, and kilns. While stoves and kilns are ovens, they are also thermal storage systems that depend on heat being retained for an extended period of time. Thermal energy storage systems can also be installed in domestic situations with heat batteries and thermal stores being amongst the most common types of energy storage systems installed at homes in the UK.
Types of thermal batteries
Thermal batteries generally fall into four categories with different forms and applications, although fundamentally all are for the storage and retrieval of thermal energy. They also differ in method and density of heat storage.
Phase change thermal battery
Phase change materials used for thermal storage are capable of storing and releasing significant thermal capacity at the temperature at which they change phase. These materials are chosen for specific applications because there is a wide range of temperatures that may be useful in different applications and a wide range of materials that change phase at different temperatures. These materials include salts and waxes that are specifically engineered for the applications they serve. In addition to manufactured materials, water is a phase change material. The latent heat of fusion of water is 334 joules per gram, and the solid–liquid phase change occurs at 0 °C (32 °F).
Some applications use the thermal capacity of water or ice as cold storage; others use it as heat storage. It can serve either application; ice can be melted to store heat then refrozen to warm an environment. The advantage of using a phase change in this way is that a given mass of material can absorb a large quantity of energy without its temperature changing. Hence a thermal battery that uses a phase change can be made lighter, or more energy can be put into it without raising the internal temperature unacceptably.
Encapsulated thermal battery
An encapsulated thermal battery is physically similar to a phase change thermal battery in that it is a confined amount of physical material which is thermally heated or cooled to store or extract energy. However, in a non-phase change encapsulated thermal battery, the temperature of the substance is changed without inducing a phase change. Since a phase change is not needed many more materials are available for use in an encapsulated thermal battery. One of the key properties of an encapsulated thermal battery is its volumetric heat capacity (VHC), also termed volume-specific heat capacity. Several substances are used for these thermal batteries, for example water, concrete, and wet or dry sand.
An example of an encapsulated thermal battery is a residential water heater with a storage tank. This thermal battery is usually slowly charged over a period of about 30–60 minutes for rapid use when needed (e.g., 10–15 minutes). Many utilities, understanding the "thermal battery" nature of water heaters, have begun using them to absorb excess renewable energy power when available for later use by the homeowner. According to the above-cited article, "net savings to the electricity system as a whole could be $200 per year per heater — some of which may be passed on to its owner".
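As a rough sense of scale for this kind of "battery", the sketch below computes the heat held by a storage water heater; the tank volume and temperature swing are assumed example values, not figures for any particular appliance.

```python
# Energy stored in a residential hot-water tank treated as a thermal battery.
# Tank volume and temperature swing are assumed example values.

volume_l = 200            # assumed tank size
delta_t_k = 40            # e.g. heated from 20 degC to 60 degC
cp_kj_per_kg_k = 4.18     # specific heat of water
density_kg_per_l = 1.0    # water

energy_kwh = volume_l * density_kg_per_l * cp_kj_per_kg_k * delta_t_k / 3600.0
print(f"Stored heat: {energy_kwh:.1f} kWh")   # ~9 kWh of dispatchable heat
```

Roughly 9 kWh of shiftable heat per tank, under these assumptions, is the flexibility that utilities exploit when they use fleets of water heaters to absorb surplus renewable power.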
Research into using sand as a heat storage medium has been performed in Finland, where a prototype 8 MWh sand battery was built in 2022 to store renewable solar and wind power as heat, for later use as district heating and possibly later power generation. In Canada, single-building thermal storage also stores renewable solar and wind power as heat, for later use as space or water heating for the building in which it is installed. It differs from the system in Finland by being compact, using low-pressure pumped fluids, and heating only one building rather than several. It can take in waste heat from alternative sources such as computer server rooms or compost heaps and store it for later distribution.
Ground heat exchange thermal battery
A ground heat exchanger (GHEX) is an area of the earth that is utilized as a seasonal/annual cycle thermal battery. These thermal batteries are areas of the earth into which pipes have been placed in order to transfer thermal energy. Energy is added to the GHEX by running a higher temperature fluid through the pipes and thus raising the temperature of the local earth. Energy can also be taken from the GHEX by running a lower-temperature fluid through those same pipes.
GHEXs are usually implemented in two forms. In a "horizontal" GHEX, trenching is used to place an amount of pipe in a closed loop in the ground. GHEXs are also formed by drilling boreholes into the ground, either vertically or horizontally, into which the pipes are inserted as a closed loop with a "u-bend" fitting at the far end.
Heat energy can be added to or removed from a GHEX at any point in time. However, they are most often used as seasonal thermal energy storage operating on an annual cycle: heat is extracted from a building during the summer season to cool it and added to the GHEX, and the same energy is later extracted from the GHEX in the winter season to heat the building. This annual cycle of energy addition and subtraction is highly predictable based on energy modelling of the building served. A thermal battery used in this mode is a renewable energy source, as the energy extracted in the winter will be restored to the GHEX the next summer in a continually repeating cycle. It is solar powered, because it is the heat from the sun in summer that is removed from the building and stored in the ground for heating in the following winter. Two main methods of thermal response testing are used to characterize the thermal conductivity and thermal capacity/diffusivity of GHEX thermal batteries: the log-time one-dimensional curve fit and the newly released advanced thermal response testing.
A good example of the annual-cycle nature of a GHEX thermal battery can be seen in the ASHRAE building study. The 'Ground Loop and Ambient Air temperatures by date' graphic there (Figure 2–7) shows the annual sinusoidal cycle of the ground temperature as heat is seasonally extracted from the ground in winter and rejected to the ground in summer, creating a ground "thermal charge" in one season that is not discharged and driven back past neutral in the other direction until a later season. Other, more advanced examples of ground-based thermal batteries utilizing intentional well-bore thermal patterns are currently in research and early use.
Other thermal batteries
In the defense industry primary molten-salt batteries are termed "thermal batteries". They are non-rechargeable electrical batteries using a low-melting eutectic mixture of ionic metal salts (sodium, potassium and lithium chlorides, bromides, etc.) as the electrolyte, manufactured with the salts in solid form. As long as the salts remain solid, the battery has a long shelf life of up to 50 years. Once activated (usually by a pyrotechnic heat source) and the electrolyte melts, it is very reliable with a high energy and power density. They are extensively used for military applications such as small to large guided missiles, and nuclear weapons.
There are other items that have historically been termed "thermal batteries", such as energy-storage heat packs that skiers use for keeping hands and feet warm (see hand warmer). These contain iron powder moist with oxygen-free salt water which rapidly corrodes over a period of hours, releasing heat, when exposed to air. Instant cold packs absorb heat by a non-chemical phase-change such as by absorbing the endothermic heat of solution of certain compounds.
The one common principle of these other thermal batteries is that the reaction involved is not reversible. Thus, these batteries are not used for storing and retrieving heat energy.
Electric thermal storage
Storage heaters are commonplace in European homes with time-of-use metering (traditionally using cheaper electricity at nighttime). They consist of high-density ceramic bricks or feolite blocks heated to a high temperature with electricity, and may or may not have good insulation and controls to release heat over a number of hours. Some advise against using them in areas with young children or where there is an increased risk of fires due to poor housekeeping, both because of the high temperatures involved.
With the rise of wind and solar power (and other renewable energies) providing an ever increasing share of energy input into the electricity grids in some countries, the use of larger scale electric energy storage is being explored by several commercial companies. Ideally, surplus renewable energy is transformed into high-temperature, high-grade heat in highly insulated heat stores, for release later when needed. An emerging technology is the use of vacuum super insulated (VSI) heat stores. The use of electricity to generate heat, rather than, say, direct heat from solar thermal collectors, means that very high temperatures can be realised, potentially allowing for inter-seasonal heat transfer: storing high-grade heat in summer from surplus photovoltaic generation for the following winter with relatively minimal standing losses.
Solar energy storage
Solar energy is an application of thermal energy storage. Most practical solar thermal storage systems provide storage from a few hours to a day's worth of energy. However, a growing number of facilities use seasonal thermal energy storage (STES), enabling solar energy to be stored in summer to heat space during winter. In 2017 Drake Landing Solar Community in Alberta, Canada, achieved a year-round 97% solar heating fraction, a world record made possible by incorporating STES.
The combined use of latent heat and sensible heat is possible with high temperature solar thermal input. Various eutectic metal mixtures, such as aluminum and silicon, offer a high melting point suited to efficient steam generation, while high alumina cement-based materials offer good storage capabilities.
Pumped-heat electricity storage
In pumped-heat electricity storage (PHES), a reversible heat-pump system is used to store energy as a temperature difference between two heat stores.
Isentropic
Isentropic systems involve two insulated containers filled, for example, with crushed rock or gravel: a hot vessel storing thermal energy at high temperature/pressure, and a cold vessel storing thermal energy at low temperature/pressure. The vessels are connected at top and bottom by pipes and the whole system is filled with an inert gas such as argon.
While charging, the system uses off-peak electricity to work as a heat pump. In one prototype, argon at ambient temperature and pressure from the top of the cold store is compressed adiabatically to a pressure of, for example, 12 bar, heating it to around . The compressed gas is transferred to the top of the hot vessel where it percolates down through the gravel, transferring heat to the rock and cooling to ambient temperature. The cooled, but still pressurized, gas emerging at the bottom of the vessel is then adiabatically expanded to 1 bar, which lowers its temperature to −150 °C. The cold gas is then passed up through the cold vessel where it cools the rock while warming to its initial condition.
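The temperatures in such a cycle follow from the isentropic relation T_out = T_in · (p_out/p_in)^((γ − 1)/γ), with γ ≈ 5/3 for a monatomic gas such as argon. The sketch below evaluates the ideal case for the pressures described above; the 20 °C ambient starting temperature is an assumption, and a real machine falls somewhat short of these ideal figures.

```python
# Ideal isentropic temperature change of argon for the pumped-heat cycle
# described above. gamma = 5/3 for a monatomic ideal gas; 20 degC ambient
# is an assumed starting temperature.

GAMMA = 5.0 / 3.0

def isentropic_temperature(t_in_k, p_in_bar, p_out_bar):
    """Outlet temperature for an ideal isentropic compression or expansion."""
    return t_in_k * (p_out_bar / p_in_bar) ** ((GAMMA - 1.0) / GAMMA)

ambient_k = 293.15                                   # 20 degC, assumed
hot_k = isentropic_temperature(ambient_k, 1, 12)     # charging: compress to 12 bar
cold_k = isentropic_temperature(ambient_k, 12, 1)    # charging: expand back to 1 bar

print(f"After compression to 12 bar: {hot_k - 273.15:6.0f} degC")   # ~+520 degC
print(f"After expansion to 1 bar:    {cold_k - 273.15:6.0f} degC")  # ~-165 degC
```

The ideal expansion temperature of about −165 °C is a little colder than the −150 °C quoted above, the difference reflecting real-machine irreversibilities.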
The energy is recovered as electricity by reversing the cycle. The hot gas from the hot vessel is expanded to drive a generator and then supplied to the cold store. The cooled gas retrieved from the bottom of the cold store is compressed which heats the gas to ambient temperature. The gas is then transferred to the bottom of the hot vessel to be reheated.
The compression and expansion processes are provided by a specially designed reciprocating machine using sliding valves. Surplus heat generated by inefficiencies in the process is shed to the environment through heat exchangers during the discharging cycle.
The developer claimed that a round trip efficiency of 72–80% was achievable. This compares to >80% achievable with pumped hydro energy storage.
Another proposed system uses turbomachinery and is capable of operating at much higher power levels. Use of phase change material as heat storage material could enhance performance.
See also
Carnot battery
District heating
Eutectic system
Fireless locomotive
Geothermal energy
Geothermal power
Heat capacity
Ice storage air conditioning
Lamm-Honigmann process
Liquid nitrogen economy
List of energy storage projects
Phase change material
Pumpable ice technology
Pumped-storage hydroelectricity
Steam accumulator
Storage heater
Thermal battery
Uniform Mechanical Code
Uniform Solar Energy and Hydronics Code
US DOE International Energy Storage Database
References
External links
ASHRAE white paper on the economies of load shifting
ICE TES Thermal Energy Storage — IDE-Tech
Laramie, Wyoming
"Prepared for the Thermal Energy-Storage Systems Collaborative of the California Energy Commission" Report titled "Source Energy and Environmental Impacts of Thermal Energy Storage." Tabors Caramanis & Assoc energy.ca.gov
Competence Center Thermal Energy Storage at Lucerne School of Engineering and Architecture
Further reading
Energy storage
Heating, ventilation, and air conditioning
Energy conservation
Heat transfer
Solar design
Renewable energy | Thermal energy storage | [
"Physics",
"Chemistry",
"Engineering"
] | 7,903 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Solar design",
"Energy engineering",
"Thermodynamics"
] |