id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
172,586 | https://en.wikipedia.org/wiki/Laser%20cooling | Laser cooling, sometimes also referred to as Doppler cooling, includes several techniques where atoms, molecules, and small mechanical systems are cooled with laser light. The directed energy of lasers is often associated with heating materials, e.g. laser cutting, so it can be counterintuitive that laser cooling often results in sample temperatures approaching absolute zero. It is a routine step in many atomic physics experiments where the laser-cooled atoms are then subsequently manipulated and measured, or used in technologies such as atom-based quantum computing architectures. Laser cooling relies on the change in momentum when an object, such as an atom, absorbs and re-emits a photon (a particle of light). For example, if laser light illuminates a warm cloud of atoms from all directions and the laser's frequency is tuned below an atomic resonance, the atoms will be cooled. This common type of laser cooling relies on the Doppler effect, whereby individual atoms preferentially absorb laser light from the direction opposite to the atom's motion. The absorbed light is re-emitted by the atom in a random direction. After repeated emission and absorption of light, the net effect on the cloud of atoms is that they will expand more slowly. The slower expansion reflects a narrowing of the velocity distribution of the atoms, which corresponds to a lower temperature; the atoms have therefore been cooled. For an ensemble of particles, the thermodynamic temperature is proportional to the variance of the velocities, so the narrower the velocity distribution, the lower the temperature of the particles.
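To make the temperature–velocity-spread relationship concrete, here is a minimal Python sketch that recovers an effective one-dimensional temperature from sampled atomic velocities via T = m·Var(v)/kB. The atom mass, velocity spread, and sample size are illustrative assumptions, not values from the article:

```python
import numpy as np

# Minimal sketch: relate the velocity spread of an atomic cloud to its
# temperature via equipartition, T = m * Var(v) / kB per degree of freedom.
kB = 1.380649e-23          # Boltzmann constant, J/K
m_Rb85 = 85 * 1.6605e-27   # approximate mass of a rubidium-85 atom, kg

rng = np.random.default_rng(0)
# Hypothetical 1D velocity samples with a 1 cm/s spread (a post-cooling scale)
v = rng.normal(0.0, 0.01, size=100_000)   # m/s

T = m_Rb85 * np.var(v) / kB
print(f"Effective 1D temperature: {T * 1e6:.2f} microkelvin")   # ~1 uK
```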
The 1997 Nobel Prize in Physics was awarded to Claude Cohen-Tannoudji, Steven Chu, and William Daniel Phillips "for development of methods to cool and trap atoms with laser light".
History
Radiation pressure
Radiation pressure is the force that electromagnetic radiation exerts on matter. In 1873 Maxwell published his treatise on electromagnetism in which he predicted radiation pressure. The force was experimentally demonstrated for the first time by Lebedev and reported at a conference in Paris in 1900, and later published in more detail in 1901. Following Lebedev's measurements Nichols and Hull also demonstrated the force of radiation pressure in 1901, with a refined measurement reported in 1903.
Atoms and molecules have bound states and transitions can occur between these states in the presence of light. Sodium is historically notable because it has a strong transition at 589 nm, a wavelength which is close to the peak sensitivity of the human eye. This made it easy to see the interaction of light with sodium atoms. In 1933, Otto Frisch deflected an atomic beam of sodium atoms with light.
This was the first realization of radiation pressure acting on an atom or molecule.
Laser cooling proposals
The introduction of lasers in atomic physics experiments was the precursor to the laser cooling proposals in the mid 1970s. Laser cooling was proposed separately in 1975 by two different research groups: Hänsch and Schawlow, and Wineland and Dehmelt. Both proposals outlined the simplest laser cooling process, known as Doppler cooling, where laser light tuned below an atom's resonant frequency is preferentially absorbed by atoms moving towards the laser and after absorption a photon is emitted in a random direction. This process is repeated many times and in a configuration with counterpropagating laser cooling light the velocity distribution of the atoms is reduced.
In 1977 Ashkin submitted a paper describing how Doppler cooling could be used to provide the necessary damping to load atoms into an optical trap. In this work he emphasized how this could allow for long spectroscopic measurements, which would increase precision because the atoms would be held in place. He also discussed overlapping optical traps to study interactions between different atoms.
Initial realizations
Following the laser cooling proposals, in 1978 two research groups, Wineland, Drullinger, and Walls of NIST, and Neuhauser, Hohenstatt, Toschek, and Dehmelt of the University of Washington, succeeded in laser cooling trapped ions. The NIST group wanted to reduce the effect of Doppler broadening on spectroscopy. They cooled magnesium ions in a Penning trap to below 40 K. The Washington group cooled barium ions.
The research from both groups served to illustrate the mechanical properties of light.
Influenced by Wineland's work on laser cooling ions, William Phillips applied the same principles to laser cool neutral atoms. In 1982, he published the first paper in which neutral atoms were laser cooled. The process used is now known as the Zeeman slower and is a standard technique for slowing an atomic beam.
Modern advances
Atoms
The Doppler cooling limit for electric dipole transitions is typically in the hundreds of microkelvins. In the 1980s this limit was seen as the lowest achievable temperature. It was a surprise, then, when sodium atoms were cooled to 43 microkelvin even though their Doppler cooling limit is 240 microkelvin. This unforeseen low temperature was explained by considering the interaction of polarized laser light with more atomic states and transitions; previous conceptions of laser cooling had been too simplistic. The major laser cooling breakthroughs of the 1970s and 1980s led to several improvements to preexisting technology and to new discoveries at temperatures just above absolute zero. The cooling processes were utilized to make atomic clocks more accurate and to improve spectroscopic measurements, and led to the observation of a new state of matter at ultracold temperatures. The new state of matter, the Bose–Einstein condensate, was observed in 1995 by Eric Cornell, Carl Wieman, and Wolfgang Ketterle.
Exotic atoms
Most laser cooling experiments bring the atoms close to rest in the laboratory frame, but cooling of relativistic atoms has also been achieved, where the effect of cooling manifests as a narrowing of the velocity distribution. In 1990, a group at JGU successfully laser-cooled a beam of 7Li+ ions in a storage ring using two counter-propagating lasers addressing the same transition but tuned to different frequencies, to compensate for the large Doppler shift.
Laser cooling of antimatter has also been demonstrated, first in 2021 by the ALPHA collaboration on antihydrogen atoms.
Molecules
Molecules are significantly more challenging to laser cool than atoms because molecules have vibrational and rotational degrees of freedom. These extra degrees of freedom result in more energy levels that can be populated from excited state decays, requiring more lasers compared to atoms to address the more complex level structure. Vibrational decays are particularly challenging because there are no symmetry rules that restrict the vibrational states that can be populated.
In 2010, a team at Yale successfully laser-cooled a diatomic molecule. In 2016, a group at MPQ successfully cooled formaldehyde via optoelectric Sisyphus cooling. In 2022, a group at Harvard successfully laser cooled and trapped CaOH in a magneto-optical trap.
Mechanical systems
Starting in the 2000s, laser cooling was applied to small mechanical systems, ranging from small cantilevers to the mirrors used in the LIGO observatory. These devices are connected to a larger substrate, such as a mechanical membrane attached to a frame, or they are held in optical traps; in both cases the mechanical system is a harmonic oscillator. Laser cooling reduces the random vibrations of the mechanical oscillator, removing thermal phonons from the system.
In 2007, an MIT team successfully laser-cooled a macro-scale (1 gram) object to 0.8 K. In 2011, a team from the California Institute of Technology and the University of Vienna became the first to laser-cool a (10 μm × 1 μm) mechanical object to its quantum ground state.
Methods
The first example of laser cooling, and also still the most common method (so much so that it is still often referred to simply as 'laser cooling'), is Doppler cooling.
Doppler cooling
Doppler cooling, which is usually accompanied by a magnetic trapping force to give a magneto-optical trap, is by far the most common method of laser cooling. It is used to cool low density gases down to the Doppler cooling limit, which for rubidium-85 is around 150 microkelvins.
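The quoted limit can be checked directly from the Doppler-limit formula T_D = ħΓ/(2kB). A minimal sketch, assuming the commonly quoted natural linewidth Γ = 2π × 6.07 MHz for the rubidium D2 cooling transition:

```python
import math

# Hedged check of the Doppler cooling limit, T_D = hbar * Gamma / (2 * kB).
hbar = 1.054571817e-34         # reduced Planck constant, J s
kB = 1.380649e-23              # Boltzmann constant, J/K
Gamma = 2 * math.pi * 6.07e6   # assumed natural linewidth, rad/s

T_D = hbar * Gamma / (2 * kB)
print(f"Doppler limit: {T_D * 1e6:.0f} microkelvin")   # ~146 uK, i.e. ~150 uK
```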
In Doppler cooling, initially, the frequency of light is tuned slightly below an electronic transition in the atom. Because the light is detuned to the "red" (i.e., at lower frequency) of the transition, the atoms will absorb more photons if they move towards the light source, due to the Doppler effect. Thus if one applies light from two opposite directions, the atoms will always scatter more photons from the laser beam pointing opposite to their direction of motion. In each scattering event the atom loses a momentum equal to the momentum of the photon. If the atom, which is now in the excited state, then emits a photon spontaneously, it will be kicked by the same amount of momentum, but in a random direction. Since the initial momentum change is a pure loss (opposing the direction of motion), while the subsequent change is random, the probable result of the absorption and emission process is to reduce the momentum of the atom, and therefore its speed—provided its initial speed was larger than the recoil speed from scattering a single photon. If the absorption and emission are repeated many times, the average speed, and therefore the kinetic energy of the atom, will be reduced. Since the temperature of a group of atoms is a measure of the average random internal kinetic energy, this is equivalent to cooling the atoms.
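The velocity dependence described above can be sketched numerically. The following hedged example evaluates the standard two-beam scattering-force expression for a red-detuned one-dimensional molasses; the wavelength, linewidth, detuning, and saturation parameter are illustrative rubidium-like assumptions rather than values from the article:

```python
import math

# Net radiation-pressure force on an atom between two counter-propagating,
# red-detuned beams (1D optical molasses):
# F(v) = hbar*k * [ R(delta - k*v) - R(delta + k*v) ],
# with per-beam scattering rate R(d) = (Gamma/2) * s / (1 + s + (2d/Gamma)^2).
hbar = 1.054571817e-34
lam = 780e-9                   # laser wavelength, m (assumed)
k = 2 * math.pi / lam          # wavenumber, 1/m
Gamma = 2 * math.pi * 6.07e6   # linewidth, rad/s (assumed)
delta = -Gamma / 2             # red detuning
s = 1.0                        # saturation parameter (assumed)

def force(v):
    def rate(d):  # photon scattering rate for one beam, Doppler-shifted by d
        return (Gamma / 2) * s / (1 + s + (2 * d / Gamma) ** 2)
    return hbar * k * (rate(delta - k * v) - rate(delta + k * v))

for v in (-1.0, -0.1, 0.0, 0.1, 1.0):   # m/s
    print(f"v = {v:+.1f} m/s  ->  F = {force(v):+.2e} N")
# For red detuning, F and v have opposite signs: the force damps the motion.
```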
Other methods
Other methods of laser cooling include:
Sisyphus cooling
Resolved sideband cooling
Raman sideband cooling
Velocity selective coherent population trapping (VSCPT)
Gray molasses
Optical molasses
Cavity-mediated cooling
Use of a Zeeman slower
Electromagnetically induced transparency (EIT) cooling
Anti-Stokes cooling in solids
Polarization gradient cooling
Applications
Laser cooling is very common in the field of atomic physics. Reducing the random motion of atoms has several benefits, including the ability to trap atoms with optical or magnetic fields. Spectroscopic measurements of a cold atomic sample will also have reduced systematic uncertainties due to thermal motion.
Often multiple laser cooling techniques are used in a single experiment to prepare a cold sample of atoms, which is then subsequently manipulated and measured. In a representative experiment, a vapor of strontium atoms is generated in a hot oven and exits the oven as an atomic beam. After leaving the oven the atoms are Doppler cooled in two dimensions transverse to their motion to reduce loss of atoms due to divergence of the atomic beam. The atomic beam is then slowed and cooled with a Zeeman slower to optimize the atom loading efficiency into a magneto-optical trap (MOT), which Doppler cools the atoms on the broad ¹S₀–¹P₁ transition with lasers at 461 nm. The MOT then switches from light at 461 nm to light at 689 nm, driving the narrow ¹S₀–³P₁ intercombination transition to realize even colder atoms. The atoms are then transferred into an optical dipole trap where evaporative cooling brings them to temperatures at which they can be effectively loaded into an optical lattice.
Laser cooling is important for quantum computing efforts based on neutral atoms and trapped atomic ions. In an ion trap Doppler cooling reduces the random motion of the ions so they form a well-ordered crystal structure in the trap. After Doppler cooling the ions are often cooled to their motional ground state to reduce decoherence during quantum gates between ions.
Equipment
Laser cooling atoms (and molecules especially) requires specialized experimental equipment that when assembled forms a cold atom machine. Such a machine generally consists of two parts: a vacuum chamber which houses the laser cooled atoms and the laser systems used for cooling, as well as for preparing and manipulating atomic states and detecting the atoms.
Vacuum system
In order for atoms to be laser cooled, the atoms cannot collide with room temperature background gas particles. Such collisions will drastically heat the atoms and knock them out of weak traps. Acceptable collision rates for cold atom machines typically require vacuum pressures of 10⁻⁹ Torr, and very often pressures hundreds or even thousands of times lower are necessary. To achieve these low pressures, a vacuum chamber is needed. The vacuum chamber typically includes windows so that the atoms can be addressed with lasers (e.g. for laser cooling) and so that light emitted by the atoms, or absorption of light by the atoms, can be detected. The vacuum chamber also requires an atomic source for the atom(s) to be laser cooled. The atomic source is generally heated to produce thermal atoms that can be laser cooled. For ion trapping experiments the vacuum system must also hold the ion trap, with the appropriate electrical feedthroughs for the trap. Neutral atom systems very often employ a magneto-optical trap (MOT) as one of the early stages in collecting and cooling atoms. For a MOT, magnetic field coils are typically placed outside of the vacuum chamber to generate the magnetic field gradients for the MOT.
Lasers
The lasers required for cold atom machines depend entirely on the choice of atom. Each atom has unique electronic transitions at very distinct wavelengths that must be driven for the atom to be laser cooled. Rubidium, for example, is a very commonly used atom which requires driving two transitions with laser light at 780 nm that are separated by a few GHz. The light for rubidium can be generated from a single laser at 780 nm and an electro-optic modulator. Generally tens of mW (and often hundreds of mW, to cool significantly more atoms) is used to cool neutral atoms. Trapped ions, on the other hand, require microwatts of optical power, as they are generally tightly confined and the laser light can be focused to a small spot size. The strontium ion, for example, requires light at both 422 nm and 1092 nm in order to be Doppler cooled. Because of the small Doppler shifts involved with laser cooling, lasers with very narrow linewidths, on the order of a few MHz, are required. Such lasers are generally stabilized to spectroscopy reference cells, optical cavities, or sometimes wavemeters so the laser light can be precisely tuned relative to the atomic transitions.
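As a rough illustration of why MHz-level linewidths matter, the Doppler shift seen by a moving atom is Δf = v/λ. The one-liner below assumes rubidium's 780 nm cooling light; the velocities are arbitrary examples spanning a thermal beam down to a post-cooling atom:

```python
# Doppler shift delta_f = v / lambda for an atom moving at speed v,
# evaluated at an assumed 780 nm cooling wavelength.
lam = 780e-9   # m

for v in (300.0, 1.0, 0.01):   # m/s: thermal beam, slow atom, post-cooling
    print(f"v = {v:7.2f} m/s -> Doppler shift = {v / lam / 1e6:9.3f} MHz")
# A few-MHz laser linewidth resolves the shifts near the end of cooling.
```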
See also
Particle beam cooling
References
Additional sources
Laser Cooling HyperPhysics
PhysicsWorld series of articles by Chad Orzel:
Cold: how physicists learned to manipulate and move particles with laser cooling
Colder: how physicists beat the theoretical limit for laser cooling and laid the foundations for a quantum revolution
Coldest: how a letter to Einstein and advances in laser-cooling technology led physicists to new quantum states of matter
Thermodynamics
Atomic physics
Cooling technology
Laser applications | Laser cooling | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,983 | [
"Dynamical systems",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Thermodynamics",
"Atomic",
" and optical physics"
] |
172,825 | https://en.wikipedia.org/wiki/Condensation%20reaction | In organic chemistry, a condensation reaction is a type of chemical reaction in which two molecules are combined to form a single molecule, usually with the loss of a small molecule such as water. If water is lost, the reaction is also known as a dehydration synthesis. However, other molecules can also be lost, such as ammonia, ethanol, acetic acid, and hydrogen sulfide.
The addition of the two molecules typically proceeds in a step-wise fashion to the addition product, usually in equilibrium, and with loss of a water molecule (hence the name condensation). The reaction may otherwise involve the functional groups of the molecule, and is a versatile class of reactions that can occur in acidic or basic conditions or in the presence of a catalyst. This class of reactions is a vital part of life as it is essential to the formation of peptide bonds between amino acids and to the biosynthesis of fatty acids.
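A condensation can be sanity-checked with simple atom bookkeeping. The following Python sketch (the helper function is purely illustrative) combines two glycine molecules and subtracts one water, recovering the formula of the dipeptide glycylglycine formed through a peptide bond:

```python
from collections import Counter

def formula(counts):
    # Render an element-count mapping as a Hill-style formula string.
    return "".join(f"{el}{n if n > 1 else ''}" for el, n in sorted(counts.items()))

glycine = Counter({"C": 2, "H": 5, "N": 1, "O": 2})   # C2H5NO2
water = Counter({"H": 2, "O": 1})

dipeptide = glycine + glycine
dipeptide.subtract(water)   # condensation: two molecules join, one H2O is lost

print(formula(dipeptide))   # C4H8N2O3, the formula of glycylglycine
```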
Many variations of condensation reactions exist. Common examples include the aldol condensation and the Knoevenagel condensation, which both form water as a by-product, as well as the Claisen condensation and the Dieckmann condensation (intramolecular Claisen condensation), which form alcohols as by-products.
Synthesis of prebiotic molecules
Condensation reactions likely played major roles in the synthesis of the first biotic molecules including early peptides and nucleic acids. In fact, condensation reactions would be required at multiple steps in RNA oligomerization: the condensation of nucleobases and sugars, nucleoside phosphorylation, and nucleotide polymerization.
See also
Anabolism
Hydrolysis, the opposite of a condensation reaction
Condensed tannins
References | Condensation reaction | [
"Chemistry"
] | 370 | [
"Condensation reactions",
"Organic reactions"
] |
173,186 | https://en.wikipedia.org/wiki/Roll-to-roll%20processing | In the field of electronic devices, roll-to-roll processing, also known as web processing, reel-to-reel processing or R2R, is the process of creating electronic devices on a roll of flexible plastic, metal foil, or flexible glass. In other fields predating this use, it can refer to any process of applying coating, printing, or performing other processes starting with a roll of a flexible material and re-reeling after the process to create an output roll. These processes, and others such as sheeting, can be grouped together under the general term converting. When the rolls of material have been coated, laminated or printed they can be subsequently slit to their finished size on a slitter rewinder.
In electronic devices
Large circuits made with thin-film transistors and other devices can be patterned onto these large substrates, which can be up to a few metres wide and long. Some of the devices can be patterned directly, much like an inkjet printer deposits ink. For most semiconductors, however, the devices must be patterned using photolithography techniques.
Roll-to-roll processing of large-area electronic devices reduces manufacturing cost. Most notable would be solar cells, which are still prohibitively expensive for most markets due to the high cost per unit area of traditional bulk (mono- or polycrystalline) silicon manufacturing. Other applications could arise which take advantage of the flexible nature of the substrates, such as electronics embedded into clothing, large-area flexible displays, and roll-up portable displays.
LED (Light Emitting Diode)
Inorganic LED - Flexible LED strips are commonly made in 25, 50, 100 m, or even longer lengths using a roll-to-roll process. A long neon-style LED tube uses such a long flexible strip encapsulated in a PVC or silicone diffusing encapsulation.
Organic LED (OLED) - OLED displays for foldable phone screens are adopting roll-to-roll processing technology.
Thin-film cells
A crucial issue for a roll-to-roll thin-film cell production system is the deposition rate of the microcrystalline layer, and this can be tackled using four approaches:
very high frequency plasma-enhanced chemical vapour deposition (VHF-PECVD)
microwave (MW)-PECVD
hot wire chemical vapour deposition (hot-wire CVD)
the use of ultrasonic nozzles in an in-line process
In electrochemical devices
Roll-to-roll processing has been used in the manufacture of electrochemical devices such as batteries, supercapacitors, fuel cells, and water electrolyzers. Here, the roll-to-roll processing is utilized for electrode manufacturing and is the key to reducing manufacturing cost through stable production of electrodes on various film substrates such as metal foils, membranes, diffusion media, and separators.
See also
Amorphous silicon
Low cost solar cell
Printed electronics
Roll slitting
Rolling (metalworking)
Thin film solar cell
Web manufacturing
Tape automated bonding, TAB
References
Electronics manufacturing
Semiconductors | Roll-to-roll processing | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 621 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Electronics manufacturing",
"Solid state engineering",
"Matter"
] |
173,196 | https://en.wikipedia.org/wiki/Spin%20network | In physics, a spin network is a type of diagram which can be used to represent states and interactions between particles and fields in quantum mechanics. From a mathematical perspective, the diagrams are a concise way to represent multilinear functions and functions between representations of matrix groups. The diagrammatic notation can thus greatly simplify calculations.
Roger Penrose described spin networks in 1971. Spin networks have since been applied to the theory of quantum gravity by Carlo Rovelli, Lee Smolin, Jorge Pullin, Rodolfo Gambini and others.
Spin networks can also be used to construct a particular functional on the space of connections which is invariant under local gauge transformations.
Definition
Penrose's definition
A spin network, as described in Penrose (1971), is a kind of diagram in which each line segment represents the world line of a "unit" (either an elementary particle or a compound system of particles). Three line segments join at each vertex. A vertex may be interpreted as an event in which either a single unit splits into two or two units collide and join into a single unit. Diagrams whose line segments are all joined at vertices are called closed spin networks. Time may be viewed as going in one direction, such as from the bottom to the top of the diagram, but for closed spin networks the direction of time is irrelevant to calculations.
Each line segment is labelled with an integer called a spin number. A unit with spin number n is called an n-unit and has angular momentum nħ/2, where ħ is the reduced Planck constant. For bosons, such as photons and gluons, n is an even number. For fermions, such as electrons and quarks, n is odd.
Given any closed spin network, a non-negative integer can be calculated which is called the norm of the spin network. Norms can be used to calculate the probabilities of various spin values. A network whose norm is zero has zero probability of occurrence. The rules for calculating norms and probabilities are beyond the scope of this article. However, they imply that for a spin network to have nonzero norm, two requirements must be met at each vertex. Suppose a vertex joins three units with spin numbers a, b, and c. Then, these requirements are stated as:
Triangle inequality: a ≤ b + c and b ≤ a + c and c ≤ a + b.
Fermion conservation: a + b + c must be an even number.
For example, a = 3, b = 4, c = 6 is impossible since 3 + 4 + 6 = 13 is odd, and a = 3, b = 4, c = 9 is impossible since 9 > 3 + 4. However, a = 3, b = 4, c = 5 is possible since 3 + 4 + 5 = 12 is even, and the triangle inequality is satisfied.
Some conventions use labellings by half-integers, with the condition that the sum a + b + c must be a whole number.
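The two vertex conditions above are easy to express in code. This small Python sketch (using the integer spin-number convention, as in the worked examples) reproduces the three cases just discussed:

```python
# Check the two conditions for a nonzero-norm spin network at a vertex
# joining units with spin numbers a, b, c: the triangle inequality and
# fermion conservation (even total spin).
def vertex_allowed(a: int, b: int, c: int) -> bool:
    triangle = a <= b + c and b <= a + c and c <= a + b
    even_sum = (a + b + c) % 2 == 0
    return triangle and even_sum

print(vertex_allowed(3, 4, 6))   # False: 3 + 4 + 6 = 13 is odd
print(vertex_allowed(3, 4, 9))   # False: 9 > 3 + 4 violates the triangle rule
print(vertex_allowed(3, 4, 5))   # True: even sum, triangle rule satisfied
```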
Formal approach to definition
Formally, a spin network may be defined as a (directed) graph whose edges are associated with irreducible representations of a compact Lie group and whose vertices are associated with intertwiners of the edge representations adjacent to it.
Properties
A spin network, immersed into a manifold, can be used to define a functional on the space of connections on this manifold. One computes holonomies of the connection along every link (closed path) of the graph, determines representation matrices corresponding to every link, multiplies all matrices and intertwiners together, and contracts indices in a prescribed way. A remarkable feature of the resulting functional is that it is invariant under local gauge transformations.
Usage in physics
In the context of loop quantum gravity
In loop quantum gravity (LQG), a spin network represents a "quantum state" of the gravitational field on a 3-dimensional hypersurface. The set of all possible spin networks (or, more accurately, "s-knots", that is, equivalence classes of spin networks under diffeomorphisms) is countable; it constitutes a basis of the LQG Hilbert space.
One of the key results of loop quantum gravity is quantization of areas: the operator of the area A of a two-dimensional surface Σ should have a discrete spectrum. Every spin network is an eigenstate of each such operator, and the area eigenvalue equals

$$A = 8\pi\gamma\,\ell_{\mathrm{PL}}^{2}\sum_{i}\sqrt{j_i\,(j_i+1)},$$

where the sum goes over all intersections i of Σ with the spin network. In this formula,
ℓPL is the Planck length,
γ is the Immirzi parameter, and
ji = 0, 1/2, 1, 3/2, ... is the spin associated with the link i of the spin network. The two-dimensional area is therefore "concentrated" in the intersections with the spin network.
According to this formula, the lowest possible non-zero eigenvalue of the area operator corresponds to a link that carries a spin-1/2 representation. Assuming an Immirzi parameter on the order of 1, this gives the smallest possible measurable area of ~10⁻⁶⁶ cm².
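A back-of-envelope evaluation of this spectrum is sketched below. It assumes the 8πγ normalization written above with γ = 1, so the printed numbers should be read as order-of-magnitude only; the precise prefactor (and hence agreement with the ~10⁻⁶⁶ cm² figure) depends on conventions and the adopted value of the Immirzi parameter:

```python
import math

# Hedged, order-of-magnitude evaluation of the LQG area spectrum
# A = 8*pi*gamma * l_PL^2 * sqrt(j*(j+1)) for a single puncture.
# gamma = 1 and the 8*pi*gamma normalization are assumptions; the exact
# prefactor differs between conventions in the literature.
l_PL = 1.616e-33   # Planck length in cm
gamma = 1.0        # Immirzi parameter, taken to be of order 1

for j in (0.5, 1.0, 1.5, 2.0):
    A = 8 * math.pi * gamma * l_PL**2 * math.sqrt(j * (j + 1))
    print(f"j = {j}: A = {A:.2e} cm^2")
# j = 1/2 gives the smallest nonzero eigenvalue (a few 1e-65 cm^2 here),
# and the spectrum is manifestly discrete.
```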
The formula for area eigenvalues becomes somewhat more complicated if the surface is allowed to pass through the vertices, as with anomalous diffusion models. Also, the eigenvalues of the area operator A are constrained by ladder symmetry.
Similar quantization applies to the volume operator. The volume of a 3D submanifold that contains part of a spin network is given by a sum of contributions from each node inside it. One can think that every node in a spin network is an elementary "quantum of volume" and every link is a "quantum of area" surrounding this volume.
More general gauge theories
Similar constructions can be made for general gauge theories with a compact Lie group G and a connection form. This is actually an exact duality over a lattice. Over a manifold however, assumptions like diffeomorphism invariance are needed to make the duality exact (smearing Wilson loops is tricky). Later, it was generalized by Robert Oeckl to representations of quantum groups in 2 and 3 dimensions using the Tannaka–Krein duality.
Michael A. Levin and Xiao-Gang Wen have also defined string-nets using tensor categories that are objects very similar to spin networks. However the exact connection with spin networks is not clear yet. String-net condensation produces topologically ordered states in condensed matter.
Usage in mathematics
In mathematics, spin networks have been used to study skein modules and character varieties, which correspond to spaces of connections.
See also
Spin connection
Spin structure
Character variety
Penrose graphical notation
Spin foam
String-net
Trace diagram
Tensor network
References
Further reading
Early papers
I. B. Levinson, "Sum of Wigner coefficients and their graphical representation," Proceed. Phys-Tech Inst. Acad Sci. Lithuanian SSR 2, 17-30 (1956)
Modern papers
Xiao-Gang Wen, "Quantum Field Theory of Many-body Systems – from the Origin of Sound to an Origin of Light and Fermions," . (Dubbed string-nets here.)
Books
G. E. Stedman, Diagram Techniques in Group Theory, Cambridge University Press, 1990.
Predrag Cvitanović, Group Theory: Birdtracks, Lie's, and Exceptional Groups, Princeton University Press, 2008.
Diagrams
Quantum field theory
Loop quantum gravity
Mathematical physics
Diagram algebras | Spin network | [
"Physics",
"Mathematics"
] | 1,534 | [
"Quantum field theory",
"Applied mathematics",
"Theoretical physics",
"Quantum mechanics",
"Mathematical physics"
] |
173,238 | https://en.wikipedia.org/wiki/Diels%E2%80%93Alder%20reaction | In organic chemistry, the Diels–Alder reaction is a chemical reaction between a conjugated diene and a substituted alkene, commonly termed the dienophile, to form a substituted cyclohexene derivative. It is the prototypical example of a pericyclic reaction with a concerted mechanism. More specifically, it is classified as a thermally allowed [4+2] cycloaddition with Woodward–Hoffmann symbol [π4s + π2s]. It was first described by Otto Diels and Kurt Alder in 1928. For the discovery of this reaction, they were awarded the Nobel Prize in Chemistry in 1950. Through the simultaneous construction of two new carbon–carbon bonds, the Diels–Alder reaction provides a reliable way to form six-membered rings with good control over the regio- and stereochemical outcomes. Consequently, it has served as a powerful and widely applied tool for the introduction of chemical complexity in the synthesis of natural products and new materials. The underlying concept has also been applied to π-systems involving heteroatoms, such as carbonyls and imines, which furnish the corresponding heterocycles; this variant is known as the hetero-Diels–Alder reaction. The reaction has also been generalized to other ring sizes, although none of these generalizations have matched the formation of six-membered rings in terms of scope or versatility. Because of the negative values of ΔH° and ΔS° for a typical Diels–Alder reaction, the microscopic reverse of a Diels–Alder reaction becomes favorable at high temperatures, although this is of synthetic importance for only a limited range of Diels–Alder adducts, generally with some special structural features; this reverse reaction is known as the retro-Diels–Alder reaction.
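For readers who want to experiment, the cycloaddition can be enumerated with the RDKit cheminformatics toolkit (assumed installed). The reaction SMARTS below is a deliberately minimal [4+2] template: it ignores stereochemistry and regiochemical preferences, and symmetric atom matches may yield duplicate products:

```python
# Minimal sketch of a Diels-Alder cycloaddition using RDKit's reaction SMARTS.
from rdkit import Chem
from rdkit.Chem import AllChem

# Generic [4+2] template: diene (atoms 1-4) plus dienophile (atoms 5-6)
# combine into a cyclohexene ring. No stereo- or regiochemistry is modeled.
diels_alder = AllChem.ReactionFromSmarts(
    "[C:1]=[C:2][C:3]=[C:4].[C:5]=[C:6]>>[C:1]1[C:2]=[C:3][C:4][C:5][C:6]1"
)

diene = Chem.MolFromSmiles("C=CC=C")        # 1,3-butadiene
dienophile = Chem.MolFromSmiles("C=CC=O")   # acrolein, an activated alkene

products = diels_alder.RunReactants((diene, dienophile))
seen = set()
for (product,) in products:
    Chem.SanitizeMol(product)
    smiles = Chem.MolToSmiles(product)
    if smiles not in seen:   # de-duplicate symmetric matches
        seen.add(smiles)
        print(smiles)        # cyclohex-3-ene-1-carbaldehyde
```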
Mechanism
The reaction is an example of a concerted pericyclic reaction. It is believed to occur via a single, cyclic transition state, with no intermediates generated during the course of the reaction. As such, the Diels–Alder reaction is governed by orbital symmetry considerations: it is classified as a [π4s + π2s] cycloaddition, indicating that it proceeds through the suprafacial/suprafacial interaction of a 4π electron system (the diene structure) with a 2π electron system (the dienophile structure), an interaction that leads to a transition state without an additional orbital symmetry-imposed energetic barrier and allows the Diels–Alder reaction to take place with relative ease.
A consideration of the reactants' frontier molecular orbitals (FMO) makes plain why this is so. (The same conclusion can be drawn from an orbital correlation diagram or a Dewar-Zimmerman analysis.) For the more common "normal" electron demand Diels–Alder reaction, the more important of the two HOMO/LUMO interactions is that between the electron-rich diene's ψ2 as the highest occupied molecular orbital (HOMO) with the electron-deficient dienophile's π* as the lowest unoccupied molecular orbital (LUMO). However, the HOMO–LUMO energy gap is close enough that the roles can be reversed by switching electronic effects of the substituents on the two components. In an inverse (reverse) electron-demand Diels–Alder reaction, electron-withdrawing substituents on the diene lower the energy of its empty ψ3 orbital and electron-donating substituents on the dienophile raise the energy of its filled π orbital sufficiently that the interaction between these two orbitals becomes the most energetically significant stabilizing orbital interaction. Regardless of which situation pertains, the HOMO and LUMO of the components are in phase and a bonding interaction results as can be seen in the diagram below. Since the reactants are in their ground state, the reaction is initiated thermally and does not require activation by light.
The "prevailing opinion" is that most Diels–Alder reactions proceed through a concerted mechanism; the issue, however, has been thoroughly contested. Despite the fact that the vast majority of Diels–Alder reactions exhibit stereospecific, syn addition of the two components, a diradical intermediate has been postulated (and supported with computational evidence) on the grounds that the observed stereospecificity does not rule out a two-step addition involving an intermediate that collapses to product faster than it can rotate to allow for inversion of stereochemistry.
There is a notable rate enhancement when certain Diels–Alder reactions are carried out in polar organic solvents such as dimethylformamide and ethylene glycol, and even in water. The reaction of cyclopentadiene and butenone for example is 700 times faster in water relative to 2,2,4-trimethylpentane as solvent. Several explanations for this effect have been proposed, such as an increase in effective concentration due to hydrophobic packing or hydrogen-bond stabilization of the transition state.
The geometry of the diene and dienophile components each propagate into stereochemical details of the product. For intermolecular reactions especially, the preferred positional and stereochemical relationship of substituents of the two components compared to each other are controlled by electronic effects. However, for intramolecular Diels–Alder cycloaddition reactions, the conformational stability of the structure of the transition state can be an overwhelming influence.
Regioselectivity
Frontier molecular orbital theory has also been used to explain the regioselectivity patterns observed in Diels–Alder reactions of substituted systems. Calculation of the energy and orbital coefficients of the components' frontier orbitals provides a picture that is in good accord with the more straightforward analysis of the substituents' resonance effects, as illustrated below.
In general, the regioselectivity found for both normal and inverse electron-demand Diels–Alder reaction follows the ortho-para rule, so named, because the cyclohexene product bears substituents in positions that are analogous to the ortho and para positions of disubstituted arenes. For example, in a normal-demand scenario, a diene bearing an electron-donating group (EDG) at C1 has its largest HOMO coefficient at C4, while the dienophile with an electron withdrawing group (EWG) at C1 has the largest LUMO coefficient at C2. Pairing these two coefficients gives the "ortho" product as seen in case 1 in the figure below. A diene substituted at C2 as in case 2 below has the largest HOMO coefficient at C1, giving rise to the "para" product. Similar analyses for the corresponding inverse-demand scenarios gives rise to the analogous products as seen in cases 3 and 4. Examining the canonical mesomeric forms above, it is easy to verify that these results are in accord with expectations based on consideration of electron density and polarization.
In general, with respect to the energetically most well-matched HOMO-LUMO pair, maximizing the interaction energy by forming bonds between centers with the largest frontier orbital coefficients allows the prediction of the main regioisomer that will result from a given diene-dienophile combination. In a more sophisticated treatment, three types of substituents (Z withdrawing: HOMO and LUMO lowering (CF3, NO2, CN, C(O)CH3), X donating: HOMO and LUMO raising (Me, OMe, NMe2), C conjugating: HOMO raising and LUMO lowering (Ph, vinyl)) are considered, resulting in a total of 18 possible combinations. The maximization of orbital interaction correctly predicts the product in all cases for which experimental data is available. For instance, in uncommon combinations involving X groups on both diene and dienophile, a 1,3-substitution pattern may be favored, an outcome not accounted for by a simplistic resonance structure argument. However, cases where the resonance argument and the matching of largest orbital coefficients disagree are rare.
Stereospecificity and stereoselectivity
Diels–Alder reactions, as concerted cycloadditions, are stereospecific. Stereochemical information of the diene and the dienophile are retained in the product, as a syn addition with respect to each component. For example, substituents in a cis (trans, resp.) relationship on the double bond of the dienophile give rise to substituents that are cis (trans, resp.) on those same carbons with respect to the cyclohexene ring. Likewise, cis,cis- and trans,trans-disubstituted dienes give cis substituents at these carbons of the product whereas cis,trans-disubstituted dienes give trans substituents:
Diels–Alder reactions in which adjacent stereocenters are generated at the two ends of the newly formed single bonds imply two different possible stereochemical outcomes. This is a stereoselective situation based on the relative orientation of the two separate components when they react with each other. In the context of the Diels–Alder reaction, the transition state in which the most significant substituent (an electron-withdrawing and/or conjugating group) on the dienophile is oriented towards the diene π system and slips under it as the reaction takes place is known as the endo transition state. In the alternative exo transition state, it is oriented away from it. (There is a more general usage of the terms endo and exo in stereochemical nomenclature.)
In cases where the dienophile has a single electron-withdrawing / conjugating substituent, or two electron-withdrawing / conjugating substituents cis to each other, the outcome can often be predicted. In these "normal demand" Diels–Alder scenarios, the endo transition state is typically preferred, despite often being more sterically congested. This preference is known as the Alder endo rule. As originally stated by Alder, the transition state that is preferred is the one with a "maximum accumulation of double bonds." Endo selectivity is typically higher for rigid dienophiles such as maleic anhydride and benzoquinone; for others, such as acrylates and crotonates, selectivity is not very pronounced.
The most widely accepted explanation for the origin of this effect is a favorable interaction between the π systems of the dienophile and the diene, an interaction described as a secondary orbital effect, though dipolar and van der Waals attractions may play a part as well, and solvent can sometimes make a substantial difference in selectivity. The secondary orbital overlap explanation was first proposed by Woodward and Hoffmann. In this explanation, the orbitals associated with the group in conjugation with the dienophile double-bond overlap with the interior orbitals of the diene, a situation that is possible only for the endo transition state. Although the original explanation only invoked the orbital on the atom α to the dienophile double bond, Salem and Houk have subsequently proposed that orbitals on the α and β carbons both participate when molecular geometry allows.
Often, as with highly substituted dienes, very bulky dienophiles, or reversible reactions (as in the case of furan as diene), steric effects can override the normal endo selectivity in favor of the exo isomer.
The diene
The diene component of the Diels–Alder reaction can be either open-chain or cyclic, and it can host many different types of substituents. It must, however, be able to exist in the s-cis conformation, since this is the only conformer that can participate in the reaction. Though butadienes are typically more stable in the s-trans conformation, in most cases the energy difference is small (~2–5 kcal/mol).
A bulky substituent at the C2 or C3 position can increase reaction rate by destabilizing the s-trans conformation and forcing the diene into the reactive s-cis conformation. 2-tert-butyl-buta-1,3-diene, for example, is 27 times more reactive than simple butadiene. Conversely, a diene having bulky substituents at both C2 and C3 is less reactive because the steric interactions between the substituents destabilize the s-cis conformation.
Dienes with bulky terminal substituents (C1 and C4) decrease the rate of reaction, presumably by impeding the approach of the diene and dienophile.
An especially reactive diene is 1-methoxy-3-trimethylsiloxy-buta-1,3-diene, otherwise known as Danishefsky's diene. It has particular synthetic utility as means of furnishing α,β–unsaturated cyclohexenone systems by elimination of the 1-methoxy substituent after deprotection of the enol silyl ether. Other synthetically useful derivatives of Danishefsky's diene include 1,3-alkoxy-1-trimethylsiloxy-1,3-butadienes (Brassard dienes) and 1-dialkylamino-3-trimethylsiloxy-1,3-butadienes (Rawal dienes). The increased reactivity of these and similar dienes is a result of synergistic contributions from donor groups at C1 and C3, raising the HOMO significantly above that of a comparable monosubstituted diene.
Unstable (and thus highly reactive) dienes can be synthetically useful, e.g. o-quinodimethanes can be generated in situ. In contrast, stable dienes, such as naphthalene, require forcing conditions and/or highly reactive dienophiles, such as N-phenylmaleimide. Anthracene, being less aromatic (and therefore more reactive for Diels–Alder syntheses) in its central ring, can form a 9,10 adduct with maleic anhydride at 80 °C and even with acetylene, a weak dienophile, at 250 °C.
The dienophile
In a normal demand Diels–Alder reaction, the dienophile has an electron-withdrawing group in conjugation with the alkene; in an inverse-demand scenario, the dienophile is conjugated with an electron-donating group. Dienophiles can be chosen to contain a "masked functionality". The dienophile undergoes a Diels–Alder reaction with a diene, introducing such a functionality onto the product molecule. A series of reactions then follow to transform the functionality into a desirable group. The end product cannot be made in a single DA step because the equivalent dienophile is either unreactive or inaccessible. An example of such an approach is the use of α-chloroacrylonitrile (CH2=CClCN). When reacted with a diene, this dienophile introduces α-chloronitrile functionality onto the product molecule. This is a "masked functionality" which can then be hydrolyzed to form a ketone. The α-chloroacrylonitrile dienophile is an equivalent of the ketene dienophile (CH2=C=O), which would produce the same product in one DA step. The problem is that ketene itself cannot be used in Diels–Alder reactions because it reacts with dienes in an unwanted manner (by [2+2] cycloaddition), and therefore the "masked functionality" approach has to be used. Other such functionalities are phosphonium substituents (yielding exocyclic double bonds after Wittig reaction), various sulfoxide and sulfonyl functionalities (both are acetylene equivalents), and nitro groups (ketene equivalents).
Variants on the classical Diels–Alder reaction
Hetero-Diels–Alder
Diels–Alder reactions involving at least one heteroatom are also known and are collectively called hetero-Diels–Alder reactions. Carbonyl groups, for example, can successfully react with dienes to yield dihydropyran rings, a reaction known as the oxo-Diels–Alder reaction, and imines can be used, either as the dienophile or at various sites in the diene, to form various N-heterocyclic compounds through the aza-Diels–Alder reaction. Nitroso compounds (R-N=O) can react with dienes to form oxazines. Chlorosulfonyl isocyanate can be utilized as a dienophile to prepare Vince lactam.
Lewis acid activation
Lewis acids, such as zinc chloride, boron trifluoride, tin tetrachloride, or aluminium chloride, can catalyze Diels–Alder reactions by binding to the dienophile. Traditionally, the enhanced Diels-Alder reactivity is ascribed to the ability of the Lewis acid to lower the LUMO of the activated dienophile, which results in a smaller normal electron demand HOMO-LUMO orbital energy gap and hence more stabilizing orbital interactions.
Recent studies, however, have shown that this rationale behind Lewis acid-catalyzed Diels–Alder reactions is incorrect. It is found that Lewis acids accelerate the Diels–Alder reaction by reducing the destabilizing steric Pauli repulsion between the interacting diene and dienophile and not by lowering the energy of the dienophile's LUMO and consequently, enhancing the normal electron demand orbital interaction. The Lewis acid binds via a donor-acceptor interaction to the dienophile and via that mechanism polarizes occupied orbital density away from the reactive C=C double bond of the dienophile towards the Lewis acid. This reduced occupied orbital density on C=C double bond of the dienophile will, in turn, engage in a less repulsive closed-shell-closed-shell orbital interaction with the incoming diene, reducing the destabilizing steric Pauli repulsion and hence lowers the Diels–Alder reaction barrier. In addition, the Lewis acid catalyst also increases the asynchronicity of the Diels–Alder reaction, making the occupied π-orbital located on the C=C double bond of the dienophile asymmetric. As a result, this enhanced asynchronicity leads to an extra reduction of the destabilizing steric Pauli repulsion as well as a diminishing pressure on the reactants to deform, in other words, it reduced the destabilizing activation strain (also known as distortion energy). This working catalytic mechanism is known as Pauli-lowering catalysis, which is operative in a variety of organic reactions.
The original rationale behind Lewis acid-catalyzed Diels–Alder reactions is incorrect, because besides lowering the energy of the dienophile's LUMO, the Lewis acid also lowers the energy of the HOMO of the dienophile and hence increases the inverse electron demand LUMO-HOMO orbital energy gap. Thus, indeed Lewis acid catalysts strengthen the normal electron demand orbital interaction by lowering the LUMO of the dienophile, but, they simultaneously weaken the inverse electron demand orbital interaction by also lowering the energy of the dienophile's HOMO. These two counteracting phenomena effectively cancel each other, resulting in nearly unchanged orbital interactions when compared to the corresponding uncatalyzed Diels–Alder reactions and making this not the active mechanism behind Lewis acid-catalyzed Diels–Alder reactions.
Asymmetric Diels–Alder
Many methods have been developed for influencing the stereoselectivity of the Diels–Alder reaction, such as the use of chiral auxiliaries, catalysis by chiral Lewis acids, and small organic molecule catalysts. Evans' oxazolidinones, oxazaborolidines, bis-oxazoline–copper chelates, imidazoline catalysis, and many other methodologies exist for effecting diastereo- and enantioselective Diels–Alder reactions.
Hexadehydro Diels–Alder
In the hexadehydro Diels–Alder reaction, alkynes and diynes are used instead of alkenes and dienes, forming an unstable benzyne intermediate which can then be trapped to form an aromatic product. This reaction allows the formation of heavily functionalized aromatic rings in a single step.
Applications and natural occurrence
The retro-Diels–Alder reaction is used in the industrial production of cyclopentadiene. Cyclopentadiene is a precursor to various norbornenes, which are common monomers. The Diels–Alder reaction is also employed in the production of vitamin B6.
History
The Diels-Alder reaction was the culmination of several intertwined research threads, some near misses, and ultimately, the insightful recognition of a general principle by Otto Diels and Kurt Alder. Their seminal work, detailed in a series of 28 articles published in the Justus Liebigs Annalen der Chemie and Berichte der deutschen chemischen Gesellschaft from 1928 to 1937, established the reaction's wide applicability and its importance in constructing six-membered rings. The first 19 articles were authored by Diels and Alder, while the later articles were authored by Diels and various other coauthors. However, the history of the reaction extends further back, revealing a fascinating narrative of discoveries missed and opportunities overlooked.
Several chemists, working independently in the late 19th and early 20th centuries, encountered reactions that, in retrospect, involved the Diels-Alder process but remained unrecognized as such.
Theodor Zincke performed a series of experiments between 1892 and 1912 involving tetrachlorocyclopentadienone, a highly reactive diene analogue.
In 1910, Sergey Lebedev systematically investigated thermal polymerization of three conjugated dienes (butadiene, isoprene and dimethylbutadiene), a process now recognized as a Diels-Alder self-reaction, providing a detailed analysis of the dimerization products and recognizing the importance of the conjugated system in the process. Five years earlier, Carl Harries studied the degradation of natural rubber, leading him to propose a cyclic structure for the polymer.
Hermann Staudinger's work with ketenes published in 1912 covered both [2+2] cycloadditions, where one molecule of a ketene reacted with an unsaturated compound to form a four-membered ring, and, importantly, [4+2] cycloadditions. In the latter case, two molecules of ketene combined with one molecule of an unsaturated compound (such as a quinone) to yield a six-membered ring. While not a classic Diels-Alder reaction in the typical sense of a conjugated diene and a separate dienophile, Staudinger's observation of this [4+2] process, forming a six-membered ring, foreshadowed the later work of Diels and Alder. However, his focus remained primarily on the more common [2+2] ketene cycloaddition.
Hans von Euler-Chelpin and K. O. Josephson, investigating isoprene and butadiene reactions in 1920, both observed products consistent with Diels-Alder cycloadditions, but didn't go on to research it further.
Perhaps the most striking near miss came from Walter Albrecht in the early 1900s. Working in Johannes Thiele's laboratory, Albrecht investigated the reaction of cyclopentadiene with para-benzoquinone. His 1902 doctoral dissertation clearly describes the formation of the Diels-Alder adduct, even providing (incorrect) structural assignments. However, influenced by Thiele's focus on conjugation and partial valence, Albrecht in his 1906 publication interpreted the reaction as a 1,4-addition followed by a 1,2-addition, completely overlooking the cycloaddition aspect.
While these observations hinted at the possibility of a broader class of cycloaddition reactions, they remained isolated incidents, their significance not fully appreciated at the time, with none of the researchers even trying to generalize their findings.
It fell to Diels and Alder to synthesize these disparate threads into a coherent whole. Unlike the earlier researchers, they recognized the generality and predictability of the diene and dienophile combining to form a cyclic structure. Through their systematic investigations, exploring various combinations of dienes and dienophiles, they firmly established the "diene synthesis" as a powerful new synthetic method. Their meticulous work not only demonstrated the reaction's scope and versatility but also laid the groundwork for future theoretical developments, including the Woodward-Hoffmann rules, which would provide a deeper understanding of pericyclic reactions, including the Diels-Alder.
Applications in total synthesis
The Diels–Alder reaction was one step in an early preparation of the steroids cortisone and cholesterol. The reaction involved the addition of butadiene to a quinone.
Diels–Alder reactions were used in the original synthesis of prostaglandins F2α and E2. The Diels–Alder reaction establishes the relative stereochemistry of three contiguous stereocenters on the prostaglandin cyclopentane core. Activation by Lewis acidic cupric tetrafluoroborate was required.
A Diels–Alder reaction was used in the synthesis of disodium prephenate, a biosynthetic precursor of the amino acids phenylalanine and tyrosine.
A synthesis of reserpine uses a Diels–Alder reaction to set the cis-decalin framework of the D and E rings.
In another synthesis of reserpine, the cis-fused D and E rings were formed by a Diels–Alder reaction. Intramolecular Diels–Alder of the pyranone below with subsequent extrusion of carbon dioxide via a retro [4+2] afforded the bicyclic lactam. Epoxidation from the less hindered α-face, followed by epoxide opening at the less hindered C18 afforded the desired stereochemistry at these positions, while the cis-fusion was achieved with hydrogenation, again proceeding primarily from the less hindered face.
A pyranone was similarly used as the dienophile in the total synthesis of taxol. The intermolecular reaction of the hydroxy-pyrone and α,β–unsaturated ester shown below suffered from poor yield and regioselectivity; however, when directed by phenylboronic acid the desired adduct could be obtained in 61% yield after cleavage of the boronate with neopentyl glycol. The stereospecificity of the Diels–Alder reaction in this instance allowed for the definition of four stereocenters that were carried on to the final product.
A Diels–Alder reaction is a key step in the synthesis of (-)-furaquinocin C.
Tabersonine was prepared by a Diels–Alder reaction to establish cis relative stereochemistry of the alkaloid core. Conversion of the cis-aldehyde to its corresponding alkene by Wittig olefination and subsequent ring-closing metathesis with a Schrock catalyst gave the second ring of the alkaloid core. The diene in this instance is notable as an example of a 1-amino-3-siloxybutadiene, otherwise known as a Rawal diene.
(+)-Sterpurene can be prepared by asymmetric D-A reaction that featured a remarkable intramolecular Diels–Alder reaction of an allene. The [2,3]-sigmatropic rearrangement of the thiophenyl group to give the sulfoxide as below proceeded enantiospecifically due to the predefined stereochemistry of the propargylic alcohol. In this way, the single allene isomer formed could direct the Diels–Alder reaction to occur on only one face of the generated 'diene'.
The tetracyclic core of the antibiotic (-)-tetracycline was prepared with a Diels–Alder reaction. Thermally initiated, conrotatory opening of the benzocyclobutene generated the o-quinodimethane, which reacted intermolecularly to give the tetracycline skeleton. The dienophile's free hydroxyl group is integral to the success of the reaction, as hydroxyl-protected variants did not react under several different reaction conditions.
Takemura et al. synthesized cantharidin in 1980 by a Diels–Alder reaction utilizing high pressure.
Synthetic applications of the Diels–Alder reaction have been reviewed extensively.
See also
Bradsher cycloaddition
Wagner-Jauregg reaction
Aza-Diels–Alder reaction
References
Bibliography
External links
English Translation of Diels and Alder's seminal 1928 German article that won them the Nobel prize. English title: 'Syntheses of the hydroaromatic series'; German title "Synthesen in der hydroaromatischen Reihe".
Cycloadditions
Carbon-carbon bond forming reactions
Ring forming reactions
German inventions
1928 in science
1928 in Germany
Name reactions | Diels–Alder reaction | [
"Chemistry"
] | 6,267 | [
"Name reactions",
"Carbon-carbon bond forming reactions",
"Ring forming reactions",
"Organic reactions"
] |
173,283 | https://en.wikipedia.org/wiki/Poly%28methyl%20methacrylate%29 | Poly(methyl methacrylate) (PMMA) is a synthetic polymer derived from methyl methacrylate. It is a transparent thermoplastic, used as an engineering plastic. PMMA is also known as acrylic, acrylic glass, as well as by the trade names and brands Crylux, Hesalite, Plexiglas, Acrylite, Lucite, and Perspex, among several others (see below). This plastic is often used in sheet form as a lightweight or shatter-resistant alternative to glass. It can also be used as a casting resin, in inks and coatings, and for many other purposes.
It is often technically classified as a type of glass, in that it is a non-crystalline vitreous substance—hence its occasional historic designation as acrylic glass.
History
The first acrylic acid was created in 1843. Methacrylic acid, derived from acrylic acid, was formulated in 1865. The reaction between methacrylic acid and methanol results in the ester methyl methacrylate.
It was developed in 1928 in several different laboratories by many chemists, such as William R. Conn, Otto Röhm, and Walter Bauer, and first brought to market in 1933 by German Röhm & Haas AG (as of January 2019, part of Evonik Industries) and its partner and former U.S. affiliate Rohm and Haas Company under the trademark Plexiglas.
Polymethyl methacrylate was discovered in the early 1930s by British chemists Rowland Hill and John Crawford at Imperial Chemical Industries (ICI) in the United Kingdom. ICI registered the product under the trademark Perspex. About the same time, chemist and industrialist Otto Röhm of Röhm and Haas AG in Germany attempted to produce safety glass by polymerizing methyl methacrylate between two layers of glass. The polymer separated from the glass as a clear plastic sheet, which Röhm gave the trademarked name Plexiglas in 1933. Both Perspex and Plexiglas were commercialized in the late 1930s. In the United States, E.I. du Pont de Nemours & Company (now DuPont Company) subsequently introduced its own product under the trademark Lucite. In 1936 ICI Acrylics (now Lucite International) began the first commercially viable production of acrylic safety glass. During World War II both Allied and Axis forces used acrylic glass for submarine periscopes and aircraft windscreens, canopies, and gun turrets. Scraps of acrylic were also used to make clear pistol grips for the M1911A1 pistol, clear handle grips for the M1 bayonet, and handles for theater knives, so that soldiers could put small photos of loved ones or pin-up girls' pictures inside; these were called "Sweetheart Grips" or "Pin-up Grips". Civilian applications followed after the war.
Names
Common orthographic stylings include polymethyl methacrylate and polymethylmethacrylate. The full IUPAC chemical name is poly(methyl 2-methylpropenoate); a common mistake is to write "an" (propanoate, the saturated ester) instead of "en" (propenoate).
Although PMMA is often called simply "acrylic", acrylic can also refer to other polymers or copolymers containing polyacrylonitrile. Notable trade names and brands include Acrylite, Altuglas, Astariglas, Cho Chen, Crystallite, Cyrolite, Hesalite (when used in Omega watches), Lucite, Optix, Oroglas, PerClax, Perspex, Plexiglas, R-Cast, and Sumipex.
PMMA is an economical alternative to polycarbonate (PC) when tensile strength, flexural strength, transparency, polishability, and UV tolerance are more important than impact strength, chemical resistance, and heat resistance. Additionally, PMMA does not contain the potentially harmful bisphenol-A subunits found in polycarbonate and is a far better choice for laser cutting. It is often preferred because of its moderate properties, easy handling and processing, and low cost. Non-modified PMMA behaves in a brittle manner when under load, especially under an impact force, and is more prone to scratching than conventional inorganic glass, but modified PMMA is sometimes able to achieve high scratch and impact resistance.
Properties
PMMA is a strong, tough, and lightweight material. It has a density of 1.17–1.20 g/cm³, approximately half that of glass (generally 2.2–2.53 g/cm³, depending on composition). It also has good impact strength, higher than both glass and polystyrene, but significantly lower than polycarbonate and some engineered polymers. PMMA ignites and burns, forming carbon dioxide, water, carbon monoxide, and low-molecular-weight compounds, including formaldehyde.
PMMA transmits up to 92% of visible light and gives a reflection of about 4% from each of its surfaces due to its refractive index (1.4905 at 589.3 nm). It filters ultraviolet (UV) light at wavelengths below about 300 nm (similar to ordinary window glass). Some manufacturers add coatings or additives to PMMA to improve absorption in the 300–400 nm range. PMMA passes infrared light of up to 2,800 nm and blocks IR of longer wavelengths up to 25,000 nm. Colored PMMA varieties allow specific IR wavelengths to pass while blocking visible light (for remote control or heat sensor applications, for example).
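The roughly 4% per-surface reflection quoted above follows from the standard Fresnel equation for normal incidence; using the cited refractive index, it works out (as an illustrative check, not a figure from the source) to:

R = ((n − 1)/(n + 1))² = (0.4905/2.4905)² ≈ 0.039 ≈ 4%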
PMMA swells and dissolves in many organic solvents; it also has poor resistance to many other chemicals due to its easily hydrolyzed ester groups. Nevertheless, its environmental stability is superior to most other plastics such as polystyrene and polyethylene, and therefore it is often the material of choice for outdoor applications.
PMMA has a maximum water absorption ratio of 0.3–0.4% by weight. Tensile strength decreases with increased water absorption. Its coefficient of thermal expansion is relatively high at (5–10)×10⁻⁵ °C⁻¹.
The Futuro house was made of fibreglass-reinforced polyester plastic, polyester-polyurethane, and poly(methyl methacrylate); one such house was found to be degraded by cyanobacteria and archaea.
PMMA can be joined using cyanoacrylate cement (commonly known as superglue), with heat (welding), or by using chlorinated solvents such as dichloromethane or trichloromethane (chloroform) to dissolve the plastic at the joint, which then fuses and sets, forming an almost invisible weld. Scratches may easily be removed by polishing or by heating the surface of the material. Laser cutting may be used to form intricate designs from PMMA sheets. PMMA vaporizes to gaseous compounds (including its monomers) upon laser cutting, so a very clean cut is made, and cutting is performed very easily. However, pulsed laser cutting introduces high internal stresses, which on exposure to solvents produce undesirable "stress-crazing" at the cut edge and several millimetres deep. Even ammonia-based glass cleaner and almost everything short of soap-and-water produces similar undesirable crazing, sometimes over the entire surface of the cut parts, at great distances from the stressed edge. Annealing the PMMA sheet or parts is therefore an obligatory post-processing step when intending to chemically bond laser-cut parts together.
In the majority of applications, PMMA will not shatter; rather, it breaks into large dull pieces. Since PMMA is softer and more easily scratched than glass, scratch-resistant coatings are often added to PMMA sheets to protect it (and possibly provide other functions).
Pure poly(methyl methacrylate) homopolymer is rarely sold as an end product, since it is not optimized for most applications. Rather, modified formulations with varying amounts of other comonomers, additives, and fillers are created for uses where specific properties are required. For example:
A small amount of acrylate comonomers are routinely used in PMMA grades destined for heat processing, since this stabilizes the polymer to depolymerization ("unzipping") during processing.
Comonomers such as butyl acrylate are often added to improve impact strength.
Comonomers such as methacrylic acid can be added to increase the glass transition temperature of the polymer for higher temperature use such as in lighting applications.
Plasticizers may be added to improve processing properties, lower the glass transition temperature, improve impact properties, and improve mechanical properties such as elastic modulus.
Dyes may be added to give color for decorative applications, or to protect against (or filter) UV light.
Fillers may be added to reduce cost.
Synthesis and processing
PMMA is routinely produced by emulsion polymerization, solution polymerization, and bulk polymerization. Generally, radical initiation is used (including living polymerization methods), but anionic polymerization of PMMA can also be performed.
The glass transition temperature (Tg) of atactic PMMA is about 105 °C. The Tg values of commercial grades of PMMA range from roughly 85 to 165 °C; the range is so wide because of the vast number of commercial compositions that are copolymers with co-monomers other than methyl methacrylate. PMMA is thus an organic glass at room temperature; i.e., it is below its Tg. The forming temperature starts at the glass transition temperature and goes up from there. All common molding processes may be used, including injection molding, compression molding, and extrusion. The highest quality PMMA sheets are produced by cell casting, but in this case, the polymerization and molding steps occur concurrently. The strength of the material is higher than molding grades owing to its extremely high molecular mass. Rubber toughening has been used to increase the toughness of PMMA to overcome its brittle behavior in response to applied loads.
Applications
Being transparent and durable, PMMA is a versatile material used in a wide range of fields and applications such as rear lights and instrument clusters for vehicles, appliances, and lenses for glasses. In sheet form, PMMA provides shatter-resistant panels for building windows, skylights, bulletproof security barriers, signs and displays, sanitary ware (bathtubs), LCD screens, and furniture, among many other applications. It is also used in coatings: polymers based on MMA provide outstanding stability against environmental conditions with reduced emission of VOCs. Methacrylate polymers are used extensively in medical and dental applications where purity and stability are critical to performance.
Glass substitute
PMMA is commonly used for constructing residential and commercial aquariums. Designers started building large aquariums when poly(methyl methacrylate) could be used. It is less often used in other building types due to incidents such as the Summerland disaster.
PMMA is used for viewing ports and even complete pressure hulls of submersibles, such as the Alicia submarine's viewing sphere and the window of the bathyscaphe Trieste.
PMMA is used in the lenses of exterior lights of automobiles.
Spectator protection in ice hockey rinks is made from PMMA.
Historically, PMMA was an important improvement in the design of aircraft windows, making possible such designs as the bombardier's transparent nose compartment in the Boeing B-17 Flying Fortress. Modern aircraft transparencies often use stretched acrylic plies.
Police vehicles for riot control often have the regular glass replaced with PMMA to protect the occupants from thrown objects.
PMMA is an important material in the making of certain lighthouse lenses.
PMMA was used for the roofing of the compound in the Olympic Park for the 1972 Summer Olympics in Munich. It enabled a light and translucent construction of the structure.
PMMA (under the brand name "Lucite") was used for the ceiling of the Houston Astrodome.
Daylight redirection
Laser cut acrylic panels have been used to redirect sunlight into a light pipe or tubular skylight and, from there, to spread it into a room. Their developers Veronica Garcia Hansen, Ken Yeang, and Ian Edmonds were awarded the Far East Economic Review Innovation Award in bronze for this technology in 2003.
Because attenuation is quite strong for distances over one meter (more than 90% intensity loss for a 3000 K source), acrylic broadband light guides are mostly limited to decorative uses.
Pairs of acrylic sheets with a layer of microreplicated prisms between the sheets can have reflective and refractive properties that let them redirect part of incoming sunlight depending on its angle of incidence. Such panels act as miniature light shelves and have been commercialized for daylighting, to be used as a window or a canopy such that sunlight descending from the sky is directed to the ceiling or into the room rather than to the floor. This can lead to higher illumination of the back part of a room, in particular when combined with a white ceiling, while having only a slight impact on the view to the outside compared with normal glazing.
Medicine
PMMA has a good degree of compatibility with human tissue, and it is used in the manufacture of rigid intraocular lenses which are implanted in the eye when the original lens has been removed in the treatment of cataracts. This compatibility was discovered by the English ophthalmologist Harold Ridley in WWII RAF pilots, whose eyes had been riddled with PMMA splinters coming from the side windows of their Supermarine Spitfire fighters – the plastic scarcely caused any rejection, compared to glass splinters coming from aircraft such as the Hawker Hurricane. Ridley had a lens manufactured by the Rayner company (Brighton & Hove, East Sussex) from Perspex polymerised by ICI, and on 29 November 1949 he implanted the first intraocular lens at St Thomas' Hospital, London.
In particular, acrylic-type lenses are useful for cataract surgery in patients that have recurrent ocular inflammation (uveitis), as acrylic material induces less inflammation.
Eyeglass lenses are commonly made from PMMA.
Historically, hard contact lenses were frequently made of this material. Soft contact lenses are often made of a related polymer, where acrylate monomers containing one or more hydroxyl groups make them hydrophilic.
In orthopedic surgery, PMMA bone cement is used to affix implants and to remodel lost bone. It is supplied as a powder with liquid methyl methacrylate (MMA). Although PMMA is biologically compatible, MMA is considered to be an irritant and a possible carcinogen. PMMA has also been linked to cardiopulmonary events in the operating room due to hypotension. Bone cement acts like a grout and not so much like a glue in arthroplasty. Although sticky, it does not bond to either the bone or the implant; rather, it primarily fills the spaces between the prosthesis and the bone, preventing motion. A disadvantage of this bone cement is that it heats up considerably while setting, which may cause thermal necrosis of neighboring tissue. A careful balance of initiators and monomers is needed to reduce the rate of polymerization, and thus the heat generated.
In cosmetic surgery, tiny PMMA microspheres suspended in a biological fluid are injected as a soft-tissue filler under the skin to reduce wrinkles or scars permanently. PMMA as a soft-tissue filler was widely used in the early 2000s to restore volume in patients with HIV-related facial wasting. PMMA is used illegally to shape muscles by some bodybuilders.
Plombage is an outdated treatment of tuberculosis where the pleural space around an infected lung was filled with PMMA balls, in order to compress and collapse the affected lung.
Emerging biotechnology and biomedical research use PMMA to create microfluidic lab-on-a-chip devices, which require geometries on the order of 100 micrometres for routing liquids. These small geometries are amenable to PMMA-based biochip fabrication, and the material offers moderate biocompatibility.
Bioprocess chromatography columns use cast acrylic tubes as an alternative to glass and stainless steel. These are pressure rated and satisfy stringent requirements of materials for biocompatibility, toxicity, and extractables.
Dentistry
Due to its aforementioned biocompatibility, poly(methyl methacrylate) is a commonly used material in modern dentistry, particularly in the fabrication of dental prosthetics, artificial teeth, and orthodontic appliances.
Acrylic prosthetic construction: Pre-polymerized, powdered PMMA spheres are mixed with liquid methyl methacrylate monomer, benzoyl peroxide (initiator), and N,N-dimethyl-p-toluidine (accelerator), and placed under heat and pressure to produce a hardened polymerized PMMA structure. Through the use of injection molding techniques, wax-based designs with artificial teeth set in predetermined positions, built on gypsum stone models of patients' mouths, can be converted into functional prosthetics used to replace missing dentition. The PMMA polymer and methyl methacrylate monomer mix is then injected into a flask containing a gypsum mold of the previously designed prosthesis and placed under heat to initiate the polymerization process. Pressure is used during the curing process to minimize polymerization shrinkage, ensuring an accurate fit of the prosthesis. Though other methods of polymerizing PMMA for prosthetic fabrication exist, such as chemical and microwave resin activation, the heat-activated resin polymerization technique described above is the most commonly used due to its cost effectiveness and minimal polymerization shrinkage.
Artificial teeth: While denture teeth can be made of several different materials, PMMA is a material of choice for the manufacturing of artificial teeth used in dental prosthetics. Mechanical properties of the material allow for heightened control of aesthetics, easy surface adjustments, decreased risk of fracture when in function in the oral cavity, and minimal wear against opposing teeth. Additionally, since the bases of dental prosthetics are often constructed using PMMA, adherence of PMMA denture teeth to PMMA denture bases is unparalleled, leading to the construction of a strong and durable prosthetic.
Art and aesthetics
Acrylic paint essentially consists of PMMA suspended in water; however since PMMA is hydrophobic, a substance with both hydrophobic and hydrophilic groups needs to be added to facilitate the suspension.
Modern furniture makers, especially in the 1960s and 1970s, seeking to give their products a space age aesthetic, incorporated Lucite and other PMMA products into their designs, especially office chairs. Many other products (for example, guitars) are sometimes made with acrylic glass to make the commonly opaque objects translucent.
Perspex has been used as a surface to paint on, for example by Salvador Dalí.
Diasec is a process which uses acrylic glass as a substitute for normal glass in picture frames. This is done for its relatively low cost, light weight, shatter-resistance, aesthetics and because it can be ordered in larger sizes than standard picture framing glass.
As early as 1939, Los Angeles-based Dutch sculptor Jan De Swart experimented with samples of Lucite sent to him by DuPont; De Swart created tools to work the Lucite for sculpture and mixed chemicals to bring about certain effects of color and refraction.
From approximately the 1960s onward, sculptors and glass artists such as Jan Kubíček, Leroy Lamis, and Frederick Hart began using acrylics, especially taking advantage of the material's flexibility, light weight, cost and its capacity to refract and filter light.
In the 1950s and 1960s, Lucite was an extremely popular material for jewelry, with several companies specializing in creating high-quality pieces from this material. Lucite beads and ornaments are still sold by jewelry suppliers.
Acrylic sheets are produced in dozens of standard colors, most commonly sold using color numbers developed by Rohm & Haas in the 1950s.
Methyl methacrylate "synthetic resin" for casting (simply the bulk liquid chemical) may be used in conjunction with a polymerization catalyst such as methyl ethyl ketone peroxide (MEKP), to produce hardened transparent PMMA in any shape, from a mold. Objects like insects or coins, or even dangerous chemicals in breakable quartz ampules, may be embedded in such "cast" blocks, for display and safe handling.
Other uses
PMMA, in the commercial form Technovit 7200, is used extensively in the medical field, for example for plastic embedding in histology and for electron microscopy, among other uses.
PMMA has been used to create ultra-white opaque membranes that are flexible and switch appearance to transparent when wet.
Acrylic is used in tanning beds as the transparent surface that separates the occupant from the tanning bulbs while tanning. The type of acrylic used in tanning beds is most often formulated from a special type of polymethyl methacrylate, a compound that allows the passage of ultraviolet rays.
Sheets of PMMA are commonly used in the sign industry to make flat cut-out letters in a range of standard thicknesses. These letters may be used alone to represent a company's name and/or logo, or they may be a component of illuminated channel letters. Acrylic is also used extensively throughout the sign industry as a component of wall signs, where it may be a backplate (painted on the surface or the backside), a faceplate with additional raised lettering or even photographic images printed directly to it, or a spacer to separate sign components.
PMMA was used in Laserdisc optical media. (CDs and DVDs use polycarbonate instead, for its greater impact resistance.)
It is used as a light guide for the backlights in TFT-LCDs.
Plastic optical fiber used for short-distance communication is made from PMMA or perfluorinated PMMA, clad with fluorinated PMMA, in situations where its flexibility and lower installation costs outweigh its poor heat tolerance and higher attenuation compared with glass fiber.
PMMA, in a purified form, is used as the matrix in laser dye-doped organic solid-state gain media for tunable solid state dye lasers.
In semiconductor research and industry, PMMA serves as a resist in the electron beam lithography process. A solution of the polymer in a solvent is used to spin coat silicon and other semiconducting and semi-insulating wafers with a thin film. Patterns on this can be made by an electron beam (using an electron microscope), deep UV light (shorter wavelength than the standard photolithography process), or X-rays. Exposure creates chain scission (de-cross-linking) within the PMMA, allowing for the selective removal of exposed areas by a chemical developer, making it a positive photoresist. PMMA's advantage is that it allows extremely high resolution patterns to be made. A smooth PMMA surface can be easily nanostructured by treatment in oxygen radio-frequency plasma, and a nanostructured PMMA surface can be easily smoothed by vacuum ultraviolet (VUV) irradiation.
PMMA is used as a shield to stop beta radiation emitted from radioisotopes.
Small strips of PMMA are used as dosimeter devices during the Gamma Irradiation process. The optical properties of PMMA change as the gamma dose increases, and can be measured with a spectrophotometer.
Blacklight-reactive UV tattoos may use tattoo ink made with PMMA microcapsules and fluorescent dyes.
In the 1960s, luthier Dan Armstrong developed a line of electric guitars and basses whose bodies were made completely of acrylic. These instruments were marketed under the Ampeg brand. Ibanez and B.C. Rich have also made acrylic guitars.
Ludwig-Musser makes a line of acrylic drums called Vistalites, well known as being used by Led Zeppelin drummer John Bonham.
Artificial nails in the "acrylic" type often include PMMA powder.
Some modern briar, and occasionally meerschaum, tobacco pipes sport stems made of Lucite.
PMMA technology is utilized in roofing and waterproofing applications. By incorporating a polyester fleece sandwiched between two layers of catalyst-activated PMMA resin, a fully reinforced liquid membrane is created in situ.
PMMA is a widely used material to create deal toys and financial tombstones.
PMMA is used by the Sailor Pen Company of Kure, Japan, in their standard models of gold-nib fountain pens, specifically as the cap and body material.
See also
Cast acrylic
Organic laser
Organic photonics
Polycarbonate
References
External links
Perspex Technical Properties
Perspex Material Safety Data Sheet (MSDS)
Acrylate polymers
Amorphous solids
Biomaterials
Commodity chemicals
Dental materials
Dielectrics
Engineering plastic
German inventions
Optical materials
Plastics
Thermoplastics
Transparent materials | Poly(methyl methacrylate) | [
"Physics",
"Chemistry",
"Biology"
] | 5,281 | [
"Biomaterials",
"Physical phenomena",
"Dental materials",
"Commodity chemicals",
"Products of chemical industry",
"Unsolved problems in physics",
"Optical phenomena",
"Materials",
"Optical materials",
"Medical technology",
"Transparent materials",
"Dielectrics",
"Amorphous solids",
"Matter... |
173,309 | https://en.wikipedia.org/wiki/Liquefaction | In materials science, liquefaction is a process that generates a liquid from a solid or a gas or that generates a non-liquid phase which behaves in accordance with fluid dynamics.
It occurs both naturally and artificially. As an example of the latter, a "major commercial application of liquefaction is the liquefaction of air to allow separation of the constituents, such as oxygen, nitrogen, and the noble gases." Another is the conversion of solid coal into a liquid form usable as a substitute for liquid fuels.
Geology
In geology, soil liquefaction refers to the process by which water-saturated, unconsolidated sediments are transformed into a substance that acts like a liquid, often in an earthquake. Soil liquefaction was blamed for building collapses in the city of Palu, Indonesia in October 2018.
In a related phenomenon, liquefaction of bulk materials in cargo ships may cause a dangerous shift in the load.
Physics and chemistry
In physics and chemistry, the phase transitions from solid and gas to liquid (melting and condensation, respectively) may be referred to as liquefaction. The melting point (sometimes called liquefaction point) is the temperature and pressure at which a solid becomes a liquid. In commercial and industrial situations, the process of condensing a gas to liquid is sometimes referred to as liquefaction of gases.
Coal
Coal liquefaction is the production of liquid fuels from coal using a variety of industrial processes.
Dissolution
Liquefaction is also used in commercial and industrial settings to refer to mechanical dissolution of a solid by mixing, grinding or blending with a liquid.
Food preparation
In kitchen or laboratory settings, solids may be chopped into smaller parts sometimes in combination with a liquid, for example in food preparation or laboratory use. This may be done with a blender, or liquidiser in British English.
Irradiation
Liquefaction of silica and silicate glasses occurs on electron beam irradiation of nanosized samples in the column of a transmission electron microscope.
Biology
In biology, liquefaction often involves organic tissue turning into a more liquid-like state. For example, liquefactive necrosis in pathology, or liquefaction as a parameter in semen analysis.
See also
Cryogenic energy storage
Fluidization
Liquefaction of gases
Liquefaction point
Liquefied natural gas
Liquefied petroleum gas
Liquid air
Liquid helium
Liquid hydrogen
Liquid nitrogen
Liquid oxygen
Thixotropy
References
External links
Seminal Clot Liquefaction
Condensed matter physics
Earthquake engineering
Food preparation techniques
Laboratory techniques
Food science | Liquefaction | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 533 | [
"Structural engineering",
"Phases of matter",
"Materials science",
"Civil engineering",
"Condensed matter physics",
"nan",
"Earthquake engineering",
"Matter"
] |
173,332 | https://en.wikipedia.org/wiki/Overfitting | In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably". An overfitted model is a mathematical model that contains more parameters than can be justified by the data. In the special case of polynomial regression, for example, the number of parameters corresponds to the degree of the polynomial. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., the noise) as if that variation represented underlying model structure.
Underfitting occurs when a mathematical model cannot adequately capture the underlying structure of the data. An under-fitted model is a model where some parameters or terms that would appear in a correctly specified model are missing. Underfitting would occur, for example, when fitting a linear model to nonlinear data. Such a model will tend to have poor predictive performance.
The possibility of over-fitting exists because the criterion used for selecting the model is not the same as the criterion used to judge the suitability of a model. For example, a model might be selected by maximizing its performance on some set of training data, and yet its suitability might be determined by its ability to perform well on unseen data; overfitting occurs when a model begins to "memorize" training data rather than "learning" to generalize from a trend.
As an extreme example, if the number of parameters is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety. (For an illustration, see Figure 2.) Such a model, though, will typically fail severely when making predictions.
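A minimal numerical sketch of this extreme case (the target function, noise level, and polynomial degree below are illustrative choices, not taken from the article):

```python
# Fit a degree-9 polynomial to 10 noisy points: 10 coefficients for 10
# observations, so the fit interpolates (memorizes) the training data exactly.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + 0.3 * rng.standard_normal(10)

coeffs = np.polyfit(x_train, y_train, deg=9)  # as many parameters as observations

x_test = np.linspace(0, 1, 100)
max_train_residual = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))
test_mse = np.mean((np.polyval(coeffs, x_test) - np.sin(2 * np.pi * x_test)) ** 2)

print(f"max training residual: {max_train_residual:.2e}")  # ~0: data memorized
print(f"test MSE vs. true function: {test_mse:.2f}")       # typically large: poor generalization
```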
Overfitting is directly related to approximation error of the selected function class and the optimization error of the optimization procedure. A function class that is too large, in a suitable sense, relative to the dataset size is likely to overfit. Even when the fitted model does not have an excessive number of parameters, it is to be expected that the fitted relationship will appear to perform less well on a new dataset than on the dataset used for fitting (a phenomenon sometimes known as shrinkage). In particular, the value of the coefficient of determination will shrink relative to the original data.
To lessen the chance or amount of overfitting, several techniques are available (e.g., model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, or dropout). The basis of some techniques is to either (1) explicitly penalize overly complex models or (2) test the model's ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter.
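As a sketch of option (1), explicitly penalizing overly complex models, the following ridge (L2) penalty on polynomial coefficients shrinks the fitted parameters; the function name and the penalty weights are illustrative assumptions, not from the article:

```python
import numpy as np

def ridge_polyfit(x, y, degree, lam):
    """Polynomial least squares with an L2 penalty:
    minimizes ||Xw - y||^2 + lam * ||w||^2."""
    X = np.vander(x, degree + 1)
    # Closed-form ridge solution: w = (X^T X + lam * I)^(-1) X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(degree + 1), X.T @ y)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(10)

for lam in (1e-6, 1e-2, 1.0):
    w = ridge_polyfit(x, y, degree=9, lam=lam)
    print(f"lam={lam:g}  ||w|| = {np.linalg.norm(w):.3g}")
# Increasing lam shrinks the coefficients, trading training fit for smoothness.
```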
Statistical inference
In statistics, an inference is drawn from a statistical model, which has been selected via some procedure. Burnham & Anderson, in their much-cited text on model selection, argue that to avoid overfitting, we should adhere to the "Principle of Parsimony".
Overfitting is more likely to be a serious concern when there is little theory available to guide the analysis, in part because then there tend to be a large number of models to select from, a point also made in the book Model Selection and Model Averaging (2008).
Regression
In regression analysis, overfitting occurs frequently. As an extreme example, if there are p variables in a linear regression with p data points, the fitted line can go exactly through every point. For logistic regression or Cox proportional hazards models, there are a variety of rules of thumb (e.g. 5–9, 10 and 10–15 — the guideline of 10 observations per independent variable is known as the "one in ten rule"). In the process of regression model selection, the mean squared error of the random regression function can be split into random noise, approximation bias, and variance in the estimate of the regression function. The bias–variance tradeoff is often used to overcome overfit models.
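For reference, the split mentioned here is the standard bias–variance decomposition of the expected squared prediction error at a point x, where f̂ denotes the fitted regression function, f the true one, and σ² the variance of the irreducible noise:

E[(y − f̂(x))²] = σ² + (E[f̂(x)] − f(x))² + Var[f̂(x)]

The middle term is the squared approximation bias and the last term the variance of the estimate; the noise term σ² cannot be reduced by any choice of model.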
With a large set of explanatory variables that actually have no relation to the dependent variable being predicted, some variables will in general be falsely found to be statistically significant and the researcher may thus retain them in the model, thereby overfitting the model. This is known as Freedman's paradox.
Machine learning
Usually, a learning algorithm is trained using some set of "training data": exemplary situations for which the desired output is known. The goal is that the algorithm will also perform well on predicting the output when fed "validation data" that was not encountered during its training.
Overfitting is the use of models or procedures that violate Occam's razor, for example by including more adjustable parameters than are ultimately optimal, or by using a more complicated approach than is ultimately optimal. For an example where there are too many adjustable parameters, consider a dataset where the training data can be adequately predicted by a linear function of two independent variables. Such a function requires only three parameters (the intercept and two slopes). Replacing this simple function with a new, more complex quadratic function, or with a new, more complex linear function on more than two independent variables, carries a risk: Occam's razor implies that any given complex function is a priori less probable than any given simple function. If the new, more complicated function is selected instead of the simple function, and if there was not a large enough gain in training data fit to offset the complexity increase, then the new complex function "overfits" the data and the complex overfitted function will likely perform worse than the simpler function on validation data outside the training dataset, even though the complex function performed as well, or perhaps even better, on the training dataset.
When comparing different types of models, complexity cannot be measured solely by counting how many parameters exist in each model; the expressivity of each parameter must be considered as well. For example, it is nontrivial to directly compare the complexity of a neural net (which can track curvilinear relationships) with m parameters to a regression model with n parameters.
Overfitting is especially likely in cases where learning was performed too long or where training examples are rare, causing the learner to adjust to very specific random features of the training data that have no causal relation to the target function. In this process of overfitting, the performance on the training examples still increases while the performance on unseen data becomes worse.
As a simple example, consider a database of retail purchases that includes the item bought, the purchaser, and the date and time of purchase. It's easy to construct a model that will fit the training set perfectly by using the date and time of purchase to predict the other attributes, but this model will not generalize at all to new data because those past times will never occur again.
Generally, a learning algorithm is said to overfit relative to a simpler one if it is more accurate in fitting known data (hindsight) but less accurate in predicting new data (foresight). One can intuitively understand overfitting from the fact that information from all past experience can be divided into two groups: information that is relevant for the future, and irrelevant information ("noise"). Everything else being equal, the more difficult a criterion is to predict (i.e., the higher its uncertainty), the more noise exists in past information that needs to be ignored. The problem is determining which part to ignore. A learning algorithm that can reduce the risk of fitting noise is called "robust."
Consequences
The most obvious consequence of overfitting is poor performance on the validation dataset. Other negative consequences include:
A function that is overfitted is likely to request more information about each item in the validation dataset than does the optimal function; gathering this additional unneeded data can be expensive or error-prone, especially if each individual piece of information must be gathered by human observation and manual data entry.
A more complex, overfitted function is likely to be less portable than a simple one. At one extreme, a one-variable linear regression is so portable that, if necessary, it could even be done by hand. At the other extreme are models that can be reproduced only by exactly duplicating the original modeler's entire setup, making reuse or scientific reproduction difficult.
It may be possible to reconstruct details of individual training instances from an overfitted machine learning model's training set. This may be undesirable if, for example, the training data includes sensitive personally identifiable information (PII). This phenomenon also presents problems in the area of artificial intelligence and copyright, with the developers of some generative deep learning models such as Stable Diffusion and GitHub Copilot being sued for copyright infringement because these models have been found to be capable of reproducing certain copyrighted items from their training data.
Remedy
The optimal function usually needs verification on bigger or completely new datasets. There are, however, methods such as the minimum spanning tree or the life-time of correlation that exploit the dependence between correlation coefficients and the time-series window width. Whenever the window width is big enough, the correlation coefficients are stable and no longer depend on the window width. Therefore, a correlation matrix can be created by calculating a coefficient of correlation between investigated variables. This matrix can be represented topologically as a complex network where direct and indirect influences between variables are visualized.
Dropout regularisation (randomly omitting units or their inputs during training) can also improve robustness and therefore reduce over-fitting by probabilistically removing inputs to a layer.
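A minimal sketch of (inverted) dropout as just described; the drop probability and array shapes are illustrative choices, not from the article:

```python
import numpy as np

def dropout(activations, p_drop, rng, training=True):
    """Randomly zero a fraction p_drop of the inputs; rescale the rest so
    the expected activation is unchanged (inverted dropout)."""
    if not training or p_drop == 0.0:
        return activations
    keep = rng.random(activations.shape) >= p_drop
    return activations * keep / (1.0 - p_drop)

rng = np.random.default_rng(0)
layer_input = np.ones((4, 8))
print(dropout(layer_input, p_drop=0.5, rng=rng))  # roughly half zeros, rest scaled to 2.0
```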
Underfitting
Underfitting is the inverse of overfitting, meaning that the statistical model or machine learning algorithm is too simplistic to accurately capture the patterns in the data. A sign of underfitting is high bias and low variance in the current model or algorithm (the inverse of overfitting: low bias and high variance). This can be seen from the bias–variance tradeoff, the method of analyzing a model or algorithm for bias error, variance error, and irreducible error. With high bias and low variance, the model inaccurately represents the data points and is thus insufficiently able to predict future results (see Generalization error). As shown in Figure 5, a straight line cannot represent all the given data points because it does not follow their curvature; we would expect to see a parabola-shaped line, as shown in Figure 6 and Figure 1. An analysis based on Figure 5 would give false predictive results, contrary to one based on Figure 6.
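A small numerical sketch of this situation (the data and degrees are illustrative choices): a straight line fitted to parabola-shaped data leaves a large residual, while a quadratic fits it essentially exactly:

```python
import numpy as np

x = np.linspace(-1, 1, 50)
y = x ** 2                                   # parabola-shaped data, no noise

for degree in (1, 2):
    residual = y - np.polyval(np.polyfit(x, y, degree), x)
    print(degree, np.sqrt(np.mean(residual ** 2)))
# degree 1 leaves a large residual (high bias); degree 2 fits exactly, up to rounding.
```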
Resolving underfitting
There are multiple ways to deal with underfitting:
Increase the complexity of the model: If the model is too simple, it may be necessary to increase its complexity by adding more features, increasing the number of parameters, or using a more flexible model. However, this should be done carefully to avoid overfitting.
Use a different algorithm: If the current algorithm is not able to capture the patterns in the data, it may be necessary to try a different one. For example, a neural network may be more effective than a linear regression model for some types of data.
Increase the amount of training data: If the model is underfitting due to a lack of data, increasing the amount of training data may help. This will allow the model to better capture the underlying patterns in the data.
Regularization: Regularization is a technique used to prevent overfitting by adding a penalty term to the loss function that discourages large parameter values. It can also be used to prevent underfitting by controlling the complexity of the model.
Ensemble Methods: Ensemble methods combine multiple models to create a more accurate prediction. This can help reduce underfitting by allowing multiple models to work together to capture the underlying patterns in the data.
Feature engineering: Feature engineering involves creating new model features from the existing ones that may be more relevant to the problem at hand. This can help improve the accuracy of the model and prevent underfitting.
Benign overfitting
Benign overfitting describes the phenomenon of a statistical model that seems to generalize well to unseen data, even when it has been fit perfectly on noisy training data (i.e., obtains perfect predictive accuracy on the training set). The phenomenon is of particular interest in deep neural networks, but is studied from a theoretical perspective in the context of much simpler models, such as linear regression. In particular, it has been shown that overparameterization is essential for benign overfitting in this setting. In other words, the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size.
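A toy sketch of this overparameterized setting (the dimensions, noise level, and sparsity below are illustrative assumptions, not from the cited work): the minimum-norm least-squares solution interpolates noisy training data perfectly, and its test error can then be inspected:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 20, 200         # parameters far exceed observations
X = rng.standard_normal((n_samples, n_features))
w_true = np.zeros(n_features)
w_true[:5] = 1.0                        # only a few directions matter for prediction
y = X @ w_true + 0.1 * rng.standard_normal(n_samples)

w_hat = np.linalg.pinv(X) @ y           # minimum-norm interpolating solution
print(np.max(np.abs(X @ w_hat - y)))    # ~0: noisy training data fit perfectly

X_test = rng.standard_normal((1000, n_features))
test_mse = np.mean((X_test @ w_hat - X_test @ w_true) ** 2)
print(test_mse)  # benign overfitting occurs when this stays small despite the perfect fit
```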
See also
Bias–variance tradeoff
Curve fitting
Data dredging
Feature selection
Feature engineering
Freedman's paradox
Generalization error
Goodness of fit
Life-time of correlation
Model selection
Researcher degrees of freedom
Occam's razor
Primary model
Vapnik–Chervonenkis dimension – larger VC dimension implies larger risk of overfitting
Notes
References
Tip 7: Minimize overfitting.
Further reading
External links
The Problem of Overfitting Data – Stony Brook University
What is "overfitting," exactly? – Andrew Gelman blog
CSE546: Linear Regression Bias / Variance Tradeoff – University of Washington
What is Underfitting – IBM
Curve fitting
Applied mathematics
Mathematical modeling
Statistical inference
Machine learning | Overfitting | [
"Mathematics",
"Engineering"
] | 2,791 | [
"Artificial intelligence engineering",
"Applied mathematics",
"Mathematical modeling",
"Machine learning"
] |
173,354 | https://en.wikipedia.org/wiki/Automation | Automation describes a wide range of technologies that reduce human intervention in processes, mainly by predetermining decision criteria, subprocess relationships, and related actions, as well as embodying those predeterminations in machines. Automation has been achieved by various means including mechanical, hydraulic, pneumatic, electrical, electronic devices, and computers, usually in combination. Complicated systems, such as modern factories, airplanes, and ships typically use combinations of all of these techniques. The benefit of automation includes labor savings, reducing waste, savings in electricity costs, savings in material costs, and improvements to quality, accuracy, and precision.
Automation includes the use of various equipment and control systems such as machinery, processes in factories, boilers, and heat-treating ovens, switching on telephone networks, steering, stabilization of ships, aircraft and other applications and vehicles with reduced human intervention. Examples range from a household thermostat controlling a boiler to a large industrial control system with tens of thousands of input measurements and output control signals. Automation has also found a home in the banking industry. It can range from simple on-off control to multi-variable high-level algorithms in terms of control complexity.
In the simplest type of an automatic control loop, a controller compares a measured value of a process with a desired set value and processes the resulting error signal to change some input to the process, in such a way that the process stays at its set point despite disturbances. This closed-loop control is an application of negative feedback to a system. The mathematical basis of control theory was begun in the 18th century and advanced rapidly in the 20th. The term automation, inspired by the earlier word automatic (coming from automaton), was not widely used before 1947, when Ford established an automation department. It was during this time that the industry was rapidly adopting feedback controllers, which were introduced in the 1930s.
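As an illustrative sketch of such a loop (the set point, gain, and process model are arbitrary choices, not from the source), a proportional controller repeatedly acts on the error signal:

```python
# Illustrative proportional (P-only) feedback loop: the controller compares
# the measured process value with the set point and acts on the error.
SETPOINT = 70.0   # desired value (e.g., temperature); illustrative
GAIN = 0.8        # proportional controller gain; illustrative
AMBIENT = 20.0

process_value = AMBIENT
for _ in range(50):
    error = SETPOINT - process_value          # compare measurement to set point
    control_output = GAIN * error             # negative feedback action
    # Toy first-order process: control input raises the value, losses pull it
    # back toward ambient.
    process_value += 0.1 * control_output - 0.02 * (process_value - AMBIENT)

# Settles near 60, not 70: P-only control leaves a steady-state offset,
# one reason integral action is added in practical controllers.
print(f"process value after 50 steps: {process_value:.1f}")
```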
The World Bank's World Development Report of 2019 shows evidence that the new industries and jobs in the technology sector outweigh the economic effects of workers being displaced by automation. Job losses and downward mobility blamed on automation have been cited as one of many factors in the resurgence of nationalist, protectionist and populist politics in the US, UK and France, among other countries since the 2010s.
History
Early history
It was a preoccupation of the Greeks and Arabs (in the period between about 300 BC and about 1200 AD) to keep accurate track of time. In Ptolemaic Egypt, about 270 BC, Ctesibius described a float regulator for a water clock, a device not unlike the ball and cock in a modern flush toilet. This was the earliest feedback-controlled mechanism. The appearance of the mechanical clock in the 14th century made the water clock and its feedback control system obsolete.
The Persian Banū Mūsā brothers, in their Book of Ingenious Devices (850 AD), described a number of automatic controls. Two-step level controls for fluids, a form of discontinuous variable structure controls, were developed by the Banu Musa brothers. They also described a feedback controller. The design of feedback control systems up through the Industrial Revolution was by trial-and-error, together with a great deal of engineering intuition. It was not until the mid-19th century that the stability of feedback control systems was analyzed using mathematics, the formal language of automatic control theory.
The centrifugal governor was invented by Christiaan Huygens in the seventeenth century, and used to adjust the gap between millstones.
Industrial Revolution in Western Europe
The introduction of prime movers, or self-driven machines, such as advanced grain mills, furnaces, boilers, and the steam engine, created a new requirement for automatic control systems including temperature regulators (invented in 1624; see Cornelius Drebbel), pressure regulators (1681), float regulators (1700), and speed control devices. Another control mechanism was used to turn the sails of windmills into the wind; it was patented by Edmund Lee in 1745. Also in 1745, Jacques de Vaucanson invented the first automated loom. Around 1800, Joseph Marie Jacquard created a punch-card system to program looms.
In 1771 Richard Arkwright invented the first fully automated spinning mill driven by water power, known at the time as the water frame. An automatic flour mill was developed by Oliver Evans in 1785, making it the first completely automated industrial process.
A centrifugal governor was used by Mr. Bunce of England in 1784 as part of a model steam crane. The centrifugal governor was adopted by James Watt for use on a steam engine in 1788 after Watt's partner Boulton saw one at a flour mill Boulton & Watt were building. The governor could not actually hold a set speed; the engine would assume a new constant speed in response to load changes. The governor was able to handle smaller variations such as those caused by fluctuating heat load to the boiler. Also, there was a tendency for oscillation whenever there was a speed change. As a consequence, engines equipped with this governor were not suitable for operations requiring constant speed, such as cotton spinning.
Several improvements to the governor, plus improvements to valve cut-off timing on the steam engine, made the engine suitable for most industrial uses before the end of the 19th century. Advances in the steam engine stayed well ahead of science, both thermodynamics and control theory. The governor received relatively little scientific attention until James Clerk Maxwell published a paper that established the beginning of a theoretical basis for understanding control theory.
20th century
Relay logic was introduced with factory electrification, which underwent rapid adaption from 1900 through the 1920s. Central electric power stations were also undergoing rapid growth and the operation of new high-pressure boilers, steam turbines and electrical substations created a large demand for instruments and controls. Central control rooms became common in the 1920s, but as late as the early 1930s, most process controls were on-off. Operators typically monitored charts drawn by recorders that plotted data from instruments. To make corrections, operators manually opened or closed valves or turned switches on or off. Control rooms also used color-coded lights to send signals to workers in the plant to manually make certain changes.
The development of the electronic amplifier during the 1920s, which was important for long-distance telephony, required a higher signal-to-noise ratio, which was solved by negative feedback noise cancellation. This and other telephony applications contributed to the control theory. In the 1940s and 1950s, German mathematician Irmgard Flügge-Lotz developed the theory of discontinuous automatic controls, which found military applications during the Second World War to fire control systems and aircraft navigation systems.
Controllers, which were able to make calculated changes in response to deviations from a set point rather than on-off control, began being introduced in the 1930s. Controllers allowed manufacturing to continue showing productivity gains to offset the declining influence of factory electrification.
Factory productivity was greatly increased by electrification in the 1920s. U.S. manufacturing productivity growth fell from 5.2%/yr 1919–29 to 2.76%/yr 1929–41. Alexander Field notes that spending on non-medical instruments increased significantly from 1929 to 1933 and remained strong thereafter.
The First and Second World Wars saw major advancements in the field of mass communication and signal processing. Other key advances in automatic controls include differential equations, stability theory and system theory (1938), frequency domain analysis (1940), ship control (1950), and stochastic analysis (1941).
Starting in 1958, various systems based on solid-state digital logic modules for hard-wired programmed logic controllers (the predecessors of programmable logic controllers [PLC]) emerged to replace electro-mechanical relay logic in industrial control systems for process control and automation, including early Telefunken/AEG Logistat, Siemens Simatic, Philips/Mullard Norbit, BBC Sigmatronic, ACEC Logacec, Estacord, Krone Mibakron, Bistat, Datapac, Norlog, SSR, and Procontic systems.
In 1959 Texaco's Port Arthur Refinery became the first chemical plant to use digital control.
Conversion of factories to digital control began to spread rapidly in the 1970s as the price of computer hardware fell.
Significant applications
The automatic telephone switchboard was introduced in 1892 along with dial telephones. By 1929, 31.9% of the Bell system was automatic. Automatic telephone switching originally used vacuum tube amplifiers and electro-mechanical switches, which consumed a large amount of electricity. Call volume eventually grew so fast that it was feared the telephone system would consume all electricity production, prompting Bell Labs to begin research on the transistor.
The logic performed by telephone switching relays was the inspiration for the digital computer.
The first commercially successful glass bottle-blowing machine was an automatic model introduced in 1905. The machine, operated by a two-man crew working 12-hour shifts, could produce 17,280 bottles in 24 hours, compared to 2,880 bottles made by a crew of six men and boys working in a shop for a day. The cost of making bottles by machine was 10 to 12 cents per gross compared to $1.80 per gross by the manual glassblowers and helpers.
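As a rough worked comparison of these figures (assuming the machine's 24-hour output required two two-man shifts, i.e. four worker-days in total):

17,280 bottles / (2 workers × 2 shifts) = 4,320 bottles per worker-day
2,880 bottles / 6 workers = 480 bottles per worker-day
4,320 / 480 = 9, roughly a nine-fold productivity gain

$1.80 / ($0.10–$0.12) ≈ 15–18, the approximate cost reduction per gross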
Sectional electric drives were developed using control theory. Sectional electric drives are used on different sections of a machine where a precise differential must be maintained between the sections. In steel rolling, the metal elongates as it passes through pairs of rollers, which must run at successively faster speeds. In paper making, the sheet shrinks as it passes around steam-heated drying cylinders arranged in groups, which must run at successively slower speeds. The first application of a sectional electric drive was on a paper machine in 1919. One of the most important developments in the steel industry during the 20th century was continuous wide strip rolling, developed by Armco in 1928.
Before automation, many chemicals were made in batches. In 1930, with the widespread use of instruments and the emerging use of controllers, the founder of Dow Chemical Co. was advocating continuous production.
Self-acting machine tools that displaced hand dexterity so they could be operated by boys and unskilled laborers were developed by James Nasmyth in the 1840s. Machine tools were automated with Numerical control (NC) using punched paper tape in the 1950s. This soon evolved into computerized numerical control (CNC).
Today extensive automation is practiced in practically every type of manufacturing and assembly process. Some of the larger processes include electrical power generation, oil refining, chemicals, steel mills, plastics, cement plants, fertilizer plants, pulp and paper mills, automobile and truck assembly, aircraft production, glass manufacturing, natural gas separation plants, food and beverage processing, canning and bottling and manufacture of various kinds of parts. Robots are especially useful in hazardous applications like automobile spray painting. Robots are also used to assemble electronic circuit boards. Automotive welding is done with robots and automatic welders are used in applications like pipelines.
Space/computer age
With the advent of the space age in 1957, controls design, particularly in the United States, turned away from the frequency-domain techniques of classical control theory and back toward the differential equation techniques of the late 19th century, which were couched in the time domain. During the 1940s and 1950s, German mathematician Irmgard Flügge-Lotz developed the theory of discontinuous automatic control, which became widely used in hysteresis control systems such as navigation systems, fire-control systems, and electronics. Through Flügge-Lotz and others, the modern era saw time-domain design for nonlinear systems (1961), navigation (1960), optimal control and estimation theory (1962), nonlinear control theory (1969), digital control and filtering theory (1974), and the personal computer (1983).
Advantages, disadvantages, and limitations
Perhaps the most cited advantage of automation in industry is that it is associated with faster production and cheaper labor costs. Another benefit is that it replaces hard, physical, or monotonous work. Additionally, tasks that take place in hazardous environments or that are otherwise beyond human capabilities can be done by machines, as machines can operate even under extreme temperatures or in atmospheres that are radioactive or toxic. They can also be maintained with simple quality checks. However, at present, not all tasks can be automated, and some tasks are more expensive to automate than others. Initial costs of installing the machinery in factory settings are high, and failure to maintain a system could result in the loss of the product itself.
Moreover, some studies seem to indicate that industrial automation could impose ill effects beyond operational concerns, including worker displacement due to systemic loss of employment and compounded environmental damage; however, these findings are both convoluted and controversial in nature, and could potentially be circumvented.
The main advantages of automation are:
Increased throughput or productivity
Improved quality
Increased predictability
Improved robustness (consistency), of processes or product
Increased consistency of output
Reduced direct human labor costs and expenses
Reduced cycle time
Increased accuracy
Relieving humans of monotonously repetitive work
Required work in development, deployment, maintenance, and operation of automated processes — often structured as "jobs"
Increased human freedom to do other things
Automation primarily describes machines replacing human action, but it is also loosely associated with mechanization, machines replacing human labor. Coupled with mechanization, extending human capabilities in terms of size, strength, speed, endurance, visual range & acuity, hearing frequency & precision, electromagnetic sensing & effecting, etc., advantages include:
Relieving humans of dangerous work stresses and occupational injuries (e.g., fewer strained backs from lifting heavy objects)
Removing humans from dangerous environments (e.g. fire, space, volcanoes, nuclear facilities, underwater, etc.)
The main disadvantages of automation are:
High initial cost
Faster production without human intervention can mean faster unchecked production of defects where automated processes are defective.
Scaled-up capacities can mean scaled-up problems when systems fail — releasing dangerous toxins, forces, energies, etc., at scaled-up rates.
Human adaptiveness is often poorly understood by automation initiators. It is often difficult to anticipate every contingency and develop fully preplanned automated responses for every situation. The discoveries inherent in automating processes can require unanticipated iterations to resolve, causing unanticipated costs and delays.
People anticipating employment income may be seriously disrupted by others deploying automation where no similar income is readily available.
Paradox of automation
The paradox of automation says that the more efficient the automated system, the more crucial the human contribution of the operators. Humans are less involved, but their involvement becomes more critical. Lisanne Bainbridge, a cognitive psychologist, identified these issues notably in her widely cited paper "Ironies of Automation." If an automated system has an error, it will multiply that error until it is fixed or shut down. This is where human operators come in. A fatal example of this was Air France Flight 447, where a failure of automation put the pilots into a manual situation they were not prepared for.
Limitations
Current technology is unable to automate all the desired tasks.
Many operations using automation have large amounts of invested capital and produce high volumes of products, making malfunctions extremely costly and potentially hazardous. Therefore, some personnel are needed to ensure that the entire system functions properly and that safety and product quality are maintained.
As a process becomes increasingly automated, there is less and less labor to be saved or quality improvement to be gained. This is an example of both diminishing returns and the logistic function.
As more and more processes become automated, there are fewer remaining non-automated processes. This is an example of the exhaustion of opportunities. New technological paradigms may, however, set new limits that surpass the previous limits.
Current limitations
Many roles for humans in industrial processes presently lie beyond the scope of automation. Human-level pattern recognition, language comprehension, and language production ability are well beyond the capabilities of modern mechanical and computer systems (but see Watson computer). Tasks requiring subjective assessment or synthesis of complex sensory data, such as scents and sounds, as well as high-level tasks such as strategic planning, currently require human expertise. In many cases, the use of humans is more cost-effective than mechanical approaches even where the automation of industrial tasks is possible. Therefore, algorithmic management as the digital rationalization of human labor instead of its substitution has emerged as an alternative technological strategy. Overcoming these obstacles is a theorized path to post-scarcity economics.
Societal impact and unemployment
Increased automation often causes workers to feel anxious about losing their jobs as technology renders their skills or experience unnecessary. Early in the Industrial Revolution, when inventions like the steam engine were making some job categories expendable, workers forcefully resisted these changes. Luddites, for instance, were English textile workers who protested the introduction of weaving machines by destroying them. More recently, some residents of Chandler, Arizona, have slashed tires and thrown rocks at self-driving cars in protest over the cars' perceived threat to human safety and job prospects.
The relative anxiety about automation reflected in opinion polls seems to correlate closely with the strength of organized labor in that region or nation. For example, while a study by the Pew Research Center indicated that 72% of Americans are worried about increasing automation in the workplace, 80% of Swedes see automation and artificial intelligence (AI) as a good thing, due to the country's still-powerful unions and a more robust national safety net.
According to one estimate, 47% of all current jobs in the US have the potential to be fully automated by 2033. Furthermore, wages and educational attainment appear to be strongly negatively correlated with an occupation's risk of being automated. Erik Brynjolfsson and Andrew McAfee argue that "there's never been a better time to be a worker with special skills or the right education, because these people can use technology to create and capture value. However, there's never been a worse time to be a worker with only 'ordinary' skills and abilities to offer, because computers, robots, and other digital technologies are acquiring these skills and abilities at an extraordinary rate." Others, however, argue that highly skilled professional jobs, such as those of lawyers, doctors, engineers, and journalists, are also at risk of automation.
According to a 2020 study in the Journal of Political Economy, automation has robust negative effects on employment and wages: "One more robot per thousand workers reduces the employment-to-population ratio by 0.2 percentage points and wages by 0.42%." A 2025 study in the American Economic Journal found that the introduction of industrial robots between 1993 and 2014 reduced the employment of men and women by 3.7 and 1.6 percentage points, respectively.
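As a back-of-the-envelope illustration of how such per-robot coefficients are applied, the following sketch plugs the 2020 study's figures into an invented regional labor market; the workforce, population, and robot counts are hypothetical assumptions, not data.

```python
# Illustrative use of the 2020 JPE estimates: a 0.2 percentage-point
# drop in the employment-to-population ratio and a 0.42% wage drop
# per additional robot per thousand workers. All regional figures
# below are hypothetical.
workers = 500_000          # hypothetical local workforce
population = 800_000       # hypothetical working-age population
new_robots = 1_000         # hypothetical robots added

robots_per_thousand = new_robots / (workers / 1_000)

emp_pop_drop_pp = 0.2 * robots_per_thousand   # percentage points
wage_drop_pct = 0.42 * robots_per_thousand    # percent

jobs_lost = population * emp_pop_drop_pp / 100

print(f"{robots_per_thousand:.1f} robots per 1,000 workers")
print(f"employment-to-population falls {emp_pop_drop_pp:.1f} pp "
      f"(about {jobs_lost:,.0f} jobs); wages fall {wage_drop_pct:.2f}%")
```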
Research by Carl Benedikt Frey and Michael Osborne of the Oxford Martin School argued that employees engaged in "tasks following well-defined procedures that can easily be performed by sophisticated algorithms" are at risk of displacement, and that 47% of jobs in the US were at risk. The study, released as a working paper in 2013 and published in 2017, predicted that automation would put low-paid physical occupations most at risk, basing its ratings on a survey of colleagues' subjective opinions. However, according to a study published in McKinsey Quarterly in 2015, the impact of computerization in most cases is not the replacement of employees but the automation of portions of the tasks they perform. The methodology of the McKinsey study has been heavily criticized for being opaque and relying on subjective assessments. The methodology of Frey and Osborne has likewise been criticized as lacking evidence, historical awareness, and methodological credibility. Additionally, the Organisation for Economic Co-operation and Development (OECD) found that across the 21 OECD countries studied, 9% of jobs are automatable.
Based on a formula by Gilles Saint-Paul, an economist at Toulouse 1 University, the demand for unskilled human capital declines at a slower rate than the demand for skilled human capital increases. In the long run and for society as a whole, automation has led to cheaper products, lower average work hours, and new industries forming (i.e., robotics industries, computer industries, design industries). These new industries provide many high-salary, skill-based jobs to the economy. By one estimate, between 3 and 14 percent of the global workforce will be forced to switch job categories by 2030 because automation will eliminate jobs in entire sectors. While the number of jobs lost to automation is often offset by jobs gained from technological advances, the jobs lost are not of the same type as those created, which leads to increasing unemployment in the lower-middle class. This occurs largely in the US and other developed countries, where technological advances contribute to higher demand for highly skilled labor while demand for middle-wage labor continues to fall. Economists call this trend "income polarization": unskilled labor wages are driven down while skilled labor wages are driven up, and it is predicted to continue in developed economies.
Lights-out manufacturing
Lights-out manufacturing is a production system with no human workers, intended to eliminate labor costs. It grew in popularity in the U.S. after General Motors in 1982 implemented a "hands-off" manufacturing strategy to "replace risk-averse bureaucracy with automation and robots". However, the factory never reached full "lights out" status.
The expansion of lights out manufacturing requires:
Reliability of equipment
Long-term mechanic capabilities
Planned preventive maintenance
Commitment from the staff
Health and environment
The environmental costs of automation differ depending on the technology, product, or engine automated. Some automated engines consume more energy and resources than the engines they replace, while others consume less. Hazardous operations, such as oil refining, the manufacturing of industrial chemicals, and all forms of metalworking, were always early contenders for automation.
The automation of vehicles could prove to have a substantial impact on the environment, although the nature of this impact could be beneficial or harmful depending on several factors. Because automated vehicles are much less likely to get into accidents compared to human-driven vehicles, some precautions built into current models (such as anti-lock brakes or laminated glass) would not be required for self-driving versions. Removal of these safety features reduces the weight of the vehicle, and coupled with more precise acceleration and braking, as well as fuel-efficient route mapping, can increase fuel economy and reduce emissions. Despite this, some researchers theorize that an increase in the production of self-driving cars could lead to a boom in vehicle ownership and usage, which could potentially negate any environmental benefits of self-driving cars if they are used more frequently.
Automation of homes and home appliances is also thought to impact the environment. A study of energy consumption of automated homes in Finland showed that smart homes could reduce energy consumption by monitoring levels of consumption in different areas of the home and adjusting consumption to reduce energy leaks (e.g. automatically reducing consumption during the nighttime when activity is low). This study, along with others, indicated that the smart home's ability to monitor and adjust consumption levels would reduce unnecessary energy usage. However, some research suggests that smart homes might not be as efficient as non-automated homes. A more recent study has indicated that, while monitoring and adjusting consumption levels do decrease unnecessary energy use, this process requires monitoring systems that also consume an amount of energy. The energy required to run these systems sometimes negates their benefits, resulting in little to no ecological benefit.
Convertibility and turnaround time
Another major shift in automation is the increased demand for flexibility and convertibility in manufacturing processes. Manufacturers are increasingly demanding the ability to easily switch from manufacturing Product A to manufacturing Product B without having to completely rebuild the production lines. Flexibility and distributed processes have led to the introduction of Automated Guided Vehicles with Natural Features Navigation.
Digital electronics helped too. Former analog-based instrumentation was replaced by digital equivalents which can be more accurate and flexible, and offer greater scope for more sophisticated configuration, parametrization, and operation. This was accompanied by the fieldbus revolution which provided a networked (i.e. a single cable) means of communicating between control systems and field-level instrumentation, eliminating hard-wiring.
Discrete manufacturing plants adopted these technologies quickly. The more conservative process industries, with their longer plant life cycles, have been slower to adopt them, and analog-based measurement and control still dominate there. The growing use of Industrial Ethernet on the factory floor is pushing these trends still further, enabling manufacturing plants to be integrated more tightly within the enterprise, via the internet if necessary. Global competition has also increased demand for Reconfigurable Manufacturing Systems.
Automation tools
Engineers can now have numerical control over automated devices. The result has been a rapidly expanding range of applications and human activities. Computer-aided technologies (or CAx) now serve as the basis for mathematical and organizational tools used to create complex systems. Notable examples of CAx include computer-aided design (CAD software) and computer-aided manufacturing (CAM software). The improved design, analysis, and manufacture of products enabled by CAx has been beneficial for industry.
Information technology, together with industrial machinery and processes, can assist in the design, implementation, and monitoring of control systems. One example of an industrial control system is a programmable logic controller (PLC). PLCs are specialized hardened computers which are frequently used to synchronize the flow of inputs from (physical) sensors and events with the flow of outputs to actuators and events.
Human-machine interfaces (HMI) or computer human interfaces (CHI), formerly known as man-machine interfaces, are usually employed to communicate with PLCs and other computers. Service personnel who monitor and control through HMIs can be called by different names. In the industrial process and manufacturing environments, they are called operators or something similar. In boiler houses and central utility departments, they are called stationary engineers.
Different types of automation tools exist:
ANN – Artificial neural network
DCS – Distributed control system
HMI – Human machine interface
RPA – Robotic process automation
SCADA – Supervisory control and data acquisition
PLC – Programmable logic controller
Instrumentation
Motion control
Robotics
Host simulation software (HSS) is a commonly used testing tool for equipment software. HSS is used to test equipment performance against factory automation standards (timeouts, response time, processing time).
Cognitive automation
Cognitive automation, as a subset of AI, is an emerging genus of automation enabled by cognitive computing. Its primary concern is the automation of clerical tasks and workflows that consist of structuring unstructured data. Cognitive automation relies on multiple disciplines: natural language processing, real-time computing, machine learning algorithms, big data analytics, and evidence-based learning.
According to Deloitte, cognitive automation enables the replication of human tasks and judgment "at rapid speeds and considerable scale." Such tasks include:
Document redaction
Data extraction and document synthesis / reporting
Contract management
Natural language search
Customer, employee, and stakeholder onboarding
Manual activities and verifications
Follow-up and email communications
Recent and emerging applications
CAD AI
Artificially intelligent computer-aided design (CAD) can use text-to-3D, image-to-3D, and video-to-3D methods to automate 3D modeling. AI CAD libraries could also be developed using linked open data of schematics and diagrams. AI CAD assistants are used as tools to help streamline workflows.
Automated power production
Technologies like solar panels, wind turbines, and other renewable energy sources—together with smart grids, micro-grids, battery storage—can automate power production.
Agricultural production
Many agricultural operations are automated with machinery and equipment that improve diagnosis, decision-making, and execution. Agricultural automation can relieve the drudgery of agricultural work, improve the timeliness and precision of agricultural operations, raise productivity and resource-use efficiency, build resilience, and improve food quality and safety. Increased productivity can free up labour, allowing agricultural households to spend more time elsewhere.
The technological evolution in agriculture has resulted in progressive shifts to digital equipment and robotics. Motorized mechanization using engine power automates the performance of agricultural operations such as ploughing and milking. With digital automation technologies, it also becomes possible to automate the diagnosis and decision-making of agricultural operations. For example, autonomous crop robots can harvest and seed crops, while drones can gather information to help automate input application. Precision agriculture often employs such automation technologies.
Motorized mechanization has generally increased in recent years. Sub-Saharan Africa is the only region where the adoption of motorized mechanization has stalled over the past decades.
Automation technologies are increasingly used for managing livestock, though evidence on adoption is lacking. Global automatic milking system sales have increased over recent years, but adoption is likely mostly in Northern Europe, and likely almost absent in low- and middle-income countries. Automated feeding machines for both cows and poultry also exist, but data and evidence regarding their adoption trends and drivers is likewise scarce.
Retail
Many supermarkets and even smaller stores are rapidly introducing self-checkout systems reducing the need for employing checkout workers. In the U.S., the retail industry employs 15.9 million people as of 2017 (around 1 in 9 Americans in the workforce). Globally, an estimated 192 million workers could be affected by automation according to research by Eurasia Group.
Online shopping could be considered a form of automated retail, as payment and checkout occur through an automated online transaction processing system, with the share of online retail jumping from 5.1% in 2011 to 8.3% in 2016. Two-thirds of books, music, and films are now purchased online. In addition, automation and online shopping could reduce demand for shopping malls and retail property, which in the United States is currently estimated to account for 31% of all commercial property. Amazon has gained much of the growth in recent years for online shopping, accounting for half of the growth in online retail in 2016. Other forms of automation can also be an integral part of online shopping, for example, the deployment of automated warehouse robotics such as that applied by Amazon using Kiva Systems.
Food and drink
The food retail industry has started to apply automation to the ordering process; McDonald's has introduced touch screen ordering and payment systems in many of its restaurants, reducing the need for as many cashier employees. The University of Texas at Austin has introduced fully automated cafe retail locations. Some cafes and restaurants have used mobile and tablet apps to make the ordering process more efficient, with customers ordering and paying on their own devices. Some restaurants have automated food delivery to customers' tables using a conveyor belt system. Robots are sometimes employed to replace waiting staff.
Construction
Automation in construction is the combination of methods, processes, and systems that allow for greater machine autonomy in construction activities. Construction automation may have multiple goals, including but not limited to, reducing jobsite injuries, decreasing activity completion times, and assisting with quality control and quality assurance.
Mining
Automated mining involves the removal of human labor from the mining process. The mining industry is currently in transition towards automation. It can still require a large amount of human capital, particularly in the third world, where labor costs are low and there is therefore less incentive to increase efficiency through automation.
Video surveillance
The Defense Advanced Research Projects Agency (DARPA) sponsored research and development for the visual surveillance and monitoring (VSAM) program between 1997 and 1999, and for airborne video surveillance (AVS) programs from 1998 to 2002. Currently, there is a major effort underway in the vision community to develop a fully automated tracking surveillance system. Automated video surveillance monitors people and vehicles in real time within a busy environment. Existing automated surveillance systems differ in the environment they are primarily designed to observe (indoor, outdoor, or airborne), the number of sensors the system can handle, and the mobility of those sensors (stationary versus mobile cameras). The purpose of a surveillance system is to record the properties and trajectories of objects in a given area and to generate warnings or notify the designated authorities when particular events occur.
Highway systems
As demands for safety and mobility have grown and technological possibilities have multiplied, interest in automation has grown. Seeking to accelerate the development and introduction of fully automated vehicles and highways, the U.S. Congress authorized more than $650 million over six years for intelligent transport systems (ITS) and demonstration projects in the 1991 Intermodal Surface Transportation Efficiency Act (ISTEA). Congress legislated in ISTEA that: "[T]he Secretary of Transportation shall develop an automated highway and vehicle prototype from which future fully automated intelligent vehicle-highway systems can be developed. Such development shall include research in human factors to ensure the success of the man-machine relationship. The goal of this program is to have the first fully automated highway roadway or an automated test track in operation by 1997. This system shall accommodate the installation of equipment in new and existing motor vehicles." Full automation is commonly defined as requiring no control, or very limited control, by the driver; such automation would be accomplished through a combination of sensor, computer, and communications systems in vehicles and along the roadway. Fully automated driving would, in theory, allow closer vehicle spacing and higher speeds, which could enhance traffic capacity in places where additional road building is physically impossible, politically unacceptable, or prohibitively expensive. Automated controls also might enhance road safety by reducing the opportunity for driver error, which causes a large share of motor vehicle crashes. Other potential benefits include improved air quality (as a result of more-efficient traffic flows), increased fuel economy, and spin-off technologies generated during research and development related to automated highway systems.
Waste management
Automated waste collection trucks reduce the need for as many workers and ease the labor required to provide the service.
Business process
Business process automation (BPA) is the technology-enabled automation of complex business processes. It can help to streamline a business for simplicity, achieve digital transformation, increase service quality, improve service delivery or contain costs. BPA consists of integrating applications, restructuring labor resources and using software applications throughout the organization. Robotic process automation (RPA; or RPAAI for self-guided RPA 2.0) is an emerging field within BPA and uses AI. BPAs can be implemented in a number of business areas including marketing, sales and workflow.
Home
Home automation (also called domotics) designates the increasing automation of household appliances and features in residential dwellings, particularly through electronic means that make feasible things that were impracticable, overly expensive, or simply not possible in past decades. The rising use of home automation solutions reflects people's growing reliance on them, and the added comfort they provide is considerable.
Laboratory
Automation is essential for many scientific and clinical applications. Therefore, automation has been extensively employed in laboratories. Fully automated laboratories have been in operation since as early as 1980. However, automation has not become widespread in laboratories due to its high cost. This may change with the ability to integrate low-cost devices with standard laboratory equipment. Autosamplers are common devices used in laboratory automation.
Logistics automation
Logistics automation is the application of computer software or automated machinery to improve the efficiency of logistics operations. Typically this refers to operations within a warehouse or distribution center, with broader tasks undertaken by supply chain engineering systems and enterprise resource planning systems.
Industrial automation
Industrial automation deals primarily with the automation of manufacturing, quality control, and material handling processes. General-purpose controllers for industrial processes include programmable logic controllers, stand-alone I/O modules, and computers. Industrial automation replaces human action and manual command-response activities with mechanized equipment and logical programming commands. One trend is the increased use of machine vision to provide automatic inspection and robot guidance functions; another is a continuing increase in the use of robots.
Industrial Automation and Industry 4.0
The rise of industrial automation is directly tied to the "Fourth Industrial Revolution", which is better known now as Industry 4.0. Originating from Germany, Industry 4.0 encompasses numerous devices, concepts, and machines, as well as the advancement of the industrial internet of things (IIoT). An "Internet of Things is a seamless integration of diverse physical objects in the Internet through a virtual representation." These revolutionary advancements have drawn attention to the world of automation in an entirely new light and shown ways for it to grow, increasing productivity and efficiency in machinery and manufacturing facilities. Industry 4.0 works with the IIoT and with software and hardware that connect through communication technologies to enhance and improve manufacturing processes. These new technologies make it possible to create smarter, safer, and more advanced manufacturing, opening up a manufacturing platform that is more reliable, consistent, and efficient than before. The implementation of systems such as SCADA, supervisory control and data acquisition software, is one example of software used in industrial automation today. Industry 4.0 covers many areas in manufacturing and will continue to do so as time goes on.
Industrial robotics
Industrial robotics is a sub-branch of industrial automation that aids in various manufacturing processes, including machining, welding, painting, assembling, and material handling, to name a few. Industrial robots use various mechanical, electrical, and software systems to achieve precision, accuracy, and speed that far exceed any human performance. The birth of industrial robots came shortly after World War II, as the U.S. saw the need for a quicker way to produce industrial and consumer goods. Servos, digital logic, and solid-state electronics allowed engineers to build better and faster systems, and over time these systems were improved and revised to the point where a single robot is capable of running 24 hours a day with little or no maintenance. In 1997, there were 700,000 industrial robots in use; the number had risen to 1.8 million by 2017. In recent years, AI combined with robotics has also been used to create automatic labeling solutions, using robotic arms as automatic label applicators and AI for learning and detecting the products to be labelled.
Programmable Logic Controllers
Industrial automation incorporates programmable logic controllers in the manufacturing process. Programmable logic controllers (PLCs) use a processing system which allows for variation of controls of inputs and outputs using simple programming. PLCs make use of programmable memory, storing instructions and functions like logic, sequencing, timing, counting, etc. Using a logic-based language, a PLC can receive a variety of inputs and return a variety of logical outputs, the input devices being sensors and the output devices being motors, valves, etc. PLCs are similar to computers; however, while computers are optimized for calculations, PLCs are optimized for control tasks and for use in industrial environments. They are built so that only basic logic-based programming knowledge is needed and so that they can handle vibrations, high temperatures, humidity, and noise. The greatest advantage PLCs offer is their flexibility. With the same basic controllers, a PLC can operate a range of different control systems. PLCs make it unnecessary to rewire a system to change the control system. This flexibility leads to a cost-effective system for complex and varied control systems.
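The scan-based behavior described above can be sketched in ordinary code. The following is a minimal illustration of the read-evaluate-write cycle, not vendor PLC code; the I/O names (run_cmd, guard_closed, estop) are hypothetical.

```python
# A minimal sketch of a PLC-style scan cycle with hypothetical I/O
# names; a real PLC runs ladder logic or another IEC 61131-3
# language on hardened hardware, but the loop structure is the idea.
import time

inputs = {"run_cmd": True, "guard_closed": True, "estop": False}
outputs = {"conveyor": False, "fault_lamp": False}

def read_inputs():
    # On real hardware this samples the sensor terminals.
    return dict(inputs)

def evaluate(image):
    # Logic scan: run the conveyor only when commanded, the guard is
    # closed, and the emergency stop is not pressed.
    return {
        "conveyor": image["run_cmd"] and image["guard_closed"]
                    and not image["estop"],
        "fault_lamp": image["estop"] or not image["guard_closed"],
    }

def write_outputs(new_state):
    # On real hardware this energizes the actuator terminals.
    outputs.update(new_state)

for _ in range(3):        # each pass is one scan, often milliseconds
    write_outputs(evaluate(read_inputs()))
    time.sleep(0.01)

print(outputs)
```

On real controllers this loop repeats continuously for as long as the machine is powered.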
PLCs can range from small "building brick" devices with tens of I/O in a housing integral with the processor, to large rack-mounted modular devices with a count of thousands of I/O, and which are often networked to other PLC and SCADA systems.
They can be designed for multiple arrangements of digital and analog inputs and outputs (I/O), extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory.
It was from the automotive industry in the United States that the PLC was born. Before the PLC, control, sequencing, and safety interlock logic for manufacturing automobiles was mainly composed of relays, cam timers, drum sequencers, and dedicated closed-loop controllers. Since these could number in the hundreds or even thousands, the process for updating such facilities for the yearly model change-over was very time-consuming and expensive, as electricians needed to individually rewire the relays to change their operational characteristics.
When digital computers became available, being general-purpose programmable devices, they were soon applied to control sequential and combinatorial logic in industrial processes. However, these early computers required specialist programmers and stringent operating environmental control for temperature, cleanliness, and power quality. To meet these challenges, the PLC was developed with several key attributes. It would tolerate the shop-floor environment, it would support discrete (bit-form) input and output in an easily extensible manner, it would not require years of training to use, and it would permit its operation to be monitored. Since many industrial processes have timescales easily addressed by millisecond response times, modern (fast, small, reliable) electronics greatly facilitate building reliable controllers, and performance could be traded off for reliability.
Agent-assisted automation
Agent-assisted automation refers to automation used by call center agents to handle customer inquiries. The key benefit of agent-assisted automation is compliance and error-proofing. Agents are sometimes not fully trained or they forget or ignore key steps in the process. The use of automation ensures that what is supposed to happen on the call actually does, every time. There are two basic types: desktop automation and automated voice solutions.
Control
Open-loop and closed-loop
Discrete control (on/off)
One of the simplest types of control is on-off control. An example is a thermostat used on household appliances, which either opens or closes an electrical contact. (Thermostats were originally developed as true feedback-control mechanisms rather than the on-off devices common in household appliances.)
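A minimal sketch of such discrete control, assuming an illustrative setpoint and a small deadband (hysteresis) so the contact does not chatter around the setpoint:

```python
# On-off (bang-bang) control with hysteresis, as in a household
# thermostat; all numbers are illustrative.
def thermostat(temperature, heater_on, setpoint=20.0, band=0.5):
    """Return the new heater state for the measured temperature."""
    if temperature < setpoint - band:
        return True            # too cold: close the contact
    if temperature > setpoint + band:
        return False           # too warm: open the contact
    return heater_on           # inside the deadband: keep last state

state = False
for t in [18.9, 19.6, 20.2, 20.8, 20.1]:
    state = thermostat(t, state)
    print(f"{t:4.1f} C -> heater {'ON' if state else 'OFF'}")
```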
Sequence control, in which a programmed sequence of discrete operations is performed, often based on system logic that involves system states. An elevator control system is an example of sequence control.
PID controller
A proportional–integral–derivative controller (PID controller) is a control loop feedback mechanism (controller) widely used in industrial control systems.
In a PID loop, the controller continuously calculates an error value as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms, respectively (sometimes denoted P, I, and D) which give their name to the controller type.
The theoretical understanding and application date from the 1920s, and they are implemented in nearly all analog control systems; originally in mechanical controllers, and then using discrete electronics and latterly in industrial process computers.
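A minimal sketch of the PID law described above, with illustrative gains and a toy first-order plant rather than any real process:

```python
# Textbook PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement            # e(t)
        self.integral += error * dt               # integral term state
        if self.prev_error is None:
            derivative = 0.0                      # no slope on first sample
        else:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=0.6, ki=0.2, kd=0.05)
level, dt = 0.0, 0.1
for _ in range(5):
    u = pid.update(50.0, level, dt)   # correction toward setpoint 50
    level += u * dt                   # toy first-order plant response
    print(f"u={u:7.2f}  level={level:6.2f}")
```

Practical implementations add refinements such as integral anti-windup and filtering of the derivative term.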
Sequential control and logical sequence or system state control
Sequential control may be either to a fixed sequence or to a logical one that will perform different actions depending on various system states. An example of an adjustable but otherwise fixed sequence is a timer on a lawn sprinkler.
States refer to the various conditions that can occur in a use or sequence scenario of the system. An example is an elevator, which uses logic based on the system state to perform certain actions in response to its state and operator input. For example, if the operator presses the floor n button, the system will respond depending on whether the elevator is stopped or moving, going up or down, or if the door is open or closed, and other conditions.
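State-dependent logic of this kind can be sketched as a simple dispatch on the system state; the states and rules below are simplified assumptions for the elevator example, not a real controller:

```python
# Simplified state-dependent response to a floor-button press.
def on_floor_button(state, current_floor, requested_floor):
    """Decide the next action from the elevator's current state."""
    if state == "door_open":
        return "close_door"         # must close before moving
    if state == "moving":
        return "queue_request"      # finish the current travel first
    if current_floor == requested_floor:
        return "open_door"
    return "move_up" if requested_floor > current_floor else "move_down"

print(on_floor_button("idle", current_floor=2, requested_floor=5))  # move_up
print(on_floor_button("door_open", 2, 5))                           # close_door
```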
Early development of sequential control was relay logic, by which electrical relays engage electrical contacts which either start or interrupt power to a device. Relays were first used in telegraph networks before being developed for controlling other devices, such as when starting and stopping industrial-sized electric motors or opening and closing solenoid valves. Using relays for control purposes allowed event-driven control, where actions could be triggered out of sequence, in response to external events. These were more flexible in their response than the rigid single-sequence cam timers. More complicated examples involved maintaining safe sequences for devices such as swing bridge controls, where a lock bolt needed to be disengaged before the bridge could be moved, and the lock bolt could not be released until the safety gates had already been closed.
The total number of relays and cam timers can number into the hundreds or even thousands in some factories. Early programming techniques and languages were needed to make such systems manageable, one of the first being ladder logic, where diagrams of the interconnected relays resembled the rungs of a ladder. Special computers called programmable logic controllers were later designed to replace these collections of hardware with a single, more easily re-programmed unit.
In a typical hard-wired motor start and stop circuit (called a control circuit) a motor is started by pushing a "Start" or "Run" button that activates a pair of electrical relays. The "lock-in" relay locks in contacts that keep the control circuit energized when the push-button is released. (The start button is a normally open contact and the stop button is a normally closed contact.) Another relay energizes a switch that powers the device that throws the motor starter switch (three sets of contacts for three-phase industrial power) in the main power circuit. Large motors use high voltage and experience high in-rush current, making speed important in making and breaking contact. This can be dangerous for personnel and property with manual switches. The "lock-in" contacts in the start circuit and the main power contacts for the motor are held engaged by their respective electromagnets until a "stop" or "off" button is pressed, which de-energizes the lock in relay.
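The seal-in behavior of the lock-in relay reduces to a single boolean update, sketched below with hypothetical signal names:

```python
# Seal-in (lock-in) relay logic: start is normally open, stop is
# normally closed, so the motor runs if start is pressed or the
# latch holds, and stop is not pressed.
def motor_latch(running, start_pressed, stop_pressed):
    return (start_pressed or running) and not stop_pressed

running = False
for start, stop in [(True, False), (False, False), (False, True)]:
    running = motor_latch(running, start, stop)
    print(f"start={start} stop={stop} -> motor {'ON' if running else 'OFF'}")
```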
Commonly interlocks are added to a control circuit. Suppose that the motor in the example is powering machinery that has a critical need for lubrication. In this case, an interlock could be added to ensure that the oil pump is running before the motor starts. Timers, limit switches, and electric eyes are other common elements in control circuits.
Solenoid valves are widely used on compressed air or hydraulic fluid for powering actuators on mechanical components. While motors are used to supply continuous rotary motion, actuators are typically a better choice for intermittently creating a limited range of movement for a mechanical component, such as moving various mechanical arms, opening or closing valves, raising heavy press-rolls, applying pressure to presses.
Computer control
Computers can perform both sequential control and feedback control, and typically a single computer will do both in an industrial application. Programmable logic controllers (PLCs) are a type of special-purpose microprocessor that replaced many hardware components such as timers and drum sequencers used in relay logic–type systems. General-purpose process control computers have increasingly replaced stand-alone controllers, with a single computer able to perform the operations of hundreds of controllers. Process control computers can process data from a network of PLCs, instruments, and controllers to implement typical (such as PID) control of many individual variables or, in some cases, to implement complex control algorithms using multiple inputs and mathematical manipulations. They can also analyze data and create real-time graphical displays for operators and run reports for operators, engineers, and management.
Control of an automated teller machine (ATM) is an example of an interactive process in which a computer will perform a logic-derived response to a user selection based on information retrieved from a networked database. The ATM process has similarities with other online transaction processes. The different logical responses are called scenarios. Such processes are typically designed with the aid of use cases and flowcharts, which guide the writing of the software code. The earliest feedback control mechanism was the water clock invented by Greek engineer Ctesibius (285–222 BC).
See also
Artificial Intelligence
Automate This
Automated storage and retrieval system
Automation engineering
Automation Master
Automation technician
Cognitive computing
Control engineering
Critique of work
Cybernetics
Data-driven control system
Dirty, dangerous and demeaning
Feedforward control
Fully Automated Luxury Communism
Futures studies
The Human Use of Human Beings
Industrial Revolution
Industry 4.0
Intelligent automation
Inventing the Future: Postcapitalism and a World Without Work
Machine to machine
Mobile manipulator
Multi-agent system
Post-work society
Process control
Productivity improving technologies
The Right to Be Lazy
Right to repair
Robot tax
Robotic process automation
Semi-automation
Technological unemployment
The War on Normal People
References
Citations
Sources
E. McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2018) SSRN, part 2(3)
Executive Office of the President, Artificial Intelligence, Automation and the Economy (December 2016)
Further reading
Acemoglu, Daron, and Pascual Restrepo. "Automation and New Tasks: How Technology Displaces and Reinstates Labor." The Journal of Economic Perspectives, vol. 33, no. 2, American Economic Association, 2019, pp. 3–30.
Norton, Andrew. Automation and Inequality: The Changing World of Work in the Global South. International Institute for Environment and Development, 2017.
Danaher, John. "The Case for Technological Unemployment." Automation and Utopia: Human Flourishing in a World without Work, Harvard University Press, 2019, pp. 25–52.
Reinsch, William, and Jack Caporal. "The Digital Economy & Data Governance." Key Trends in the Global Economy through 2030, edited by Matthew P. Goodman and Scott Miller, Center for Strategic and International Studies (CSIS), 2020, pp. 18–21.
Articles containing video clips | Automation | [
"Engineering"
] | 10,130 | [
"Control engineering",
"Automation"
] |
173,356 | https://en.wikipedia.org/wiki/Sharpless%20epoxidation | The Sharpless epoxidation reaction is an enantioselective chemical reaction to prepare 2,3-epoxyalcohols from primary and secondary allylic alcohols. The oxidizing agent is tert-butyl hydroperoxide. The method relies on a catalyst formed from titanium tetra(isopropoxide) and diethyl tartrate.
2,3-Epoxyalcohols can be converted into diols, aminoalcohols, and ethers. The reactants for the Sharpless epoxidation are commercially available and relatively inexpensive.
K. Barry Sharpless published a paper on the reaction in 1980 and was awarded the 2001 Nobel Prize in Chemistry for this and related work on asymmetric oxidations. The prize was shared with William S. Knowles and Ryōji Noyori.
Catalyst
5–10 mol% of the catalyst is typical. The presence of 3Å molecular sieves (3Å MS) is necessary. The structure of the catalyst is uncertain, although it is thought to be a dimer of [Ti(tartrate)(OR)2].
Selectivity
The epoxidation of allylic alcohols is a well-utilized conversion in fine chemical synthesis. The chirality of the product of a Sharpless epoxidation is sometimes predicted with the following mnemonic. A rectangle is drawn around the double bond in the same plane as the carbons of the double bond (the xy-plane), with the allylic alcohol in the bottom right corner and the other substituents in their appropriate corners. In this orientation, the (−) diester tartrate preferentially interacts with the top half of the molecule, and the (+) diester tartrate preferentially interacts with the bottom half of the molecule. This model seems to be valid despite substitution on the olefin. Selectivity decreases with larger R1, but increases with larger R2 and R3 (see introduction).
However, this method incorrectly predicts the product of allylic 1,2-diols.
Kinetic resolution
The Sharpless epoxidation can also give kinetic resolution of a racemic mixture of secondary 2,3-epoxyalcohols. While the yield of a kinetic resolution process cannot be higher than 50%, the enantiomeric excess approaches 100% in some reactions.
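The trade-off between conversion and enantiomeric excess in a kinetic resolution is commonly summarized by Kagan's expression for the selectivity factor s, quoted here as a sketch with c the conversion and ee the enantiomeric excess of the recovered (unreacted) alcohol:

```latex
% Kagan's relation for kinetic resolution: s is the ratio of rate
% constants for the fast- and slow-reacting enantiomers, c the
% conversion, and ee the enantiomeric excess of the recovered
% (unreacted) substrate.
s = \frac{k_{\mathrm{fast}}}{k_{\mathrm{slow}}}
  = \frac{\ln\bigl[(1 - c)(1 - \mathrm{ee})\bigr]}
         {\ln\bigl[(1 - c)(1 + \mathrm{ee})\bigr]}
```

Because the ee of the recovered substrate climbs steeply once conversion passes 50%, material of very high ee can be obtained even at moderate selectivity, at the cost of isolated yield.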
Synthetic utility
The Sharpless epoxidation is viable with a large range of primary and secondary alkenic alcohols. Furthermore, with the exception noted above, a given dialkyl tartrate will preferentially add to the same face independent of the substitution on the alkene. To demonstrate the synthetic utility of the Sharpless epoxidation, the Sharpless group created synthetic intermediates of various natural products: methymycin, erythromycin, leukotriene C-1, and (+)-disparlure.
As one of the few highly enantioselective reactions during its time, many manipulations of the 2,3-epoxyalcohols have been developed.
The Sharpless epoxidation has been used for the total synthesis of various saccharides, terpenes, leukotrienes, pheromones, and antibiotics.
The main drawback of this protocol is the necessity of the presence of an allylic alcohol. The Jacobsen epoxidation, an alternative method to enantioselectively oxidise alkenes, overcomes this issue and tolerates a wider array of functional groups. For specifically glycidic epoxides, the Jørgensen-Córdova epoxidation avoids the need to reduce the carbonyl and then reoxidize, and has more efficient catalyst turnover.
References of historic interest
See also
Asymmetric catalytic oxidation
Juliá–Colonna epoxidation — for enones
Jacobsen epoxidation — for unfunctionalized alkenes
References
External links
Sharpless Asymmetric Epoxidation Reaction
Epoxidation reactions
Organic redox reactions
Name reactions
Epoxides
Catalysis | Sharpless epoxidation | [
"Chemistry"
] | 846 | [
"Catalysis",
"Organic redox reactions",
"Organic reactions",
"Name reactions",
"Chemical kinetics",
"Ring forming reactions"
] |
173,416 | https://en.wikipedia.org/wiki/Mathematical%20physics | Mathematical physics refers to the development of mathematical methods for application to problems in physics. The Journal of Mathematical Physics defines the field as "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories". An alternative definition would also include those mathematics that are inspired by physics, known as physical mathematics.
Scope
There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods.
Classical mechanics
Applying the techniques of mathematical physics to classical mechanics typically involves the rigorous, abstract, and advanced reformulation of Newtonian mechanics in terms of Lagrangian mechanics and Hamiltonian mechanics (including both approaches in the presence of constraints). Both formulations are embodied in analytical mechanics and lead to an understanding of the deep interplay between the notions of symmetry and conserved quantities during the dynamical evolution of mechanical systems, as embodied within the most elementary formulation of Noether's theorem. These approaches and ideas have been extended to other areas of physics, such as statistical mechanics, continuum mechanics, classical field theory, and quantum field theory. Moreover, they have provided multiple examples and ideas in differential geometry (e.g., several notions in symplectic geometry and vector bundles).
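As a concrete statement of this formulation, for a Lagrangian L depending on generalized coordinates, their velocities, and time, the Euler–Lagrange equations and the conjugate momenta read:

```latex
% Euler–Lagrange equations for a Lagrangian L(q, \dot{q}, t), with
% p_i the momentum conjugate to the coordinate q_i. If L does not
% depend on some q_i (a continuous symmetry), then \dot{p}_i = 0 and
% p_i is conserved: the simplest instance of Noether's theorem.
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot{q}_i}
  - \frac{\partial L}{\partial q_i} = 0,
\qquad
p_i = \frac{\partial L}{\partial \dot{q}_i}
```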
Partial differential equations
Within mathematics proper, the theory of partial differential equations, variational calculus, Fourier analysis, potential theory, and vector analysis are perhaps most closely associated with mathematical physics. These fields were developed intensively from the second half of the 18th century (by, for example, D'Alembert, Euler, and Lagrange) until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics.
Quantum theory
The theory of atomic spectra (and, later, quantum mechanics) developed almost concurrently with some parts of the mathematical fields of linear algebra, the spectral theory of operators, operator algebras and, more broadly, functional analysis. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics. Quantum information theory is another subspecialty.
Relativity and quantum relativistic theories
The special and general theories of relativity require a rather different type of mathematics. This was group theory, which played an important role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena. In the mathematical description of these physical areas, some concepts in homological algebra and category theory are also important.
Statistical mechanics
Statistical mechanics forms a separate field, which includes the theory of phase transitions. It relies upon Hamiltonian mechanics (or its quantum version) and is closely related to the more mathematical ergodic theory and some parts of probability theory. There are increasing interactions between combinatorics and physics, in particular statistical physics.
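A central formula linking this field to probability theory is the Boltzmann distribution for the probability of a microstate at thermal equilibrium:

```latex
% Boltzmann distribution: probability of microstate i with energy
% E_i at temperature T; Z is the partition function and k_B the
% Boltzmann constant.
p_i = \frac{e^{-E_i/(k_B T)}}{Z},
\qquad
Z = \sum_j e^{-E_j/(k_B T)}
```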
Usage
The usage of the term "mathematical physics" is sometimes idiosyncratic. Certain parts of mathematics that initially arose from the development of physics are not, in fact, considered parts of mathematical physics, while other closely related fields are. For example, ordinary differential equations and symplectic geometry are generally viewed as purely mathematical disciplines, whereas dynamical systems and Hamiltonian mechanics belong to mathematical physics. John Herapath used the term for the title of his 1847 text on "mathematical principles of natural philosophy", the scope at that time being
"the causes of heat, gaseous elasticity, gravitation, and other great phenomena of nature".
Mathematical vs. theoretical physics
The term "mathematical physics" is sometimes used to denote research aimed at studying and solving problems in physics or thought experiments within a mathematically rigorous framework. In this sense, mathematical physics covers a very broad academic realm distinguished only by the blending of some mathematical aspect and theoretical physics aspect. Although related to theoretical physics, mathematical physics in this sense emphasizes the mathematical rigour of the similar type as found in mathematics.
On the other hand, theoretical physics emphasizes the links to observations and experimental physics, which often requires theoretical physicists (and mathematical physicists in the more general sense) to use heuristic, intuitive, or approximate arguments. Such arguments are not considered rigorous by mathematicians.
Such mathematical physicists primarily expand and elucidate physical theories. Because of the required level of mathematical rigour, these researchers often deal with questions that theoretical physicists have considered to be already solved. However, they can sometimes show that the previous solution was incomplete, incorrect, or simply too naïve. Issues about attempts to infer the second law of thermodynamics from statistical mechanics are examples. Other examples concern the subtleties involved with synchronisation procedures in special and general relativity (Sagnac effect and Einstein synchronisation).
The effort to put physical theories on a mathematically rigorous footing not only developed physics but also has influenced developments of some mathematical areas. For example, the development of quantum mechanics and some aspects of functional analysis parallel each other in many ways. The mathematical study of quantum mechanics, quantum field theory, and quantum statistical mechanics has motivated results in operator algebras. The attempt to construct a rigorous mathematical formulation of quantum field theory has also brought about some progress in fields such as representation theory.
Prominent mathematical physicists
Before Newton
There is a tradition of mathematical analysis of nature that goes back to the ancient Greeks; examples include Euclid (Optics), Archimedes (On the Equilibrium of Planes, On Floating Bodies), and Ptolemy (Optics, Harmonics). Later, Islamic and Byzantine scholars built on these works, and these ultimately were reintroduced or became available to the West in the 12th century and during the Renaissance.
In the first decade of the 16th century, amateur astronomer Nicolaus Copernicus proposed heliocentrism, and published a treatise on it in 1543. He retained the Ptolemaic idea of epicycles, and merely sought to simplify astronomy by constructing simpler sets of epicyclic orbits. Epicycles consist of circles upon circles. According to Aristotelian physics, the circle was the perfect form of motion, and was the intrinsic motion of Aristotle's fifth element—the quintessence or universal essence known in Greek as aether (English: pure air)—that was the pure substance beyond the sublunary sphere, and thus was the pure composition of celestial entities. The German Johannes Kepler [1571–1630], Tycho Brahe's assistant, modified Copernican orbits to ellipses, formalized in the equations of Kepler's laws of planetary motion.
An enthusiastic atomist, Galileo Galilei in his 1623 book The Assayer asserted that the "book of nature is written in mathematics". His 1632 book, about his telescopic observations, supported heliocentrism. Having made use of experimentation, Galileo then refuted geocentric cosmology by refuting Aristotelian physics itself. Galileo's 1638 book Discourse on Two New Sciences established the law of equal free fall as well as the principles of inertial motion, two central concepts of what today is known as classical mechanics. By the Galilean law of inertia as well as the principle of Galilean invariance, also called Galilean relativity, for any object experiencing inertia, there is empirical justification for knowing only that it is at relative rest or relative motion—rest or motion with respect to another object.
René Descartes developed a complete system of heliocentric cosmology anchored on the principle of vortex motion, Cartesian physics, whose widespread acceptance helped bring the demise of Aristotelian physics. Descartes used mathematical reasoning as a model for science, and developed analytic geometry, which in time allowed the plotting of locations in 3D space (Cartesian coordinates) and marking their progressions along the flow of time.
Christiaan Huygens, a talented mathematician and physicist and older contemporary of Newton, was the first to successfully idealize a physical problem by a set of mathematical parameters in Horologium Oscillatorum (1673), and the first to fully mathematize a mechanistic explanation of an unobservable physical phenomenon in Traité de la Lumière (1690). He is thus considered a forerunner of theoretical physics and one of the founders of modern mathematical physics.
Newtonian physics and post Newtonian
The prevailing framework for science in the 16th and early 17th centuries was one borrowed from Ancient Greek mathematics, where geometrical shapes formed the building blocks to describe and think about space, and time was often thought of as a separate entity. With the introduction of algebra into geometry, and with it the idea of a coordinate system, time and space could now be thought of as axes belonging to the same plane. This essential mathematical framework is at the base of all modern physics and is used in all the mathematical frameworks developed in later centuries.
By the middle of the 17th century, important concepts such as the fundamental theorem of calculus (proved in 1668 by Scottish mathematician James Gregory) and finding maxima and minima of functions via differentiation using Fermat's theorem (by French mathematician Pierre de Fermat) were already known before Leibniz and Newton. Isaac Newton (1642–1727) developed calculus (although Gottfried Wilhelm Leibniz developed similar concepts outside the context of physics) and Newton's method to solve problems in mathematics and physics. He was extremely successful in his application of calculus and other methods to the study of motion. Newton's theory of motion, culminating in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy) in 1687, modeled three Galilean laws of motion along with Newton's law of universal gravitation on a framework of absolute space—hypothesized by Newton as a physically real entity of Euclidean geometric structure extending infinitely in all directions—while presuming absolute time, supposedly justifying knowledge of absolute motion, the object's motion with respect to absolute space. The principle of Galilean invariance/relativity was merely implicit in Newton's theory of motion. Having ostensibly reduced the Keplerian celestial laws of motion as well as the Galilean terrestrial laws of motion to a unifying force, Newton achieved great mathematical rigor, but with theoretical laxity.
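The core of that framework can be stated in two formulas, the second law of motion and the law of universal gravitation:

```latex
% Newton's second law and law of universal gravitation: F is force,
% m mass, a acceleration, G the gravitational constant, and r the
% separation of the masses m_1 and m_2.
\mathbf{F} = m\,\mathbf{a},
\qquad
F = \frac{G\,m_1 m_2}{r^2}
```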
In the 18th century, the Swiss Daniel Bernoulli (1700–1782) made contributions to fluid dynamics and vibrating strings. The Swiss Leonhard Euler (1707–1783) did special work in variational calculus, dynamics, fluid dynamics, and other areas. Also notable was the Italian-born Frenchman Joseph-Louis Lagrange (1736–1813), for work in analytical mechanics: he formulated Lagrangian mechanics and variational methods. A major contribution to the formulation of analytical dynamics, called Hamiltonian dynamics, was also made by the Irish physicist, astronomer, and mathematician William Rowan Hamilton (1805–1865). Hamiltonian dynamics has played an important role in the formulation of modern theories in physics, including field theory and quantum mechanics. The French mathematical physicist Joseph Fourier (1768–1830) introduced the notion of Fourier series to solve the heat equation, giving rise to a new approach to solving partial differential equations by means of integral transforms.
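Fourier's approach can be illustrated on the heat equation itself: for a rod with its ends held at zero temperature, the solution is a superposition of sine modes, each decaying at its own rate:

```latex
% Heat equation on a rod of length \ell with ends held at zero
% temperature; \alpha is the thermal diffusivity and the b_n are the
% Fourier sine coefficients of the initial temperature profile.
\frac{\partial u}{\partial t} = \alpha\,\frac{\partial^2 u}{\partial x^2},
\qquad
u(x,t) = \sum_{n=1}^{\infty} b_n \sin\!\frac{n\pi x}{\ell}\,
         e^{-\alpha (n\pi/\ell)^2 t}
```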
Into the early 19th century, the following mathematicians in France, Germany, and England contributed to mathematical physics. The French Pierre-Simon Laplace (1749–1827) made paramount contributions to mathematical astronomy and potential theory. Siméon Denis Poisson (1781–1840) worked in analytical mechanics and potential theory. In Germany, Carl Friedrich Gauss (1777–1855) made key contributions to the theoretical foundations of electricity, magnetism, mechanics, and fluid dynamics. In England, George Green (1793–1841) published An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism in 1828, which in addition to its significant contributions to mathematics made early progress towards laying down the mathematical foundations of electricity and magnetism.
A couple of decades ahead of Newton's publication of a particle theory of light, the Dutch Christiaan Huygens (1629–1695) developed the wave theory of light, published in 1690. By 1804, Thomas Young's double-slit experiment revealed an interference pattern, as though light were a wave, and thus Huygens's wave theory of light, as well as Huygens's inference that light waves were vibrations of the luminiferous aether, was accepted. Jean-Augustin Fresnel modeled hypothetical behavior of the aether. The English physicist Michael Faraday introduced the theoretical concept of a field—not action at a distance. In the mid-19th century, the Scottish James Clerk Maxwell (1831–1879) reduced electricity and magnetism to Maxwell's electromagnetic field theory, whittled down by others to the four Maxwell's equations. Initially, optics was found to be a consequence of Maxwell's field; later, radiation and then today's known electromagnetic spectrum were also found to be consequences of this electromagnetic field.
The English physicist Lord Rayleigh (1842–1919) worked on sound. The Irishmen William Rowan Hamilton (1805–1865), George Gabriel Stokes (1819–1903), and Lord Kelvin (1824–1907) produced several major works: Stokes was a leader in optics and fluid dynamics; Kelvin made substantial discoveries in thermodynamics; Hamilton did notable work on analytical mechanics, discovering a new and powerful approach nowadays known as Hamiltonian mechanics. Very relevant contributions to this approach are due to his German colleague, the mathematician Carl Gustav Jacobi (1804–1851), in particular regarding canonical transformations. The German Hermann von Helmholtz (1821–1894) made substantial contributions in the fields of electromagnetism, waves, fluids, and sound. In the United States, the pioneering work of Josiah Willard Gibbs (1839–1903) became the basis for statistical mechanics. Fundamental theoretical results in this area were achieved by the German Ludwig Boltzmann (1844–1906). Together, these individuals laid the foundations of electromagnetic theory, fluid dynamics, and statistical mechanics.
Relativistic
By the 1880s, there was a prominent paradox that an observer within Maxwell's electromagnetic field measured it at approximately constant speed, regardless of the observer's speed relative to other objects within the electromagnetic field. Thus, although the observer's speed was continually lost relative to the electromagnetic field, it was preserved relative to other objects in the electromagnetic field. And yet no violation of Galilean invariance within physical interactions among objects was detected. As Maxwell's electromagnetic field was modeled as oscillations of the aether, physicists inferred that motion within the aether resulted in aether drift, shifting the electromagnetic field, explaining the observer's missing speed relative to it. The Galilean transformation had been the mathematical process used to translate the positions in one reference frame to predictions of positions in another reference frame, all plotted on Cartesian coordinates, but this process was replaced by Lorentz transformation, modeled by the Dutch Hendrik Lorentz [1853–1928].
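For relative motion at speed v along a shared x-axis, the Lorentz transformation reads:

```latex
% Lorentz transformation for relative speed v along the x-axis,
% replacing the Galilean rule x' = x - vt; c is the speed of light.
x' = \gamma\,(x - vt),
\qquad
t' = \gamma\Bigl(t - \frac{v x}{c^2}\Bigr),
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```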
In 1887, experimentalists Michelson and Morley failed to detect aether drift, however. It was hypothesized that motion through the aether caused matter to shorten along its direction of travel, as modeled by the Lorentz contraction. It was hypothesized that the aether thus kept Maxwell's electromagnetic field aligned with the principle of Galilean invariance across all inertial frames of reference, while Newton's theory of motion was spared.
Austrian theoretical physicist and philosopher Ernst Mach criticized Newton's postulated absolute space. Mathematician Jules-Henri Poincaré (1854–1912) questioned even absolute time. In 1905, Pierre Duhem published a devastating criticism of the foundation of Newton's theory of motion. Also in 1905, Albert Einstein (1879–1955) published his special theory of relativity, newly explaining both the electromagnetic field's invariance and Galilean invariance by discarding all hypotheses concerning aether, including the existence of aether itself. Refuting the framework of Newton's theory—absolute space and absolute time—special relativity refers to relative space and relative time, whereby length contracts and time dilates along the travel pathway of an object.
Cartesian coordinates used rectilinear axes. Gauss, inspired by Descartes' work, introduced curved geometry, replacing rectilinear axes with curved ones, and also introduced another key tool of modern physics, curvature. Gauss's work was limited to two dimensions; extending it to three or more dimensions introduced considerable complexity and required the (not yet invented) tensors. It was Riemann who extended curved geometry to N dimensions. (Riemannian geometry was already in place before the 1850s, developed by the mathematicians Carl Friedrich Gauss and Bernhard Riemann in search of intrinsic geometry and non-Euclidean geometry.) In 1908, Einstein's former mathematics professor Hermann Minkowski applied the curved-geometry construction to model 3D space together with the 1D axis of time by treating the temporal axis like a fourth spatial dimension—altogether 4D spacetime—and declared the imminent demise of the separation of space and time. Einstein initially called this "superfluous learnedness", but later used Minkowski spacetime with great elegance in his general theory of relativity, extending invariance to all reference frames—whether perceived as inertial or as accelerated—and credited this to Minkowski, by then deceased. General relativity replaces Cartesian coordinates with Gaussian coordinates, and replaces Newton's claimed empty yet Euclidean space traversed instantly by Newton's vector of hypothetical gravitational force—an instant action at a distance—with a gravitational field. The gravitational field is Minkowski spacetime itself, the 4D topology of Einstein aether modeled on a Lorentzian manifold that "curves" geometrically, according to the Riemann curvature tensor, in the vicinity of either mass or energy. Newton's concept of gravity, "two masses attract each other", is replaced by the geometrical argument that mass transforms the curvature of spacetime and that free-falling particles with mass move along geodesic curves in that spacetime. (Under special relativity—a special case of general relativity—even massless energy exerts gravitational effect by its mass equivalence, locally "curving" the geometry of the four unified dimensions of space and time.)
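The geometric replacement for Newtonian attraction is captured by two standard equations (textbook forms, given here for concreteness): the Einstein field equations, relating spacetime curvature to mass–energy, and the geodesic equation, governing the free fall described above:

$$R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu}, \qquad \frac{d^{2}x^{\mu}}{d\tau^{2}} + \Gamma^{\mu}_{\alpha\beta}\,\frac{dx^{\alpha}}{d\tau}\,\frac{dx^{\beta}}{d\tau} = 0.$$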
Quantum
Another revolutionary development of the 20th century was quantum theory, which emerged from the seminal contributions of Max Planck (1856–1947) (on black-body radiation) and Einstein's work on the photoelectric effect. In 1912, the mathematician Henri Poincaré published Sur la théorie des quanta, in which he introduced the first non-naïve definition of quantization. Early quantum physics developed within a heuristic framework devised by Arnold Sommerfeld (1868–1951) and Niels Bohr (1885–1962), but this was soon replaced by the quantum mechanics developed by Max Born (1882–1970), Louis de Broglie (1892–1987), Werner Heisenberg (1901–1976), Paul Dirac (1902–1984), Erwin Schrödinger (1887–1961), Satyendra Nath Bose (1894–1974), and Wolfgang Pauli (1900–1958). This revolutionary theoretical framework is based on a probabilistic interpretation of states, with evolution and measurements described by self-adjoint operators on an infinite-dimensional vector space called Hilbert space (introduced by the mathematicians David Hilbert (1862–1943), Erhard Schmidt (1876–1959) and Frigyes Riesz (1880–1956) in search of a generalization of Euclidean space and in the study of integral equations). It was rigorously defined within the axiomatic modern version by John von Neumann in his celebrated book Mathematical Foundations of Quantum Mechanics, in which he built up a relevant part of modern functional analysis on Hilbert spaces, in particular the spectral theory (introduced by David Hilbert, who investigated quadratic forms with infinitely many variables; many years later it was revealed, to Hilbert's surprise, that his spectral theory is associated with the spectrum of the hydrogen atom). Paul Dirac used algebraic constructions to produce a relativistic model for the electron, predicting its magnetic moment and the existence of its antiparticle, the positron.
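In that Hilbert-space framework, states evolve under the Schrödinger equation and observables correspond to self-adjoint operators (standard notation, supplied here for concreteness):

$$i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle, \qquad \langle A\rangle = \langle\psi|\hat{A}|\psi\rangle, \qquad \hat{A} = \hat{A}^{\dagger}.$$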
List of prominent contributors to mathematical physics in the 20th century
Prominent contributors to the 20th century's mathematical physics include (ordered by birth date):
William Thomson (Lord Kelvin) (1824–1907)
Oliver Heaviside (1850–1925)
Jules Henri Poincaré (1854–1912)
David Hilbert (1862–1943)
Arnold Sommerfeld (1868–1951)
Constantin Carathéodory (1873–1950)
Albert Einstein (1879–1955)
Emmy Noether (1882–1935)
Max Born (1882–1970)
George David Birkhoff (1884–1944)
Hermann Weyl (1885–1955)
Louis de Broglie (1892–1987)
Satyendra Nath Bose (1894–1974)
Norbert Wiener (1894–1964)
John Lighton Synge (1897–1995)
Wolfgang Pauli (1900–1958)
Paul Dirac (1902–1984)
Eugene Wigner (1902–1995)
Andrey Kolmogorov (1903–1987)
Lars Onsager (1903–1976)
John von Neumann (1903–1957)
Sin-Itiro Tomonaga (1906–1979)
Hideki Yukawa (1907–1981)
Nikolay Nikolayevich Bogolyubov (1909–1992)
Subrahmanyan Chandrasekhar (1910–1995)
Mário Schenberg (1914–1990)
Mark Kac (1914–1984)
Julian Schwinger (1918–1994)
Richard Phillips Feynman (1918–1988)
Irving Ezra Segal (1918–1998)
Ryogo Kubo (1920–1995)
Arthur Strong Wightman (1922–2013)
Chen-Ning Yang (1922–)
Rudolf Haag (1922–2016)
Freeman John Dyson (1923–2020)
Martin Gutzwiller (1925–2014)
Abdus Salam (1926–1996)
Jürgen Moser (1928–1999)
Michael Francis Atiyah (1929–2019)
Joel Louis Lebowitz (1930–)
Roger Penrose (1931–)
Elliott Hershel Lieb (1932–)
Yakir Aharonov (1932–)
Sheldon Glashow (1932–)
Steven Weinberg (1933–2021)
Ludvig Dmitrievich Faddeev (1934–2017)
David Ruelle (1935–)
Yakov Grigorevich Sinai (1935–)
Vladimir Igorevich Arnold (1937–2010)
Arthur Michael Jaffe (1937–)
Roman Wladimir Jackiw (1939–)
Leonard Susskind (1940–)
Rodney James Baxter (1940–)
Michael Victor Berry (1941–)
Giovanni Gallavotti (1941–)
Stephen William Hawking (1942–2018)
Jerrold Eldon Marsden (1942–2010)
Michael C. Reed (1942–)
John Michael Kosterlitz (1943–)
Israel Michael Sigal (1945–)
Alexander Markovich Polyakov (1945–)
Barry Simon (1946–)
Herbert Spohn (1946–)
John Lawrence Cardy (1947–)
Giorgio Parisi (1948–)
Abhay Ashtekar (1949–)
Edward Witten (1951–)
F. Duncan Haldane (1951–)
Ashoke Sen (1956–)
Juan Martín Maldacena (1968–)
See also
International Association of Mathematical Physics
Notable publications in mathematical physics
List of mathematical physics journals
Gauge theory (mathematics)
Relationship between mathematics and physics
Theoretical, computational and philosophical physics
Notes
References
Further reading
Generic works
Textbooks for undergraduate studies
Mathematical Methods for Physicists (7th ed.) and Solutions for Mathematical Methods for Physicists (7th ed.), archive.org
Hassani, Sadri (2009), Mathematical Methods for Students of Physics and Related Fields (2nd ed.), New York: Springer, eISBN 978-0-387-09504-2
Textbooks for graduate studies
Specialized texts in classical physics
Specialized texts in modern physics
External links | Mathematical physics | [
"Physics",
"Mathematics"
] | 4,985 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
173,724 | https://en.wikipedia.org/wiki/Hawking%20radiation | Hawking radiation is black body radiation released outside a black hole's event horizon due to quantum effects according to a model developed by Stephen Hawking in 1974.
The radiation was not predicted by previous models which assumed that once electromagnetic radiation is inside the event horizon, it cannot escape. Hawking radiation is predicted to be extremely faint and is many orders of magnitude below the current best telescopes' detecting ability.
Hawking radiation would reduce the mass and rotational energy of black holes and consequently cause black hole evaporation. Because of this, black holes that do not gain mass through other means are expected to shrink and ultimately vanish.
For all except the smallest black holes, this happens extremely slowly. The radiation temperature, called the Hawking temperature, is inversely proportional to the black hole's mass, so micro black holes are predicted to emit more radiation than larger black holes and should dissipate faster relative to their mass. Consequently, if small black holes exist, as permitted by the hypothesis of primordial black holes, they ought to lose mass more rapidly as they shrink, ending in a final cataclysmic burst of high-energy radiation. Such radiation bursts have not yet been detected.
Background
Black holes were first predicted by Einstein's 1915 theory of general relativity. Evidence of the astrophysical objects termed black holes began to mount half a century later, and these objects are of current interest primarily because of their compact size and immense gravitational attraction. Early research into black holes was done by individuals such as Karl Schwarzschild and John Wheeler, who modeled black holes as having zero entropy.
A black hole can form when enough matter or energy is compressed into a volume small enough that the escape velocity is greater than the speed of light. Because nothing can travel that fast, nothing within a certain distance, proportional to the mass of the black hole, can escape beyond that distance. The region beyond which not even light can escape is the event horizon: an observer outside it cannot observe, become aware of, or be affected by events within the event horizon.
Alternatively, using a set of infalling coordinates in general relativity, one can conceptualize the event horizon as the region beyond which space is infalling faster than the speed of light. (Although nothing can travel through space faster than light, space itself can infall at any speed.) Once matter is inside the event horizon, all of the matter inside falls inevitably into a gravitational singularity, a place of infinite curvature and zero size, leaving behind a warped spacetime devoid of any matter; a classical black hole is pure empty spacetime, and the simplest (nonrotating and uncharged) is characterized just by its mass and event horizon.
Discovery
In 1971, Soviet scientists Yakov Zeldovich and Alexei Starobinsky proposed that rotating black holes ought to create and emit particles, reasoning by analogy with electromagnetically radiating spinning metal spheres. In 1972, Jacob Bekenstein developed a theory and reported that black holes should have an entropy proportional to their surface area. Initially Stephen Hawking argued against Bekenstein's theory, viewing a black hole as a simple object with no entropy.
After meeting Zeldovich in Moscow in 1973, Hawking put these two ideas together using his mixture of quantum field theory and general relativity.
In his 1974 paper, Hawking showed that in theory black holes radiate particles as if they were blackbodies. The particles that escape effectively drain energy from the black hole.
Due to Bekenstein's contribution to black hole entropy, it is also known as Bekenstein–Hawking radiation.
Hawking radiation derives from vacuum fluctuations. A quantum fluctuation in the electromagnetic field can result in a photon outside of the black hole horizon paired with one on the inside. The photon outside the horizon can escape, while its partner inside cannot.
Emission process
Hawking radiation is dependent on the Unruh effect and the equivalence principle applied to black-hole horizons. Close to the event horizon of a black hole, a local observer must accelerate to keep from falling in. An accelerating observer sees a thermal bath of particles that pop out of the local acceleration horizon, turn around, and free-fall back in. The condition of local thermal equilibrium implies that the consistent extension of this local thermal bath has a finite temperature at infinity, which implies that some of these particles emitted by the horizon are not reabsorbed and become outgoing Hawking radiation.
A Schwarzschild black hole has a metric (in natural units, $G = c = \hbar = k_B = 1$)
$$ds^{2} = -\left(1 - \frac{2M}{r}\right)dt^{2} + \left(1 - \frac{2M}{r}\right)^{-1}dr^{2} + r^{2}\,d\Omega^{2}.$$
The black hole is the background spacetime for a quantum field theory.
The field theory is defined by a local path integral, so if the boundary conditions at the horizon are determined, the state of the field outside will be specified. To find the appropriate boundary conditions, consider a stationary observer just outside the horizon at position
$$r = 2M + \frac{\rho^{2}}{8M},$$
where $\rho$ is the observer's proper distance from the horizon. The local metric to lowest order is
$$ds^{2} \approx -\rho^{2}\,d\tau^{2} + d\rho^{2} + dX_{\perp}^{2},$$
which is Rindler in terms of $\tau = t/4M$. The metric describes a frame that is accelerating to keep from falling into the black hole. The local acceleration, $\alpha = 1/\rho$, diverges as $\rho \to 0$.
The horizon is not a special boundary, and objects can fall in. So the local observer should feel accelerated in ordinary Minkowski space by the principle of equivalence. The near-horizon observer must see the field excited at a local temperature
$$T = \frac{\alpha}{2\pi} = \frac{1}{2\pi\rho},$$
which is the Unruh effect.
The gravitational redshift is given by the square root of the time component of the metric. So for the field theory state to consistently extend, there must be a thermal background everywhere with the local temperature redshift-matched to the near horizon temperature:
$$T(r') = \frac{1}{2\pi\rho}\,\sqrt{\frac{g_{tt}(r)}{g_{tt}(r')}}.$$
The inverse temperature redshifted to $r'$ at infinity, where $g_{tt} \to 1$, is
$$\beta(\infty) = \frac{2\pi\rho}{\sqrt{g_{tt}(r)}},$$
and $r$ is the near-horizon position, near $2M$, where $\sqrt{g_{tt}(r)} = \rho/4M$, so this is really
$$\beta(\infty) = 8\pi M.$$
Thus a field theory defined on a black-hole background is in a thermal state whose temperature at infinity is
$$T_H = \frac{1}{8\pi M}.$$
From the black-hole temperature, it is straightforward to calculate the black-hole entropy $S$. The change in entropy when a quantity of heat $dQ$ is added is
$$dS = \frac{dQ}{T} = 8\pi M\,dQ.$$
The heat energy that enters serves to increase the total mass, so $dQ = dM$ and
$$dS = 8\pi M\,dM, \qquad S = 4\pi M^{2}.$$
So the entropy of a black hole is proportional to its surface area:
$$S = \frac{A}{4},$$
where, since the radius of the black hole is twice its mass in natural units ($r_s = 2M$), the area $A$ is given by
$$A = 4\pi r_s^{2} = 16\pi M^{2}.$$
Assuming that a small black hole has zero entropy, the integration constant is zero. Forming a black hole is the most efficient way to compress mass into a region, and this entropy is also a bound on the information content of any sphere in space time. The form of the result strongly suggests that the physical description of a gravitating theory can be somehow encoded onto a bounding surface.
Black hole evaporation
When particles escape, the black hole loses a small amount of its energy and therefore some of its mass (mass and energy are related by Einstein's equation $E = mc^{2}$). Consequently, an evaporating black hole will have a finite lifespan. By dimensional analysis, the life span of a black hole can be shown to scale as the cube of its initial mass, and Hawking estimated that any black hole formed in the early universe with a mass of less than approximately 10¹² kg would have evaporated completely by the present day.
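The cube scaling follows in one line from the luminosity formula quoted below, $P \propto 1/M^{2}$ (a standard dimensional argument, sketched here for clarity):

$$\frac{dM}{dt} = -\frac{P}{c^{2}} \propto -\frac{1}{M^{2}} \;\Longrightarrow\; t_{\mathrm{ev}} \propto \int_{0}^{M_{0}} M^{2}\,dM \propto M_{0}^{3}.$$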
In 1976, Don Page refined this estimate by calculating the power produced, and the time to evaporation, for a non-rotating, non-charged Schwarzschild black hole of mass $M$. The time for the event horizon or entropy of a black hole to halve is known as the Page time. The calculations are complicated by the fact that a black hole, being of finite size, is not a perfect black body; the absorption cross section goes down in a complicated, spin-dependent manner as frequency decreases, especially when the wavelength becomes comparable to the size of the event horizon. Page concluded that primordial black holes could survive to the present day only if their initial mass were roughly or larger. Writing in 1976, Page, using the understanding of neutrinos at the time, erroneously worked on the assumption that neutrinos have no mass and that only two neutrino flavors exist; therefore, his black hole lifetime results do not match modern results, which take into account three flavors of neutrinos with nonzero masses. A 2008 calculation using the particle content of the Standard Model and the WMAP figure for the age of the universe yielded a mass bound of .
Some pre-1998 calculations, using outdated assumptions about neutrinos, were as follows: If black holes evaporate under Hawking radiation, a solar mass black hole will evaporate over 10⁶⁴ years, which is vastly longer than the age of the universe. A supermassive black hole with a mass of 10¹¹ (100 billion) solar masses will evaporate in around . Some monster black holes in the universe are predicted to continue to grow up to perhaps 10¹⁴ solar masses during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 2 × 10¹⁰⁶ years. Post-1998 science modifies these results slightly; for example, the modern estimate of a solar-mass black hole lifetime is 10⁶⁷ years.
The power emitted by a black hole in the form of Hawking radiation can be estimated for the simplest case of a nonrotating, non-charged Schwarzschild black hole of mass $M$. Combining the formulas for the Schwarzschild radius of the black hole, the Stefan–Boltzmann law of blackbody radiation, the above formula for the temperature of the radiation, and the formula for the surface area of a sphere (the black hole's event horizon), several equations can be derived.
The Hawking radiation temperature is:
$$T_H = \frac{\hbar c^{3}}{8\pi G M k_B}.$$
The Bekenstein–Hawking luminosity of a black hole, under the assumption of pure photon emission (i.e. that no other particles are emitted) and under the assumption that the horizon is the radiating surface, is:
$$P = \frac{\hbar c^{6}}{15360\,\pi\,G^{2} M^{2}},$$
where $P$ is the luminosity, i.e., the radiated power, $\hbar$ is the reduced Planck constant, $c$ is the speed of light, $G$ is the gravitational constant and $M$ is the mass of the black hole. It is worth mentioning that the above formula has not yet been derived in the framework of semiclassical gravity.
The time that the black hole takes to dissipate is:
$$t_{\mathrm{ev}} = \frac{5120\,\pi\,G^{2} M^{3}}{\hbar c^{4}} = \frac{480\,c^{2} V}{\hbar G} = 5120\,\pi\,t_P\left(\frac{M}{m_P}\right)^{3},$$
where $M$ and $V$ are the mass and (Schwarzschild) volume of the black hole, and $m_P$ and $t_P$ are the Planck mass and Planck time. A black hole of one solar mass ($M_\odot \approx 2.0\times10^{30}$ kg) takes more than 10⁶⁷ years to evaporate—much longer than the current age of the universe of about 1.4×10¹⁰ years. But for a black hole of 10¹¹ kg, the evaporation time is about 2.7×10⁹ years. This is why some astronomers are searching for signs of exploding primordial black holes.
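A short script (an illustrative sketch using SI constants, not part of the original article) evaluates the temperature, luminosity, and evaporation-time formulas above:

```python
import math

# Physical constants (SI units)
hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
c    = 2.997_924_58e8      # speed of light, m/s
G    = 6.674_30e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B  = 1.380_649e-23       # Boltzmann constant, J/K
YEAR = 3.155_76e7          # seconds per Julian year

def hawking_temperature(M):
    """Hawking temperature T_H = hbar c^3 / (8 pi G M k_B), in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def hawking_luminosity(M):
    """Photon-only luminosity P = hbar c^6 / (15360 pi G^2 M^2), in watts."""
    return hbar * c**6 / (15360 * math.pi * G**2 * M**2)

def evaporation_time(M):
    """Evaporation time t = 5120 pi G^2 M^3 / (hbar c^4), in seconds."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

M_sun = 1.989e30  # kg
print(hawking_temperature(M_sun))       # ~6.2e-8 K: far colder than the CMB
print(evaporation_time(M_sun) / YEAR)   # ~2e67 years, matching the text above
print(evaporation_time(1e11) / YEAR)    # ~2.7e9 years for a 1e11 kg black hole
```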
However, since the universe contains the cosmic microwave background radiation, in order for the black hole to dissipate, the black hole must have a temperature greater than that of the present-day blackbody radiation of the universe of 2.7 K. A study suggests that $M$ must be less than 0.8% of the mass of the Earth – approximately the mass of the Moon.
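Inverting the temperature formula gives a quick consistency check of that threshold (again an illustrative sketch; the constants and the Earth mass are standard values assumed here, not quoted from the article):

```python
import math

hbar, c, G, k_B = 1.054571817e-34, 2.99792458e8, 6.6743e-11, 1.380649e-23

# Mass at which the Hawking temperature equals the 2.7 K CMB temperature
M_threshold = hbar * c**3 / (8 * math.pi * G * k_B * 2.7)
M_earth = 5.972e24  # kg

print(M_threshold)            # ~4.5e22 kg
print(M_threshold / M_earth)  # ~0.0076, i.e. roughly 0.8% of Earth's mass
```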
Black hole evaporation has several significant consequences:
Black hole evaporation produces a more consistent view of black hole thermodynamics by showing how black holes interact thermally with the rest of the universe.
Unlike most objects, a black hole's temperature increases as it radiates away mass. The rate of temperature increase is exponential, with the most likely endpoint being the dissolution of the black hole in a violent burst of gamma rays. A complete description of this dissolution requires a model of quantum gravity, however, as it occurs when the black hole's mass approaches 1 Planck mass and its radius approaches two Planck lengths.
The simplest models of black hole evaporation lead to the black hole information paradox. The information content of a black hole appears to be lost when it dissipates, as under these models the Hawking radiation is random (it has no relation to the original information). A number of solutions to this problem have been proposed, including suggestions that Hawking radiation is perturbed to contain the missing information, that the Hawking evaporation leaves some form of remnant particle containing the missing information, and that information is allowed to be lost under these conditions.
Problems and extensions
Trans-Planckian problem
The trans-Planckian problem is the issue that Hawking's original calculation includes quantum particles where the wavelength becomes shorter than the Planck length near the black hole's horizon. This is due to the peculiar behavior there, where time stops as measured from far away. A particle emitted from a black hole with a finite frequency, if traced back to the horizon, must have had an infinite frequency, and therefore a trans-Planckian wavelength.
The Unruh effect and the Hawking effect both talk about field modes in the superficially stationary spacetime that change frequency relative to other coordinates that are regular across the horizon. This is necessarily so, since to stay outside a horizon requires acceleration that constantly Doppler shifts the modes.
An outgoing photon of Hawking radiation, if the mode is traced back in time, has a frequency that diverges from that which it has at great distance, as it gets closer to the horizon, which requires the wavelength of the photon to "scrunch up" infinitely at the horizon of the black hole. In a maximally extended external Schwarzschild solution, that photon's frequency stays regular only if the mode is extended back into the past region where no observer can go. That region seems to be unobservable and is physically suspect, so Hawking used a black hole solution without a past region that forms at a finite time in the past. In that case, the source of all the outgoing photons can be identified: a microscopic point right at the moment that the black hole first formed.
The quantum fluctuations at that tiny point, in Hawking's original calculation, contain all the outgoing radiation. The modes that eventually contain the outgoing radiation at long times are redshifted by such a huge amount by their long sojourn next to the event horizon that they start off as modes with a wavelength much shorter than the Planck length. Since the laws of physics at such short distances are unknown, some find Hawking's original calculation unconvincing.
The trans-Planckian problem is nowadays mostly considered a mathematical artifact of horizon calculations. The same effect occurs for regular matter falling onto a white hole solution. Matter that falls on the white hole accumulates on it, but has no future region into which it can go. Tracing the future of this matter, it is compressed onto the final singular endpoint of the white hole evolution, into a trans-Planckian region. The reason for these types of divergences is that modes that end at the horizon from the point of view of outside coordinates are singular in frequency there. The only way to determine what happens classically is to extend in some other coordinates that cross the horizon.
There exist alternative physical pictures that give the Hawking radiation in which the trans-Planckian problem is addressed. The key point is that similar trans-Planckian problems occur when the modes occupied with Unruh radiation are traced back in time. In the Unruh effect, the magnitude of the temperature can be calculated from ordinary Minkowski field theory, and is not controversial.
Large extra dimensions
The formulas from the previous section are applicable only if the laws of gravity are approximately valid all the way down to the Planck scale. In particular, for black holes with masses below the Planck mass (~2.2×10⁻⁸ kg), they result in impossible lifetimes below the Planck time (~5.4×10⁻⁴⁴ s). This is normally seen as an indication that the Planck mass is the lower limit on the mass of a black hole.
In a model with large extra dimensions (10 or 11), the values of Planck constants can be radically different, and the formulas for Hawking radiation have to be modified as well. In particular, the lifetime of a micro black hole with a radius below the scale of the extra dimensions is given by equation 9 in Cheung (2002) and equations 25 and 26 in Carr (2005).
$$\tau \sim \frac{1}{M_*}\left(\frac{M_{\mathrm{BH}}}{M_*}\right)^{\frac{n+3}{n+1}},$$
where $M_*$ is the low-energy scale, which could be as low as a few TeV, and $n$ is the number of large extra dimensions. This formula is now consistent with black holes as light as a few TeV, with lifetimes on the order of the "new Planck time" ~.
In loop quantum gravity
A detailed study of the quantum geometry of a black hole event horizon has been made using loop quantum gravity. Loop-quantization does not reproduce the result for black hole entropy originally discovered by Bekenstein and Hawking, unless the value of a free parameter is set to cancel out various constants such that the Bekenstein–Hawking entropy formula is reproduced. However, quantum gravitational corrections to the entropy and radiation of black holes have been computed based on the theory.
Based on the fluctuations of the horizon area, a quantum black hole exhibits deviations from the Hawking radiation spectrum that would be observable were X-rays from Hawking radiation of evaporating primordial black holes to be observed. The quantum effects are centered at a set of discrete and unblended frequencies highly pronounced on top of the Hawking spectrum.
Experimental observation
Astronomical search
In June 2008, NASA launched the Fermi space telescope, which is searching for the terminal gamma-ray flashes expected from evaporating primordial black holes. As of 1 January 2024, none have been detected.
Heavy-ion collider physics
If speculative large extra dimension theories are correct, then CERN's Large Hadron Collider may be able to create micro black holes and observe their evaporation. No such micro black hole has been observed at CERN.
Experimental
Under experimentally achievable conditions for gravitational systems, this effect is too small to be observed directly. It was predicted that Hawking radiation could be studied by analogy using sonic black holes, in which sound perturbations are analogous to light in a gravitational black hole and the flow of an approximately perfect fluid is analogous to gravity (see Analog models of gravity). Observations of Hawking radiation have been reported in sonic black holes employing Bose–Einstein condensates.
In September 2010 an experimental set-up created a laboratory "white hole event horizon" that the experimenters claimed was shown to radiate an optical analog to Hawking radiation. However, the results remain unverified and debatable, and its status as a genuine confirmation remains in doubt.
See also
Black hole information paradox
Black hole thermodynamics
Black hole starship
Blandford–Znajek process and Penrose process, other extractions of black-hole energy
Gibbons–Hawking effect
Thorne–Hawking–Preskill bet
Unruh effect
References
Further reading
External links
Hawking radiation calculator tool
The case for mini black holes A. Barrau & J. Grain explain how the Hawking radiation could be detected at colliders
Black holes
Quantum field theory
Radiation
Astronomical hypotheses
Hypothetical processes
1974 introductions | Hawking radiation | [
"Physics",
"Astronomy"
] | 3,844 | [
"Quantum field theory",
"Physical phenomena",
"Black holes",
"Hypotheses in physics",
"Physical quantities",
"Astronomical hypotheses",
"Theoretical physics",
"Unsolved problems in physics",
"Quantum mechanics",
"Astrophysics",
"Astronomical controversies",
"Density",
"Stellar phenomena",
... |
173,838 | https://en.wikipedia.org/wiki/Thorne%E2%80%93%C5%BBytkow%20object | A Thorne–Żytkow object (TŻO or TZO), also known as a hybrid star, is a conjectured type of star wherein a red giant or red supergiant contains a neutron star at its core, formed from the collision of the giant with the neutron star. Such objects were hypothesized by Kip Thorne and Anna Żytkow in 1977. In 2014, it was discovered that the star HV 2112, located in the Small Magellanic Cloud (SMC), was a strong candidate, though this view has since been refuted. Another possible candidate is the star HV 11417, also located in the SMC.
Formation
A Thorne–Żytkow object would be formed when a neutron star collides with another star, often a red giant or supergiant. The colliding objects can simply be wandering stars, though this is only likely to occur in extremely crowded globular clusters. Alternatively, the neutron star could form in a binary system when one of the two stars goes supernova. Because no supernova is perfectly symmetric, and because the binding energy of the binary changes with the mass lost in the supernova, the neutron star will be left with some velocity relative to its original orbit. This kick may cause its new orbit to intersect with its companion, or, if its companion is a main-sequence star, it may be engulfed when its companion evolves into a red giant.
Once the neutron star enters the red giant, drag between the neutron star and the outer, diffuse layers of the red giant causes the binary star system's orbit to decay, and the neutron star and core of the red giant spiral inward toward one another. Depending on their initial separation, this process may take hundreds of years. When the two finally collide, the neutron star and red giant core will merge. If their combined mass exceeds the Tolman–Oppenheimer–Volkoff limit, then the two will collapse into a black hole. Otherwise, the two will coalesce into a single neutron star.
If a neutron star and a white dwarf merge, this could form a Thorne–Żytkow object with the properties of an R Coronae Borealis variable.
Properties
The surface of the neutron star is very hot, with temperatures exceeding 10⁹ K, hotter than the cores of all but the most massive stars. This heat is dominated either by nuclear fusion in the accreting gas or by compression of the gas by the neutron star's gravity. Because of the high temperature, unusual nuclear processes may take place as the envelope of the red giant falls onto the neutron star's surface. Hydrogen may fuse to produce a different mixture of isotopes than it does in ordinary stellar nucleosynthesis, and some astronomers have proposed that the rapid proton nucleosynthesis that occurs in X-ray bursts also takes place inside Thorne–Żytkow objects.
Observationally, a Thorne–Żytkow object may resemble a red supergiant, or, if it is hot enough to blow off the hydrogen-rich surface layers, a nitrogen-rich Wolf–Rayet star (type WN8).
A TŻO has an estimated lifespan of 10⁵–10⁶ years. Given this lifespan, it is possible that between 20 and 200 Thorne–Żytkow objects currently exist in the Milky Way.
The only way to unambiguously determine whether or not a star is a TŻO is a multi-messenger detection of both the gravitational waves of the inner neutron star and an optical spectrum of the metals atypical of a normal red supergiant. It is possible to detect pre-existing TŻOs with current LIGO detectors; the neutron star core would emit a continuous wave.
Dissolution
It has been theorized that mass loss will eventually end the TŻO stage, with the remaining envelope converted to a disk, resulting in the formation of a neutron star with a massive accretion disk. These neutron stars may form the population of isolated pulsars with accretion disks. The massive accretion disk may also collapse into a new star, becoming a stellar companion to the neutron star. The neutron star may also accrete sufficient material to collapse into a black hole.
Observation history
In 2014, a team led by Emily Levesque argued that the star HV 2112 had unusually high abundances of elements such as molybdenum, rubidium, lithium, and calcium, and a high luminosity. Since both are expected characteristics of Thorne–Żytkow objects, this led the team to suggest that HV 2112 might be the first discovery of a TZO. However, this claim was challenged in a 2018 paper by Emma Beasor and collaborators, who argued that there is no evidence for HV 2112 having any unusual abundance patterns beyond a possible enrichment of lithium and that its luminosity is too low. They put forth another candidate, HV 11417, based on an apparent over-abundance of rubidium and a similar luminosity as HV 2112.
List of candidate TŻOs
List of candidate former and future TŻOs
See also
Quasar
Quasi-star
References
Star types
Stellar evolution
Red giants
Neutron stars
1977 in science
Hypothetical stars | Thorne–Żytkow object | [
"Physics",
"Astronomy"
] | 1,063 | [
"Astronomical classification systems",
"Star types",
"Astrophysics",
"Stellar evolution"
] |
173,900 | https://en.wikipedia.org/wiki/Epicenter | The epicenter (), epicentre, or epicentrum in seismology is the point on the Earth's surface directly above a hypocenter or focus, the point where an earthquake or an underground explosion originates.
Determination
The primary purpose of a seismometer is to locate the initiating points of earthquake epicenters. The secondary purpose, determining the 'size' or magnitude, can only be calculated after the precise location is known.
The earliest seismographs were designed to give a sense of the direction of the first motions from an earthquake. The Chinese frog seismograph would have dropped its ball in the general compass direction of the earthquake, assuming a strong positive pulse. We now know that first motions can be in almost any direction depending on the type of initiating rupture (focal mechanism).
The first refinement that allowed a more precise determination of the location was the use of a time scale. Instead of merely noting, or recording, the absolute motions of a pendulum, the displacements were plotted on a moving graph, driven by a clock mechanism. This was the first seismogram, which allowed precise timing of the first ground motion, and an accurate plot of subsequent motions.
From the first seismograms, as seen in the figure, it was noticed that the trace was divided into two major portions. The first seismic wave to arrive was the P wave, followed closely by the S wave. Knowing the relative 'velocities of propagation', it was a simple matter to calculate the distance of the earthquake.
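For illustration: with P- and S-wave speeds $v_p$ and $v_s$, an S–P arrival-time difference $\Delta t$ implies a distance $d = \Delta t\,\frac{v_p v_s}{v_p - v_s}$. A minimal sketch follows; the 8.0 and 4.5 km/s velocities are illustrative crustal averages assumed here, not values from this article:

```python
def epicentral_distance_km(sp_delay_s, v_p=8.0, v_s=4.5):
    """Distance implied by an S-P arrival-time difference.

    d / v_s - d / v_p = sp_delay  =>  d = sp_delay * v_p * v_s / (v_p - v_s)
    Velocities in km/s; result in km.
    """
    return sp_delay_s * v_p * v_s / (v_p - v_s)

print(epicentral_distance_km(30.0))  # ~309 km for a 30 s S-P delay
```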
One seismograph would give the distance, but that could be plotted as a circle, with an infinite number of possibilities. Two seismographs would give two intersecting circles, with two possible locations. Only with a third seismograph would there be a precise location.
Modern earthquake location still requires a minimum of three seismometers. Most likely, there are many, forming a seismic array. The emphasis is on precision since much can be learned about the fault mechanics and seismic hazard, if the locations can be determined to be within a kilometer or two, for small earthquakes. For this, computer programs use an iterative process, involving a 'guess and correction' algorithm. As well, a very good model of the local crustal velocity structure is required: seismic velocities vary with the local geology. For P waves, the relation between velocity and bulk density of the medium has been quantified in Gardner's relation.
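Gardner's relation is commonly quoted in the form below (a standard empirical fit; the coefficient assumes $V_p$ in m/s and $\rho$ in g/cm³, and is the usual textbook value rather than one given in this article):

$$\rho \;\approx\; 0.31\,V_p^{1/4}.$$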
Surface damage
Before the instrumental period of earthquake observation, the epicenter was thought to be the location where the greatest damage occurred, but the subsurface fault rupture may be long and spread surface damage across the entire rupture zone. As an example, in the magnitude 7.9 Denali earthquake of 2002 in Alaska, the epicenter was at the western end of the rupture, but the greatest damage was about away at the eastern end. Focal depths of earthquakes occurring in continental crust mostly range from . Continental earthquakes below are rare, whereas in subduction zones earthquakes can originate at depths deeper than .
Epicentral distance
During an earthquake, seismic waves propagate in all directions from the hypocenter. Seismic shadowing occurs on the opposite side of the Earth from the earthquake epicenter because the planet's liquid outer core refracts the longitudinal or compressional waves (P waves) while it absorbs the transverse or shear waves (S waves). Outside the seismic shadow zone, both types of wave can be detected, but because of their different velocities and paths through the Earth, they arrive at different times. By measuring the time difference on any seismograph and the distance on a travel-time graph at which the P wave and S wave have the same separation, geologists can calculate the distance to the quake's epicenter. This distance is called the epicentral distance, commonly measured in ° (degrees) and denoted as Δ (delta) in seismology. Láska's empirical rule provides an approximation of epicentral distance in the range of 2,000–10,000 km.
Once distances from the epicenter have been calculated from at least three seismographic measuring stations, the point can be located, using trilateration.
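A minimal sketch of this step, combining trilateration with the 'guess and correction' iteration mentioned above (the station coordinates and distances are hypothetical, and the flat two-dimensional geometry is a simplification of what real location codes do on a sphere):

```python
import numpy as np

# Hypothetical station coordinates (km) and measured epicentral distances (km)
stations  = np.array([[0.0, 0.0], [120.0, 10.0], [40.0, 150.0]])
distances = np.array([100.0, 92.2, 72.8])

def locate(stations, distances, guess=(50.0, 50.0), iters=20):
    """Iterative Gauss-Newton 'guess and correction' epicenter fit."""
    x = np.array(guess, dtype=float)
    for _ in range(iters):
        diff = x - stations                # station-to-guess vectors
        r = np.linalg.norm(diff, axis=1)   # distances predicted by the guess
        misfit = r - distances             # prediction error per station
        J = diff / r[:, None]              # Jacobian d(r)/d(x)
        step, *_ = np.linalg.lstsq(J, -misfit, rcond=None)
        x += step                          # correct the guess
    return x

print(locate(stations, distances))  # -> approximately [60. 80.]
```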
Epicentral distance is also used in calculating seismic magnitudes as developed by Richter and Gutenberg.
Fault rupture
The point at which fault slipping begins is referred to as the focus of the earthquake. The fault rupture begins at the focus and then expands along the fault surface. The rupture stops where the stresses become insufficient to continue breaking the fault (because the rocks are stronger) or where the rupture enters ductile material. The magnitude of an earthquake is related to the total area of its fault rupture. Most earthquakes are small, with rupture dimensions less than the depth of the focus, so the rupture does not break the surface, but in high-magnitude, destructive earthquakes, surface breaks are common. Fault ruptures in large earthquakes can extend for more than . When a fault ruptures unilaterally (with the epicenter at or near the end of the fault break) the waves are stronger in one direction along the fault.
Macroseismic epicenter
The macroseismic epicenter is the best estimate of the location of the epicenter derived without instrumental data. This may be estimated using intensity data, information about foreshocks and aftershocks, knowledge of local fault systems or extrapolations from data regarding similar earthquakes. For historical earthquakes that have not been instrumentally recorded, only a macroseismic epicenter can be given.
Etymology
The word is derived from the Neo-Latin noun epicentrum, the latinisation of the ancient Greek adjective ἐπίκεντρος (), "occupying a cardinal point, situated on a centre", from ἐπί (epi) "on, upon, at" and κέντρον (kentron) "centre". The term was coined by Irish seismologist Robert Mallet.
It is also used to mean "center of activity", as in "Travel is restricted in the Chinese province thought to be the epicentre of the SARS outbreak." Garner's Modern American Usage gives several examples of use in which "epicenter" is used to mean "center". Garner also refers to a William Safire article in which Safire quotes a geophysicist as attributing the use of the term to "spurious erudition on the part of writers combined with scientific illiteracy on the part of copy editors". Garner has speculated that these misuses may just be "metaphorical descriptions of focal points of unstable and potentially destructive environments."
References
Seismology
Geometric centers
Geographic position | Epicenter | [
"Physics",
"Mathematics"
] | 1,404 | [
"Point (geometry)",
"Geographic position",
"Geometric centers",
"Position",
"Symmetry"
] |
173,937 | https://en.wikipedia.org/wiki/Cosmological%20principle | In modern physical cosmology, the cosmological principle is the notion that the spatial distribution of matter in the universe is uniformly isotropic and homogeneous when viewed on a large enough scale, since the forces are expected to act equally throughout the universe on a large scale, and should, therefore, produce no observable inequalities in the large-scale structuring over the course of evolution of the matter field that was initially laid down by the Big Bang.
Definition
Astronomer William Keel explains:
The cosmological principle is usually stated formally as 'Viewed on a sufficiently large scale, the properties of the universe are the same for all observers.' This amounts to the strongly philosophical statement that the part of the universe which we can see is a fair sample, and that the same physical laws apply throughout. In essence, this in a sense says that the universe is knowable and is playing fair with scientists.
The cosmological principle depends on a definition of "observer", and contains an implicit qualification and two testable consequences.
"Observers" means any observer at any location in the universe, not simply any human observer at any location on Earth: as Andrew Liddle puts it, "the cosmological principle [means that] the universe looks the same whoever and wherever you are."
The qualification is that variation in physical structures can be overlooked, provided this does not imperil the uniformity of conclusions drawn from observation: the Sun is different from the Earth, our galaxy is different from a black hole, some galaxies advance toward rather than recede from us, and the universe has a "foamy" texture of galaxy clusters and voids, but none of these different structures appears to violate the basic laws of physics.
The two testable structural consequences of the cosmological principle are homogeneity and isotropy. Homogeneity means that the same observational evidence is available to observers at different locations in the universe ("the part of the universe which we can see is a fair sample"). Isotropy means that the same observational evidence is available by looking in any direction in the universe ("the same physical laws apply throughout"). The principles are distinct but closely related, because a universe that appears isotropic from any two (for a spherical geometry, three) locations must also be homogeneous.
Origin
The cosmological principle is first clearly asserted in the Philosophiæ Naturalis Principia Mathematica (1687) of Isaac Newton. In contrast to some earlier classical or medieval cosmologies, in which Earth rested at the center of universe, Newton conceptualized the Earth as a sphere in orbital motion around the Sun within an empty space that extended uniformly in all directions to immeasurably large distances. He then showed, through a series of mathematical proofs on detailed observational data of the motions of planets and comets, that their motions could be explained by a single principle of "universal gravitation" that applied as well to the orbits of the Galilean moons around Jupiter, the Moon around the Earth, the Earth around the Sun, and to falling bodies on Earth. That is, he asserted the equivalent material nature of all bodies within the Solar System, the identical nature of the Sun and distant stars and thus the uniform extension of the physical laws of motion to a great distance beyond the observational location of Earth itself.
Implications
Since the 1990s, observations assuming the cosmological principle have concluded that around 68% of the mass–energy density of the universe can be attributed to dark energy, which led to the development of the ΛCDM model.
Observations show that more distant galaxies are closer together and have lower content of chemical elements heavier than lithium. Applying the cosmological principle, this suggests that heavier elements were not created in the Big Bang but were produced by nucleosynthesis in giant stars and expelled across a series of supernovae and new star formation from the supernova remnants, which means heavier elements would accumulate over time. Another observation is that the furthest galaxies (earlier time) are often more fragmentary, interacting and unusually shaped than local galaxies (recent time), suggesting evolution in galaxy structure as well.
A related implication of the cosmological principle is that the largest discrete structures in the universe are in mechanical equilibrium. Homogeneity and isotropy of matter at the largest scales would suggest that the largest discrete structures are parts of a single indiscrete form, like the crumbs which make up the interior of a cake. At extreme cosmological distances, the property of mechanical equilibrium in surfaces lateral to the line of sight can be empirically tested; however, under the assumption of the cosmological principle, it cannot be detected parallel to the line of sight (see timeline of the universe).
Cosmologists agree that in accordance with observations of distant galaxies, a universe must be non-static if it follows the cosmological principle. In 1922, Alexander Friedmann set out a variant of Albert Einstein's equations of general relativity that describe the dynamics of a homogeneous isotropic universe. Independently, Georges Lemaître derived in 1927 the equations of an expanding universe from the general relativity equations. Thus, a non-static universe is also implied, independent of observations of distant galaxies, as the result of applying the cosmological principle to general relativity.
Criticism
Karl Popper criticized the cosmological principle on the grounds that it makes "our lack of knowledge a principle of knowing something". He summarized his position as:
the "cosmological principles" were, I fear, dogmas that should not have been proposed.
Observations
Although the universe is inhomogeneous at smaller scales, according to the ΛCDM model it ought to be isotropic and statistically homogeneous on scales larger than 250 million light years. However, recent findings (the Axis of Evil for example) have suggested that violations of the cosmological principle exist in the universe and thus have called the ΛCDM model into question, with some authors suggesting that the cosmological principle is now obsolete and the Friedmann–Lemaître–Robertson–Walker metric breaks down in the late universe.
Violations of isotropy
The cosmic microwave background (CMB) is predicted by the ΛCDM model to be isotropic, that is to say that its intensity is about the same whichever direction we look at. Data from the Planck mission shows hemispheric bias in two respects: one with respect to average temperature (i.e. temperature fluctuations), the second with respect to larger variations in the degree of perturbations (i.e. densities). The collaboration noted that these features are not strongly statistically inconsistent with isotropy. Some authors say that the universe around Earth is isotropic at high significance by studies of the cosmic microwave background temperature maps. There are, however, claims of isotropy violations from galaxy clusters, quasars, and type Ia supernovae.
Violations of homogeneity
The cosmological principle implies that at a sufficiently large scale, the universe is homogeneous. Based on N-body simulations in a ΛCDM universe, Yadav and his colleagues showed that the spatial distribution of galaxies is statistically homogeneous if averaged over scales of 260/h Mpc or more.
A number of observations have been reported to be in conflict with predictions of maximal structure sizes:
The Clowes–Campusano LQG, discovered in 1991, has a length of 580 Mpc, and is marginally larger than the consistent scale.
The Sloan Great Wall, discovered in 2003, has a length of 423 Mpc, which is only just consistent with the cosmological principle.
U1.11, a large quasar group discovered in 2011, has a length of 780 Mpc, and is two times larger than the upper limit of the homogeneity scale.
The Huge-LQG, discovered in 2012, is three times longer than, and twice as wide as is predicted possible according to these current models, and so challenges our understanding of the universe on large scales.
In November 2013, a new structure 10 billion light years away measuring 2000–3000 Mpc (more than seven times that of the Sloan Great Wall) was discovered, the Hercules–Corona Borealis Great Wall, putting further doubt on the validity of the cosmological principle.
In September 2020, a 4.9σ conflict was found between the kinematic explanation of the CMB dipole and the measurement of the dipole in the angular distribution of a flux-limited, all-sky sample of 1.36 million quasars.
In June 2021, the Giant Arc was discovered, a structure spanning approximately 1000 Mpc. It is located 2820 Mpc away and consists of galaxies, galactic clusters, gas, and dust.
In January 2024, the Big Ring was discovered. It is located 9.2 billion light years away from Earth and has a diameter of 1.3 billion light years, appearing in the sky around the size of 15 full Moons as seen from Earth.
However, as pointed out by Seshadri Nadathur in 2013 using statistical properties, the existence of structures larger than the homogeneous scale (260/h Mpc by Yadav's estimation) does not necessarily violate the cosmological principle in the ΛCDM model.
CMB dipole
The cosmic microwave background (CMB) provides a snapshot of a largely isotropic and homogeneous universe. The largest scale feature of the CMB is the dipole anisotropy; it is typically subtracted from maps due to its large amplitude. The standard interpretation of the dipole is that it is due to the Doppler effect caused by the motion of the solar system with respect to the CMB rest-frame.
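As an order-of-magnitude check (the dipole amplitude and mean temperature are standard published values assumed here, not figures taken from this article), the leading-order Doppler relation $\Delta T/T = v/c$ gives the implied solar-system velocity:

```python
c = 2.998e5          # speed of light, km/s
T = 2.725            # mean CMB temperature, K
dT = 3.36e-3         # CMB dipole amplitude, K
v = c * dT / T       # leading-order Doppler: dT/T = v/c
print(v)             # ~370 km/s relative to the CMB rest-frame
```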
Several studies have reported dipoles in the large scale distribution of galaxies that align with the CMB dipole direction, but indicate a larger amplitude than would be caused by the CMB dipole velocity. A similar dipole is seen in data of radio galaxies; however, the amplitude of the dipole depends on the observing frequency, showing that these anomalous features cannot be purely kinematic. Other authors have found radio dipoles consistent with the CMB expectation. Further claims of anisotropy along the CMB dipole axis have been made with respect to the Hubble diagram of type Ia supernovae and quasars. Separately, the CMB dipole direction has emerged as a preferred direction in some studies of alignments in quasar polarizations, strong lensing time delay, type Ia supernovae, and standard candles. Some authors have argued that the correlation of distant effects with the dipole direction may indicate that its origin is not kinematic.
Alternatively, Planck data has been used to estimate the velocity with respect to the CMB independently of the dipole, by measuring the subtle aberrations and distortions of fluctuations caused by relativistic beaming and separately using the Sunyaev-Zeldovich effect. These studies found a velocity consistent with the value obtained from the dipole, indicating it is consistent with being entirely kinematic. Measurements of the velocity field of galaxies in the local universe show that on short scales galaxies are moving with the local group, and that the average mean velocity decreases with increasing distance. This matches the expectation that, if the CMB dipole is due to the local peculiar velocity field, the flow becomes more homogeneous on large scales. Surveys of the local volume have been used to reveal a low density region in the opposite direction to the CMB dipole, potentially explaining the origin of the local bulk flow.
Perfect cosmological principle
The perfect cosmological principle is an extension of the cosmological principle, and states that the universe is homogeneous and isotropic in space and time. In this view the universe looks the same everywhere (on the large scale), the same as it always has and always will. The perfect cosmological principle underpins steady state theory and emerges from chaotic inflation theory.
See also
Background independence
Copernican principle
End of Greatness
Friedmann–Lemaître–Robertson–Walker metric
Large-scale structure of the cosmos
Expansion of the universe
Redshift
References
Physical cosmological concepts
Principles
Concepts in astronomy | Cosmological principle | [
"Physics",
"Astronomy"
] | 2,505 | [
"Concepts in astronomy",
"Concepts in astrophysics",
"Physical cosmological concepts"
] |
174,030 | https://en.wikipedia.org/wiki/Western%20blot | The western blot (sometimes called the protein immunoblot), or western blotting, is a widely used analytical technique in molecular biology and immunogenetics to detect specific proteins in a sample of tissue homogenate or extract. Besides detecting the proteins, this technique is also utilized to visualize, distinguish, and quantify the different proteins in a complicated protein combination.
Western blot technique uses three elements to achieve its task of separating a specific protein from a complex: separation by size, transfer of protein to a solid support, and marking target protein using a primary and secondary antibody to visualize. A synthetic or animal-derived antibody (known as the primary antibody) is created that recognizes and binds to a specific target protein. The electrophoresis membrane is washed in a solution containing the primary antibody, before excess antibody is washed off. A secondary antibody is added which recognizes and binds to the primary antibody. The secondary antibody is visualized through various methods such as staining, immunofluorescence, and radioactivity, allowing indirect detection of the specific target protein.
Other related techniques include dot blot analysis, quantitative dot blot, immunohistochemistry and immunocytochemistry, where antibodies are used to detect proteins in tissues and cells by immunostaining, and enzyme-linked immunosorbent assay (ELISA).
The name western blot is a play on the Southern blot, a technique for DNA detection named after its inventor, English biologist Edwin Southern. Similarly, detection of RNA is termed as northern blot. The term western blot was given by W. Neal Burnette in 1981, although the method itself was independently invented in 1979 by Jaime Renart, Jakob Reiser, and George Stark at Stanford University, and by Harry Towbin, Theophil Staehelin, and Julian Gordon at the Friedrich Miescher Institute in Basel, Switzerland. The Towbin group also used secondary antibodies for detection, thus resembling the actual method that is almost universally used today. Between 1979 and 2019 "it has been mentioned in the titles, abstracts, and keywords of more than 400,000 PubMed-listed publications" and may still be the most-used protein-analytical technique.
Applications
The western blot is extensively used in biochemistry for the qualitative detection of single proteins and protein-modifications (such as post-translational modifications). At least 8–9% of all protein-related publications are estimated to apply western blots. It is used as a general method to identify the presence of a specific single protein within a complex mixture of proteins. A semi-quantitative estimation of a protein can be derived from the size and colour intensity of a protein band on the blot membrane. In addition, applying a dilution series of a purified protein of known concentrations can be used to allow a more precise estimate of protein concentration. The western blot is routinely used for verification of protein production after cloning. It is also used in medical diagnostics, e.g., in the HIV test or BSE-Test.
The confirmatory HIV test employs a western blot to detect anti-HIV antibody in a human serum sample. Proteins from known HIV-infected cells are separated and blotted on a membrane as above. Then, the serum to be tested is applied in the primary antibody incubation step; free antibody is washed away, and a secondary anti-human antibody linked to an enzyme signal is added. The stained bands then indicate the proteins to which the patient's serum contains antibody. A western blot is also used as the definitive test for variant Creutzfeldt–Jakob disease, a type of prion disease linked to the consumption of contaminated beef from cattle with bovine spongiform encephalopathy (BSE, commonly referred to as 'mad cow disease'). Another application is in the diagnosis of tularemia. An evaluation of the western blot's ability to detect antibodies against F. tularensis revealed that its sensitivity is almost 100% and the specificity is 99.6%. Some forms of Lyme disease testing employ western blotting. A western blot can also be used as a confirmatory test for Hepatitis B infection and HSV-2 (Herpes Type 2) infection. In veterinary medicine, a western blot is sometimes used to confirm FIV+ status in cats.
Further applications of the western blot technique include its use by the World Anti-Doping Agency (WADA). Blood doping is the misuse of certain techniques and/or substances to increase one's red blood cell mass, which allows the body to transport more oxygen to muscles and therefore increase stamina and performance. There are three widely known substances or methods used for blood doping, namely, erythropoietin (EPO), synthetic oxygen carriers and blood transfusions. Each is prohibited under WADA's List of Prohibited Substances and Methods. The western blot technique was used during the 2014 FIFA World Cup in the anti-doping campaign for that event. In total, over 1000 samples were collected and analysed by Reichel et al. in the WADA-accredited Laboratory of Lausanne, Switzerland. Recent research utilizing the western blot technique showed an improved detection of EPO in blood and urine based on novel Velum SAR precast horizontal gels optimized for routine analysis. With the adoption of the horizontal SAR-PAGE in combination with the precast film-supported Velum SAR gels, the discriminatory capacity of micro-dose application of rEPO was significantly enhanced.
Identification of protein localization across cells
For medication development, the identification of therapeutic targets, and biological research, it is essential to understand where proteins are located within a cell. The subcellular locations of proteins and their functions are closely related. This relationship suggests that when proteins move, their functions may change or acquire new characteristics. A protein's subcellular localization can be determined using a variety of methods, and numerous efficient and reliable computational tools and strategies have been created and used to identify protein subcellular localization. With the aid of subcellular fractionation methods, western blotting (WB) remains an important fundamental method for the investigation and comprehension of protein localization.
Epitope mapping
Due to their various epitopes, antibodies have gained interest in both basic and clinical research. The foundation of antibody characterization and validation is epitope mapping, the procedure of identifying an antibody's binding sites (epitopes) on the target protein. Finding the binding epitope of an antibody is essential for the discovery and development of novel vaccines, diagnostics, and therapeutics. As a result, various methods for mapping antibody epitopes have been created. At present, western blotting's specificity is the main feature that sets it apart from other epitope mapping techniques. Western blotting has been applied to epitope mapping in several settings, for example on human skin samples and on hemorrhagic disease virus.
Procedure
The western blot method is composed of gel electrophoresis to separate native proteins by 3-D structure or denatured proteins by the length of the polypeptide, followed by an electrophoretic transfer onto a membrane (mostly PVDF or nitrocellulose) and an immunostaining procedure to visualize a certain protein on the blot membrane.
Sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS-PAGE) is generally used for the denaturing electrophoretic separation of proteins. Sodium dodecyl sulfate (SDS) is generally used in the buffer (as well as in the gel) in order to give all proteins present a uniform negative charge, since proteins can be positively, negatively, or neutrally charged. Prior to electrophoresis, protein samples are often boiled to denature the proteins present. This ensures that proteins are separated based on size and prevents proteases (enzymes that break down proteins) from degrading samples. Following electrophoretic separation, the proteins are transferred to a membrane (typically nitrocellulose or PVDF). The membrane is often then stained with Ponceau S in order to visualize the proteins on the blot and to confirm that a proper transfer occurred. Next, the membrane is blocked with milk (or other blocking agents) to prevent non-specific antibody binding, and then stained with antibodies specific to the target protein. Lastly, the membrane is stained with a secondary antibody that recognizes the first antibody, which can then be used for detection by a variety of methods. The gel electrophoresis step is included in western blot analysis to resolve the issue of the cross-reactivity of antibodies.
Sample preparation
As a significant step in conducting a western blot, sample preparation has to be done effectively, since the interpretation of this assay depends on the protein preparation, which comprises protein extraction and purification. To achieve efficient protein extraction, a proper homogenization method needs to be chosen, since it is responsible for bursting the cell membrane and releasing the intracellular components. In addition, a suitable lysis buffer is needed to acquire substantial amounts of the target protein, because the buffer drives protein solubilization and prevents protein degradation. After sample preparation is complete, the protein content is ready to be separated by gel electrophoresis.
Gel electrophoresis
The proteins of the sample are separated using gel electrophoresis. Separation of proteins may be by isoelectric point (pI), molecular weight, electric charge, or a combination of these factors. The nature of the separation depends on the treatment of the sample and the nature of the gel.
By far the most common type of gel electrophoresis employs polyacrylamide gels and buffers loaded with sodium dodecyl sulfate (SDS). SDS-PAGE (SDS-polyacrylamide gel electrophoresis) maintains polypeptides in a denatured state once they have been treated with strong reducing agents to remove secondary and tertiary structure (e.g. disulfide bonds [S-S] to sulfhydryl groups [SH and SH]) and thus allows separation of proteins by their molecular mass. Sampled proteins become covered in the negatively charged SDS, effectively becoming anionic, and migrate towards the positively charged (higher voltage) anode (usually having a red wire) through the acrylamide mesh of the gel. Smaller proteins migrate faster through this mesh, and the proteins are thus separated according to size (usually measured in kilodaltons, kDa). The concentration of acrylamide determines the resolution of the gel – the greater the acrylamide concentration, the better the resolution of lower molecular weight proteins. The lower the acrylamide concentration, the better the resolution of higher molecular weight proteins. Proteins travel only in one dimension along the gel for most blots.
Samples are loaded into wells in the gel. One lane is usually reserved for a marker or ladder, which is a commercially available mixture of proteins of known molecular weights, typically stained so as to form visible, coloured bands. When voltage is applied along the gel, proteins migrate through it at different speeds dependent on their size. These different rates of advancement (different electrophoretic mobilities) separate into bands within each lane. Protein bands can then be compared to the ladder bands, allowing estimation of the protein's molecular weight.
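Because migration distance is roughly linear in the logarithm of molecular weight over a gel's resolving range, the ladder comparison can be made quantitative with a simple log-linear fit. A sketch (ladder values and band position are invented for illustration):

```python
import numpy as np

# Hypothetical ladder: migration distance (mm) versus known size (kDa).
ladder_mm = np.array([10.0, 18.0, 27.0, 36.0, 45.0])
ladder_kda = np.array([250.0, 100.0, 50.0, 25.0, 10.0])

# Migration is approximately linear in log10(MW) within the resolving range.
coeffs = np.polyfit(ladder_mm, np.log10(ladder_kda), 1)

band_mm = 30.0  # measured migration of the unknown band
est_kda = 10 ** np.polyval(coeffs, band_mm)
print(f"estimated size: {est_kda:.0f} kDa")
```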
It is also possible to use a two-dimensional gel which spreads the proteins from a single sample out in two dimensions. Proteins are separated according to isoelectric point (pH at which they have a neutral net charge) in the first dimension, and according to their molecular weight in the second dimension.
Transfer
To make the proteins accessible to antibody detection, they are moved from within the gel onto a membrane, a solid support, which is an essential part of the process. There are two common types of membrane: nitrocellulose (NC) and polyvinylidene difluoride (PVDF). NC membranes have a high affinity for proteins and good retention abilities. However, NC is brittle and does not allow the blot to be re-probed, whereas PVDF membranes do. The most commonly used method for transferring the proteins is called electroblotting. Electroblotting uses an electric current to pull the negatively charged proteins from the gel towards the positively charged anode, and into the PVDF or NC membrane. The proteins move from within the gel onto the membrane while maintaining the organization they had within the gel. An older method of transfer involves placing a membrane on top of the gel, and a stack of filter papers on top of that. The entire stack is placed in a buffer solution which moves up the paper by capillary action, bringing the proteins with it. In practice this method is not commonly used because of its lengthy procedure time.
As a result of either transfer process, the proteins are exposed on a thin membrane layer for detection. Both varieties of membrane are chosen for their non-specific protein-binding properties (i.e. they bind all proteins equally well). Protein binding is based upon hydrophobic interactions, as well as charged interactions between the membrane and protein. Nitrocellulose membranes are cheaper than PVDF, but are far more fragile and cannot withstand repeated probing.
Total protein staining
Total protein staining allows the total protein that has been successfully transferred to the membrane to be visualised, allowing the user to check the uniformity of protein transfer and to perform subsequent normalization of the target protein to the actual protein amount per lane. In the classical procedure, normalization was performed with a so-called "loading control" based on immunostaining of housekeeping proteins, but the field has recently been moving toward total protein staining because of its multiple benefits. At least seven different approaches for total protein staining have been described for western blot normalization: Ponceau S, stain-free techniques, Sypro Ruby, Epicocconone, Coomassie R-350, Amido Black, and Cy5. To avoid signal noise, total protein staining should be performed before blocking of the membrane. Nevertheless, post-antibody stainings have been described as well.
Blocking
Since the membrane has been chosen for its ability to bind protein, and both antibodies and the target are proteins, steps must be taken to prevent interactions between the membrane and the antibody used for detection of the target protein. Blocking of non-specific binding is achieved by placing the membrane in a dilute solution of protein – typically 3–5% bovine serum albumin (BSA) or non-fat dry milk (both are inexpensive) in tris-buffered saline (TBS) or I-Block, with a minute percentage (0.1%) of detergent such as Tween 20 or Triton X-100. Although non-fat dry milk is preferred due to its availability, an appropriate blocking solution must be chosen because not all proteins in milk are compatible with all detection labels. The protein in the dilute solution attaches to the membrane in all places where the target proteins have not attached. Thus, when the antibody is added, it cannot bind to the membrane itself, and the only available binding site is the specific target protein. This reduces background in the final product of the western blot, leading to clearer results, and eliminates false positives.
Incubation
During the detection process, the membrane is "probed" for the protein of interest with a modified antibody which is linked to a reporter enzyme; when exposed to an appropriate substrate, this enzyme drives a colorimetric reaction and produces a colour. For a variety of reasons, this traditionally takes place in a two-step process, although there are now one-step detection methods available for certain applications.
Primary antibody
The primary antibodies are generated when a host species or immune cell culture is exposed to the protein of interest (or a part thereof). Normally, this is part of the immune response; here, the antibodies are harvested and used as sensitive and specific detection tools that bind the protein directly.
After blocking, a solution of primary antibody (generally between 0.5 and 5 micrograms/mL) diluted in either PBS or TBST wash buffer is incubated with the membrane under gentle agitation, typically for an hour at room temperature or overnight at 4°C. It can also be incubated at other temperatures, with lower temperatures being associated with more binding, both specific (to the target protein, the "signal") and non-specific ("noise"). Following incubation, the membrane is washed several times in wash buffer to remove unbound primary antibody and thereby minimize background. Typically, the wash buffer is a buffered saline solution with a small percentage of detergent, sometimes with powdered milk or BSA.
Secondary antibody
After rinsing the membrane to remove unbound primary antibody, the membrane is exposed to another antibody known as the secondary antibody. Antibodies come from animal sources (or animal sourced hybridoma cultures). The secondary antibody recognises and binds to the species-specific portion of the primary antibody. Therefore, an anti-mouse secondary antibody will bind to almost any mouse-sourced primary antibody, and can be referred to as an 'anti-species' antibody (e.g. anti-mouse, anti-goat etc.). To allow detection of the target protein, the secondary antibody is commonly linked to biotin or a reporter enzyme such as alkaline phosphatase or horseradish peroxidase. This means that several secondary antibodies will bind to one primary antibody and enhance the signal, allowing the detection of proteins of a much lower concentration than would be visible by SDS-PAGE alone.
Horseradish peroxidase is commonly linked to secondary antibodies to allow the detection of the target protein by chemiluminescence. The chemiluminescent substrate is converted by horseradish peroxidase, resulting in the production of luminescence. The luminescence produced is therefore proportional to the amount of horseradish peroxidase-conjugated secondary antibody, and thus indirectly measures the presence of the target protein. A sensitive sheet of photographic film is placed against the membrane, and exposure to the light from the reaction creates an image of the antibodies bound to the blot. A cheaper but less sensitive approach uses a 4-chloronaphthol stain with 1% hydrogen peroxide; the reaction of peroxide radicals with 4-chloronaphthol produces a dark purple stain that can be photographed without specialized photographic film.
As with the ELISPOT and ELISA procedures, the enzyme can be provided with a substrate molecule that will be converted by the enzyme to a coloured reaction product that will be visible on the membrane (see the figure below with blue bands).
Another method of secondary antibody detection utilizes a near-infrared fluorophore-linked antibody. The light produced from the excitation of a fluorescent dye is static, making fluorescent detection a more precise and accurate measure of the difference in the signal produced by labeled antibodies bound to proteins on a western blot. Proteins can be accurately quantified because the signal generated by the different amounts of proteins on the membranes is measured in a static state, as compared to chemiluminescence, in which light is measured in a dynamic state.
A third alternative is to use a radioactive label rather than an enzyme coupled to the secondary antibody, such as labeling an antibody-binding protein like Staphylococcus Protein A or Streptavidin with a radioactive isotope of iodine. Since other methods are safer, quicker, and cheaper, this method is now rarely used; however, an advantage of this approach is the sensitivity of auto-radiography-based imaging, which enables highly accurate protein quantification when combined with optical software (e.g. Optiquant).
One step
Historically, the probing process was performed in two steps because of the relative ease of producing primary and secondary antibodies in separate processes. This gives researchers and corporations advantages in flexibility and cost, and adds an amplification step to the detection process. Given the advent of high-throughput protein analysis and lower limits of detection, however, there has been interest in developing one-step probing systems that allow the process to occur faster and with fewer consumables. This requires a probe antibody that both recognizes the protein of interest and carries a detectable label; such probes are often available for known protein tags. The primary probe is incubated with the membrane in a manner similar to that for the primary antibody in a two-step process, and then is ready for direct detection after a series of wash steps.
Detection and visualization
After the unbound probes are washed away, the western blot is ready for detection of the probes that are labeled and bound to the protein of interest. In practice, not all westerns reveal protein at only one band on the membrane. Size approximations are made by comparing the stained bands to the marker or ladder loaded during electrophoresis. The process is commonly repeated for a structural protein, such as actin or tubulin, that should not change between samples. The amount of target protein is normalized to the structural protein to control for differences between groups. A superior strategy is normalization to the total protein visualized with trichloroethanol or epicocconone. This practice ensures correction for the amount of total protein on the membrane in case of errors or incomplete transfers (see western blot normalization).
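The normalization step itself is simple arithmetic once the densitometry software has produced integrated band intensities. A sketch (all numbers here are hypothetical):

```python
import numpy as np

# Hypothetical integrated band intensities, one value per lane.
target = np.array([1520.0, 980.0, 2210.0])       # protein of interest
loading = np.array([30400.0, 21500.0, 44100.0])  # total-protein (or actin/tubulin) signal

normalized = target / loading            # corrects for loading and transfer differences
relative = normalized / normalized[0]    # express each lane relative to lane 1
print(relative)
```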
Colorimetric detection
The colorimetric detection method depends on incubation of the western blot with a substrate that reacts with the reporter enzyme (such as peroxidase) bound to the secondary antibody. This converts the soluble dye into an insoluble form of a different colour that precipitates next to the enzyme and thereby stains the membrane. Development of the blot is then stopped by washing away the soluble dye. Protein levels are evaluated through densitometry (how intense the stain is) or spectrophotometry.
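Densitometry typically converts pixel intensities to optical densities before integrating over the band area. A toy sketch of that conversion (the function name, clipping constant, and values are our own):

```python
import numpy as np

def integrated_od(band, background):
    """Integrated optical density: OD = -log10(I / I0) summed over band pixels,
    where I0 is the mean intensity of a nearby background region."""
    I0 = float(np.mean(background))
    od = -np.log10(np.clip(band / I0, 1e-6, 1.0))  # darker pixels -> higher OD
    return float(od.sum())

band = np.array([[120.0, 80.0], [90.0, 110.0]])  # raw pixel intensities in the band
background = np.array([200.0, 210.0, 195.0])     # nearby blank membrane
print(f"integrated OD: {integrated_od(band, background):.2f}")
```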
Chemiluminescent detection
Chemiluminescent detection methods depend on incubation of the western blot with a substrate that luminesces when exposed to the reporter on the secondary antibody. The light is then detected by photographic film or by CCD cameras, which capture a digital image of the western blot. The use of film for western blot detection is slowly disappearing because of the non-linearity of the image (inaccurate quantification). The image is analysed by densitometry, which evaluates the relative amount of protein staining and quantifies the results in terms of optical density. Newer software allows further data analysis, such as molecular weight analysis, if appropriate standards are used.
Radioactive detection
Radioactive labels do not require enzyme substrates, but rather allow the placement of medical X-ray film directly against the western blot, which develops as it is exposed to the label and creates dark regions corresponding to the protein bands of interest. The importance of radioactive detection methods is declining because radioactive labeling is very expensive and poses high health and safety risks, while ECL (enhanced chemiluminescence) provides a useful alternative.
Fluorescent detection
The fluorescently labeled probe is excited by light, and the emitted light is then detected by a photosensor such as a CCD camera equipped with appropriate emission filters, which captures a digital image of the western blot and allows further data analysis such as molecular weight analysis and quantitative western blot analysis. Fluorescence is considered one of the best methods for quantification, but it is less sensitive than chemiluminescence.
Secondary probing
One major difference between nitrocellulose and PVDF membranes relates to the ability of each to support "stripping" antibodies off and reusing the membrane for subsequent antibody probes. While there are well-established protocols available for stripping nitrocellulose membranes, the sturdier PVDF allows for easier stripping, and for more reuse before background noise limits experiments. Another difference is that, unlike nitrocellulose, PVDF must be soaked in 95% ethanol, isopropanol or methanol before use. PVDF membranes also tend to be thicker and more resistant to damage during use.
Minimum requirement specification for Western Blot
To ensure that the results of western blots are reproducible, it is important to report the various parameters mentioned above, including specimen preparation, the amount of protein loaded, the gel percentage and running conditions, the transfer method, the blocking conditions, the antibody concentrations, and the detection and quantification methods. Many published articles do not report all of these variables. Hence, it is crucial to describe the experimental conditions in order to increase the repeatability and precision of WB. A minimum set of reporting criteria is thus required to improve WB repeatability.
2-D gel electrophoresis
Two-dimensional SDS-PAGE uses the principles and techniques outlined above. 2-D SDS-PAGE, as the name suggests, involves the migration of polypeptides in 2 dimensions. For example, in the first dimension, polypeptides are separated according to isoelectric point, while in the second dimension, polypeptides are separated according to their molecular weight. The isoelectric point of a given protein is determined by the relative number of positively (e.g. lysine, arginine) and negatively (e.g. glutamate, aspartate) charged amino acids, with negatively charged amino acids contributing to a low isoelectric point and positively charged amino acids contributing to a high isoelectric point. Samples could also be separated first under nonreducing conditions using SDS-PAGE, and under reducing conditions in the second dimension, which breaks apart disulfide bonds that hold subunits together. SDS-PAGE might also be coupled with urea-PAGE for a 2-dimensional gel.
In principle, this method allows for the separation of all cellular proteins on a single large gel. A major advantage of this method is that it often distinguishes between different isoforms of a particular protein – e.g. a protein that has been phosphorylated (by addition of a negatively charged group). Proteins that have been separated can be cut out of the gel and then analysed by mass spectrometry, which identifies their molecular weight.
Problems
Detection problems
A weak or absent signal in a band may occur for a number of reasons related to the amount of antibody and antigen used. This problem can be addressed by using the antigen and antibody concentrations and dilutions specified in the supplier's data sheet. Weak bands caused by low sample or antibody concentrations can also be addressed by increasing the exposure time in the detection system's software.
Multiple band problems
When the protein is broken down by proteases, several bands of lower molecular weight than the predicted band might appear. The development of numerous bands can be prevented by properly preparing protein samples with sufficient protease inhibitors. Multiple bands might show up in the high-molecular-weight region because some proteins form dimers, trimers, and multimers; this issue might be solved by heating the sample for a longer period of time. Proteins with post-translational modifications (PTMs) or numerous isoforms cause several bands to appear at various molecular weights. PTMs can be removed from a specimen using specific chemicals, which also removes the extra bands.
High background
High antibody concentrations, inadequate blocking, inadequate washing, and excessive exposure time during imaging can result in a high background in the blots. A high background can be avoided by fixing these issues.
Irregular and uneven bands
A variety of irregular and uneven bands can occur, including black dots, white spots or bands, and curved bands. Black dots are removed from the blots by effective blocking. White patches develop as a result of bubbles between the membrane and the gel. White bands appear in the blots when primary and secondary antibodies are present at excessive concentrations. "Smiley" bands appear when a high voltage is used during the gel run and protein migration is too rapid. Resolving these problems eliminates the irregular bands in the blot.
Mitigations
During western blotting, several problems can arise at the different steps of the procedure. These problems can originate from the protein analysis step, such as the detection of low-abundance or post-translationally modified proteins. They can also stem from the selection of antibodies, since antibody quality plays a significant role in specific protein detection. To address these problems, a variety of improvements have been made in the preparation of cell lysates and in blotting procedures in order to produce reliable results. Moreover, to achieve more sensitive analysis and overcome the problems associated with western blotting, several different techniques have been developed and utilized, such as far-western blotting, diffusion blotting, single-cell resolution western blotting, and automated microfluidic western blotting.
Presentation
Researchers use different software to process and align image-sections for elegant presentation of western blot results. Popular tools include Sciugo, Microsoft PowerPoint, Adobe Illustrator and GIMP.
See also
Eastern blot
Far-eastern blot
Far-western blot
Fast parallel proteolysis
Northwestern blot
References
External links
Diagnostic virology
Protein methods
Laboratory techniques
Molecular biology techniques | Western blot | [
"Chemistry",
"Biology"
] | 6,170 | [
"Biochemistry methods",
"Protein methods",
"Protein biochemistry",
"Molecular biology techniques",
"nan",
"Molecular biology"
] |
14,674,051 | https://en.wikipedia.org/wiki/Smart%20Materials%20and%20Structures | Smart Materials and Structures is a monthly peer-reviewed scientific journal covering technical advances in smart materials, systems and structures; including intelligent systems, sensing and actuation, adaptive structures, and active control.
The initial editors-in-chief starting in 1992 were Vijay K. Varadan (Pennsylvania State University), Gareth J. Knowles (Grumman Corporation), and Richard O. Claus (Virginia Tech); in 2008 Ephrahim Garcia (Cornell University) took over as editor-in-chief until 2014. Christopher S. Lynch (University of California, Los Angeles) assumed the position of editor-in-chief in 2015 and was succeeded by Alper Erturk (Georgia Institute of Technology) in 2023, who serves as the current editor-in-chief.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2023 impact factor of 3.7.
References
External links
IOP Publishing academic journals
Materials science journals
Monthly journals
Academic journals established in 1992
English-language journals | Smart Materials and Structures | [
"Materials_science",
"Engineering"
] | 214 | [
"Materials science journals",
"Materials science"
] |
14,674,709 | https://en.wikipedia.org/wiki/Trace%20diagram | In mathematics, trace diagrams are a graphical means of performing computations in linear and multilinear algebra. They can be represented as (slightly modified) graphs in which some edges are labeled by matrices. The simplest trace diagrams represent the trace and determinant of a matrix. Several results in linear algebra, such as Cramer's Rule and the Cayley–Hamilton theorem, have simple diagrammatic proofs. They are closely related to Penrose's graphical notation.
Formal definition
Let V be a vector space of dimension n over a field F (with n ≥ 2), and let Hom(V,V) denote the linear transformations on V. An n-trace diagram is a graph whose vertex set is partitioned into three disjoint sets V1, V2, and Vn, where Vi (i = 1, 2, n) is composed of vertices of degree i, together with the following additional structures:
a ciliation at each vertex in the graph, which is an explicit ordering of the adjacent edges at that vertex;
a labeling V2 → Hom(V,V) associating each degree-2 vertex to a linear transformation.
Note that V2 and Vn should be considered as distinct sets in the case n = 2. A framed trace diagram is a trace diagram together with a partition of the degree-1 vertices V1 into two disjoint ordered collections called the inputs and the outputs.
The "graph" underlying a trace diagram may have the following special features, which are not always included in the standard definition of a graph:
Loops are permitted (a loop is an edge that connects a vertex to itself).
Edges that have no vertices are permitted, and are represented by small circles.
Multiple edges between the same two vertices are permitted.
Drawing conventions
When trace diagrams are drawn, the ciliation on an n-vertex is commonly represented by a small mark between two of the incident edges (in the figure above, a small red dot); the specific ordering of edges follows by proceeding counter-clockwise from this mark.
The ciliation and labeling at a degree-2 vertex are combined into a single directed node that allows one to differentiate the first edge (the incoming edge) from the second edge (the outgoing edge).
Framed diagrams are drawn with inputs at the bottom of the diagram and outputs at the top of the diagram. In both cases, the ordering corresponds to reading from left to right.
Correspondence with multilinear functions
Every framed trace diagram corresponds to a multilinear function between tensor powers of the vector space V. The degree-1 vertices correspond to the inputs and outputs of the function, while the degree-n vertices correspond to the generalized Levi-Civita symbol (which is an anti-symmetric tensor related to the determinant). If a diagram has no output strands, its function maps tensor products to a scalar. If there are no degree-1 vertices, the diagram is said to be closed and its corresponding function may be identified with a scalar.
By definition, a trace diagram's function is computed using signed graph coloring. For each edge coloring of the graph's edges by n labels, so that no two edges adjacent to the same vertex have the same label, one assigns a weight based on the labels at the vertices and the labels adjacent to the matrix labels. These weights become the coefficients of the diagram's function.
In practice, a trace diagram's function is typically computed by decomposing the diagram into smaller pieces whose functions are known. The overall function can then be computed by re-composing the individual functions.
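As a concrete numerical sketch of these rules (in Python with numpy; the construction of the epsilon tensor and the two diagrams chosen are our own small examples), a closed loop labeled by a matrix A evaluates to tr(A), and the diagram with two degree-3 vertices joined by three A-labeled edges evaluates to 3! det(A):

```python
import numpy as np
from itertools import permutations

def levi_civita(n):
    """Dense Levi-Civita (epsilon) tensor -- the weight attached to degree-n vertices."""
    eps = np.zeros((n,) * n)
    for p in permutations(range(n)):
        inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        eps[p] = (-1) ** inversions
    return eps

A = np.random.rand(3, 3)
eps = levi_civita(3)

# A closed loop labeled by A: contract the matrix's two indices -> tr(A).
assert np.isclose(np.einsum('ii->', A), np.trace(A))

# Two degree-3 vertices joined by three A-labeled edges:
# eps_ijk * eps_lmn * A_il * A_jm * A_kn = 3! * det(A).
val = np.einsum('ijk,lmn,il,jm,kn->', eps, eps, A, A, A)
assert np.isclose(val, 6 * np.linalg.det(A))
```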
Examples
3-Vector diagrams
Several vector identities have easy proofs using trace diagrams. This section covers 3-trace diagrams. In the translation of diagrams to functions, it can be shown that the positions of the ciliations at the degree-3 vertices have no influence on the resulting function, so they may be omitted.
It can be shown that the cross product and dot product of 3-dimensional vectors are represented by
In this picture, the inputs to the function are shown as vectors in yellow boxes at the bottom of the diagram. The cross product diagram has an output vector, represented by the free strand at the top of the diagram. The dot product diagram does not have an output vector; hence, its output is a scalar.
As a first example, consider the scalar triple product identity u · (v × w) = v · (w × u) = w · (u × v).
To prove this diagrammatically, note that all of the following figures are different depictions of the same 3-trace diagram (as specified by the above definition):
Combining the above diagrams for the cross product and the dot product, one can read off the three leftmost diagrams as precisely the three leftmost scalar triple products in the above identity. It can also be shown that the rightmost diagram represents det[u v w]. The scalar triple product identity follows because each is a different representation of the same diagram's function.
As a second example, one can establish an identity between two basic 3-trace diagrams (the equation itself is diagrammatic and is not reproduced here), where the equality indicates that the identity holds for the underlying multilinear functions. Such an identity is unchanged by "bending" the diagrams or attaching further diagrams, provided the changes are consistent across all diagrams in the identity. Thus, one can bend the top of the diagram down to the bottom, and attach vectors to each of the free edges, to obtain
which reads
(u × v) · (w × x) = (u · w)(v · x) − (u · x)(v · w),
a well-known identity relating four 3-dimensional vectors.
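Both identities are easy to confirm numerically (a sanity check with random vectors; the diagrams themselves are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w, x = rng.random((4, 3))

# Scalar triple product: u.(v x w) = v.(w x u) = w.(u x v) = det[u v w].
t = np.dot(u, np.cross(v, w))
assert np.isclose(t, np.dot(v, np.cross(w, u)))
assert np.isclose(t, np.dot(w, np.cross(u, v)))
assert np.isclose(t, np.linalg.det(np.column_stack([u, v, w])))

# The four-vector identity read off from the bent diagram.
lhs = np.dot(np.cross(u, v), np.cross(w, x))
rhs = np.dot(u, w) * np.dot(v, x) - np.dot(u, x) * np.dot(v, w)
assert np.isclose(lhs, rhs)
```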
Diagrams with matrices
The simplest closed diagrams with a single matrix label correspond to the coefficients of the characteristic polynomial, up to a scalar factor that depends only on the dimension n of the underlying vector space.
Properties
Let G be the group of n×n matrices. If a closed trace diagram is labeled by k different matrices, it may be interpreted as a function from G^k to an algebra of multilinear functions. This function is invariant under simultaneous conjugation, that is, the function corresponding to (A1, …, Ak) is the same as the function corresponding to (gA1g−1, …, gAkg−1) for any invertible g in G.
Extensions and applications
Trace diagrams may be specialized for particular Lie groups by altering the definition slightly. In this context, they are sometimes called birdtracks, tensor diagrams, or Penrose graphical notation.
Trace diagrams have primarily been used by physicists as a tool for studying Lie groups. The most common applications use representation theory to construct spin networks from trace diagrams. In mathematics, they have been used to study character varieties.
See also
Multilinear map
Gain graph
References
Books:
Diagram Techniques in Group Theory, G. E. Stedman, Cambridge University Press, 1990
Group Theory: Birdtracks, Lie's, and Exceptional Groups, Predrag Cvitanović, Princeton University Press, 2008, http://birdtracks.eu/
Multilinear algebra
Tensors
Linear algebra
Matrix theory
Diagram algebras
Application-specific graphs
Diagrams | Trace diagram | [
"Mathematics",
"Engineering"
] | 1,398 | [
"Linear algebra",
"Tensors",
"Algebra"
] |
14,675,761 | https://en.wikipedia.org/wiki/Birkhoff%E2%80%93Grothendieck%20theorem | In mathematics, the Birkhoff–Grothendieck theorem classifies holomorphic vector bundles over the complex projective line. In particular every holomorphic vector bundle over is a direct sum of holomorphic line bundles. The theorem was proved by , and is more or less equivalent to Birkhoff factorization introduced by .
Statement
More precisely, the statement of the theorem is as the following.
Every holomorphic vector bundle E on the complex projective line is holomorphically isomorphic to a direct sum of line bundles:
E ≅ O(a1) ⊕ ⋯ ⊕ O(an)
for some integers a1, …, an. The notation O(ai) indicates that each summand is a Serre twist, some number of times, of the trivial bundle. The representation is unique up to permuting factors.
Generalization
The same result holds in algebraic geometry for algebraic vector bundles over the projective line over any field k.
It also holds for the projective line with one or two orbifold points, and for chains of projective lines meeting along nodes.
Applications
One application of this theorem is that it gives a classification of all coherent sheaves on the projective line. There are two cases: vector bundles, and coherent sheaves supported along a subvariety. Since the only proper subvarieties are points, the latter are of the form O(nx), where n is the degree of the fat point at the point x. Together these give a complete classification of coherent sheaves.
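In symbols, the resulting classification can be summarized as follows (a sketch in our own notation, with the second summand denoting the torsion part supported at points):

```latex
\mathcal{F} \;\cong\; \bigoplus_{i=1}^{r} \mathcal{O}(a_i)
\;\oplus\; \bigoplus_{j=1}^{s} \mathcal{O}_{n_j x_j},
\qquad a_i \in \mathbb{Z}, \quad n_j \ge 1, \quad x_j \in \mathbb{P}^1.
```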
See also
Algebraic geometry of projective spaces
Euler sequence
Splitting principle
K-theory
Jumping line
References
Further reading
External links
Roman Bezrukavnikov. 18.725 Algebraic Geometry (LEC # 24 Birkhoff–Grothendieck, Riemann-Roch, Serre Duality) Fall 2015. Massachusetts Institute of Technology: MIT OpenCourseWare Creative Commons BY-NC-SA.
Vector bundles
Theorems in projective geometry
Theorems in algebraic geometry
Theorems in complex geometry | Birkhoff–Grothendieck theorem | [
"Mathematics"
] | 354 | [
"Theorems in algebraic geometry",
"Theorems in projective geometry",
"Theorems in complex geometry",
"Topology stubs",
"Topology",
"Theorems in geometry"
] |
14,678,069 | https://en.wikipedia.org/wiki/Nitronium%20tetrafluoroborate | Nitronium tetrafluoroborate is an inorganic compound with formula NO2BF4. It is a salt of nitronium cation and tetrafluoroborate anion. It is a colorless crystalline solid, which reacts with water to form the corrosive acids HF and HNO3. As such, it must be handled under water-free conditions. It is sparsely soluble in many organic solvents.
Preparation
Nitronium tetrafluoroborate can be prepared by adding a mixture of anhydrous hydrogen fluoride and boron trifluoride to a nitromethane solution of nitric acid or dinitrogen pentoxide.
Applications
Nitronium tetrafluoroborate is used in organic synthesis as an electrophilic nitrating agent and a mild oxidant.
References
Tetrafluoroborates
Nitronium compounds | Nitronium tetrafluoroborate | [
"Chemistry"
] | 190 | [
"Nitronium compounds",
"Salts",
"Inorganic compounds",
"Inorganic compound stubs"
] |
14,679,497 | https://en.wikipedia.org/wiki/Nicotinamide-nucleotide%20adenylyltransferase | In enzymology, nicotinamide-nucleotide adenylyltransferase (NMNAT) () are enzymes that catalyzes the chemical reaction
ATP + nicotinamide mononucleotide ⇌ diphosphate + NAD+
Thus, the two substrates of this enzyme are ATP and nicotinamide mononucleotide (NMN), whereas its two products are diphosphate and NAD+.
This enzyme participates in nicotinate and nicotinamide metabolism.
Humans have three protein isoforms: NMNAT1 (widespread), NMNAT2 (predominantly in brain), and NMNAT3 (highest in liver, heart, skeletal muscle, and erythrocytes). Mutations in the NMNAT1 gene lead to the LCA9 form of Leber congenital amaurosis. Mutations in NMNAT2 or NMNAT3 genes are not known to cause any human disease. NMNAT2 is critical for neurons: loss of NMNAT2 is associated with neurodegeneration. All NMNAT isoforms reportedly decline with age.
Belongs to
This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is ATP:nicotinamide-nucleotide adenylyltransferase. Other names in common use include NAD+ pyrophosphorylase, adenosine triphosphate-nicotinamide mononucleotide transadenylase, ATP:NMN adenylyltransferase, diphosphopyridine nucleotide pyrophosphorylase, nicotinamide adenine dinucleotide pyrophosphorylase, nicotinamide mononucleotide adenylyltransferase, and NMN adenylyltransferase.
Structural studies
As of late 2007, 11 structures had been solved for this class of enzymes and deposited in the Protein Data Bank.
Isoform cellular localization
The three protein isoforms have the following cellular localizations:
NMNAT1 : Nucleus
NMNAT2 : Cytoplasm
NMNAT3 : Mitochondrion or cytoplasm
All three NMNATs compete for the NMN produced by NAMPT.
Clinical significance
Chronic inflammation, due to obesity and other causes, reduces NMNAT and NAD+ levels in many tissues.
References
EC 2.7.7
NADH-dependent enzymes
Enzymes of known structure
Anti-aging substances | Nicotinamide-nucleotide adenylyltransferase | [
"Chemistry",
"Biology"
] | 517 | [
"Senescence",
"Anti-aging substances"
] |
14,682,596 | https://en.wikipedia.org/wiki/Finite%20topological%20space | In mathematics, a finite topological space is a topological space for which the underlying point set is finite. That is, it is a topological space which has only finitely many elements.
Finite topological spaces are often used to provide examples of interesting phenomena or counterexamples to plausible-sounding conjectures. William Thurston has called the study of finite topologies in this sense "an oddball topic that can lend good insight to a variety of questions".
Topologies on a finite set
Let X be a finite set. A topology on X is a subset τ of 2^X (the power set of X) such that
∅ ∈ τ and X ∈ τ.
if U ∈ τ and V ∈ τ then U ∪ V ∈ τ.
if U ∈ τ and V ∈ τ then U ∩ V ∈ τ.
In other words, a subset τ of 2^X is a topology if it contains both ∅ and X and is closed under arbitrary unions and intersections. Elements of τ are called open sets. The general description of topological spaces requires that a topology be closed under arbitrary (finite or infinite) unions of open sets, but only under intersections of finitely many open sets. Here, that distinction is unnecessary: since the power set of a finite set is finite, there can be only finitely many open sets (and only finitely many closed sets).
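Because everything is finite, the axioms can be checked mechanically: closure under pairwise unions and intersections already implies closure under arbitrary ones. A minimal sketch in Python (the function name is our own):

```python
def is_topology(X, tau):
    """Check whether the collection tau is a topology on the finite set X."""
    X = frozenset(X)
    tau = {frozenset(U) for U in tau}
    if frozenset() not in tau or X not in tau:
        return False
    # For a finite collection, pairwise closure implies closure under
    # arbitrary unions and intersections.
    return all(U | V in tau and U & V in tau for U in tau for V in tau)

# The Sierpinski topology on {a, b} with {b} open:
print(is_topology({'a', 'b'}, [set(), {'b'}, {'a', 'b'}]))  # True
print(is_topology({'a', 'b'}, [set(), {'b'}]))              # False: X is missing
```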
A topology on a finite set X can also be thought of as a sublattice of (2^X, ⊆) which includes both the bottom element ∅ and the top element X.
Examples
0 or 1 points
There is a unique topology on the empty set ∅. The only open set is the empty one. Indeed, this is the only subset of ∅.
Likewise, there is a unique topology on a singleton set {a}. Here the open sets are ∅ and {a}. This topology is both discrete and trivial, although in some ways it is better to think of it as a discrete space since it shares more properties with the family of finite discrete spaces.
For any topological space X there is a unique continuous function from ∅ to X, namely the empty function. There is also a unique continuous function from X to the singleton space {a}, namely the constant function to a. In the language of category theory the empty space serves as an initial object in the category of topological spaces while the singleton space serves as a terminal object.
2 points
Let X = {a,b} be a set with 2 elements. There are four distinct topologies on X:
{∅, {a,b}} (the trivial topology)
{∅, {a}, {a,b}}
{∅, {b}, {a,b}}
{∅, {a}, {b}, {a,b}} (the discrete topology)
The second and third topologies above are easily seen to be homeomorphic. The function from X to itself which swaps a and b is a homeomorphism. A topological space homeomorphic to one of these is called a Sierpiński space. So, in fact, there are only three inequivalent topologies on a two-point set: the trivial one, the discrete one, and the Sierpiński topology.
The specialization preorder on the Sierpiński space {a,b} with {b} open is given by: a ≤ a, b ≤ b, and a ≤ b.
3 points
Let X = {a,b,c} be a set with 3 elements. There are 29 distinct topologies on X but only 9 inequivalent topologies:
{∅, {a,b,c}}
{∅, {c}, {a,b,c}}
{∅, {a,b}, {a,b,c}}
{∅, {c}, {a,b}, {a,b,c}}
{∅, {c}, {b,c}, {a,b,c}} (T0)
{∅, {c}, {a,c}, {b,c}, {a,b,c}} (T0)
{∅, {a}, {b}, {a,b}, {a,b,c}} (T0)
{∅, {b}, {c}, {a,b}, {b,c}, {a,b,c}} (T0)
{∅, {a}, {b}, {c}, {a,b}, {a,c}, {b,c}, {a,b,c}} (T0)
The last 5 of these are all T0. The first one is trivial, while in 2, 3, and 4 the points a and b are topologically indistinguishable.
4 points
Let X = {a,b,c,d} be a set with 4 elements. There are 355 distinct topologies on X but only 33 inequivalent topologies:
{∅, {a, b, c, d}}
{∅, {a, b, c}, {a, b, c, d}}
{∅, {a}, {a, b, c, d}}
{∅, {a}, {a, b, c}, {a, b, c, d}}
{∅, {a, b}, {a, b, c, d}}
{∅, {a, b}, {a, b, c}, {a, b, c, d}}
{∅, {a}, {a, b}, {a, b, c, d}}
{∅, {a}, {b}, {a, b}, {a, b, c, d}}
{∅, {a, b, c}, {d}, {a, b, c, d}}
{∅, {a}, {a, b, c}, {a, d}, {a, b, c, d}}
{∅, {a}, {a, b, c}, {d}, {a, d}, {a, b, c, d}}
{∅, {a}, {b, c}, {a, b, c}, {a, d}, {a, b, c, d}}
{∅, {a, b}, {a, b, c}, {a, b, d}, {a, b, c, d}}
{∅, {a, b}, {c}, {a, b, c}, {a, b, c, d}}
{∅, {a, b}, {c}, {a, b, c}, {a, b, d}, {a, b, c, d}}
{∅, {a, b}, {c}, {a, b, c}, {d}, {a, b, d}, {c, d}, {a, b, c, d}}
{∅, {b, c}, {a, d}, {a, b, c, d}}
{∅, {a}, {a, b}, {a, b, c}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {a, b}, {a, c}, {a, b, c}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {a, c}, {a, b, c}, {a, b, c, d}} (T0)
{∅, {a}, {a, b}, {a, b, c}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {a, b, c}, {a, b, c, d}} (T0)
{∅, {a}, {a, b}, {c}, {a, c}, {a, b, c}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {a, b}, {a, c}, {a, b, c}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {a, b, c}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {a, c}, {a, b, c}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {b, c}, {a, b, c}, {a, d}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {a, b}, {a, c}, {a, b, c}, {a, d}, {a, b, d}, {a, c, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {a, c}, {a, b, c}, {a, d}, {a, b, d}, {a, c, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {c}, {a, c}, {b, c}, {a, b, c}, {a, b, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {c}, {a, c}, {b, c}, {a, b, c}, {a, d}, {a, b, d}, {a, c, d}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {c}, {a, c}, {b, c}, {a, b, c}, {a, b, c, d}} (T0)
{∅, {a}, {b}, {a, b}, {c}, {a, c}, {b, c}, {a, b, c}, {d}, {a, d}, {b, d}, {a, b, d}, {c, d}, {a, c, d}, {b, c, d}, {a, b, c, d}} (T0)
The last 16 of these are all T0.
Properties
Specialization preorder
Topologies on a finite set X are in one-to-one correspondence with preorders on X. Recall that a preorder on X is a binary relation on X which is reflexive and transitive.
Given a (not necessarily finite) topological space X we can define a preorder on X by
x ≤ y if and only if x ∈ cl{y}
where cl{y} denotes the closure of the singleton set {y}. This preorder is called the specialization preorder on X. Every open set U of X will be an upper set with respect to ≤ (i.e. if x ∈ U and x ≤ y then y ∈ U). Now if X is finite, the converse is also true: every upper set is open in X. So for finite spaces, the topology on X is uniquely determined by ≤.
Going in the other direction, suppose (X, ≤) is a preordered set. Define a topology τ on X by taking the open sets to be the upper sets with respect to ≤. Then the relation ≤ will be the specialization preorder of (X, τ). The topology defined in this way is called the Alexandrov topology determined by ≤.
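Both directions of this correspondence are short computations on a finite set. A sketch (the function names are our own; the round trip below recovers the Sierpiński example from the preorder a ≤ b):

```python
from itertools import product

def alexandrov_topology(X, leq):
    """Open sets = upper sets of the preorder leq on the finite set X."""
    X = list(X)
    opens = set()
    for bits in product([0, 1], repeat=len(X)):
        U = frozenset(x for x, b in zip(X, bits) if b)
        if all(y in U for x in U for y in X if leq(x, y)):
            opens.add(U)
    return opens

def specialization_preorder(X, tau):
    """x <= y iff x is in cl{y}, i.e. every open set containing x also contains y."""
    return {(x, y) for x in X for y in X
            if all(y in U for U in tau if x in U)}

X = ['a', 'b']
tau = alexandrov_topology(X, lambda s, t: s == t or (s, t) == ('a', 'b'))
print(sorted(map(sorted, tau)))         # [[], ['a', 'b'], ['b']]
print(specialization_preorder(X, tau))  # contains ('a', 'b') but not ('b', 'a')
```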
The equivalence between preorders and finite topologies can be interpreted as a version of Birkhoff's representation theorem, an equivalence between finite distributive lattices (the lattice of open sets of the topology) and partial orders (the partial order of equivalence classes of the preorder). This correspondence also works for a larger class of spaces called finitely generated spaces. Finitely generated spaces can be characterized as the spaces in which an arbitrary intersection of open sets is open. Finite topological spaces are a special class of finitely generated spaces.
Compactness and countability
Every finite topological space is compact since any open cover must already be finite. Indeed, compact spaces are often thought of as a generalization of finite spaces since they share many of the same properties.
Every finite topological space is also second-countable (there are only finitely many open sets) and separable (since the space itself is countable).
Separation axioms
If a finite topological space is T1 (in particular, if it is Hausdorff) then it must, in fact, be discrete. This is because the complement of a point is a finite union of closed points and therefore closed. It follows that each point must be open.
Therefore, any finite topological space which is not discrete cannot be T1, Hausdorff, or anything stronger.
However, it is possible for a non-discrete finite space to be T0. In general, two points x and y are topologically indistinguishable if and only if x ≤ y and y ≤ x, where ≤ is the specialization preorder on X. It follows that a space X is T0 if and only if the specialization preorder ≤ on X is a partial order. There are numerous partial orders on a finite set. Each defines a unique T0 topology.
Similarly, a space is R0 if and only if the specialization preorder is an equivalence relation. Given any equivalence relation on a finite set X the associated topology is the partition topology on X. The equivalence classes will be the classes of topologically indistinguishable points. Since the partition topology is pseudometrizable, a finite space is R0 if and only if it is completely regular.
Non-discrete finite spaces can also be normal. The excluded point topology on any finite set is a completely normal T0 space which is non-discrete.
Connectivity
Connectivity in a finite space X is best understood by considering the specialization preorder ≤ on X. We can associate to any preordered set X a directed graph Γ by taking the points of X as vertices and drawing an edge x → y whenever x ≤ y. The connectivity of a finite space X can be understood by considering the connectivity of the associated graph Γ.
In any topological space, if x ≤ y then there is a path from x to y. One can simply take f(0) = x and f(t) = y for t > 0. It is easy to verify that f is continuous. It follows that the path components of a finite topological space are precisely the (weakly) connected components of the associated graph Γ. That is, there is a topological path from x to y if and only if there is an undirected path between the corresponding vertices of Γ.
Every finite space is locally path-connected since the set ↑x = {y ∈ X : x ≤ y}, the intersection of all open sets containing x, is a path-connected open neighborhood of x that is contained in every other neighborhood. In other words, this single set forms a local base at x.
Therefore, a finite space is connected if and only if it is path-connected. The connected components are precisely the path components. Each such component is both closed and open in X.
Finite spaces may have stronger connectivity properties. A finite space X is
hyperconnected if and only if there is a greatest element with respect to the specialization preorder. This is an element whose closure is the whole space X.
ultraconnected if and only if there is a least element with respect to the specialization preorder. This is an element whose only neighborhood is the whole space X.
For example, the particular point topology on a finite space is hyperconnected while the excluded point topology is ultraconnected. The Sierpiński space is both.
Additional structure
A finite topological space is pseudometrizable if and only if it is R0. In this case, one possible pseudometric is given by d(x, y) = 0 if x ≡ y and d(x, y) = 1 otherwise, where x ≡ y means x and y are topologically indistinguishable. A finite topological space is metrizable if and only if it is discrete.
Likewise, a topological space is uniformizable if and only if it is R0. The uniform structure will be the pseudometric uniformity induced by the above pseudometric.
Algebraic topology
Perhaps surprisingly, there are finite topological spaces with nontrivial fundamental groups. A simple example is the pseudocircle, which is a space X with four points, two of which are open and two of which are closed. There is a continuous map from the unit circle S1 to X which is a weak homotopy equivalence (i.e. it induces an isomorphism of homotopy groups). It follows that the fundamental group of the pseudocircle is infinite cyclic.
More generally it has been shown that for any finite abstract simplicial complex K, there is a finite topological space XK and a weak homotopy equivalence f : |K| → XK where |K| is the geometric realization of K. It follows that the homotopy groups of |K| and XK are isomorphic. In fact, the underlying set of XK can be taken to be K itself, with the topology associated to the inclusion partial order.
Number of topologies on a finite set
As discussed above, topologies on a finite set are in one-to-one correspondence with preorders on the set, and T0 topologies are in one-to-one correspondence with partial orders. Therefore, the number of topologies on a finite set is equal to the number of preorders and the number of T0 topologies is equal to the number of partial orders.
The table below lists the number of distinct (T0) topologies on a set with n elements, together with the number of inequivalent (i.e. nonhomeomorphic) topologies.

n | distinct topologies | distinct T0 topologies | inequivalent topologies | inequivalent T0 topologies
1 | 1 | 1 | 1 | 1
2 | 4 | 3 | 3 | 2
3 | 29 | 19 | 9 | 5
4 | 355 | 219 | 33 | 16
Let T(n) denote the number of distinct topologies on a set with n points. There is no known simple formula to compute T(n) for arbitrary n. The Online Encyclopedia of Integer Sequences presently lists T(n) for n ≤ 18.
The number of distinct T0 topologies on a set with n points, denoted T0(n), is related to T(n) by the formula
T(n) = Σ_{k=0}^{n} S(n,k) T0(k)
where S(n,k) denotes the Stirling number of the second kind.
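For very small n, both counts (and the formula above) can be verified by brute force over all families of subsets. A sketch (the search is exponential in 2^n, so it is only usable for n up to about 4):

```python
from itertools import combinations, product

def count_topologies(n):
    """Return (number of topologies, number of T0 topologies) on an n-point set."""
    X = range(n)
    subsets = [frozenset(s) for r in range(n + 1) for s in combinations(X, r)]
    total = t0 = 0
    for bits in product([0, 1], repeat=len(subsets)):
        tau = {S for S, b in zip(subsets, bits) if b}
        if frozenset() not in tau or frozenset(X) not in tau:
            continue
        if any(U | V not in tau or U & V not in tau for U in tau for V in tau):
            continue
        total += 1
        # T0: every pair of distinct points is separated by some open set.
        if all(any((x in U) != (y in U) for U in tau) for x, y in combinations(X, 2)):
            t0 += 1
    return total, t0

def stirling2(n, k):
    """Stirling numbers of the second kind S(n, k)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

counts = {m: count_topologies(m) for m in (1, 2, 3)}
print(counts[3])  # (29, 19)

# Check T(n) = sum_k S(n, k) * T0(k) for n = 1, 2, 3.
for m in (1, 2, 3):
    assert counts[m][0] == sum(stirling2(m, k) * counts[k][1] for k in range(1, m + 1))
```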
See also
Finite geometry
Finite metric space
Topological combinatorics
References
External links
Topological spaces
Combinatorics | Finite topological space | [
"Mathematics"
] | 4,033 | [
"Discrete mathematics",
"Mathematical structures",
"Space (mathematics)",
"Combinatorics",
"Topological spaces",
"Topology"
] |
56,135 | https://en.wikipedia.org/wiki/Touchstone%20%28assaying%20tool%29 | A touchstone is a small tablet of dark stone such as slate or lydite, used for assaying precious metal alloys. It has a finely grained surface on which soft metals leave a visible trace.
History
The touchstone was used during the Harappa period of the Indus Valley civilization ca. 2600–1900 BC for testing the purity of soft metals. It was also used in Ancient Greece.
The touchstone allowed anyone to easily and quickly determine the purity of a metal sample. This, in turn, led to the widespread adoption of gold as a standard of exchange. Although mixing gold with less expensive materials was common in coinage, using a touchstone one could easily determine the quantity of gold in the coin, and thereby calculate its intrinsic worth.
Operation
Drawing a line with gold on a touchstone will leave a visible trace. Because different alloys of gold have different colors (see gold), the unknown sample can be compared to samples of known purity. This method has been used since ancient times. In modern times, additional tests can be done. The trace will react in different ways to specific concentrations of nitric acid or aqua regia, thereby identifying the quality of the gold: 24 karat gold is not affected but 14 karat gold will show chemical activity.
See also
Litmus test
Spot analysis
Streak test
References
Materials science
Jewellery
Gold
Lithics
Inventions of the Indus Valley Civilisation
Indian inventions | Touchstone (assaying tool) | [
"Physics",
"Materials_science",
"Engineering"
] | 284 | [
"Materials science stubs",
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
56,226 | https://en.wikipedia.org/wiki/Electrum | Electrum is a naturally occurring alloy of gold and silver, with trace amounts of copper and other metals. Its color ranges from pale to bright yellow, depending on the proportions of gold and silver. It has been produced artificially and is also known as "green gold".
Electrum was used as early as the third millennium BC in the Old Kingdom of Egypt, sometimes as an exterior coating to the pyramidions atop ancient Egyptian pyramids and obelisks. It was also used in the making of ancient drinking vessels. The first known metal coins made were of electrum, dating back to the end of the 7th century or the beginning of the 6th century BC.
Etymology
The name electrum is the Latinized form of the Greek word ἤλεκτρον (ḗlektron), mentioned in the Odyssey, referring to a metallic substance consisting of gold alloyed with silver. The same word was also used for the substance amber, likely because of the pale yellow color of certain varieties. (It is from amber’s electrostatic properties that the modern English words electron and electricity are derived.) Electrum was often referred to as "white gold" in ancient times but could be more accurately described as pale gold because it is usually pale yellow or yellowish-white in color. The modern use of the term white gold usually refers to gold alloyed with any one or a combination of nickel, silver, platinum and palladium to produce a silver-colored gold.
Composition
Electrum consists primarily of gold and silver but is sometimes found with traces of platinum, copper and other metals. The name is mostly applied informally to compositions between 20–80% gold and 80–20% silver, but these are strictly called gold or silver depending on the dominant element. Analysis of the composition of electrum in ancient Greek coinage dating from about 600 BC shows that the gold content was about 55.5% in the coinage issued by Phocaea. In the early classical period the gold content of electrum ranged from 46% in Phokaia to 43% in Mytilene. In later coinage from these areas, dating to 326 BC, the gold content averaged 40% to 41%. In the Hellenistic period electrum coins with a regularly decreasing proportion of gold were issued by the Carthaginians. In the later Eastern Roman Empire controlled from Constantinople, the purity of the gold coinage was reduced.
History
Electrum is mentioned in an account of an expedition sent by Pharaoh Sahure of the Fifth Dynasty of Egypt. It is also discussed by Pliny the Elder in his Naturalis Historia. It is also mentioned in the Bible, in the first chapter of the book of the prophet Ezekiel.
Early coinage
The earliest known electrum coins, Lydian coins and East Greek coins found under the Temple of Artemis at Ephesus, are currently dated to the last quarter of the 7th century BC (625–600 BC). Electrum is believed to have been used in coins c. 600 BC in Lydia during the reign of Alyattes.
Electrum was much better for coinage than gold, mostly because it was harder and more durable, but also because techniques for refining gold were not widespread at the time. The gold content of naturally occurring electrum in modern western Anatolia ranges from 70% to 90%, in contrast to the 45–55% of gold in electrum used in ancient Lydian coinage of the same geographical area. This suggests that the Lydians had already solved the refining technology for silver and were adding refined silver to the local native electrum some decades before introducing pure silver coins.
In Lydia, electrum was minted into coins weighing about 4.7 grams, each valued at one-third of a stater (meaning "standard"). Three of these coins, with a total weight of about 14.1 grams, equaled one stater, about one month's pay for a soldier. To complement the stater, fractions were made: the trite (third), the hekte (sixth), and so forth, including 1/24 of a stater, and even down to 1/48 and 1/96 of a stater. The 1/96 stater weighed only about 0.14 to 0.15 grams. Larger denominations, such as a one-stater coin, were minted as well.
Because of variation in the composition of electrum, it was difficult to determine the exact worth of each coin. Widespread trading was hampered by this problem, as the intrinsic value of each electrum coin could not be easily determined. This suggests that one reason for the invention of coinage in that area was to increase the profits from seigniorage by issuing currency with a lower gold content than the commonly circulating metal.
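How strongly the uncertain gold content affected a coin's worth can be sketched numerically. The following is a minimal illustration, not a historical calculation: the gold:silver value ratio of 13:1 is an assumed figure chosen only for the example, and the 45–55% composition range is taken from the coinage analyses above.

# Illustrative sketch: how uncertainty in gold content translates into
# uncertainty in the bullion value of an electrum coin.
# ASSUMPTION: a gold:silver value ratio of 13:1 (illustrative only).
GOLD_TO_SILVER_RATIO = 13.0

def bullion_value(gold_fraction, weight_g=14.1):
    """Value of a coin in units of 'grams of silver', given its gold fraction."""
    silver_fraction = 1.0 - gold_fraction
    return weight_g * (gold_fraction * GOLD_TO_SILVER_RATIO + silver_fraction)

# Lydian coinage electrum ranged from roughly 45% to 55% gold (see above).
low, high = bullion_value(0.45), bullion_value(0.55)
print(f"45% gold stater ~ {low:.0f} g of silver; 55% gold ~ {high:.0f} g")
print(f"spread: {100 * (high - low) / low:.0f}%")

Under that assumed ratio, two staters of identical weight could differ in bullion value by nearly a fifth, which is exactly the difficulty described above.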
These difficulties were eliminated circa 570 BC when the Croeseids, coins of pure gold and silver, were introduced. However, electrum currency remained common until approximately 350 BC. The simplest reason for this was that, because of the gold content, one 14.1 gram stater was worth as much as ten 14.1 gram silver pieces.
See also
Corinthian bronze – a highly prized alloy in antiquity that may have contained electrum
Crown gold – a 22-carat gold alloy highly valued for its use in gold coins from the 16th century onwards
Hepatizon
Orichalcum – another distinct metal or alloy mentioned in texts from classical antiquity, later used to refer to brass
Panchaloha
Shakudō – a Japanese billon of gold and copper with a dark blue-purple patina
Shibuichi – another Japanese alloy known for its patina
Thokcha – an alloy of meteoric iron or "thunderbolt iron" commonly used in Tibet
Tumbaga – a similar material, originating in Pre-Columbian America
References
External links
Electrum lion coins of the ancient Lydians (about 600 BC)
An image of the obverse of a Lydian coin made of electrum
Gold
Coinage metals and alloys
Precious metal alloys
Silver
Copper alloys | Electrum | [ "Chemistry" ] | 1,176 | [ "Precious metal alloys", "Alloys", "Copper alloys", "Coinage metals and alloys" ] |
56,239 | https://en.wikipedia.org/wiki/Acrylamide | Acrylamide (or acrylic amide) is an organic compound with the chemical formula CH2=CHC(O)NH2. It is a white odorless solid, soluble in water and several organic solvents. From the chemistry perspective, acrylamide is a vinyl-substituted primary amide (CONH2). It is produced industrially mainly as a precursor to polyacrylamides, which find many uses as water-soluble thickeners and flocculation agents.
Acrylamide forms in burnt areas of food, particularly starchy foods like potatoes, when cooked with high heat, above 120 °C (248 °F). Despite health scares following this discovery in 2002, and its classification as a probable carcinogen, acrylamide from diet is thought unlikely to cause cancer in humans; Cancer Research UK categorized the idea that eating burnt food causes cancer as a "myth".
Production
Acrylamide can be prepared by the hydration of acrylonitrile, which is catalyzed enzymatically:
CH2=CHCN + H2O → CH2=CHC(O)NH2
This reaction is also catalyzed by sulfuric acid as well as various metal salts. Treatment of acrylonitrile with sulfuric acid gives acrylamide sulfate, CH2=CHC(O)NH2·H2SO4. This salt can be converted to acrylamide with a base or to methyl acrylate with methanol.
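As a small numerical aside, the hydration step is a 1:1 molar reaction, so the theoretical mass yield follows directly from the molar masses. The sketch below is textbook stoichiometry at an assumed 100% conversion, not process data:

# Stoichiometry sketch for CH2=CHCN + H2O -> CH2=CHC(O)NH2 (1:1 molar).
# Molar masses from standard atomic weights (g/mol).
M_ACRYLONITRILE = 3 * 12.011 + 3 * 1.008 + 14.007           # C3H3N,  ~53.06
M_ACRYLAMIDE    = 3 * 12.011 + 5 * 1.008 + 14.007 + 15.999  # C3H5NO, ~71.08

def theoretical_acrylamide_mass(acrylonitrile_g):
    """Mass of acrylamide at 100% conversion, since the reaction is 1:1."""
    moles = acrylonitrile_g / M_ACRYLONITRILE
    return moles * M_ACRYLAMIDE

print(f"{theoretical_acrylamide_mass(1000.0):.0f} g per kg of acrylonitrile")
# -> roughly 1340 g, the extra mass coming from the added water.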
Uses
The majority of acrylamide is used to manufacture various polymers, especially polyacrylamide. This water-soluble polymer, which has very low toxicity, is widely used as thickener and flocculating agent. These functions are valuable in the purification of drinking water, corrosion inhibition, mineral extraction, and paper making. Polyacrylamide gels are routinely used in medicine and biochemistry for purification and assays.
Toxicity and carcinogenicity
Acrylamide can arise in some cooked foods via a series of steps by the reaction of the amino acid asparagine and glucose. This condensation, one of the Maillard reactions, followed by dehydrogenation produces N-(D-glucos-1-yl)-L-asparagine, which upon pyrolysis generates some acrylamide.
The discovery in 2002 that some cooked foods contain acrylamide attracted significant attention to its possible biological effects. IARC, NTP, and the EPA have classified it as a probable carcinogen, although epidemiological studies (as of 2019) suggest that dietary acrylamide consumption does not significantly increase people's risk of developing cancer.
Europe
According to the EFSA, the main toxicity risks of acrylamide are "Neurotoxicity, adverse effects on male reproduction, developmental toxicity and carcinogenicity". However, according to their research, there is no concern regarding non-neoplastic effects. Furthermore, while a relation between acrylamide consumption and cancer has been shown in rats and mice, it remains unclear whether dietary acrylamide affects the risk of developing cancer in humans; the existing human epidemiological studies are very limited and show no relation between acrylamide and cancer. Food industry workers exposed to twice the average level of acrylamide do not exhibit higher cancer rates.
United States
Acrylamide is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
Acrylamide is considered a potential occupational carcinogen by U.S. government agencies and classified as a Group 2A carcinogen by the IARC. The Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have set occupational exposure limits at 0.03 mg/m3 over an eight-hour workday, with a skin notation reflecting the potential for dermal absorption.
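Limits of this kind are assessed as a time-weighted average (TWA) over the shift. The sketch below applies the standard TWA formula; the monitoring values in it are hypothetical, invented only to show the calculation:

# Time-weighted average (TWA) exposure over an 8-hour shift:
# TWA = sum(c_i * t_i) / 8, with c_i in mg/m3 and t_i in hours.
def twa(samples, shift_hours=8.0):
    """samples: list of (concentration_mg_m3, duration_hours) pairs."""
    return sum(c * t for c, t in samples) / shift_hours

# Hypothetical monitoring data for one worker (illustrative numbers only):
day = [(0.05, 2.0), (0.01, 4.0), (0.04, 2.0)]
exposure = twa(day)
print(f"TWA = {exposure:.3f} mg/m3 -> "
      f"{'over' if exposure > 0.03 else 'within'} the 0.03 mg/m3 limit")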
Opinions of health organizations
Baking, grilling or broiling food produces significant concentrations of acrylamide. This discovery in 2002 led to international health concerns. Subsequent research has, however, found that it is not likely that the acrylamide in burnt or well-cooked food causes cancer in humans; Cancer Research UK categorizes the idea that burnt food causes cancer as a "myth".
The American Cancer Society says that laboratory studies have shown that acrylamide is likely to be a carcinogen, but that evidence from epidemiological studies suggests that dietary acrylamide is unlikely to raise the risk of people developing most common types of cancer.
Hazards
Acrylamide is also a skin irritant and may be a tumor initiator in the skin, potentially increasing risk for skin cancer. Symptoms of acrylamide exposure include dermatitis in the exposed area, and peripheral neuropathy.
Laboratory research has found that some phytochemicals may have the potential to be developed into drugs which could alleviate the toxicity of acrylamide.
Mechanism of action
Acrylamide is metabolized to the genotoxic derivative glycidamide. On the other hand, acrylamide and glycidamide can be detoxified via conjugation with glutathione.
Occurrence in food
Acrylamide was discovered in foods, mainly in starchy foods, such as potato chips (UK: potato crisps), French fries (UK: chips), and bread that had been heated higher than 120 °C (248 °F). Production of acrylamide in the heating process was shown to be temperature-dependent. It was not found in food that had been boiled, or in foods that were not heated.
Acrylamide has been found in roasted barley tea, called mugicha in Japanese. The barley is roasted so it is dark brown prior to being steeped in hot water. The roasting process produced 200–600 micrograms/kg of acrylamide in mugicha. This is less than the >1000 micrograms/kg found in potato crisps and other fried whole potato snack foods cited in the same study and it is unclear how much of this enters the drink to be ingested. Rice cracker and sweet potato levels were lower than in potatoes. Potatoes cooked whole were found to have significantly lower acrylamide levels than the others, suggesting a link between food preparation method and acrylamide levels.
Acrylamide levels appear to rise as food is heated for longer periods of time. Although researchers are still unsure of the precise mechanisms by which acrylamide forms in foods, many believe it is a byproduct of the Maillard reaction. In fried or baked goods, acrylamide may be produced by the reaction between asparagine and reducing sugars (fructose, glucose, etc.) or reactive carbonyls at temperatures above 120 °C (248 °F).
Later studies have found acrylamide in black olives, dried plums, dried pears, coffee, and peanuts.
The US FDA has analyzed a variety of U.S. food products for levels of acrylamide since 2002.
Occurrence in cigarettes
Cigarette smoking is a major source of acrylamide exposure. One study showed that smoking raises blood acrylamide levels three-fold more than any dietary factor.
See also
Acrydite: research on this compound casts light on acrylamide
Acrolein
Alkyl nitrites
Deep-frying
Deep fryer
Vacuum fryer
Substance of very high concern
Heterocyclic amines
Polycyclic aromatic hydrocarbons
References
Further reading
External links
Carboxamides
Hazardous air pollutants
IARC Group 2A carcinogens
Monomers
Reproductive toxicants
Suspected fetotoxicants | Acrylamide | [ "Chemistry", "Materials_science" ] | 1,627 | [ "Endocrine disruptors", "Reproductive toxicants", "Monomers", "Polymer chemistry" ] |
56,369 | https://en.wikipedia.org/wiki/Bell%27s%20theorem | Bell's theorem is a term encompassing a number of closely related results in physics, all of which determine that quantum mechanics is incompatible with local hidden-variable theories, given some basic assumptions about the nature of measurement. "Local" here refers to the principle of locality, the idea that a particle can only be influenced by its immediate surroundings, and that interactions mediated by physical fields cannot propagate faster than the speed of light. "Hidden variables" are supposed properties of quantum particles that are not included in quantum theory but nevertheless affect the outcome of experiments. In the words of physicist John Stewart Bell, for whom this family of results is named, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local."
The first such result was introduced by Bell in 1964, building upon the Einstein–Podolsky–Rosen paradox, which had called attention to the phenomenon of quantum entanglement. Bell deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. Such a constraint would later be named a Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Multiple variations on Bell's theorem were put forward in the following years, using different assumptions and obtaining different Bell (or "Bell-type") inequalities.
The first rudimentary experiment designed to test Bell's theorem was performed in 1972 by John Clauser and Stuart Freedman. More advanced experiments, known collectively as Bell tests, have been performed many times since. Often, these experiments have had the goal of "closing loopholes", that is, ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. Bell tests have consistently found that physical systems obey quantum mechanics and violate Bell inequalities; which is to say that the results of these experiments are incompatible with local hidden-variable theories.
The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and by philosophers. While the significance of Bell's theorem is not in doubt, different interpretations of quantum mechanics disagree about what exactly it implies.
Theorem
There are many variations on the basic idea, some employing stronger mathematical assumptions than others. Significantly, Bell-type theorems do not refer to any particular theory of local hidden variables, but instead show that quantum physics violates general assumptions behind classical pictures of nature. The original theorem proved by Bell in 1964 is not the most amenable to experiment, and it is convenient to introduce the genre of Bell-type inequalities with a later example.
Hypothetical characters Alice and Bob stand in widely separated locations. Their colleague Victor prepares a pair of particles and sends one to Alice and the other to Bob. When Alice receives her particle, she chooses to perform one of two possible measurements (perhaps by flipping a coin to decide which). Denote these measurements by A0 and A1. Both A0 and A1 are binary measurements: the result of A0 is either +1 or −1, and likewise for A1. When Bob receives his particle, he chooses one of two measurements, B0 and B1, which are also both binary.
Suppose that each measurement reveals a property that the particle already possessed. For instance, if Alice chooses to measure A0 and obtains the result +1, then the particle she received carried a value of +1 for the property A0. Consider the combination

A0B0 + A0B1 + A1B0 − A1B1 = A0(B0 + B1) + A1(B0 − B1).

Because both B0 and B1 take the values ±1, then either B0 = B1 or B0 = −B1. In the former case, the quantity A1(B0 − B1) must equal 0, while in the latter case, A0(B0 + B1) = 0. So, one of the terms on the right-hand side of the above expression will vanish, and the other will equal ±2. Consequently, if the experiment is repeated over many trials, with Victor preparing new pairs of particles, the absolute value of the average of the combination across all the trials will be less than or equal to 2. No single trial can measure this quantity, because Alice and Bob can only choose one measurement each, but on the assumption that the underlying properties exist, the average value of the sum is just the sum of the averages for each term. Using angle brackets to denote averages,

|⟨A0B0⟩ + ⟨A0B1⟩ + ⟨A1B0⟩ − ⟨A1B1⟩| ≤ 2.

This is a Bell inequality, specifically, the CHSH inequality. Its derivation here depends upon two assumptions: first, that the underlying physical properties A0, A1, B0 and B1 exist independently of being observed or measured (sometimes called the assumption of realism); and second, that Alice's choice of action cannot influence Bob's result or vice versa (often called the assumption of locality).
Quantum mechanics can violate the CHSH inequality, as follows. Victor prepares a pair of qubits which he describes by the Bell state

|ψ⟩ = (|00⟩ + |11⟩)/√2,

where |0⟩ and |1⟩ are the eigenstates of one of the Pauli matrices, σz.
Victor then passes the first qubit to Alice and the second to Bob. Alice and Bob's choices of possible measurements are also defined in terms of the Pauli matrices. Alice measures either of the two observables A0 and A1:

A0 = σz, A1 = σx,

and Bob measures either of the two observables

B0 = (σz + σx)/√2, B1 = (σz − σx)/√2.

Victor can calculate the quantum expectation values for pairs of these observables using the Born rule:

⟨A0 ⊗ B0⟩ = 1/√2, ⟨A0 ⊗ B1⟩ = 1/√2, ⟨A1 ⊗ B0⟩ = 1/√2, ⟨A1 ⊗ B1⟩ = −1/√2.

While only one of these four measurements can be made in a single trial of the experiment, the sum

⟨A0 ⊗ B0⟩ + ⟨A0 ⊗ B1⟩ + ⟨A1 ⊗ B0⟩ − ⟨A1 ⊗ B1⟩ = 2√2

gives the sum of the average values that Victor expects to find across multiple trials. This value exceeds the classical upper bound of 2 that was deduced from the hypothesis of local hidden variables. The value 2√2 is in fact the largest that quantum physics permits for this combination of expectation values, making it a Tsirelson bound.
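These expectation values can be checked numerically. The following sketch, a verification aid rather than part of the original argument, builds the Bell state and the four observables with NumPy and confirms that the CHSH combination equals 2√2:

import numpy as np

# Pauli matrices and the Bell state (|00> + |11>)/sqrt(2).
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

A0, A1 = sz, sx
B0 = (sz + sx) / np.sqrt(2)
B1 = (sz - sx) / np.sqrt(2)

def expval(A, B):
    """Born-rule expectation value <psi| A (x) B |psi>."""
    return (psi.conj() @ np.kron(A, B) @ psi).real

S = expval(A0, B0) + expval(A0, B1) + expval(A1, B0) - expval(A1, B1)
print(S, 2 * np.sqrt(2))  # both ~2.828, the Tsirelson bound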
The CHSH inequality can also be thought of as a game in which Alice and Bob try to coordinate their actions. Victor prepares two bits, x and y, independently and at random. He sends bit x to Alice and bit y to Bob. Alice and Bob win if they return answer bits a and b to Victor, satisfying

a XOR b = x AND y.

Or, equivalently, Alice and Bob win if the logical AND of x and y is the logical XOR of a and b. Alice and Bob can agree upon any strategy they desire before the game, but they cannot communicate once the game begins. In any theory based on local hidden variables, Alice and Bob's probability of winning is no greater than 3/4, regardless of what strategy they agree upon beforehand. However, if they share an entangled quantum state, their probability of winning can be as large as cos²(π/8) ≈ 0.85.
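The classical bound of 3/4 can be confirmed by brute force. In a local deterministic strategy Alice's answer depends only on x and Bob's only on y, and shared randomness cannot beat the best deterministic pair, so it suffices to enumerate all 16 pairs of answer tables, as in this sketch:

from itertools import product

# A deterministic local strategy is a table a(x) for Alice and b(y) for Bob.
# Enumerate all 2^2 * 2^2 = 16 strategies and score them on the 4 inputs.
best = 0.0
for a0, a1, b0, b1 in product([0, 1], repeat=4):
    wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
               for x, y in product([0, 1], repeat=2))
    best = max(best, wins / 4)
print(best)  # 0.75: no local strategy wins more than 3 of the 4 cases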
Variations and related results
Bell (1964)
Bell's 1964 paper points out that under restricted conditions, local hidden-variable models can reproduce the predictions of quantum mechanics. He then demonstrates that this cannot hold true in general. Bell considers a refinement by David Bohm of the Einstein–Podolsky–Rosen (EPR) thought experiment. In this scenario, a pair of particles are formed together in such a way that they are described by a spin singlet state (which is an example of an entangled state). The particles then move apart in opposite directions. Each particle is measured by a Stern–Gerlach device, a measuring instrument that can be oriented in different directions and that reports one of two possible outcomes, representable by +1 and −1. The configuration of each measuring instrument is represented by a unit vector, and the quantum-mechanical prediction for the correlation between two detectors with settings a and b is

P(a, b) = −a · b.

In particular, if the orientation of the two detectors is the same (a = b), then the outcome of one measurement is certain to be the negative of the outcome of the other, giving P(a, b) = −1. And if the orientations of the two detectors are orthogonal (a · b = 0), then the outcomes are uncorrelated, and P(a, b) = 0. Bell proves by example that these special cases can be explained in terms of hidden variables, then proceeds to show that the full range of possibilities involving intermediate angles cannot.
Bell posited that a local hidden-variable model for these correlations would explain them in terms of an integral over the possible values of some hidden parameter λ:

P(a, b) = ∫ ρ(λ) A(a, λ) B(b, λ) dλ,

where ρ(λ) is a probability density function. The two functions A(a, λ) and B(b, λ) provide the responses of the two detectors given the orientation vectors and the hidden variable:

A(a, λ) = ±1, B(b, λ) = ±1.

Crucially, the outcome of detector A does not depend upon b, and likewise the outcome of B does not depend upon a, because the two detectors are physically separated. Now we suppose that the experimenter has a choice of settings for the second detector: it can be set either to b or to c. Bell proves that the difference in correlation between these two choices of detector setting must satisfy the inequality

|P(a, b) − P(a, c)| ≤ 1 + P(b, c).
However, it is easy to find situations where quantum mechanics violates the Bell inequality. For example, let the vectors a and b be orthogonal, and let c lie in their plane at a 45° angle from both of them. Then

P(a, b) = 0,

while

P(a, c) = P(b, c) = −√2/2 ≈ −0.7071,

but

|P(a, b) − P(a, c)| = √2/2 > 1 + P(b, c) = 1 − √2/2 ≈ 0.2929.

Therefore, there is no local hidden-variable model that can reproduce the predictions of quantum mechanics for all choices of a, b, and c. Experimental results contradict the classical curves and match the curve predicted by quantum mechanics as long as experimental shortcomings are accounted for.
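The violation is easy to reproduce numerically. The sketch below evaluates the singlet correlation P(a, b) = −a · b for the three settings just described and tests the 1964 inequality:

import numpy as np

def P(a, b):
    """Singlet-state correlation for detector settings a and b."""
    return -np.dot(a, b)

a = np.array([1.0, 0.0])               # two orthogonal settings ...
b = np.array([0.0, 1.0])
c = np.array([1.0, 1.0]) / np.sqrt(2)  # ... and one at 45 degrees to both

lhs = abs(P(a, b) - P(a, c))
rhs = 1 + P(b, c)
print(f"{lhs:.4f} <= {rhs:.4f}? {lhs <= rhs}")  # 0.7071 <= 0.2929? False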
Bell's 1964 theorem requires the possibility of perfect anti-correlations: the ability to make a probability-1 prediction about the result from the second detector, knowing the result from the first. This is related to the "EPR criterion of reality", a concept introduced in the 1935 paper by Einstein, Podolsky, and Rosen. This paper posits: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity."
GHZ–Mermin (1990)
Daniel Greenberger, Michael A. Horne, and Anton Zeilinger presented a four-particle thought experiment in 1990, which David Mermin then simplified to use only three particles. In this thought experiment, Victor generates a set of three spin-1/2 particles described by the quantum state

|GHZ⟩ = (|000⟩ + |111⟩)/√2,

where, as above, |0⟩ and |1⟩ are the eigenvectors of the Pauli matrix σz. Victor then sends a particle each to Alice, Bob, and Charlie, who wait at widely separated locations. Alice measures either σx or σy on her particle, and so do Bob and Charlie. The result of each measurement is either +1 or −1. Applying the Born rule to the three-qubit state |GHZ⟩, Victor predicts that whenever the three measurements include one σx and two σy's, the product of the outcomes will always be −1. This follows because |GHZ⟩ is an eigenvector of σx ⊗ σy ⊗ σy with eigenvalue −1, and likewise for σy ⊗ σx ⊗ σy and σy ⊗ σy ⊗ σx. Therefore, knowing Alice's result for a σy measurement and Bob's result for a σy measurement, Victor can predict with probability 1 what result Charlie will return for a σx measurement. According to the EPR criterion of reality, there would be an "element of reality" corresponding to the outcome of a σx measurement upon Charlie's qubit. Indeed, this same logic applies to both measurements and all three qubits. Per the EPR criterion of reality, then, each particle contains an "instruction set" that determines the outcome of a σx or σy measurement upon it. The set of all three particles would then be described by the instruction set

(ax, ay, bx, by, cx, cy),

with each entry being either +1 or −1, and each σx or σy measurement simply returning the appropriate value.
If Alice, Bob, and Charlie all perform the σx measurement, then the product of their results would be ax bx cx. This value can be deduced from

(ax by cy)(ay bx cy)(ay by cx) = ax bx cx,

because the square of either +1 or −1 is 1. Each factor in parentheses equals −1, so

ax bx cx = −1,

and the product of Alice, Bob, and Charlie's results will be −1 with probability unity. But this is inconsistent with quantum physics: Victor can predict using the state |GHZ⟩ that the measurement of σx ⊗ σx ⊗ σx will instead yield +1 with probability unity.
This thought experiment can also be recast as a traditional Bell inequality or, equivalently, as a nonlocal game in the same spirit as the CHSH game. In it, Alice, Bob, and Charlie receive bits x, y, z from Victor, promised to always have an even number of ones, that is, x ⊕ y ⊕ z = 0, and send him back bits a, b, c. They win the game if a, b, c have an odd number of ones for all inputs except x = y = z = 0, when they need to have an even number of ones. That is, they win the game iff a ⊕ b ⊕ c = x ∨ y ∨ z. With local hidden variables the highest probability of victory they can have is 3/4, whereas using the quantum strategy above they win it with certainty. This is an example of quantum pseudo-telepathy.
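Both quantum predictions used in the argument (the eigenvalue −1 for one σx and two σy's, and +1 for three σx's) can be verified directly, as in this sketch:

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)   # (|000> + |111>)/sqrt(2)

def expval(ops):
    """Expectation value of a three-qubit tensor product in the GHZ state."""
    M = np.kron(np.kron(ops[0], ops[1]), ops[2])
    return (ghz.conj() @ M @ ghz).real

print(expval([sx, sy, sy]), expval([sy, sx, sy]), expval([sy, sy, sx]))  # -1 -1 -1
print(expval([sx, sx, sx]))  # +1, contradicting the instruction-set value -1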
Kochen–Specker theorem (1967)
In quantum theory, orthonormal bases for a Hilbert space represent measurements that can be performed upon a system having that Hilbert space. Each vector in a basis represents a possible outcome of that measurement. Suppose that a hidden variable λ exists, so that knowing the value of λ would imply certainty about the outcome of any measurement. Given a value of λ, each measurement outcome – that is, each vector in the Hilbert space – is either impossible or guaranteed. A Kochen–Specker configuration is a finite set of vectors made of multiple interlocking bases, with the property that a vector in it will always be impossible when considered as belonging to one basis and guaranteed when taken as belonging to another. In other words, a Kochen–Specker configuration is an "uncolorable set" that demonstrates the inconsistency of assuming a hidden variable can be controlling the measurement outcomes.
Free will theorem
The Kochen–Specker type of argument, using configurations of interlocking bases, can be combined with the idea of measuring entangled pairs that underlies Bell-type inequalities. This was noted beginning in the 1970s by Kochen, Heywood and Redhead, Stairs, and Brown and Svetlichny. As EPR pointed out, obtaining a measurement outcome on one half of an entangled pair implies certainty about the outcome of a corresponding measurement on the other half. The "EPR criterion of reality" posits that because the second half of the pair was not disturbed, that certainty must be due to a physical property belonging to it. In other words, by this criterion, a hidden variable must exist within the second, as-yet unmeasured half of the pair. No contradiction arises if only one measurement on the first half is considered. However, if the observer has a choice of multiple possible measurements, and the vectors defining those measurements form a Kochen–Specker configuration, then some outcome on the second half will be simultaneously impossible and guaranteed.
This type of argument gained attention when an instance of it was advanced by John Conway and Simon Kochen under the name of the free will theorem. The Conway–Kochen theorem uses a pair of entangled qutrits and a Kochen–Specker configuration discovered by Asher Peres.
Quasiclassical entanglement
As Bell pointed out, some predictions of quantum mechanics can be replicated in local hidden-variable models, including special cases of correlations produced from entanglement. This topic has been studied systematically in the years since Bell's theorem. In 1989, Reinhard Werner introduced what are now called Werner states, joint quantum states for a pair of systems that yield EPR-type correlations but also admit a hidden-variable model. Werner states are bipartite quantum states that are invariant under unitaries of symmetric tensor-product form:

ρ = (U ⊗ U) ρ (U ⊗ U)†.
In 2004, Robert Spekkens introduced a toy model that starts with the premise of local, discretized degrees of freedom and then imposes a "knowledge balance principle" that restricts how much an observer can know about those degrees of freedom, thereby making them into hidden variables. The allowed states of knowledge ("epistemic states") about the underlying variables ("ontic states") mimic some features of quantum states. Correlations in the toy model can emulate some aspects of entanglement, like monogamy, but by construction, the toy model can never violate a Bell inequality.
History
Background
The question of whether quantum mechanics can be "completed" by hidden variables dates to the early years of quantum theory. In his 1932 textbook on quantum mechanics, the Hungarian-born polymath John von Neumann presented what he claimed to be a proof that there could be no "hidden parameters". The validity and definitiveness of von Neumann's proof were questioned by Hans Reichenbach, in more detail by Grete Hermann, and possibly in conversation though not in print by Albert Einstein. (Simon Kochen and Ernst Specker rejected von Neumann's key assumption as early as 1961, but did not publish a criticism of it until 1967.)
Einstein argued persistently that quantum mechanics could not be a complete theory. His preferred argument relied on a principle of locality:
Consider a mechanical system constituted of two partial systems A and B which have interaction with each other only during limited time. Let the ψ function before their interaction be given. Then the Schrödinger equation will furnish the ψ function after their interaction has taken place. Let us now determine the physical condition of the partial system A as completely as possible by measurements. Then the quantum mechanics allows us to determine the ψ function of the partial system B from the measurements made, and from the ψ function of the total system. This determination, however, gives a result which depends upon which of the determining magnitudes specifying the condition of A has been measured (for instance coordinates or momenta). Since there can be only one physical condition of B after the interaction and which can reasonably not be considered as dependent on the particular measurement we perform on the system A separated from B it may be concluded that the ψ function is not unambiguously coordinated with the physical condition. This coordination of several ψ functions with the same physical condition of system B shows again that the ψ function cannot be interpreted as a (complete) description of a physical condition of a unit system.
The EPR thought experiment is similar, also considering two separated systems A and B described by a joint wave function. However, the EPR paper adds the idea later known as the EPR criterion of reality, according to which the ability to predict with probability 1 the outcome of a measurement upon B implies the existence of an "element of reality" within B.
In 1951, David Bohm proposed a variant of the EPR thought experiment in which the measurements have discrete ranges of possible outcomes, unlike the position and momentum measurements considered by EPR. The year before, Chien-Shiung Wu and Irving Shaknov had successfully measured polarizations of photons produced in entangled pairs, thereby making the Bohm version of the EPR thought experiment practically feasible.
By the late 1940s, the mathematician George Mackey had grown interested in the foundations of quantum physics, and in 1957 he drew up a list of postulates that he took to be a precise definition of quantum mechanics. Mackey conjectured that one of the postulates was redundant, and shortly thereafter, Andrew M. Gleason proved that it was indeed deducible from the other postulates. Gleason's theorem provided an argument that a broad class of hidden-variable theories are incompatible with quantum mechanics. More specifically, Gleason's theorem rules out hidden-variable models that are "noncontextual". Any hidden-variable model for quantum mechanics must, in order to avoid the implications of Gleason's theorem, involve hidden variables that are not properties belonging to the measured system alone but also dependent upon the external context in which the measurement is made. This type of dependence is often seen as contrived or undesirable; in some settings, it is inconsistent with special relativity. The Kochen–Specker theorem refines this statement by constructing a specific finite subset of rays on which no such probability measure can be defined.
Tsung-Dao Lee came close to deriving Bell's theorem in 1960. He considered events where two kaons were produced traveling in opposite directions, and came to the conclusion that hidden variables could not explain the correlations that could be obtained in such situations. However, complications arose due to the fact that kaons decay, and he did not go so far as to deduce a Bell-type inequality.
Bell's publications
Bell chose to publish his theorem in a comparatively obscure journal because it did not require page charges, in fact paying the authors who published there at the time. Because the journal did not provide free reprints of articles for the authors to distribute, however, Bell had to spend the money he received to buy copies that he could send to other physicists. While the articles printed in the journal themselves listed the publication's name simply as Physics, the covers carried the trilingual version Physics Physique Физика to reflect that it would print articles in English, French and Russian.
Prior to proving his 1964 result, Bell also proved a result equivalent to the Kochen–Specker theorem (hence the latter is sometimes also known as the Bell–Kochen–Specker or Bell–KS theorem). However, publication of this theorem was inadvertently delayed until 1966. In that paper, Bell argued that because an explanation of quantum phenomena in terms of hidden variables would require nonlocality, the EPR paradox "is resolved in the way which Einstein would have liked least."
Experiments
In 1967, the unusual title Physics Physique Физика caught the attention of John Clauser, who then discovered Bell's paper and began to consider how to perform a Bell test in the laboratory. Clauser and Stuart Freedman would go on to perform a Bell test in 1972. This was only a limited test, because the choice of detector settings was made before the photons had left the source. In 1982, Alain Aspect and collaborators performed the first Bell test to remove this limitation. This began a trend of progressively more stringent Bell tests. The GHZ thought experiment was implemented in practice, using entangled triplets of photons, in 2000. By 2002, testing the CHSH inequality was feasible in undergraduate laboratory courses.
In Bell tests, there may be problems of experimental design or set-up that affect the validity of the experimental findings. These problems are often referred to as "loopholes". The purpose of the experiment is to test whether nature can be described by local hidden-variable theory, which would contradict the predictions of quantum mechanics.
The most prevalent loopholes in real experiments are the detection and locality loopholes. The detection loophole is opened when a small fraction of the particles (usually photons) are detected in the experiment, making it possible to explain the data with local hidden variables by assuming that the detected particles are an unrepresentative sample. The locality loophole is opened when the detections are not done with a spacelike separation, making it possible for the result of one measurement to influence the other without contradicting relativity. In some experiments there may be additional defects that make local-hidden-variable explanations of Bell test violations possible.
Although both the locality and detection loopholes had been closed in different experiments, a long-standing challenge was to close both simultaneously in the same experiment. This was finally achieved in three experiments in 2015.
Regarding these results, Alain Aspect writes that "no experiment ... can be said to be totally loophole-free," but he says the experiments "remove the last doubts that we should renounce" local hidden variables, and refers to examples of remaining loopholes as being "far fetched" and "foreign to the usual way of reasoning in physics."
These efforts to experimentally validate violations of the Bell inequalities would later result in Clauser, Aspect, and Anton Zeilinger being awarded the 2022 Nobel Prize in Physics.
Interpretations
Reactions to Bell's theorem have been many and varied. Maximilian Schlosshauer, Johannes Kofler, and Zeilinger write that Bell inequalities provide "a wonderful example of how we can have a rigorous theoretical result tested by numerous experiments, and yet disagree about the implications."
The Copenhagen interpretation
Copenhagen-type interpretations generally take the violation of Bell inequalities as grounds to reject the assumption often called counterfactual definiteness or "realism", which is not necessarily the same as abandoning realism in a broader philosophical sense. For example, Roland Omnès argues for the rejection of hidden variables and concludes that "quantum mechanics is probably as realistic as any theory of its scope and maturity ever will be". Likewise, Rudolf Peierls took the message of Bell's theorem to be that, because the premise of locality is physically reasonable, "hidden variables cannot be introduced without abandoning some of the results of quantum mechanics".
This is also the route taken by interpretations that descend from the Copenhagen tradition, such as consistent histories (often advertised as "Copenhagen done right"), as well as QBism.
Many-worlds interpretation of quantum mechanics
The Many-worlds interpretation, also known as the Everett interpretation, is dynamically local, meaning that it does not call for action at a distance, and deterministic, because it consists of the unitary part of quantum mechanics without collapse. It can generate correlations that violate a Bell inequality because it violates an implicit assumption by Bell that measurements have a single outcome. In fact, Bell's theorem can be proven in the Many-Worlds framework from the assumption that a measurement has a single outcome. Therefore, a violation of a Bell inequality can be interpreted as a demonstration that measurements have multiple outcomes.
The explanation it provides for the Bell correlations is that when Alice and Bob make their measurements, they split into local branches. From the point of view of each copy of Alice, there are multiple copies of Bob experiencing different results, so Bob cannot have a definite result, and the same is true from the point of view of each copy of Bob. They will obtain a mutually well-defined result only when their future light cones overlap. At this point we can say that the Bell correlation starts existing, but it was produced by a purely local mechanism. Therefore, the violation of a Bell inequality cannot be interpreted as a proof of non-locality.
Non-local hidden variables
Most advocates of the hidden-variables idea believe that experiments have ruled out local hidden variables. They are ready to give up locality, explaining the violation of Bell's inequality by means of a non-local hidden variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. One challenge for non-local hidden variable theories is to explain why this instantaneous communication can exist at the level of the hidden variables, but it cannot be used to send signals. A 2007 experiment ruled out a large class of non-Bohmian non-local hidden variable theories, though not Bohmian mechanics itself.
The transactional interpretation, which postulates waves traveling both backwards and forwards in time, is likewise non-local.
Superdeterminism
A necessary assumption to derive Bell's theorem is that the hidden variables are not correlated with the measurement settings. This assumption has been justified on the grounds that the experimenter has "free will" to choose the settings, and that it is necessary to do science in the first place. A (hypothetical) theory where the choice of measurement is necessarily correlated with the system being measured is known as superdeterministic.
A few advocates of deterministic models have not given up on local hidden variables. For example, Gerard 't Hooft has argued that superdeterminism cannot be dismissed.
See also
Einstein's thought experiments
Epistemological Letters
Fundamental Fysiks Group
Leggett inequality
Leggett–Garg inequality
Mermin's device
Mott problem
PBR theorem
Quantum contextuality
Quantum nonlocality
Renninger negative-result experiment
Notes
References
Further reading
The following are intended for general audiences.
The following are more technically oriented.
External links
Mermin: Spooky Actions At A Distance? Oppenheimer Lecture.
Quantum information science
Quantum measurement
Theorems in quantum mechanics
Hidden variable theory
Inequalities
1964 introductions
No-go theorems | Bell's theorem | [ "Physics", "Mathematics" ] | 5,685 | [ "Theorems in quantum mechanics", "Mathematical theorems", "No-go theorems", "Equations of physics", "Quantum mechanics", "Binary relations", "Theorems in mathematical physics", "Quantum measurement", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Physics theo... |
56,398 | https://en.wikipedia.org/wiki/Phase%20diagram | A phase diagram in physical chemistry, engineering, mineralogy, and materials science is a type of chart used to show conditions (pressure, temperature, etc.) at which thermodynamically distinct phases (such as solid, liquid or gaseous states) occur and coexist at equilibrium.
Overview
Common components of a phase diagram are lines of equilibrium or phase boundaries, which refer to lines that mark conditions under which multiple phases can coexist at equilibrium. Phase transitions occur along lines of equilibrium. Metastable phases are not shown in phase diagrams as, despite their common occurrence, they are not equilibrium phases.
Triple points are points on phase diagrams where lines of equilibrium intersect. Triple points mark conditions at which three different phases can coexist. For example, the water phase diagram has a triple point corresponding to the single temperature and pressure at which solid, liquid, and gaseous water can coexist in a stable equilibrium (0.01 °C, i.e. 273.16 K, and a partial vapor pressure of 611.7 Pa). The pressure on a pressure-temperature diagram (such as the water phase diagram shown) is the partial pressure of the substance in question.
The solidus is the temperature below which the substance is stable in the solid state. The liquidus is the temperature above which the substance is stable in a liquid state. There may be a gap between the solidus and liquidus; within the gap, the substance consists of a mixture of crystals and liquid (like a "slurry").
Working fluids are often categorized on the basis of the shape of their phase diagram.
Types
2-dimensional diagrams
Pressure vs temperature
The simplest phase diagrams are pressure–temperature diagrams of a single simple substance, such as water. The axes correspond to the pressure and temperature. The phase diagram shows, in pressure–temperature space, the lines of equilibrium or phase boundaries between the three phases of solid, liquid, and gas.
The curves on the phase diagram show the points where the free energy (and other derived properties) becomes non-analytic: their derivatives with respect to the coordinates (temperature and pressure in this example) change discontinuously (abruptly). For example, the heat capacity of a container filled with ice will change abruptly as the container is heated past the melting point. The open spaces, where the free energy is analytic, correspond to single phase regions. Single phase regions are separated by lines of non-analytical behavior, where phase transitions occur, which are called phase boundaries.
In the diagram on the right, the phase boundary between liquid and gas does not continue indefinitely. Instead, it terminates at a point on the phase diagram called the critical point. This reflects the fact that, at extremely high temperatures and pressures, the liquid and gaseous phases become indistinguishable, in what is known as a supercritical fluid. In water, the critical point occurs at around Tc = 647.1 K (373.9 °C), pc = 22.06 MPa (217.7 atm) and ρc = 356 kg/m3.
The existence of the liquid–gas critical point reveals a slight ambiguity in labelling the single phase regions. When going from the liquid to the gaseous phase, one usually crosses the phase boundary, but it is possible to choose a path that never crosses the boundary by going to the right of the critical point. Thus, the liquid and gaseous phases can blend continuously into each other. The solid–liquid phase boundary can only end in a critical point if the solid and liquid phases have the same symmetry group.
For most substances, the solid–liquid phase boundary (or fusion curve) in the phase diagram has a positive slope so that the melting point increases with pressure. This is true whenever the solid phase is denser than the liquid phase. The greater the pressure on a given substance, the closer together the molecules of the substance are brought to each other, which increases the effect of the substance's intermolecular forces. Thus, the substance requires a higher temperature for its molecules to have enough energy to break out of the fixed pattern of the solid phase and enter the liquid phase. A similar concept applies to liquid–gas phase changes.
Water is an exception which has a solid-liquid boundary with negative slope so that the melting point decreases with pressure. This occurs because ice (solid water) is less dense than liquid water, as shown by the fact that ice floats on water. At a molecular level, ice is less dense because it has a more extensive network of hydrogen bonding which requires a greater separation of water molecules. Other exceptions include antimony and bismuth.
At very high pressures above 50 GPa (500 000 atm), liquid nitrogen undergoes a liquid-liquid phase transition to a polymeric form and becomes denser than solid nitrogen at the same pressure. Under these conditions therefore, solid nitrogen also floats in its liquid.
The value of the slope dP/dT is given by the Clausius–Clapeyron equation for fusion (melting):

dP/dT = ΔHfus / (T ΔVfus),

where ΔHfus is the heat of fusion, which is always positive, and ΔVfus is the volume change for fusion. For most substances ΔVfus is positive so that the slope is positive. However, for water and other exceptions, ΔVfus is negative so that the slope is negative.
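As a worked example, the slope for the melting of ice near 0 °C can be evaluated from round literature values (latent heat of fusion about 334 kJ/kg, ice density about 917 kg/m3, liquid water about 1000 kg/m3). The sketch below reproduces the negative slope of water's fusion curve:

# Clausius-Clapeyron slope for melting: dP/dT = L / (T * dv),
# with L the specific latent heat of fusion and dv = v_liquid - v_solid.
L = 334e3           # J/kg, latent heat of fusion of ice (round value)
T = 273.15          # K, melting point at 1 atm
v_liq = 1 / 1000.0  # m3/kg, specific volume of liquid water
v_sol = 1 / 917.0   # m3/kg, specific volume of ice (less dense!)

slope = L / (T * (v_liq - v_sol))          # Pa per kelvin
print(f"dP/dT = {slope / 1e6:.1f} MPa/K")  # ~ -13.5 MPa/K
print(f"shift per atm: {101325 / slope * 1e3:.1f} mK")
# ~ -7.5 mK: the melting point drops about 0.0075 K per atmosphere.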
Other thermodynamic properties
In addition to temperature and pressure, other thermodynamic properties may be graphed in phase diagrams. Examples of such thermodynamic properties include specific volume, specific enthalpy, or specific entropy. For example, single-component graphs of temperature vs. specific entropy (T vs. s) for water/steam or for a refrigerant are commonly used to illustrate thermodynamic cycles such as a Carnot cycle, Rankine cycle, or vapor-compression refrigeration cycle.
Any two thermodynamic quantities may be shown on the horizontal and vertical axes of a two-dimensional diagram. Additional thermodynamic quantities may each be illustrated in increments as a series of lines—curved, straight, or a combination of curved and straight. Each of these iso-lines represents the thermodynamic quantity at a certain constant value.
3-dimensional diagrams
It is possible to envision three-dimensional (3D) graphs showing three thermodynamic quantities. For example, for a single component, a 3D Cartesian coordinate type graph can show temperature (T) on one axis, pressure (p) on a second axis, and specific volume (v) on a third. Such a 3D graph is sometimes called a p–v–T diagram. The equilibrium conditions are shown as curves on a curved surface in 3D with areas for solid, liquid, and vapor phases and areas where solid and liquid, solid and vapor, or liquid and vapor coexist in equilibrium. A line on the surface called a triple line is where solid, liquid and vapor can all coexist in equilibrium. The critical point remains a point on the surface even on a 3D phase diagram.
An orthographic projection of the 3D p–v–T graph showing pressure and temperature as the vertical and horizontal axes collapses the 3D plot into the standard 2D pressure–temperature diagram. When this is done, the solid–vapor, solid–liquid, and liquid–vapor surfaces collapse into three corresponding curved lines meeting at the triple point, which is the collapsed orthographic projection of the triple line.
Binary mixtures
Other much more complex types of phase diagrams can be constructed, particularly when more than one pure component is present. In that case, concentration becomes an important variable. Phase diagrams with more than two dimensions can be constructed that show the effect of more than two variables on the phase of a substance. Phase diagrams can use other variables in addition to or in place of temperature, pressure and composition, for example the strength of an applied electrical or magnetic field, and they can also involve substances that take on more than just three states of matter.
One type of phase diagram plots temperature against the relative concentrations of two substances in a binary mixture called a binary phase diagram, as shown at right. Such a mixture can be either a solid solution, eutectic or peritectic, among others. These two types of mixtures result in very different graphs. Another type of binary phase diagram is a boiling-point diagram for a mixture of two components, i.e., chemical compounds. For two particular volatile components at a certain pressure such as atmospheric pressure, a boiling-point diagram shows what vapor (gas) compositions are in equilibrium with given liquid compositions depending on temperature. In a typical binary boiling-point diagram, temperature is plotted on a vertical axis and mixture composition on a horizontal axis.
A two component diagram with components A and B in an "ideal" solution is shown. The construction of a liquid vapor phase diagram assumes an ideal liquid solution obeying Raoult's law and an ideal gas mixture obeying Dalton's law of partial pressure. A tie line from the liquid to the gas at constant pressure would indicate the two compositions of the liquid and gas respectively.
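A minimal sketch of that construction follows. The saturation pressures in it are invented, illustrative numbers for a hypothetical pair of volatile components, not data for any real system; Raoult's law fixes the liquid composition that boils at a given total pressure, and Dalton's law then gives the coexisting vapor composition:

# Ideal binary vapor-liquid equilibrium at fixed temperature:
# Raoult: P = x1*P1sat + (1 - x1)*P2sat  ->  solve for x1 at given total P.
# Dalton: y1 = x1 * P1sat / P            ->  vapor composition on the tie line.
def tie_line(P, P1sat, P2sat):
    """Return (x1, y1): liquid and vapor mole fractions of component 1."""
    x1 = (P - P2sat) / (P1sat - P2sat)
    if not 0.0 <= x1 <= 1.0:
        raise ValueError("no two-phase coexistence at this pressure")
    y1 = x1 * P1sat / P
    return x1, y1

# Illustrative saturation pressures (kPa) for a hypothetical pair at one T:
x1, y1 = tie_line(P=101.3, P1sat=180.0, P2sat=70.0)
print(f"x1 = {x1:.3f}, y1 = {y1:.3f}")  # vapor is richer in the volatile component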
A simple example diagram with hypothetical components 1 and 2 in a non-azeotropic mixture is shown at right. The fact that there are two separate curved lines joining the boiling points of the pure components means that the vapor composition is usually not the same as the liquid composition the vapor is in equilibrium with. See Vapor–liquid equilibrium for more information.
In addition to the above-mentioned types of phase diagrams, there are many other possible combinations. Some of the major features of phase diagrams include congruent points, where a solid phase transforms directly into a liquid. There is also the peritectoid, a point where two solid phases combine into one solid phase during cooling. The inverse of this, when one solid phase transforms into two solid phases during cooling, is called the eutectoid.
A complex phase diagram of great technological importance is that of the iron–carbon system for less than 7% carbon (see steel).
The x-axis of such a diagram represents the concentration variable of the mixture. As the mixtures are typically far from dilute and their density as a function of temperature is usually unknown, the preferred concentration measure is mole fraction. A volume-based measure like molarity would be inadvisable.
Ternary phase diagrams
A system with three components is called a ternary system. At constant pressure the maximum number of independent variables is three – the temperature and two concentration values. For a representation of ternary equilibria a three-dimensional phase diagram is required. Often such a diagram is drawn with the composition as a horizontal plane and the temperature on an axis perpendicular to this plane. To represent composition in a ternary system an equilateral triangle is used, called Gibbs triangle (see also Ternary plot).
The temperature scale is plotted on the axis perpendicular to the composition triangle. Thus, the space model of a ternary phase diagram is a right-triangular prism. The prism sides represent corresponding binary systems A-B, B-C, A-C.
However, the most common methods to present phase equilibria in a ternary system are the following:
1) projections on the concentration triangle ABC of the liquidus, solidus, solvus surfaces;
2) isothermal sections;
3) vertical sections.
Crystals
Polymorphic and polyamorphic substances have multiple crystal or amorphous phases, which can be graphed in a similar fashion to solid, liquid, and gas phases.
Mesophases
Some organic materials pass through intermediate states between solid and liquid; these states are called mesophases. Attention has been directed to mesophases because they enable display devices and have become commercially important through the so-called liquid-crystal technology. Phase diagrams are used to describe the occurrence of mesophases.
See also
CALPHAD (method)
Computational thermodynamics
Congruent melting and incongruent melting
Gibbs phase rule
Glass databases
Hamiltonian mechanics
Phase separation
Saturation dome
Schreinemaker's analysis
Simple phase envelope algorithm
References
External links
Iron-Iron Carbide Phase Diagram Example
How to build a phase diagram
Phase Changes: Phase Diagrams: Part 1
Equilibrium Fe-C phase diagram
Phase diagrams for lead free solders
DoITPoMS Phase Diagram Library
DoITPoMS Teaching and Learning Package – "Phase Diagrams and Solidification"
Phase Diagrams: The Beginning of Wisdom – Open Access Journal Article
Binodal curves, tie-lines, lever rule and invariant points – How to read phase diagrams (Video by SciFox on TIB AV-Portal)
The Alloy Phase Diagram International Commission (APDIC)
Periodic table of phase diagrams of the elements (pdf poster)
Diagram
Equilibrium chemistry
Materials science
Metallurgy
Charts
Diagrams
Gases
Chemical engineering thermodynamics | Phase diagram | [ "Physics", "Chemistry", "Materials_science", "Engineering" ] | 2,576 | [ "Matter", "Phase transitions", "Physical phenomena", "Applied and interdisciplinary physics", "Metallurgy", "Chemical engineering", "Phases of matter", "Critical phenomena", "Materials science", "Equilibrium chemistry", "Chemical engineering thermodynamics", "nan", "Statistical mechanics", ... |
57,169 | https://en.wikipedia.org/wiki/Heating%2C%20ventilation%2C%20and%20air%20conditioning | Heating, ventilation, and air conditioning (HVAC) is the use of various technologies to control the temperature, humidity, and purity of the air in an enclosed space. Its goal is to provide thermal comfort and acceptable indoor air quality. HVAC system design is a subdiscipline of mechanical engineering, based on the principles of thermodynamics, fluid mechanics, and heat transfer. "Refrigeration" is sometimes added to the field's abbreviation as HVAC&R or HVACR, or "ventilation" is dropped, as in HACR (as in the designation of HACR-rated circuit breakers).
HVAC is an important part of residential structures such as single family homes, apartment buildings, hotels, and senior living facilities; medium to large industrial and office buildings such as skyscrapers and hospitals; vehicles such as cars, trains, airplanes, ships and submarines; and in marine environments, where safe and healthy building conditions are regulated with respect to temperature and humidity, using fresh air from outdoors.
Ventilating or ventilation (the "V" in HVAC) is the process of exchanging or replacing air in any space to provide high indoor air quality which involves temperature control, oxygen replenishment, and removal of moisture, odors, smoke, heat, dust, airborne bacteria, carbon dioxide, and other gases. Ventilation removes unpleasant smells and excessive moisture, introduces outside air, keeps interior building air circulating, and prevents stagnation of the interior air. Methods for ventilating a building are divided into mechanical/forced and natural types.
Overview
The three major functions of heating, ventilation, and air conditioning are interrelated, especially with the need to provide thermal comfort and acceptable indoor air quality within reasonable installation, operation, and maintenance costs. HVAC systems can be used in both domestic and commercial environments. HVAC systems can provide ventilation, and maintain pressure relationships between spaces. The means of air delivery and removal from spaces is known as room air distribution.
Individual systems
In modern buildings, the design, installation, and control systems of these functions are integrated into one or more HVAC systems. For very small buildings, contractors normally estimate the capacity and type of system needed and then design the system, selecting the appropriate refrigerant and various components needed. For larger buildings, building service designers, mechanical engineers, or building services engineers analyze, design, and specify the HVAC systems. Specialty mechanical contractors and suppliers then fabricate, install and commission the systems. Building permits and code-compliance inspections of the installations are normally required for all sizes of buildings.
District networks
Although HVAC is executed in individual buildings or other enclosed spaces (like NORAD's underground headquarters), the equipment involved is in some cases an extension of a larger district heating (DH) or district cooling (DC) network, or a combined DHC network. In such cases, the operating and maintenance aspects are simplified and metering becomes necessary to bill for the energy that is consumed, and in some cases energy that is returned to the larger system. For example, at a given time one building may be utilizing chilled water for air conditioning and the warm water it returns may be used in another building for heating, or for the overall heating-portion of the DHC network (likely with energy added to boost the temperature).
Basing HVAC on a larger network helps provide an economy of scale that is often not possible for individual buildings, for utilizing renewable energy sources such as solar heat, winter's cold, the cooling potential in some places of lakes or seawater for free cooling, and the enabling function of seasonal thermal energy storage. Drawing on such natural sources for HVAC systems can substantially reduce environmental impact and helps expand practical knowledge of alternative heating and cooling methods.
History
HVAC is based on inventions and discoveries made by Nikolay Lvov, Michael Faraday, Rolla C. Carpenter, Willis Carrier, Edwin Ruud, Reuben Trane, James Joule, William Rankine, Sadi Carnot, Alice Parker and many others.
Multiple inventions within this time frame preceded the beginnings of the first comfort air conditioning system, which was designed in 1902 by Alfred Wolff (Cooper, 2003) for the New York Stock Exchange, while Willis Carrier equipped the Sackett-Wilhelms Printing Company with the process AC unit the same year. Coyne College was the first school to offer HVAC training in 1899. The first residential AC was installed by 1914, and by the 1950s there was "widespread adoption of residential AC".
The invention of the components of HVAC systems went hand-in-hand with the Industrial Revolution, and new methods of modernization, higher efficiency, and system control are constantly being introduced by companies and inventors worldwide.
Heating
Heaters are appliances whose purpose is to generate heat (i.e. warmth) for the building. This can be done via central heating. Such a system contains a boiler, furnace, or heat pump to heat water, steam, or air in a central location such as a furnace room in a home, or a mechanical room in a large building. The heat can be transferred by convection, conduction, or radiation. Space heaters are used to heat single rooms and only consist of a single unit.
Generation
Heaters exist for various types of fuel, including solid fuels, liquids, and gases. Another type of heat source is electricity, normally heating ribbons composed of high resistance wire (see Nichrome). This principle is also used for baseboard heaters and portable heaters. Electrical heaters are often used as backup or supplemental heat for heat pump systems.
The heat pump gained popularity in the 1950s in Japan and the United States. Heat pumps can extract heat from various sources, such as environmental air, exhaust air from a building, or from the ground. Heat pumps transfer heat from outside the structure into the air inside. Initially, heat pump HVAC systems were only used in moderate climates, but with improvements in low temperature operation and reduced loads due to more efficient homes, they are increasing in popularity in cooler climates. They can also operate in reverse to cool an interior.
Distribution
Water/steam
In the case of heated water or steam, piping is used to transport the heat to the rooms. Most modern hot water boiler heating systems have a circulator, which is a pump, to move hot water through the distribution system (as opposed to older gravity-fed systems). The heat can be transferred to the surrounding air using radiators, hot water coils (hydro-air), or other heat exchangers. The radiators may be mounted on walls or installed within the floor to produce floor heat.
The use of water as the heat transfer medium is known as hydronics. The heated water can also supply an auxiliary heat exchanger to supply hot water for bathing and washing.
Air
Warm air systems distribute the heated air through ductwork systems of supply and return air through metal or fiberglass ducts. Many systems use the same ducts to distribute air cooled by an evaporator coil for air conditioning. The air supply is normally filtered through air filters to remove dust and pollen particles.
Dangers
The use of furnaces, space heaters, and boilers as a method of indoor heating could result in incomplete combustion and the emission of carbon monoxide, nitrogen oxides, formaldehyde, volatile organic compounds, and other combustion byproducts. Incomplete combustion occurs when there is insufficient oxygen; the inputs are fuels containing various contaminants and the outputs are harmful byproducts, most dangerously carbon monoxide, which is a tasteless and odorless gas with serious adverse health effects.
Without proper ventilation, carbon monoxide can be lethal at concentrations of 1000 ppm (0.1%). However, at several hundred ppm, carbon monoxide exposure induces headaches, fatigue, nausea, and vomiting. Carbon monoxide binds with hemoglobin in the blood, forming carboxyhemoglobin, reducing the blood's ability to transport oxygen. The primary health concerns associated with carbon monoxide exposure are its cardiovascular and neurobehavioral effects. Carbon monoxide can cause atherosclerosis (the hardening of arteries) and can also trigger heart attacks. Neurologically, carbon monoxide exposure reduces hand-eye coordination, vigilance, and continuous performance. It can also affect time discrimination.
Ventilation
Ventilation is the process of changing or replacing air in any space to control the temperature or remove any combination of moisture, odors, smoke, heat, dust, airborne bacteria, or carbon dioxide, and to replenish oxygen. It plays a critical role in maintaining a healthy indoor environment by preventing the buildup of harmful pollutants and ensuring the circulation of fresh air. Different methods, such as natural ventilation through windows and mechanical ventilation systems, can be used depending on the building design and air quality needs. Ventilation often refers to the intentional delivery of the outside air to the building indoor space. It is one of the most important factors for maintaining acceptable indoor air quality in buildings.
Although ventilation is an integral component of maintaining good indoor air quality, it may not be satisfactory alone. A clear understanding of both indoor and outdoor air quality parameters is needed to improve the performance of ventilation in terms of ... In scenarios where outdoor pollution would deteriorate indoor air quality, other treatment devices such as filtration may also be necessary.
Methods for ventilating a building may be divided into mechanical/forced and natural types.
Mechanical or forced
Mechanical, or forced, ventilation is provided by an air handling unit (AHU) and used to control indoor air quality. Excess humidity, odors, and contaminants can often be controlled via dilution or replacement with outside air. However, in humid climates more energy is required to remove excess moisture from ventilation air.
Kitchens and bathrooms typically have mechanical exhausts to control odors and sometimes humidity. Factors in the design of such systems include the flow rate (which is a function of the fan speed and exhaust vent size) and noise level. Direct drive fans are available for many applications and can reduce maintenance needs.
In summer, ceiling fans and table/floor fans circulate air within a room for the purpose of reducing the perceived temperature by increasing evaporation of perspiration on the skin of the occupants. Because hot air rises, ceiling fans may be used to keep a room warmer in the winter by circulating the warm stratified air from the ceiling to the floor.
Passive
Natural ventilation is the ventilation of a building with outside air without using fans or other mechanical systems. It can be via operable windows, louvers, or trickle vents when spaces are small and the architecture permits. ASHRAE defines natural ventilation as the flow of air through open windows, doors, grilles, and other planned building envelope penetrations, driven by natural and/or artificially produced pressure differentials.
Natural ventilation strategies also include cross ventilation, which relies on wind pressure differences on opposite sides of a building. By strategically placing openings, such as windows or vents, on opposing walls, air is channeled through the space to enhance cooling and ventilation. Cross ventilation is most effective when there are clear, unobstructed paths for airflow within the building.
In more complex schemes, warm air is allowed to rise and flow out high building openings to the outside (stack effect), causing cool outside air to be drawn into low building openings. Natural ventilation schemes can use very little energy, but care must be taken to ensure comfort. In warm or humid climates, maintaining thermal comfort solely via natural ventilation might not be possible. Air conditioning systems are used, either as backups or supplements. Air-side economizers also use outside air to condition spaces, but do so using fans, ducts, dampers, and control systems to introduce and distribute cool outdoor air when appropriate.
An important component of natural ventilation is air change rate or air changes per hour: the hourly rate of ventilation divided by the volume of the space. For example, six air changes per hour means an amount of new air, equal to the volume of the space, is added every ten minutes. For human comfort, a minimum of four air changes per hour is typical, though warehouses might have only two. Too high an air change rate may be uncomfortable, akin to a wind tunnel, which has thousands of changes per hour. The highest air change rates are for crowded spaces such as bars, night clubs, and commercial kitchens, at around 30 to 50 air changes per hour.
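As a concrete illustration, the following minimal Python sketch (the function name and figures are illustrative, not from a cited source) computes the air change rate from a ventilation flow and a room volume:

```python
# Illustrative sketch of air changes per hour (ACH): the hourly rate of
# ventilation divided by the volume of the space.

def air_changes_per_hour(flow_m3_per_h: float, room_volume_m3: float) -> float:
    """ACH = hourly ventilation volume divided by the space volume."""
    return flow_m3_per_h / room_volume_m3

# Example: a 50 m^3 room supplied with 300 m^3/h of outdoor air.
ach = air_changes_per_hour(300.0, 50.0)
print(f"{ach:.1f} air changes per hour")        # 6.0
print(f"one room volume every {60 / ach:.0f} minutes")  # every 10 minutes
```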
Room pressure can be either positive or negative with respect to outside the room. Positive pressure occurs when there is more air being supplied than exhausted, and is common to reduce the infiltration of outside contaminants.
Airborne diseases
Natural ventilation is a key factor in reducing the spread of airborne illnesses such as tuberculosis, the common cold, influenza, meningitis or COVID-19. Opening doors and windows is a good way to maximize natural ventilation, which can make the risk of airborne contagion much lower than with costly and maintenance-requiring mechanical systems. Old-fashioned clinical areas with high ceilings and large windows provide the greatest protection. Because natural ventilation costs little and requires almost no maintenance, it is particularly suited to limited-resource settings and tropical climates, where the burden of TB and institutional TB transmission is highest. In settings where respiratory isolation is difficult and climate permits, windows and doors should be opened to reduce the risk of airborne contagion.
Natural ventilation is not practical in much of the building stock because of climate. Such facilities need effective mechanical ventilation systems and/or ceiling-level UV or far-UV air disinfection systems.
Ventilation is measured in terms of Air Changes Per Hour (ACH). As of 2023, the CDC recommends that all spaces have a minimum of 5 ACH. For hospital rooms with airborne contagions the CDC recommends a minimum of 12 ACH. The challenges in facility ventilation are public unawareness, ineffective government oversight, poor building codes that are based on comfort levels, poor system operations, poor maintenance, and lack of transparency.
UVC, or ultraviolet germicidal irradiation, is a function used in some modern air conditioners to reduce airborne viruses, bacteria, and fungi. A built-in UV LED irradiates the evaporator, and as the cross-flow fan circulates the room air, microorganisms passing through the sterilization module's irradiation range are inactivated.
Air conditioning
An air conditioning system, or a standalone air conditioner, provides cooling and/or humidity control for all or part of a building. Air conditioned buildings often have sealed windows, because open windows would work against the system intended to maintain constant indoor air conditions. Outside fresh air is generally drawn into the system by a vent into a mixing chamber, where it mixes with the space's return air. The mixture then passes through an indoor or outdoor heat exchanger section, where it is cooled before being guided to the space, creating positive air pressure. The percentage of return air made up of fresh air can usually be manipulated by adjusting the opening of this vent. Typical fresh air intake is about 10% of the total supply air.
Air conditioning and refrigeration are provided through the removal of heat. Heat can be removed through radiation, convection, or conduction. The heat-transfer media used in refrigeration systems, such as water, air, ice, and chemicals, are referred to as refrigerants. A refrigerant is employed either in a heat pump system, in which a compressor drives a thermodynamic refrigeration cycle, or in a free cooling system that uses pumps to circulate a cool refrigerant (typically water or a glycol mix).
It is important that the cooling capacity of an air conditioning system matches the area being cooled: an underpowered system will run continuously, wasting power and cooling inefficiently, so adequate capacity is required for any air conditioner installed.
Refrigeration cycle
The refrigeration cycle uses four essential elements to cool: the compressor, the condenser, the metering device, and the evaporator.
At the inlet of a compressor, the refrigerant inside the system is in a low pressure, low temperature, gaseous state. The compressor pumps the refrigerant gas up to high pressure and temperature.
From there it enters a heat exchanger (sometimes called a condensing coil or condenser) where it loses heat to the outside, cools, and condenses into its liquid phase.
An expansion valve (also called metering device) regulates the refrigerant liquid to flow at the proper rate.
The liquid refrigerant is returned to another heat exchanger where it is allowed to evaporate, hence the heat exchanger is often called an evaporating coil or evaporator. As the liquid refrigerant evaporates it absorbs heat from the inside air, returns to the compressor, and repeats the cycle. In the process, heat is absorbed from indoors and transferred outdoors, resulting in cooling of the building.
In variable climates, the system may include a reversing valve that switches from heating in winter to cooling in summer. By reversing the flow of refrigerant, the heat pump refrigeration cycle is changed from cooling to heating or vice versa. This allows a facility to be heated and cooled by a single piece of equipment by the same means, and with the same hardware.
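As a rough illustration of the thermodynamics of this cycle, the sketch below computes the ideal (Carnot) coefficient of performance between two temperatures; real vapor-compression equipment achieves only a fraction of this bound, and the figures are illustrative only:

```python
# Illustrative sketch: ideal (Carnot) coefficient of performance for a
# refrigeration cycle moving heat from a cold space to warmer surroundings.
# Real equipment reaches only a fraction of this thermodynamic bound.

def carnot_cop_cooling(t_cold_c: float, t_hot_c: float) -> float:
    t_cold_k = t_cold_c + 273.15
    t_hot_k = t_hot_c + 273.15
    return t_cold_k / (t_hot_k - t_cold_k)

# Example: keeping a 20 C interior cool against 35 C outdoor air.
print(f"ideal COP = {carnot_cop_cooling(20.0, 35.0):.1f}")  # ~19.5
```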
Free cooling
Free cooling systems can have very high efficiencies, and are sometimes combined with seasonal thermal energy storage so that the cold of winter can be used for summer air conditioning. Common storage mediums are deep aquifers or a natural underground rock mass accessed via a cluster of small-diameter, heat-exchanger-equipped boreholes. Some systems with small storages are hybrids, using free cooling early in the cooling season, and later employing a heat pump to chill the circulation coming from the storage. The heat pump is added-in because the storage acts as a heat sink when the system is in cooling (as opposed to charging) mode, causing the temperature to gradually increase during the cooling season.
Some systems include an "economizer mode", which is sometimes called a "free-cooling mode". When economizing, the control system will open (fully or partially) the outside air damper and close (fully or partially) the return air damper. This will cause fresh, outside air to be supplied to the system. When the outside air is cooler than the demanded cool air, this will allow the demand to be met without using the mechanical supply of cooling (typically chilled water or a direct expansion "DX" unit), thus saving energy. The control system can compare the temperature of the outside air vs. return air, or it can compare the enthalpy of the air, as is frequently done in climates where humidity is more of an issue. In both cases, the outside air must be less energetic than the return air for the system to enter the economizer mode.
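A hypothetical sketch of this economizer decision follows; the names and values are illustrative, and no real controller API is implied:

```python
# Hypothetical sketch of economizer ("free cooling") mode selection.
# The controller compares outside air with return air, either by dry-bulb
# temperature or, in humid climates, by enthalpy.

def economizer_mode(outside_value: float, return_value: float) -> bool:
    """True when outside air is less energetic than return air, so the
    outside-air damper should be opened and the return damper closed."""
    return outside_value < return_value

# Example using dry-bulb temperatures in degrees Celsius.
if economizer_mode(18.0, 24.0):
    print("economizing: supply cool outside air")
else:
    print("use mechanical cooling")
```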
Packaged split system
Central, "all-air" air-conditioning systems (or package systems) with a combined outdoor condenser/evaporator unit are often installed in North American residences, offices, and public buildings, but are difficult to retrofit (install in a building that was not designed to receive it) because of the bulky air ducts required. (Minisplit ductless systems are used in these situations.) Outside of North America, packaged systems are only used in limited applications involving large indoor space such as stadiums, theatres or exhibition halls.
An alternative to packaged systems is the use of separate indoor and outdoor coils in split systems. Split systems are preferred and widely used worldwide except in North America. In North America, split systems are most often seen in residential applications, but they are gaining popularity in small commercial buildings. Split systems are used where ductwork is not feasible or where the space conditioning efficiency is of prime concern. The benefits of ductless air conditioning systems include easy installation, no ductwork, greater zonal control, flexibility of control, and quiet operation. In space conditioning, the duct losses can account for 30% of energy consumption. The use of minisplits can result in energy savings in space conditioning as there are no losses associated with ducting.
With the split system, the evaporator coil is connected to a remote condenser unit using refrigerant piping between an indoor and outdoor unit instead of ducting air directly from the outdoor unit. Indoor units with directional vents mount onto walls, hang from ceilings, or fit into the ceiling. Other indoor units mount inside the ceiling cavity so that short lengths of duct handle air from the indoor unit to vents or diffusers around the rooms.
Split systems are more efficient and the footprint is typically smaller than the package systems. On the other hand, package systems tend to have a slightly lower indoor noise level compared to split systems since the fan motor is located outside.
Dehumidification
Dehumidification (air drying) in an air conditioning system is provided by the evaporator. Since the evaporator operates at a temperature below the dew point, moisture in the air condenses on the evaporator coil tubes. This moisture is collected at the bottom of the evaporator in a pan and removed by piping to a central drain or onto the ground outside.
A dehumidifier is an air-conditioner-like device that controls the humidity of a room or building. It is often employed in basements that have a higher relative humidity because of their lower temperature (and propensity for damp floors and walls). In food retailing establishments, large open chiller cabinets are highly effective at dehumidifying the internal air. Conversely, a humidifier increases the humidity of a building.
The HVAC components that dehumidify the ventilation air deserve careful attention because outdoor air constitutes most of the annual humidity load for nearly all buildings.
Humidification
Maintenance
All modern air conditioning systems, even small window package units, are equipped with internal air filters. These are generally of a lightweight gauze-like material, and must be replaced or washed as conditions warrant. For example, a building in a high dust environment, or a home with furry pets, will need to have the filters changed more often than buildings without these dirt loads. Failure to replace these filters as needed will contribute to a lower heat exchange rate, resulting in wasted energy, shortened equipment life, and higher energy bills; low air flow can result in iced-over evaporator coils, which can completely stop airflow. Additionally, very dirty or plugged filters can cause overheating during a heating cycle, which can result in damage to the system or even fire.
Because an air conditioner moves heat between the indoor coil and the outdoor coil, both must be kept clean. This means that, in addition to replacing the air filter at the evaporator coil, it is also necessary to regularly clean the condenser coil. Failure to keep the condenser clean will eventually result in harm to the compressor because the condenser coil is responsible for discharging both the indoor heat (as picked up by the evaporator) and the heat generated by the electric motor driving the compressor.
Energy efficiency
HVAC is significantly responsible for promoting energy efficiency of buildings as the building sector consumes the largest percentage of global energy. Since the 1980s, manufacturers of HVAC equipment have been making an effort to make the systems they manufacture more efficient. This was originally driven by rising energy costs, and has more recently been driven by increased awareness of environmental issues. Additionally, improvements to the HVAC system efficiency can also help increase occupant health and productivity. In the US, the EPA has imposed tighter restrictions over the years. There are several methods for making HVAC systems more efficient.
Heating energy
In the past, water heating was more efficient for heating buildings and was the standard in the United States. Today, forced-air systems can double as air conditioning and are more popular.
Some benefits of forced-air systems, which are now widely used in churches, schools, and high-end residences, are:
Better air conditioning effects
Energy savings of up to 15–20%
Even conditioning
A drawback is the installation cost, which can be slightly higher than traditional HVAC systems.
Energy efficiency can be improved even more in central heating systems by introducing zoned heating. This allows a more granular application of heat, similar to non-central heating systems. Zones are controlled by multiple thermostats. In water heating systems the thermostats control zone valves, and in forced-air systems they control zone dampers inside the vents which selectively block the flow of air. In this case, the control system is critical to maintaining a proper temperature.
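A minimal sketch of zoned thermostat logic (hypothetical names and setpoints; real zone controllers add scheduling, interlocks, and safety logic):

```python
# Hypothetical sketch of zoned heating control: each zone's thermostat
# drives a zone valve (hydronic) or a zone damper (forced air).

from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    setpoint_c: float
    measured_c: float
    hysteresis_c: float = 0.5  # avoid rapid cycling around the setpoint

    def call_for_heat(self) -> bool:
        return self.measured_c < self.setpoint_c - self.hysteresis_c

zones = [Zone("living room", 21.0, 19.8), Zone("bedroom", 18.0, 18.4)]
for z in zones:
    action = "open valve/damper" if z.call_for_heat() else "close valve/damper"
    print(f"{z.name}: {action}")
```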
Forecasting is another method of controlling building heating by calculating the demand for heating energy that should be supplied to the building in each time unit.
Ground source heat pump
Ground source, or geothermal, heat pumps are similar to ordinary heat pumps, but instead of transferring heat to or from outside air, they rely on the stable, even temperature of the earth to provide heating and air conditioning. Many regions experience seasonal temperature extremes, which would require large-capacity heating and cooling equipment to heat or cool buildings. For example, a conventional heat pump system used to heat a building in Montana's −57 °C (−70 °F) record low temperature, or to cool a building in the highest temperature ever recorded in the US, 57 °C (134 °F) in Death Valley, California, in 1913, would require a large amount of energy due to the extreme difference between inside and outside air temperatures. A metre below the earth's surface, however, the ground remains at a relatively constant temperature. Utilizing this large source of relatively moderate temperature earth, a heating or cooling system's capacity can often be significantly reduced. Although ground temperatures vary according to latitude, at 6 m (20 ft) underground, temperatures generally only range from 10 to 16 °C (50 to 60 °F).
Solar air conditioning
Photovoltaic solar panels offer a new way to potentially decrease the operating cost of air conditioning. Traditional air conditioners run using alternating current, and hence, any direct-current solar power needs to be inverted to be compatible with these units. New variable-speed DC-motor units allow solar power to more easily run them since this conversion is unnecessary, and since the motors are tolerant of voltage fluctuations associated with variance in supplied solar power (e.g., due to cloud cover).
Ventilation energy recovery
Energy recovery systems sometimes utilize heat recovery ventilation or energy recovery ventilation systems that employ heat exchangers or enthalpy wheels to recover sensible or latent heat from exhausted air. This is done by transfer of energy from the stale air inside the home to the incoming fresh air from outside.
Air conditioning energy
The performance of vapor compression refrigeration cycles is limited by thermodynamics. These air conditioning and heat pump devices move heat rather than convert it from one form to another, so thermal efficiencies do not appropriately describe the performance of these devices. The coefficient of performance (COP) measures performance, but this dimensionless measure has not been widely adopted. Instead, the Energy Efficiency Ratio (EER) has traditionally been used to characterize the performance of many HVAC systems. EER is the Energy Efficiency Ratio based on a 35 °C (95 °F) outdoor temperature. To more accurately describe the performance of air conditioning equipment over a typical cooling season a modified version of the EER, the Seasonal Energy Efficiency Ratio (SEER), or in Europe the ESEER, is used. SEER ratings are based on seasonal temperature averages instead of a constant outdoor temperature. The current industry minimum SEER rating is 14. Engineers have pointed out some areas where the efficiency of existing hardware could be improved. For example, the fan blades used to move the air are usually stamped from sheet metal, an economical method of manufacture, but as a result they are not aerodynamically efficient. A well-designed blade could reduce the electrical power required to move the air by a third.
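Since COP is dimensionless and EER is expressed in BTU/h of cooling per watt of electrical input, the two are related by the constant 3.412 BTU/h per watt. A minimal sketch of the conversion (values are illustrative):

```python
# Illustrative sketch: COP is dimensionless, while EER is expressed in
# BTU/h of cooling per watt of electrical input, so EER = COP * 3.412.
# (SEER is analogous but averaged over a cooling season.)

BTU_PER_HOUR_PER_WATT = 3.412

def eer_from_cop(cop: float) -> float:
    return cop * BTU_PER_HOUR_PER_WATT

def cop_from_eer(eer: float) -> float:
    return eer / BTU_PER_HOUR_PER_WATT

print(f"COP 4.1 -> EER {eer_from_cop(4.1):.1f}")   # ~14.0
print(f"EER 14.0 -> COP {cop_from_eer(14.0):.1f}")  # ~4.1
```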
Demand-controlled kitchen ventilation
Demand-controlled kitchen ventilation (DCKV) is a building controls approach to controlling the volume of kitchen exhaust and supply air in response to the actual cooking loads in a commercial kitchen. Traditional commercial kitchen ventilation systems operate at 100% fan speed independent of the volume of cooking activity, and DCKV technology changes that to provide significant fan energy and conditioned air savings. By deploying smart sensing technology, both the exhaust and supply fans can be controlled to capitalize on the affinity laws for motor energy savings, reduce makeup air heating and cooling energy, increase safety, and reduce ambient kitchen noise levels.
Air filtration and cleaning
Air cleaning and filtration removes particles, contaminants, vapors and gases from the air. The filtered and cleaned air is then used in heating, ventilation, and air conditioning. Air cleaning and filtration should be taken into account when protecting building environments, since contaminants that are not removed or filtered properly can be redistributed through the HVAC system.
Clean air delivery rate (CADR) is the amount of clean air an air cleaner provides to a room or space. When determining CADR, the amount of airflow in a space is taken into account: an air cleaner passing a given airflow per minute with a single-pass removal efficiency of 50% has a CADR of half that airflow per minute. Along with CADR, filtration performance is very important when it comes to the air in our indoor environment. This depends on the size of the particle or fiber, the filter packing density and depth, and the airflow rate.
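A minimal sketch of the CADR relationship (hypothetical function name and figures):

```python
# Illustrative sketch of the clean air delivery rate (CADR) relationship:
# CADR = airflow through the cleaner * single-pass removal efficiency.

def cadr(airflow_per_minute: float, efficiency: float) -> float:
    """airflow in any volumetric unit per minute; efficiency in [0, 1]."""
    return airflow_per_minute * efficiency

# Example: 10 m^3/min through a filter removing 50% of particles per pass.
print(f"CADR = {cadr(10.0, 0.5):.1f} m^3/min")  # 5.0
```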
Circulation of harmful substances
Poorly maintained air conditioners/ventilation systems can harbor mold, bacteria, and other contaminants, which are then circulated throughout indoor spaces, contributing to ...
Industry and standards
The HVAC industry is a worldwide enterprise, with roles including operation and maintenance, system design and construction, equipment manufacturing and sales, and in education and research. The HVAC industry was historically regulated by the manufacturers of HVAC equipment, but regulating and standards organizations such as HARDI (Heating, Air-conditioning and Refrigeration Distributors International), ASHRAE, SMACNA, ACCA (Air Conditioning Contractors of America), Uniform Mechanical Code, International Mechanical Code, and AMCA have been established to support the industry and encourage high standards and achievement. (UL as an omnibus agency is not specific to the HVAC industry.)
The starting point in carrying out an estimate both for cooling and heating depends on the exterior climate and interior specified conditions. However, before taking up the heat load calculation, it is necessary to find fresh air requirements for each area in detail, as pressurization is an important consideration.
International
ISO 16813:2006 is one of the ISO building environment standards. It establishes the general principles of building environment design. It takes into account the need to provide a healthy indoor environment for the occupants as well as the need to protect the environment for future generations and promote collaboration among the various parties involved in building environmental design for sustainability. ISO16813 is applicable to new construction and the retrofit of existing buildings.
The building environmental design standard aims to:
provide the constraints concerning sustainability issues from the initial stage of the design process, with building and plant life cycle to be considered together with owning and operating costs from the beginning of the design process;
assess the proposed design with rational criteria for indoor air quality, thermal comfort, acoustical comfort, visual comfort, energy efficiency, and HVAC system controls at every stage of the design process;
iterate decisions and evaluations of the design throughout the design process.
United States
Licensing
In the United States, licensure is mostly handled at the state and local level; the main federal requirement is EPA certification for technicians who install and service HVAC devices containing regulated refrigerants.
Many U.S. states have licensing for boiler operation. Some of these are listed as follows:
Arkansas
Georgia
Michigan
Minnesota
Montana
New Jersey
North Dakota
Ohio
Oklahoma
Oregon
Finally, some U.S. cities may have additional labor laws that apply to HVAC professionals.
Societies
Many HVAC engineers are members of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). ASHRAE organizes two annual technical conferences and publishes recognized standards for HVAC design, which are updated every four years.
Another popular society is AHRI, which provides regular information on new refrigeration technology, and publishes relevant standards and codes.
Codes
Codes such as the UMC and IMC include much detail on installation requirements. Other useful reference materials include items from SMACNA, ACGIH, and technical trade journals.
American design standards are legislated in the Uniform Mechanical Code or International Mechanical Code. In certain states, counties, or cities, either of these codes may be adopted and amended via various legislative processes. These codes are updated and published by the International Association of Plumbing and Mechanical Officials (IAPMO) or the International Code Council (ICC) respectively, on a 3-year code development cycle. Typically, local building permit departments are charged with enforcement of these standards on private and certain public properties.
Technicians
An HVAC technician is a tradesman who specializes in heating, ventilation, air conditioning, and refrigeration. HVAC technicians in the US can receive training through formal training institutions, where most earn associate degrees. Training for HVAC technicians includes classroom lectures and hands-on tasks, and can be followed by an apprenticeship wherein the recent graduate works alongside a professional HVAC technician for a temporary period. HVAC techs who have been trained can also be certified in areas such as air conditioning, heat pumps, gas heating, and commercial refrigeration.
United Kingdom
The Chartered Institution of Building Services Engineers is a body that covers the essential building services that allow buildings to operate. It includes the electrotechnical, heating, ventilating, air conditioning, refrigeration and plumbing industries. To train as a building services engineer, the academic requirements are GCSEs (A-C) / Standard Grades (1-3) in Maths and Science, which are important in measurements, planning and theory. Employers will often want a degree in a branch of engineering, such as building environment engineering, electrical engineering or mechanical engineering. To become a full member of CIBSE, and so also to be registered by the Engineering Council UK as a chartered engineer, engineers must also attain an Honours Degree and a master's degree in a relevant engineering subject. CIBSE publishes several guides to HVAC design relevant to the UK market, and also the Republic of Ireland, Australia, New Zealand and Hong Kong. These guides include various recommended design criteria and standards, some of which are cited within the UK building regulations, and therefore form a legislative requirement for major building services works. The main guides are:
Guide A: Environmental Design
Guide B: Heating, Ventilating, Air Conditioning and Refrigeration
Guide C: Reference Data
Guide D: Transportation systems in Buildings
Guide E: Fire Safety Engineering
Guide F: Energy Efficiency in Buildings
Guide G: Public Health Engineering
Guide H: Building Control Systems
Guide J: Weather, Solar and Illuminance Data
Guide K: Electricity in Buildings
Guide L: Sustainability
Guide M: Maintenance Engineering and Management
Within the construction sector, it is the job of the building services engineer to design and oversee the installation and maintenance of the essential services such as gas, electricity, water, heating and lighting, as well as many others. These all help to make buildings comfortable and healthy places to live and work in. Building services is part of a sector that has over 51,000 businesses and represents 2–3% of GDP.
Australia
The Air Conditioning and Mechanical Contractors Association of Australia (AMCA), the Australian Institute of Refrigeration, Air Conditioning and Heating (AIRAH), the Australian Refrigeration Mechanical Association and CIBSE are the responsible industry bodies.
Asia
Asian architectural temperature-control have different priorities than European methods. For example, Asian heating traditionally focuses on maintaining temperatures of objects such as the floor or furnishings such as Kotatsu tables and directly warming people, as opposed to the Western focus, in modern periods, on designing air systems.
Philippines
The Philippine Society of Ventilating, Air Conditioning and Refrigerating Engineers (PSVARE) along with Philippine Society of Mechanical Engineers (PSME) govern on the codes and standards for HVAC / MVAC (MVAC means "mechanical ventilation and air conditioning") in the Philippines.
India
The Indian Society of Heating, Refrigerating and Air Conditioning Engineers (ISHRAE) was established to promote the HVAC industry in India. ISHRAE is an associate of ASHRAE. ISHRAE was founded at New Delhi in 1981 and a chapter was started in Bangalore in 1989. Between 1989 and 1993, ISHRAE chapters were formed in all major cities in India.
See also
Air speed (HVAC)
Architectural engineering
ASHRAE Handbook
Auxiliary power unit
Cleanroom
Electric heating
Fan coil unit
Glossary of HVAC terms
Head-end power
Hotel electric power
Mechanical engineering
Outdoor wood-fired boiler
Radiant cooling
Sick building syndrome
Uniform Codes
Uniform Mechanical Code
Ventilation (architecture)
World Refrigeration Day
Wrightsoft
References
Further reading
International Mechanical Code (2012 (Second Printing)) by the International Code Council, Thomson Delmar Learning.
Modern Refrigeration and Air Conditioning (August 2003) by Althouse, Turnquist, and Bracciano, Goodheart-Wilcox Publisher; 18th edition.
The Cost of Cool.
What is LEV?
External links
Building biology
Building engineering
Mechanical engineering
Construction | Heating, ventilation, and air conditioning | [
"Physics",
"Engineering"
] | 7,667 | [
"Applied and interdisciplinary physics",
"Building engineering",
"Construction",
"Civil engineering",
"Mechanical engineering",
"Building biology",
"Architecture"
] |
57,526 | https://en.wikipedia.org/wiki/P%C3%A9clet%20number | In continuum mechanics, the Péclet number (Pe, after Jean Claude Eugène Péclet) is a class of dimensionless numbers relevant in the study of transport phenomena in a continuum. It is defined to be the ratio of the rate of advection of a physical quantity by the flow to the rate of diffusion of the same quantity driven by an appropriate gradient. In the context of species or mass transfer, the Péclet number is the product of the Reynolds number and the Schmidt number (Re Sc). In the context of the thermal fluids, the thermal Péclet number is equivalent to the product of the Reynolds number and the Prandtl number (Re Pr).
The Péclet number is defined as:

\mathrm{Pe} = \frac{\text{advective transport rate}}{\text{diffusive transport rate}}

For mass transfer, it is defined as:

\mathrm{Pe}_L = \frac{Lu}{D} = \mathrm{Re}_L \, \mathrm{Sc}

This ratio can also be rewritten in terms of times, as a ratio between the characteristic temporal intervals of the system:

\mathrm{Pe} = \frac{t_\text{diffusion}}{t_\text{advection}} = \frac{L^2/D}{L/u} = \frac{Lu}{D}

For \mathrm{Pe} \gg 1 the diffusion happens in a much longer time compared to the advection, and therefore advection predominates in the mass transport.

For heat transfer, the Péclet number is defined as:

\mathrm{Pe}_L = \frac{Lu}{\alpha} = \mathrm{Re}_L \, \mathrm{Pr}

where L is the characteristic length, u the local flow velocity, D the mass diffusion coefficient, Re the Reynolds number, Sc the Schmidt number, Pr the Prandtl number, and α the thermal diffusivity,

\alpha = \frac{k}{\rho c_p}

where k is the thermal conductivity, ρ the density, and c_p the specific heat capacity.
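A minimal Python sketch (property values are rough, for illustration only) evaluating both Péclet numbers for a water-like flow:

```python
# Illustrative sketch: Peclet numbers for mass and heat transport,
# Pe_mass = L*u/D and Pe_heat = L*u/alpha, with alpha = k/(rho*cp).

def peclet(length_m: float, velocity_m_s: float, diffusivity_m2_s: float) -> float:
    return length_m * velocity_m_s / diffusivity_m2_s

# Water-like properties (approximate, for illustration only).
k, rho, cp = 0.6, 1000.0, 4184.0          # W/(m K), kg/m^3, J/(kg K)
alpha = k / (rho * cp)                     # thermal diffusivity, ~1.4e-7 m^2/s
D = 1e-9                                   # small-molecule diffusivity, m^2/s

L, u = 0.01, 0.1                           # 1 cm channel, 0.1 m/s flow
print(f"Pe (mass) = {peclet(L, u, D):.2e}")      # ~1e6: advection dominates
print(f"Pe (heat) = {peclet(L, u, alpha):.2e}")  # ~7e3
```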
In engineering applications the Péclet number is often very large. In such situations, the dependency of the flow upon downstream locations is diminished, and variables in the flow tend to become 'one-way' properties. Thus, when modelling certain situations with high Péclet numbers, simpler computational models can be adopted.
A flow will often have different Péclet numbers for heat and mass. This can lead to the phenomenon of double diffusive convection.
In the context of particulate motion the Péclet number has also been called Brenner number, with symbol , in honour of Howard Brenner.
The Péclet number also finds applications beyond transport phenomena, as a general measure for the relative importance of the random fluctuations and of the systematic average behavior in mesoscopic systems.
See also
Nusselt number
References
Convection
Dimensionless numbers of fluid mechanics
Dimensionless numbers of thermodynamics
Fluid dynamics
Heat conduction | Péclet number | [
"Physics",
"Chemistry",
"Engineering"
] | 464 | [
"Transport phenomena",
"Thermodynamic properties",
"Physical phenomena",
"Physical quantities",
"Dimensionless numbers of thermodynamics",
"Chemical engineering",
"Convection",
"Thermodynamics",
"Piping",
"Heat conduction",
"Fluid dynamics"
] |
57,555 | https://en.wikipedia.org/wiki/Acid%20dissociation%20constant | In chemistry, an acid dissociation constant (also known as acidity constant, or acid-ionization constant; denoted Ka) is a quantitative measure of the strength of an acid in solution. It is the equilibrium constant for a chemical reaction
HA <=> A^- + H^+
known as dissociation in the context of acid–base reactions. The chemical species HA is an acid that dissociates into A−, called the conjugate base of the acid, and a hydrogen ion, H+. The system is said to be in equilibrium when the concentrations of its components do not change over time, because both forward and backward reactions are occurring at the same rate.
The dissociation constant is defined by

K_a = \frac{[\mathrm{A^-}][\mathrm{H^+}]}{[\mathrm{HA}]}

or by its logarithmic form

\mathrm{p}K_a = -\log_{10} K_a

where quantities in square brackets represent the molar concentrations of the species at equilibrium. For example, for a hypothetical weak acid having Ka = 10−5, log Ka is the exponent (−5), giving pKa = 5. For acetic acid, Ka = 1.8 × 10−5, so pKa is 4.7. A higher Ka corresponds to a stronger acid (an acid that is more dissociated at equilibrium). The form pKa is often used because it provides a convenient logarithmic scale, where a lower pKa corresponds to a stronger acid.
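A minimal sketch of the Ka–pKa conversion (illustrative):

```python
# Illustrative sketch of the logarithmic relationship between Ka and pKa.
import math

def pka_from_ka(ka: float) -> float:
    return -math.log10(ka)

def ka_from_pka(pka: float) -> float:
    return 10.0 ** (-pka)

print(f"acetic acid: Ka = 1.8e-5 -> pKa = {pka_from_ka(1.8e-5):.2f}")  # 4.74
print(f"pKa = 5 -> Ka = {ka_from_pka(5):.1e}")                          # 1.0e-05
```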
Theoretical background
The acid dissociation constant for an acid is a direct consequence of the underlying thermodynamics of the dissociation reaction; the pKa value is directly proportional to the standard Gibbs free energy change for the reaction. The value of the pKa changes with temperature and can be understood qualitatively based on Le Châtelier's principle: when the reaction is endothermic, Ka increases and pKa decreases with increasing temperature; the opposite is true for exothermic reactions.
The value of pKa also depends on molecular structure of the acid in many ways. For example, Pauling proposed two rules: one for successive pKa of polyprotic acids (see Polyprotic acids below), and one to estimate the pKa of oxyacids based on the number of =O and −OH groups (see Factors that affect pKa values below). Other structural factors that influence the magnitude of the acid dissociation constant include inductive effects, mesomeric effects, and hydrogen bonding. Hammett type equations have frequently been applied to the estimation of pKa.
The quantitative behaviour of acids and bases in solution can be understood only if their pKa values are known. In particular, the pH of a solution can be predicted when the analytical concentration and pKa values of all acids and bases are known; conversely, it is possible to calculate the equilibrium concentration of the acids and bases in solution when the pH is known. These calculations find application in many different areas of chemistry, biology, medicine, and geology. For example, many compounds used for medication are weak acids or bases, and a knowledge of the pKa values, together with the octanol-water partition coefficient, can be used for estimating the extent to which the compound enters the blood stream. Acid dissociation constants are also essential in aquatic chemistry and chemical oceanography, where the acidity of water plays a fundamental role. In living organisms, acid–base homeostasis and enzyme kinetics are dependent on the pKa values of the many acids and bases present in the cell and in the body. In chemistry, a knowledge of pKa values is necessary for the preparation of buffer solutions and is also a prerequisite for a quantitative understanding of the interaction between acids or bases and metal ions to form complexes. Experimentally, pKa values can be determined by potentiometric (pH) titration, but for values of pKa less than about 2 or more than about 11, spectrophotometric or NMR measurements may be required due to practical difficulties with pH measurements.
Definitions
According to Arrhenius's original molecular definition, an acid is a substance that dissociates in aqueous solution, releasing the hydrogen ion (a proton):
HA <=> A- + H+
The equilibrium constant for this dissociation reaction is known as a dissociation constant. The liberated proton combines with a water molecule to give a hydronium (or oxonium) ion (naked protons do not exist in solution), and so Arrhenius later proposed that the dissociation should be written as an acid–base reaction:
HA + H2O <=> A- + H3O+
Brønsted and Lowry generalised this further to a proton exchange reaction:
The acid loses a proton, leaving a conjugate base; the proton is transferred to the base, creating a conjugate acid. For aqueous solutions of an acid HA, the base is water; the conjugate base is A− and the conjugate acid is the hydronium ion. The Brønsted–Lowry definition applies to other solvents, such as dimethyl sulfoxide: the solvent S acts as a base, accepting a proton and forming the conjugate acid SH+.
HA + S <=> A- + SH+
In solution chemistry, it is common to use H+ as an abbreviation for the solvated hydrogen ion, regardless of the solvent. In aqueous solution H+ denotes a solvated hydronium ion rather than a proton.
The designation of an acid or base as "conjugate" depends on the context. The conjugate acid of a base B dissociates according to
BH+ + OH- <=> B + H2O
which is the reverse of the equilibrium

H2O + B <=> OH- + BH+

The hydroxide ion OH−, a well known base, is here acting as the conjugate base of the acid water. Acids and bases are thus regarded simply as donors and acceptors of protons respectively.
A broader definition of acid dissociation includes hydrolysis, in which protons are produced by the splitting of water molecules. For example, boric acid (B(OH)3) produces H3O+ as if it were a proton donor, but it has been confirmed by Raman spectroscopy that this is due to the hydrolysis equilibrium:
B(OH)3 + 2 H2O <=> B(OH)4- + H3O+
Similarly, metal ion hydrolysis causes ions such as [Al(H2O)6]3+ to behave as weak acids:
[Al(H2O)6]^3+ + H2O <=> [Al(H2O)5(OH)]^2+ + H3O+
According to Lewis's original definition, an acid is a substance that accepts an electron pair to form a coordinate covalent bond.
Equilibrium constant
An acid dissociation constant is a particular example of an equilibrium constant. The dissociation of a monoprotic acid, HA, in dilute solution can be written as
HA <=> A- + H+
The thermodynamic equilibrium constant, K°, can be defined by

K^\ominus = \frac{a(\mathrm{A^-})\,a(\mathrm{H^+})}{a(\mathrm{HA})}

where a(X) represents the activity, at equilibrium, of the chemical species X. K° is dimensionless since activity is dimensionless. Activities of the products of dissociation are placed in the numerator, activities of the reactants are placed in the denominator. See activity coefficient for a derivation of this expression.

Since activity is the product of concentration and activity coefficient (γ) the definition could also be written as

K^\ominus = \frac{[\mathrm{A^-}][\mathrm{H^+}]}{[\mathrm{HA}]} \times \frac{\gamma(\mathrm{A^-})\,\gamma(\mathrm{H^+})}{\gamma(\mathrm{HA})} = \frac{[\mathrm{A^-}][\mathrm{H^+}]}{[\mathrm{HA}]}\,\Gamma

where [HA] represents the concentration of HA and Γ is a quotient of activity coefficients.

To avoid the complications involved in using activities, dissociation constants are determined, where possible, in a medium of high ionic strength, that is, under conditions in which Γ can be assumed to be always constant. For example, the medium might be a solution of 0.1 molar (M) sodium nitrate or 3 M potassium perchlorate. With this assumption,

K_a = \frac{[\mathrm{A^-}][\mathrm{H^+}]}{[\mathrm{HA}]}

is obtained. Note, however, that all published dissociation constant values refer to the specific ionic medium used in their determination and that different values are obtained with different conditions, as shown for acetic acid in the illustration above. When published constants refer to an ionic strength other than the one required for a particular application, they may be adjusted by means of specific ion theory (SIT) and other theories.
Cumulative and stepwise constants
A cumulative equilibrium constant, denoted by β, is related to the product of stepwise constants, denoted by K. For a dibasic acid the relationship between stepwise and overall constants is as follows:

H2A <=> A^2- + 2H+

\beta_2 = \frac{[\mathrm{A^{2-}}][\mathrm{H^+}]^2}{[\mathrm{H_2A}]} = K_1 K_2

Note that in the context of metal-ligand complex formation, the equilibrium constants for the formation of metal complexes are usually defined as association constants. In that case, the equilibrium constants for ligand protonation are also defined as association constants. The numbering of association constants is the reverse of the numbering of dissociation constants; in this example log K1(association) = pK2(dissociation) and log K2(association) = pK1(dissociation).
Association and dissociation constants
When discussing the properties of acids it is usual to specify equilibrium constants as acid dissociation constants, denoted by Ka, with numerical values given the symbol pKa.
On the other hand, association constants are used for bases.
However, general purpose computer programs that are used to derive equilibrium constant values from experimental data use association constants for both acids and bases. Because stability constants for a metal-ligand complex are always specified as association constants, ligand protonation must also be specified as an association reaction. The definitions show that the value of an acid dissociation constant is the reciprocal of the value of the corresponding association constant:

K_\text{dissociation} = \frac{1}{K_\text{association}}, \qquad \log K_\text{dissociation} = -\log K_\text{association}
Notes
For a given acid or base in water, pKa + pKb = pKw, the self-ionization constant of water.
The association constant for the formation of a supramolecular complex may be denoted as Ka; in such cases "a" stands for "association", not "acid".
For polyprotic acids, the numbering of stepwise association constants is the reverse of the numbering of the dissociation constants. For example, for phosphoric acid (details in the polyprotic acids section below): log K1 = pKa3, log K2 = pKa2, log K3 = pKa1.
Temperature dependence
All equilibrium constants vary with temperature according to the van 't Hoff equation

\frac{d \ln K}{dT} = \frac{\Delta H^\ominus}{RT^2}

where R is the gas constant and T is the absolute temperature. Thus, for exothermic reactions, the standard enthalpy change, ΔH°, is negative and K decreases with temperature. For endothermic reactions, ΔH° is positive and K increases with temperature.
The standard enthalpy change for a reaction is itself a function of temperature, according to Kirchhoff's law of thermochemistry:

\left(\frac{\partial \Delta H^\ominus}{\partial T}\right)_p = \Delta C_p^\ominus

where ΔCp° is the heat capacity change at constant pressure. In practice ΔH° may be taken to be constant over a small temperature range.
Dimensionality
In the equation

K_a = \frac{[\mathrm{A^-}][\mathrm{H^+}]}{[\mathrm{HA}]}

Ka appears to have dimensions of concentration. However, since activities are dimensionless, the thermodynamic equilibrium constant, K°, cannot have a physical dimension. This apparent paradox can be resolved in various ways.
Assume that the quotient of activity coefficients has a numerical value of 1, so that Ka has the same numerical value as the thermodynamic equilibrium constant K°.
Express each concentration value as the ratio c/c0, where c0 is the concentration in a [hypothetical] standard state, with a numerical value of 1, by definition.
Express the concentrations on the mole fraction scale. Since mole fraction has no dimension, the quotient of concentrations will, by definition, be a pure number.
The procedures, (1) and (2), give identical numerical values for an equilibrium constant. Furthermore, since a concentration c is simply proportional to mole fraction x and density ρ:

c = \frac{x \rho}{M}

and since the molar mass M is a constant in dilute solutions, an equilibrium constant value determined using (3) will be simply proportional to the values obtained with (1) and (2).
It is common practice in biochemistry to quote a value with a dimension as, for example, "Ka = 30 mM" in order to indicate the scale, millimolar (mM) or micromolar (μM) of the concentration values used for its calculation.
Strong acids and bases
An acid is classified as "strong" when the concentration of its undissociated species is too low to be measured. Any aqueous acid with a pKa value of less than 0 is almost completely deprotonated and is considered a strong acid. All such acids transfer their protons to water and form the solvent cation species (H3O+ in aqueous solution) so that they all have essentially the same acidity, a phenomenon known as solvent leveling. They are said to be fully dissociated in aqueous solution because the amount of undissociated acid, in equilibrium with the dissociation products, is below the detection limit. Likewise, any aqueous base with an association constant pKb less than about 0, corresponding to pKa greater than about 14, is leveled to OH− and is considered a strong base.
Nitric acid, with a pK value of around −1.7, behaves as a strong acid in aqueous solutions with a pH greater than 1. At lower pH values it behaves as a weak acid.
pKa values for strong acids have been estimated by theoretical means. For example, the pKa value of aqueous HCl has been estimated as −9.3.
Monoprotic acids
After rearranging the expression defining Ka, and putting pH = −log10[H+], one obtains

\mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{A^-}]}{[\mathrm{HA}]}

This is the Henderson–Hasselbalch equation, from which the following conclusions can be drawn.

At half-neutralization the ratio [A−]/[HA] = 1; since log(1) = 0, the pH at half-neutralization is numerically equal to pKa. Conversely, when pH = pKa, the concentration of HA is equal to the concentration of A−.

The buffer region extends over the approximate range pKa ± 2. Buffering is weak outside the range pKa ± 1. At pH ≤ pKa − 2 the substance is said to be fully protonated and at pH ≥ pKa + 2 it is fully dissociated (deprotonated).

If the pH is known, the ratio [A−]/[HA] may be calculated. This ratio is independent of the analytical concentration of the acid.
In water, measurable pKa values range from about −2 for a strong acid to about 12 for a very weak acid (or strong base).
A buffer solution of a desired pH can be prepared as a mixture of a weak acid and its conjugate base. In practice, the mixture can be created by dissolving the acid in water, and adding the requisite amount of strong acid or base. When the pKa and analytical concentration of the acid are known, the extent of dissociation and pH of a solution of a monoprotic acid can be easily calculated using an ICE table.
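A minimal sketch of that ICE-table calculation for a monoprotic weak acid (assumptions: water self-ionization neglected, activity coefficients taken as 1):

```python
# Illustrative sketch of the ICE-table calculation for a monoprotic weak
# acid of analytical concentration C: Ka = x^2 / (C - x), where x = [H+].
import math

def weak_acid_ph(ka: float, c: float) -> float:
    # Solve x^2 + Ka*x - Ka*C = 0 for the positive root.
    x = (-ka + math.sqrt(ka * ka + 4.0 * ka * c)) / 2.0
    return -math.log10(x)

# 0.1 M acetic acid (Ka = 1.8e-5); water self-ionization neglected.
print(f"pH = {weak_acid_ph(1.8e-5, 0.1):.2f}")  # ~2.88
```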
Polyprotic acids
A polyprotic acid is a compound which may lose more than 1 proton. Stepwise dissociation constants are each defined for the loss of a single proton. The constant for dissociation of the first proton may be denoted as Ka1 and the constants for dissociation of successive protons as Ka2, etc. Phosphoric acid, , is an example of a polyprotic acid as it can lose three protons.
{| class="wikitable"
! Equilibrium
! pK definition and value
|-
| H3PO4 <=> H2PO4- + H+
| pKa1 = −log([H2PO4-][H+] / [H3PO4]) = 2.14
|-
| H2PO4- <=> HPO4^2- + H+
| pKa2 = −log([HPO4^2-][H+] / [H2PO4-]) = 7.20
|-
| HPO4^2- <=> PO4^3- + H+
| pKa3 = −log([PO4^3-][H+] / [HPO4^2-]) = 12.37
|}
When the difference between successive pK values is about four or more, as in this example, each species may be considered as an acid in its own right; in fact salts of H2PO4- may be crystallised from solution by adjustment of pH to about 5.5 and salts of HPO4^2- may be crystallised from solution by adjustment of pH to about 10. The species distribution diagram shows that the concentrations of the two ions are maximum at pH 5.5 and 10.
When the difference between successive pK values is less than about four there is overlap between the pH range of existence of the species in equilibrium. The smaller the difference, the more the overlap. The case of citric acid is shown at the right; solutions of citric acid are buffered over the whole range of pH 2.5 to 7.5.
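As an illustration of such speciation calculations, the sketch below (using approximate phosphoric acid constants; figures are illustrative) evaluates the fraction of each species as a function of pH:

```python
# Illustrative sketch: species fractions of a triprotic acid H3A versus pH,
# using the standard closed-form expressions in [H+] and the stepwise Ka
# values (here phosphoric acid's approximate constants).

def fractions(ph: float, pk1: float, pk2: float, pk3: float):
    h = 10.0 ** (-ph)
    k1, k2, k3 = (10.0 ** (-pk) for pk in (pk1, pk2, pk3))
    denom = h**3 + k1 * h**2 + k1 * k2 * h + k1 * k2 * k3
    return (h**3 / denom,          # H3A
            k1 * h**2 / denom,     # H2A-
            k1 * k2 * h / denom,   # HA2-
            k1 * k2 * k3 / denom)  # A3-

for ph in (2, 5.5, 7.2, 10, 13):
    f = fractions(ph, 2.14, 7.20, 12.37)
    print(f"pH {ph:>4}: " + "  ".join(f"{x:.2f}" for x in f))
```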
According to Pauling's first rule, successive pK values of a given acid increase. For oxyacids with more than one ionizable hydrogen on the same atom, the pKa values often increase by about 5 units for each proton removed, as in the example of phosphoric acid above.
It can be seen in the table above that the second proton is removed from a negatively charged species. Since the proton carries a positive charge extra work is needed to remove it, which is why pKa2 is greater than pKa1. pKa3 is greater than pKa2 because there is further charge separation. When an exception to Pauling's rule is found, it indicates that a major change in structure is also occurring. In the case of VO2+(aq), the vanadium is octahedral, 6-coordinate, whereas vanadic acid is tetrahedral, 4-coordinate. This means that four "particles" are released with the first dissociation, but only two "particles" are released with the other dissociations, resulting in a much greater entropy contribution to the standard Gibbs free energy change for the first reaction than for the others.
{| class="wikitable"
! Equilibrium
! pKa
|-
| [VO2(H2O)4]+ <=> H3VO4 + H+ + 2H2O
|
|-
| H3VO4 <=> H2VO4- + H+
|
|-
| H2VO4- <=> HVO4^2- + H+
|
|-
| HVO4^2- <=> VO4^3- + H+
|
|}
Isoelectric point
For substances in solution, the isoelectric point (pI) is defined as the pH at which the sum, weighted by charge value, of concentrations of positively charged species is equal to the weighted sum of concentrations of negatively charged species. In the case that there is one species of each type, the isoelectric point can be obtained directly from the pK values. Take the example of glycine, defined as AH. There are two dissociation equilibria to consider.
AH2+ <=> AH + H+, with [AH][H+] = K1 [AH2+]

AH <=> A- + H+, with [A-][H+] = K2 [AH]

Substituting the expression for [AH] from the second equation into the first equation gives

[A-][H+]^2 = K1 K2 [AH2+]

At the isoelectric point the concentration of the positively charged species, AH2+, is equal to the concentration of the negatively charged species, A−, so [H+]^2 = K1 K2.

Therefore, taking cologarithms, the pH is given by

\mathrm{pH} = \tfrac{1}{2}\left(\mathrm{p}K_1 + \mathrm{p}K_2\right)
pI values for amino acids are listed at proteinogenic amino acid. When more than two charged species are in equilibrium with each other a full speciation calculation may be needed.
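A minimal sketch of the pI calculation, using commonly quoted glycine values (treat the numbers as illustrative):

```python
# Illustrative sketch: isoelectric point from the two pK values flanking
# the neutral/zwitterionic species. Glycine's commonly quoted values are
# about 2.34 and 9.60.

def isoelectric_point(pk1: float, pk2: float) -> float:
    return 0.5 * (pk1 + pk2)

print(f"glycine pI = {isoelectric_point(2.34, 9.60):.2f}")  # ~5.97
```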
Bases and basicity
The equilibrium constant Kb for a base is usually defined as the association constant for protonation of the base, B, to form the conjugate acid, HB+.
B + H2O <=> HB+ + OH-
Using similar reasoning to that used before

K_b = \frac{[\mathrm{HB^+}][\mathrm{OH^-}]}{[\mathrm{B}]}

Kb is related to Ka for the conjugate acid. In water, the concentration of the hydroxide ion, [OH−], is related to the concentration of the hydrogen ion by [OH−] = Kw/[H+], therefore

K_b = \frac{[\mathrm{HB^+}]\,K_w}{[\mathrm{B}][\mathrm{H^+}]}

Substitution of the expression for Ka = [B][H+]/[HB+] into the expression for Kb gives

K_b = \frac{K_w}{K_a}

When Ka, Kb and Kw are determined under the same conditions of temperature and ionic strength, it follows, taking cologarithms, that pKb = pKw − pKa. In aqueous solutions at 25 °C, pKw is 13.9965, so

\mathrm{p}K_b \approx 14 - \mathrm{p}K_a

with sufficient accuracy for most practical purposes. In effect there is no need to define pKb separately from pKa, but it is done here as often only pKb values can be found in the older literature.
For a hydrolyzed metal ion, Kb can also be defined as a stepwise dissociation constant
This is the reciprocal of an association constant for formation of the complex.
Basicity expressed as dissociation constant of conjugate acid
Because the relationship pKb = pKw − pKa holds only in aqueous solutions (though analogous relationships apply for other amphoteric solvents), subdisciplines of chemistry like organic chemistry that usually deal with nonaqueous solutions generally do not use pKb as a measure of basicity. Instead, the pKa of the conjugate acid, denoted by pKaH, is quoted when basicity needs to be quantified. For base B and its conjugate acid BH+ in equilibrium, this is defined as

\mathrm{p}K_{a\mathrm{H}} = -\log_{10}\frac{[\mathrm{B}][\mathrm{H^+}]}{[\mathrm{BH^+}]}
A higher value for pKaH corresponds to a stronger base. For example, the values pKaH = 10.75 and pKaH = 5.25 indicate that (C2H5)3N (triethylamine) is a stronger base than C5H5N (pyridine).
Amphoteric substances
An amphoteric substance is one that can act as an acid or as a base, depending on pH. Water (below) is amphoteric. Another example of an amphoteric molecule is the bicarbonate ion HCO3−, which is the conjugate base of the carbonic acid molecule H2CO3 in the equilibrium
H2CO3 + H2O <=> HCO3- + H3O+
but also the conjugate acid of the carbonate ion CO3^2− in (the reverse of) the equilibrium
HCO3- + OH- <=> CO3^2- + H2O
Carbonic acid equilibria are important for acid–base homeostasis in the human body.
An amino acid is also amphoteric with the added complication that the neutral molecule is subject to an internal acid–base equilibrium in which the basic amino group attracts and binds the proton from the acidic carboxyl group, forming a zwitterion.
NH2CHRCO2H <=> NH3+CHRCO2-
At pH less than about 5 both the carboxylate group and the amino group are protonated. As pH increases the acid dissociates according to
NH3+CHRCO2H <=> NH3+CHRCO2- + H+
At high pH a second dissociation may take place.
NH3+CHRCO2- <=> NH2CHRCO2- + H+
Thus the amino acid molecule is amphoteric because it may either be protonated or deprotonated.
Water self-ionization
The water molecule may either gain or lose a proton. It is said to be amphiprotic. The ionization equilibrium can be written
H2O <=> OH- + H+
where H+ in aqueous solution denotes a solvated proton. Often this is written as the hydronium ion H3O+, but this formula is not exact because in fact there is solvation by more than one water molecule and species such as H5O2+, H7O3+, and H9O4+ are also present.
The equilibrium constant is given by
Ka = [H+][OH-] / [H2O]
With solutions in which the solute concentrations are not very high, the concentration [H2O] can be assumed to be constant, regardless of solute(s); this expression may then be replaced by
Kw = [H+][OH-]
The self-ionization constant of water, Kw, is thus just a special case of an acid dissociation constant. A logarithmic form analogous to pKa may also be defined:
pKw = −log10 Kw
Measured values of pKw over a range of temperatures can be modelled by a parabola; from the fitted equation, pKw = 14 at 24.87 °C. At that temperature both hydrogen and hydroxide ions have a concentration of 10−7 M.
Acidity in nonaqueous solutions
A solvent will be more likely to promote ionization of a dissolved acidic molecule in the following circumstances:
It is a protic solvent, capable of forming hydrogen bonds.
It has a high donor number, making it a strong Lewis base.
It has a high dielectric constant (relative permittivity), making it a good solvent for ionic species.
pKa values of organic compounds are often obtained using the aprotic solvents dimethyl sulfoxide (DMSO) and acetonitrile (ACN).
DMSO is widely used as an alternative to water because it has a lower dielectric constant than water, and is less polar and so dissolves non-polar, hydrophobic substances more easily. It has a measurable pKa range of about 1 to 30. Acetonitrile is less basic than DMSO, and, so, in general, acids are weaker and bases are stronger in this solvent. Many pKa values at 25 °C have been determined in acetonitrile (ACN) and dimethyl sulfoxide (DMSO), with values for water measured for comparison.
Ionization of acids is less in an acidic solvent than in water. For example, hydrogen chloride is a weak acid when dissolved in acetic acid. This is because acetic acid is a much weaker base than water.
HCl + CH3CO2H <=> Cl- + CH3C(OH)2+
Compare this reaction with what happens when acetic acid is dissolved in the more acidic solvent pure sulfuric acid:
H2SO4 + CH3CO2H <=> HSO4- + CH3C(OH)2+
The unlikely geminal diol species CH3C(OH)2+ is stable in these environments. For aqueous solutions the pH scale is the most convenient acidity function. Other acidity functions have been proposed for non-aqueous media, the most notable being the Hammett acidity function, H0, for superacid media and its modified version H− for superbasic media.
In aprotic solvents, oligomers, such as the well-known acetic acid dimer, may be formed by hydrogen bonding. An acid may also form hydrogen bonds to its conjugate base. This process, known as homoconjugation, has the effect of enhancing the acidity of acids, lowering their effective pKa values, by stabilizing the conjugate base. Homoconjugation enhances the proton-donating power of toluenesulfonic acid in acetonitrile solution by a factor of nearly 800.
In aqueous solutions, homoconjugation does not occur, because water forms stronger hydrogen bonds to the conjugate base than does the acid.
Mixed solvents
When a compound has limited solubility in water it is common practice (in the pharmaceutical industry, for example) to determine pKa values in a solvent mixture such as water/dioxane or water/methanol, in which the compound is more soluble. In water/dioxane mixtures, for example, the pKa value of an acid rises steeply with increasing percentage of dioxane, as the dielectric constant of the mixture decreases.
A pKa value obtained in a mixed solvent cannot be used directly for aqueous solutions. The reason for this is that when the solvent is in its standard state its activity is defined as one. For example, the standard state of water:dioxane mixture with 9:1 mixing ratio is precisely that solvent mixture, with no added solutes. To obtain the pKa value for use with aqueous solutions it has to be extrapolated to zero co-solvent concentration from values obtained from various co-solvent mixtures.
These facts are obscured by the omission of the solvent from the expression that is normally used to define pKa, but pKa values obtained in a given mixed solvent can be compared to each other, giving relative acid strengths. The same is true of pKa values obtained in a particular non-aqueous solvent such as DMSO.
A universal, solvent-independent, scale for acid dissociation constants has not been developed, since there is no known way to compare the standard states of two different solvents.
Factors that affect pKa values
Pauling's second rule is that the value of the first pKa for acids of the formula XOm(OH)n depends primarily on the number of oxo groups m, and is approximately independent of the number of hydroxy groups n, and also of the central atom X. Approximate values of pKa are 8 for m = 0, 2 for m = 1, −3 for m = 2 and < −10 for m = 3. Alternatively, various numerical formulas have been proposed including pKa = 8 − 5m (known as Bell's rule), pKa = 7 − 5m, or pKa = 9 − 7m. The dependence on m correlates with the oxidation state of the central atom, X: the higher the oxidation state the stronger the oxyacid.
For example, pKa for HClO is 7.2, for HClO2 is 2.0, for HClO3 is −1, and HClO4 is a strong acid. The increased acidity on adding an oxo group is due to stabilization of the conjugate base by delocalization of its negative charge over an additional oxygen atom. This rule can help assign molecular structure: for example, phosphorous acid, having molecular formula H3PO3, has a pKa near 2, which suggested that the structure is HPO(OH)2, as later confirmed by NMR spectroscopy, and not P(OH)3, which would be expected to have a pKa near 8.
Inductive effects and mesomeric effects affect pKa values. A simple example is provided by the effect of replacing the hydrogen atoms in acetic acid by the more electronegative chlorine atom. The electron-withdrawing effect of the substituent makes ionisation easier, so successive pKa values decrease in the series 4.7, 2.8, 1.4, and 0.7 when 0, 1, 2, or 3 chlorine atoms are present. The Hammett equation provides a general expression for the effect of substituents.
log(Ka) = log(K) + ρσ.
Ka is the dissociation constant of a substituted compound, K is the dissociation constant when the substituent is hydrogen, ρ is a property of the unsubstituted compound and σ has a particular value for each substituent. A plot of log(Ka) against σ is a straight line with intercept log(K) and slope ρ. This is an example of a linear free energy relationship as log(Ka) is proportional to the standard free energy change. Hammett originally formulated the relationship with data from benzoic acid with different substituents in the ortho- and para- positions: some numerical values are in Hammett equation. This and other studies allowed substituents to be ordered according to their electron-withdrawing or electron-releasing power, and to distinguish between inductive and mesomeric effects.
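A short numerical illustration of the Hammett relationship may help. In the Python sketch below, the σ values are commonly tabulated para-substituent constants and the benzoic acid pKa is approximate; all numbers are assumptions for illustration, with ρ = 1 because benzoic acid ionization is the defining reaction.

```python
# Hammett estimate of pKa for para-substituted benzoic acids:
# log Ka = log K + rho*sigma, i.e. pKa = pKa(H) - rho*sigma.
pKa_H = 4.20   # benzoic acid itself (approximate assumption)
rho = 1.0      # by definition for benzoic acid ionization
sigma_para = {"NO2": 0.78, "Cl": 0.23, "OCH3": -0.27}   # assumed tabulated values

for sub, sigma in sigma_para.items():
    print(f"p-{sub}-benzoic acid: pKa ≈ {pKa_H - rho * sigma:.2f}")
# Electron-withdrawing groups (positive sigma) lower the pKa;
# electron-releasing groups (negative sigma) raise it.
```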
Alcohols do not normally behave as acids in water, but the presence of a double bond adjacent to the OH group can substantially decrease the pKa by the mechanism of keto–enol tautomerism. Ascorbic acid is an example of this effect. The diketone 2,4-pentanedione (acetylacetone) is also a weak acid because of the keto–enol equilibrium. In aromatic compounds, such as phenol, which have an OH substituent, conjugation with the aromatic ring as a whole greatly increases the stability of the deprotonated form.
Structural effects can also be important. The difference between fumaric acid and maleic acid is a classic example. Fumaric acid is (E)-1,4-but-2-enedioic acid, a trans isomer, whereas maleic acid is the corresponding cis isomer, i.e. (Z)-1,4-but-2-enedioic acid (see cis-trans isomerism). Fumaric acid has pKa values of approximately 3.0 and 4.5. By contrast, maleic acid has pKa values of approximately 1.5 and 6.5. The reason for this large difference is that when one proton is removed from the cis isomer (maleic acid) a strong intramolecular hydrogen bond is formed with the nearby remaining carboxyl group. This favors the formation of the singly deprotonated maleate ion and opposes the removal of the second proton from that species. In the trans isomer, the two carboxyl groups are always far apart, so hydrogen bonding is not observed.
Proton sponge, 1,8-bis(dimethylamino)naphthalene, has a pKa value of 12.1. It is one of the strongest amine bases known. The high basicity is attributed to the relief of strain upon protonation and strong internal hydrogen bonding.
Effects of the solvent and solvation should also be mentioned in this section. These influences turn out to be more subtle than the dielectric-medium effect mentioned above. For example, the order of basicity of methylamines expected from the electronic effects of the methyl substituents, and observed in the gas phase, Me3N > Me2NH > MeNH2 > NH3, is changed by water to Me2NH > MeNH2 > Me3N > NH3. Neutral methylamine molecules are hydrogen-bonded to water molecules mainly through one acceptor interaction, N–HOH, and only occasionally through one more donor bond, NH–OH2. Hence, methylamines are stabilized to about the same extent by hydration, regardless of the number of methyl groups. In stark contrast, the corresponding methylammonium cations always use all available protons for donor NH–OH2 bonding. Relative stabilization of methylammonium ions thus decreases with the number of methyl groups, explaining the order of basicity of methylamines in water.
Thermodynamics
An equilibrium constant is related to the standard Gibbs energy change for the reaction, so for an acid dissociation constant
ΔG = −RT ln Ka ≈ 2.303 RT pKa.
R is the gas constant and T is the absolute temperature. Note that pKa = −log Ka and 2.303 ≈ ln 10. At 25 °C, ΔG in kJ·mol−1 ≈ 5.708 pKa (1 kJ·mol−1 = 1000 joules per mole). Free energy is made up of an enthalpy term and an entropy term,
ΔG = ΔH − TΔS
The standard enthalpy change can be determined by calorimetry or by using the van 't Hoff equation, though the calorimetric method is preferable. When both the standard enthalpy change and acid dissociation constant have been determined, the standard entropy change is easily calculated from the equation above. In the following table, the entropy terms are calculated from the experimental values of pKa and ΔH. The data were critically selected and refer to 25 °C and zero ionic strength, in water.
The first point to note is that, when pKa is positive, the standard free energy change for the dissociation reaction is also positive. Second, some reactions are exothermic and some are endothermic, but, when ΔH is negative, −TΔS is the dominant term, which determines that ΔG is positive. Last, the entropy contribution is always unfavourable (ΔS < 0) in these reactions. Ions in aqueous solution tend to orient the surrounding water molecules, which orders the solution and decreases the entropy. The contribution of an ion to the entropy is the partial molar entropy, which is often negative, especially for small or highly charged ions. The ionization of a neutral acid involves formation of two ions, so that the entropy decreases (ΔS < 0). On the second ionization of the same acid, there are now three ions and the anion has a charge, so the entropy again decreases.
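As a worked illustration of these relationships, the Python sketch below computes ΔG from a pKa and the entropy term from an assumed ΔH; the acetic acid values are approximate literature figures, used here only as assumptions.

```python
# Standard thermodynamics of acid dissociation at 25 °C:
# dG = 2.303*R*T*pKa and dG = dH - T*dS.
# The pKa and dH for acetic acid are approximate assumed values.
import math

R = 8.314       # J/(mol*K)
T = 298.15      # K
pKa = 4.756     # acetic acid (assumption)
dH = -0.41e3    # J/mol, slightly exothermic (assumption)

dG = math.log(10) * R * T * pKa            # ~5.708 kJ/mol per pKa unit
dS = (dH - dG) / T                         # entropy change of dissociation

print(f"dG   = {dG/1000:+.2f} kJ/mol")     # ~ +27.1 (positive, as expected)
print(f"T*dS = {T*dS/1000:+.2f} kJ/mol")   # ~ -27.6 (unfavourable)
print(f"dS   = {dS:+.1f} J/(mol*K)")       # ~ -92
```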
Note that the standard free energy change for the reaction is for the changes from the reactants in their standard states to the products in their standard states. The free energy change at equilibrium is zero since the chemical potentials of reactants and products are equal at equilibrium.
Experimental determination
The experimental determination of pKa values is commonly performed by means of titrations, in a medium of high ionic strength and at constant temperature. A typical procedure would be as follows. A solution of the compound in the medium is acidified with a strong acid to the point where the compound is fully protonated. The solution is then titrated with a strong base until all the protons have been removed. At each point in the titration pH is measured using a glass electrode and a pH meter. The equilibrium constants are found by fitting calculated pH values to the observed values, using the method of least squares.
The total volume of added strong base should be small compared to the initial volume of titrand solution in order to keep the ionic strength nearly constant. This will ensure that pKa remains invariant during the titration.
Consider the calculated titration curve for oxalic acid as an example. Oxalic acid has pKa values of 1.27 and 4.27. Therefore, the buffer regions will be centered at about pH 1.3 and pH 4.3. The buffer regions carry the information necessary to get the pKa values, as the concentrations of acid and conjugate base change along a buffer region.
Between the two buffer regions there is an end-point, or equivalence point, at about pH 3. This end-point is not sharp and is typical of a diprotic acid whose buffer regions overlap by a small amount: pKa2 − pKa1 is about three in this example. (If the difference in pK values were about two or less, the end-point would not be noticeable.) The second end-point begins at about pH 6.3 and is sharp. This indicates that all the protons have been removed. When this is so, the solution is not buffered and the pH rises steeply on addition of a small amount of strong base. However, the pH does not continue to rise indefinitely. A new buffer region begins at about pH 11 (pKw − 3), which is where self-ionization of water becomes important.
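The calculated curve described above can be reproduced with a few lines of code. The Python sketch below solves the charge balance for pH at each titrant volume by bisection; all concentrations and volumes are illustrative assumptions.

```python
# Calculated titration curve for oxalic acid (pKa1 = 1.27, pKa2 = 4.27):
# at each added volume of NaOH, solve the charge balance for pH.
# Concentrations and volumes are illustrative assumptions.
K1, K2, Kw = 10**-1.27, 10**-4.27, 1e-14
Ca, Va = 0.10, 50.0        # mol/L oxalic acid, mL initial volume
Cb = 0.10                  # mol/L NaOH titrant

def charge_balance(pH, Vb):
    H = 10.0 ** -pH
    OH = Kw / H
    acid_total = Ca * Va / (Va + Vb)       # diluted total oxalate
    Na = Cb * Vb / (Va + Vb)               # sodium from the titrant
    denom = H * H + K1 * H + K1 * K2       # diprotic speciation fractions
    HA = acid_total * K1 * H / denom
    A2 = acid_total * K1 * K2 / denom
    return Na + H - OH - HA - 2 * A2       # zero at the true pH

def solve_pH(Vb, lo=0.0, hi=14.0):
    for _ in range(60):                    # balance decreases with pH
        mid = 0.5 * (lo + hi)
        if charge_balance(mid, Vb) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for Vb in (0.0, 25.0, 50.0, 75.0, 100.0, 110.0):   # mL NaOH added
    print(f"Vb = {Vb:5.1f} mL -> pH = {solve_pH(Vb):.2f}")
```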
It is very difficult to measure pH values of less than two in aqueous solution with a glass electrode, because the Nernst equation breaks down at such low pH values. To determine pK values of less than about 2 or more than about 11 spectrophotometric or NMR measurements may be used instead of, or combined with, pH measurements.
When the glass electrode cannot be employed, as with non-aqueous solutions, spectrophotometric methods are frequently used. These may involve absorbance or fluorescence measurements. In both cases the measured quantity is assumed to be proportional to the sum of contributions from each photo-active species; with absorbance measurements the Beer–Lambert law is assumed to apply.
Isothermal titration calorimetry (ITC) may be used to determine both a pK value and the corresponding standard enthalpy for acid dissociation. Software to perform the calculations is supplied by the instrument manufacturers for simple systems.
Aqueous solutions with normal water cannot be used for 1H NMR measurements; heavy water, D2O, must be used instead. 13C NMR data, however, can be used with normal water, and 1H NMR spectra can be used with non-aqueous media. The quantities measured with NMR are time-averaged chemical shifts, as proton exchange is fast on the NMR time-scale. Other chemical shifts, such as those of 31P, can be measured.
Micro-constants
For some polyprotic acids, dissociation (or association) occurs at more than one nonequivalent site, and the observed macroscopic equilibrium constant, or macro-constant, is a combination of micro-constants involving distinct species. When one reactant forms two products in parallel, the macro-constant is a sum of two micro-constants, K = KX + KY. This is true for example for the deprotonation of the amino acid cysteine, which exists in solution as a neutral zwitterion HS–CH2–CH(NH3+)–COO−. The two micro-constants represent deprotonation either at sulphur or at nitrogen, and the macro-constant sum here is the acid dissociation constant Ka = Ka(−SH) + Ka(−NH3+).
Similarly, a base such as spermine has more than one site where protonation can occur. For example, mono-protonation can occur at a terminal group or at internal groups. The Kb values for dissociation of spermine protonated at one or other of the sites are examples of micro-constants. They cannot be determined directly by means of pH, absorbance, fluorescence or NMR measurements; a measured Kb value is the sum of the K values for the micro-reactions.
Nevertheless, the site of protonation is very important for biological function, so mathematical methods have been developed for the determination of micro-constants.
When two reactants form a single product in parallel, the macro-constant satisfies 1/K = 1/KX + 1/KY. For example, the abovementioned equilibrium for spermine may be considered in terms of the Ka values of two tautomeric conjugate acids, with macro-constant 1/Ka = 1/Ka,X + 1/Ka,Y. This is equivalent to the preceding expression, since Kb is proportional to 1/Ka.
When a reactant undergoes two reactions in series, the macro-constant for the combined reaction is the product of the micro-constants for the two steps. For example, the abovementioned cysteine zwitterion can lose two protons, one from sulphur and one from nitrogen, and the overall macro-constant for losing two protons is the product of two dissociation constants, K = KX KY. This can also be written in terms of logarithmic constants as pK = pKX + pKY.
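These combination rules are easy to demonstrate numerically. In the Python sketch below, the micro-pK values are purely illustrative assumptions for a cysteine-like pair of sites, and the series case treats the two given constants as the successive-step constants (a simplification for illustration).

```python
import math

# Combining micro-constants into macro-constants.
# The micro-pK values are illustrative assumptions only.
pK_S, pK_N = 8.5, 10.0                     # deprotonation at S and at N
K_S, K_N = 10.0 ** -pK_S, 10.0 ** -pK_N

# Parallel loss of one proton from the same species: constants add,
# so the macro-pKa lies slightly below the smaller micro-pK.
K_macro = K_S + K_N
print(f"macro pKa (first proton) = {-math.log10(K_macro):.2f}")   # ~8.49

# Two steps in series: constants multiply, so logarithmic constants add
# (here the two given pK values stand in for the successive steps).
print(f"overall pK (both protons) = {pK_S + pK_N:.1f}")           # 18.5
```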
Applications and significance
A knowledge of pKa values is important for the quantitative treatment of systems involving acid–base equilibria in solution. Many applications exist in biochemistry; for example, the pKa values of proteins and amino acid side chains are of major importance for the activity of enzymes and the stability of proteins. Protein pKa values cannot always be measured directly, but may be calculated using theoretical methods. Buffer solutions are used extensively to provide solutions at or near the physiological pH for the study of biochemical reactions; the design of these solutions depends on a knowledge of the pKa values of their components. Important buffer solutions include MOPS, which provides a solution with pH 7.2, and tricine, which is used in gel electrophoresis. Buffering is an essential part of acid–base physiology, including acid–base homeostasis, and is key to understanding disorders such as acid–base imbalance. The isoelectric point of a given molecule is a function of its pK values, so different molecules have different isoelectric points. This permits a technique called isoelectric focusing, which is used for separation of proteins by two-dimensional polyacrylamide gel electrophoresis.
Buffer solutions also play a key role in analytical chemistry. They are used whenever there is a need to fix the pH of a solution at a particular value. Compared with an aqueous solution, the pH of a buffer solution is relatively insensitive to the addition of a small amount of strong acid or strong base. The buffer capacity of a simple buffer solution is largest when pH = pKa. In acid–base extraction, the efficiency of extraction of a compound into an organic phase, such as an ether, can be optimised by adjusting the pH of the aqueous phase using an appropriate buffer. At the optimum pH, the concentration of the electrically neutral species is maximised; such a species is more soluble in organic solvents having a low dielectric constant than it is in water. This technique is used for the purification of weak acids and bases.
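The insensitivity of a buffer near pH = pKa can be shown with the Henderson–Hasselbalch relation. A minimal Python sketch for an acetate-like buffer follows; the pKa and the amounts are assumptions.

```python
# Buffer pH via the Henderson–Hasselbalch relation, and the small shift
# caused by adding strong acid. All quantities are assumed values.
import math

pKa = 4.756              # acetic acid (assumption)
acid, base = 0.10, 0.10  # mol of HA and A- in 1 L; pH = pKa at this ratio

def pH(acid_mol, base_mol):
    return pKa + math.log10(base_mol / acid_mol)

print(f"initial pH = {pH(acid, base):.3f}")

# Adding 0.005 mol of strong acid converts that much A- into HA.
print(f"after HCl  = {pH(acid + 0.005, base - 0.005):.3f}")  # ~0.04 lower
```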
A pH indicator is a weak acid or weak base that changes colour in the transition pH range, which is approximately pKa ± 1. The design of a universal indicator requires a mixture of indicators whose adjacent pKa values differ by about two, so that their transition pH ranges just overlap.
In pharmacology, ionization of a compound alters its physical behaviour and macro properties such as solubility and lipophilicity (log P). For example, ionization of any compound will increase its solubility in water, but decrease its lipophilicity. This is exploited in drug development to increase the concentration of a compound in the blood by adjusting the pKa of an ionizable group.
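The pH dependence of ionization follows directly from the pKa. The following Python sketch computes the ionized fraction of a weak acid; the pKa of 4.4 is an assumed, drug-like illustrative value.

```python
# Fraction of a weak acid that is ionized at a given pH, relevant to
# solubility and lipophilicity. The pKa of 4.4 is an assumed value.
def fraction_ionized_acid(pKa: float, pH: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

for pH in (2.0, 7.4):
    f = fraction_ionized_acid(4.4, pH)
    print(f"pH {pH}: {100 * f:.1f}% ionized")
# Stomach pH ~2: mostly neutral (lipophilic); blood pH 7.4: ~99.9% ionized.
```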
Knowledge of pKa values is important for the understanding of coordination complexes, which are formed by the interaction of a metal ion, Mm+, acting as a Lewis acid, with a ligand, L, acting as a Lewis base. However, the ligand may also undergo protonation reactions, so the formation of a complex in aqueous solution could be represented symbolically by the reaction
[M(H2O)n]m+ + LH <=> [M(H2O)n−1L](m−1)+ + H3O+
To determine the equilibrium constant for this reaction, in which the ligand loses a proton, the pKa of the protonated ligand must be known. In practice, the ligand may be polyprotic; for example EDTA4− can accept four protons; in that case, all pKa values must be known. In addition, the metal ion is subject to hydrolysis, that is, it behaves as a weak acid, so the pK values for the hydrolysis reactions must also be known.
Assessing the hazard associated with an acid or base may require a knowledge of pKa values. For example, hydrogen cyanide is a very toxic gas, because the cyanide ion inhibits the iron-containing enzyme cytochrome c oxidase. Hydrogen cyanide is a weak acid in aqueous solution with a pKa of about 9. In strongly alkaline solutions, above pH 11, say, it follows that sodium cyanide is "fully dissociated" so the hazard due to the hydrogen cyanide gas is much reduced. An acidic solution, on the other hand, is very hazardous because all the cyanide is in its acid form. Ingestion of cyanide by mouth is potentially fatal, independently of pH, because of the reaction with cytochrome c oxidase.
In environmental science acid–base equilibria are important for lakes and rivers; for example, humic acids are important components of natural waters. Another example occurs in chemical oceanography: in order to quantify the solubility of iron(III) in seawater at various salinities, the pKa values for the formation of the iron(III) hydrolysis products Fe(OH)^2+, Fe(OH)2^+ and Fe(OH)3 were determined, along with the solubility product of iron hydroxide.
Values for common substances
There are multiple techniques to determine the pKa of a chemical, leading to some discrepancies between different sources. Well-measured values are typically within 0.1 units of each other. Data presented here were taken at 25 °C in water. More values can be found in the Thermodynamics section, above. A table of pKa of carbon acids, measured in DMSO, can be found on the page on carbanions.
See also
Acidosis
Acids in wine: tartaric, malic and citric are the principal acids in wine.
Alkalosis
Arterial blood gas
Chemical equilibrium
Conductivity (electrolytic)
Grotthuss mechanism: how protons are transferred between hydronium ions and water molecules, accounting for the exceptionally high ionic mobility of the proton (animation).
Hammett acidity function: a measure of acidity that is used for very concentrated solutions of strong acids, including superacids.
Ion transport number
Ocean acidification: dissolution of atmospheric carbon dioxide affects seawater pH. The reaction depends on total inorganic carbon and on solubility equilibria with solid carbonates such as limestone and dolomite.
Law of dilution
pCO2
pH
Predominance diagram: relates to equilibria involving polyoxyanions. pKa values are needed to construct these diagrams.
Proton affinity: a measure of basicity in the gas phase.
Stability constants of complexes: formation of a complex can often be seen as a competition between proton and metal ion for a ligand, which is the product of dissociation of an acid.
External links
Acidity–Basicity Data in Nonaqueous Solvents Extensive bibliography of pKa values in DMSO, acetonitrile, THF, heptane, 1,2-dichloroethane, and in the gas phase
Curtipot All-in-one freeware for pH and acid–base equilibrium calculations and for simulation and analysis of potentiometric titration curves with spreadsheets
SPARC Physical/Chemical property calculator Includes a database with aqueous, non-aqueous, and gaseous phase pKa values than can be searched using SMILES or CAS registry numbers
Aqueous-Equilibrium Constants pKa values for various acid and bases. Includes a table of some solubility products
Free guide to pKa and log p interpretation and measurement Explanations of the relevance of these properties to pharmacology
Free online prediction tool (Marvin) pKa, log p, log d etc. From ChemAxon
Chemicalize.org:List of predicted structure based properties
pKa Chart by David A. Evans
Equilibrium chemistry
Acids
Bases (chemistry)
Analytical chemistry
Physical chemistry | Acid dissociation constant | ["Physics", "Chemistry"] | 10,292 | ["Applied and interdisciplinary physics", "Acids", "Equilibrium chemistry", "Bases (chemistry)", "nan", "Physical chemistry"]
57,763 | https://en.wikipedia.org/wiki/Aerosol | An aerosol is a suspension of fine solid particles or liquid droplets in air or another gas. Aerosols can be generated from natural or human causes. The term aerosol commonly refers to the mixture of particulates in air, and not to the particulate matter alone. Examples of natural aerosols are fog, mist or dust. Examples of human caused aerosols include particulate air pollutants, mist from the discharge at hydroelectric dams, irrigation mist, perfume from atomizers, smoke, dust, sprayed pesticides, and medical treatments for respiratory illnesses.
Several types of atmospheric aerosol have a significant effect on Earth's climate: volcanic, desert dust, sea-salt, that originating from biogenic sources and human-made. Volcanic aerosol forms in the stratosphere after an eruption as droplets of sulfuric acid that can prevail for up to two years, and reflect sunlight, lowering temperature. Desert dust, mineral particles blown to high altitudes, absorb heat and may be responsible for inhibiting storm cloud formation. Human-made sulfate aerosols, primarily from burning oil and coal, affect the behavior of clouds. When aerosols absorb pollutants, it facilitates the deposition of pollutants to the surface of the earth as well as to bodies of water. This has the potential to be damaging to both the environment and human health.
Ship tracks are clouds that form around the exhaust released by ships into the still ocean air. Water molecules collect around the tiny particles (aerosols) from exhaust to form a cloud seed. More and more water accumulates on the seed until a visible cloud is formed. In the case of ship tracks, the cloud seeds are stretched over a long narrow path where the wind has blown the ship's exhaust, so the resulting clouds resemble long strings over the ocean.
The warming caused by human-produced greenhouse gases has been somewhat offset by the cooling effect of human-produced aerosols. In 2020, regulations on fuel significantly cut sulfur dioxide emissions from international shipping by approximately 80%, leading to an unexpected global geoengineering termination shock.
The liquid or solid particles in an aerosol have diameters typically less than 1 μm. Larger particles with a significant settling speed make the mixture a suspension, but the distinction is not clear. In everyday language, aerosol often refers to a dispensing system that delivers a consumer product from a spray can.
Diseases can spread by means of small droplets in the breath, sometimes called bioaerosols.
Definitions
Aerosol is defined as a suspension system of solid or liquid particles in a gas. An aerosol includes both the particles and the suspending gas, which is usually air. Meteorologists and climatologists often refer to aerosols as particulate matter, and classification into size ranges such as PM2.5 or PM10 is useful in the field of atmospheric pollution, as these size ranges play a role in ascertaining harmful effects on human health. Frederick G. Donnan presumably first used the term aerosol during World War I to describe an aero-solution, clouds of microscopic particles in air. This term developed analogously to the term hydrosol, a colloid system with water as the dispersed medium. Primary aerosols contain particles introduced directly into the gas; secondary aerosols form through gas-to-particle conversion.
Key aerosol groups include sulfates, organic carbon, black carbon, nitrates, mineral dust, and sea salt; they usually clump together to form a complex mixture. Various types of aerosol, classified according to physical form and how they were generated, include dust, fume, mist, smoke and fog.
There are several measures of aerosol concentration. Environmental science and environmental health often use the mass concentration (M), defined as the mass of particulate matter per unit volume, in units such as μg/m3. Also commonly used is the number concentration (N), the number of particles per unit volume, in units such as number per m3 or number per cm3.
Particle size has a major influence on particle properties, and the aerosol particle radius or diameter (dp) is a key property used to characterise aerosols.
Aerosols vary in their dispersity. A monodisperse aerosol, producible in the laboratory, contains particles of uniform size. Most aerosols, however, as polydisperse colloidal systems, exhibit a range of particle sizes. Liquid droplets are almost always nearly spherical, but scientists use an equivalent diameter to characterize the properties of various shapes of solid particles, some very irregular. The equivalent diameter is the diameter of a spherical particle with the same value of some physical property as the irregular particle. The equivalent volume diameter (de) is defined as the diameter of a sphere of the same volume as that of the irregular particle. Also commonly used is the aerodynamic diameter, da.
Generation and applications
People generate aerosols for various purposes, including:
as test aerosols for calibrating instruments, performing research, and testing sampling equipment and air filters;
to deliver deodorants, paints, and other consumer products in sprays;
for dispersal and agricultural application;
for medical treatment of respiratory disease; and
in fuel injection systems and other combustion technology.
Some devices for generating aerosols are:
Aerosol spray
Atomizer nozzle or nebulizer
Electrospray
Electronic cigarette
Vibrating orifice aerosol generator (VOAG)
In the atmosphere
Although all hydrometeors, solid and liquid, can be described as aerosols, a distinction is commonly made between such dispersions (i.e. clouds) containing activated drops and crystals, and aerosol particles. The atmosphere of Earth contains aerosols of various types and concentrations, including quantities of:
natural inorganic materials: fine dust, sea salt, or water droplets
natural organic materials: smoke, pollen, spores, or bacteria
anthropogenic products of combustion such as: smoke, ashes or dusts
Aerosols can be found in urban ecosystems in various forms, for example:
Dust
Cigarette smoke
Mist from aerosol spray cans
Soot or fumes in car exhaust
The presence of aerosols in the Earth's atmosphere can influence its climate, as well as human health.
Effects
Volcanic eruptions release large amounts of sulfuric acid, hydrogen sulfide and hydrochloric acid into the atmosphere. These species form aerosol droplets and eventually return to earth as acid rain, which has a number of adverse effects on the environment and human life.
Aerosols interact with the Earth's energy budget in two ways, directly and indirectly.
For example, a direct effect is that aerosols scatter and absorb incoming solar radiation. This mainly leads to a cooling of the surface (solar radiation is scattered back to space) but may also contribute to a warming of the surface (caused by the absorption of incoming solar energy). This is an additional element to the greenhouse effect and therefore contributes to global climate change.
The indirect effects refer to aerosols interfering with formations, such as clouds, that interact directly with radiation. For example, aerosols can modify the size of the cloud particles in the lower atmosphere, thereby changing the way clouds reflect and absorb light and therefore modifying the Earth's energy budget.
There is evidence to suggest that anthropogenic aerosols actually offset the effects of greenhouse gases in some areas, which is why the Northern Hemisphere shows slower surface warming than the Southern Hemisphere, although that just means that the Northern Hemisphere will absorb the heat later through ocean currents bringing warmer waters from the South. On a global scale however, aerosol cooling decreases greenhouse-gases-induced heating without offsetting it completely.
Aerosols in the 20 μm range show a particularly long persistence time in air conditioned rooms due to their "jet rider" behaviour (they move with air jets but fall out gravitationally in slowly moving air); as this aerosol size is most effectively adsorbed in the human nose, the primary initial infection site in COVID-19, such aerosols may contribute to the pandemic.
Aerosol particles with an effective diameter smaller than 10 μm can enter the bronchi, while the ones with an effective diameter smaller than 2.5 μm can enter as far as the gas exchange region in the lungs, which can be hazardous to human health.
Size distribution
For a monodisperse aerosol, a single number—the particle diameter—suffices to describe the size of the particles. However, more complicated particle-size distributions describe the sizes of the particles in a polydisperse aerosol. This distribution defines the relative amounts of particles, sorted according to size. One approach to defining the particle size distribution uses a list of the sizes of every particle in a sample. However, this approach proves tedious to ascertain in aerosols with millions of particles and awkward to use. Another approach splits the size range into intervals and finds the number (or proportion) of particles in each interval. These data can be presented in a histogram with the area of each bar representing the proportion of particles in that size bin, usually normalised by dividing the number of particles in a bin by the width of the interval so that the area of each bar is proportionate to the number of particles in the size range that it represents. If the width of the bins tends to zero, the frequency function is obtained:
df = f(dp) d(dp)
where
dp is the diameter of the particles
df is the fraction of particles having diameters between dp and dp + d(dp)
f(dp) is the frequency function
Therefore, the area under the frequency curve between two sizes a and b represents the total fraction of the particles in that size range:
F = ∫_a^b f(dp) d(dp)
It can also be formulated in terms of the total number density N:
dN = N f(dp) d(dp)
Assuming spherical aerosol particles, the aerosol surface area per unit volume (S) is given by the second moment:
S = πN ∫_0^∞ dp^2 f(dp) d(dp)
And the third moment gives the total volume concentration (V) of the particles:
V = (π/6) N ∫_0^∞ dp^3 f(dp) d(dp)
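In practice these moments are evaluated from binned measurements. The Python sketch below does this for a made-up five-bin distribution (all bin values are assumptions), assuming spherical particles as above.

```python
# Number, surface-area and volume concentrations from a binned particle-
# size distribution, assuming spherical particles. Bin data are made-up
# illustrative values.
import math

# (bin midpoint diameter in um, number concentration in the bin, cm^-3)
bins = [(0.05, 800.0), (0.1, 1500.0), (0.5, 300.0), (1.0, 50.0), (5.0, 2.0)]

N = sum(n for _, n in bins)                           # cm^-3
S = sum(math.pi * d**2 * n for d, n in bins)          # um^2 per cm^3
V = sum(math.pi / 6.0 * d**3 * n for d, n in bins)    # um^3 per cm^3

print(f"N = {N:.0f} cm^-3, S = {S:.1f} um^2/cm^3, V = {V:.2f} um^3/cm^3")
# The rare large particles dominate the volume moment, while the numerous
# small particles dominate the number concentration.
```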
The particle size distribution can be approximated. The normal distribution usually does not suitably describe particle size distributions in aerosols because of the skewness associated with a long tail of larger particles. Also for a quantity that varies over a large range, as many aerosol sizes do, the width of the distribution implies negative particle sizes, which is not physically realistic. However, the normal distribution can be suitable for some aerosols, such as test aerosols, certain pollen grains and spores.
A more widely chosen log-normal distribution gives the number frequency as:
f(dp) = (1 / (dp σ √(2π))) exp(−(ln dp − ln d̄p)² / (2σ²))
where:
σ is the standard deviation of the size distribution and
d̄p is the arithmetic mean diameter.
The log-normal distribution has no negative values, can cover a wide range of values, and fits many observed size distributions reasonably well.
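A small Python sketch can make the log-normal concrete. It uses the equivalent count-median-diameter/geometric-standard-deviation parameterization, a common alternative way of writing the same distribution; the CMD and GSD values are assumptions.

```python
# Evaluate a log-normal number distribution and the fraction of particles
# below 2.5 um. CMD and GSD are assumed illustrative values.
import math

CMD = 0.2    # um, count median diameter (assumption)
GSD = 2.0    # geometric standard deviation (assumption)

def lognormal_pdf(d):
    """Number frequency per um at diameter d (um)."""
    s = math.log(GSD)
    return math.exp(-((math.log(d) - math.log(CMD)) ** 2) / (2 * s * s)) \
           / (d * s * math.sqrt(2 * math.pi))

def fraction_below(d):
    """Cumulative fraction of particles smaller than d (um)."""
    z = (math.log(d) - math.log(CMD)) / (math.sqrt(2) * math.log(GSD))
    return 0.5 * (1 + math.erf(z))

print(f"f(0.2 um) = {lognormal_pdf(0.2):.3f} per um")
print(f"fraction below 2.5 um = {fraction_below(2.5):.4f}")   # ~0.9999
```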
Other distributions sometimes used to characterise particle size include: the Rosin-Rammler distribution, applied to coarsely dispersed dusts and sprays; the Nukiyama–Tanasawa distribution, for sprays of extremely broad size ranges; the power function distribution, occasionally applied to atmospheric aerosols; the exponential distribution, applied to powdered materials; and for cloud droplets, the Khrgian–Mazin distribution.
Physics
Terminal velocity of a particle in a fluid
For low values of the Reynolds number (<1), true for most aerosol motion, Stokes' law describes the force of resistance on a solid spherical particle in a fluid. However, Stokes' law is only valid when the velocity of the gas at the surface of the particle is zero. For small particles (< 1 μm) that characterize aerosols, however, this assumption fails. To account for this failure, one can introduce the Cunningham correction factor, always greater than 1. Including this factor, one finds the relation between the resisting force on a particle and its velocity:
FD = 3πηVd / Cc
where
FD is the resisting force on a spherical particle
η is the dynamic viscosity of the gas
V is the particle velocity
d is the particle diameter
Cc is the Cunningham correction factor.
This allows us to calculate the terminal velocity of a particle undergoing gravitational settling in still air. Neglecting buoyancy effects, we find:
VTS = ρp d² g Cc / (18η)
where
VTS is the terminal settling velocity of the particle, ρp is the particle density and g is the acceleration due to gravity.
The terminal velocity can also be derived for other kinds of forces. If Stokes' law holds, then the resistance to motion is directly proportional to speed. The constant of proportionality is the mechanical mobility (B) of a particle:
B = V / FD = Cc / (3πηd)
A particle traveling at any reasonable initial velocity approaches its terminal velocity exponentially with an e-folding time equal to the relaxation time τ = ρp d² Cc / (18η):
V(t) = Vf − (Vf − V0) e^(−t/τ)
where:
V(t) is the particle speed at time t
Vf is the final particle speed
V0 is the initial particle speed
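Putting the settling velocity, slip correction and relaxation time together, the Python sketch below evaluates them for unit-density spheres in room-temperature air. The viscosity, mean free path and slip-correction coefficients are commonly used empirical values, assumed here rather than taken from the text.

```python
# Terminal settling velocity and relaxation time of unit-density spheres
# in air, with an empirical Cunningham slip correction. All gas properties
# and correction coefficients are assumed standard values.
import math

eta = 1.81e-5      # Pa*s, dynamic viscosity of air near 20 C (assumption)
lam = 66e-9        # m, mean free path of air molecules (assumption)
rho_p = 1000.0     # kg/m^3, particle density
g = 9.81           # m/s^2

def cunningham(d):
    Kn = 2 * lam / d
    return 1 + Kn * (1.257 + 0.4 * math.exp(-1.1 / Kn))   # common fit

for d_um in (0.1, 1.0, 10.0):
    d = d_um * 1e-6
    Cc = cunningham(d)
    v_ts = rho_p * d**2 * g * Cc / (18 * eta)   # terminal velocity
    tau = rho_p * d**2 * Cc / (18 * eta)        # relaxation time
    print(f"d = {d_um:5.1f} um: Cc = {Cc:5.2f}, "
          f"v_ts = {v_ts:.2e} m/s, tau = {tau:.2e} s")
# A 1 um sphere settles at roughly 3.5e-5 m/s with tau of a few microseconds.
```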
To account for the effect of the shape of non-spherical particles, a correction factor known as the dynamic shape factor is applied to Stokes' law. It is defined as the ratio of the resistive force on the irregular particle to that on a spherical particle with the same volume and velocity:
χ = FD / (3πηVde)
where:
χ is the dynamic shape factor and de is the equivalent volume diameter
Aerodynamic diameter
The aerodynamic diameter of an irregular particle is defined as the diameter of the spherical particle with a density of 1000 kg/m3 and the same settling velocity as the irregular particle.
Neglecting the slip correction, the particle settles at a terminal velocity proportional to the square of the aerodynamic diameter, da:
VTS = ρ0 da² g / (18η)
where
= standard particle density (1000 kg/m3).
This equation gives the aerodynamic diameter:
da = de (ρp / (χρ0))^(1/2)
One can apply the aerodynamic diameter to particulate pollutants or to inhaled drugs to predict where in the respiratory tract such particles deposit. Pharmaceutical companies typically use aerodynamic diameter, not geometric diameter, to characterize particles in inhalable drugs.
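Converting between equivalent volume diameter and aerodynamic diameter is a one-line calculation. A minimal Python sketch, with an illustrative dust-like density as the only assumption:

```python
# Aerodynamic diameter from equivalent volume diameter (slip correction
# neglected): d_a = d_e * sqrt(rho_p / (chi * rho_0)).
import math

rho_0 = 1000.0   # kg/m^3, standard particle density

def aerodynamic_diameter(d_e, rho_p, chi=1.0):
    """chi is the dynamic shape factor (1 for a sphere)."""
    return d_e * math.sqrt(rho_p / (chi * rho_0))

# A 1 um mineral-dust sphere of assumed density 2600 kg/m^3:
print(f"{aerodynamic_diameter(1.0, 2600.0):.2f} um")   # ~1.61 um
```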
Dynamics
The previous discussion focused on single aerosol particles. In contrast, aerosol dynamics explains the evolution of complete aerosol populations. The concentrations of particles will change over time as a result of many processes. External processes that move particles outside a volume of gas under study include diffusion, gravitational settling, and electric charges and other external forces that cause particle migration. A second set of processes, internal to a given volume of gas, includes particle formation (nucleation), evaporation, chemical reaction, and coagulation.
A differential equation called the Aerosol General Dynamic Equation (GDE) characterizes the evolution of the number density of particles in an aerosol due to these processes:
change in time = convective transport + Brownian diffusion + gas–particle interactions + coagulation + migration by external forces
In symbolic form,
∂ni/∂t + ∇·(q ni) = ∇·(Dp ∇ni) + (∂ni/∂t)growth + (∂ni/∂t)coag − ∇·(qF ni)
Where:
ni is the number density of particles of size category i
q is the particle velocity
Dp is the particle Stokes–Einstein diffusivity
qF is the particle velocity associated with an external force
Coagulation
As particles and droplets in an aerosol collide with one another, they may undergo coalescence or aggregation. This process leads to a change in the aerosol particle-size distribution, with the mode increasing in diameter as total number of particles decreases. On occasion, particles may shatter apart into numerous smaller particles; however, this process usually occurs primarily in particles too large for consideration as aerosols.
Dynamics regimes
The Knudsen number of the particle defines three different dynamical regimes that govern the behaviour of an aerosol:
Kn = 2λ / dp
where λ is the mean free path of the suspending gas and dp is the diameter of the particle. For particles in the free molecular regime, Kn >> 1: particles are small compared to the mean free path of the suspending gas. In this regime, particles interact with the suspending gas through a series of "ballistic" collisions with gas molecules. As such, they behave similarly to gas molecules, tending to follow streamlines and diffusing rapidly through Brownian motion. The mass flux equation in the free molecular regime is:
I = πa² CA α (P∞ − PA) / (kB T)
where a is the particle radius, P∞ and PA are the pressures far from the droplet and at the surface of the droplet respectively, kb is the Boltzmann constant, T is the temperature, CA is mean thermal velocity and α is mass accommodation coefficient. The derivation of this equation assumes constant pressure and constant diffusion coefficient.
Particles are in the continuum regime when Kn << 1. In this regime, the particles are big compared to the mean free path of the suspending gas, meaning that the suspending gas acts as a continuous fluid flowing round the particle. The molecular flux in this regime is:
I = 4πa DAB MA (PA∞ − PAS) / (RT)
where a is the radius of the particle A, MA is the molecular mass of the particle A, DAB is the diffusion coefficient between particles A and B, R is the ideal gas constant, T is the temperature (in absolute units like kelvin), and PA∞ and PAS are the pressures at infinite and at the surface respectively.
The transition regime contains all the particles in between the free molecular and continuum regimes, i.e. Kn ≈ 1. The forces experienced by a particle are a complex combination of interactions with individual gas molecules and macroscopic interactions. The semi-empirical equation describing mass flux is:
I = Icont (1 + Kn) / (1 + 1.71 Kn + 1.33 Kn²)
where Icont is the mass flux in the continuum regime. This formula is called the Fuchs-Sutugin interpolation formula. These equations do not take into account the heat release effect.
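The behaviour of the interpolation across regimes can be tabulated directly. The Python sketch below uses the unit-accommodation-coefficient form of the correction factor quoted above (the 1.71 and 4/3 coefficients are the values commonly assumed for α = 1):

```python
# Fuchs–Sutugin transition-regime correction to the continuum flux,
# beta = I/I_cont = (1 + Kn) / (1 + 1.71 Kn + 4/3 Kn^2), the form
# commonly assumed for unit mass accommodation coefficient.
def fuchs_sutugin(Kn):
    return (1 + Kn) / (1 + 1.71 * Kn + (4.0 / 3.0) * Kn * Kn)

for Kn in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"Kn = {Kn:6.2f}: I/I_cont = {fuchs_sutugin(Kn):.4f}")
# beta -> 1 in the continuum limit (Kn << 1) and falls off roughly as
# 1/Kn in the free molecular limit (Kn >> 1).
```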
Partitioning
Aerosol partitioning theory governs condensation on and evaporation from an aerosol surface. Condensation of mass causes the mode of the particle-size distribution of the aerosol to increase; conversely, evaporation causes the mode to decrease. Nucleation is the process of forming aerosol mass from the condensation of a gaseous precursor, specifically a vapor. Net condensation of the vapor requires supersaturation, a partial pressure greater than its vapor pressure. This can happen for three reasons:
Lowering the temperature of the system lowers the vapor pressure.
Chemical reactions may increase the partial pressure of a gas or lower its vapor pressure.
The addition of additional vapor to the system may lower the equilibrium vapor pressure according to Raoult's law.
There are two types of nucleation processes. Gases preferentially condense onto surfaces of pre-existing aerosol particles, known as heterogeneous nucleation. This process causes the diameter at the mode of particle-size distribution to increase with constant number concentration. With sufficiently high supersaturation and no suitable surfaces, particles may condense in the absence of a pre-existing surface, known as homogeneous nucleation. This results in the addition of very small, rapidly growing particles to the particle-size distribution.
Activation
Water coats particles in aerosols, making them activated, usually in the context of forming a cloud droplet (such as natural cloud seeding by aerosols from trees in a forest). Following the Kelvin equation (based on the curvature of liquid droplets), smaller particles need a higher ambient relative humidity to maintain equilibrium than larger particles do. The following formula gives the relative humidity at equilibrium:
RH = (ps / p0) × 100% = S × 100%
where ps is the saturation vapor pressure above a particle at equilibrium (around a curved liquid droplet), p0 is the saturation vapor pressure (flat surface of the same liquid) and S is the saturation ratio.
The Kelvin equation for the saturation vapor pressure above a curved surface is:
ln(ps / p0) = 2σM / (ρRTrp)
where rp is the droplet radius, σ the surface tension of the droplet, ρ the density of the liquid, M the molar mass, T the temperature, and R the molar gas constant.
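The Kelvin effect is easily evaluated numerically. In the Python sketch below, the water properties (surface tension, density, molar mass) are assumed standard values at about 20 °C.

```python
# Equilibrium saturation ratio over a curved water droplet (Kelvin
# equation). Water properties are assumed standard ~20 C values.
import math

sigma = 0.0728    # N/m, surface tension of water (assumption)
M = 0.018015      # kg/mol, molar mass of water
rho = 997.0       # kg/m^3, density of water
R = 8.314         # J/(mol*K)
T = 293.15        # K

def kelvin_saturation(r_p):
    """Saturation ratio p_s/p_0 for a droplet of radius r_p (m)."""
    return math.exp(2 * sigma * M / (rho * R * T * r_p))

for r_nm in (1, 10, 100, 1000):
    S = kelvin_saturation(r_nm * 1e-9)
    print(f"r = {r_nm:5d} nm: S = {S:.4f} ({(S - 1) * 100:.2f}% supersat.)")
# Smaller droplets need a higher ambient relative humidity to persist.
```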
Solution to the general dynamic equation
There are no general solutions to the general dynamic equation (GDE); common methods used to solve the general dynamic equation include:
Moment method
Modal/sectional method,
Quadrature method of moments/Taylor-series expansion method of moments, and
Monte Carlo method.
Detection
Aerosols can be measured in situ or with remote sensing techniques, either ground-based or airborne.
In situ observations
Some available in situ measurement techniques include:
Aerosol mass spectrometer (AMS)
Differential mobility analyzer (DMA)
Electrical aerosol spectrometer (EAS)
Aerodynamic particle sizer (APS)
Aerodynamic aerosol classifier (AAC)
Wide range particle spectrometer (WPS)
Micro-Orifice Uniform Deposit Impactor (MOUDI)
Condensation particle counter (CPC)
Epiphaniometer
Electrical low pressure impactor (ELPI)
Aerosol particle mass-analyser (APM)
Centrifugal Particle Mass Analyser (CPMA)
Remote sensing approach
Remote sensing approaches include:
Sun photometer
Lidar
Imaging spectroscopy
Size selective sampling
Particles can deposit in the nose, mouth, pharynx and larynx (the head airways region), deeper within the respiratory tract (from the trachea to the terminal bronchioles), or in the alveolar region. The location of deposition of aerosol particles within the respiratory system strongly determines the health effects of exposure to such aerosols. This phenomenon led people to invent aerosol samplers that select a subset of the aerosol particles that reach certain parts of the respiratory system.
Examples of these subsets of the particle-size distribution of an aerosol, important in occupational health, include the inhalable, thoracic, and respirable fractions. The fraction that can enter each part of the respiratory system depends on the deposition of particles in the upper parts of the airway. The inhalable fraction of particles, defined as the proportion of particles originally in the air that can enter the nose or mouth, depends on external wind speed and direction and on the particle-size distribution by aerodynamic diameter. The thoracic fraction is the proportion of the particles in ambient aerosol that can reach the thorax or chest region. The respirable fraction is the proportion of particles in the air that can reach the alveolar region. To measure the respirable fraction of particles in air, a pre-collector is used with a sampling filter. The pre-collector excludes particles as the airways remove particles from inhaled air. The sampling filter collects the particles for measurement. It is common to use cyclonic separation for the pre-collector, but other techniques include impactors, horizontal elutriators, and large pore membrane filters.
Two alternative size-selective criteria, often used in atmospheric monitoring, are PM10 and PM2.5. PM10 is defined by ISO as particles which pass through a size-selective inlet with a 50% efficiency cut-off at 10 μm aerodynamic diameter and PM2.5 as particles which pass through a size-selective inlet with a 50% efficiency cut-off at 2.5 μm aerodynamic diameter. PM10 corresponds to the "thoracic convention" as defined in ISO 7708:1995, Clause 6; PM2.5 corresponds to the "high-risk respirable convention" as defined in ISO 7708:1995, 7.1. The United States Environmental Protection Agency replaced the older standards for particulate matter based on Total Suspended Particulate with another standard based on PM10 in 1987 and then introduced standards for PM2.5 (also known as fine particulate matter) in 1997.
See also
Aerogel
Aeroplankton
Aerosol transmission
Bioaerosol
Deposition (Aerosol physics)
Global dimming
Nebulizer
Monoterpene
Stratospheric aerosol injection
External links
International Aerosol Research Assembly
American Association for Aerosol Research
NIOSH Manual of Analytical Methods (see chapters on aerosol sampling)
Colloidal chemistry
Colloids
Fluid dynamics
Liquids
Physical chemistry
Pollution
Solids | Aerosol | ["Physics", "Chemistry", "Materials_science", "Engineering"] | 5,026 | ["Colloidal chemistry", "Applied and interdisciplinary physics", "Chemical engineering", "Phases of matter", "Colloids", "Surface science", "Piping", "Aerosols", "Chemical mixtures", "Condensed matter physics", "nan", "Solids", "Fluid dynamics", "Physical chemistry", "Matter", "Liquids"...
57,875 | https://en.wikipedia.org/wiki/Soap | Soap is a salt of a fatty acid (sometimes other carboxylic acids) used for cleaning and lubricating products as well as other applications. In a domestic setting, soaps, specifically "toilet soaps", are surfactants usually used for washing, bathing, and other types of housekeeping. In industrial settings, soaps are used as thickeners, components of some lubricants, emulsifiers, and catalysts.
Soaps are often produced by mixing fats and oils with a base. Humans have used soap for millennia; evidence exists for the production of soap-like materials in ancient Babylon around 2800 BC.
Types
Toilet soaps
In a domestic setting, "soap" usually refers to what is technically called a toilet soap, used for household and personal cleaning. Toilet soaps are salts of fatty acids with the general formula (RCO2−)M+, where M is Na (sodium) or K (potassium).
When used for cleaning, soap solubilizes particles and grime, which can then be separated from the article being cleaned. The insoluble oil/fat "dirt" becomes associated inside micelles, tiny spheres formed from soap molecules with polar hydrophilic (water-attracting) groups on the outside encasing a lipophilic (fat-attracting) pocket, which shields the oil/fat molecules from the water and makes them soluble. Anything that is solubilized in this way will be washed away with the water. In hand washing, as a surfactant, when lathered with a little water, soap kills microorganisms by disorganizing their membrane lipid bilayer and denaturing their proteins. It also emulsifies oils, enabling them to be carried away by running water.
When used in hard water, soap does not lather well but forms soap scum (related to metallic soaps, see below).
Non-toilet soaps
So-called metallic soaps are key components of most lubricating greases and thickeners. A commercially important example is lithium stearate. Greases are usually emulsions of calcium soap or lithium soap and mineral oil. Many other metallic soaps are also useful, including those of aluminium, sodium, and mixtures thereof. Such soaps are also used as thickeners to increase the viscosity of oils. In ancient times, lubricating greases were made by the addition of lime to olive oil, which would produce calcium soaps. Metal soaps are also included in modern artists' oil paints formulations as a rheology modifier. Metal soaps can be prepared by neutralizing fatty acids with metal oxides:
2 RCO2H + CaO → (RCO2)2Ca + H2O
A cation from an organic base such as ammonium can be used instead of a metal; ammonium nonanoate is an ammonium-based soap that is used as an herbicide.
Another class of non-toilet soaps are resin soaps, which are produced in the paper industry by the action of tree rosin with alkaline reagents used to separate cellulose from raw wood. A major component of such soaps is the sodium salt of abietic acid. Resin soaps are used as emulsifiers.
Soapmaking
The production of toilet soaps usually entails saponification of triglycerides, which are vegetable or animal oils and fats. An alkaline solution (often lye or sodium hydroxide) induces saponification whereby the triglyceride fats first hydrolyze into salts of fatty acids. Glycerol (glycerin) is liberated. The glycerin is sometimes left in the soap product as a softening agent, although it is sometimes separated. Handmade soap can differ from industrially made soap in that an excess of fat or coconut oil beyond that needed to consume the alkali is used (in a cold-pour process, this excess fat is called "superfatting"), and the glycerol left in acts as a moisturizing agent. However, the glycerine also makes the soap softer. The addition of glycerol and processing of this soap produces glycerin soap. Superfatted soap is more skin-friendly than one without extra fat, although it can leave a "greasy" feel. Sometimes, an emollient is added, such as jojoba oil or shea butter. Sand or pumice may be added to produce a scouring soap. The scouring agents serve to remove dead cells from the skin surface being cleaned. This process is called exfoliation.
To make antibacterial soap, compounds such as triclosan or triclocarban can be added. There is some concern that use of antibacterial soaps and other products might encourage antimicrobial resistance in microorganisms.
The type of alkali metal used determines the kind of soap product. Sodium soaps, prepared from sodium hydroxide, are firm, whereas potassium soaps, derived from potassium hydroxide, are softer or often liquid. Historically, potassium hydroxide was extracted from the ashes of bracken or other plants. Lithium soaps also tend to be hard. These are used exclusively in greases.
For making toilet soaps, triglycerides (oils and fats) are derived from coconut, olive, or palm oils, as well as tallow. Triglyceride is the chemical name for the triesters of fatty acids and glycerin. Tallow, i.e., rendered fat, is the most available triglyceride from animals. Each species offers quite different fatty acid content, resulting in soaps of distinct feel. The seed oils give softer but milder soaps. Soap made from pure olive oil, sometimes called Castile soap or Marseille soap, is reputed for its particular mildness. The term "Castile" is also sometimes applied to soaps from a mixture of oils with a high percentage of olive oil.
History
Proto-soaps in the Ancient world
Proto-soaps, which mixed fat and alkali and were used for cleansing, are mentioned in Sumerian, Babylonian and Egyptian texts.
The earliest recorded evidence of the production of soap-like materials dates back to around 2800 BC in ancient Babylon. A formula for making a soap-like substance was written on a Sumerian clay tablet around 2500 BC. This was produced by heating a mixture of oil and wood ash, the earliest recorded chemical reaction, and used for washing woolen clothing.
The Ebers papyrus (Egypt, 1550 BC) indicates the ancient Egyptians used a soap-like product as a medicine and created this by combining animal fats or vegetable oils with a soda ash substance called trona. Egyptian documents mention a similar substance was used in the preparation of wool for weaving.
In the reign of Nabonidus (556–539 BC), a recipe for a soap-like substance consisted of uhulu [ashes], cypress [oil] and sesame [seed oil] "for washing the stones for the servant girls".
True soaps in the Ancient world
True soaps, which we might recognise as soaps today, were different to proto-soaps. They foamed, were made deliberately, and could be produced in a hard or soft form because of an understanding of lye sources. It is uncertain as to who was the first to invent true soap.
Knowledge of how to produce true soap emerged at some point between early mentions of proto-soaps and the first century AD. Alkali was used to clean textiles such as wool for thousands of years, but soap only forms when there is enough fat, and experiments show that washing wool does not create visible quantities of soap. Experiments by Sally Pointer show that the repeated laundering of materials used in perfume-making leads to noticeable amounts of soap forming. This fits with other evidence from Mesopotamian culture.
Pliny the Elder, whose writings chronicle life in the first century AD, describes soap as "an invention of the Gauls". The word sapo, Latin for soap, has been connected to a mythical Mount Sapo, a hill near the River Tiber where animals were sacrificed. But in all likelihood, the word was borrowed from an early Germanic language and is cognate with Latin sebum, "tallow". It first appears in Pliny the Elder's account, Historia Naturalis, which discusses the manufacture of soap from tallow and ashes. There he mentions its use in the treatment of scrofulous sores, as well as among the Gauls as a dye to redden hair, which the men in Germania were more likely to use than women. The Romans avoided washing with harsh soaps before encountering the milder soaps used by the Gauls around 58 BC. Aretaeus of Cappadocia, writing in the 2nd century AD, observes among "Celts, which are men called Gauls, those alkaline substances that are made into balls [...] called soap". The Romans' preferred method of cleaning the body was to massage oil into the skin and then scrape away both the oil and any dirt with a strigil. The standard design is a curved blade with a handle, all of which is made of metal.
The 2nd-century AD physician Galen describes soap-making using lye and prescribes washing to carry away impurities from the body and clothes. The use of soap for personal cleanliness became increasingly common in this period. According to Galen, the best soaps were Germanic, and soaps from Gaul were second best. Zosimos of Panopolis, circa 300 AD, describes soap and soapmaking.
In the Southern Levant, the ashes from barilla plants, such as species of Salsola, saltwort (Seidlitzia rosmarinus) and Anabasis, were used to make potash. Traditionally, olive oil was used instead of animal lard throughout the Levant, which was boiled in a copper cauldron for several days. As the boiling progresses, alkali ashes and smaller quantities of quicklime are added and constantly stirred. In the case of lard, it required constant stirring while kept lukewarm until it began to trace. Once it began to thicken, the brew was poured into a mold and left to cool and harden for two weeks. After hardening, it was cut into smaller cakes. Aromatic herbs were often added to the rendered soap to impart their fragrance, such as yarrow leaves, lavender, germander, etc.
Ancient China
A detergent similar to soap was manufactured in ancient China from the seeds of Gleditsia sinensis. Another traditional detergent is a mixture of pig pancreas and plant ash called zhuyizi. Soap made of animal fat did not appear in China until the modern era. Soap-like detergents were not as popular as ointments and creams.
Islamic Golden Age
Hard toilet soap with a pleasant smell was produced in the Middle East during the Islamic Golden Age, when soap-making became an established industry. Recipes for soap-making are described by Muhammad ibn Zakariya al-Razi (c. 865–925), who also gave a recipe for producing glycerine from olive oil. In the Middle East, soap was produced from the interaction of fatty oils and fats with alkali. In Syria, soap was produced using olive oil together with alkali and lime. Soap was exported from Syria to other parts of the Muslim world and to Europe.
A 12th-century document describes the process of soap production. It mentions the key ingredient, alkali (a word derived from the Arabic al-qaly, "ashes"), which later became crucial to modern chemistry.
By the 13th century, the manufacture of soap in the Middle East had become a major cottage industry, with sources in Nablus, Fes, Damascus, and Aleppo.
Medieval Europe
Soapmakers in Naples were members of a guild in the late sixth century (then under the control of the Eastern Roman Empire), and in the eighth century, soap-making was well known in Italy and Spain. The Carolingian capitulary De Villis, dating to around 800, representing the royal will of Charlemagne, mentions soap as being one of the products the stewards of royal estates are to tally. Medieval Spain was a leading producer of soap by 800, and soapmaking began in the Kingdom of England about 1200. Soapmaking is mentioned both as "women's work" and as the produce of "good workmen" alongside other necessities, such as the produce of carpenters, blacksmiths, and bakers.
In Europe, soap in the 9th century was produced from animal fats and had an unpleasant smell. This changed when olive oil began to be used in soap formulas instead, after which much of Europe's soap production moved to the Mediterranean olive-growing regions. Hard toilet soap was introduced to Europe by Arabs and gradually spread as a luxury item. It was often perfumed.
By the 15th century, the manufacture of soap in Christendom often took place on an industrial scale, with sources in Antwerp, Castile, Marseille, Naples and Venice.
16th–17th century
In France, by the second half of the 16th century, the semi-industrialized professional manufacture of soap was concentrated in a few centers of Provence—Toulon, Hyères, and Marseille—which supplied the rest of France. In Marseilles, by 1525, production was concentrated in at least two factories, and soap production at Marseille tended to eclipse the other Provençal centers.
English manufacture tended to concentrate in London. The demand for high-quality hard soap was significant enough during the Tudor period that barrels of ashes were imported for the manufacture of soap.
Finer soaps were later produced in Europe from the 17th century, using vegetable oils (such as olive oil) as opposed to animal fats. Many of these soaps are still produced, both industrially and by small-scale artisans. Castile soap is a popular example of the vegetable-only soaps derived from the oldest "white soap" of Italy. In 1634 Charles I granted the newly formed Society of Soapmakers a monopoly in soap production. The society produced certificates from 'foure Countesses, and five Viscountesses, and divers other Ladies and Gentlewomen of great credite and quality, besides common Laundresses and others', testifying that 'the New White Soap washeth whiter and sweeter than the Old Soap'.
During the Restoration era (February 1665 – August 1714) a soap tax was introduced in England, which meant that until the mid-1800s, soap was a luxury, used regularly only by the well-to-do. The soap manufacturing process was closely supervised by revenue officials who made sure that soapmakers' equipment was kept under lock and key when not being supervised. Moreover, soap could not be produced by small makers because of a law that stipulated that soap boilers must manufacture a minimum quantity of one imperial ton at each boiling, which placed the process beyond the reach of the average person. The soap trade was boosted and deregulated when the tax was repealed in 1853.
Modern period
Industrially manufactured bar soaps became available in the late 18th century, as advertising campaigns in Europe and America promoted popular awareness of the relationship between cleanliness and health. In modern times, the use of soap has become commonplace in industrialized nations due to a better understanding of the role of hygiene in reducing the population size of pathogenic microorganisms.
Until the Industrial Revolution, soapmaking was conducted on a small scale and the product was rough. In 1780, James Keir established a chemical works at Tipton for the manufacture of alkali from the sulfates of potash and soda, to which he afterwards added a soap manufactory; the extraction method was based on one of Keir's own discoveries. In 1790, Nicolas Leblanc discovered how to make alkali from common salt. Andrew Pears started making a high-quality, transparent soap, Pears soap, in 1807 in London. His son-in-law, Thomas J. Barratt, became the brand manager (the first of its kind) for Pears in 1865. In 1882, Barratt recruited English actress and socialite Lillie Langtry to become the poster-girl for Pears soap, making her the first celebrity to endorse a commercial product.
William Gossage produced low-priced, good-quality soap from the 1850s. Robert Spear Hudson began manufacturing a soap powder in 1837, initially by grinding the soap with a mortar and pestle. American manufacturer Benjamin T. Babbitt introduced marketing innovations that included the sale of bar soap and distribution of product samples. William Hesketh Lever and his brother, James, bought a small soap works in Warrington in 1886 and founded what is still one of the largest soap businesses, formerly called Lever Brothers and now called Unilever. These soap businesses were among the first to employ large-scale advertising campaigns.
Liquid soap
Liquid soap was invented in the nineteenth century; in 1865, William Sheppard patented a liquid version of soap. In 1898, B.J. Johnson developed a soap derived from palm and olive oils; his company, the B.J. Johnson Soap Company, introduced "Palmolive" brand soap that same year. This new brand of soap became popular rapidly, and to such a degree that B.J. Johnson Soap Company changed its name to Palmolive.
In the early 1900s, other companies began to develop their own liquid soaps. Such products as Pine-Sol and Tide appeared on the market, making the process of cleaning things other than skin, such as clothing, floors, and bathrooms, much easier.
Liquid soap also works better for more traditional or non-machine washing methods, such as using a washboard.
See also
Soap-related
Antibiotic misuse
Dishwashing soap
Foam
List of cleaning products
Hand washing
Palm oil
Soap bubble
Soap dish
Soap dispenser
Soap plant
Soap substitute
Soapwort
Shampoo
Shower gel
Toothpaste
Soap made from human corpses
Further reading
Donkor, Peter (1986). Small-Scale Soapmaking: A Handbook. Ebook online at SlideShare.
Dunn, Kevin M. (2010). Scientific Soapmaking: The Chemistry of Cold Process. Clavicula Press.
Garzena, Patrizia, and Marina Tadiello (2004). Soap Naturally: Ingredients, methods and recipes for natural handmade soap. Online information and Table of Contents.
Garzena, Patrizia, and Marina Tadiello (2013). The Natural Soapmaking Handbook. Online information and Table of Contents.
Mohr, Merilyn (1979). The Art of Soap Making. A Harrowsmith Contemporary Primer. Firefly Books.
Spencer, Bob; Practical Action (2005). Soapmaking. Ebook online.
Thomssen, E. G., Ph.D. (1922). Soap-Making Manual. Free ebook at Project Gutenberg.
External links
History of Soap making – SoapHistory
Anionic surfactants
Cleaning products
Salts
Skin care
Bathing
Articles containing video clips | Soap | ["Chemistry"] | 3,943 | ["Products of chemical industry", "Cleaning products", "Salts"] |
57,877 | https://en.wikipedia.org/wiki/Sodium%20hydroxide | Sodium hydroxide, also known as lye and caustic soda, is an inorganic compound with the formula NaOH. It is a white solid ionic compound consisting of sodium cations (Na+) and hydroxide anions (OH−).
Sodium hydroxide is a highly corrosive base and alkali that decomposes lipids and proteins at ambient temperatures and may cause severe chemical burns. It is highly soluble in water, and readily absorbs moisture and carbon dioxide from the air. It forms a series of hydrates NaOH·nH2O. The monohydrate NaOH·H2O crystallizes from water solutions between 12.3 and 61.8 °C. The commercially available "sodium hydroxide" is often this monohydrate, and published data may refer to it instead of the anhydrous compound.
As one of the simplest hydroxides, sodium hydroxide is frequently used alongside neutral water and acidic hydrochloric acid to demonstrate the pH scale to chemistry students.
Sodium hydroxide is used in many industries: in the making of wood pulp and paper, textiles, drinking water, soaps and detergents, and as a drain cleaner. Worldwide production in 2022 was approximately 83 million tons.
Properties
Physical properties
Pure sodium hydroxide is a colorless crystalline solid that melts at 318 °C without decomposition and boils at 1,388 °C. It is highly soluble in water, with a lower solubility in polar solvents such as ethanol and methanol. Sodium hydroxide is insoluble in ether and other non-polar solvents.
Similar to the hydration of sulfuric acid, dissolution of solid sodium hydroxide in water is a highly exothermic reaction where a large amount of heat is liberated, posing a threat to safety through the possibility of splashing. The resulting solution is usually colorless and odorless. As with other alkaline solutions, it feels slippery with skin contact due to the process of saponification that occurs between NaOH and natural skin oils.
Viscosity
Concentrated (50%) aqueous solutions of sodium hydroxide have a characteristic viscosity, 78 mPa·s, that is much greater than that of water (1.0 mPa·s) and near that of olive oil (85 mPa·s) at room temperature. The viscosity of aqueous NaOH, as with any liquid chemical, is inversely related to its temperature, i.e., its viscosity decreases as temperature increases, and vice versa. The viscosity of sodium hydroxide solutions plays a direct role in its application as well as its storage.
Hydrates
Sodium hydroxide can form several hydrates NaOH·nH2O, which result in a complex solubility diagram that was described in detail by Spencer Umfreville Pickering in 1893. The known hydrates and the approximate ranges of temperature and concentration (mass percent of NaOH) of their saturated water solutions are:
Heptahydrate, NaOH·7H2O: from −28 °C (18.8%) to −24 °C (22.2%).
Pentahydrate, NaOH·5H2O: from −24 °C (22.2%) to −17.7 °C (24.8%).
Tetrahydrate, NaOH·4H2O, α form: from −17.7 °C (24.8%) to 5.4 °C (32.5%).
Tetrahydrate, NaOH·4H2O, β form: metastable.
Trihemihydrate, NaOH·3.5H2O: from 5.4 °C (32.5%) to 15.38 °C (38.8%) and then to 5.0 °C (45.7%).
Trihydrate, NaOH·3H2O: metastable.
Dihydrate, NaOH·2H2O: from 5.0 °C (45.7%) to 12.3 °C (51%).
Monohydrate, NaOH·H2O: from 12.3 °C (51%) to 65.10 °C (69%) then to 62.63 °C (73.1%).
Early reports refer to hydrates with n = 0.5 or n = 2/3, but later careful investigations failed to confirm their existence.
The only hydrates with stable melting points are the monohydrate NaOH·H2O (65.10 °C) and the trihemihydrate NaOH·3.5H2O (15.38 °C). The other hydrates, except the metastable ones NaOH·3H2O and NaOH·4H2O (β), can be crystallized from solutions of the proper composition, as listed above. However, solutions of NaOH can be easily supercooled by many degrees, which allows the formation of hydrates (including the metastable ones) from solutions with different concentrations.
For example, when a solution of NaOH and water with 1:2 mole ratio (52.6% NaOH by mass) is cooled, the monohydrate normally starts to crystallize (at about 22 °C) before the dihydrate. However, the solution can easily be supercooled down to −15 °C, at which point it may quickly crystallize as the dihydrate. When heated, the solid dihydrate might melt directly into a solution at 13.35 °C; however, once the temperature exceeds 12.58 °C it often decomposes into solid monohydrate and a liquid solution. Even the n = 3.5 hydrate is difficult to crystallize, because the solution supercools so much that other hydrates become more stable.
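The quoted concentration can be checked from the molar masses (NaOH ≈ 40.0 g/mol, H2O ≈ 18.0 g/mol): a 1:2 NaOH:H2O mole ratio corresponds to 40.0/(40.0 + 2 × 18.0) = 40.0/76.0 ≈ 52.6% NaOH by mass, matching the figure above.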
A hot water solution containing 73.1% (mass) of NaOH is a eutectic that solidifies at about 62.63 °C as an intimate mix of anhydrous NaOH and monohydrate crystals.
A second stable eutectic composition is 45.4% (mass) of NaOH, that solidifies at about 4.9 °C into a mixture of crystals of the dihydrate and of the 3.5-hydrate.
The third stable eutectic has 18.4% (mass) of NaOH. It solidifies at about −28.7 °C as a mixture of water ice and the heptahydrate NaOH·7H2O.
When solutions with less than 18.4% NaOH are cooled, water ice crystallizes first, leaving the NaOH in solution.
The α form of the tetrahydrate has density 1.33 g/cm3. It melts congruently at 7.55 °C into a liquid with 35.7% NaOH and density 1.392 g/cm3, and therefore floats on it like ice on water. However, at about 4.9 °C it may instead melt incongruently into a mixture of solid NaOH·3.5H2O and a liquid solution.
The β form of the tetrahydrate is metastable, and often transforms spontaneously to the α form when cooled below −20 °C. Once initiated, the exothermic transformation is complete in a few minutes, with a 6.5% increase in volume of the solid. The β form can be crystallized from supercooled solutions at −26 °C, and melts partially at −1.83 °C.
The "sodium hydroxide" of commerce is often the monohydrate (density 1.829 g/cm3). Physical data in technical literature may refer to this form, rather than the anhydrous compound.
Crystal structure
NaOH and its monohydrate form orthorhombic crystals with the space groups Cmcm (oS8) and Pbca (oP24), respectively. The monohydrate cell dimensions are a = 1.1825, b = 0.6213, c = 0.6069 nm. The atoms are arranged in a hydrargillite-like layer structure, with each sodium atom surrounded by six oxygen atoms, three each from hydroxide ions and three from water molecules. The hydrogen atoms of the hydroxyls form strong bonds with oxygen atoms within each O layer. Adjacent O layers are held together by hydrogen bonds between water molecules.
Chemical properties
Reaction with acids
Sodium hydroxide reacts with protic acids to produce water and the corresponding salts. For example, when sodium hydroxide reacts with hydrochloric acid, sodium chloride is formed:
NaOH(aq) + HCl(aq) → NaCl(aq) + H2O(l)
In general, such neutralization reactions are represented by one simple net ionic equation:
OH−(aq) + H+(aq) → H2O(l)
This type of reaction with a strong acid releases heat, and hence is exothermic. Such acid–base reactions can also be used for titrations. However, sodium hydroxide is not used as a primary standard because it is hygroscopic and absorbs carbon dioxide from air.
Reaction with acidic oxides
Sodium hydroxide also reacts with acidic oxides, such as sulfur dioxide. Such reactions are often used to "scrub" harmful acidic gases (such as SO2 and CO2) produced in the burning of coal and thus prevent their release into the atmosphere. For example:
2 NaOH + SO2 → Na2SO3 + H2O
Reaction with metals and oxides
Glass reacts slowly with aqueous sodium hydroxide solutions at ambient temperatures to form soluble silicates. Because of this, glass joints and stopcocks exposed to sodium hydroxide have a tendency to "freeze". Flasks and glass-lined chemical reactors are damaged by long exposure to hot sodium hydroxide, which also frosts the glass. Sodium hydroxide does not attack iron at room temperature, since iron does not have amphoteric properties (i.e., it only dissolves in acid, not base).
Nevertheless, at high temperatures (e.g. above 500 °C), iron can react endothermically with sodium hydroxide to form iron(III) oxide, sodium metal, and hydrogen gas. This is due to the lower enthalpy of formation of iron(III) oxide (−824.2 kJ/mol) compared to sodium hydroxide (−500 kJ/mol) and the positive entropy change of the reaction, which implies spontaneity at high temperatures (TΔS > ΔH, so ΔG < 0) and non-spontaneity at low temperatures (TΔS < ΔH, so ΔG > 0). Consider the following reaction between molten sodium hydroxide and finely divided iron filings:
4 Fe + 6 NaOH → 2 Fe2O3 + 6 Na + 3 H2
A few transition metals, however, may react quite vigorously with sodium hydroxide under milder conditions.
In 1986, an aluminium road tanker in the UK was mistakenly used to transport 25% sodium hydroxide solution, causing pressurization of the contents and damage to the tanker. The pressurization is due to the hydrogen gas which is produced in the reaction between sodium hydroxide and aluminium:
2 Al + 2 NaOH + 2 H2O → 2 NaAlO2 + 3 H2
Precipitant
Unlike sodium hydroxide, which is soluble, the hydroxides of most transition metals are insoluble, and therefore sodium hydroxide can be used to precipitate transition metal hydroxides. The following colours are observed:
Copper - blue
Iron(II) - green
Iron(III) - yellow / brown
Zinc and lead salts dissolve in excess sodium hydroxide to give a clear solution of sodium zincate or sodium plumbate.
Aluminium hydroxide is used as a gelatinous flocculant to filter out particulate matter in water treatment. Aluminium hydroxide is prepared at the treatment plant from aluminium sulfate by reacting it with sodium hydroxide or bicarbonate.
Saponification
Sodium hydroxide can be used for the base-driven hydrolysis of esters (also called saponification), amides and alkyl halides. However, the limited solubility of sodium hydroxide in organic solvents means that the more soluble potassium hydroxide (KOH) is often preferred. Touching a sodium hydroxide solution with bare hands, while not recommended, produces a slippery feeling. This happens because oils on the skin such as sebum are converted to soap.
Although sodium hydroxide dissolves in propylene glycol, propylene glycol is unlikely to replace water in saponification because it reacts with the fat before the sodium hydroxide does.
Production
Sodium hydroxide is industrially produced by variations of the electrolytic chloralkali process, first as a 32% solution that is then evaporated to a 50% solution; chlorine gas is also produced in the process. Solid sodium hydroxide is obtained from this solution by the evaporation of water, and is most commonly sold as flakes, prills, and cast blocks.
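The overall chloralkali electrolysis of brine can be summarised as:
2 NaCl + 2 H2O → 2 NaOH + Cl2 + H2
with sodium hydroxide forming on the cathode side and chlorine at the anode.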
In 2022, world production was estimated at 83 million dry tonnes of sodium hydroxide, and demand was estimated at 51 million tonnes. In 1998, total world production was around 45 million tonnes. North America and Asia each contributed around 14 million tonnes, while Europe produced around 10 million tonnes. In the United States, the major producer of sodium hydroxide is Olin, which has annual production around 5.7 million tonnes from sites at Freeport, Texas; Plaquemine, Louisiana; St. Gabriel, Louisiana; McIntosh, Alabama; Charleston, Tennessee; Niagara Falls, New York; and Bécancour, Canada. Other major US producers include Oxychem, Westlake, Shintech, and Formosa. All of these companies use the chloralkali process.
Historically, sodium hydroxide was produced by treating sodium carbonate with calcium hydroxide (slaked lime) in a metathesis reaction which takes advantage of the fact that sodium hydroxide is soluble, while calcium carbonate is not. This process was called causticizing.
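The causticizing reaction is:
Na2CO3 + Ca(OH)2 → 2 NaOH + CaCO3
The insoluble calcium carbonate settles out, leaving sodium hydroxide in solution.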
The sodium carbonate for this reaction was produced by the Leblanc process in the early 19th century, or the Solvay process in the late 19th century. The conversion of sodium carbonate to sodium hydroxide was superseded entirely by the chloralkali process, which produces sodium hydroxide in a single process.
Sodium hydroxide is also produced by combining pure sodium metal with water. The byproducts are hydrogen gas and heat, often resulting in a flame.
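The reaction is:
2 Na + 2 H2O → 2 NaOH + H2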
This reaction is commonly used for demonstrating the reactivity of alkali metals in academic environments; however, it is not used commercially aside from a reaction within the mercury cell chloralkali process where sodium amalgam is reacted with water.
Uses
Sodium hydroxide is a popular strong base used in industry. Sodium hydroxide is used in the manufacture of sodium salts and detergents, pH regulation, and organic synthesis. In bulk, it is most often handled as an aqueous solution, since solutions are cheaper and easier to handle.
Sodium hydroxide is used in many scenarios where it is desirable to increase the alkalinity of a mixture, or to neutralize acids. For example, in the petroleum industry, sodium hydroxide is used as an additive in drilling mud to increase alkalinity in bentonite mud systems, to increase the mud viscosity, and to neutralize any acid gas (such as hydrogen sulfide and carbon dioxide) which may be encountered in the geological formation as drilling progresses. Another use is in salt spray testing where pH needs to be regulated. Sodium hydroxide is used with hydrochloric acid to balance pH. The resultant salt, NaCl, is the corrosive agent used in the standard neutral pH salt spray test.
Poor quality crude oil can be treated with sodium hydroxide to remove sulfurous impurities in a process known as caustic washing. Sodium hydroxide reacts with weak acids such as hydrogen sulfide and mercaptans to yield non-volatile sodium salts, which can be removed. The waste which is formed is toxic and difficult to deal with, and the process is banned in many countries because of this. In 2006, Trafigura used the process and then dumped the waste in Ivory Coast.
Other common uses of sodium hydroxide include:
for making soaps and detergents. Sodium hydroxide is used for hard bar soap, while potassium hydroxide is used for liquid soaps. Sodium hydroxide is used more often than potassium hydroxide because it is cheaper and a smaller quantity is needed.
as drain cleaners that convert pipe-clogging fats and grease into soap, which dissolves in water
for making artificial textile fibres such as rayon
in the manufacture of paper. Around 56% of sodium hydroxide produced is used by industry, 25% of which is used in the paper industry.
in purifying bauxite ore from which aluminium metal is extracted. This is known as the Bayer process.
de-greasing metals
oil refining
making dyes and bleaches
in water treatment plants for pH regulation
to treat bagels and pretzel dough, giving the distinctive shiny finish
Chemical pulping
Sodium hydroxide is also widely used in pulping of wood for making paper or regenerated fibers. Along with sodium sulfide, sodium hydroxide is a key component of the white liquor solution used to separate lignin from cellulose fibers in the kraft process. It also plays a key role in several later stages of the process of bleaching the brown pulp resulting from the pulping process. These stages include oxygen delignification, oxidative extraction, and simple extraction, all of which require a strong alkaline environment with a pH > 10.5 at the end of the stages.
Tissue digestion
In a similar fashion, sodium hydroxide is used to digest tissues, as in a process that was used with farm animals at one time. This process involved placing a carcass into a sealed chamber, then adding a mixture of sodium hydroxide and water (which breaks the chemical bonds that keep the flesh intact). This eventually turns the body into a liquid with a dark brown color, and the only solids that remain are bone hulls, which can be crushed between one's fingertips.
Sodium hydroxide is frequently used in the process of decomposing roadkill dumped in landfills by animal disposal contractors. Due to its availability and low cost, it has been used by criminals to dispose of corpses. Italian serial killer Leonarda Cianciulli used this chemical to turn dead bodies into soap. In Mexico, a man who worked for drug cartels admitted disposing of over 300 bodies with it.
Sodium hydroxide is a dangerous chemical due to its ability to hydrolyze protein. If a dilute solution is spilled on the skin, burns may result if the area is not washed thoroughly and for several minutes with running water. Splashes in the eye can be more serious and can lead to blindness.
Dissolving amphoteric metals and compounds
Strong bases attack aluminium. Sodium hydroxide reacts with aluminium and water to release hydrogen gas. The aluminium takes an oxygen atom from sodium hydroxide, which in turn takes an oxygen atom from water, and releases two hydrogen atoms. The reaction thus produces hydrogen gas and sodium aluminate. In this reaction, sodium hydroxide acts as an agent to make the solution alkaline, which aluminium can dissolve in.
2 Al + 2 NaOH + 2 H2O → 2 NaAlO2 + 3 H2
Sodium aluminate is an inorganic chemical that is used as an effective source of aluminium hydroxide for many industrial and technical applications. Pure sodium aluminate (anhydrous) is a white crystalline solid having a formula variously given as NaAlO2, Na2O·Al2O3, or Na2Al2O4. Formation of sodium tetrahydroxoaluminate(III) or hydrated sodium aluminate is given by:
2 Al + 2 NaOH + 6 H2O → 2 NaAl(OH)4 + 3 H2
This reaction can be useful in etching, removing anodizing, or converting a polished surface to a satin-like finish, but without further passivation such as anodizing or alodining the surface may become degraded, either under normal use or in severe atmospheric conditions.
In the Bayer process, sodium hydroxide is used in the refining of alumina-containing ores (bauxite) to produce alumina (aluminium oxide), the raw material used to produce aluminium metal via the electrolytic Hall–Héroult process. Since alumina is amphoteric, it dissolves in the sodium hydroxide, leaving behind impurities that are less soluble at high pH, such as iron oxides, in the form of a highly alkaline red mud.
Other amphoteric metals are zinc and lead which dissolve in concentrated sodium hydroxide solutions to give sodium zincate and sodium plumbate respectively.
Esterification and transesterification reagent
Sodium hydroxide is traditionally used in soap making (cold process soap, saponification). In the nineteenth century it was used to make hard soap, rather than a liquid product, because hard soap was easier to store and transport.
For the manufacture of biodiesel, sodium hydroxide is used as a catalyst for the transesterification of methanol and triglycerides. This only works with anhydrous sodium hydroxide, because combined with water the fat would turn into soap, which would be tainted with methanol. NaOH is used more often than potassium hydroxide because it is cheaper and a smaller quantity is needed: NaOH, which is produced from common salt, costs less to make than potassium hydroxide.
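Schematically, the base-catalysed transesterification converts one triglyceride and three equivalents of methanol into biodiesel and glycerol:
triglyceride + 3 CH3OH → 3 RCOOCH3 (fatty acid methyl esters, i.e. biodiesel) + glycerol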
Skincare ingredient
Sodium hydroxide is an ingredient used in some skincare and cosmetic products, such as facial cleansers, creams, lotions, and makeup. It is typically used in low concentration as a pH balancer, due to its highly alkaline nature.
Food preparation
Food uses of sodium hydroxide include washing or chemical peeling of fruits and vegetables, chocolate and cocoa processing, caramel coloring production, poultry scalding, soft drink processing, and thickening ice cream. Olives are often soaked in sodium hydroxide for softening; pretzels and German lye rolls are glazed with a sodium hydroxide solution before baking to make them crisp. Owing to the difficulty in obtaining food grade sodium hydroxide in small quantities for home use, sodium carbonate is often used in place of sodium hydroxide. It is known as E number E524.
Specific foods processed with sodium hydroxide include:
German pretzels are poached in a boiling sodium carbonate solution or cold sodium hydroxide solution before baking, which contributes to their unique crust.
Lye water is an essential ingredient in the crust of the traditional baked Chinese moon cakes.
Most yellow coloured Chinese noodles are made with lye water but are commonly mistaken for containing egg.
One variety of zongzi uses lye water to impart a sweet flavor.
Sodium hydroxide causes gelling of egg whites in the production of century eggs.
Some methods of preparing olives involve subjecting them to a lye-based brine.
One Filipino dessert, a steamed rice-flour cake, uses a small quantity of lye water to help give the batter a jelly-like consistency. A similar process is used in another kakanin that uses grated cassava instead of rice flour.
The Norwegian dish known as lutefisk.
Bagels are often boiled in a lye solution before baking, contributing to their shiny crust.
Hominy is dried maize (corn) kernels reconstituted by soaking in lye-water. These expand considerably in size and may be further processed by frying to make corn nuts or by drying and grinding to make grits. Hominy is used to create masa, a popular flour used in Mexican cuisine to make corn tortillas and tamales. Nixtamal is similar, but uses calcium hydroxide instead of sodium hydroxide.
Cleaning agent
Sodium hydroxide is frequently used as an industrial cleaning agent where it is often called "caustic". It is added to water, heated, and then used to clean process equipment, storage tanks, etc. It can dissolve grease, oils, fats and protein-based deposits. It is also used for cleaning waste discharge pipes under sinks and drains in domestic properties. Surfactants can be added to the sodium hydroxide solution in order to stabilize dissolved substances and thus prevent redeposition. A sodium hydroxide soak solution is used as a powerful degreaser on stainless steel and glass bakeware. It is also a common ingredient in oven cleaners.
A common use of sodium hydroxide is in the production of parts washer detergents. Parts washer detergents based on sodium hydroxide are some of the most aggressive parts washer cleaning chemicals. The sodium hydroxide-based detergents include surfactants, rust inhibitors and defoamers. A parts washer heats water and the detergent in a closed cabinet and then sprays the heated sodium hydroxide and hot water at pressure against dirty parts for degreasing applications. Sodium hydroxide used in this manner replaced many solvent-based systems in the early 1990s when trichloroethane was outlawed by the Montreal Protocol. Water and sodium hydroxide detergent-based parts washers are considered to be an environmental improvement over the solvent-based cleaning methods.
Sodium hydroxide is used in the home as a type of drain opener to unblock clogged drains, usually in the form of a dry crystal or as a thick liquid gel. The alkali dissolves greases to produce water-soluble products. It also hydrolyzes proteins, such as those found in hair, which may block water pipes. These reactions are sped up by the heat generated when sodium hydroxide and the other chemical components of the cleaner dissolve in water. Such alkaline drain cleaners and their acidic versions are highly corrosive and should be handled with great caution.
Relaxer
Sodium hydroxide is used in some relaxers to straighten hair. However, because of the high incidence and intensity of chemical burns, manufacturers of chemical relaxers use other alkaline chemicals in preparations available to consumers. Sodium hydroxide relaxers are still available, but they are used mostly by professionals.
Paint stripper
A solution of sodium hydroxide in water was traditionally used as the most common paint stripper on wooden objects. Its use has become less common, because it can damage the wood surface, raising the grain and staining the colour.
Water treatment
Sodium hydroxide is sometimes used during water purification to raise the pH of water supplies. Increased pH makes the water less corrosive to plumbing and reduces the amount of lead, copper and other toxic metals that can dissolve into drinking water.
Historical uses
Sodium hydroxide has been used for detection of carbon monoxide poisoning, with blood samples of such patients turning to a vermilion color upon the addition of a few drops of sodium hydroxide. Today, carbon monoxide poisoning can be detected by CO oximetry.
In cement mixes, mortars, concrete, grouts
Sodium hydroxide is used in some cement mix plasticisers. This helps homogenise cement mixes, preventing segregation of sands and cement, decreases the amount of water required in a mix and increases workability of the cement product, be it mortar, render or concrete.
Safety
Like other corrosive acids and alkalis, a few drops of sodium hydroxide solution can readily decompose proteins and lipids in living tissues via amide hydrolysis and ester hydrolysis, which consequently cause chemical burns and may induce permanent blindness upon contact with eyes. Solid alkali can also express its corrosive nature if there is water, such as water vapor. Thus, protective equipment, like rubber gloves, safety clothing and eye protection, should always be used when handling this chemical or its solutions. The standard first aid measure for alkali spills on the skin is, as for other corrosives, irrigation with large quantities of water. Washing is continued for at least ten to fifteen minutes.
Moreover, dissolution of sodium hydroxide is highly exothermic, and the resulting heat may cause heat burns or ignite flammables. It also produces heat when reacted with acids.
Sodium hydroxide is mildly corrosive to glass, which can cause damage to glazing or cause ground glass joints to bind. Sodium hydroxide is corrosive to several metals, like aluminium which reacts with the alkali to produce flammable hydrogen gas on contact.
Storage
Careful storage is needed when handling sodium hydroxide for use, especially bulk volumes. Following proper NaOH storage guidelines and maintaining worker/environment safety is always recommended given the chemical's burn hazard.
Sodium hydroxide is often stored in bottles for small-scale laboratory use, within intermediate bulk containers (medium volume containers) for cargo handling and transport, or within large stationary storage tanks with volumes up to 100,000 gallons for manufacturing or waste water plants with extensive NaOH use. Common materials that are compatible with sodium hydroxide and often utilized for NaOH storage include: polyethylene (HDPE, usual, XLPE, less common), carbon steel, polyvinyl chloride (PVC), stainless steel, and fiberglass reinforced plastic (FRP, with a resistant liner).
Sodium hydroxide must be stored in airtight containers to preserve its normality as it will absorb water and carbon dioxide from the atmosphere.
History
Sodium hydroxide was first prepared by soap makers. A procedure for making sodium hydroxide appeared as part of a recipe for making soap in an Arab book of the late 13th century, Inventions from the Various Industrial Arts, which was compiled by al-Muzaffar Yusuf ibn 'Umar ibn 'Ali ibn Rasul (d. 1295), a king of Yemen. The recipe called for passing water repeatedly through a mixture of alkali (Arabic al-qaly, ash from saltwort plants, which are rich in sodium; hence alkali was impure sodium carbonate) and quicklime (calcium oxide, CaO), whereby a solution of sodium hydroxide was obtained. European soap makers also followed this recipe. When in 1791 the French chemist and surgeon Nicolas Leblanc (1742–1806) patented a process for mass-producing sodium carbonate, natural "soda ash" (impure sodium carbonate obtained from the ashes of sodium-rich plants) was replaced by this artificial version. However, by the 20th century, the electrolysis of sodium chloride had become the primary method for producing sodium hydroxide.
See also
Acid and base
HAZMAT Class 8 Corrosive Substances
List of cleaning agents
External links
International Chemical Safety Card 0360
Euro Chlor-How is chlorine made? Chlorine Online
NIOSH Pocket Guide to Chemical Hazards
CDC – Sodium Hydroxide – NIOSH Workplace Safety and Health Topic
Production by brine electrolysis
Data sheets
Technical charts (pages 33–41) for enthalpy, temperature and pressure
Sodium Hydroxide MSDS
Certified Lye MSDS
Hill Brothers MSDS
Titration of acids with sodium hydroxide; freeware for data analysis, simulation of curves and pH calculation
Caustic soda production in continuous causticising plant by lime soda process
Chemical engineering
Cleaning products
Deliquescent materials
Desiccants
Household chemicals
Hydroxides
Inorganic compounds
Photographic chemicals
Sodium compounds
E-number additives
Food acidity regulators | Sodium hydroxide | ["Physics", "Chemistry", "Engineering"] | 6,213 | ["Inorganic compounds", "Products of chemical industry", "Chemical engineering", "Hydroxides", "Cleaning products", "Desiccants", "Materials", "nan", "Deliquescent materials", "Bases (chemistry)", "Matter"] |
57,880 | https://en.wikipedia.org/wiki/In%20vitro%20fertilisation | In vitro fertilisation (IVF) is a process of fertilisation in which an egg is combined with sperm in vitro ("in glass"). The process involves monitoring and stimulating a woman's ovulatory process, then removing an ovum or ova (egg or eggs) from her ovaries and enabling a man's sperm to fertilise them in a culture medium in a laboratory. After a fertilised egg (zygote) undergoes embryo culture for 2–6 days, it is transferred by catheter into the uterus, with the intention of establishing a successful pregnancy.
IVF is a type of assisted reproductive technology used to treat infertility, enable gestational surrogacy, and, in combination with pre-implantation genetic testing, avoid the transmission of abnormal genetic conditions. When a fertilised egg from egg and sperm donors implants in the uterus of a genetically unrelated surrogate, the resulting child is also genetically unrelated to the surrogate. Some countries have banned or otherwise regulated the availability of IVF treatment, giving rise to fertility tourism. Financial cost and age may also restrict the availability of IVF as a means of carrying a healthy pregnancy to term.
In July 1978, Louise Brown was the first child successfully born after her mother received IVF treatment. Brown was born as a result of natural-cycle IVF, where no stimulation was made. The procedure took place at Dr Kershaw's Cottage Hospital (later Dr Kershaw's Hospice) in Royton, Oldham, England. Robert Edwards was awarded the Nobel Prize in Physiology or Medicine in 2010. (The physiologist co-developed the treatment together with Patrick Steptoe and embryologist Jean Purdy but the latter two were not eligible for consideration as they had died: the Nobel Prize is not awarded posthumously.)
When assisted by egg donation and IVF, many women who have reached menopause, have infertile partners, or have idiopathic female-fertility issues, can still become pregnant. After the IVF treatment, some couples get pregnant without any fertility treatments. In 2023, it was estimated that twelve million children had been born worldwide using IVF and other assisted reproduction techniques. A 2019 study that evaluated the use of 10 adjuncts with IVF (screening hysteroscopy, DHEA, testosterone, GH, aspirin, heparin, antioxidants, seminal plasma and PRP) suggested that (with the exception of hysteroscopy) these adjuncts should be avoided until there is more evidence to show that they are safe and effective.
Terminology
The Latin term in vitro, meaning "in glass", is used because early biological experiments involving cultivation of tissues outside the living organism were carried out in glass containers, such as beakers, test tubes, or Petri dishes. The modern scientific term "in vitro" refers to any biological procedure that is performed outside the organism in which it would normally have occurred, to distinguish it from an in vivo procedure (such as in vivo fertilisation), where the tissue remains inside the living organism in which it is normally found.
A colloquial term for babies conceived as the result of IVF, "test tube babies", refers to the tube-shaped containers of glass or plastic resin, called test tubes, that are commonly used in chemistry and biology labs. However, IVF is usually performed in Petri dishes, which are both wider and shallower and often used to cultivate cultures.
IVF is a form of assisted reproductive technology.
History
The first successful birth of a child after IVF treatment, Louise Brown, occurred in 1978. Louise Brown was born as a result of natural cycle IVF where no stimulation was made. The procedure took place at Dr Kershaw's Cottage Hospital (now Dr Kershaw's Hospice) in Royton, Oldham, England. Robert G. Edwards, the physiologist who co-developed the treatment, was awarded the Nobel Prize in Physiology or Medicine in 2010. His co-workers, Patrick Steptoe and Jean Purdy, were not eligible for consideration as the Nobel Prize is not awarded posthumously.
The second successful birth of a 'test tube baby' occurred in India on October 3, 1978, just 67 days after Louise Brown was born. The girl, named Durga, was conceived in vitro using a method developed independently by Subhash Mukhopadhyay, a physician and researcher from Hazaribag. Mukhopadhyay had been performing experiments on his own with primitive instruments and a household refrigerator. However, state authorities prevented him from presenting his work at scientific conferences, and it was many years before Mukhopadhyay's contribution was acknowledged in works dealing with the subject.
Adriana Iliescu held the record as the oldest woman to give birth using IVF and a donor egg, when she gave birth in 2004 at the age of 66, a record passed in 2006. After the IVF treatment some couples are able to get pregnant without any fertility treatments. In 2018 it was estimated that eight million children had been born worldwide using IVF and other assisted reproduction techniques.
Medical uses
Indications
IVF may be used to overcome female infertility when it is due to problems with the fallopian tubes, making in vivo fertilisation difficult. It can also assist in male infertility, in those cases where there is a defect in sperm quality; in such situations intracytoplasmic sperm injection (ICSI) may be used, where a sperm cell is injected directly into the egg cell. This is used when sperm has difficulty penetrating the egg. ICSI is also used when sperm numbers are very low. When indicated, the use of ICSI has been found to increase the success rates of IVF.
According to UK's National Institute for Health and Care Excellence (NICE) guidelines, IVF treatment is appropriate in cases of unexplained infertility for people who have not conceived after 2 years of regular unprotected sexual intercourse.
In people with anovulation, it may be an alternative after 7–12 attempted cycles of ovulation induction, since the latter is less expensive and easier to control.
Success rates
IVF success rates are the percentage of all IVF procedures that result in favourable outcomes. Depending on the type of calculation used, this outcome may represent the number of confirmed pregnancies, called the pregnancy rate, or the number of live births, called the live birth rate. Due to advances in reproductive technology, live birth rates by cycle five of IVF have increased from 76% in 2005 to 80% in 2010, despite a reduction in the number of embryos being transferred (which decreased the multiple birth rate from 25% to 8%).
The success rate depends on variable factors such as age of the woman, cause of infertility, embryo status, reproductive history, and lifestyle factors. Younger candidates of IVF are more likely to get pregnant. People older than 41 are more likely to get pregnant with a donor egg. People who have been previously pregnant are in many cases more successful with IVF treatments than those who have never been pregnant.
Live birth rate
The live birth rate is the percentage of all IVF cycles that lead to a live birth. This rate does not include miscarriage or stillbirth; multiple-order births, such as twins and triplets, are counted as one pregnancy.
A 2021 summary compiled by the Society for Assisted Reproductive Technology (SART) reported average IVF success rates in the United States per age group using non-donor eggs.
In 2006, Canadian clinics reported a live birth rate of 27%. Birth rates in younger patients were slightly higher, with a success rate of 35.3% for those 21 and younger, the youngest group evaluated. Success rates for older patients were also lower and decrease with age, with 37-year-olds at 27.4% and no live births for those older than 48, the oldest group evaluated. Some clinics exceeded these rates, but it is impossible to determine if that is due to superior technique or patient selection, since it is possible to artificially increase success rates by refusing to accept the most difficult patients or by steering them into oocyte donation cycles (which are compiled separately). Further, pregnancy rates can be increased by the placement of several embryos at the risk of increasing the chance for multiples.
Because not each IVF cycle that is started will lead to oocyte retrieval or embryo transfer, reports of live birth rates need to specify the denominator, namely IVF cycles started, IVF retrievals, or embryo transfers. The SART summarised 2008–9 success rates for US clinics for fresh embryo cycles that did not involve donor eggs and gave live birth rates by the age of the prospective mother, with a peak at 41.3% per cycle started and 47.3% per embryo transfer for patients under 35 years of age.
IVF attempts in multiple cycles result in increased cumulative live birth rates. Depending on the demographic group, one study reported 45% to 53% for three attempts, and 51% to 71% to 80% for six attempts.
According to the 2021 National Summary Report compiled by the Society for Assisted Reproductive Technology (SART), the mean number of embryo transfers needed for patients to achieve a live birth varies with age group.
Effective from 15 February 2021 the majority of Australian IVF clinics publish their individual success rate online via YourIVFSuccess.com.au. This site also contains a predictor tool.
Pregnancy rate
Pregnancy rate may be defined in various ways. In the United States, SART and the Centers for Disease Control (and appearing in the table in the Success Rates section above) include statistics on positive pregnancy test and clinical pregnancy rate.
The 2019 summary compiled by SART reported pregnancy rates for non-donor eggs (first embryo transfer) in the United States.
In 2006, Canadian clinics reported an average pregnancy rate of 35%. A French study estimated that 66% of patients starting IVF treatment finally succeed in having a child (40% during the IVF treatment at the centre and 26% after IVF discontinuation). Achievement of having a child after IVF discontinuation was mainly due to adoption (46%) or spontaneous pregnancy (42%).
Miscarriage rate
According to a study done by the Mayo Clinic, miscarriage rates for IVF are somewhere between 15 and 25% for those under the age of 35. In naturally conceived pregnancies, the rate of miscarriage is between 10 and 20% for those under the age of 35. Risk of miscarriage, regardless of the method of conception, does increase with age.
Predictors of success
The main potential factors that influence pregnancy (and live birth) rates in IVF have been suggested to be maternal age, duration of infertility or subfertility, bFSH and number of oocytes, all reflecting ovarian function. Optimal age is 23–39 years at time of treatment.
Biomarkers that affect the pregnancy chances of IVF include:
Antral follicle count, with higher count giving higher success rates.
Anti-Müllerian hormone levels, with higher levels indicating higher chances of pregnancy, as well as of live birth after IVF, even after adjusting for age.
Level of DNA fragmentation (as measured, e.g., by Comet assay), advanced maternal age and semen quality.
People with ovary-specific FMR1 genotypes including het-norm/low have significantly decreased pregnancy chances in IVF.
Progesterone elevation on the day of induction of final maturation is associated with lower pregnancy rates in IVF cycles in women undergoing ovarian stimulation using GnRH analogues and gonadotrophins. At this time, compared to a progesterone level below 0.8 ng/ml, a level between 0.8 and 1.1 ng/ml confers an odds ratio of pregnancy of approximately 0.8, and a level between 1.2 and 3.0 ng/ml confers an odds ratio of pregnancy of between 0.6 and 0.7. On the other hand, progesterone elevation does not seem to confer a decreased chance of pregnancy in frozen–thawed cycles and cycles with egg donation.
Characteristics of cells from the cumulus oophorus and the membrana granulosa, which are easily aspirated during oocyte retrieval. These cells are closely associated with the oocyte and share the same microenvironment, and the rate of expression of certain genes in such cells are associated with higher or lower pregnancy rate.
An endometrial thickness (EMT) of less than 7 mm decreases the pregnancy rate by an odds ratio of approximately 0.4 compared to an EMT of over 7 mm. However, such low thickness rarely occurs, and any routine use of this parameter is regarded as not justified.
Other determinants of outcome of IVF include:
As maternal age increases, the likelihood of conception decreases and the chance of miscarriage increases.
With increasing paternal age, especially 50 years and older, the rate of blastocyst formation decreases.
Tobacco smoking reduces the chances of IVF producing a live birth by 34% and increases the risk of an IVF pregnancy miscarrying by 30%.
A body mass index (BMI) over 27 causes a 33% decrease in likelihood to have a live birth after the first cycle of IVF, compared to those with a BMI between 20 and 27. Also, pregnant people who are obese have higher rates of miscarriage, gestational diabetes, hypertension, thromboembolism and problems during delivery, as well as leading to an increased risk of fetal congenital abnormality. Ideal body mass index is 19–30, and many clinics restrict this BMI range as a criterion for initiation of the IVF process.
Salpingectomy or laparoscopic tubal occlusion before IVF treatment increases chances for people with hydrosalpinges.
Success with previous pregnancy and/or live birth increases chances
Low alcohol/caffeine intake increases success rate
The number of embryos transferred in the treatment cycle
Embryo quality
Some studies also suggest that autoimmune disease may also play a role in decreasing IVF success rates by interfering with the proper implantation of the embryo after transfer.
Aspirin is sometimes prescribed to people for the purpose of increasing the chances of conception by IVF, but there was no evidence to show that it is safe and effective.
A 2013 review and meta analysis of randomised controlled trials of acupuncture as an adjuvant therapy in IVF found no overall benefit, and concluded that an apparent benefit detected in a subset of published trials where the control group (those not using acupuncture) experienced a lower than average rate of pregnancy requires further study, due to the possibility of publication bias and other factors.
A Cochrane review came to the result that endometrial injury performed in the month prior to ovarian induction appeared to increase both the live birth rate and clinical pregnancy rate in IVF compared with no endometrial injury. There was no evidence of a difference between the groups in miscarriage, multiple pregnancy or bleeding rates. Evidence suggested that endometrial injury on the day of oocyte retrieval was associated with a lower live birth or ongoing pregnancy rate.
Intake of antioxidants (such as N-acetyl-cysteine, melatonin, vitamin A, vitamin C, vitamin E, folic acid, myo-inositol, zinc or selenium) has not been associated with a significantly increased live birth rate or clinical pregnancy rate in IVF according to Cochrane reviews. The review found that oral antioxidants given to the sperm donor with male factor or unexplained subfertility may improve live birth rates, but more evidence is needed.
A Cochrane review in 2015 came to the result that there is no evidence identified regarding the effect of preconception lifestyle advice on the chance of a live birth outcome.
Method
Theoretically, IVF could be performed by collecting the contents from the fallopian tubes or uterus after natural ovulation, mixing it with sperm, and reinserting the fertilised ova into the uterus. However, without additional techniques, the chances of pregnancy would be extremely small. The additional techniques that are routinely used in IVF include ovarian hyperstimulation to generate multiple eggs, ultrasound-guided transvaginal oocyte retrieval directly from the ovaries, co-incubation of eggs and sperm, as well as culture and selection of resultant embryos before embryo transfer into a uterus.
Ovarian hyperstimulation
Ovarian hyperstimulation is the stimulation to induce development of multiple follicles of the ovaries. It should start with response prediction based on factors such as age, antral follicle count and level of anti-Müllerian hormone. The resulting prediction (e.g. poor or hyper-response to ovarian hyperstimulation) determines the protocol and dosage for ovarian hyperstimulation.
Ovarian hyperstimulation also includes suppression of spontaneous ovulation, for which two main methods are available: Using a (usually longer) GnRH agonist protocol or a (usually shorter) GnRH antagonist protocol. In a standard long GnRH agonist protocol the day when hyperstimulation treatment is started and the expected day of later oocyte retrieval can be chosen to conform to personal choice, while in a GnRH antagonist protocol it must be adapted to the spontaneous onset of the previous menstruation. On the other hand, the GnRH antagonist protocol has a lower risk of ovarian hyperstimulation syndrome (OHSS), which is a life-threatening complication.
For the ovarian hyperstimulation in itself, injectable gonadotropins (usually FSH analogues) are generally used under close monitoring. Such monitoring frequently checks the estradiol level and, by means of gynecologic ultrasonography, follicular growth. Typically approximately 10 days of injections will be necessary.
When stimulating ovulation after suppressing endogenous secretion, it is necessary to supply exogenous gonadotropins. The most common is human menopausal gonadotropin (hMG), which is obtained from the urine of menopausal women. Other pharmacological preparations are FSH+LH or corifollitropin alfa.
Natural IVF
There are several methods termed natural cycle IVF:
IVF using no drugs for ovarian hyperstimulation, while drugs for ovulation suppression may still be used.
IVF using ovarian hyperstimulation, including gonadotropins, but with a GnRH antagonist protocol so that the cycle initiates from natural mechanisms.
Frozen embryo transfer; IVF using ovarian hyperstimulation, followed by embryo cryopreservation, followed by embryo transfer in a later, natural, cycle.
IVF using no drugs for ovarian hyperstimulation was the method for the conception of Louise Brown. This method can be used when people want to avoid taking ovarian stimulating drugs and their associated side-effects. The HFEA has estimated the live birth rate to be approximately 1.3% per IVF cycle using no hyperstimulation drugs for women aged between 40 and 42.
Mild IVF is a method in which a small dose of ovarian stimulating drugs is used for a short duration during a natural menstrual cycle, aimed at producing 2–7 eggs and creating healthy embryos. This method appears to be an advance in the field, reducing complications and side-effects for women, and it is aimed at the quality, not quantity, of eggs and embryos. One study comparing a mild treatment (mild ovarian stimulation with GnRH antagonist co-treatment combined with single embryo transfer) to a standard treatment (stimulation with a GnRH agonist long protocol and transfer of two embryos) found that the proportions of cumulative pregnancies that resulted in term live birth after 1 year were 43.4% with mild treatment and 44.7% with standard treatment. Mild IVF can be cheaper than conventional IVF and carries a significantly reduced risk of multiple gestation and OHSS.
Final maturation induction
When the ovarian follicles have reached a certain degree of development, induction of final oocyte maturation is performed, generally by an injection of human chorionic gonadotropin (hCG), commonly known as the "trigger shot". hCG acts as an analogue of luteinising hormone; ovulation would occur between 38 and 40 hours after a single hCG injection, but the egg retrieval is usually performed between 34 and 36 hours after injection, that is, just prior to when the follicles would rupture. This allows the egg retrieval procedure to be scheduled at a time when the eggs are fully mature. hCG injection confers a risk of ovarian hyperstimulation syndrome. Using a GnRH agonist instead of hCG eliminates most of that risk, but reduces the delivery rate if the embryos are transferred fresh. For this reason, many centres will freeze all oocytes or embryos following agonist trigger.
Egg retrieval
The eggs are retrieved from the patient using a technique called transvaginal ultrasound aspiration, in which an ultrasound-guided needle is passed through the vaginal wall into the ovarian follicles. Through this needle, the oocyte and follicular fluid are aspirated, and the follicular fluid is then passed to an embryologist to identify ova. It is common to remove between ten and thirty eggs. The retrieval process, which lasts approximately 20 to 40 minutes, is performed under conscious sedation or general anesthesia to ensure patient comfort. The follicular fluid, containing the retrieved eggs, is promptly transferred to the embryology laboratory for subsequent processing.
Egg and sperm preparation
In the laboratory, for ICSI treatments, the identified eggs are stripped of surrounding cells (known as cumulus cells) and prepared for fertilisation. An oocyte selection may be performed prior to fertilisation to select eggs that can be fertilised, as they are required to be in metaphase II. Oocytes at the metaphase I stage can be kept in culture so as to undergo sperm injection later. In the meantime, semen is prepared for fertilisation by removing inactive cells and seminal fluid in a process called sperm washing. If semen is being provided by a sperm donor, it will usually have been prepared for treatment before being frozen and quarantined, and it will be thawed ready for use.
Co-incubation
The sperm and the egg are incubated together at a ratio of about 75,000:1 in a culture medium in order for the actual fertilisation to take place. A review in 2013 found that a co-incubation duration of about 1 to 4 hours results in significantly higher pregnancy rates than 16 to 24 hours. In most cases, the egg will be fertilised during co-incubation and will show two pronuclei. In certain situations, such as low sperm count or motility, a single sperm may be injected directly into the egg using intracytoplasmic sperm injection (ICSI). The fertilised egg is passed to a special growth medium and left for about 48 hours until the embryo consists of six to eight cells.
In gamete intrafallopian transfer, eggs are removed from the woman and placed in one of the fallopian tubes, along with the man's sperm. This allows fertilisation to take place inside the woman's body. Therefore, this variation is actually an in vivo fertilisation, not in vitro.
Embryo culture
The main durations of embryo culture are until the cleavage stage (day two to four after co-incubation) or the blastocyst stage (day five or six after co-incubation). Embryo culture until the blastocyst stage confers a significant increase in live birth rate per embryo transfer, but also a decreased number of embryos available for transfer and embryo cryopreservation, so cumulative clinical pregnancy rates are higher with cleavage-stage transfer. Transfer on day two instead of day three after fertilisation makes no difference in live birth rate. There are significantly higher odds of preterm birth (odds ratio 1.3) and congenital anomalies (odds ratio 1.3) among births resulting from embryos cultured until the blastocyst stage compared with the cleavage stage.
Embryo selection
Laboratories have developed grading methods to judge oocyte and embryo quality. In order to optimise pregnancy rates, there is significant evidence that a morphological scoring system is the best strategy for the selection of embryos. Since 2009, when the first time-lapse microscopy system for IVF was approved for clinical use, morphokinetic scoring systems have been shown to improve pregnancy rates further. However, when all the different types of time-lapse embryo imaging devices, with or without morphokinetic scoring systems, are compared against conventional embryo assessment for IVF, there is insufficient evidence of a difference in live birth, pregnancy, stillbirth or miscarriage rates to choose between them. Active efforts to develop more accurate embryo selection analysis based on artificial intelligence and deep learning are underway. Embryo Ranking Intelligent Classification Assistant (ERICA) is a clear example. This deep learning software substitutes manual classification with a ranking system based on an individual embryo's predicted genetic status in a non-invasive fashion. Studies in this area are still pending, and current feasibility studies support its potential.
Embryo transfer
The number of embryos to be transferred depends on the number available, the age of the patient and other health and diagnostic factors. In countries such as Canada, the UK, Australia and New Zealand, a maximum of two embryos are transferred except in unusual circumstances. In the UK, under HFEA regulations, a woman over 40 may have up to three embryos transferred, whereas in the US there is no legal limit on the number of embryos which may be transferred, although medical associations have provided practice guidelines. Most clinics and country regulatory bodies seek to minimise the risk of multiple pregnancy, as it is not uncommon for several embryos to implant if several are transferred. Embryos are transferred to the patient's uterus through a thin, plastic catheter, which goes through their vagina and cervix. Several embryos may be passed into the uterus to improve the chances of implantation and pregnancy.
Luteal support
Luteal support is the administration of medication, generally progesterone, progestins, hCG, or GnRH agonists, often accompanied by estradiol, to increase the success rate of implantation and early embryogenesis, thereby complementing and/or supporting the function of the corpus luteum. A Cochrane review found that hCG or progesterone given during the luteal phase may be associated with higher rates of live birth or ongoing pregnancy, but that the evidence is not conclusive. Co-treatment with GnRH agonists appears to improve outcomes, with a live birth rate risk difference (RD) of +16% (95% confidence interval +10 to +22%). On the other hand, there is no evidence of overall benefit from growth hormone or aspirin as adjunctive medication in IVF.
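For readers unfamiliar with the statistic, the risk difference is the absolute difference between the event rates of the two groups. As a worked example (the individual rates below are illustrative assumptions chosen only to reproduce the +16% figure, not values reported in the cited review):

\[ \mathrm{RD} = p_{\text{treatment}} - p_{\text{control}}, \qquad \text{e.g.}\ 0.46 - 0.30 = +0.16 = +16\%. \]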
Expansions
There are various expansions or additional techniques that can be applied in IVF, which are usually not necessary for the IVF procedure itself, but would be virtually impossible or technically difficult to perform without concomitantly performing methods of IVF.
Preimplantation genetic screening or diagnosis
Preimplantation genetic screening (PGS) or preimplantation genetic diagnosis (PGD) has been suggested as a way to select, in IVF, an embryo that appears to have the greatest chances for successful pregnancy. However, a systematic review and meta-analysis of existing randomised controlled trials found no evidence of a beneficial effect of PGS with cleavage-stage biopsy as measured by live birth rate. On the contrary, for those of advanced maternal age, PGS with cleavage-stage biopsy significantly lowers the live birth rate. Technical drawbacks, such as the invasiveness of the biopsy, and non-representative samples because of mosaicism are the major underlying factors for the inefficacy of PGS.
Still, as an expansion of IVF, patients who can benefit from PGS/PGD include:
Those who have a family history of inherited disease
Those who want prenatal sex discernment. This can be used to diagnose monogenic disorders with sex linkage. It can potentially be used for sex selection, wherein a fetus is aborted if of an undesired sex.
Those who already have a child with an incurable disease and need compatible cells from a second healthy child to cure the first, resulting in a "saviour sibling" that matches the sick child in HLA type.
PGS screens for numerical chromosomal abnormalities, while PGD diagnoses the specific molecular defect of the inherited disease. In both PGS and PGD, individual cells from a pre-embryo, or preferably trophectoderm cells biopsied from a blastocyst, are analysed during the IVF process. Before the transfer of a pre-embryo back to a person's uterus, one or two cells are removed from the pre-embryos (8-cell stage), or preferably from a blastocyst. These cells are then evaluated for normality. Typically within one to two days, following completion of the evaluation, only the normal pre-embryos are transferred back to the uterus. Alternatively, a blastocyst can be cryopreserved via vitrification and transferred at a later date to the uterus. In addition, PGS can significantly reduce the risk of multiple pregnancies because fewer embryos, ideally just one, are needed for implantation.
Cryopreservation
Cryopreservation can be performed as oocyte cryopreservation before fertilisation, or as embryo cryopreservation after fertilisation.
The Rand Consulting Group estimated that there were 400,000 frozen embryos in the United States in 2006. The advantage is that patients who fail to conceive may become pregnant using such embryos without having to go through a full IVF cycle. Or, if pregnancy occurred, they could return later for another pregnancy. Spare oocytes or embryos resulting from fertility treatments may be used for oocyte donation or embryo donation to another aspiring parent, and embryos may be created, frozen and stored specifically for transfer and donation by using donor eggs and sperm. Also, oocyte cryopreservation can be used for those who are likely to lose their ovarian reserve due to undergoing chemotherapy.
By 2017, many centres have adopted embryo cryopreservation as their primary IVF therapy, and perform few or no fresh embryo transfers. The two main reasons for this have been better endometrial receptivity when embryos are transferred in cycles without exposure to ovarian stimulation and also the ability to store the embryos while awaiting the results of preimplantation genetic testing.
The outcome from using cryopreserved embryos has uniformly been positive, with no increase in birth defects or developmental abnormalities.
Other expansions
Intracytoplasmic sperm injection (ICSI) is where a single sperm is injected directly into an egg. Its main usage as an expansion of IVF is to overcome male infertility problems, although it may also be used where eggs cannot easily be penetrated by sperm, and occasionally in conjunction with sperm donation. It can be used in teratozoospermia, since once the egg is fertilised abnormal sperm morphology does not appear to influence blastocyst development or blastocyst morphology.
Additional methods of embryo profiling. For example, methods are emerging for making comprehensive analyses of up to entire genomes, transcriptomes, proteomes and metabolomes, which may be used to score embryos by comparing the patterns with those previously found among embryos in successful versus unsuccessful pregnancies.
Assisted zona hatching (AZH) can be performed shortly before the embryo is transferred to the uterus. A small opening is made in the outer layer surrounding the egg in order to help the embryo hatch out and aid in the implantation process of the growing embryo.
In egg donation and embryo donation, the resultant embryo after fertilisation is inserted in another person than the one providing the eggs. These are resources for those with no eggs due to surgery, chemotherapy, or genetic causes; or with poor egg quality, previously unsuccessful IVF cycles or advanced maternal age. In the egg donor process, eggs are retrieved from a donor's ovaries, fertilised in the laboratory with sperm, and the resulting healthy embryos are returned to the recipient's uterus.
In oocyte selection, the oocytes with optimal chances of live birth can be chosen. It can also be used as a means of preimplantation genetic screening.
Embryo splitting can be used for twinning to increase the number of available embryos.
Cytoplasmic transfer is where the cytoplasm from a donor egg is injected into an egg with compromised mitochondria. The resulting egg is then fertilised with sperm and introduced into a uterus, usually that of the person who provided the recipient egg and nuclear DNA. Cytoplasmic transfer was created to aid those who experience infertility due to deficient or damaged mitochondria, contained within an egg's cytoplasm.
Complications and health effects
Multiple births
The major complication of IVF is the risk of multiple births. This is directly related to the practice of transferring multiple embryos at embryo transfer. Multiple births are related to increased risk of pregnancy loss, obstetrical complications, prematurity, and neonatal morbidity with the potential for long-term damage. Strict limits on the number of embryos that may be transferred have been enacted in some countries (e.g. Britain, Belgium) to reduce the risk of high-order multiples (triplets or more), but are not universally followed or accepted. Spontaneous splitting of embryos in the uterus after transfer can occur, but this is rare and would lead to identical twins. A double-blind, randomised study followed IVF pregnancies that resulted in 73 infants, and reported that 8.7% of singleton infants and 54.2% of twins had a low birth weight (under 2,500 g). There is some evidence that a double embryo transfer during one cycle achieves a higher live birth rate than a single embryo transfer; but two single embryo transfers in two cycles achieve the same live birth rate and would avoid multiple pregnancies.
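The rough equivalence of one double transfer and two sequential single transfers can be seen with elementary probability. In the sketch below, the per-transfer live birth probability p is an illustrative assumption, not a figure from the cited evidence:

\[ P(\text{live birth within two single transfers}) = 1 - (1 - p)^2, \qquad \text{e.g.}\ p = 0.3 \Rightarrow 1 - 0.7^2 = 0.51. \]

A double transfer offers a comparable overall success rate within a single cycle, but at the cost of a much higher chance of twins.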
Sex ratio distortions
Certain kinds of IVF have been shown to lead to distortions in the sex ratio at birth. Intracytoplasmic sperm injection (ICSI), which was first applied in 1991, leads to slightly more female births (51.3% female). Blastocyst transfer, which was first applied in 1984, leads to significantly more male births (56.1% male). Standard IVF done at the second or third day leads to a normal sex ratio.
Epigenetic modifications caused by extended culture leading to the death of more female embryos has been theorised as the reason why blastocyst transfer leads to a higher male sex ratio; however, adding retinoic acid to the culture can bring this ratio back to normal. A second theory is that the male-biased sex ratio may be due to a higher rate of selection of male embryos. Male embryos develop faster in vitro, and thus may appear more viable for transfer.
Spread of infectious disease
By sperm washing, the risk that a chronic disease in the individual providing the sperm would infect the birthing parent or offspring can be brought to negligible levels.
If the sperm donor has hepatitis B, the Practice Committee of the American Society for Reproductive Medicine advises that sperm washing is not necessary in IVF to prevent transmission, unless the birthing partner has not been effectively vaccinated. In women with hepatitis B, the risk of vertical transmission during IVF is no different from the risk in spontaneous conception. However, there is not enough evidence to say that ICSI procedures are safe in women with hepatitis B in regard to vertical transmission to the offspring.
Regarding potential spread of HIV/AIDS, Japan's government prohibited the use of IVF procedures for couples in which both partners are infected with HIV. Despite the fact that ethics committees had previously allowed the Ogikubo Hospital in Tokyo to use IVF for couples with HIV, the Ministry of Health, Labour and Welfare of Japan decided to block the practice. Hideji Hanabusa, the vice president of the Ogikubo Hospital, states that, together with his colleagues, he managed to develop a method through which scientists are able to remove HIV from sperm.
In the United States, people seeking to be an embryo recipient undergo infectious disease screening required by the Food and Drug Administration (FDA), and reproductive tests to determine the best placement location and cycle timing before the actual embryo transfer occurs. The amount of screening the embryo has already undergone is largely dependent on the genetic parents' own IVF clinic and process. The embryo recipient may elect to have their own embryologist conduct further testing.
Other risks to the egg provider/retriever
A risk of ovarian stimulation is the development of ovarian hyperstimulation syndrome, particularly if hCG is used for inducing final oocyte maturation. This results in swollen, painful ovaries, and occurs in 30% of patients. Mild cases can be treated with over-the-counter medications, and typically resolve in the absence of pregnancy. In moderate cases, the ovaries swell and fluid accumulates in the abdominal cavity, which may cause heartburn, gas, nausea or loss of appetite. In severe cases, patients have sudden excess abdominal pain, nausea and vomiting, which can result in hospitalisation.
During egg retrieval, there exists a small chance of bleeding, infection, and damage to surrounding structures such as bowel and bladder (transvaginal ultrasound aspiration) as well as difficulty in breathing, chest infection, allergic reactions to medication, or nerve damage (laparoscopy).
Ectopic pregnancy may also occur if a fertilised egg develops outside the uterus, usually in the fallopian tubes; this requires immediate termination of the pregnancy.
IVF does not seem to be associated with an elevated risk of cervical cancer, nor with ovarian cancer or endometrial cancer when neutralising the confounder of infertility itself. Nor does it seem to impart any increased risk for breast cancer.
Regardless of pregnancy result, IVF treatment is usually stressful for patients. Neuroticism and the use of escapist coping strategies are associated with a higher degree of distress, while the presence of social support has a relieving effect. A negative pregnancy test after IVF is associated with an increased risk for depression, but not with any increased risk of developing anxiety disorders. Pregnancy test results do not seem to be a risk factor for depression or anxiety among men in the case of relationships between two cisgender, heterosexual people. Hormonal agents such as gonadotropin-releasing hormone agonist (GnRH agonist) are associated with depression.
Studies show that there is an increased risk of venous thrombosis or pulmonary embolism during the first trimester of IVF. When looking at long-term studies comparing patients who received or did not receive IVF, there seems to be no correlation with increased risk of cardiac events. There are more ongoing studies to solidify this.
Spontaneous pregnancy has occurred after successful and unsuccessful IVF treatments. Within 2 years of delivering an infant conceived through IVF, subfertile patients had a conception rate of 18%.
Birth defects
A review in 2013 found that infants resulting from IVF (with or without ICSI) have a relative risk of birth defects of 1.32 (95% confidence interval 1.24–1.42) compared with naturally conceived infants. In 2008, an analysis of the data of the National Birth Defects Study in the US found that certain birth defects were significantly more common in infants conceived through IVF, notably septal heart defects, cleft lip with or without cleft palate, esophageal atresia, and anorectal atresia; the mechanism of causality is unclear. However, in a population-wide cohort study of 308,974 births (with 6,163 using assisted reproductive technology and following children from birth to age five), researchers found: "The increased risk of birth defects associated with IVF was no longer significant after adjustment for parental factors." Parental factors included known independent risks for birth defects such as maternal age, smoking status, etc. Multivariate correction did not remove the significance of the association of birth defects and ICSI (corrected odds ratio 1.57), although the authors speculate that underlying male infertility factors (which would be associated with the use of ICSI) may contribute to this observation; they were not able to correct for these confounders. The authors also found that a history of infertility elevated risk itself in the absence of any treatment (odds ratio 1.29), consistent with a Danish national registry study, and "implicates patient factors in this increased risk." The authors of the Danish national registry study speculate: "our results suggest that the reported increased prevalence of congenital malformations seen in singletons born after assisted reproductive technology is partly due to the underlying infertility or its determinants."
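As a guide to these figures, the relative risk (RR) and odds ratio (OR) compare event rates between an exposed and an unexposed group:

\[ \mathrm{RR} = \frac{p_{\mathrm{IVF}}}{p_{\mathrm{natural}}}, \qquad \mathrm{OR} = \frac{p_{\mathrm{IVF}}/(1 - p_{\mathrm{IVF}})}{p_{\mathrm{natural}}/(1 - p_{\mathrm{natural}})}. \]

For example, under an assumed (illustrative, not source-reported) baseline defect rate of 3% among naturally conceived infants, a relative risk of 1.32 would correspond to a rate of about 4% (3% × 1.32) among IVF-conceived infants.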
Other risks to the offspring
If the underlying infertility is related to abnormalities in spermatogenesis, male offspring will have a higher risk for sperm abnormalities. In some cases genetic testing may be recommended to help assess the risk of transmission of defects to progeny and to consider whether treatment is desirable.
IVF does not seem to confer any risks regarding cognitive development, school performance, social functioning, and behaviour. Also, IVF infants are known to be as securely attached to their parents as those who were naturally conceived, and IVF adolescents are as well-adjusted as those who have been naturally conceived.
Limited long-term follow-up data suggest that IVF may be associated with an increased incidence of hypertension, impaired fasting glucose, increase in total body fat composition, advancement of bone age, subclinical thyroid disorder, early adulthood clinical depression and binge drinking in the offspring. It is not known, however, whether these potential associations are caused by the IVF procedure in itself, by adverse obstetric outcomes associated with IVF, by the genetic origin of the children or by yet unknown IVF-associated causes. Increases in embryo manipulation during IVF result in more deviant fetal growth curves, but birth weight does not seem to be a reliable marker of fetal stress.
IVF, including ICSI, is associated with an increased risk of imprinting disorders (including Prader–Willi syndrome and Angelman syndrome), with an odds ratio of 3.7 (95% confidence interval 1.4 to 9.7).
IVF-associated incidences of cerebral palsy and neurodevelopmental delay are believed to be related to the confounders of prematurity and low birthweight. Similarly, IVF-associated incidences of autism and attention-deficit disorder are believed to be related to confounders of maternal and obstetric factors.
Overall, IVF does not cause an increased risk of childhood cancer. Studies have shown a decrease in the risk of certain cancers and an increased risk of certain others, including retinoblastoma, hepatoblastoma and rhabdomyosarcoma.
Controversial cases
Mix-ups
In some cases, laboratory mix-ups (misidentified gametes, transfer of wrong embryos) have occurred, leading to legal action against the IVF provider and complex paternity suits. An example is the case of a woman in California who received the embryo of another couple and was notified of this mistake after the birth of her son. This has led many authorities and individual clinics to implement procedures to minimise the risk of such mix-ups. The HFEA, for example, requires clinics to use a double witnessing system, in which the identity of specimens is checked by two people at each point at which specimens are transferred. Alternatively, technological solutions are gaining favour to reduce the staffing cost of manual double witnessing and to further reduce risks, using uniquely numbered RFID tags which can be identified by readers connected to a computer. The computer tracks specimens throughout the process and alerts the embryologist if non-matching specimens are identified. Although the use of RFID tracking has expanded in the US, it is still not widely adopted.
Preimplantation genetic diagnosis or screening
Pre-implantation genetic diagnosis (PGD) is criticised for giving select demographic groups disproportionate access to a means of creating a child possessing characteristics that they consider "ideal". Many fertile couples now demand equal access to embryonic screening so that their child can be just as healthy as one created through IVF. Mass use of PGD, especially as a means of population control or in the presence of legal measures related to population or demographic control, can lead to intentional or unintentional demographic effects such as the skewed live-birth sex ratios seen in China following implementation of its one-child policy.
While PGD was originally designed to screen for embryos carrying hereditary genetic diseases, the method has been applied to select features that are unrelated to diseases, thus raising ethical questions. Examples of such cases include the selection of embryos based on histocompatibility (HLA) for the donation of tissues to a sick family member, the diagnosis of genetic susceptibility to disease, and sex selection.
These examples raise ethical issues related to eugenics: the practice is criticised because it confers the ability to eliminate unwanted traits and to select desired traits. Critics argue that, by using PGD in this way, individuals create a human life through scientific selection rather than natural chance.
For example, a deaf British couple, Tom and Paula Lichy, have petitioned to create a deaf baby using IVF. Some medical ethicists have been very critical of this approach. Jacob M. Appel wrote that "intentionally culling out blind or deaf embryos might prevent considerable future suffering, while a policy that allowed deaf or blind parents to select for such traits intentionally would be far more troublesome."
Industry corruption
Robert Winston, professor of fertility studies at Imperial College London, has called the industry "corrupt" and "greedy", stating that "one of the major problems facing us in healthcare is that IVF has become a massive commercial industry," and that "what has happened, of course, is that money is corrupting this whole technology", and accused authorities of failing to protect couples from exploitation: "The regulatory authority has done a consistently bad job. It's not prevented the exploitation of people, it's not put out very good information to couples, it's not limited the number of unscientific treatments people have access to". The IVF industry has been described as a market-driven construction of health, medicine and the human body.
The industry has been accused of making unscientific claims, and distorting facts relating to infertility, in particular through widely exaggerated claims about how common infertility is in society, in an attempt to get as many couples as possible and as soon as possible to try treatments (rather than trying to conceive naturally for a longer time). This risks removing infertility from its social context and reducing the experience to a simple biological malfunction, which not only can be treated through bio-medical procedures, but should be treated by them.
Older patients
All pregnancies can be risky, but the risks are greater for mothers over the age of 40. As people get older, they are more likely to develop conditions such as gestational diabetes and pre-eclampsia. If the mother conceives over the age of 40, the offspring may be of lower birth weight and more likely to require intensive care. The high incidence of caesarean section in older patients is also commonly regarded as a risk.
Those conceiving at 40 have a greater risk of gestational hypertension and premature birth. Offspring born to older mothers face the risks associated with advanced maternal age, in addition to the risks associated with being conceived through IVF.
Adriana Iliescu held the record for a while as the oldest woman to give birth using IVF and a donor egg, when she gave birth in 2004 at the age of 66. In September 2019, a 74-year-old woman became the oldest-ever to give birth after she delivered twins at a hospital in Guntur, Andhra Pradesh.
Pregnancy after menopause
Although menopause is a natural barrier to further conception, IVF has allowed people to be pregnant in their fifties and sixties. People whose uteruses have been appropriately prepared receive embryos that originated from an egg donor. Therefore, although they do not have a genetic link with the child, they have a physical link through pregnancy and childbirth. Even after menopause, the uterus is fully capable of carrying out a pregnancy.
Same-sex couples, single and unmarried parents
A 2009 statement from the ASRM found no persuasive evidence that children are harmed or disadvantaged solely by being raised by single parents, unmarried parents, or homosexual parents. It did not support restricting access to assisted reproductive technologies on the basis of a prospective parent's marital status or sexual orientation. A 2018 study found that children's psychological well-being did not differ when raised by either same-sex parents or heterosexual parents, even finding that psychological well-being was better amongst children raised by same-sex parents.
Ethical concerns include reproductive rights, the welfare of offspring, nondiscrimination against unmarried and homosexual individuals, and professional autonomy.
A controversy in California focused on the question of whether physicians opposed to same-sex relationships should be required to perform IVF for a lesbian couple. Guadalupe T. Benitez, a lesbian medical assistant from San Diego, sued doctors Christine Brody and Douglas Fenton of the North Coast Woman's Care Medical Group after Brody told her that she had "religious-based objections to treating her and homosexuals in general to help them conceive children by artificial insemination," and Fenton refused to authorise a refill of her prescription for the fertility drug Clomid on the same grounds. The California Medical Association had initially sided with Brody and Fenton, but the case, North Coast Women's Care Medical Group v. Superior Court, was decided unanimously by the California State Supreme Court in favour of Benitez on 19 August 2008.
Nadya Suleman came to international attention after having twelve embryos implanted, eight of which survived, resulting in eight newborns being added to her existing six-child family. The Medical Board of California sought to have fertility doctor Michael Kamrava, who treated Suleman, stripped of his licence. State officials allege that performing Suleman's procedure is evidence of unreasonable judgment, substandard care, and a lack of concern for the eight children she would conceive and the six she was already struggling to raise. On 1 June 2011 the Medical Board issued a ruling that Kamrava's medical licence be revoked effective 1 July 2011.
Transgender parents
The research on transgender reproduction and family planning is limited. A 2020 comparative study of children born in France to a transgender father and cisgender mother via donor sperm insemination showed no significant differences from IVF-conceived and naturally conceived children of cisgender parents.
Transgender men can experience challenges in pregnancy and birthing from the cis-normative structure within the medical system, as well as psychological challenges such as renewed gender dysphoria. The effect of continued testosterone therapy during pregnancy and breastfeeding is undetermined. Ethical concerns include reproductive rights, reproductive justice, physician autonomy, and transphobia within the health care setting.
Anonymous donors
Alana Stewart, who was conceived using donor sperm, began an online forum for donor children called AnonymousUS in 2010. The forum welcomes the viewpoints of anyone involved in the IVF process. In May 2012, a court ruling made anonymous sperm and egg donation in British Columbia illegal.
In the U.K., Sweden, Norway, Germany, Italy, New Zealand, and some Australian states, donors are not paid and cannot be anonymous.
In 2000, a website called Donor Sibling Registry was created to help biological children with a common donor connect with each other.
Leftover embryos or eggs, unwanted embryos
There may be leftover embryos or eggs from IVF procedures if the person for whom they were originally created has successfully carried one or more pregnancies to term, and no longer wishes to use them. With the patient's permission, these may be donated to help others conceive by means of third party reproduction.
In embryo donation, these extra embryos are given to others for transfer, with the goal of producing a successful pregnancy. Embryo recipients have genetic issues or poor-quality embryos or eggs of their own. The resulting child is considered the child of whoever birthed them, and not the child of the donor, the same as occurs with egg donation or sperm donation. As per The National Infertility Association, typically, genetic parents donate the eggs or embryos to a fertility clinic where they are preserved by oocyte cryopreservation or embryo cryopreservation until a carrier is found for them. The process of matching the donation with the prospective parents is conducted by the agency itself, at which time the clinic transfers ownership of the embryos to the prospective parent(s).
Alternatives to donating unused embryos are destroying them (or having them transferred at a time when pregnancy is very unlikely), keeping them frozen indefinitely, or donating them for use in research (rendering them non-viable). Individual moral views on disposing of leftover embryos may depend on personal views on the beginning of human personhood and the definition and/or value of potential future persons, and on the value that is given to fundamental research questions. Some people believe donation of leftover embryos for research is a good alternative to discarding the embryos when patients receive proper, honest and clear information about the research project, the procedures and the scientific values.
During the embryo selection and transfer phases, many embryos may be discarded in favour of others. This selection may be based on criteria such as genetic disorders or sex. One of the earliest cases of special gene selection through IVF was the case of the Collins family in the 1990s, who selected the sex of their child.
The ethical issues remain unresolved, as no worldwide consensus exists in science, religion, or philosophy on when a human embryo should be recognised as a person. For those who believe that this occurs at the moment of conception, IVF becomes a moral question when multiple eggs are fertilised, begin development, and only a few are chosen for transfer to the uterus.
If IVF were to involve the fertilisation of only a single egg, or at least only the number that will be transferred, then this would not be an issue. However, this has the chance of increasing costs dramatically as only a few eggs can be attempted at a time. As a result, the couple must decide what to do with these extra embryos. Depending on their view of the embryo's humanity or the chance the couple will want to try to have another child, the couple has multiple options for dealing with these extra embryos. Couples can choose to keep them frozen, donate them to other infertile couples, thaw them, or donate them to medical research. Keeping them frozen costs money, donating them does not ensure they will survive, thawing them renders them immediately unviable, and medical research results in their termination. In the realm of medical research, the couple is not necessarily told what the embryos will be used for, and as a result, some can be used in stem cell research.
In February 2024, the Alabama Supreme Court ruled in LePage v. Center for Reproductive Medicine that cryopreserved embryos were "persons" or "extrauterine children". After Dobbs v. Jackson Women's Health Organization (2022), some antiabortionists had hoped to get a judgement that fetuses and embryos were "person[s]".
Religious response
The Catholic Church opposes all kinds of assisted reproductive technology and artificial contraception, on the grounds that they separate the procreative goal of marital sex from the goal of uniting married couples.
The Catholic Church permits the use of a small number of reproductive technologies and contraceptive methods such as natural family planning, which involves charting ovulation times, and allows other forms of reproductive technologies that allow conception to take place from normative sexual intercourse, such as a fertility lubricant. Pope Benedict XVI had publicly re-emphasised the Catholic Church's opposition to in vitro fertilisation, saying that it replaces love between a husband and wife.
The Catechism of the Catholic Church, in accordance with the Catholic understanding of natural law, teaches that reproduction has an "inseparable connection" to the sexual union of married couples. In addition, the church opposes IVF because it might result in the disposal of embryos; in Catholicism, an embryo is viewed as an individual with a soul that must be treated as a person. The Catholic Church maintains that it is not objectively evil to be infertile, and advocates adoption as an option for such couples who still wish to have children.
Hindus welcome IVF as a gift for those who are unable to bear children, and have declared doctors performing IVF to be conducting punya, as there are several figures in Hindu tradition claimed to have been born without intercourse, mainly Kaurav and the five Pandavas.
Regarding the response to IVF by Islam, a general consensus from contemporary Sunni scholars concludes that IVF methods are immoral and prohibited. However, Gad El-Hak Ali Gad El-Hak's ART fatwa states that:
IVF of an egg from the wife with the sperm of her husband and the transfer of the fertilised egg back to the uterus of the wife is allowed, provided that the procedure is indicated for a medical reason and is carried out by an expert physician.
Since marriage is a contract between the wife and husband during the span of their marriage, no third party should intrude into the marital functions of sex and procreation. This means that a third party donor is not acceptable, whether he or she is providing sperm, eggs, embryos, or a uterus. The use of a third party is tantamount to zina, or adultery.
Within the Orthodox Jewish community the concept is debated as there is little precedent in traditional Jewish legal textual sources. Regarding laws of sexuality, religious challenges include masturbation (which may be regarded as "seed wasting"), laws related to sexual activity and menstruation (niddah) and the specific laws regarding intercourse. An additional major issue is that of establishing paternity and lineage. For a baby conceived naturally, the father's identity is determined by a legal presumption (chazakah) of legitimacy: rov bi'ot achar ha'baal – a woman's sexual relations are assumed to be with her husband. Regarding an IVF child, this assumption does not exist and as such Rabbi Eliezer Waldenberg (among others) requires an outside supervisor to positively identify the father. Reform Judaism has generally approved IVF.
Society and culture
Many women of sub-Saharan Africa choose to foster their children to infertile women. IVF enables these infertile women to have their own children, which introduces new ideals into a culture in which fostering children is seen as both natural and culturally important. Many infertile women are able to earn more respect in their society by taking care of the children of other mothers, and this may be lost if they choose to use IVF instead. As IVF is seen as unnatural, it may even hinder their societal position rather than making them equal with fertile women. It is also economically advantageous for infertile women to raise foster children, as it gives these children greater ability to access resources that are important for their development and also aids the development of their society at large. If IVF becomes more popular without the birth rate decreasing, there could be more large family homes with fewer options for fostering out their newborn children. This could result in an increase of orphaned children and/or a decrease in resources for the children of large families. This would ultimately stifle the children's and the community's growth.
In the US, the pineapple has emerged as a symbol of IVF users, possibly because some people thought, without scientific evidence, that eating pineapple might slightly increase the success rate for the procedure.
Emotional involvement with children
Studies have indicated that IVF mothers show greater emotional involvement with their child, and they enjoy motherhood more than mothers by natural conception. Similarly, studies have indicated that IVF fathers express more warmth and emotional involvement than fathers by adoption and natural conception and enjoy fatherhood more. Some IVF parents become overly involved with their children.
Men and IVF
Research has shown that men largely view themselves as "passive contributors", since they have "less physical involvement" in IVF treatment. Despite this, many men feel distressed after seeing the toll of hormonal injections and ongoing physical intervention on their female partner. Fertility was found to be a significant factor in a man's perception of his masculinity, driving many to keep the treatment a secret. In cases where men did share that they and their partner were undergoing IVF, they reported having been teased, mainly by other men, although some viewed this as an affirmation of support and friendship. For others, this led to feeling socially isolated. In comparison with women, men showed less deterioration in mental health in the years following a failed treatment. However, many men did feel guilt, disappointment and inadequacy, stating that they were simply trying to provide an "emotional rock" for their partners.
Ability to withdraw consent
In certain countries, including Austria, Italy, Estonia, Hungary, Spain and Israel, the male does not have the full ability to withdraw consent to storage or use of embryos once they are fertilised. In the United States, the matter has been left to the courts on a more or less ad hoc basis. If embryos are implanted and a child is born contrary to the wishes of the male, he still has legal and financial responsibilities of a father.
Availability and utilisation
Cost
Costs of IVF can be broken down into direct and indirect costs. Direct costs include the medical treatments themselves, including doctor consultations, medications, ultrasound scanning, laboratory tests, the actual IVF procedure, and any associated hospital charges and administrative costs. Indirect costs include the cost of addressing any complications with treatments, compensation for the gestational surrogate, patients' travel costs, and lost hours of productivity. These costs can be exacerbated by the increasing age of the woman undergoing IVF treatment (particularly those over the age of 40) and the increased costs associated with multiple births. For instance, a pregnancy with twins can cost up to three times that of a singleton pregnancy. While some insurance plans cover one cycle of IVF, multiple cycles are often needed for a successful outcome. A study completed in Northern California found that the IVF treatment leading to a successful outcome cost $61,377 on average, and this can be more costly with the use of a donor egg.
The cost of IVF reflects the costliness of the underlying healthcare system more than the regulatory or funding environment, and ranges, on average for a standard IVF cycle in 2006 United States dollars, from $12,500 in the United States to $4,000 in Japan. In Ireland, IVF costs around €4,000, with fertility drugs, if required, costing up to €3,000. The cost per live birth is highest in the United States ($41,000) and United Kingdom ($40,000) and lowest in Scandinavia and Japan (both around $24,500).
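The gap between cost per cycle and cost per live birth follows from the fact that several cycles are often needed. Below is a minimal sketch, assuming each cycle independently succeeds with a constant probability; the 30% per-cycle live birth rate is an assumption chosen to roughly match the cited US figures, not a value from the source:

# Illustrative model: if each IVF cycle succeeds with probability p,
# the number of cycles until a live birth is geometrically distributed,
# so the expected cost per live birth is (cost per cycle) / p.

def expected_cost_per_live_birth(cost_per_cycle: float, p_success: float) -> float:
    """Expected total cost, assuming independent cycles with success rate p_success."""
    if not 0 < p_success <= 1:
        raise ValueError("p_success must be in (0, 1]")
    return cost_per_cycle / p_success

# With the article's 2006 US figure of $12,500 per cycle and an assumed
# 30% live birth rate per cycle:
print(round(expected_cost_per_live_birth(12_500, 0.30)))  # 41667, close to the cited $41,000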
The high cost of IVF is also a barrier to access for disabled individuals, who typically have lower incomes, face higher health care costs, and seek health care services more often than non-disabled individuals.
Navigating insurance coverage for transgender expectant parents presents a unique challenge. Insurance plans are designed to cater towards a specific population, meaning that some plans can provide adequate coverage for gender-affirming care but fail to provide fertility services for transgender patients. Additionally, insurance coverage is constructed around a person's legally recognised sex and not their anatomy; thus, transgender people may not get coverage for the services they need, including transgender men for fertility services.
Use by LGBT individuals
Same-sex couples
In larger urban centres, studies have noted that lesbian, gay, bisexual, transgender and queer (LGBTQ+) populations are among the fastest-growing users of fertility care. IVF is increasingly being used to allow lesbian and other LGBT couples to share in the reproductive process through a technique called reciprocal IVF. The eggs of one partner are used to create embryos which the other partner carries through pregnancy. For gay male couples, many elect to use IVF through gestational surrogacy, where one partner's sperm is used to fertilise a donor ovum, and the resulting embryo is transplanted into a surrogate carrier's womb. There are various IVF options available for same-sex couples including, but not limited to, IVF with donor sperm, IVF with a partner's oocytes, reciprocal IVF, IVF with donor eggs, and IVF with gestational surrogate. IVF with donor sperm can be considered traditional IVF for lesbian couples, but reciprocal IVF or using a partner's oocytes are other options for lesbian couples trying to conceive to include both partners in the biological process. Using a partner's oocytes is an option for partners who are unsuccessful in conceiving with their own, and reciprocal IVF involves undergoing reproduction with a donor egg and sperm that is then transferred to a partner who will gestate. Donor IVF involves conceiving with a third party's eggs. Typically, for gay male couples hoping to use IVF, the common techniques are using IVF with donor eggs and gestational surrogates.
Transgender parents
Many LGBT communities centre their support around cisgender gay, lesbian and bisexual people and neglect to include proper support for transgender people. A 2020 literature review analysed the social, emotional and physical experiences of pregnant transgender men. A common obstacle faced by pregnant transgender men is the possibility of gender dysphoria. Literature shows that transgender men report uncomfortable procedures and interactions during their pregnancies, as well as feeling misgendered due to gendered terminology used by healthcare providers. Outside of the healthcare system, pregnant transgender men may experience gender dysphoria due to cultural assumptions that all pregnant people are cisgender women. These people use three common approaches to navigating their pregnancy: passing as a cisgender woman, hiding their pregnancy, or being out and visibly pregnant as a transgender man. Some transgender and gender diverse patients describe their experience in seeking gynaecological and reproductive health care as isolating and discriminatory, as the strictly binary healthcare system often leads to denial of healthcare coverage or unnecessary revelation of their transgender status to their employer.
Many transgender people retain their original sex organs and choose to have children through biological reproduction. Advances in assisted reproductive technology and fertility preservation have broadened the options transgender people have to conceive a child using their own gametes or a donor's. Transgender men and women may opt for fertility preservation before any gender affirming surgery, but it is not required for future biological reproduction. It is also recommended that fertility preservation is conducted before any hormone therapy. Additionally, while fertility specialists often suggest that transgender men discontinue their testosterone hormones prior to pregnancy, research on this topic is still inconclusive. However, a 2019 study found that transgender male patients seeking oocyte retrieval via assisted reproductive technology (including IVF) were able to undergo treatment four months after stopping testosterone treatment, on average. All patients experienced menses and normal AMH, FSH and E2 levels and antral follicle counts after coming off testosterone, which allowed for successful oocyte retrieval. Despite assumptions that the long-term androgen treatment negatively impacts fertility, oocyte retrieval, an integral part of the IVF process, does not appear to be affected.
Biological reproductive options available to transgender women include, but are not limited to, IVF and IUI with the trans woman's sperm and a donor or a partner's eggs and uterus. Fertility treatment options for transgender men include, but are not limited to, IUI or IVF using his own eggs with a donor's sperm and/or donor's eggs, his uterus, or a different uterus, whether that is a partner's or a surrogate's.
Use by disabled individuals
People with disabilities who wish to have children are equally or more likely than the non-disabled population to experience infertility, yet disabled individuals are much less likely to have access to fertility treatment such as IVF. There are many extraneous factors that hinder disabled individuals' access to IVF, such as assumptions about decision-making capacity, sexual interests and abilities, heritability of a disability, and beliefs about parenting ability. These same misconceptions about people with disabilities that once led health care providers to sterilise thousands of women with disabilities now lead them to provide or deny reproductive care on the basis of stereotypes concerning people with disabilities and their sexuality.
Not only do misconceptions about disabled individuals' parenting ability, sexuality, and health restrict and hinder access to fertility treatment such as IVF; structural barriers, such as providers uneducated in disability healthcare and inaccessible clinics, also severely hinder disabled individuals' access to IVF.
By country
Australia
In Australia, the average age of women undergoing ART treatment is 35.5 years among those using their own eggs (one in four being 40 or older) and 40.5 years among those using donated eggs. While IVF is available in Australia, Australians using IVF are unable to choose their baby's gender.
Cameroon
Ernestine Gwet Bell supervised the first Cameroonian child born by IVF in 1998.
Canada
In Canada, one cycle of IVF treatment can cost between $7,750 and $12,250 CAD, and medications alone can cost from $2,500 to over $7,000 CAD. The funding mechanisms that influence accessibility vary by province and territory, with some provinces providing full, partial or no coverage.
New Brunswick provides partial funding through its Infertility Special Assistance Fund – a one-time grant of up to $5,000. Patients may only claim up to 50% of treatment costs or $5,000 (whichever is less) incurred after April 2014. Eligible patients must be full-time New Brunswick residents with a valid Medicare card and have an official medical infertility diagnosis by a physician.
In December 2015, the Ontario provincial government enacted the Ontario Fertility Program for patients with medical and non-medical infertility, regardless of sexual orientation, gender or family composition. Eligible patients for IVF treatment must be Ontario residents under the age of 43 and have a valid Ontario Health Insurance Plan card and have not already undergone any IVF cycles. Coverage is extensive, but not universal. Coverage extends to certain blood and urine tests, physician/nurse counselling and consultations, certain ultrasounds, up to two cycle monitorings, embryo thawing, freezing and culture, fertilisation and embryology services, single transfers of all embryos, and one surgical sperm retrieval using certain techniques only if necessary. Drugs and medications are not covered under this Program, along with psychologist or social worker counselling, storage and shipping of eggs, sperm or embryos, and the purchase of donor sperm or eggs.
China
IVF is expensive in China and not generally accessible to unmarried women. In August 2022, China's National Health Authority announced that it will take steps to make assisted reproductive technology more accessible, including by guiding local governments to include such technology in its national medical system.
Croatia
No egg or sperm donation takes place in Croatia; however, using donated sperm or eggs in ART and IUI is allowed. With donated eggs, sperm or embryos, heterosexual couples and single women have legal access to IVF. Male or female couples do not have access to ART as a form of reproduction. The minimum age for males and females to access ART in Croatia is 18; there is no maximum age. Donor anonymity applies, but the child can be given access to the donor's identity at a certain age.
India
The penetration of the IVF market in India is quite low, with only 2,800 cycles per million infertile people in the reproductive age group (20–44 years), as compared to China, which has 6,500 cycles. The key challenges are lack of awareness, affordability and accessibility. Since 2018, however, India has become a destination for fertility tourism, because of lower costs than in the Western world. In December 2021, the Lok Sabha passed the Assisted Reproductive Technology (Regulation) Bill 2020, to regulate ART services including IVF centres, sperm and egg banks.
Israel
Israel has the highest rate of IVF in the world, with 1,657 procedures performed per million people per year. Couples without children can receive funding for IVF for up to two children. The same funding is available for people without children who will raise up to two children in a single-parent home. IVF is available for people aged 18 to 45. The Israeli Health Ministry says it spends roughly $3,450 per procedure.
Sweden
One, two or three IVF treatments are government subsidised for people who are younger than 40 and have no children. The rules for how many treatments are subsidised, and the upper age limit for the people, vary between different county councils. Single people are treated, and embryo adoption is allowed. There are also private clinics that offer the treatment for a fee.
United Kingdom
Availability of IVF in England is determined by Clinical Commissioning Groups (CCGs). The National Institute for Health and Care Excellence (NICE) recommends up to 3 cycles of treatment for people under 40 years old who have had no success conceiving after 2 years of unprotected sex. Cycles are not continued for people who are older than 40 years. CCGs in Essex, Bedfordshire and Somerset have reduced funding to one cycle, or none, and it is expected that reductions will become more widespread. Funding may be available in "exceptional circumstances", for example if a male partner has a transmittable infection or one partner is affected by cancer treatment. According to the campaign group Fertility Fairness, "at the end of 2014 every CCG in England was funding at least one cycle of IVF". Prices paid by the NHS in England varied from under £3,000 to more than £6,000 in 2014/15. In February 2013, the cost of implementing the NICE guidelines for IVF along with other treatments for infertility was projected to be £236,000 per year per 100,000 members of the population.
IVF increasingly appears on NHS treatment blacklists. By August 2017, five of the 208 CCGs had stopped funding IVF completely and others were considering doing so. By October 2017 only 25 CCGs were delivering the three recommended NHS IVF cycles to eligible people under 40. Policies could fall foul of discrimination laws if they treat same-sex couples differently from heterosexual ones. In July 2019, Jackie Doyle-Price said that women were registering with surgeries further away from their own homes in order to get around CCG rationing policies.
The Human Fertilisation and Embryology Authority said in September 2018 that parents who are limited to one cycle of IVF, or have to fund it themselves, are more likely to choose to implant multiple embryos in the hope of increasing the chances of pregnancy. This significantly increases the chance of multiple births and the associated poor outcomes, which would increase NHS costs. The president of the Royal College of Obstetricians and Gynaecologists said that funding 3 cycles was "the most important factor in maintaining low rates of multiple pregnancies and reduce(s) associated complications".
United States
In the United States, overall availability of IVF in 2005 was 2.5 IVF physicians per 100,000 population, and utilisation was 236 IVF cycles per 100,000; 126 procedures are performed per million people per year. Utilisation increases markedly with availability and IVF insurance coverage, and to a significant extent also with the percentage of single persons and median income. In the US, an average cycle, from egg retrieval to embryo implantation, costs $12,400, and insurance companies that do cover treatment, even partially, usually cap the number of cycles they pay for. As of 2015, more than 1 million babies had been born utilising IVF technologies.
In the US, as of September 2023, 21 states and the District of Columbia had passed laws for fertility insurance coverage. In 15 of those jurisdictions, some level of IVF coverage is included, and in 17, some fertility preservation services are included. Eleven states require coverage for both fertility preservation and IVF: Colorado, Connecticut, Delaware, Maryland, Maine, New Hampshire, New Jersey, New York, Rhode Island, Utah, and Washington D.C. The states that have infertility coverage laws are Arkansas, California, Colorado, Connecticut, Delaware, Hawaii, Illinois, Louisiana, Maryland, Massachusetts, Montana, New Hampshire, New Jersey, New York, Ohio, Rhode Island, Texas, Utah, and West Virginia. As of July 2023, New York was reportedly the only state Medicaid program to cover IVF. These laws differ by state, but many require that an egg be fertilised with sperm from a spouse and that, to be covered, patients show they cannot become pregnant through penile-vaginal sex; such requirements are impossible for a same-sex couple to meet.
Many fertility clinics in the United States limit the upper age at which people are eligible for IVF to 50 or 55 years. These cut-offs make it difficult for people older than 55 to utilise the procedure.
Legal status
In 2003, government agencies in China banned the use of IVF by unmarried people or by couples with certain infectious diseases.
In India, the use of IVF as a means of sex selection (preimplantation genetic diagnosis) is banned under the Pre-Conception and Pre-Natal Diagnostic Techniques Act, 1994.
Sunni Muslim nations generally allow IVF between married couples when conducted with their own respective sperm and eggs, but not with donor eggs from other couples. But Iran, which is Shi'a Muslim, has a more complex scheme. Iran bans sperm donation but allows donation of both fertilised and unfertilised eggs. Fertilised eggs are donated from married couples to other married couples, while unfertilised eggs are donated in the context of mut'ah or temporary marriage to the father.
By 2012, Costa Rica was the only country in the world with a complete ban on IVF, the technology having been ruled unconstitutional by the nation's Supreme Court because it "violated life"; it was thus the only country in the western hemisphere that forbade IVF. A bill sent reluctantly by the government of President Laura Chinchilla was rejected by parliament. President Chinchilla never publicly stated her position on the question of IVF, and given the strong influence of the Catholic Church in her government, any change in the status quo seemed very unlikely. Despite opposition from the Costa Rican government and religious groups, the IVF ban was struck down by the Inter-American Court of Human Rights in a decision of 20 December 2012. The court said that a long-standing Costa Rican guarantee of protection for every human embryo violated the reproductive freedom of infertile couples because it prohibited them from using IVF, which often involves the disposal of embryos not implanted in a woman's uterus. On 10 September 2015, President Luis Guillermo Solís signed a decree legalising in-vitro fertilisation. The decree was added to the country's official gazette on 11 September. Opponents of the practice have since filed a lawsuit before the country's Constitutional Court.
All major restrictions on single but infertile people using IVF were lifted in Australia in 2002 after a final appeal to the Australian High Court was rejected on procedural grounds in the Leesa Meldrum case. In 2000, a federal court had ruled that Victoria's existing ban on all single women and lesbians using IVF constituted sex discrimination. Victoria's government announced changes to its IVF law in 2007 eliminating remaining restrictions on fertile single women and lesbians, leaving South Australia as the only state maintaining such restrictions.
United States
Despite strong popular support (7 out of 10 adults consider IVF access a good thing and 67% believe that health insurance plans should cover IVF), IVF can involve complicated legal issues and has become a contentious issue in US politics. Federal regulations include screening requirements and restrictions on donations, but these generally do not affect heterosexually intimate partners. Doctors may be required to provide treatments to unmarried or LGBTQ couples under non-discrimination laws, as for example in California. The state of Tennessee proposed a bill in 2009 that would have defined donor IVF as adoption. During the same session, another bill proposed barring adoption from any unmarried and cohabiting couple, and activist groups stated that passing the first bill would effectively stop unmarried women from using IVF. Neither of these bills passed.
In 2023, the Practice Committee of the American Society for Reproductive Medicine (ASRM) updated its guidelines for the definition of "infertility" to include those who need medical interventions "in order to achieve a successful pregnancy either as an individual or with a partner." In many states, legal and financial decisions about provision of infertility treatments reference this "official" definition. On September 29, 2024, California Governor Gavin Newsom signed SB 729, legislation that aligns with the ASRM definition of "infertility".
In the United States, much of the opposition to the use of IVF is associated with the anti-abortion movement, evangelicals, and denominations such as the Southern Baptists. Current legal opposition to IVF and other fertility treatment access has stemmed from recent court rulings regarding women's reproductive healthcare. In the 2022 Dobbs v. Jackson Women's Health Organization decision, the U.S. Supreme Court overturned the 1973 Roe v. Wade decision, which had federally protected the right to abortion. The 2024 Alabama Supreme Court decision regarding IVF has since threatened IVF access and legality in the U.S. Frozen embryos at an IVF clinic were accidentally destroyed, resulting in a lawsuit in which the attorneys for the plaintiffs sought damages under the Wrongful Death of a Minor Act. The court ruled in favor of the plaintiffs, setting a state-level precedent that embryos and fetuses have the same rights as minors, regardless of whether they are in utero or not. This has created confusion over the status of unused embryos and questions surrounding when life begins. After the court's decision, numerous IVF clinics in Alabama halted IVF treatment services for fear of the civil and criminal liability associated with the new rights granted to embryos. Since then, bills proposing embryonic personhood have been introduced in 13 other states, creating fear of further state restrictions. The ruling raised concerns from The National Infertility Association and the American Society for Reproductive Medicine that Alabama's bans on abortion might prohibit IVF as well, and the University of Alabama at Birmingham health system paused IVF treatments. Eight days later the Alabama legislature voted to protect IVF providers and patients from criminal or civil liability.
The Right to IVF Act, federal legislation that would have codified a right to fertility treatments and provided insurance coverage for in vitro fertilisation treatments, was twice brought to a vote in the Senate in 2024. Both times it was blocked by Senate Republicans, of whom only Lisa Murkowski and Susan Collins voted to move the bill forward.
Few American courts have addressed the issue of the "property" status of a frozen embryo. This issue might arise in the context of a divorce case, in which a court would need to determine which spouse would be able to decide the disposition of the embryos. It could also arise in the context of a dispute between a sperm donor and egg donor, even if they were unmarried. In 2015, an Illinois court held that such disputes could be decided by reference to any contract between the parents-to-be. In the absence of a contract, the court would weigh the relative interests of the parties.
Alternatives
Some alternatives to IVF are:
Artificial insemination, including intracervical insemination and intrauterine insemination of semen. It requires that a woman ovulates, but is a relatively simple procedure, and can be used in the home for self-insemination without medical practitioner assistance. Beneficiaries include single people who desire to give birth to their own child, people in lesbian relationships, and women in heterosexual relationships whose male partner is infertile or has a physical impairment that prevents full intercourse from taking place.
Ovulation induction (in the sense of medical treatment aiming for the development of one or two ovulatory follicles) is an alternative for people with anovulation or oligoovulation, since it is less expensive and easier to control. It generally involves antiestrogens such as clomifene citrate or letrozole, and is followed by natural or artificial insemination.
Surrogacy, the process in which a surrogate agrees to bear a child for another person or persons, who will become the child's parent(s) after birth. People may seek a surrogacy arrangement when pregnancy is medically impossible, when pregnancy risks are too dangerous for the intended gestational carrier, or when a single man or a male couple wish to have a child.
Adoption whereby a person assumes the parenting of another, usually a child, from that person's biological or legal parent or parents.
See also
Semen cryopreservation
Evans v United Kingdom, a key case at the European Court of Human Rights
Sex selection
Stem cell controversy
Reciprocal IVF
Test Tube Babies (film)
References
Further reading
External links
Fertility
Female genital procedures
Cryobiology
Fertility medicine
Obstetrics
Human pregnancy
Reproduction
British inventions
1977 introductions
Egg donation
Sperm donation | In vitro fertilisation | [
"Physics",
"Chemistry",
"Biology"
] | 18,173 | [
"Physical phenomena",
"Phase transitions",
"Behavior",
"Reproduction",
"Biological interactions",
"Cryobiology",
"Biochemistry"
] |
57,980 | https://en.wikipedia.org/wiki/Shortwave%20radio | Shortwave radio is radio transmission using radio frequencies in the shortwave bands (SW). There is no official definition of the band range, but it always includes all of the high frequency band (HF), which extends from 3 to 30 MHz (approximately 100 to 10 metres in wavelength). It lies between the medium frequency band (MF) and the bottom of the VHF band.
Radio waves in the shortwave band can be reflected or refracted by a layer of electrically charged atoms in the atmosphere called the ionosphere. Short waves directed at an angle into the sky can therefore be reflected back to Earth at great distances, beyond the horizon; this is called skywave or "skip" propagation. Thus shortwave radio can be used for communication over very long distances, in contrast to radio waves of higher frequency, which travel in straight lines (line-of-sight propagation) and are generally limited by the visual horizon, about 64 km (40 miles).
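The contrast between skywave and line-of-sight range can be made concrete with the standard geometric-horizon approximation d ≈ 3.57(√h_t + √h_r) km, with antenna heights in metres. A minimal Python sketch (the 80 m mast heights are illustrative assumptions, chosen to reproduce the roughly 64 km figure above):

import math

def horizon_km(h_tx_m: float, h_rx_m: float = 0.0) -> float:
    # Geometric line-of-sight horizon between two elevated antennas
    return 3.57 * (math.sqrt(h_tx_m) + math.sqrt(h_rx_m))

print(round(horizon_km(80, 80)))  # ~64 km: the line-of-sight limit
# Skywave, by contrast, can span thousands of km per ionospheric hop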
Shortwave broadcasts of radio programs played an important role in international broadcasting for many decades, serving both to provide news and information and as a propaganda tool for an international audience. The heyday of international shortwave broadcasting was during the Cold War between 1960 and 1990.
With the wide implementation of other technologies for the long-distance distribution of radio programs, such as satellite radio, cable broadcasting and IP-based transmissions, shortwave broadcasting lost importance. Initiatives for the digitization of broadcasting did not bear fruit either, and relatively few broadcasters continue to broadcast programs on shortwave.
However, shortwave remains important in war zones, such as in the Russo-Ukrainian war, and shortwave broadcasts can be transmitted over thousands of miles from a single transmitter, making it difficult for government authorities to censor them. Shortwave radio is also often used by aircraft.
History
Development
The name "shortwave" originated during the beginning of radio in the early 20th century, when the radio spectrum was divided into long wave (LW), medium wave (MW), and short wave (SW) bands based on the length of the wave. Shortwave radio received its name because the wavelengths in this band are shorter than 200 m (1,500 kHz) which marked the original upper limit of the medium frequency band first used for radio communications. The broadcast medium wave band now extends above the 200 m / 1,500 kHz limit.
Early long-distance radio telegraphy used long waves, below 300 kilohertz (kHz) / above 1000 m. The drawbacks to this system included a very limited spectrum available for long-distance communication, and the very expensive transmitters, receivers and gigantic antennas. Long waves are also difficult to beam directionally, resulting in a major loss of power over long distances. Prior to the 1920s, the shortwave frequencies above 1.5 MHz were regarded as useless for long-distance communication and were designated in many countries for amateur use.
Guglielmo Marconi, pioneer of radio, commissioned his assistant Charles Samuel Franklin to carry out a large-scale study into the transmission characteristics of short-wavelength waves and to determine their suitability for long-distance transmissions. Franklin rigged up a large antenna at Poldhu Wireless Station, Cornwall, running on 25 kW of power. In June and July 1923, wireless transmissions were completed during nights on 97 meters (about 3 MHz) from Poldhu to Marconi's yacht Elettra in the Cape Verde Islands.
In September 1924, Marconi arranged for transmissions to be made day and night on 32 meters (about 9.4 MHz) from Poldhu to his yacht in the harbour at Beirut, to which he had sailed, and was "astonished" to find he could receive signals "throughout the day". Franklin went on to refine the directional transmission by inventing the curtain array aerial system. In July 1924, Marconi entered into contracts with the British General Post Office (GPO) to install high-speed shortwave telegraphy circuits from London to Australia, India, South Africa and Canada as the main element of the Imperial Wireless Chain. The UK-to-Canada shortwave "Beam Wireless Service" went into commercial operation on 25 October 1926. Beam Wireless Services from the UK to Australia, South Africa and India went into service in 1927.
Shortwave communications began to grow rapidly in the 1920s. By 1928, more than half of long-distance communications had moved from transoceanic cables and longwave wireless services to shortwave, and the overall volume of transoceanic shortwave communications had vastly increased. Shortwave stations had cost and efficiency advantages over massive longwave wireless installations. However, some commercial longwave communications stations remained in use until the 1960s. Long-distance radio circuits also reduced the need for new cables, although the cables maintained their advantages of high security and a much more reliable and better-quality signal than shortwave.
The cable companies began to lose large sums of money in 1927. A serious financial crisis threatened viability of cable companies that were vital to strategic British interests. The British government convened the Imperial Wireless and Cable Conference in 1928 "to examine the situation that had arisen as a result of the competition of Beam Wireless with the Cable Services". It recommended and received government approval for all overseas cable and wireless resources of the Empire to be merged into one system controlled by a newly formed company in 1929, Imperial and International Communications Ltd. The name of the company was changed to Cable and Wireless Ltd. in 1934.
A resurgence of long-distance cables began in 1956 with the laying of TAT-1 across the Atlantic Ocean, the first voice frequency cable on this route. This provided 36 high-quality telephone channels and was soon followed by even higher-capacity cables all around the world. Competition from these cables soon ended the economic viability of shortwave radio for commercial communication.
Amateur use of shortwave propagation
Amateur radio operators also discovered that long-distance communication was possible on shortwave bands. Early long-distance services used surface wave propagation at very low frequencies; surface waves are increasingly attenuated along the path as the wavelength shortens below about 1,000 meters. Longer distances and higher frequencies using this method meant more signal loss. This, and the difficulties of generating and detecting higher frequencies, made discovery of shortwave propagation difficult for commercial services.
Radio amateurs may have conducted the first successful transatlantic tests in December 1921, operating in the 200 meter mediumwave band (near 1,500 kHz, inside the modern AM broadcast band), which at that time was the shortest wavelength / highest frequency available to amateur radio. In 1922 hundreds of North American amateurs were heard in Europe on 200 meters and at least 20 North American amateurs heard amateur signals from Europe. The first two-way communications between North American and Hawaiian amateurs began in 1922 at 200 meters. Although operation on wavelengths shorter than 200 meters was technically illegal (but tolerated at the time as the authorities mistakenly believed that such frequencies were useless for commercial or military use), amateurs began to experiment with those wavelengths using newly available vacuum tubes shortly after World War I.
Extreme interference at the longer edge of the 150–200 meter band – the official wavelengths allocated to amateurs by the Second National Radio Conference in 1923 – forced amateurs to shift to shorter and shorter wavelengths; however, amateurs were limited by regulation to wavelengths longer than 150 meters (2 MHz). A few fortunate amateurs who obtained special permission for experimental communications at wavelengths shorter than 150 meters completed hundreds of long-distance two-way contacts on 100 meters (3 MHz) in 1923 including the first transatlantic two-way contacts.
By 1924 many additional specially licensed amateurs were routinely making transoceanic contacts at distances of 6,000 miles (9,600 km) and more. On 21 September 1924 several amateurs in California completed two-way contacts with an amateur in New Zealand. On 19 October amateurs in New Zealand and England completed a 90 minute two-way contact nearly halfway around the world. On 10 October the Third National Radio Conference made three shortwave bands available to U.S. amateurs at 80 meters (3.75 MHz), 40 meters (7 MHz) and 20 meters (14 MHz). These were allocated worldwide, while the 10 meter band (28 MHz) was created by the Washington International Radiotelegraph Conference on 25 November 1927. The 15 meter band (21 MHz) was opened to amateurs in the United States on 1 May 1952.
Propagation characteristics
Shortwave radio frequency energy is capable of reaching any location on the Earth because it can be reflected back to Earth by the ionosphere (a phenomenon known as "skywave propagation"). A typical phenomenon of shortwave propagation is the occurrence of a skip zone where reception fails. With a fixed working frequency, large changes in ionospheric conditions may create skip zones at night.
As a result of the multi-layer structure of the ionosphere, propagation often occurs simultaneously on different paths, scattered by the "E" or "F" layer and with different numbers of hops, a phenomenon that may be problematic for certain techniques. Particularly for lower frequencies of the shortwave band, absorption of radio frequency energy in the lowest ionospheric layer, the "D" layer, may impose a serious limit. This is due to collisions of electrons with neutral molecules, which absorb some of a radio frequency's energy and convert it to heat. Predictions of skywave propagation depend on the following factors (a toy illustration follows the list):
The distance from the transmitter to the target receiver.
Time of day. During the day, frequencies higher than approximately 12 MHz can travel longer distances than lower ones. At night, this property is reversed.
With lower frequencies the dependence on the time of day is mainly due to the lowest ionospheric layer, the "D" layer, forming only during the day when photons from the sun break up atoms into ions and free electrons.
Season. During the winter months of the Northern or Southern hemispheres, the AM/MW broadcast band tends to be more favorable because of longer hours of darkness.
Solar flares produce a large increase in D region ionization – so great, sometimes for periods of several minutes, that skywave propagation is nonexistent.
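As a toy illustration of the day/night dependence described in the list above (this is a caricature for exposition, not a propagation model; the ~12 MHz threshold is taken from the list):

def band_preference(is_daytime: bool) -> str:
    # By day the ionized D layer absorbs lower HF frequencies, so higher
    # bands carry farther; at night the D layer fades and the rule reverses.
    if is_daytime:
        return "frequencies above ~12 MHz travel farther"
    return "frequencies below ~12 MHz travel farther"

print(band_preference(True))
print(band_preference(False))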
Types of modulation
Several different types of modulation are used to incorporate information in a short-wave signal.
Audio modes
AM
Amplitude modulation is the simplest type and the most commonly used for shortwave broadcasting. The instantaneous amplitude of the carrier is controlled by the amplitude of the signal (speech, or music, for example). At the receiver, a simple detector recovers the desired modulation signal from the carrier.
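A minimal numerical sketch of amplitude modulation and a simple envelope detector, using NumPy (the tone frequencies and modulation index here are illustrative assumptions):

import numpy as np

fs = 48_000                          # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)       # 10 ms of signal
fc, fm, m = 10_000, 1_000, 0.8       # carrier, message tone, modulation index

message = np.cos(2 * np.pi * fm * t)
# The instantaneous amplitude of the carrier tracks the message signal
am = (1 + m * message) * np.cos(2 * np.pi * fc * t)

# A crude detector: rectify, then low-pass with a short moving average
rectified = np.abs(am)
kernel = np.ones(48) / 48            # ~1 ms window smooths away the carrier
recovered = np.convolve(rectified, kernel, mode="same")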
SSB
Single-sideband transmission is a form of amplitude modulation that in effect filters the result of modulation. An amplitude-modulated signal has frequency components both above and below the carrier frequency. If one set of these components is eliminated along with the residual carrier, only the remaining set is transmitted. This reduces the power in the transmission, as roughly two-thirds of the energy sent by an AM signal is in the carrier, which is not needed to recover the information contained in the signal. It also reduces signal bandwidth, enabling less than one-half the AM signal bandwidth to be used.
The drawback is that the receiver is more complicated, since it must re-create the carrier to recover the signal. Small errors in the detection process greatly affect the pitch of the received signal, so single sideband is not used for music or general broadcasting. Single sideband is used for long-range voice communications by ships and aircraft, citizens band, and amateur radio operators. In amateur radio operation, lower sideband (LSB) is customarily used below 10 MHz and upper sideband (USB) above 10 MHz; non-amateur services use USB regardless of frequency.
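The power budget behind SSB's advantage can be checked directly. For sinusoidal modulation at index m, each sideband carries m²/4 of the carrier power, so at m = 1 the carrier holds two-thirds of the total, as noted above. A back-of-envelope sketch:

m = 1.0                         # 100% sinusoidal modulation
p_carrier = 1.0                 # normalise carrier power to 1
p_sidebands = 2 * (m ** 2 / 4)  # two sidebands, each m^2/4 of carrier power
total = p_carrier + p_sidebands

print(p_carrier / total)        # 0.667: the carrier wastes ~2/3 of the power
print((m ** 2 / 4) / total)     # 0.167: one sideband carries all the information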
VSB
Vestigial sideband transmits the carrier and one complete sideband, but filters out most of the other sideband. It is a compromise between AM and SSB, enabling simple receivers to be used, but requires almost as much transmitter power as AM. Its main advantage is that only half the bandwidth of an AM signal is used. It is used by the Canadian standard time signal station CHU. Vestigial sideband was used for analog television and by ATSC, the digital TV system used in North America.
NFM
Narrow-band frequency modulation (NBFM or NFM) is typically used above 20 MHz. Because of the larger bandwidth required, NBFM is commonly used for VHF communication. Regulations limit the bandwidth of a signal transmitted in the HF bands, and the advantages of frequency modulation are greatest if the FM signal has a wide bandwidth. NBFM is limited to short-range transmissions due to multipath phase distortion created by the ionosphere.
DRM
Digital Radio Mondiale (DRM) is a digital modulation for use on bands below 30 MHz. It is a digital signal, like the data modes, below, but is for transmitting audio, like the analog modes above.
Data modes
CW
Continuous wave (CW) is on-and-off keying of a sine-wave carrier, used for Morse code communications and Hellschreiber facsimile-based teleprinter transmissions. It is a data mode, although often listed separately. It is typically received via lower or upper SSB modes.
RTTY, FAX, SSTV
Radioteletype, fax, digital, slow-scan television, and other systems use forms of frequency-shift keying or audio subcarriers on a shortwave carrier. These generally require special equipment to decode, such as software on a computer equipped with a sound card.
Note that on modern computer-driven systems, digital modes are typically sent by coupling a computer's sound output to the SSB input of a radio.
Users
Established users of the shortwave radio bands include:
International broadcasting to foreign audiences, primarily by government-sponsored stations carrying propaganda or international news (for example, the BBC World Service) and by religious or cultural stations: the most common use of all.
Domestic broadcasting: to widely dispersed populations with few longwave, mediumwave and FM stations serving them; or for speciality political, religious and alternative media networks; or of individual commercial and non-commercial paid broadcasts.
Oceanic air traffic control uses the HF/shortwave band for long-distance communication to aircraft over the oceans and poles, which are far beyond the range of traditional VHF frequencies. Modern systems also include satellite communications, such as ADS-C/CPDLC.
Two-way radio communications by marine and maritime HF stations, aeronautical users, and ground based stations. For example, two way shortwave communication is still used in remote regions by the Royal Flying Doctor Service of Australia.
"Utility" stations transmitting messages not intended for the general public, such as merchant shipping, marine weather, and ship-to-shore stations; for aviation weather and air-to-ground communications; for military communications; for long-distance governmental purposes, and for other non-broadcast communications.
Amateur radio operators on the 80/75, 60, 40, 30, 20, 17, 15, 12, and 10-meter bands. Licenses are granted by authorized government agencies.
Time signal and radio clock stations: In North America, WWV radio and WWVH radio transmit at these frequencies: 2.5 MHz, 5 MHz, 10 MHz, and 15 MHz; and WWV also transmits on 20 MHz. The CHU radio station in Canada transmits on the following frequencies: 3.33 MHz, 7.85 MHz, and 14.67 MHz. Other similar radio clock stations transmit on various shortwave and longwave frequencies around the world. The shortwave transmissions are primarily intended for human reception, while the longwave stations are generally used for automatic synchronization of watches and clocks.
Sporadic or non-traditional users of the shortwave bands may include:
Clandestine stations. These are stations that broadcast on behalf of various political movements such as rebel or insurrectionist forces. They may advocate civil war, insurrection, or rebellion against the government of the country to which they are directed. Clandestine broadcasts may emanate from transmitters located in rebel-controlled territory or from outside the country entirely, using another country's transmission facilities.
Numbers stations. These stations regularly appear and disappear all over the shortwave radio band, but are unlicensed and untraceable. It is believed that numbers stations are operated by government agencies and are used to communicate with clandestine operatives working within foreign countries; however, no definitive proof of such use has emerged. Because the vast majority of these broadcasts contain nothing but the recitation of blocks of numbers, in various languages, with occasional bursts of music, they have become known colloquially as "number stations". Perhaps the most noted is the "Lincolnshire Poacher" station, named after the 18th-century English folk song that is played just before the sequences of numbers.
Unlicensed two way radio activity by individuals such as taxi drivers, bus drivers and fishermen in various countries can be heard on various shortwave frequencies. Such unlicensed transmissions by "pirate" or "bootleg" two way radio operators can often cause signal interference to licensed stations. Unlicensed business radio (taxis, trucking companies, among numerous others) land mobile systems may be found in the 20-30 MHz region while unlicensed marine mobile and other similar users may be found over the entire shortwave range.
Pirate radio broadcasters who feature programming such as music, talk and other entertainment, can be heard sporadically and in various modes on the shortwave bands. Pirate broadcasters take advantage of the better propagation characteristics to achieve more range compared to the AM or FM broadcast bands.
Over-the-horizon radar: From 1976 to 1989, the Soviet Union's Russian Woodpecker over-the-horizon radar system blotted out numerous shortwave broadcasts daily.
Ionospheric heaters used for scientific experimentation such as the High Frequency Active Auroral Research Program in Alaska, and the Sura ionospheric heating facility in Russia.
Shortwave broadcasting
See International broadcasting for details on the history and practice of broadcasting to foreign audiences.
See List of shortwave radio broadcasters for a list of international and domestic shortwave radio broadcasters.
See Shortwave relay station for the actual kinds of integrated technologies used to bring high power signals to listeners.
Frequency allocations
The World Radiocommunication Conference (WRC), organized under the auspices of the International Telecommunication Union, allocates bands for various services in conferences every few years. The last WRC took place in 2023.
As of WRC-97 in 1997, a set of bands was allocated for international broadcasting. AM shortwave broadcasting channels are allocated with a 5 kHz separation for traditional analog audio broadcasting.
Although countries generally follow the assigned bands, there may be small differences between countries or regions. For example, in the official bandplan of the Netherlands, the 49 m band starts at 5.95 MHz, the 41 m band ends at 7.45 MHz, the 11 m band starts at 25.67 MHz, and the 120 m, 90 m, and 60 m bands are absent altogether. International broadcasters sometimes operate outside the normal WRC-allocated bands or use off-channel frequencies. This is done for practical reasons, or to attract attention in crowded bands (60 m, 49 m, 40 m, 41 m, 31 m, 25 m).
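The metre-band names map to frequency via λ = c/f; a quick check of the Dutch bandplan figures quoted above:

def metres(freq_mhz: float) -> float:
    return 299.792458 / freq_mhz    # wavelength in metres for f in MHz

print(round(metres(5.95), 1))   # 50.4 m: start of the "49 m" band
print(round(metres(7.45), 1))   # 40.2 m: end of the "41 m" band
print(round(metres(25.67), 1))  # 11.7 m: start of the "11 m" band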
The new digital audio broadcasting format for shortwave, DRM, operates in 10 kHz or 20 kHz channels. There are ongoing discussions with respect to specific band allocations for DRM, as it is mainly transmitted in the 10 kHz format.
The power used by shortwave transmitters ranges from less than one watt for some experimental and amateur radio transmissions to 500 kilowatts and higher for intercontinental broadcasters and over-the-horizon radar. Shortwave transmitting centers often use specialized antenna designs (like the ALLISS antenna technology) to concentrate radio energy at the target area.
Advantages
Shortwave possesses a number of advantages over newer technologies:
Difficulty of censoring programming by authorities in restrictive countries. Unlike their relative ease in monitoring and censoring the Internet, over-the-air television, cable television, satellite television, satellite radio, mobile phones, landline phones, and satellite phones, government authorities face technical difficulties monitoring which stations (sites) are being listened to (accessed). For example, during the attempted coup against Soviet President Mikhail Gorbachev, when his access to communications was limited (e.g. his phones, television and radio were cut off), Gorbachev was able to stay informed by means of the BBC World Service on shortwave.
Low-cost shortwave radios are widely available in all but the most repressive countries in the world. Simple shortwave regenerative receivers can be easily built with a few parts.
In many countries (particularly in most developing nations and in the Eastern bloc during the Cold War era) ownership of shortwave receivers has been and continues to be widespread (in many of these countries some domestic stations also used shortwave).
Many newer shortwave receivers are portable and can be battery-operated, making them useful in difficult circumstances. Newer technology includes hand-cranked radios which provide power without batteries.
Shortwave radios can be used in situations where over-the-air television, cable television, satellite television, landline phones, mobile phones, satellite phones, satellite communications, or the Internet is temporarily, long-term or permanently unavailable (or unaffordable).
Shortwave radio travels much farther than broadcast FM (88–108 MHz). Shortwave broadcasts can be easily transmitted over a distance of several thousand miles, including from one continent to another.
Particularly in tropical regions, SW is somewhat less prone to interference from thunderstorms than medium wave radio, and is able to cover a large geographic area with relatively low power (and hence cost). Therefore, in many of these countries it is widely used for domestic broadcasting.
Very little infrastructure is required for long-distance two-way communications using shortwave radio. All one needs is a pair of transceivers, each with an antenna, and a source of energy (such as a battery, a portable generator, or the electrical grid). This makes shortwave radio one of the most robust means of communications, which can be disrupted only by interference or bad ionospheric conditions. Modern digital transmission modes such as MFSK and Olivia are even more robust, allowing successful reception of signals well below the noise floor of a conventional receiver.
Disadvantages
Shortwave radio's benefits are sometimes regarded as being outweighed by its drawbacks, including:
In most Western countries, shortwave radio ownership is usually limited to enthusiasts, since most new standard radios do not receive the shortwave band. Therefore, Western audiences are limited.
In the developed world, shortwave reception is very difficult in urban areas because of excessive noise from switched-mode power adapters, fluorescent or LED light sources, internet modems and routers, computers and many other sources of radio interference.
Audio quality may be limited due to interference and the modes that are used.
Shortwave listening
The Asia-Pacific Telecommunity estimated that there were approximately 600 million shortwave broadcast-radio receivers in use in 2002. WWCR claims that there are 1.5 billion shortwave receivers worldwide.
Many hobbyists listen to shortwave broadcasters. In some cases, the goal is to hear as many stations from as many countries as possible (DXing); others listen to specialized shortwave utility, or "ute", transmissions such as maritime, naval, aviation, or military signals. Others focus on intelligence signals from numbers stations, which transmit unusual broadcasts usually intended for intelligence operations, or on the two-way communications of amateur radio operators. Some shortwave listeners behave analogously to "lurkers" on the Internet, in that they listen only and never attempt to send out their own signals. Other listeners participate in clubs, actively send and receive QSL cards, or become involved with amateur radio and start transmitting on their own.
Many listeners tune the shortwave bands for the programmes of stations broadcasting to a general audience (such as Radio Taiwan International, China Radio International, Voice of America, Radio France Internationale, BBC World Service, Voice of Korea, Radio Free Sarawak etc.). Today, through the evolution of the Internet, the hobbyist can listen to shortwave signals via remotely controlled or web controlled shortwave receivers around the world, even without owning a shortwave radio. Many international broadcasters offer live streaming audio on their websites and a number have closed their shortwave service entirely, or severely curtailed it, in favour of internet transmission.
Shortwave listeners, or SWLs, can obtain QSL cards from broadcasters, utility stations or amateur radio operators as trophies of the hobby. Some stations even give out special certificates, pennants, stickers and other tokens and promotional materials to shortwave listeners.
Shortwave broadcasts and music
Some musicians have been attracted to the unique aural characteristics of shortwave radio which – due to the nature of amplitude modulation, varying propagation conditions, and the presence of interference – generally has lower fidelity than local broadcasts (particularly via FM stations). Shortwave transmissions often have bursts of distortion, and "hollow" sounding loss of clarity at certain aural frequencies, altering the harmonics of natural sound and creating at times a strange "spacey" quality due to echoes and phase distortion. Evocations of shortwave reception distortions have been incorporated into rock and classical compositions, by means of delays or feedback loops, equalizers, or even playing shortwave radios as live instruments. Snippets of broadcasts have been mixed into electronic sound collages and live musical instruments, by means of analogue tape loops or digital samples. Sometimes the sounds of instruments and existing musical recordings are altered by remixing or equalizing, with various distortions added, to replicate the garbled effects of shortwave radio reception.
The first attempts by serious composers to incorporate radio effects into music may be those of the Russian physicist and musician Léon Theremin, who perfected a form of radio oscillator as a musical instrument in 1928 (regenerative circuits in radios of the time were prone to breaking into oscillation, adding various tonal harmonics to music and speech); and in the same year, the development of a French instrument called the Ondes Martenot by its inventor Maurice Martenot, a French cellist and former wireless telegrapher. Karlheinz Stockhausen used shortwave radio and effects in works including Hymnen (1966–1967), Kurzwellen (1968) – adapted for the Beethoven Bicentennial in Opus 1970 with filtered and distorted snippets of Beethoven pieces – Spiral (1968), Pole, Expo (both 1969–1970), and Michaelion (1997).
Cypriot composer Yannis Kyriakides incorporated shortwave numbers station transmissions in his 1999 ConSPIracy cantata.
Holger Czukay, a student of Stockhausen, was one of the first to use shortwave in a rock music context. In 1975, German electronic music band Kraftwerk recorded a full length concept album around simulated radiowave and shortwave sounds, entitled Radio-Activity. The The's Radio Cineola monthly broadcasts drew heavily on shortwave radio sound.
Shortwave's future
The development of direct broadcasts from satellites has reduced the demand for shortwave receiver hardware, but there are still a great number of shortwave broadcasters. A new digital radio technology, Digital Radio Mondiale (DRM), is expected to improve the quality of shortwave audio from very poor to adequate. The future of shortwave radio is threatened by the rise of power line communication (PLC), also known as Broadband over Power Lines (BPL), which uses a data stream transmitted over unshielded power lines. As the BPL frequencies used overlap with shortwave bands, severe distortions can make listening to analog shortwave radio signals near power lines difficult or impossible.
Commentators have offered differing assessments of shortwave's future, among them Andy Sennitt, former editor of the World Radio TV Handbook; Thomas Witherspoon, editor of the shortwave news site SWLingPost.com; and, in 2018, Nigel Fry, head of Distribution for the BBC World Service Group.
During the 2022 Russian invasion of Ukraine, the BBC World Service launched two new shortwave frequencies for listeners in Ukraine and Russia, broadcasting English-language news updates in an effort to avoid censorship by the Russian state. American commercial shortwave broadcasters WTWW and WRMI also redirected much of their programming to Ukraine.
See also
ALLISS–a very large rotatable antenna system used in international broadcasting
List of American shortwave broadcasters
List of European short wave transmitters
List of shortwave radio broadcasters
References
External links
International broadcasting
Radio
Guglielmo Marconi
Radio spectrum
Short wave radio | Shortwave radio | [
"Physics"
] | 5,898 | [
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
57,992 | https://en.wikipedia.org/wiki/Monocoque | Monocoque ( ), also called structural skin, is a structural system in which loads are supported by an object's external skin, in a manner similar to an egg shell. The word monocoque is a French term for "single shell".
First used for boats, a true monocoque carries both tensile and compressive forces within the skin and can be recognised by the absence of a load-carrying internal frame. Few metal aircraft other than those with milled skins can strictly be regarded as pure monocoques, as they use a metal shell or sheeting reinforced with frames riveted to the skin, but most wooden aircraft are described as monocoques, even though they also incorporate frames.
By contrast, a semi-monocoque is a hybrid combining a tensile stressed skin and a compressive structure made up of longerons and ribs or frames. Other semi-monocoques, not to be confused with true monocoques, include vehicle unibodies, which tend to be composites, and inflatable shells or balloon tanks, both of which are pressure stabilised.
Aircraft
Early aircraft were constructed using frames, typically of wood or steel tubing, which could then be covered (or skinned) with fabric such as Irish linen or cotton. The fabric made a minor structural contribution in tension but none in compression and was there for aerodynamic reasons only. By considering the structure as a whole and not just the sum of its parts, monocoque construction integrated the skin and frame into a single load-bearing shell with significant improvements to strength and weight.
To make the shell, thin strips of wood were laminated into a three dimensional shape; a technique adopted from boat hull construction. One of the earliest examples was the Deperdussin Monocoque racer in 1912, which used a laminated fuselage made up of three layers of glued poplar veneer, which provided both the external skin and the main load-bearing structure. This also produced a smoother surface and reduced drag so effectively that it was able to win most of the races it was entered into.
This style of construction was further developed in Germany by LFG Roland using the patented Wickelrumpf (wrapped hull) form later licensed by them to Pfalz Flugzeugwerke who used it on several fighter aircraft. Each half of the fuselage shell was formed over a male mold using two layers of plywood strips with fabric wrapping between them. The early plywood used was prone to damage from moisture and delamination.
While all-metal aircraft such as the Junkers J 1 had appeared as early as 1915, these were not monocoques but added a metal skin to an underlying framework.
The first metal monocoques were built by Claudius Dornier while working for Zeppelin-Lindau. He had to overcome a number of problems, not least the quality of aluminium alloys: those strong enough to use as structural materials frequently formed layers instead of presenting a uniform material. After failed attempts with several large flying boats in which a few components were monocoques, he built the Zeppelin-Lindau V1 to test a monocoque fuselage. Although it crashed, he learned a great deal from its construction. The Dornier-Zeppelin D.I was built in 1918 and, although too late for operational service during the war, was the first all-metal monocoque aircraft to enter production.
In parallel to Dornier, Zeppelin also employed Adolf Rohrbach, who built the Zeppelin-Staaken E-4/20, which when it flew in 1920 became the first multi-engined monocoque airliner, before being destroyed under orders of the Inter-Allied Commission. At the end of WWI, the Inter-Allied Technical Commission published details of the last Zeppelin-Lindau flying boat, showing its monocoque construction. In the UK, Oswald Short built a number of experimental aircraft with metal monocoque fuselages, starting with the 1920 Short Silver Streak, in an attempt to convince the air ministry of its superiority over wood. Despite its advantages, the aluminium alloy monocoque would not become common until the mid 1930s, as a result of a number of factors including design conservatism and production setup costs. Short would eventually prove the merits of the construction method with a series of flying boats, whose metal hulls did not absorb water as wooden hulls did, greatly improving performance. In the United States, Northrop was a major pioneer, introducing techniques used by his own company and Douglas with the Northrop Alpha.
Vehicles
Race cars
In motor racing, the safety of the driver depends on the car body, which must meet stringent regulations, and only a few cars have been built with monocoque structures. An aluminium alloy monocoque chassis was first used in the 1962 Lotus 25 Formula 1 race car, and McLaren was the first to use carbon-fiber-reinforced polymers to construct the monocoque of the 1981 McLaren MP4/1. In 1990 the Jaguar XJR-15 became the first production car with a carbon-fiber monocoque.
Road cars
The term monocoque is frequently misapplied to unibody cars. Commercial car bodies are almost never true monocoques but instead use the unibody system (also referred to as unitary construction, unitary body–chassis or body–frame integral construction), in which the body of the vehicle, its floor pan, and chassis form a single structure, while the skin adds relatively little strength or stiffness.
Armoured vehicles
Some armoured fighting vehicles use a monocoque structure with a body shell built up from armour plates, rather than attaching them to a frame. This reduces weight for a given amount of armour. Examples include the German TPz Fuchs and RG-33.
Two-wheeled vehicles
French industrialist and engineer Georges Roy attempted in the 1920s to improve on the bicycle-inspired motorcycle frames of the day, which lacked rigidity. This limited their handling and therefore performance. He applied for a patent in 1926, and at the 1929 Paris Automotive Show unveiled his new motorcycle, the Art-Deco styled 1930 Majestic. Its new type of monocoque body solved the problems he had addressed, and along with better rigidity it did double-duty, as frame and bodywork provided some protection from the elements. Strictly considered, it was more of a semi-monocoque, as it used a box-section, pressed-steel frame with twin side rails riveted together via crossmembers, along with floor pans and rear and front bulkheads.
A Piatti light scooter was produced in the 1950s using a monocoque hollow shell of sheet-steel pressings welded together, into which the engine and transmission were installed from underneath. The machine could be tipped onto its side, resting on the bolt-on footboards for mechanical access.
A monocoque framed scooter was produced by Yamaha from 1960–1962. Model MF-1 was powered by a 50 cc engine with a three-speed transmission and a fuel tank incorporated into the frame.
A monocoque-framed motorcycle was developed by Spanish manufacturer Ossa for the 1967 Grand Prix motorcycle racing season. Although the single-cylinder Ossa had less power than its rivals, it was lighter and its monocoque frame was much stiffer than conventional motorcycle frames, giving it superior agility on the racetrack. Ossa won four Grands Prix with the monocoque bike before its rider died after a crash during the 250 cc event at the 1970 Isle of Man TT, causing the Ossa factory to withdraw from Grand Prix competition.
Notable designers such as Eric Offenstadt and Dan Hanebrink created unique monocoque designs for racing in the early 1970s. The F750 event at the 1973 Isle of Man TT races was won by Peter Williams on the monocoque-framed John Player Special that he helped to design, based on the Norton Commando. Honda also experimented with the NR500, a monocoque Grand Prix racing motorcycle, in 1979. The bike had other innovative features, including an engine with oval-shaped cylinders, and eventually succumbed to the problems associated with attempting to develop too many new technologies at once. In 1987 John Britten developed the Aero-D One, featuring an exceptionally light composite monocoque chassis.
An aluminium monocoque frame was used for the first time on a mass-produced motorcycle from 2000 on Kawasaki's ZX-12R, their flagship production sportbike aimed at being the fastest production motorcycle. It was described by Cycle World in 2000 as a "monocoque backbone ... a single large diameter beam" and "Fabricated from a combination of castings and sheet-metal stampings".
Single-piece carbon fiber bicycle frames are sometimes described as monocoques; however as most use components to form a frame structure (even if molded in a single piece), these are frames not monocoques, and the pedal-cycle industry continues to refer to them as framesets.
Railroads
The GE Genesis series of locomotives, including the P40DC, P42DC and P32AC-DM, all utilize a monocoque shell.
Rockets
Various rockets have used pressure-stabilized monocoque designs, such as Atlas and Falcon 1. The Atlas was very light since a major portion of its structural support was provided by its single-wall steel balloon fuel tanks, which hold their shape while under acceleration by internal pressure. Balloon tanks are not true monocoques but act in the same way as inflatable shells. A balloon tank skin only handles tensile forces while compression is resisted by internal liquid pressure in a way similar to semi-monocoques braced by a solid frame. This becomes obvious when internal pressure is lost and the structure collapses. Monocoque tanks can also be cheaper to manufacture than more traditional orthogrids. Blue Origin's upcoming New Glenn launch vehicle will use monocoque construction on its second stage despite the mass penalty in order to reduce the cost of production. This is especially important when the stage is expendable, as with the New Glenn second stage.
See also
Backbone chassis
Body-on-frame
Coachbuilder
List of carbon fiber monocoque cars
Space frame
Thin-shell structure
Vehicle frame
References
Citations
Bibliography
Automotive chassis types
Motorcycle frames
Airship technology
Structural engineering
Aircraft components | Monocoque | [
"Engineering"
] | 2,086 | [
"Structural engineering",
"Civil engineering",
"Construction"
] |
58,005 | https://en.wikipedia.org/wiki/Airship | An airship, dirigible balloon or dirigible is a type of aerostat (lighter-than-air) aircraft that can navigate through the air flying under its own power. Aerostats use buoyancy from a lifting gas that is less dense than the surrounding air to achieve the lift needed to stay airborne.
In early dirigibles, the lifting gas used was hydrogen, due to its high lifting capacity and ready availability, but its inherent flammability led to several fatal accidents that rendered hydrogen airships obsolete. The alternative lifting gas, helium, is not flammable, but it is rare and relatively expensive. Significant amounts were first discovered in the United States, and for a while helium was only available for airship use in North America. Most airships built since the 1960s have used helium, though some have used hot air.
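The lift figures behind the hydrogen/helium trade-off follow from the ideal gas law: gross lift per cubic metre is the difference between the density of air and that of the lifting gas. A minimal sketch at sea-level standard conditions (the molar masses and constants are standard values):

R = 8.314462            # gas constant, J/(mol*K)
T, P = 288.15, 101_325  # sea-level standard temperature (K) and pressure (Pa)
MOLAR_MASS = {"air": 0.028964, "helium": 0.0040026, "hydrogen": 0.0020159}  # kg/mol

def density(gas: str) -> float:
    return P * MOLAR_MASS[gas] / (R * T)   # ideal gas law: rho = PM/(RT)

for gas in ("hydrogen", "helium"):
    lift = density("air") - density(gas)   # gross lift per cubic metre, kg
    print(gas, round(lift, 2))
# hydrogen ~1.14 kg/m^3, helium ~1.06 kg/m^3: helium gives ~93% of hydrogen's lift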
The envelope of an airship may form the gasbag, or it may contain a number of gas-filled cells. An airship also has engines, crew, and optionally also payload accommodation, typically housed in one or more gondolas suspended below the envelope.
The main types of airship are non-rigid, semi-rigid and rigid airships. Non-rigid airships, often called "blimps", rely solely on internal gas pressure to maintain the envelope shape. Semi-rigid airships maintain their shape by internal pressure, but have some form of supporting structure, such as a fixed keel, attached to it. Rigid airships have an outer structural framework that maintains the shape and carries all structural loads, while the lifting gas is contained in one or more internal gasbags or cells. Rigid airships were first flown by Count Ferdinand von Zeppelin and the vast majority of rigid airships built were manufactured by the firm he founded, Luftschiffbau Zeppelin. As a result, rigid airships are often called zeppelins.
Airships were the first aircraft capable of controlled powered flight, and were most commonly used before the 1940s; their use decreased as their capabilities were surpassed by those of aeroplanes. Their decline was accelerated by a series of high-profile accidents, including the 1930 crash and burning of the British R101 in France, the 1933 and 1935 storm-related crashes of the U.S. Navy's twin helium-filled rigid airborne aircraft carriers, the USS Akron and USS Macon respectively, and the 1937 burning of the German hydrogen-filled Hindenburg. From the 1960s, helium airships have been used where the ability to hover for a long time outweighs the need for speed and manoeuvrability, such as advertising, tourism, camera platforms, geological surveys and aerial observation.
Terminology
Airship
During the pioneer years of aeronautics, terms such as "airship", "air-ship", "air ship" and "ship of the air" meant any kind of navigable or dirigible flying machine. In 1919 Frederick Handley Page was reported as referring to "ships of the air", with smaller passenger types as "air yachts". In the 1930s, large intercontinental flying boats were also sometimes referred to as "ships of the air" or "flying-ships". Nowadays the term "airship" is used only for powered, dirigible balloons, with sub-types classified as rigid, semi-rigid or non-rigid. Semi-rigid architecture is the most recent, following advances in deformable structures and the need to reduce the weight and volume of airships. Such craft have a minimal structure that maintains the envelope's shape jointly with the overpressure of the lifting gas.
Aerostat
An aerostat is an aircraft that remains aloft using buoyancy or static lift, as opposed to the aerodyne, which obtains lift by moving through the air. Airships are a type of aerostat. The term aerostat has also been used to indicate a tethered or moored balloon as opposed to a free-floating balloon. Aerostats today are capable of lifting substantial payloads to high altitudes above sea level. They can also stay in the air for extended periods, particularly when powered by an on-board generator or when the tether contains electrical conductors. Due to this capability, aerostats can be used as platforms for telecommunication services. For instance, Platform Wireless International Corporation announced in 2001 that it would use a tethered airborne payload to deliver cellular phone service to a region in Brazil. The European Union's ABSOLUTE project was also reportedly exploring the use of tethered aerostat stations to provide telecommunications during disaster response.
Blimp
A blimp is a non-rigid aerostat. In British usage it refers to any non-rigid aerostat, including barrage balloons and other kite balloons, having a streamlined shape and stabilising tail fins. Some blimps may be powered dirigibles, as in early versions of the Goodyear Blimp. Later Goodyear dirigibles, though technically semi-rigid airships, have still been called "blimps" by the company.
Zeppelin
The term zeppelin originally referred to airships manufactured by the German Zeppelin Company, which built and operated the first rigid airships in the early years of the twentieth century. The initials LZ, for Luftschiff Zeppelin (German for "Zeppelin airship"), usually prefixed their craft's serial identifiers.
Streamlined rigid (or semi-rigid) airships are often referred to as "Zeppelins", because of the fame that this company acquired due to the number of airships it produced, although its early rival was the Parseval semi-rigid design.
Hybrid airship
Hybrid airships fly with a positive aerostatic contribution, usually equal to the empty weight of the system, while the variable payload is sustained by propulsive or aerodynamic lift.
Classification
Airships are classified according to their method of construction into rigid, semi-rigid and non-rigid types.
Rigid
A rigid airship has a rigid framework covered by an outer skin or envelope. The interior contains one or more gasbags, cells or balloons to provide lift. Rigid airships are typically unpressurised and can be made to virtually any size. Most, but not all, of the German Zeppelin airships have been of this type.
Semi-rigid
A semi-rigid airship has some kind of supporting structure but the main envelope is held in shape by the internal pressure of the lifting gas. Typically the airship has an extended, usually articulated keel running along the bottom of the envelope to stop it kinking in the middle by distributing suspension loads into the envelope, while also allowing lower envelope pressures.
Non-rigid
Non-rigid airships are often called "blimps". Most, but not all, of the American Goodyear airships have been blimps.
A non-rigid airship relies entirely on internal gas pressure to retain its shape during flight. Unlike the rigid design, the non-rigid airship's gas envelope has no compartments. However, it still typically has smaller internal bags containing air (ballonets). As altitude is increased, the lifting gas expands and air from the ballonets is expelled through valves to maintain the hull's shape. To return to sea level, the process is reversed: air is forced back into the ballonets by scooping air from the engine exhaust and using auxiliary blowers.
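The valving described above has a hard limit, often called the pressure height: the altitude at which the expanding lifting gas fills the whole envelope and the ballonets are empty. The sketch below uses a toy isothermal barometric model to show the effect; the scale height and all airship figures are illustrative assumptions, not data from any real craft.

```python
import math

# Isothermal barometric model (illustrative): ambient pressure falls
# roughly exponentially with altitude, and the lifting gas expands to match.
SCALE_HEIGHT_M = 8000.0  # approximate atmospheric scale height

def gas_volume_m3(volume_at_sea_level_m3, altitude_m):
    """Volume the lifting gas occupies after expanding at altitude."""
    return volume_at_sea_level_m3 * math.exp(altitude_m / SCALE_HEIGHT_M)

# A hypothetical blimp launched with 4,500 m^3 of helium in a
# 5,000 m^3 envelope; the remaining 500 m^3 is ballonet air.
envelope, helium_at_sl = 5000.0, 4500.0
for alt in (0, 500, 1000):
    v = gas_volume_m3(helium_at_sl, alt)
    print(f"{alt:>4} m: helium occupies {v:.0f} m^3, "
          f"ballonet air remaining {max(envelope - v, 0):.0f} m^3")
```

In this toy model the ballonets run dry at about 840 m; climbing any higher would force the airship to vent lifting gas, the situation described later in this article for the USS Macon when it was driven over its pressure height.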
Construction
Envelope
The envelope is the structure which contains the buoyant gas. Envelopes in the early 19th century were made from goldbeater's skin, selected for its low weight, relatively high strength, and impermeability compared to paper or linen. By the 1920s, cotton fabric treated with natural rubber had become the predominant envelope material. Natural rubber was succeeded by neoprene in the 1930s, and by nylon and PET in the 1950s. A few airships have been metal-clad, the most successful of which was the Detroit-built ZMC-2, which logged 2,265 hours of flight time from 1929 to 1941 before being scrapped, as it was considered too small for operational use on anti-submarine patrols.
Exact determination of the pressure distribution over an airship envelope remains a difficult problem, one that has attracted major scientists such as Theodore von Kármán.
The envelope may contain ballonets (see below), allowing the crew to adjust the effective density of the buoyant gas by changing the envelope volume available to it.
Ballonet
A ballonet is an air bag inside the outer envelope of an airship which, when inflated, reduces the volume available for the lifting gas, making it more dense. Because air is also denser than the lifting gas, inflating the ballonet reduces the overall lift, while deflating it increases lift. In this way, the ballonet can be used to adjust the lift as required by controlling the buoyancy. By inflating or deflating ballonets strategically, the pilot can control the airship's altitude and attitude.
Ballonets may typically be used in non-rigid or semi-rigid airships, commonly with multiple ballonets located both fore and aft to maintain balance and to control the pitch of the airship.
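The lift arithmetic behind this is straightforward: air inside the ballonets is (nearly) neutrally buoyant, so inflating them shrinks the helium volume that actually lifts. A minimal sketch, with all figures hypothetical rather than drawn from any particular airship:

```python
# Illustrative ballonet trim model. Air in the ballonets is (nearly)
# neutrally buoyant, so only the helium-filled remainder of the
# envelope contributes lift. All values below are hypothetical.
RHO_AIR = 1.225     # ambient air density at sea level (15 C), kg/m^3
RHO_HELIUM = 0.169  # helium density at the same conditions, kg/m^3

def net_lift_kg(envelope_m3, ballonet_fraction, deadweight_kg):
    """Net lift (kg) with ballonets filling the given envelope fraction."""
    helium_m3 = envelope_m3 * (1.0 - ballonet_fraction)
    gross_kg = helium_m3 * (RHO_AIR - RHO_HELIUM)
    return gross_kg - deadweight_kg

# A hypothetical 5,000 m^3 blimp carrying 3,800 kg of structure and load:
for fraction in (0.05, 0.15, 0.25):
    print(f"ballonets at {fraction:.0%}: "
          f"net lift {net_lift_kg(5000, fraction, 3800):+.0f} kg")
```

Inflating the ballonets from 5% to 25% of the envelope in this toy example cuts net lift by roughly a tonne, which is the mechanism the pilot uses to trim buoyancy and, with separate fore and aft ballonets, pitch.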
Lifting gas
Lifting gas is generally hydrogen, helium or hot air.
Hydrogen gives the highest lift and is inexpensive and easily obtained, but is highly flammable and can detonate if mixed with air. Helium is completely non-flammable, but gives somewhat lower lift, and as a rare element it is much more expensive.
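The gross lift per unit volume is simply the difference between the density of ambient air and that of the lifting gas. A minimal sketch using approximate densities at 0 °C and standard pressure (illustrative round numbers; real figures vary with temperature, altitude and gas purity):

```python
# Approximate gas densities at 0 degrees C and 101.325 kPa, in kg/m^3.
RHO_AIR = 1.293
RHO_H2 = 0.090
RHO_HE = 0.179

def gross_lift_per_m3(rho_gas, rho_air=RHO_AIR):
    """Mass (kg) that one cubic metre of lifting gas can support."""
    return rho_air - rho_gas

h2 = gross_lift_per_m3(RHO_H2)  # about 1.20 kg per m^3
he = gross_lift_per_m3(RHO_HE)  # about 1.11 kg per m^3
print(f"hydrogen: {h2:.2f} kg/m^3, helium: {he:.2f} kg/m^3, "
      f"helium/hydrogen = {he / h2:.0%}")
```

On these numbers helium delivers roughly 93% of hydrogen's lift per cubic metre, which is why the practical case against helium has always rested on cost and scarcity rather than raw performance.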
Thermal airships use a heated lifting gas, usually air, in a fashion similar to hot air balloons. The first to do so was flown in 1973 by the British company Cameron Balloons.
Gondola
The gondola is the car suspended beneath the envelope; it typically houses the crew, passengers and, on smaller airships, the engines.
Propulsion and control
Small airships carry their engine(s) in their gondola. Where there were multiple engines on larger airships, these were placed in separate nacelles, termed power cars or engine cars. To allow asymmetric thrust to be applied for maneuvering, these power cars were mounted towards the sides of the envelope, away from the centre line gondola. This also raised them above the ground, reducing the risk of a propeller strike when landing. Widely spaced power cars were also termed wing cars, from the use of "wing" to mean being on the side of something, as in a theater, rather than the aerodynamic device. These engine cars carried a crew during flight who maintained the engines as needed, but who also worked the engine controls, throttle etc., mounted directly on the engine. Instructions were relayed to them from the pilot's station by a telegraph system, as on a ship.
Burning fuel for propulsion progressively reduces the airship's overall weight. In hydrogen airships, this is usually dealt with by simply venting some of the cheap hydrogen lifting gas. In helium airships, water is often condensed from the engine exhaust and stored as ballast.
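Exhaust-water recovery can balance the books because hydrocarbon combustion produces more water by mass than the fuel burned. A rough, purely illustrative calculation, assuming a kerosene-like fuel that is about 15% hydrogen by mass:

```python
# Each kilogram of hydrogen in the fuel burns to 9 kg of water,
# since H2O weighs 18/2 = 9 times as much as its hydrogen content.
HYDROGEN_MASS_FRACTION = 0.15  # assumed typical for kerosene-like fuels

def exhaust_water_kg(fuel_burned_kg):
    """Mass of water formed by burning the given mass of fuel."""
    return fuel_burned_kg * HYDROGEN_MASS_FRACTION * 9.0

fuel = 1000.0  # kg of fuel burned over some leg of a flight
print(f"burning {fuel:.0f} kg of fuel yields about "
      f"{exhaust_water_kg(fuel):.0f} kg of exhaust water")
# ~1,350 kg: condensing roughly three quarters of it as ballast would
# fully offset the weight of the fuel burned.
```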
Fins and rudders
To control the airship's direction and stability, it is equipped with fins and rudders. Fins are typically located on the tail section and provide stability and resistance to rolling. Rudders are movable surfaces on the tail that allow the pilot to steer the airship left or right.
Empennage
The empennage refers to the tail section of the airship, which includes the fins, rudders, and other aerodynamic surfaces. It plays a crucial role in maintaining stability and controlling the airship's attitude.
Fuel and power systems
Airships require a source of power to operate their propulsion systems. This includes engines, generators, or batteries, depending on the type of airship and its design. Fuel tanks or batteries are typically located within the envelope or gondola.
Navigation and communication equipment
To navigate safely and communicate with ground control or other aircraft, airships are equipped with a range of instruments, including GPS systems, radios, radar, and navigation lights.
Landing gear
Some airships have landing gear that allows them to land on runways or other surfaces. This landing gear may include wheels, skids, or landing pads.
Performance
Efficiency
The main advantage of airships over other aircraft is that they require far less energy to remain in flight. The proposed Varialift airship, powered by a mixture of solar-powered engines and conventional jet engines, would use only an estimated 8 percent of the fuel required by a comparable jet aircraft. Furthermore, utilizing the jet stream could allow for a faster and more energy-efficient cargo transport alternative to maritime shipping. This is one of the reasons why China has recently embraced their use.
History
Early pioneers
17th–18th century
In 1670, the Jesuit Father Francesco Lana de Terzi, sometimes referred to as the "Father of Aeronautics", published a description of an "Aerial Ship" supported by four copper spheres from which the air was evacuated. Although the basic principle is sound, such a craft was unrealizable then and remains so to the present day, since external air pressure would cause the spheres to collapse unless their thickness was such as to make them too heavy to be buoyant. A hypothetical craft constructed using this principle is known as a vacuum airship.
In 1709, the Brazilian-Portuguese Jesuit priest Bartolomeu de Gusmão made a hot-air balloon, the Passarola, ascend before an astonished Portuguese court. The first demonstration took place on August 8, 1709, in the courtyard of the Casa da Índia in Lisbon. The balloon caught fire without leaving the ground, but in a second demonstration it rose to 95 meters. It was a small balloon of thick brown paper, filled with hot air produced by the "fire of material contained in a clay bowl embedded in the base of a waxed wooden tray". The event was witnessed by King John V of Portugal and the future Pope Innocent XIII.
A more practical dirigible airship was described by Lieutenant Jean Baptiste Marie Meusnier in a paper entitled "Mémoire sur l'équilibre des machines aérostatiques" (Memorandum on the equilibrium of aerostatic machines), presented to the French Academy on 3 December 1783. The 16 water-color drawings published the following year depict a streamlined envelope with internal ballonets that could be used for regulating lift; this was attached to a long carriage that could be used as a boat if the vehicle was forced to land on water. The airship was designed to be driven by three propellers and steered with a sail-like aft rudder. In 1784, Jean-Pierre Blanchard fitted a hand-powered propeller to a balloon, the first recorded means of propulsion carried aloft. In 1785, he crossed the English Channel in a balloon equipped with flapping wings for propulsion and a birdlike tail for steering.
19th century
The 19th century saw continued attempts to add methods of propulsion to balloons. Rufus Porter built and flew scale models of his "Aerial Locomotive", but never a successful full-size implementation. The Australian William Bland sent designs for his "Atmotic airship" to the Great Exhibition held in London in 1851, where a model was displayed. This was an elongated balloon with a steam engine driving twin propellers suspended underneath. The lift of the balloon was estimated at 5 tons and the car with fuel at 3.5 tons, giving a payload of 1.5 tons. Bland believed that the machine could achieve considerable speed and fly from Sydney to London in less than a week.
In 1852, Henri Giffard became the first person to make an engine-powered flight when he flew in a steam-powered airship. Airships would develop considerably over the next two decades. In 1863, Solomon Andrews flew his aereon design, an unpowered, controllable dirigible in Perth Amboy, New Jersey and offered the device to the U.S. Military during the Civil War. He flew a later design in 1866 around New York City and as far as Oyster Bay, New York. This concept used changes in lift to provide propulsive force, and did not need a powerplant. In 1872, the French naval architect Dupuy de Lome launched a large navigable balloon, which was driven by a large propeller turned by eight men. It was developed during the Franco-Prussian war and was intended as an improvement to the balloons used for communications between Paris and the countryside during the siege of Paris, but was completed only after the end of the war.
In 1872, Paul Haenlein flew an airship with an internal combustion engine running on the coal gas used to inflate the envelope, the first use of such an engine to power an aircraft. Charles F. Ritchel made a public demonstration flight in 1878 of his hand-powered one-man rigid airship, and went on to build and sell five of his aircraft.
In 1874, Micajah Clark Dyer filed U.S. Patent 154,654 "Apparatus for Navigating the Air". It is believed successful trial flights were made between 1872 and 1874, but detailed dates are not available. The apparatus used a combination of wings and paddle wheels for navigation and propulsion.
In 1883, the first electric-powered flight was made by Gaston Tissandier, who fitted a Siemens electric motor to an airship.
The first fully controllable free flight was made in 1884 by Charles Renard and Arthur Constantin Krebs in the French Army airship La France. This was the first airship flight to return to its starting point: aided by an electric motor and battery, La France completed the circuit in 23 minutes. It made seven flights in 1884 and 1885.
In 1888, the Campbell Air Ship, designed by Professor Peter C. Campbell, was built by the Novelty Air Ship Company. It was lost at sea in 1889 while being flown by Professor Hogan during an exhibition flight.
From 1888 to 1897, Friedrich Wölfert built three airships powered by Daimler Motoren Gesellschaft-built petrol engines, the last of which, Deutschland, caught fire in flight and killed both occupants in 1897. The 1888 version used a single cylinder Daimler engine and flew from Canstatt to Kornwestheim.
In 1897, an airship with an aluminium envelope was built by the Hungarian-Croatian engineer David Schwarz. It made its first flight at Tempelhof field in Berlin after Schwarz had died. His widow, Melanie Schwarz, was paid 15,000 marks by Count Ferdinand von Zeppelin to release the industrialist Carl Berg from his exclusive contract to supply Schwarz with aluminium.
From 1897 to 1899, Konstantin Danilewsky, a medical doctor and inventor from Kharkiv (now in Ukraine, then in the Russian Empire), built four muscle-powered airships. About 200 ascents were made as part of an experimental flight program at two locations, with no significant incidents.
Early 20th century
In July 1900, the Luftschiff Zeppelin LZ1 made its first flight. This led to the most successful airships of all time: the Zeppelins, named after Count Ferdinand von Zeppelin who began working on rigid airship designs in the 1890s, leading to the flawed LZ1 in 1900 and the more successful LZ2 in 1906. The Zeppelin airships had a framework composed of triangular lattice girders covered with fabric that contained separate gas cells. At first multiplane tail surfaces were used for control and stability: later designs had simpler cruciform tail surfaces. The engines and crew were accommodated in "gondolas" hung beneath the hull driving propellers attached to the sides of the frame by means of long drive shafts. Additionally, there was a passenger compartment (later a bomb bay) located halfway between the two engine compartments.
Alberto Santos-Dumont was a wealthy young Brazilian who lived in France and had a passion for flying. He designed 18 balloons and dirigibles before turning his attention to fixed-winged aircraft.
On 19 October 1901 he flew his airship Number 6 from the Parc Saint Cloud to and around the Eiffel Tower and back in under thirty minutes. This feat earned him the Deutsch de la Meurthe prize of 100,000 francs. Many inventors were inspired by Santos-Dumont's small airships, and airship pioneers such as the American Thomas Scott Baldwin financed their activities through passenger flights and public demonstration flights. Stanley Spencer built the first British airship with funds from advertising baby food on the sides of the envelope. Others, such as Walter Wellman and Melvin Vaniman, set their sights on loftier goals, attempting two polar flights in 1907 and 1909, and two trans-Atlantic flights in 1910 and 1912.
In 1902 the Spanish engineer Leonardo Torres Quevedo published details of an innovative airship design in Spain and France under the title "Improvements in dirigible aerostats". With a non-rigid body and internal bracing wires, the design overcame the flaws of both rigid (zeppelin-type) and flexible aircraft, providing more stability during flight and the capability of using heavier engines and a greater passenger load; Torres called the system "auto-rigid". In 1905, helped by Captain A. Kindelán, he built the airship Torres Quevedo at the Guadalajara military base. In 1909 he patented an improved design that he offered to the French Astra company, which started mass-producing it in 1911 as the Astra-Torres airship. This type of envelope was employed in the United Kingdom in the Coastal, C Star, and North Sea airships. The distinctive three-lobed design was widely used during the Great War by the Entente powers for diverse tasks, principally convoy protection and anti-submarine warfare. Its wartime success even drew the attention of the Imperial Japanese Navy, which acquired a model in 1922. Torres also drew up designs for a docking station and made alterations to airship designs to resolve the many problems airship engineers faced in docking dirigibles. In 1910, he proposed attaching an airship's nose to a mooring mast and allowing the airship to weathervane with changes of wind direction. A metal column erected on the ground, with the bow attached directly to its top by a cable, would allow a dirigible to be moored at any time, in the open, regardless of wind speed. Torres' design also called for improving the accessibility of temporary landing sites where airships were to be moored for disembarking passengers. The final patent was presented in February 1911 in Belgium, and later to France and the United Kingdom in 1912, under the title "Improvements in Mooring Arrangements for Airships".
Other airship builders were also active before the war: from 1902 the French company Lebaudy Frères specialized in semirigid airships such as the Patrie and the République, designed by their engineer Henri Julliot, who later worked for the American company Goodrich; the German firm Schütte-Lanz built the wooden-framed SL series from 1911, introducing important technical innovations; another German firm Luft-Fahrzeug-Gesellschaft built the Parseval-Luftschiff (PL) series from 1909, and Italian Enrico Forlanini's firm had built and flown the first two Forlanini airships.
On May 12, 1902, the Brazilian inventor and aeronaut Augusto Severo de Albuquerque Maranhão and his French mechanic, Georges Saché, died while flying over Paris in the airship Pax. A marble plaque at number 81, Avenue du Maine in Paris commemorates the location of the accident. The Catastrophe of the Balloon "Le Pax" is a 1902 short silent film recreation of the disaster, directed by Georges Méliès.
In Britain, the Army built their first dirigible, the Nulli Secundus, in 1907. The Navy ordered the construction of an experimental rigid in 1908. Officially known as His Majesty's Airship No. 1 and nicknamed the Mayfly, it broke its back in 1911 before making a single flight. Work on a successor did not start until 1913.
The German airship passenger service DELAG (Deutsche-Luftschiffahrts AG) was established in 1910.
In 1910 Walter Wellman unsuccessfully attempted an aerial crossing of the Atlantic Ocean in the airship America.
World War I
The prospect of airships as bombers had been recognized in Europe well before the airships were up to the task. H. G. Wells' The War in the Air (1908) described the obliteration of entire fleets and cities by airship attack. The Italian forces became the first to use dirigibles for a military purpose during the Italo–Turkish War, the first bombing mission being flown on 10 March 1912. World War I marked the airship's real debut as a weapon. The Germans, French, and Italians all used airships for scouting and tactical bombing roles early in the war, and all learned that the airship was too vulnerable for operations over the front. The decision to end operations in direct support of armies was made by all in 1917.
Many in the German military believed they had found the ideal weapon with which to counteract British naval superiority and strike at Britain itself, while more realistic airship advocates believed the zeppelin's value was as a long-range scout and attack craft for naval operations. Raids on England began in January 1915 and peaked in 1916; following losses to the British defenses, only a few raids were made in 1917–18, the last in August 1918. Zeppelins proved to be terrifying but inaccurate weapons. Navigation, target selection and bomb-aiming were difficult under the best of conditions, and the cloud cover frequently encountered by the airships reduced accuracy even further. The physical damage done by airships over the course of the war was insignificant, and the deaths they caused amounted to a few hundred. Nevertheless, the raids caused a significant diversion of British resources to defense efforts. The airships were initially immune to attack by aircraft and anti-aircraft guns: as the pressure in their envelopes was only just higher than that of the ambient air, holes had little effect. But following the introduction of a combination of incendiary and explosive ammunition in 1916, their flammable hydrogen lifting gas made them vulnerable to the defending aeroplanes. Several were shot down in flames by British defenders, and many others were destroyed in accidents. New designs capable of reaching greater altitudes were developed, but although this made them immune from attack, it made their bombing accuracy even worse.
Countermeasures by the British included sound detection equipment, searchlights and anti-aircraft artillery, followed by night fighters in 1915. One tactic used early in the war, when the airships' limited range meant they had to fly from forward bases and the only zeppelin production facilities were in Friedrichshafen, was the bombing of airship sheds by the British Royal Naval Air Service. Later in the war, the development of the aircraft carrier led to the first successful carrier-based air strike in history: on the morning of 19 July 1918, seven Sopwith 2F.1 Camels were launched from HMS Furious and struck the airship base at Tønder, destroying the zeppelins L 54 and L 60.
The British Army had abandoned airship development in favour of aeroplanes before the start of the war, but the Royal Navy had recognized the need for small airships to counteract the submarine and mine threat in coastal waters. Beginning in February 1915, it began to develop the SS (Sea Scout) class of blimp. These had a small envelope and at first used aircraft fuselages, without the wing and tail surfaces, as control cars. Later, more advanced blimps with purpose-built gondolas were used. The NS (North Sea) class were the largest and most effective non-rigid airships in British service, with a crew of 10 and an endurance of 24 hours. Six bombs were carried, as well as three to five machine guns. British blimps were used for scouting, mine clearance and convoy patrol duties. During the war, the British operated over 200 non-rigid airships; several were sold to Russia, France, the United States and Italy. The large number of trained crews, low attrition rate and constant experimentation in handling techniques meant that at the war's end Britain was the world leader in non-rigid airship technology.
The Royal Navy continued development of rigid airships until the end of the war. Eight rigid airships had been completed by the armistice, (No. 9r, four 23 Class, two R23X Class and one R31 Class), although several more were in an advanced state of completion by the war's end. Both France and Italy continued to use airships throughout the war. France preferred the non-rigid type, whereas Italy flew 49 semi-rigid airships in both the scouting and bombing roles.
Aeroplanes had almost entirely replaced airships as bombers by the end of the war, and Germany's remaining zeppelins were destroyed by their crews, scrapped or handed over to the Allied powers as war reparations. The British rigid airship program, which had mainly been a reaction to the potential threat of the German airships, was wound down.
The interwar period
Britain, the United States and Germany built rigid airships between the two world wars. Italy and France made limited use of Zeppelins handed over as war reparations. Italy, the Soviet Union, the United States and Japan mainly operated semi-rigid airships.
Under the terms of the Treaty of Versailles, Germany was not allowed to build airships of greater capacity than a million cubic feet. Two small passenger airships, LZ 120 Bodensee and its sister ship LZ 121 Nordstern, were built immediately after the war but were confiscated following the sabotage of the wartime Zeppelins that were to have been handed over as war reparations: Bodensee was given to Italy and Nordstern to France. On May 12, 1926, the Italian-built semi-rigid airship Norge became the first aircraft to fly over the North Pole.
The British R33 and R34 were near-identical copies of the German L 33, which had come down almost intact in Yorkshire on 24 September 1916. Despite being almost three years out of date by the time they were launched in 1919, they became two of the most successful airships in British service. The creation of the Royal Air Force (RAF) in early 1918 created a hybrid British airship program. The RAF was not interested in airships while the Admiralty was, so a deal was made where the Admiralty would design any future military airships and the RAF would handle manpower, facilities and operations. On 2 July 1919, R34 began the first double crossing of the Atlantic by an aircraft. It landed at Mineola, Long Island on 6 July after 108 hours in the air; the return crossing began on 8 July and took 75 hours. This feat failed to generate enthusiasm for continued airship development, and the British airship program was rapidly wound down.
During World War I, the U.S. Navy acquired its first airship, the DN-1, but it was destroyed while being inflated shortly after delivery to the Navy. After the war, the U.S. Navy contracted to buy the R 38, which was being built in Britain, but before it was handed over it was destroyed by a structural failure during a test flight.
America then started constructing the USS Shenandoah, designed by the Bureau of Aeronautics and based on the Zeppelin L 49. Assembled in Hangar No. 1 and first flown on 4 September 1923 at Lakehurst, New Jersey, it was the first airship to be inflated with the noble gas helium, which was then so scarce that the Shenandoah contained most of the world's supply. A second airship, the USS Los Angeles, was built by the Zeppelin company as compensation for the airships that should have been handed over as war reparations under the terms of the Versailles Treaty but had been sabotaged by their crews. This construction order saved the Zeppelin works from the threat of closure. The success of the Los Angeles, which was flown successfully for eight years, encouraged the U.S. Navy to invest in larger airships of its own. When the Los Angeles was delivered, the two airships had to share the limited supply of helium, and thus alternated between operation and overhaul.
In 1922, Sir Dennistoun Burney suggested a plan for a subsidised air service throughout the British Empire using airships (the Burney Scheme). Following the coming to power of Ramsay MacDonald's Labour government in 1924, the scheme was transformed into the Imperial Airship Scheme, under which two airships were built, one by a private company and the other by the Royal Airship Works under Air Ministry control. The two designs were radically different. The "capitalist" ship, the R100, was more conventional, while the "socialist" ship, the R101, had many innovative design features. Construction of both took longer than expected, and the airships did not fly until 1929. Neither airship was capable of the service intended, though the R100 did complete a proving flight to Canada and back in 1930. On 5 October 1930, the R101, which had not been thoroughly tested after major modifications, crashed on its maiden voyage to India at Beauvais in France killing 48 of the 54 people aboard. Among the dead were the craft's chief designer and the Secretary of State for Air. The disaster ended British interest in airships.
In 1925 the Zeppelin company started construction of the Graf Zeppelin (LZ 127), the largest airship that could be built in the company's existing shed, intended to stimulate interest in passenger airships. As fuel the Graf Zeppelin burned blau gas, similar to propane, stored in large gas bags below the hydrogen cells. Since its density was similar to that of air, it avoided the weight change as fuel was used, and thus the need to valve hydrogen. The Graf Zeppelin had an impressive safety record, flying a huge cumulative distance (including the first circumnavigation of the globe by airship) without a single passenger injury.
The U.S. Navy experimented with the use of airships as airborne aircraft carriers, developing an idea pioneered by the British. The USS Los Angeles was used for initial experiments, and the USS Akron and USS Macon, the world's largest airships at the time, were used to test the principle in naval operations. Each carried four F9C Sparrowhawk fighters in its hangar and could carry a fifth on the trapeze. The idea had mixed results: by the time the Navy started to develop a sound doctrine for using the ZRS-type airships, the last of the two built, USS Macon, had been wrecked. Meanwhile, the seaplane had become more capable and was considered a better investment.
Eventually, the U.S. Navy lost all three U.S.-built rigid airships to accidents. USS Shenandoah flew into a severe thunderstorm over Noble County, Ohio while on a poorly planned publicity flight on 3 September 1925. It broke into pieces, killing 14 of its crew. USS Akron was caught in a severe storm and flown into the surface of the sea off the shore of New Jersey on 3 April 1933. It carried no life boats and few life vests, so 73 of its crew of 76 died from drowning or hypothermia. USS Macon was lost after suffering a structural failure offshore near Point Sur Lighthouse on 12 February 1935. The failure caused a loss of gas, which was made much worse when the aircraft was driven over pressure height causing it to lose too much helium to maintain flight. Only two of its crew of 83 died in the crash thanks to the inclusion of life jackets and inflatable rafts after the Akron disaster.
The Empire State Building was completed in 1931 with a dirigible mast, in anticipation of future passenger airship service, but no airship ever used the mast. Various entrepreneurs experimented with commuting and shipping freight via airship.
In the 1930s, the German Zeppelins successfully competed with other means of transport. They could carry significantly more passengers than other contemporary aircraft while providing amenities similar to those on ocean liners, such as private cabins, observation decks, and dining rooms. Less importantly, the technology was potentially more energy-efficient than heavier-than-air designs. Zeppelins were also faster than ocean liners. On the other hand, operating airships was quite involved. Often the crew would outnumber passengers, and on the ground large teams were necessary to assist mooring and very large hangars were required at airports.
By the mid-1930s, only Germany still pursued airship development. The Zeppelin company continued to operate the Graf Zeppelin on passenger service between Frankfurt and Recife in Brazil, taking 68 hours. Even with the small Graf Zeppelin, the operation was almost profitable. In the mid-1930s, work began on an airship designed specifically to operate a passenger service across the Atlantic. The Hindenburg (LZ 129) completed a successful 1936 season, carrying passengers between Lakehurst, New Jersey and Germany. The year 1937 brought the most spectacular and widely remembered airship accident. Approaching the Lakehurst mooring mast minutes before landing on 6 May 1937, the Hindenburg suddenly burst into flames and crashed to the ground. Of the 97 people aboard, 35 died: 13 passengers and 22 aircrew, along with one American ground-crewman. The disaster happened before a large crowd, was filmed, and a radio news reporter was recording the arrival; it was a disaster that theatergoers could see and hear in newsreels. The Hindenburg disaster shattered public confidence in airships and brought a definitive end to their "golden age". The day after the Hindenburg disaster, the Graf Zeppelin landed safely in Germany after its return flight from Brazil. This was the last international passenger airship flight.
Hindenburg's identical sister ship, the Graf Zeppelin II (LZ 130), could not carry commercial passengers without helium, which the United States refused to sell to Germany. The Graf Zeppelin II made several test flights and conducted some electronic espionage until 1939, when it was grounded at the outbreak of the war. The two Graf Zeppelins were scrapped in April 1940.
Development of airships continued only in the United States and, to a lesser extent, the Soviet Union. The Soviet Union had several semi-rigid and non-rigid airships. The semi-rigid dirigible SSSR-V6 OSOAVIAKhIM was among the largest of these craft, and it set the record for the longest endurance flight of its time, at over 130 hours. It crashed into a mountain in 1938, killing 13 of the 19 people on board. While this was a severe blow to the Soviet airship program, non-rigid airships continued to be operated until 1950.
World War II
While Germany determined that airships were obsolete for military purposes in the coming war and concentrated on the development of aeroplanes, the United States pursued a program of military airship construction even though it had not developed a clear military doctrine for airship use. When the Japanese attacked Pearl Harbor on 7 December 1941, bringing the United States into World War II, the U.S. Navy had 10 nonrigid airships:
4 K-class: K-2, K-3, K-4 and K-5 designed as patrol ships, all built in 1938.
3 L-class: L-1, L-2 and L-3 as small training ships, produced in 1938.
1 G-class, built in 1936 for training.
2 TC-class that were older patrol airships designed for land forces, built in 1933. The U.S. Navy acquired both from the United States Army in 1938.
Only the K- and TC-class airships were suitable for combat, and they were quickly pressed into service against Japanese and German submarines, which were then sinking American shipping within visual range of the American coast. U.S. Navy command, remembering the airships' anti-submarine success in World War I, immediately requested new modern antisubmarine airships and on 2 January 1942 formed the ZP-12 patrol unit, based at Lakehurst, from the four K airships. The ZP-32 patrol unit was formed from two TC and two L airships a month later, based at NAS Moffett Field in Sunnyvale, California; an airship training base was created there as well. The status of submarine-hunting Goodyear airships in the early days of World War II has created significant confusion: although various accounts refer to the airships Resolute and Volunteer as operating as "privateers" under a Letter of Marque, Congress never authorized such a commission, nor did the President sign one.
In the years 1942–44, approximately 1,400 airship pilots and 3,000 support crew members were trained in the military airship crew training program and the airship military personnel grew from 430 to 12,400. The U.S. airships were produced by the Goodyear factory in Akron, Ohio. From 1942 till 1945, 154 airships were built for the U.S. Navy (133 K-class, 10 L-class, seven G-class, four M-class) and five L-class for civilian customers (serial numbers L-4 to L-8).
The primary airship tasks were patrol and convoy escort near the American coastline. They also served as an organization centre for the convoys to direct ship movements, and were used in naval search and rescue operations. Rarer duties of the airships included aerophoto reconnaissance, naval mine-laying and mine-sweeping, parachute unit transport and deployment, cargo and personnel transportation. They were deemed quite successful in their duties with the highest combat readiness factor in the entire U.S. air force (87%).
During the war, some 532 ships without airship escort were sunk near the U.S. coast by enemy submarines. Of the roughly 89,000 ships in convoys escorted by blimps, only one, the tanker Persephone, was sunk by the enemy. Airships engaged submarines with depth charges and, less frequently, with other on-board weapons. They were excellent at driving submarines down, where their limited speed and range prevented them from attacking convoys. The weapons available to airships were so limited that until the advent of the homing torpedo they had little chance of sinking a submarine.
Only one airship was ever destroyed by a U-boat: on the night of 18/19 July 1943, the K-74 from the ZP-21 division was patrolling the coastline near Florida. Using radar, the airship located a surfaced German submarine. The K-74 made her attack run, but the U-boat opened fire first. K-74's depth charges did not release as she crossed the U-boat, and the airship received serious damage, losing gas pressure and an engine, but landed in the water without loss of life. The crew was rescued by patrol boats in the morning, but one crewman, Aviation Machinist's Mate Second Class Isadore Stessel, died from a shark attack. The U-boat, U-134, was slightly damaged and within a day or so was attacked by aircraft, sustaining damage that forced it to return to base. It was finally sunk on 24 August 1943 by a British Vickers Wellington near Vigo, Spain.
Fleet Airship Wing One operated from Lakehurst, New Jersey; Glynco, Georgia; Weeksville, North Carolina; South Weymouth NAS, Massachusetts; Brunswick NAS and Bar Harbor, Maine; Yarmouth, Nova Scotia; and Argentia, Newfoundland.
Some Navy blimps saw action in the European war theater. In 1944–45, the U.S. Navy moved an entire squadron of eight Goodyear K class blimps (K-89, K-101, K-109, K-112, K-114, K-123, K-130, & K-134) with flight and maintenance crews from Weeksville Naval Air Station in North Carolina to Naval Air Station Port Lyautey, French Morocco. Their mission was to locate and destroy German U-boats in the relatively shallow waters around the Strait of Gibraltar where magnetic anomaly detection (MAD) was viable. PBY aircraft had been searching these waters but MAD required low altitude flying that was dangerous at night for these aircraft. The blimps were considered a perfect solution to establish a 24/7 MAD barrier (fence) at the Straits of Gibraltar with the PBYs flying the day shift and the blimps flying the night shift. The first two blimps (K-123 & K-130) left South Weymouth NAS on 28 May 1944 and flew to Argentia, Newfoundland, the Azores, and finally to Port Lyautey where they completed the first transatlantic crossing by nonrigid airships on 1 June 1944. The blimps of USN Blimp Squadron ZP-14 (Blimpron 14, aka The Africa Squadron) also conducted mine-spotting and mine-sweeping operations in key Mediterranean ports and various escorts including the convoy carrying United States President Franklin D. Roosevelt and British Prime Minister Winston Churchill to the Yalta Conference in 1945. Airships from the ZP-12 unit took part in the sinking of the last U-boat before German capitulation, sinking the U-881 on 6 May 1945 together with destroyers USS Atherton and USS Moberly.
Other airships patrolled the Caribbean, Fleet Airship Wing Two, Headquartered at Naval Air Station Richmond, covered the Gulf of Mexico from Richmond and Key West, Florida, Houma, Louisiana, as well as Hitchcock and Brownsville, Texas. FAW 2 also patrolled the northern Caribbean from San Julian, the Isle of Pines (now called Isla de la Juventud) and Guantánamo Bay, Cuba as well as Vernam Field, Jamaica.
Navy blimps of Fleet Airship Wing Five (ZP-51) operated from bases in Trinidad, British Guiana and Paramaribo, Suriname. Fleet Airship Wing Four operated along the coast of Brazil. Two squadrons, VP-41 and VP-42, flew from bases at Amapá, Igarapé-Açu, São Luís, Fortaleza, Fernando de Noronha, Recife, Maceió, Ipitanga (near Salvador, Bahia), Caravelas, Vitória and the hangar built for the Graf Zeppelin at Santa Cruz, Rio de Janeiro.
Fleet Airship Wing Three operated squadrons, ZP-32 from Moffett Field, ZP-31 at NAS Santa Ana, and ZP-33 at NAS Tillamook, Oregon. Auxiliary fields were at Del Mar, Lompoc, Watsonville and Eureka, California, North Bend and Astoria, Oregon, as well as Shelton and Quillayute in Washington.
From 2 January 1942 until the end of war airship operations in the Atlantic, the blimps of the Atlantic fleet made 37,554 flights and flew 378,237 hours. Of the over 70,000 ships in convoys protected by blimps, only one was sunk by a submarine while under blimp escort.
The Soviet Union flew a single airship during the war. The W-12, built in 1939, entered service in 1942 for paratrooper training and equipment transport; it made 1,432 flights carrying 300 metric tons of cargo by 1945. On 1 February 1945, the Soviets completed a second airship, a Pobeda-class (Victory-class) unit used for mine-sweeping and wreckage-clearing in the Black Sea; it crashed on 21 January 1947. Another W-class, the W-12bis Patriot, was commissioned in 1947 and was used mostly for crew training, parades and propaganda until the mid-1950s.
Postwar period
Although airships are no longer used for major cargo and passenger transport, they are still used for other purposes such as advertising, sightseeing, surveillance, research and advocacy.
There were several studies and proposals for nuclear-powered airships, starting with a 1954 study by F. W. Locke Jr. for the US Navy. In 1957 Edwin J. Kirschner published the book The Zeppelin in the Atomic Age, which promoted the use of atomic airships. In 1959 Goodyear presented a plan for a nuclear-powered airship for both military and commercial use. Several other proposals and papers were published in the following decades.
In the 1980s, Per Lindstrand and his team introduced the GA-42 airship, the first airship to use fly-by-wire flight control, which considerably reduced the pilot's workload.
An airship was prominently featured in the James Bond film A View to a Kill, released in 1985. The Skyship 500 had the livery of Zorin Industries.
The world's largest thermal airship was constructed by the Per Lindstrand company for French botanists in 1993. The AS-300 carried an underslung raft, which was positioned by the airship on top of tree canopies in the rain forest, allowing the botanists to carry out their treetop research without significant damage to the rainforest. When research was finished at a given location, the airship returned to pick up and relocate the raft.
In June 1987, the U.S. Navy awarded a US$168.9 million contract to Westinghouse Electric and Airship Industries of the UK to find out whether an airship could be used as an airborne platform to detect the threat of sea-skimming missiles, such as the Exocet. At 2.5 million cubic feet, the Westinghouse/Airship Industries Sentinel 5000 (Redesignated YEZ-2A by the U.S. Navy) prototype design was to have been the largest blimp ever constructed. Additional funding for the Naval Airship Program was killed in 1995 and development was discontinued.
The SVAM CA-80 airship, produced in 2000 by Shanghai Vantage Airship Manufacture Co., Ltd., had a successful trial flight in September 2001. It was designed for advertising and promotion, aerial photography, scientific testing, tourism and surveillance duties. It was certified as a grade-A Hi-Tech introduction program (No. 20000186) in Shanghai, and the CAAC authority granted a type design approval and certificate of airworthiness for the airship.
In the 1990s the Zeppelin company returned to the airship business. Its new model, designated the Zeppelin NT, made its maiden flight on 18 September 1997. By early 2009 there were four NT aircraft flying; a fifth was completed in March 2009 and an expanded NT-14 (14,000 cubic meters of helium, capable of carrying 19 passengers) was under construction. One was sold to a Japanese company and was planned to be flown to Japan in the summer of 2004; due to delays in obtaining permission from the Russian government, the company decided to transport the airship to Japan by sea. One of the four NT craft operates in South Africa carrying diamond-detection equipment for De Beers, an application at which the very stable, low-vibration NT platform excels. The project included design adaptations for high-temperature operation and a desert climate, as well as a separate mooring mast and a very heavy mooring truck. NT-4 belonged to Airship Ventures of Moffett Field, Mountain View, in the San Francisco Bay Area, and provided sightseeing tours.
Blimps are used for advertising and as TV camera platforms at major sporting events. The most iconic of these are the Goodyear Blimps. Goodyear operates three blimps in the United States, and The Lightship Group, now The AirSign Airship Group, operates up to 19 advertising blimps around the world. Airship Management Services owns and operates three Skyship 600 blimps. Two operate as advertising and security ships in North America and the Caribbean. Airship Ventures operated a Zeppelin NT for advertising, passenger service and special mission projects. They were the only airship operator in the U.S. authorized to fly commercial passengers, until closing their doors in 2012.
Skycruise Switzerland AG owns and operates two Skyship 600 blimps. One operates regularly over Switzerland used on sightseeing tours.
The Switzerland-based Skyship 600 has also played other roles over the years. For example, it was flown over Athens during the 2004 Summer Olympics as a security measure. In November 2006, it carried advertising calling it The Spirit of Dubai as it began a publicity tour from London to Dubai, UAE on behalf of The Palm Islands, the world's largest man-made islands created as a residential complex.
Los Angeles-based Worldwide Aeros Corp. produces FAA Type Certified Aeros 40D Sky Dragon airships.
In May 2006, the U.S. Navy began to fly airships again after a hiatus of nearly 44 years. The program uses a single American Blimp Company A-170 nonrigid airship, with designation MZ-3A. Operations focus on crew training and research, and the platform integrator is Northrop Grumman. The program is directed by the Naval Air Systems Command and is being carried out at NAES Lakehurst, the original centre of U.S. Navy lighter-than-air operations in previous decades.
In November 2006 the U.S. Army bought an A380+ airship from American Blimp Corporation through a systems-level contract with Northrop Grumman and Booz Allen Hamilton. The airship started flight tests in late 2007, with the primary goal of demonstrating payload carriage to altitude under remote control and autonomous waypoint navigation; the platform could be used for intelligence collection. In 2008, the CA-150 airship was launched by Vantage Airship. An improved modification of the model CA-120, it completed manufacturing in 2008; with larger volume and increased passenger capacity, it is the largest manned nonrigid airship in China at present.
In late June 2014 the Electronic Frontier Foundation flew the GEFA-FLUG AS 105 GD/4 blimp AE Bates (owned by, and in conjunction with, Greenpeace) over the NSA's Bluffdale Utah Data Center in protest.
Postwar projects
Hybrid designs such as the Heli-Stat airship/helicopter, the Aereon aerostatic/aerodynamic craft, and the CycloCrane (a hybrid aerostatic/rotorcraft) struggled to take flight. The CycloCrane was also notable in that the airship's envelope rotated along its longitudinal axis.
In 2005, a short-lived project of the U.S. Defense Advanced Research Projects Agency (DARPA) was Walrus HULA, which explored the potential for using airships as long-distance, heavy-lift craft. The primary goal of the research program was to determine the feasibility of building an airship capable of carrying a very heavy payload over a long distance and landing at an unimproved location without the use of external ballast or ground equipment (such as masts). In 2005, two contractors, Lockheed Martin and US Aeros Airships, were each awarded approximately $3 million to carry out feasibility studies of designs for WALRUS. Congress removed funding for Walrus HULA in 2006.
Modern airships
Military
In 2010, the U.S. Army awarded a $517 million (£350.6 million) contract to Northrop Grumman and partner Hybrid Air Vehicles to develop a Long Endurance Multi-Intelligence Vehicle (LEMV) system, in the form of three HAV 304s. The project was cancelled in February 2012 because it was behind schedule and over budget, and because of the forthcoming U.S. withdrawal from Afghanistan, where the system was intended to be deployed. The HAV 304 was subsequently repurchased by Hybrid Air Vehicles, modified and reassembled in Bedford, UK, and renamed the Airlander 10. As of 2018, it was being tested in readiness for its UK flight test programme.
A-NSE, a French company, manufactures and operates airships and aerostats. For two years, A-NSE has been testing its airships for the French Army. Airships and aerostats are operated to provide intelligence, surveillance, and reconnaissance (ISR) support. Their airships include many innovative features such as water-ballast take-off and landing systems, variable-geometry envelopes and thrust-vectoring systems.
The U.S. government has funded two major projects in the high-altitude arena. The Composite Hull High Altitude Powered Platform (CHHAPP), sometimes called the HiSentinel High-Altitude Airship, is sponsored by the U.S. Army Space and Missile Defense Command. This prototype ship made a five-hour test flight in September 2005. The second project, the high-altitude airship (HAA), is sponsored by DARPA. In 2005, DARPA awarded a contract for nearly $150 million to Lockheed Martin for prototype development. First flight of the HAA was planned for 2008 but suffered programmatic and funding delays. The HAA project evolved into the High Altitude Long Endurance-Demonstrator (HALE-D). The U.S. Army and Lockheed Martin launched the first-of-its-kind HALE-D on July 27, 2011. After the airship attained altitude, an anomaly led the company to abort the mission, and the airship made a controlled descent into an unpopulated area of southwest Pennsylvania.
On 31 January 2006 Lockheed Martin made the first flight of their secretly built hybrid airship designated the P-791. The design is very similar to the SkyCat, unsuccessfully promoted for many years by the British company Advanced Technologies Group (ATG).
Dirigibles have been used in the War in Afghanistan for reconnaissance purposes, as they allow for constant monitoring of a specific area through cameras mounted on the airships.
Passenger transport
In the 1990s, the successor of the original Zeppelin company in Friedrichshafen, the Zeppelin Luftschifftechnik GmbH, reengaged in airship construction. The first experimental craft (later christened Friedrichshafen) of the type "Zeppelin NT" flew in September 1997. Though larger than common blimps, the Neue Technologie (New Technology) zeppelins are much smaller than their giant ancestors and not actually Zeppelin-types in the classical sense. They are sophisticated semirigids. Apart from the greater payload, their main advantages compared to blimps are higher speed and excellent maneuverability. Meanwhile, several Zeppelin NT have been produced and operated profitably in joyrides, research flights and similar applications.
In June 2004, a Zeppelin NT was sold for the first time to a Japanese company, Nippon Airship Corporation, for tourism and advertising mainly around Tokyo. It was also given a role at the 2005 Expo in Aichi. The aircraft began a flight from Friedrichshafen to Japan, stopping at Geneva, Paris, Rotterdam, Munich, Berlin, Stockholm and other European cities to carry passengers on short legs of the flight. Russian authorities denied overflight permission, so the airship had to be dismantled and shipped to Japan rather than following the historic Graf Zeppelin flight from Germany to Japan.
In 2008, Airship Ventures Inc. began operations from Moffett Federal Airfield near Mountain View, California and until November 2012 offered tours of the San Francisco Bay Area for up to 12 passengers.
Exploration
In November 2005, De Beers, a diamond mining company, launched an airship exploration program over the remote Kalahari Desert. A Zeppelin NT, equipped with a Bell Geospace gravity gradiometer, was used to find potential diamond mines by scanning the local geography for low-density rock formations, known as kimberlite pipes. On 21 September 2007, the airship was severely damaged by a whirlwind while in Botswana. One crew member, who was on watch aboard the moored craft, was slightly injured but released after overnight observation in hospital.
Thermal
Several companies, such as Cameron Balloons in Bristol, United Kingdom, build hot-air airships. These combine the structures of both hot-air balloons and small airships. The envelope is the normal cigar shape, complete with tail fins, but is inflated with hot air instead of helium to provide the lifting force. A small gondola, carrying the pilot and passengers, a small engine, and the burners to provide the hot air are suspended below the envelope, beneath an opening through which the burners protrude.
Hot-air airships typically cost less to buy and maintain than modern helium-based blimps, and can be quickly deflated after flights. This makes them easy to carry in trailers or trucks and inexpensive to store. They are usually very slow moving, with modest top speeds. They are mainly used for advertising, but at least one has been used in rainforests for wildlife observation, as they can be easily transported to remote areas.
Unmanned remote
Remote-controlled (RC) airships, a type of unmanned aerial system (UAS), are sometimes used for commercial purposes such as advertising and aerial video and photography as well as recreational purposes. They are particularly common as an advertising mechanism at indoor stadiums. While RC airships are sometimes flown outdoors, doing so for commercial purposes is illegal in the US. Commercial use of an unmanned airship must be certified under part 121.
Adventures
In 2008, French adventurer Stephane Rousson attempted to cross the English Channel in a pedal-powered airship.
Stephane Rousson also flies the Aérosail, a sky sailing yacht.
Current design projects
Today, with large, fast, and more cost-efficient fixed-wing aircraft and helicopters, it is unknown whether huge airships can operate profitably in regular passenger transport, though as energy costs rise, attention is once again returning to these lighter-than-air vessels as a possible alternative. At the very least, the idea of comparatively slow, "majestic" cruising at relatively low altitude in a comfortable atmosphere has retained some appeal. There have been some niches for airships in and after World War II, such as long-duration observation, antisubmarine patrol, platforms for TV camera crews, and advertising; these generally require only small and flexible craft, and have thus generally been better served by cheaper (non-passenger) blimps.
Heavy lifting
It has periodically been suggested that airships could be employed for cargo transport, especially delivering extremely heavy loads to areas with poor infrastructure over great distances. This has also been called roadless trucking. Airships could also be used for heavy lifting over short distances (e.g. on construction sites), described as heavy-lift, short-haul. In both cases, the airships are heavy haulers. One recent enterprise of this sort was the Cargolifter project, in which a hybrid (thus not entirely Zeppelin-type) airship even larger than the Hindenburg was projected. Around 2000, CargoLifter AG built the world's largest self-supporting hall, located south of Berlin. In May 2002 the project was stopped for financial reasons and the company had to file for bankruptcy. The enormous CargoLifter hangar was later converted to house the Tropical Islands Resort. Although no rigid airships are currently used for heavy lifting, hybrid airships are being developed for such purposes. The AEREON 26, tested in 1971, was described in John McPhee's The Deltoid Pumpkin Seed.
An impediment to the large-scale development of airships as heavy haulers has been figuring out how they can be used in a cost-efficient way. In order to have a significant economic advantage over ocean transport, cargo airships must be able to deliver their payload faster than ocean carriers but more cheaply than airplanes. William Crowder, a fellow at the Logistics Management Institute, has calculated that cargo airships are only economical when they can transport 500 to 1,000 tons, approximately the same as a super-jumbo aircraft. The large initial investment required to build such a large airship has been a hindrance to production, especially given the risk inherent in a new technology. The chief commercial officer of the company hoping to sell the LMH-1, a cargo airship currently being developed by Lockheed Martin, believes that airships can be economical in hard-to-reach locations such as mining operations in northern Canada that currently require ice roads.
Metal-clad airships
A metal-clad airship has a very thin metal envelope rather than the usual fabric. The shell may be either internally braced or monocoque, as in the ZMC-2, the only example ever to fly, which did so many times in the 1920s. The shell may be gas-tight as in a non-rigid blimp, or the design may employ internal gas bags as in a rigid airship. Compared to a fabric envelope, the metal cladding is expected to be more durable.
Hybrid airships
A hybrid airship is a general term for an aircraft that combines characteristics of heavier-than-air (aeroplane or helicopter) and lighter-than-air technology. Examples include helicopter/airship hybrids intended for heavy lift applications and dynamic lift airships intended for long-range cruising. Most airships, when fully loaded with cargo and fuel, are ballasted to be heavier than air, and must therefore use their propulsion system and shape to create the aerodynamic lift necessary to stay aloft. All airships can be operated slightly heavier than air during parts of a flight (descent). Accordingly, the term "hybrid airship" refers to craft that obtain a significant portion of their lift from aerodynamic lift or other kinetic means.
For example, the Aeroscraft is a buoyancy assisted air vehicle that generates lift through a combination of aerodynamics, thrust vectoring and gas buoyancy generation and management, and for much of the time will fly heavier than air. Aeroscraft is Worldwide Aeros Corporation's continuation of DARPA's now cancelled Walrus HULA (Hybrid Ultra Large Aircraft) project.
The Patroller P3 hybrid airship, developed by Advanced Hybrid Aircraft Ltd of BC, Canada, is a relatively small buoyant craft, crewed by five and with an endurance of up to 72 hours. Flight tests with a 40%-scale RC model demonstrated that such a craft can be launched and landed without a large team of strong ground handlers. The design features a special "winglet" for aerodynamic lift control.
Airships in space exploration
Airships have been proposed as a potential cheap alternative to surface rocket launches for achieving Earth orbit. JP Aerospace has proposed the Airship to Orbit project, which intends to float a multi-stage airship up to mesospheric altitudes of 55 km (180,000 ft) and then use ion propulsion to accelerate to orbital speed. At these heights, air resistance would not be a significant problem for achieving such speeds. The company has not yet built any of the three stages.
NASA has proposed the High Altitude Venus Operational Concept, which comprises a series of five missions including crewed missions to the atmosphere of Venus in airships. Pressures on the surface of the planet are too high for human habitation, but at a specific altitude the pressure is equal to that found on Earth and this makes Venus a potential target for human colonization.
Hypothetically, there could be an airship lifted by a vacuum: that is, by a structure that contains nothing at all inside yet withstands the atmospheric pressure from the outside. This remains, at this point, science fiction, although NASA has posited that some kind of vacuum airship could eventually be used to explore the surface of Mars.
Cruiser feeder transport airship
The EU FP7 MAAT project studied an innovative cruiser/feeder airship system for the stratosphere, with a cruiser remaining airborne for long periods and feeders connecting it to the ground, flying as piloted balloons.
Airships for humanitarian and cargo transport
Google co-founder Sergey Brin founded LTA Research in 2015 to develop airships for humanitarian and cargo transport. The company's 124-meter-long, helium-filled airship Pathfinder 1 received a special airworthiness certificate from the FAA in September 2023.
The certificate allowed the largest airship since the ill-fated Hindenburg to begin flight tests at Moffett Field, a joint civil-military airport in Silicon Valley.
Comparison with heavier-than-air aircraft
The advantage of airships over aeroplanes is that static lift sufficient for flight is generated by the lifting gas and requires no engine power. This was an immense advantage before the middle of World War I and remained an advantage for long-distance or long-duration operations until World War II. Modern concepts for high-altitude airships include photovoltaic cells to reduce the need to land to refuel, so that they can remain in the air until consumables expire. This similarly reduces or eliminates the need to consider variable fuel weight in buoyancy calculations.
The disadvantages are that an airship has a very large reference area and a comparatively large drag coefficient, and thus a larger drag force than aeroplanes and even helicopters. Given the large frontal area and wetted surface of an airship, a practical speed limit is reached at only about one-third the typical airspeed of a modern commercial airplane. Thus, airships are used where speed is not critical.
The lift capability of an airship is equal to the buoyant force minus the weight of the airship. This assumes standard air-temperature and pressure conditions. Corrections are usually made for water vapor and impurity of lifting gas, as well as percentage of inflation of the gas cells at liftoff. Based on specific lift (lifting force per unit volume of gas), the greatest static lift is provided by hydrogen (11.15 N/m3 or 71 lbf/1000 cu ft) with helium (10.37 N/m3 or 66 lbf/1000 cu ft) a close second.
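To make these figures concrete, gross static lift is simply the specific lift multiplied by the gas volume. Below is a minimal Python sketch using the specific-lift values quoted above; the 200,000 m³ envelope volume is an illustrative assumption (roughly Hindenburg scale), not a figure from any particular design.

# Rough static-lift estimate from specific lift (values quoted above,
# valid near standard temperature and pressure).
SPECIFIC_LIFT_N_PER_M3 = {"hydrogen": 11.15, "helium": 10.37}

def gross_static_lift_kg(gas, volume_m3):
    # Convert lifting force (newtons) to an equivalent supported mass (kg).
    g = 9.81  # standard gravity, m/s^2
    return SPECIFIC_LIFT_N_PER_M3[gas] * volume_m3 / g

volume = 200_000.0  # m^3, assumed envelope volume for illustration
for gas in ("hydrogen", "helium"):
    tonnes = gross_static_lift_kg(gas, volume) / 1000.0
    print(f"{gas}: about {tonnes:.0f} tonnes of gross static lift")

Net payload is then this gross figure minus the weight of structure, engines, fuel, and crew, as described above.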
In addition to static lift, an airship can obtain a certain amount of dynamic lift from its engines. Dynamic lift in past airships has been about 10% of the static lift. Dynamic lift allows an airship to "take off heavy" from a runway similar to fixed-wing and rotary-wing aircraft. This requires additional weight in engines, fuel, and landing gear, negating some of the static lift capacity.
The altitude at which an airship can fly largely depends on how much lifting gas it can afford to lose to expansion before stasis is reached. The ultimate altitude record for a rigid airship was set in 1917 by the L-55 under the command of Hans-Kurt Flemming, when he forced the airship to extreme altitude while attempting to cross France after the "Silent Raid" on London. The L-55 lost lift during the descent to lower altitudes over Germany and crashed. While such venting of gas was necessary for the survival of airships in the later years of World War I, it was impractical for commercial operations or for helium-filled military airships. The highest flight made by a hydrogen-filled passenger airship was on the Graf Zeppelin's around-the-world flight.
The greatest disadvantage of the airship is size, which is essential to increasing performance. As size increases, the problems of ground handling increase geometrically. As the German Navy moved from the P class of 1915 to the successively larger Q class of 1916, R class of 1917, and finally the W class of 1918, ground-handling problems reduced the number of days the Zeppelins were able to make patrol flights. Availability declined from 34% in 1915, to 24.3% in 1916, and finally 17.5% in 1918.
So long as the power-to-weight ratios of aircraft engines remained low and specific fuel consumption high, the airship had an edge for long-range or -duration operations. As those figures changed, the balance shifted rapidly in the aeroplane's favour. By mid-1917, the airship could no longer survive in a combat situation where the threat was aeroplanes. By the late 1930s, the airship barely had an advantage over the aeroplane on intercontinental over-water flights, and that advantage had vanished by the end of World War II.
This applies to direct tactical confrontations. High-altitude airship projects now under consideration are planned to survey an operating radius of hundreds of kilometres, often much farther than the normal engagement range of a military aeroplane. A radar mounted on a ship's mast has a radio horizon of only a few tens of kilometres, while the same radar at airship altitude can have a radio horizon of several hundred kilometres. This is significantly important for detecting low-flying cruise missiles or fighter-bombers.
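The gain from altitude follows from the standard line-of-sight radio-horizon approximation, d = sqrt(2 R h), where R is the effective Earth radius (commonly taken as 4/3 of the true radius to account for refraction). A minimal Python sketch; both antenna heights below are illustrative assumptions, not figures for any particular system.

import math

def radio_horizon_km(antenna_height_m):
    # 4/3-Earth-radius model: d = sqrt(2 * R_eff * h)
    r_eff = (4.0 / 3.0) * 6_371_000.0  # effective Earth radius, m
    return math.sqrt(2.0 * r_eff * antenna_height_m) / 1000.0

for height_m in (30.0, 20_000.0):  # assumed ship's mast vs. airship altitude
    print(f"antenna at {height_m:>7.0f} m: horizon about {radio_horizon_km(height_m):.0f} km")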
Safety
The most commonly used lifting gas, helium, is inert and therefore presents no fire risk. A series of vulnerability tests was carried out on a Skyship 600 by the UK Defence Evaluation and Research Agency (DERA). Since the internal gas pressure was maintained at only 1–2% above the surrounding air pressure, the vehicle proved highly tolerant of physical damage and of attack by small-arms fire or missiles. Several hundred high-velocity bullets were fired through the hull, and even two hours later the vehicle would have been able to return to base; ordnance passed through the envelope without causing critical helium loss. The results, and a related mathematical model, were also presented for a hypothetical airship of Zeppelin NT size. In all instances of light armament fire evaluated under both test and live conditions, the airship was able to complete its mission and return to base.
Licensing
In the United Kingdom, the basic pilot licence for airships is the PPL(As), or private pilot licence, which requires a minimum of 35 hours of instruction on airships. To fly commercially, a Commercial Pilot Licence (Airships) is required.
See also
Airborne aircraft carrier
Aircruise
Airship hangar
Barrage balloon
Conrad Airship CA 80 (1975–1977)
Evolutionary Air and Space Global Laser Engagement
High-altitude platform station
Hyperion, fictional airship type.
List of airship accidents
List of British airships
List of current airships in the United States
List of Zeppelins
Mystery airship
Stratellite
SVAM CA-80
Worldwide Aeros Corp
Zeppelin mail
External links
Should Airships Make a Comeback? – Veritasium YouTube channel
Aeronautics
Gases
Vehicles introduced in 1899
"Physics",
"Chemistry"
] | 15,765 | [
"Statistical mechanics",
"Gases",
"Phases of matter",
"Matter"
] |
Microwave oven

A microwave oven or simply microwave is an electric oven that heats and cooks food by exposing it to electromagnetic radiation in the microwave frequency range. This induces polar molecules in the food to rotate and produce thermal energy in a process known as dielectric heating. Microwave ovens heat foods quickly and efficiently because excitation is fairly uniform in the outer layers of a homogeneous, high-water-content food item.
The development of the cavity magnetron in the United Kingdom made possible the production of electromagnetic waves of a small enough wavelength (microwaves) to efficiently heat up water molecules. American electrical engineer Percy Spencer is generally credited with developing and patenting the world's first commercial microwave oven after World War II, using British radar technology developed before and during the war. Named the "Radarange", it was first sold in 1947.
Raytheon later licensed its patents for a home-use microwave oven that was introduced by Tappan in 1955, but it was still too large and expensive for general home use. Sharp Corporation introduced the first microwave oven with a turntable between 1964 and 1966. The countertop microwave oven was introduced in 1967 by the Amana Corporation. After microwave ovens became affordable for residential use in the late 1970s, their use spread into commercial and residential kitchens around the world, and prices fell rapidly during the 1980s. In addition to cooking food, microwave ovens are used for heating in many industrial processes.
Microwave ovens are a common kitchen appliance and are popular for reheating previously cooked foods and cooking a variety of foods. They rapidly heat foods which can easily burn or turn lumpy if cooked in conventional pans, such as hot butter, fats, chocolate, or porridge. Microwave ovens usually do not directly brown or caramelize food, since they rarely attain the necessary temperature to produce Maillard reactions. Exceptions occur in cases where the oven is used to heat frying-oil and other oily items (such as bacon), which attain far higher temperatures than that of boiling water.
Microwave ovens have a limited role in professional cooking, because the boiling-range temperatures of a microwave oven do not produce the flavorful chemical reactions that frying, browning, or baking at a higher temperature produces. However, such high-heat sources can be added to microwave ovens in the form of a convection microwave oven.
History
Early developments
The exploitation of high-frequency radio waves for heating substances was made possible by the development of vacuum tube radio transmitters around 1920. By 1930 the application of short waves to heat human tissue had developed into the medical therapy of diathermy. At the 1933 Chicago World's Fair, Westinghouse demonstrated the cooking of foods between two metal plates attached to a 10 kW, 60 MHz shortwave transmitter. The Westinghouse team, led by I. F. Mouromtseff, found that foods like steaks and potatoes could be cooked in minutes.
A 1937 United States patent application by Bell Laboratories proposed using radio-frequency energy to heat foodstuffs.
However, lower-frequency dielectric heating, as described in the aforementioned patent, is (like induction heating) an electromagnetic heating effect resulting from so-called near-field effects that exist in an electromagnetic cavity that is small compared with the wavelength of the electromagnetic field. The patent proposed radio-frequency heating at 10 to 20 megahertz (wavelengths of 30 to 15 meters, respectively). Heating from microwaves whose wavelength is small relative to the cavity (as in a modern microwave oven) is instead due to "far-field" effects of classical electromagnetic radiation, which describes freely propagating light and microwaves suitably far from their source. Nevertheless, the primary heating effect of all types of electromagnetic fields at both radio and microwave frequencies occurs via the dielectric heating effect, as polarized molecules are affected by a rapidly alternating electric field.
Cavity magnetron
The invention of the cavity magnetron made possible the production of electromagnetic waves of a small enough wavelength (microwaves). The cavity magnetron was a crucial component in the development of short wavelength radar during World War II. In 1937–1940, a multi-cavity magnetron was built by British physicist Sir John Turton Randall, FRSE and coworkers, for the British and American military radar installations in World War II. A higher-powered microwave generator that worked at shorter wavelengths was needed, and in 1940, at the University of Birmingham in England, Randall and Harry Boot produced a working prototype. They invented a valve that could produce pulses of microwave radio energy at a wavelength of 10 cm, an unprecedented discovery.
Sir Henry Tizard traveled to the US in late September 1940 to offer Britain's most valuable technical secrets including the cavity magnetron in exchange for US financial and industrial support (see Tizard Mission). An early 6 kW version, built in England by the General Electric Company Research Laboratories, Wembley, London, was given to the U.S. government in September 1940. The cavity magnetron was later described by American historian James Phinney Baxter III as "[t]he most valuable cargo ever brought to our shores". Contracts were awarded to Raytheon and other companies for the mass production of the cavity magnetron.
Discovery
In 1945, the heating effect of a high-power microwave beam was independently and accidentally discovered by Percy Spencer, an American self-taught engineer from Howland, Maine. Employed by Raytheon at the time, he noticed that microwaves from an active radar set he was working on started to melt a candy bar he had in his pocket. The first food deliberately cooked by Spencer was popcorn, and the second was an egg, which exploded in the face of one of the experimenters.
To verify his finding, Spencer created a high-density electromagnetic field by feeding microwave power from a magnetron into a metal box from which it had no way to escape. When food was placed in the box with the microwave energy, the temperature of the food rose rapidly. On 8 October 1945, Raytheon filed a United States patent application for Spencer's microwave cooking process, and an oven that heated food using microwave energy from a magnetron was soon placed in a Boston restaurant for testing.
Another independent discovery of microwave oven technology was by British scientists, including James Lovelock, who in the 1950s used it to reanimate cryogenically frozen hamsters.
Commercial availability
In 1947, Raytheon built the "Radarange", the first commercially available microwave oven. It cost about US$5,000 each, consumed 3 kilowatts (about three times as much as today's microwave ovens), and was water-cooled. The name was the winning entry in an employee contest. An early Radarange was installed (and remains) in the galley of the nuclear-powered passenger/cargo ship NS Savannah. An early commercial model introduced in 1954 consumed 1.6 kilowatts and sold for US$2,000 to US$3,000. Raytheon licensed its technology to the Tappan Stove company of Mansfield, Ohio in 1952. Under contract to Whirlpool, Westinghouse, and other major appliance manufacturers looking to add matching microwave ovens to their conventional oven lines, Tappan produced several variations of their built-in model from roughly 1955 to 1960. Due to maintenance requirements (some units were water-cooled), the need for built-in installation, and a price of US$1,295, sales were limited.
Japan's Sharp Corporation began manufacturing microwave ovens in 1961. Between 1964 and 1966, Sharp introduced the first microwave oven with a turntable, an alternative means to promote more even heating of food. In 1965, Raytheon, looking to expand their Radarange technology into the home market, acquired Amana to provide more manufacturing capability. In 1967, they introduced the first popular home model, the countertop Radarange, at a price of US$495. Unlike the Sharp models, which rotated the food on a turntable, Amana's design used a motor-driven mode stirrer rotating in the top of the oven cavity, allowing the food to remain stationary.
In the 1960s, Litton bought Studebaker's Franklin Manufacturing assets, which had been manufacturing magnetrons and building and selling microwave ovens similar to the Radarange. Litton developed a new configuration of the microwave oven: the short, wide shape that is now common. The magnetron feed was also unique. This resulted in an oven that could survive a no-load condition: an empty microwave oven where there is nothing to absorb the microwaves. The new oven was shown at a trade show in Chicago, and helped begin a rapid growth of the market for home microwave ovens. Sales volume of 40,000 units for the U.S. industry in 1970 grew to one million by 1975. Market penetration was even faster in Japan, due to a less expensive re-engineered magnetron.
Several other companies joined in the market, and for a time most systems were built by defence contractors, who were most familiar with the magnetron. Litton was particularly well known in the restaurant business.
Residential use
While uncommon today, combination microwave-ranges were offered by major appliance manufacturers through much of the 1970s as a natural progression of the technology. Both Tappan and General Electric offered units that appeared to be conventional stove top/oven ranges, but included microwave capability in the conventional oven cavity. Such ranges were attractive to consumers since both microwave energy and conventional heating elements could be used simultaneously to speed cooking, and there was no loss of countertop space. The proposition was also attractive to manufacturers as the additional component cost could better be absorbed compared with countertop units where pricing was increasingly market-sensitive.
By 1972, Litton (Litton Atherton Division, Minneapolis) introduced two new microwave ovens, priced at $349 and $399, to tap into a market estimated to reach $750 million by 1976, according to Robert I. Bruder, president of the division. While prices remained high, new features continued to be added to home models. Amana introduced automatic defrost in 1974 on their RR-4D model, and was the first to offer a microprocessor-controlled digital control panel in 1975 with their RR-6 model.
The late 1970s saw an explosion of low-cost countertop models from many major manufacturers.
Formerly found only in large industrial applications, microwave ovens increasingly became a standard fixture of residential kitchens in developed countries. By 1986, roughly 25% of households in the U.S. owned a microwave oven, up from only about 1% in 1971; the U.S. Bureau of Labor Statistics reported that over 90% of American households owned a microwave oven in 1997. In Australia, a 2008 market research study found that 95% of kitchens contained a microwave oven and that 83% of them were used daily. In Canada, fewer than 5% of households had a microwave oven in 1979, but more than 88% of households owned one by 1998. In France, 40% of households owned a microwave oven in 1994, but that number had increased to 65% by 2004.
Adoption has been slower in less-developed countries, as households with disposable income concentrate on more important household appliances like refrigerators and ovens. In India, for example, only about 5% of households owned a microwave oven in 2013, well behind refrigerators at 31% ownership. However, microwave ovens are gaining popularity. In Russia, for example, the number of households with a microwave oven grew from almost 24% in 2002 to almost 40% in 2008. Almost twice as many households in South Africa owned microwave ovens in 2008 (38.7%) as in 2002 (19.8%). Microwave oven ownership in Vietnam in 2008 was at 16% of households, versus 30% ownership of refrigerators; this rate was up significantly from 6.7% microwave oven ownership in 2002, with 14% ownership for refrigerators that year.
Consumer household microwave ovens usually come with a cooking power of between 600 and 1200 watts. Microwave cooking power, also referred to as output wattage, is lower than its input wattage, which is the manufacturer's listed power rating.
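The relationship between the two ratings is a simple efficiency ratio, as the short Python sketch below shows; both wattages are illustrative assumptions, not specifications of any particular model.

# Illustrative input-vs-output power comparison (values are assumptions).
input_watts = 1_450.0   # assumed power drawn from the wall outlet
output_watts = 1_000.0  # assumed rated cooking (output) power
print(f"conversion efficiency: {output_watts / input_watts:.0%}")  # about 69%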
The size of household microwave ovens varies in both internal volume and external dimensions; countertop models typically weigh between 23 and 45 pounds (about 10 to 20 kg).
Microwave ovens may be of turntable or flatbed design. Turntable ovens include a rotating glass plate or tray; flatbed ovens have no plate, giving them a flat, wider cavity.
By position and type, the US DOE classifies them as (1) countertop or (2) over-the-range and built-in (a wall oven for a cabinet, or a drawer model).
A traditional microwave oven has only two power output levels: fully on and fully off. Intermediate heat settings are achieved by duty-cycle modulation, switching between full power and off every few seconds, with more time on for higher settings.
An inverter type, however, can sustain reduced power output for a lengthy duration without having to switch itself off and on repeatedly. Apart from offering superior cooking ability, these microwave ovens are generally more energy-efficient.
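A minimal Python sketch of the two control strategies; the 20-second cycle period and the 50% setting are illustrative assumptions, not figures from any particular oven.

def duty_cycle_schedule(target_fraction, period_s=20.0):
    # A transformer oven approximates partial power by cycling full power.
    on_s = target_fraction * period_s
    return on_s, period_s - on_s

on_s, off_s = duty_cycle_schedule(0.5)  # a "50% power" setting
print(f"transformer oven: {on_s:.0f} s at full power, {off_s:.0f} s off, repeating")
print("inverter oven: continuous output at 50% of rated power instead")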
In recent years, the majority of countertop microwave ovens (regardless of brand) sold in the United States have been manufactured by the Midea Group.
Categories
Domestic microwave ovens are typically marked with the microwave-safe symbol next to the device's approximate IEC 60705 output power rating in watts (typically 600 W, 700 W, 800 W, 900 W, or 1000 W) and a voluntary heating category (A–E).
Principles
A microwave oven heats food by passing microwave radiation through it. Microwaves are a form of non-ionizing electromagnetic radiation with a frequency in the so-called microwave region (300 MHz to 300 GHz). Microwave ovens use frequencies in one of the ISM (industrial, scientific, medical) bands, which are otherwise used for communication amongst devices that do not need a license to operate, so they do not interfere with other vital radio services.
It is a common misconception that microwave ovens heat food by operating at a special resonance of water molecules in the food. Instead, microwave ovens heat by causing molecules to rotate under the influence of a constantly changing electric field, and a higher-wattage oven results in faster cooking times. Typically, consumer ovens work around a nominal 2.45 gigahertz (GHz), a wavelength of 12.2 cm, in the 2.4 GHz to 2.5 GHz ISM band, while large industrial/commercial ovens often use 915 megahertz (MHz), a wavelength of about 32.8 cm. Among other differences, the longer wavelength of a commercial microwave oven allows the initial heating effects to begin deeper within the food or liquid, and therefore to spread evenly through its bulk sooner, as well as raising the temperature deep within the food more quickly.
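The quoted wavelengths follow directly from the relation wavelength = c/f, as a quick Python check shows:

C = 299_792_458.0  # speed of light, m/s

for label, f_hz in (("consumer, 2.45 GHz", 2.45e9), ("industrial, 915 MHz", 915e6)):
    # wavelength = speed of light / frequency, converted to centimetres
    print(f"{label}: wavelength about {C / f_hz * 100:.1f} cm")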
A microwave oven takes advantage of the electric dipole structure of water molecules, fats, and many other substances in the food, using a process known as dielectric heating. These molecules have a partial positive charge at one end and a partial negative charge at the other. In an alternating electric field, they will constantly spin around as they continually try to align themselves with the electric field. This can happen over a wide range of frequencies. The electric field's energy is absorbed by the dipole molecules as rotational energy. Then they hit non-dipole molecules, making them move faster as well. This energy is shared deeper into the substance as molecular rotation and translational movement occurs, signifying an increase in the temperature of the food. Once the electrical field's energy is initially absorbed, heat will gradually spread through the object similarly to any other heat transfer by contact with a hotter body.
Defrosting
Microwave heating is more efficient on liquid water than on frozen water, where the movement of molecules is more restricted. Defrosting is done at a low power setting, allowing time for conduction to carry heat to still frozen parts of food. Dielectric heating of liquid water is also temperature-dependent: At 0 °C, dielectric loss is greatest at a field frequency of about 10 GHz, and for higher water temperatures at higher field frequencies.
Fats and sugar
Sugars and triglycerides (fats and oils) absorb microwaves due to the dipole moments of their hydroxyl groups or ester groups. Microwave heating is less efficient on fats and sugars than on water because they have a smaller molecular dipole moment.
Although fats and sugars typically absorb energy less efficiently than water, they paradoxically reach faster and higher temperatures when cooking. Fats and oils require less energy per gram to raise their temperature by 1 °C than water does (they have a lower specific heat capacity), and they begin shedding heat by "boiling" only after reaching a much higher temperature than water (the temperature they require to vaporize is higher), so inside microwave ovens they normally reach higher temperatures, sometimes much higher. This can bring oil or fatty foods like bacon far above the boiling point of water, and high enough to induce some browning reactions, much in the manner of conventional broiling (UK: grilling), braising, or deep-fat frying.
The effect is most often noticed by consumers from unexpected damage to plastic containers when microwaving foods high in sugar, starch, or fat generates higher temperatures. Foods high in water content and with little oil rarely exceed the boiling temperature of water and do not damage plastic.
Cookware
Cookware must be transparent to microwaves. Conductive cookware, such as metal pots, reflects microwaves, and prevents the microwaves from reaching the food. Cookware made of materials with high electrical permittivity will absorb microwaves, resulting in the cookware heating rather than the food. Cookware made of melamine resin is a common type of cookware that will heat in a microwave oven, reducing the effectiveness of the microwave oven and creating a hazard from burns or shattered cookware.
Thermal runaway
Microwave heating can cause localized thermal runaways in some materials with low thermal conductivity which also have dielectric constants that increase with temperature. An example is glass, which can exhibit thermal runaway in a microwave oven to the point of melting if preheated. Additionally, microwaves can melt certain types of rocks, producing small quantities of molten rock. Some ceramics can also be melted, and may even become clear upon cooling. Thermal runaway is more typical of electrically conductive liquids such as salty water.
Penetration
Another misconception is that microwave ovens cook food "from the inside out", meaning from the center of the entire mass of food outwards. This idea arises from heating behavior seen if an absorbent layer of water lies beneath a less absorbent drier layer at the surface of a food; in this case, the deposition of heat energy inside a food can exceed that on its surface. This can also occur if the inner layer has a lower heat capacity than the outer layer, causing it to reach a higher temperature, or even if the inner layer is more thermally conductive than the outer layer, making it feel hotter despite having a lower temperature. In most cases, however, with a uniformly structured or reasonably homogeneous food item, microwaves are absorbed in the outer layers of the item at a similar level to that of the inner layers.
Depending on water content, the depth of initial heat deposition may be several centimetres or more with microwave ovens, in contrast with broiling / grilling (infrared) or convection heating methods which thinly deposit heat at the food surface. Penetration depth of microwaves depends on food composition and the frequency, with lower microwave frequencies (longer wavelengths) penetrating deeper.
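Absorption with depth is commonly modelled as an exponential falloff, P(z) = P0 * exp(-z/delta), where the penetration depth delta depends on composition and frequency. A minimal Python sketch; the 1 cm value of delta is an illustrative assumption, and real foods vary widely.

import math

def power_fraction(depth_cm, delta_cm=1.0):
    # Exponential (Beer-Lambert-style) falloff of microwave power with depth.
    return math.exp(-depth_cm / delta_cm)

for depth in (0.5, 1.0, 2.0, 4.0):
    print(f"{depth:.1f} cm deep: {power_fraction(depth):.0%} of surface power remains")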
Energy consumption
In use, microwave ovens can be as low as 50% efficient at converting electricity into microwaves, but energy-efficient models can exceed 64% efficiency. Stovetop cooking is 40–90% efficient, depending on the type of appliance used.
Because they are used fairly infrequently, the average residential microwave oven consumes only 72 kWh per year. Globally, microwave ovens used an estimated 77 TWh per year in 2018, or 0.3% of global electricity generation.
A 2000 study by Lawrence Berkeley National Laboratory found that the average microwave drew almost 3 watts of standby power when not being used, which would total approximately 26 kWh per year. New efficiency standards imposed in 2016 by the United States Department of Energy require less than 1 watt, or approximately 9 kWh per year, of standby power for most types of microwave ovens.
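These standby figures are easy to verify: a constant draw in watts multiplied by the hours in a year gives kilowatt-hours.

HOURS_PER_YEAR = 365 * 24  # 8,760

for standby_watts in (3.0, 1.0):  # the 2000 study's average vs. the 2016 standard
    kwh = standby_watts * HOURS_PER_YEAR / 1000.0
    print(f"{standby_watts:.0f} W standby draw: about {kwh:.0f} kWh per year")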
Components
A microwave oven generally consists of:
a high-voltage DC power source, either:
a large high voltage transformer with a voltage doubler (a high-voltage capacitor and a diode)
an electronic power converter usually based around an inverter.
a cavity magnetron, which converts the high-voltage DC electric energy to microwave radiation
a magnetron control circuit (usually with a microcontroller)
a short waveguide (to couple microwave power from the magnetron into the cooking chamber)
a turntable and/or metal waveguide stirring fan
a control panel
In most ovens, the magnetron is driven by a linear transformer which can only feasibly be switched completely on or off. (One variant of the GE Spacemaker had two taps on the transformer primary, for high and low power modes.) Usually choice of power level does not affect intensity of the microwave radiation; instead, the magnetron is cycled on and off every few seconds, thus altering the large scale duty cycle. Newer models use inverter power supplies that use pulse-width modulation to provide effectively continuous heating at reduced power settings, so that foods are heated more evenly at a given power level and can be heated more quickly without being damaged by uneven heating.
The microwave frequencies used in microwave ovens are chosen based on regulatory and cost constraints. The first is that they should be in one of the industrial, scientific, and medical (ISM) frequency bands set aside for unlicensed purposes. For household purposes, 2.45 GHz has the advantage over 915 MHz in that 915 MHz is only an ISM band in some countries (ITU Region 2) while 2.45 GHz is available worldwide. Three additional ISM bands exist in the microwave frequencies, but are not used for microwave cooking. Two of them are centered on 5.8 GHz and 24.125 GHz, but are not used for microwave cooking because of the very high cost of power generation at these frequencies. The third, centered on 433.92 MHz, is a narrow band that would require expensive equipment to generate sufficient power without creating interference outside the band, and is only available in some countries.
The cooking chamber is similar to a Faraday cage to prevent the waves from coming out of the oven. Even though there is no continuous metal-to-metal contact around the rim of the door, choke connections on the door edges act like metal-to-metal contact, at the frequency of the microwaves, to prevent leakage. The oven door usually has a window for easy viewing, with a layer of conductive mesh some distance from the outer panel to maintain the shielding. Because the size of the perforations in the mesh is much less than the microwaves' wavelength (12.2 cm for the usual 2.45 GHz), microwave radiation cannot pass through the door, while visible light (with its much shorter wavelength) can.
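A quick comparison shows why the mesh blocks microwaves while passing light; the 1 mm perforation size below is an assumed typical value, not a specification.

microwave_wavelength_m = 0.122  # 2.45 GHz, as noted above
visible_wavelength_m = 550e-9   # mid-visible green light
hole_diameter_m = 1e-3          # assumed typical mesh perforation size

# Holes far smaller than the wavelength block the wave; far larger pass it.
print(f"hole / microwave wavelength: {hole_diameter_m / microwave_wavelength_m:.3f}")
print(f"hole / visible wavelength:   {hole_diameter_m / visible_wavelength_m:,.0f}")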
Control panel
Modern microwave ovens use either an analog dial-type timer or a digital control panel for operation. Control panels feature an LED, LCD or vacuum fluorescent display, buttons for entering the cook time and a power level selection feature. A defrost option is typically offered, as either a power level or a separate function. Some models include pre-programmed settings for different food types, typically taking weight as input. In the 1990s, brands such as Panasonic and GE began offering models with a scrolling-text display showing cooking instructions.
Power settings are commonly implemented not by actually varying the power output, but by switching the emission of microwave energy off and on at intervals. The highest setting thus represents continuous power. Defrost might represent power for two seconds followed by no power for five seconds. To indicate cooking has completed, an audible warning such as a bell or a beeper is usually present, and/or "End" usually appears on the display of a digital microwave.
Microwave control panels are often considered awkward to use and are frequently cited as examples in discussions of user interface design.
Variants and accessories
A variant of the conventional microwave oven is the convection microwave oven. A convection microwave oven is a combination of a standard microwave oven and a convection oven. It allows food to be cooked quickly, yet come out browned or crisped, as from a convection oven. Convection microwave ovens are more expensive than conventional microwave ovens. Some convection microwave ovens—those with exposed heating elements—can produce smoke and burning odors as food spatter from earlier microwave-only use is burned off the heating elements. Some ovens use high speed air; these are known as impingement ovens and are designed to cook food quickly in restaurants, but cost more and consume more power.
In 2000, some manufacturers began offering high power quartz halogen bulbs to their convection microwave oven models, marketing them under names such as "Speedcook", "Advantium", "Lightwave" and "Optimawave" to emphasize their ability to cook food rapidly and with good browning. The bulbs heat the food's surface with infrared (IR) radiation, browning surfaces as in a conventional oven. The food browns while also being heated by the microwave radiation and heated through conduction through contact with heated air. The IR energy which is delivered to the outer surface of food by the lamps is sufficient to initiate browning caramelization in foods primarily made up of carbohydrates and Maillard reactions in foods primarily made up of protein. These reactions in food produce a texture and taste similar to that typically expected of conventional oven cooking rather than the bland boiled and steamed taste that microwave-only cooking tends to create.
In order to aid browning, sometimes an accessory browning tray is used, usually composed of glass or porcelain. It makes food crisp by oxidizing the top layer until it turns brown. Ordinary plastic cookware is unsuitable for this purpose because it could melt.
Frozen dinners, pies, and microwave popcorn bags often contain a susceptor made from thin aluminium film in the packaging or included on a small paper tray. The metal film absorbs microwave energy efficiently and consequently becomes extremely hot and radiates in the infrared, concentrating the heating of oil for popcorn or even browning surfaces of frozen foods. Heating packages or trays containing susceptors are designed for a single use and are then discarded as waste.
Heating characteristics
Microwave ovens produce heat directly within the food, but despite the common misconception that microwaved food cooks from the inside out, 2.45 GHz microwaves can only penetrate a few centimetres into most foods. The inside portions of thicker foods are mainly heated by heat conducted from this outer layer.
Uneven heating in microwaved food can be partly due to the uneven distribution of microwave energy inside the oven, and partly due to the different rates of energy absorption in different parts of the food. The first problem is reduced by a stirrer, a type of fan that reflects microwave energy to different parts of the oven as it rotates, or by a turntable or carousel that turns the food; turntables, however, may still leave spots, such as the center of the oven, which receive uneven energy distribution.
The location of dead spots and hot spots in a microwave oven can be mapped by placing a damp piece of thermal paper in the oven: where the water-saturated paper is heated by the microwave radiation, the dye darkens, providing a visual representation of the field. If multiple layers of paper are positioned in the oven with sufficient distance between them, a three-dimensional map can be created. Many store receipts are printed on thermal paper, which makes this easy to do at home.
The second problem is due to food composition and geometry, and must be addressed by the cook, by arranging the food so that it absorbs energy evenly, and periodically testing and shielding any parts of the food that overheat. In some materials with low thermal conductivity, where dielectric constant increases with temperature, microwave heating can cause localized thermal runaway. Under certain conditions, glass can exhibit thermal runaway in a microwave oven to the point of melting.
Due to this phenomenon, microwave ovens set at too-high power levels may even start to cook the edges of frozen food while the inside of the food remains frozen. Another case of uneven heating can be observed in baked goods containing berries. In these items, the berries absorb more energy than the drier surrounding bread and cannot dissipate the heat due to the low thermal conductivity of the bread. Often this results in overheating the berries relative to the rest of the food. "Defrost" oven settings either use low power levels or repeatedly turn the power off and on – intended to allow time for heat to be conducted within frozen foods from areas that absorb heat more readily to those which heat more slowly. In turntable-equipped ovens, more even heating can take place by placing food off-center on the turntable tray instead of exactly in the center, as this results in more even heating of the food throughout.
There are microwave ovens on the market that allow full-power defrosting. They do this by exploiting the properties of the electromagnetic radiation LSM modes. LSM full-power defrosting may actually achieve more even results than slow defrosting.
Microwave heating can be deliberately uneven by design. Some microwavable packages (notably pies) may include materials that contain ceramic or aluminium flakes, which are designed to absorb microwaves and heat up, which aids in baking or crust preparation by depositing more energy shallowly in these areas. The technical term for such a microwave-absorbing patch is a susceptor. Such ceramic patches affixed to cardboard are positioned next to the food, and are typically smokey blue or gray in colour, usually making them easily identifiable; the cardboard sleeves included with Hot Pockets, which have a silver surface on the inside, are a good example of such packaging. Microwavable cardboard packaging may also contain overhead ceramic patches which function in the same way.
Effects on food and nutrients
Any form of cooking diminishes overall nutrient content in food, particularly water-soluble vitamins common in vegetables, but the key variables are how much water is used in the cooking, how long the food is cooked, and at what temperature. Nutrients are primarily lost by leaching into cooking water, which tends to make microwave cooking effective, given the shorter cooking times it requires and that the water heated is in the food. Like other heating methods, microwaving converts vitamin B from an active to inactive form; the amount of conversion depends on the temperature reached, as well as the cooking time. Boiled food reaches a maximum of 100 °C (the boiling point of water), whereas microwaved food can get internally hotter than this, leading to faster breakdown of vitamin B. The higher rate of loss is partially offset by the shorter cooking times required.
Spinach retains nearly all its folate when cooked in a microwave oven; when boiled, it loses about 77%, leaching nutrients into the cooking water. Bacon cooked by microwave oven has significantly lower levels of nitrosamines than conventionally cooked bacon. Steamed vegetables tend to maintain more nutrients when microwaved than when cooked on a stovetop. Microwave blanching is 3–4 times more effective than boiled-water blanching for retaining the water-soluble vitamins folate, thiamin and riboflavin, with the exception of ascorbic acid (vitamin C), of which 29% is lost (compared with a 16% loss with boiled-water blanching).
Safety benefits and features
All microwave ovens use a timer to switch off the oven at the end of the cooking time.
Microwave ovens heat food without getting hot themselves. Taking a pot off a stove, unless it is an induction cooktop, leaves a potentially dangerous heating element or trivet that remains hot for some time. Likewise, when taking a casserole out of a conventional oven, one's arms are exposed to the very hot walls of the oven. A microwave oven does not pose this problem.
Food and cookware taken out of a microwave oven are rarely much hotter than 100 °C. Cookware used in a microwave oven is often much cooler than the food because the cookware is transparent to microwaves; the microwaves heat the food directly and the cookware is heated indirectly by the food. Food and cookware from a conventional oven, on the other hand, are the same temperature as the rest of the oven, which typically operates far above 100 °C. That means conventional stoves and ovens can cause more serious burns.
The lower temperature of cooking (around the boiling point of water) is a significant safety benefit compared with baking in the oven or frying, because it eliminates the formation of tars and char, which are carcinogenic. Microwave radiation also penetrates deeper than direct heat, so that the food is heated by its own internal water content. In contrast, direct heat can burn the surface while the inside is still cold. Pre-heating the food in a microwave oven before putting it into the grill or pan reduces the time needed to heat up the food and reduces the formation of carcinogenic char. Unlike frying and baking, microwaving does not produce acrylamide in potatoes; however, unlike deep-frying at high temperatures, it is of only limited effectiveness in reducing glycoalkaloid (i.e., solanine) levels. Acrylamide has been found in other microwaved products like popcorn.
Use in cleaning kitchen sponges
Studies have investigated the use of the microwave oven to clean non-metallic domestic sponges which have been thoroughly wetted. A 2006 study found that microwaving wet sponges for 2 minutes (at 1000-watt power) removed 99% of coliforms, E. coli, and MS2 phages. Bacillus cereus spores were killed at 4 minutes of microwaving.
A 2017 study was less affirmative: About 60% of the germs were killed but the remaining ones quickly re-colonized the sponge.
Issues
High temperatures
Closed containers
Closed containers, such as eggs, can explode when heated in a microwave oven due to the increased pressure from steam. Intact fresh egg yolks outside the shell also explode as a result of superheating. Insulating plastic foams of all types generally contain closed air pockets, and are generally not recommended for use in a microwave oven, as the air pockets explode and the foam (which can be toxic if consumed) may melt. Not all plastics are microwave-safe, and some plastics absorb microwaves to the point that they may become dangerously hot.
Fires
Products that are heated for too long can catch fire. Though this is inherent to any form of cooking, the rapid cooking and unattended nature of the use of microwave ovens results in additional hazard.
Superheating
In rare cases, water and other homogeneous liquids can superheat when heated in a microwave oven in a container with a smooth surface. That is, the liquid reaches a temperature slightly above its normal boiling point without bubbles of vapour forming inside the liquid. The boiling process can start explosively when the liquid is disturbed, such as when the user takes hold of the container to remove it from the oven or while adding solid ingredients such as powdered creamer or sugar. This can result in spontaneous boiling (nucleation) which may be violent enough to eject the boiling liquid from the container and cause severe scalding.
Metal objects
Contrary to popular assumptions, metal objects can be safely used in a microwave oven, but with some restrictions. Any metal or conductive object placed into the microwave oven acts as an antenna to some degree, resulting in an electric current. This causes the object to act as a heating element. This effect varies with the object's shape and composition, and is sometimes utilized for cooking.
Any object containing pointed metal can create an electric arc (sparks) when microwaved. This includes cutlery, crumpled aluminium foil (though some foil used in microwave ovens is safe, see below), twist-ties containing metal wire, the metal wire carry-handles in oyster pails, or almost any metal formed into a poorly conductive foil or thin wire, or into a pointed shape. Forks are a good example: the tines of the fork respond to the electric field by producing high concentrations of electric charge at the tips. This has the effect of exceeding the dielectric breakdown of air, about 3 megavolts per meter (3×10⁶ V/m). The air forms a conductive plasma, which is visible as a spark. The plasma and the tines may then form a conductive loop, which may be a more effective antenna, resulting in a longer-lived spark. When dielectric breakdown occurs in air, some ozone and nitrogen oxides are formed, both of which are unhealthy in large quantities.
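That breakdown field implies only a modest voltage is needed across a small gap before the air ionizes. A minimal Python sketch; the 1 mm gap is an illustrative assumption.

breakdown_field_v_per_m = 3e6  # approximate dielectric strength of air, from above
gap_m = 1e-3                   # assumed spacing between fork tines

# Voltage needed across the gap to exceed the breakdown field of air.
print(f"about {breakdown_field_v_per_m * gap_m:,.0f} V across a 1 mm gap starts an arc")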
Microwaving an individual smooth metal object without pointed ends, for example, a spoon or shallow metal pan, usually does not produce sparking. Thick metal wire racks can be part of the interior design in microwave ovens (see illustration). In a similar way, the interior wall plates with perforating holes which allow light and air into the oven, and allow interior-viewing through the oven door, are all made of conductive metal formed in a safe shape.
The effect of microwaving thin metal films can be seen clearly on a Compact Disc or DVD (particularly the factory pressed type). The microwaves induce electric currents in the metal film, which heats up, melting the plastic in the disc and leaving a visible pattern of concentric and radial scars. Similarly, porcelain with thin metal films can also be destroyed or damaged by microwaving. Aluminium foil is thick enough to be used in microwave ovens as a shield against heating parts of food items, if the foil is not badly warped. When wrinkled, aluminium foil is generally unsafe in microwaves, as manipulation of the foil causes sharp bends and gaps that invite sparking. The USDA recommends that aluminium foil used as a partial food shield in microwave oven cooking cover no more than one quarter of a food object, and be carefully smoothed to eliminate sparking hazards.
Another hazard is the resonance of the magnetron tube itself. If the microwave oven is run without an object to absorb the radiation, a standing wave forms. The energy is reflected back and forth between the tube and the cooking chamber. This may cause the tube to overload and burn out. High reflected power may also cause magnetron arcing, possibly resulting in primary power fuse failure, though such a causal relationship is not easily established. Thus, dehydrated food, or food wrapped in metal which does not arc, is problematic for overload reasons, without necessarily being a fire hazard.
Certain foods such as grapes, if properly arranged, can produce an electric arc. Prolonged arcing from food carries similar risks to arcing from other sources as noted above.
Some other objects that may conduct sparks are plastic/holographic print Thermos flasks and other heat-retaining containers (such as Starbucks novelty cups) or cups with metal lining. If any bit of the metal is exposed, the entire outer shell can burst off the object or melt.
The high electric fields generated inside a microwave oven can often be illustrated by placing a radiometer or neon glow-bulb inside the cooking chamber, where the field creates a glowing plasma inside the device's low-pressure bulb.
Direct microwave exposure
Direct microwave exposure is not generally possible, as microwaves emitted by the source in a microwave oven are confined in the oven by the material out of which the oven is constructed. Furthermore, ovens are equipped with redundant safety interlocks, which remove power from the magnetron if the door is opened. This safety mechanism is required by United States federal regulations. Tests have shown confinement of the microwaves in commercially available ovens to be so nearly universal as to make routine testing unnecessary. According to the United States Food and Drug Administration's Center for Devices and Radiological Health, a U.S. federal standard limits the amount of microwaves that can leak from an oven throughout its lifetime to 5 milliwatts of microwave radiation per square centimeter, measured approximately 5 cm (2 in) from the surface of the oven. This is far below the exposure level currently considered to be harmful to human health.
The radiation produced by a microwave oven is non-ionizing. It therefore does not have the cancer risks associated with ionizing radiation such as X-rays and high-energy particles. Long-term rodent studies to assess cancer risk have so far failed to identify any carcinogenicity from microwave radiation, even with chronic exposure levels (i.e. a large fraction of life span) far larger than humans are likely to encounter from any leaking ovens. However, with the oven door open, the radiation may cause damage by heating. Microwave ovens are sold with a protective interlock so that they cannot be run when the door is open or improperly latched.
Microwaves generated in microwave ovens cease to exist once the electrical power is turned off. They do not remain in the food when the power is turned off, any more than light from an electric lamp remains in the walls and furnishings of a room when the lamp is turned off. They do not make the food or the oven radioactive. In contrast with conventional cooking, the nutritional content of some foods may be altered, but generally in a positive way, as microwaving preserves more micronutrients – see above. There is no indication of detrimental health issues associated with microwaved food.
There are, however, a few cases where people have been exposed to direct microwave radiation, either from appliance malfunction or deliberate action. This exposure generally results in physical burns to the body, as human tissue, particularly the outer fat and muscle layers, has a similar composition to some foods that are typically cooked in microwave ovens and so experiences similar dielectric heating effects when exposed to microwave electromagnetic radiation.
Chemical exposure
The use of unmarked plastics for microwave cooking raises the issue of plasticizers leaching into the food.
The plasticizers which received the most attention are bisphenol A (BPA) and phthalates, although it is unclear whether other plastic components present a toxicity risk. Other issues include melting and flammability. An alleged issue of release of dioxins into food has been dismissed as a red herring, a distraction from actual safety issues.
Some current plastic containers and food wraps are specifically designed to resist radiation from microwaves. Products may use the term "microwave safe", may carry a microwave symbol (three lines of waves, one above the other) or simply provide instructions for proper microwave oven use. Any of these is an indication that a product is suitable for microwaving when used in accordance with the directions provided.
Plastic containers can release microplastics into food when heated in microwave ovens.
Uneven heating
Microwave ovens are frequently used for reheating leftover food, and bacterial contamination may not be repressed if the microwave oven is used improperly. If safe temperature is not reached, this can result in foodborne illness, as with other reheating methods. While microwave ovens can destroy bacteria as well as conventional ovens can, they cook rapidly and may not cook as evenly, similar to frying or grilling, leading to a risk of some food regions failing to reach recommended temperatures. Therefore, a standing period after cooking to allow temperatures in the food to equalize is recommended, as well as the use of a food thermometer to verify internal temperatures.
Interference
Microwave ovens, although shielded for safety purposes, still emit low levels of microwave radiation. This is not harmful to humans, but it can sometimes interfere with Wi-Fi, Bluetooth and other devices that communicate on the 2.45 GHz band, particularly at close range.
Conventional transformer-based ovens emit in bursts over the mains cycle rather than continuously, but can still cause significant slowdowns for wireless networks many metres around the oven, whereas inverter-based ovens, which emit continuously while operating, can stop nearby networking entirely.
See also
Countertop
Electromagnetic reverberation chamber
Induction cooker
List of cooking appliances
List of home appliances
Microwave chemistry
Peryton (astronomy)
Robert V. Decareau
Thelma Pressman
Wall oven
Notes
References
External links
: Percy Spencer's original patent
Ask a Scientist Chemistry Archives , Argonne National Laboratory
Further Reading On The History Of Microwaves and Microwave Ovens
Microwave oven history from American Heritage magazine
Superheating and Microwave Ovens, University of New South Wales (includes video)
"The Microwave Oven": Short explanation of microwave oven in terms of microwave cavities and waveguides, intended for use in a class in electrical engineering
How Things Work: Microwave Ovens, David Ruzic, University of Illinois
Ovens
American inventions
Radiation effects
Products introduced in 1945
20th-century inventions
Home appliances | Microwave oven | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 9,277 | [
"Machines",
"Physical phenomena",
"Materials science",
"Physical systems",
"Radiation",
"Condensed matter physics",
"Home appliances",
"Radiation effects"
] |
58,246 | https://en.wikipedia.org/wiki/Nitrocellulose | Nitrocellulose (also known as cellulose nitrate, flash paper, flash cotton, guncotton, pyroxylin and flash string, depending on form) is a highly flammable compound formed by nitrating cellulose through exposure to a mixture of nitric acid and sulfuric acid. One of its first major uses was as guncotton, a replacement for gunpowder as propellant in firearms. It was also used to replace gunpowder as a low-order explosive in mining and other applications. In the form of collodion it was also a critical component in an early photographic emulsion, the use of which revolutionized photography in the 1860s. In the 20th century it was adapted to automobile lacquer and adhesives.
Production
The process uses a mixture of nitric acid and sulfuric acid to convert cellulose into nitrocellulose. The quality of the cellulose is important. Hemicellulose, lignin, pentosans, and mineral salts give inferior nitrocelluloses. In precise chemical terms, nitrocellulose is not a nitro compound, but a nitrate ester. The glucose repeat unit (anhydroglucose) within the cellulose chain has three OH groups, each of which can form a nitrate ester. Thus, nitrocellulose can denote mononitrocellulose, dinitrocellulose, and trinitrocellulose, or a mixture thereof. With fewer OH groups than the parent cellulose, nitrocelluloses do not aggregate by hydrogen bonding. The overarching consequence is that the nitrocellulose is soluble in organic solvents such as acetone and esters; e.g., ethyl acetate, methyl acetate, ethyl carbonate. Most lacquers are prepared from the dinitrate, whereas explosives are mainly the trinitrate.
The chemical equation for the formation of the trinitrate is
3 HNO3 + C6H7(OH)3O2 → C6H7(ONO2)3O2 + 3 H2O
The yields are about 85%, with losses attributed to complete oxidation of the cellulose to oxalic acid.
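The degree of nitration determines the nitrogen content that separates coatings-grade from explosives-grade material (the 12% and 13% thresholds discussed later in this article). The following is a minimal sketch of that relationship, assuming the anhydroglucose repeat unit C6H10O5 (162.14 g/mol) and that each nitration replaces an –OH hydrogen with –NO2, a net gain of about 45 g/mol per nitrate group:

```python
# Nitrogen mass fraction of nitrocellulose versus degree of substitution x
# (nitrate esters per anhydroglucose repeat unit, 0 <= x <= 3).
M_REPEAT_UNIT = 162.14  # g/mol, C6H10O5 (assumed repeat unit)
M_NITRATION = 44.998    # g/mol gained per group: NO2 (46.005) minus H (1.008)
M_N = 14.007            # g/mol, nitrogen

def nitrogen_percent(x):
    """Nitrogen mass percent for x nitrate groups per repeat unit."""
    return 100.0 * x * M_N / (M_REPEAT_UNIT + x * M_NITRATION)

for x in (1, 2, 3):
    print(f"x = {x}: {nitrogen_percent(x):.2f}% N")
# The trinitrate (x = 3) gives about 14.1% N; the ~12% solubility limit
# corresponds to x of roughly 2.3, and the >13% guncotton grade to x above
# about 2.6.
```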
Use
The principal uses of cellulose nitrate are the production of lacquers and coatings, explosives, and celluloid.
In terms of lacquers and coatings, nitrocellulose dissolves readily in organic solvents, which upon evaporation leave a colorless, transparent, flexible film. Nitrocellulose lacquers have been used as a finish on furniture and musical instruments.
Guncotton, dissolved at about 25% in acetone, forms a lacquer used in preliminary stages of wood finishing to develop a hard finish with a deep lustre. It is normally the first coat applied, then it is sanded and followed by other coatings that bond to it.
Nail polish contains nitrocellulose, as it is inexpensive, dries quickly to a hard film, and does not damage skin.
The explosive applications are diverse and nitrate content is typically higher for propellant applications than for coatings. For space flight, nitrocellulose was used by Copenhagen Suborbitals on several missions as a means of jettisoning components of the rocket/space capsule and deploying recovery systems. However, after several missions and flights, it proved not to have the desired explosive properties in a near vacuum environment. In 2014, the Philae comet lander failed to deploy its harpoons because its 0.3 grams of nitrocellulose propulsion charges failed to fire during the landing.
Other uses
Collodion, a solution of nitrocellulose, is used today in topical skin applications, such as liquid skin and in the application of salicylic acid, the active ingredient in Compound W wart remover.
Laboratory uses
Membrane filters made of a mesh of nitrocellulose threads with various porosities are used in laboratory procedures for particle retention and cell capture in liquid or gaseous solutions and, reversely, obtaining particle-free filtrates.
A nitrocellulose slide, nitrocellulose membrane, or nitrocellulose paper is a sticky membrane used for immobilizing nucleic acids in Southern blots and northern blots. It is also used for immobilization of proteins in western blots and atomic force microscopy for its nonspecific affinity for amino acids. Nitrocellulose is widely used as support in diagnostic tests where antigen-antibody binding occurs; e.g., pregnancy tests, U-albumin tests, and CRP tests. Glycine and chloride ions make protein transfer more efficient.
Radon tests for alpha track etches use nitrocellulose.
Adolph Noé developed a method of peeling coal balls using nitrocellulose.
It is used to coat playing cards and to bind staples together in office staplers.
Hobbies
In 1846, nitrated cellulose was found to be soluble in ether and alcohol. The solution was named collodion and was soon used as a dressing for wounds.
In 1851, Frederick Scott Archer invented the wet collodion process as a replacement for albumen in early photographic emulsions, binding light-sensitive silver halides to a glass plate.
Magicians' flash paper are sheets of paper consisting of pure nitrocellulose, which burn almost instantly with a bright flash, leaving no ash or smoke.
As a medium for cryptographic one-time pads, they make the disposal of the pad complete, secure, and efficient.
Nitrocellulose lacquer is spin-coated onto aluminium or glass discs, then a groove is cut with a lathe, to make one-off phonograph records, used as masters for pressing or for play in dance clubs. They are referred to as acetate discs.
Depending on the manufacturing process, nitrocellulose is esterified to varying degrees. Table tennis balls, guitar picks, and some photographic films have fairly low esterification levels and burn comparatively slowly with some charred residue.
Historical uses
Early work on nitration of cellulose
In 1832 Henri Braconnot discovered that nitric acid, when combined with starch or wood fibers, would produce a lightweight combustible explosive material, which he named xyloïdine. A few years later in 1838, another French chemist, Théophile-Jules Pelouze (teacher of Ascanio Sobrero and Alfred Nobel), treated paper and cardboard in the same way. Jean-Baptiste Dumas obtained a similar material, which he called nitramidine.
Guncotton
Around 1846 Christian Friedrich Schönbein, a German-Swiss chemist, discovered a more practical formulation. As he was working in the kitchen of his home in Basel, he spilled a mixture of nitric acid (HNO3) and sulfuric acid (H2SO4) on the kitchen table. He reached for the nearest cloth, a cotton apron, and wiped it up. He hung the apron on the stove door to dry, and as soon as it was dry, a flash occurred as the apron ignited. His preparation method was the first to be widely used. The method was to immerse one part of fine cotton in 15 parts of an equal blend of sulfuric acid and nitric acid. After two minutes, the cotton was removed and washed in cold water to set the esterification level and to remove all acid residue. The cotton was then slowly dried at a temperature below 40 °C (104 °F). Schönbein collaborated with the Frankfurt professor Rudolf Christian Böttger, who had discovered the process independently in the same year.
By coincidence, a third chemist, the Brunswick professor F. J. Otto had also produced guncotton in 1846 and was the first to publish the process, much to the disappointment of Schönbein and Böttger.
The patent rights for the manufacture of guncotton were obtained by John Hall & Son in 1846, and industrial manufacture of the explosive began at a purpose-built factory at Marsh Works in Faversham, Kent, a year later. The manufacturing process was not properly understood and few safety measures were put in place. A serious explosion in July that killed almost two dozen workers resulted in the immediate closure of the plant. Guncotton manufacture ceased for over 15 years until a safer procedure could be developed.
The British chemist Frederick Augustus Abel developed the first safe process for guncotton manufacture, which he patented in 1865. The washing and drying times of the nitrocellulose were both extended to 48 hours and repeated eight times over. The acid mixture was changed to two parts sulfuric acid to one part nitric. Nitration can be controlled by adjusting acid concentrations and reaction temperature. Nitrocellulose is soluble in a mixture of ethanol and ether until nitrogen concentration exceeds 12%. Soluble nitrocellulose, or a solution thereof, is sometimes called collodion.
Guncotton containing more than 13% nitrogen (sometimes called insoluble nitrocellulose) was prepared by prolonged exposure to hot, concentrated acids for limited use as a blasting explosive or for warheads of underwater weapons such as naval mines and torpedoes. Safe and sustained production of guncotton began at the Waltham Abbey Royal Gunpowder Mills in the 1860s, and the material rapidly became the dominant explosive, becoming the standard for military warheads, although it remained too potent to be used as a propellant. More-stable and slower-burning collodion mixtures were eventually prepared using less concentrated acids at lower temperatures for smokeless powder in firearms. The first practical smokeless powder made from nitrocellulose, for firearms and artillery ammunition, was invented by French chemist Paul Vieille in 1884.
Jules Verne viewed the development of guncotton with optimism. He referred to the substance several times in his novels. His adventurers carried firearms employing this substance. In his From the Earth to the Moon, guncotton was used to launch a projectile into space.
Because of their fluffy and nearly white appearance, nitrocellulose products are often referred to as cottons, e.g. lacquer cotton, celluloid cotton, and gun cotton.
Guncotton was originally made from cotton (as the source of cellulose) but contemporary methods use highly processed cellulose from wood pulp. While guncotton is dangerous to store, the hazards it presents can be minimized by storing it dampened with various liquids, such as alcohol. For this reason, accounts of guncotton usage dating from the early 20th century refer to "wet guncotton."
The power of guncotton made it suitable for blasting. As a projectile driver, it had around six times the gas generation of an equal volume of black powder and produced less smoke and less heating.
Artillery shells filled with gun cotton were widely used during the American Civil War, and its use was one of the reasons the conflict was seen as the "first modern war." In combination with breech-loading artillery, such high explosive shells could cause greater damage than previous solid cannonballs.
During the first World War, British authorities were slow to introduce grenades, with soldiers at the front improvising by filling ration tin cans with gun cotton, scrap and a basic fuse.
Further research indicated the importance of washing the acidified cotton. Unwashed nitrocellulose (sometimes called pyrocellulose) may spontaneously ignite and explode at room temperature, as the evaporation of water results in the concentration of unreacted acid.
Film
In 1855, the first human-made plastic, nitrocellulose (branded Parkesine, patented in 1862), was created by Alexander Parkes from cellulose treated with nitric acid and a solvent. In 1868, American inventor John Wesley Hyatt developed a plastic material he named Celluloid, improving on Parkes' invention by plasticizing the nitrocellulose with camphor so that it could be processed into a photographic film. This was used commercially as "celluloid", a highly flammable plastic that until the mid-20th century formed the basis for lacquers and photographic film.
On May 2, 1887, Hannibal Goodwin filed a patent for "a photographic pellicle and process of producing same ... especially in connection with roller cameras", but the patent was not granted until September 13, 1898. In the meantime, George Eastman had already started production of roll-film using his own process.
Nitrocellulose was used as the first flexible film base, beginning with Eastman Kodak products in August 1889. Camphor is used as a plasticizer for nitrocellulose film, often called nitrate film. Goodwin's patent was sold to Ansco, which successfully sued Eastman Kodak for infringement of the patent; in 1914, Goodwin Film was awarded $5,000,000.
Nitrate film fires
Disastrous fires related to celluloid or "nitrate film" became regular occurrences in the motion picture industry throughout the silent era and for many years after the arrival of sound film. Projector fires and spontaneous combustion of nitrate footage stored in studio vaults and in other structures were often blamed during the early to mid 20th century for destroying or heavily damaging cinemas, inflicting many serious injuries and deaths, and for reducing to ashes the master negatives and original prints of tens of thousands of screen titles, turning many of them into lost films. Even when nitrate stock did not start the blaze, flames from other sources spread to large nearby film collections, producing intense and highly destructive fires.
In 1914, the same year that Goodwin Film was awarded $5,000,000 from Kodak for patent infringement, nitrate film fires incinerated a significant portion of the United States' early cinematic history. In that year alone, five very destructive fires occurred at four major studios and a film-processing plant. Millions of feet of film burned on March 19 at the Eclair Moving Picture Company in Fort Lee, New Jersey. Later that same month, many more reels and film cans of negatives and prints also burned at Edison Studios in New York City, in the Bronx. On May 13, a fire at Universal Pictures' Colonial Hall "film factory" in Manhattan consumed another extensive collection. Yet again, on June 13 in Philadelphia, a fire and a series of explosions ignited inside the 186-square-meter (2,000-square-foot) film vault of the Lubin Manufacturing Company and quickly wiped out virtually all of that studio's pre-1914 catalogue. Then a second fire hit the Edison Company at another location on December 9, at its film-processing complex in West Orange, New Jersey. That catastrophic fire started inside a film-inspection building and caused over $7,000,000 in property damages. Even after film technology changed, archives of older films remained vulnerable; the 1965 MGM vault fire burned many films that were decades old.
The use of volatile nitrocellulose film for motion pictures led many cinemas to fireproof their projection rooms with wall coverings made of asbestos. Those additions intended to prevent or at least delay the migration of flames beyond the projection areas. A training film for projectionists included footage of a controlled ignition of a reel of nitrate film, which continued to burn even when fully submerged in water. Once burning, it is extremely difficult to extinguish. Unlike most other flammable materials, nitrocellulose does not need a source of air to continue burning, since it contains sufficient oxygen within its molecular structure to sustain a flame. For this reason, immersing burning film in water may not extinguish it, and could actually increase the amount of smoke produced. Owing to public safety precautions, the United Kingdom's Health and Safety Executive to this day forbids transportation of nitrate film by post or public transit, or disposal with household refuse.
Cinema fires caused by the ignition of nitrocellulose film stock commonly occurred as well. In Ireland in 1926, it was blamed for the Dromcolliher cinema tragedy in County Limerick in which 48 people died. Then in 1929 at the Glen Cinema in Paisley, Scotland, a film-related fire killed 69 children. Today, nitrate film projection is rare, normally highly regulated, and requires extensive precautions, including extra health-and-safety training for projectionists. A special projector certified to run nitrate films has many modifications, among them the chambering of the feed and takeup reels in thick metal covers with small slits to allow the film to run through them. The projector is additionally modified to accommodate several fire extinguishers with nozzles aimed at the film gate. The extinguishers automatically trigger if a piece of film near the gate starts to burn. While this triggering would likely damage or destroy a significant portion of the projector's components, it would contain a fire and prevent far greater damage. Projection rooms may also be required to have automatic metal covers for the projection windows, preventing the spread of fire to the auditorium. Today, the Dryden Theatre at the George Eastman Museum is one of a few theaters in the world that is capable of safely projecting nitrate films and regularly screens them to the public. The BFI Southbank in London is the only cinema in the United Kingdom licensed to show nitrate film.
The use of nitrate film and its fiery potential were certainly not issues limited to the realm of motion pictures or to commercial still photography. The film was also used for many years in medicine, where its hazardous nature was most acute, especially in its application to X-ray photography. In 1929, several tons of stored X-ray film were ignited by steam from a broken heating pipe at the Cleveland Clinic in Ohio. That tragedy claimed 123 lives during the fire and additional fatalities several days later, when hospitalized victims died due to inhaling excessive amounts of smoke from the burning film, which was laced with toxic gases such as sulfur dioxide and hydrogen cyanide. Related fires in other medical facilities prompted the growing disuse of nitrocellulose stock for X-rays by 1933, nearly two decades before its use was discontinued for motion-picture films in favour of cellulose acetate film, more commonly known as "safety film".
Nitrocellulose decomposition and new "safety" stocks
Nitrocellulose was found to gradually decompose, releasing nitric acid and further catalyzing the decomposition (eventually into a flammable powder). Decades later, storage at low temperatures was discovered as a means of delaying these reactions indefinitely. Many films produced during the early 20th century were lost through this accelerating, self-catalyzed disintegration or through studio warehouse fires, and many others were deliberately destroyed specifically to avoid the fire risk. Salvaging old films is a major problem for film archivists (see film preservation).
Nitrocellulose film base manufactured by Kodak can be identified by the presence of the word "nitrate" in dark letters along one edge; the word only in clear letters on a dark background indicates derivation from a nitrate base original negative or projection print, but the film in hand itself may be a later print or copy negative, made on safety film. Acetate film manufactured during the era when nitrate films were still in use was marked "Safety" or "Safety Film" along one edge in dark letters. 8, 9.5, and 16 mm film stocks, intended for amateur and other nontheatrical use, were never manufactured with a nitrate base in the west, but rumors exist of 16 mm nitrate film having been produced in the former Soviet Union and China.
Nitrate dominated the market for professional-use 35 mm motion picture film from the industry's origins to the early 1950s. While cellulose acetate-based safety film, notably cellulose diacetate and cellulose acetate propionate, was produced in the gauge for small-scale use in niche applications (such as printing advertisements and other short films to enable them to be sent through the mails without the need for fire safety precautions), the early generations of safety film base had two major disadvantages relative to nitrate: it was much more expensive to manufacture, and considerably less durable in repeated projection. The cost of the safety precautions associated with the use of nitrate was significantly lower than the cost of using any of the safety bases available before 1948. These drawbacks were eventually overcome with the launch of cellulose triacetate base film by Eastman Kodak in 1948. Cellulose triacetate superseded nitrate as the film industry's mainstay base very quickly. While Kodak had discontinued some nitrate film stocks earlier, it stopped producing various nitrate roll films in 1950 and ceased production of nitrate 35 mm motion picture film in 1951.
The crucial advantage cellulose triacetate had over nitrate was that it was no more of a fire risk than paper (the stock is often referred to as "non-flam": this is true—but it is combustible, just not in as volatile or as dangerous a way as nitrate), while it almost matched the cost and durability of nitrate. It remained in almost exclusive use in all film gauges until the 1980s, when polyester/PET film began to supersede it for intermediate and release printing.
Polyester is much more resistant to polymer degradation than either nitrate or triacetate. Although triacetate does not decompose in as dangerous a way as nitrate does, it is still subject to a process known as deacetylation, often nicknamed "vinegar syndrome" (due to the acetic acid smell of decomposing film) by archivists, which causes the film to shrink, deform, become brittle and eventually unusable. PET, like cellulose mononitrate, is less prone to stretching than other available plastics. By the late 1990s, polyester had almost entirely superseded triacetate for the production of intermediate elements and release prints.
Triacetate remains in use for most camera negative stocks because it can be "invisibly" spliced using solvents during negative assembly, while polyester film is usually spliced using adhesive tape patches, which leave visible marks in the frame area. However, ultrasonic splicing in the frame line area can be invisible. Also, polyester film is so strong, it will not break under tension and may cause serious damage to expensive camera or projector mechanisms in the event of a film jam, whereas triacetate film breaks easily, reducing the risk of damage. Many were opposed to the use of polyester for release prints for this reason, and because ultrasonic splicers are very expensive, beyond the budgets of many smaller theaters. In practice, though, this has not proved to be as much of a problem as was feared. Rather, with the increased use of automated long-play systems in cinemas, the greater strength of polyester has been a significant advantage in lessening the risk of a film performance being interrupted by a film break.
Despite its self-oxidizing hazards, nitrate is still regarded highly as the stock is more transparent than replacement stocks, and older films used denser silver in the emulsion. The combination results in a notably more luminous image with a high contrast ratio.
Fabric
The solubility of nitrocellulose was the basis for the first "artificial silk" by Georges Audemars in 1855, which he called "Rayon". However, Hilaire de Chardonnet was the first to patent a nitrocellulose fiber marketed as "artificial silk" at the Paris Exhibition of 1889. Commercial production started in 1891, but the result was flammable and more expensive than cellulose acetate or cuprammonium rayon. Because of this, production ceased early in the 1900s. Nitrocellulose was briefly known as "mother-in-law silk".
Frank Hastings Griffin invented the double-godet, a special stretch-spinning process that changed artificial silk to rayon, rendering it usable in many industrial products such as tire cords and clothing. Nathan Rosenstein invented the "spunize process" by which he turned rayon from a hard fiber to a fabric. This allowed rayon to become a popular raw material in textiles.
Coatings
Nitrocellulose lacquer manufactured by (among others) DuPont, was the primary material for painting automobiles for many years. Durability of finish, complexities of "multiple stage" modern finishes, and other factors including environmental regulation led manufacturers to choose newer technologies. It remained the favorite of hobbyists for both historical reasons and for the ease with which a professional finish can be obtained. Most automobile "touch up" paints are still made from lacquer because of its fast drying, easy application, and superior adhesion properties – regardless of the material used for the original finish. Guitars sometimes shared color codes with current automobiles. It fell out of favor for mass production use for a number of reasons including environmental regulation and the cost of application vs. "poly" finishes. However, Gibson still use nitrocellulose lacquers on all of their guitars, as well as Fender when reproducing historically accurate guitars. The nitrocellulose lacquer yellows and cracks over time, and custom shops will reproduce this aging to make instruments appear vintage. Guitars made by smaller shops (luthiers) also often use "nitro" as it has an almost mythical status among guitarists.
Hazards
Because of its explosive nature, not all applications of nitrocellulose were successful. In 1869, with elephants having been poached to near extinction, the billiards industry offered a US$10,000 prize to whoever came up with the best replacement for ivory billiard balls. John Wesley Hyatt created the winning replacement using a new material he had invented, camphored nitrocellulose, the first thermoplastic, better known as celluloid. The invention enjoyed a brief popularity, but the Hyatt balls were extremely flammable, and sometimes portions of the outer shell would explode upon impact. An owner of a billiard saloon in Colorado wrote to Hyatt about the explosive tendencies, saying that he did not mind very much personally but for the fact that every man in his saloon immediately pulled a gun at the sound. The process used by Hyatt to manufacture the billiard balls, patented in 1881, involved placing the mass of nitrocellulose in a rubber bag, which was then placed in a cylinder of liquid and heated. Pressure was applied to the liquid in the cylinder, compressing the nitrocellulose mass into a sphere as the heat vaporized the solvents. The ball was then cooled and turned to make a uniform sphere. In light of the explosive results, this process was called the "Hyatt gun method".
An overheated container of dry nitrocellulose is believed to be the initial cause of the 2015 Tianjin explosions.
See also
Pentaerythritol tetranitrate (PETN), a related explosive.
Cordite
Nitroglycerine
Nitrostarch
RE factor
References
External links
Gun Cotton at The Periodic Table of Videos (University of Nottingham)
Nitrocellulose Paper Video (aka:Flash paper)
Cellulose, nitrate (Nitrocellulose)—ChemSub Online
How To Make Nitro-Cellulose That Works
1846 introductions
Nitrate esters
Articles containing video clips
Cellulose
Cotton
Explosive chemicals
Film and video technology
Firearm propellants
Photographic chemicals
Storage media
Transparent materials
Explosive polymers | Nitrocellulose | [
"Physics",
"Chemistry"
] | 5,752 | [
"Physical phenomena",
"Optical phenomena",
"Materials",
"Transparent materials",
"Explosive chemicals",
"Matter"
] |
58,251 | https://en.wikipedia.org/wiki/Nickel%E2%80%93metal%20hydride%20battery | A nickel–metal hydride battery (NiMH or Ni–MH) is a type of rechargeable battery. The chemical reaction at the positive electrode is similar to that of the nickel–cadmium cell (NiCd), with both using nickel oxide hydroxide (NiOOH). However, the negative electrodes use a hydrogen-absorbing alloy instead of cadmium. NiMH batteries can have two to three times the capacity of NiCd batteries of the same size, with significantly higher energy density, although only about half that of lithium-ion batteries.
They are typically used as a substitute for similarly shaped non-rechargeable alkaline batteries, as they feature a slightly lower but generally compatible cell voltage and are less prone to leaking.
History
Work on NiMH batteries began at the Battelle-Geneva Research Center following the technology's invention in 1967. It was based on sintered Ti2Ni + TiNix alloys and NiOOH electrodes. Development was sponsored over nearly two decades by Daimler-Benz and by Volkswagen AG within Deutsche Automobilgesellschaft, now a subsidiary of Daimler AG. The batteries' specific energy reached 50 W·h/kg (180 kJ/kg), specific power up to 1000 W/kg and a life of 500 charge cycles (at 100% depth of discharge). Patent applications were filed in European countries (priority: Switzerland), the United States, and Japan. The patents transferred to Daimler-Benz.
Interest grew in the 1970s with the commercialisation of the nickel–hydrogen battery for satellite applications. Hydride technology promised an alternative, less bulky way to store the hydrogen. Research carried out by Philips Laboratories and France's CNRS developed new high-energy hydride alloys incorporating rare-earth metals for the negative electrode. However, these suffered from alloy instability in alkaline electrolyte and consequently insufficient cycle life. In 1987, Willems and Buschow demonstrated a successful battery based on this approach (using the alloy La0.8Nd0.2Ni2.5Co2.4Si0.1), which kept 84% of its charge capacity after 4000 charge-discharge cycles. More economically viable alloys using mischmetal instead of lanthanum were soon developed. Modern NiMH cells were based on this design. The first consumer-grade NiMH cells became commercially available in 1989.
In 1998, Stanford Ovshinsky at Ovonic Battery Co., which had been working on MH-NiOOH batteries since the mid-1980s, improved the Ti–Ni alloy structure and composition and patented its innovations.
In 2008, more than two million hybrid cars worldwide were manufactured with NiMH batteries.
In the European Union, due to the Battery Directive, nickel–metal hydride batteries replaced Ni–Cd batteries for portable consumer use.
About 22% of portable rechargeable batteries sold in Japan in 2010 were NiMH. In Switzerland in 2009, the equivalent statistic was approximately 60%. This percentage has fallen over time due to the increase in manufacture of lithium-ion batteries: in 2000, almost half of all portable rechargeable batteries sold in Japan were NiMH.
In 2015 BASF produced a modified microstructure that helped make NiMH batteries more durable, in turn allowing changes to the cell design that saved considerable weight, allowing the specific energy to reach 140 watt-hours per kilogram.
Electrochemistry
The negative electrode reaction occurring in a NiMH cell is
H2O + M + e− ⇌ OH− + MH
On the positive electrode, nickel oxyhydroxide, NiO(OH), is formed:
Ni(OH)2 + OH− ⇌ NiO(OH) + H2O + e−
The reactions proceed left to right during charge and the opposite during discharge. The metal M in the negative electrode of a NiMH cell is an intermetallic compound. Many different compounds have been developed for this application, but those in current use fall into two classes. The most common is AB5, where A is a rare-earth mixture of lanthanum, cerium, neodymium, praseodymium, and B is nickel, cobalt, manganese, or aluminium. Some cells use higher-capacity negative electrode materials based on AB2 compounds, where A is titanium or vanadium, and B is zirconium or nickel, modified with chromium, cobalt, iron, or manganese.
NiMH cells have an alkaline electrolyte, usually potassium hydroxide. The positive electrode is nickel hydroxide, and the negative electrode is hydrogen in the form of an interstitial metal hydride. Hydrophilic polyolefin nonwovens are used for separation.
Charge
When fast-charging, it is advisable to charge the NiMH cells with a smart battery charger to avoid overcharging, which can damage cells.
Trickle charging
The simplest of the safe charging methods is with a fixed low current, with or without a timer. Most manufacturers claim that overcharging is safe at very low currents, below 0.1 C (C/10) (where C is the current equivalent to the capacity of the battery divided by one hour). The Panasonic NiMH charging manual warns that overcharging for long enough can damage a battery and suggests limiting the total charging time to 10–20 hours.
Duracell further suggests that a trickle charge at C/300 can be used for batteries that must be kept in a fully charged state. Some chargers do this after the charge cycle, to offset natural self-discharge. A similar approach is suggested by Energizer, which indicates that self-catalysis can recombine gas formed at the electrodes for charge rates up to C/10. This leads to cell heating. The company recommends C/30 or C/40 for indefinite applications where long life is important. This is the approach taken in emergency lighting applications, where the design remains essentially the same as in older NiCd units, except for an increase in the trickle-charging resistor value.
Panasonic's handbook recommends that NiMH batteries on standby be charged by a lower duty cycle approach, where a pulse of a higher current is used whenever the battery's voltage drops below 1.3 V. This can extend battery life and use less energy.
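The C-rate notation used in these recommendations maps directly onto currents and nominal charge times. A minimal sketch, assuming a hypothetical 2000 mAh AA cell (the capacity value is an assumption for illustration):

```python
# C-rate arithmetic for the charging regimes described above,
# assuming a hypothetical 2000 mAh (2.0 Ah) NiMH cell.
CAPACITY_AH = 2.0

def charge_current(c_rate):
    """Charging current in amperes for a given C-rate (0.1 means C/10)."""
    return CAPACITY_AH * c_rate

for label, rate in [("1C fast charge", 1.0),
                    ("C/10 safe-overcharge limit", 0.1),
                    ("C/40 standby trickle", 1 / 40),
                    ("C/300 maintenance trickle", 1 / 300)]:
    i = charge_current(rate)
    print(f"{label}: {1000 * i:.1f} mA, nominal full charge in {CAPACITY_AH / i:.0f} h")
```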
ΔV charging method
To prevent cell damage, fast chargers must terminate their charge cycle before overcharging occurs. One method is to monitor the change of voltage with time. When the battery is fully charged, the voltage across its terminals drops slightly. The charger can detect this and stop charging. This method is often used with nickel–cadmium cells, which display a large voltage drop at full charge. However, the voltage drop is much less pronounced for NiMH and can be non-existent at low charge rates, which can make the approach unreliable.
Another option is to monitor the change of voltage with respect to time and stop when this becomes zero, but this risks premature cutoffs. With this method, a much higher charging rate can be used than with a trickle charge, up to 1 C. At this charge rate, Panasonic recommends to terminate charging when the voltage drops 5–10 mV per cell from the peak voltage. Since this method measures the voltage across the battery, a constant-current (rather than a constant-voltage) charging circuit is used.
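A minimal control-loop sketch of the −ΔV termination described above. The hardware hooks read_cell_voltage and set_charge_current are hypothetical placeholders, the 10 mV cutoff follows the per-cell drop quoted above, and a real charger would also enforce temperature limits:

```python
import time

DELTA_V_CUTOFF = 0.010   # V per cell: terminate on a 5-10 mV drop from peak
SAMPLE_PERIOD_S = 1.0

def charge_delta_v(read_cell_voltage, set_charge_current, capacity_ah,
                   timeout_s=2 * 3600):
    """Constant-current 1C charge terminated when the cell voltage falls
    DELTA_V_CUTOFF below its running peak (the -dV signature of full charge)."""
    set_charge_current(1.0 * capacity_ah)    # 1C constant current
    peak_v, elapsed = 0.0, 0.0
    while elapsed < timeout_s:               # backstop timer
        v = read_cell_voltage()
        peak_v = max(peak_v, v)
        if peak_v - v >= DELTA_V_CUTOFF:     # voltage has rolled over its peak
            break
        time.sleep(SAMPLE_PERIOD_S)
        elapsed += SAMPLE_PERIOD_S
    set_charge_current(0.0)                  # stop (or drop to a trickle rate)
```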
ΔT charging method
The temperature-change method is similar in principle to the ΔV method. Because the charging voltage is nearly constant, constant-current charging delivers energy at a near-constant rate. When the cell is not fully charged, most of this energy is converted to chemical energy. However, when the cell reaches full charge, most of the charging energy is converted to heat. This increases the rate of change of battery temperature, which can be detected by a sensor such as a thermistor. Both Panasonic and Duracell suggest a maximal rate of temperature increase of 1 °C per minute. Using a temperature sensor allows an absolute temperature cutoff, which Duracell suggests at 60 °C. With both the ΔT and the ΔV charging methods, both manufacturers recommend a further period of trickle charging to follow the initial rapid charge.
Safety
A resettable fuse in series with the cell, particularly of the bimetallic strip type, increases safety. This fuse opens if either the current or the temperature gets too high.
Modern NiMH cells contain catalysts to handle gases produced by over-charging:
2 H2 + O2 → 2 H2O (in the presence of a catalyst)
However, this only works with overcharging currents of up to 0.1 C (that is, nominal capacity divided by ten hours). This reaction causes batteries to heat, ending the charging process.
A method for very rapid charging called in-cell charge control involves an internal pressure switch in the cell, which disconnects the charging current in the event of overpressure.
One inherent risk with NiMH chemistry is that overcharging causes hydrogen gas to form, potentially rupturing the cell. Therefore, cells have a vent to release the gas in the event of serious overcharging.
NiMH batteries are made of environmentally friendly materials. The batteries contain only mildly toxic substances and are recyclable.
Loss of capacity
Voltage depression (often mistakenly attributed to the memory effect) from repeated partial discharge can occur, but is reversible with a few full discharge/charge cycles.
Discharge
A fully charged cell supplies an average 1.25 V/cell during discharge, declining to about 1.0–1.1 V/cell (further discharge may cause permanent damage in the case of multi-cell packs, due to polarity reversal of the weakest cell). Under a light load (0.5 amperes), the starting voltage of a freshly charged AA NiMH cell in good condition is about 1.4 volts.
Over-discharge
Complete discharge of multi-cell packs can cause reverse polarity in one or more cells, which can permanently damage them. This situation can occur in the common arrangement of four AA cells in series, where one cell completely discharges before the others due to small differences in capacity among the cells. When this happens, the good cells start to drive the discharged cell into reverse polarity (i.e. positive anode and negative cathode). Some cameras, GPS receivers and PDAs detect the safe end-of-discharge voltage of the series cells and perform an auto-shutdown, but devices such as flashlights and some toys do not.
Irreversible damage from polarity reversal is a particular danger, even when a low voltage-threshold cutout is employed, when the cells vary in temperature. This is because capacity significantly declines as the cells are cooled. This results in a lower voltage under load of the colder cells.
Self-discharge
Historically, NiMH cells have had a somewhat higher self-discharge rate (equivalent to internal leakage) than NiCd cells. The self-discharge rate varies greatly with temperature, where lower storage temperature leads to slower discharge and longer battery life. Self-discharge is highest on the first day after charging and then stabilizes at a lower daily rate at room temperature; at elevated temperatures it is approximately three times as high.
Low self-discharge
The low–self-discharge nickel–metal hydride battery (LSD NiMH) has a significantly lower rate of self-discharge. The innovation was introduced in 2005 by Sanyo, branded Eneloop. By using improvements to electrode separator, positive electrode, and other components, manufacturers claim the cells retain 70–85% of their capacity when stored for one year at room temperature, compared to about half for normal NiMH batteries. They are otherwise similar to standard NiMH batteries, and can be charged in standard NiMH chargers. These cells are marketed as "hybrid", "ready-to-use" or "pre-charged" rechargeables. Retention of charge depends in large part on the battery's leakage resistance (the higher the better), and on its physical size and charge capacity.
Separators keep the two electrodes apart to slow electrical discharge while allowing the transport of ionic charge carriers that close the circuit during the passage of current. High-quality separators are critical for battery performance.
The self-discharge rate depends upon separator thickness; thicker separators reduce self-discharge, but also reduce capacity as they leave less space for active components, and thin separators lead to higher self-discharge. Some batteries may have overcome this tradeoff by using more precisely manufactured thin separators, and a sulfonated polyolefin separator, an improvement over the hydrophilic polyolefin based on ethylene vinyl alcohol.
Low-self-discharge cells have somewhat lower capacity than otherwise equivalent NiMH cells because of the larger volume of the separator. The highest-capacity low-self-discharge AA cells have 2500 mAh capacity, compared to 2700 mAh for high-capacity AA NiMH cells.
Common methods to improve self-discharge include: use of a sulfonated separator (causing removal of N-containing compounds); use of an acrylic-acid-grafted PP separator (causing reduction in Al- and Mn-debris formation in the separator); removal of Co and Mn in the A2B7 MH alloy (causing reduction in debris formation in the separator); increase of the amount of electrolyte (causing reduction in hydrogen diffusion in the electrolyte); removal of Cu-containing components (causing reduction in micro-shorts); PTFE coating on the positive electrode (causing suppression of the reaction between NiOOH and H2); CMC solution dipping (causing suppression of oxygen evolution); micro-encapsulation of Cu on the MH alloy (causing a decrease in H2 released from the MH alloy); Ni–B alloy coating on the MH alloy (causing formation of a protection layer); alkaline treatment of the negative electrode (causing reduction of leach-out of Mn and Al); addition of LiOH and NaOH to the electrolyte (causing reduction in the electrolyte's corrosion capabilities); and addition of Al2(SO4)3 to the electrolyte (causing reduction in MH alloy corrosion). Most of these improvements have no or negligible effect on cost; some increase cost modestly.
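The gap between standard and low-self-discharge cells compounds daily. A rough sketch, using assumed per-day loss rates for comparison (real rates vary with cell and temperature, as noted above):

```python
# Compounded self-discharge during storage, for assumed daily loss rates.
def retention(days, daily_loss):
    """Fraction of charge remaining after `days` at a constant daily loss."""
    return (1.0 - daily_loss) ** days

for label, rate in [("standard NiMH, 1.0%/day (assumed)", 0.010),
                    ("LSD NiMH, 0.05%/day (assumed)", 0.0005)]:
    print(f"{label}: {100 * retention(365, rate):.0f}% after one year")
# A constant 0.05%/day loss leaves ~83% after a year, consistent with the
# 70-85% one-year retention claimed for low-self-discharge cells.
```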
Compared to other battery types
Alkaline batteries
NiMH cells are often used in digital cameras and other high-drain devices, where over the duration of single-charge use they outperform primary (such as alkaline) batteries.
NiMH cells are advantageous for high-current-drain applications compared to alkaline batteries, largely due to their lower internal resistance. Typical alkaline AA-size batteries, which offer approximately 2.6 Ah capacity at low current demand (25 mA), provide only 1.3 Ah capacity with a 500 mA load. Digital cameras with LCDs and flashlights can draw over 1 A, quickly depleting them. NiMH cells can deliver these current levels without similar loss of capacity.
Devices that were designed to operate using primary alkaline chemistry (or zinc-carbon/chloride) cells may not function with NiMH cells. However, most devices compensate for the voltage drop of an alkaline battery as it discharges down to about 1 volt. Low internal resistance allows NiMH cells to deliver a nearly constant voltage until they are almost completely discharged. Thus battery-level indicators designed to read alkaline cells overstate the remaining charge when used with NiMH cells, as the voltage of alkaline cells decreases steadily during most of the discharge cycle.
Lithium-ion batteries
Lithium-ion batteries can deliver extremely high power and have a higher specific energy than nickel–metal hydride batteries, but they were originally significantly more expensive. The cost of lithium batteries fell drastically during the 2010s and many small consumer devices now have non-consumer-replaceable lithium batteries as a result.
Lithium batteries produce a higher voltage (3.2–3.7 V nominal), and are thus not a drop-in replacement for AA (alkaline or NiMH) batteries without circuitry to reduce voltage. Although a single lithium cell will typically provide ideal power to replace three NiMH cells, the different form factor means that the device still needs modification.
Lead batteries
NiMH batteries can easily be made smaller and lighter than lead-acid batteries and have completely replaced them in small devices. However, lead-acid batteries can deliver very high currents at low cost, making them more suitable for starter motors in combustion vehicles.
Nickel–metal hydride batteries at one point constituted three percent of the battery market.
Applications
Consumer electronics
NiMH batteries have replaced NiCd for many roles, notably small rechargeable batteries. NiMH batteries are commonly available in AA (penlight-size) batteries. These have nominal charge capacities (C) of 1.1–2.8 Ah at 1.2 V, measured at the rate that discharges the cell in 5 hours. Useful discharge capacity is a decreasing function of the discharge rate, but up to a rate of around 1×C (full discharge in 1 hour), it does not differ significantly from the nominal capacity. NiMH batteries nominally operate at 1.2 V per cell, somewhat lower than conventional 1.5 V cells, but can operate many devices designed for that voltage.
Electric vehicles
NiMH batteries were frequently used in prior-generation electric and hybrid-electric vehicles; as of 2020 they have been superseded almost entirely by lithium-ion batteries in all-electric and plug-in hybrid vehicles, but they remain in use in some hybrid vehicles (the 2020 Toyota Highlander, for example). Prior all-electric plug-in vehicles included the General Motors EV1, first-generation Toyota RAV4 EV, Honda EV Plus, Ford Ranger EV and Vectrix scooter. Every first-generation hybrid vehicle used NiMH batteries, most notably the Toyota Prius and Honda Insight; later models including the Ford Escape Hybrid, Chevrolet Malibu Hybrid and Honda Civic Hybrid also used them.
Patent issues
Stanford R. Ovshinsky invented and patented a popular improvement of the NiMH battery and founded Ovonic Battery Company in 1982. General Motors purchased Ovonics' patent in 1994. By the late 1990s, NiMH batteries were being used successfully in many fully electric vehicles, such as the General Motors EV1 and Dodge Caravan EPIC minivan.
This generation of electric cars, although successful, was abruptly pulled off the market.
In October 2000, the patent was sold to Texaco, and a week later Texaco was acquired by Chevron. Chevron's Cobasys subsidiary provides these batteries only to large OEM orders. General Motors shut down production of the EV1, citing lack of battery availability as a chief obstacle. Cobasys control of NiMH batteries created a patent encumbrance for large automotive NiMH batteries.
See also
Automotive battery
Battery recycling
Comparison of commercial battery types
Gas diffusion electrode
Jelly roll
Lead–acid battery
List of battery sizes
List of battery types
Lithium-ion battery
Lithium iron phosphate battery
Nickel–zinc battery
Nickel(II) hydroxide
Nickel(III) oxide
Patent encumbrance of large automotive NiMH batteries
Power-to-weight ratio
References
External links
"Bipolar Nickel Metal Hydride Battery" by Martin G. Klein, Michael Eskra, Robert Plivelich and Paula Ralston
Energizer Nickel Metal Hydride (NiMH) Handbook and Application Manual
NiMH battery charging and safety
Metal hydrides
Nickel
Rechargeable batteries | Nickel–metal hydride battery | [
"Chemistry"
] | 4,087 | [
"Metal hydrides",
"Inorganic compounds",
"Reducing agents"
] |
58,282 | https://en.wikipedia.org/wiki/Thermal%20diffusivity | In heat transfer analysis, thermal diffusivity is the thermal conductivity divided by density and specific heat capacity at constant pressure. It is a measure of the rate of heat transfer inside a material and has SI units of m2/s. It is an intensive property. Thermal diffusivity is usually denoted by lowercase alpha (), but , , (kappa), , ,, are also used.
The formula is:
α = k / (ρ cp)
where
k is thermal conductivity (W/(m·K))
cp is specific heat capacity (J/(kg·K))
ρ is density (kg/m3)
Together, ρ cp can be considered the volumetric heat capacity (J/(m3·K)).
As seen in the heat equation,
∂T/∂t = α ∇²T,
one way to view thermal diffusivity is as the ratio of the time derivative of temperature to its curvature, quantifying the rate at which temperature concavity is "smoothed out". Thermal diffusivity is a contrasting measure to thermal effusivity. In a substance with high thermal diffusivity, heat moves rapidly through it because the substance conducts heat quickly relative to its volumetric heat capacity or 'thermal bulk'.
Thermal diffusivity is often measured with the flash method. It involves heating a strip or cylindrical sample with a short energy pulse at one end and analyzing the temperature change (reduction in amplitude and phase shift of the pulse) a short distance away.
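A minimal sketch of both routes to α: the defining ratio from the formula above, and the half-rise-time expression commonly quoted for the flash method (Parker's formula). The copper property values are approximate room-temperature figures used only for illustration:

```python
# Thermal diffusivity from material properties, and from a flash measurement.
def alpha_from_properties(k, rho, cp):
    """alpha = k / (rho * cp), in m^2/s."""
    return k / (rho * cp)

def alpha_from_flash(thickness_m, t_half_s):
    """Parker's formula for the flash method: alpha = 0.1388 * L^2 / t_half,
    where t_half is the time for the rear face to reach half its peak rise."""
    return 0.1388 * thickness_m ** 2 / t_half_s

# Approximate values for copper near room temperature (assumed):
k, rho, cp = 401.0, 8960.0, 385.0      # W/(m·K), kg/m^3, J/(kg·K)
print(f"alpha(Cu) ~ {alpha_from_properties(k, rho, cp):.2e} m^2/s")  # ~1.2e-4
```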
Thermal diffusivity of selected materials and substances
See also
Heat equation
Laser flash analysis
Thermophoresis
Thermal effusivity
Thermal time constant
References
Heat transfer
Physical quantities
Heat conduction | Thermal diffusivity | [
"Physics",
"Chemistry",
"Mathematics"
] | 329 | [
"Transport phenomena",
"Thermodynamic properties",
"Physical phenomena",
"Heat transfer",
"Physical quantities",
"Quantity",
"Thermodynamics",
"Heat conduction",
"Physical properties"
] |
58,283 | https://en.wikipedia.org/wiki/Prandtl%20number | The Prandtl number (Pr) or Prandtl group is a dimensionless number, named after the German physicist Ludwig Prandtl, defined as the ratio of momentum diffusivity to thermal diffusivity. The Prandtl number is given as:where:
: momentum diffusivity (kinematic viscosity), , (SI units: m2/s)
: thermal diffusivity, , (SI units: m2/s)
: dynamic viscosity, (SI units: Pa s = N s/m2)
: thermal conductivity, (SI units: W/(m·K))
: specific heat, (SI units: J/(kg·K))
: density, (SI units: kg/m3).
Note that whereas the Reynolds number and Grashof number are subscripted with a scale variable, the Prandtl number contains no such length scale and is dependent only on the fluid and the fluid state. The Prandtl number is often found in property tables alongside other properties such as viscosity and thermal conductivity.
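A minimal sketch of the definition above; the water property values are approximate figures near 20 °C, assumed for illustration:

```python
# Prandtl number from fluid properties: Pr = cp * mu / k = nu / alpha.
def prandtl(cp, mu, k):
    """cp in J/(kg·K), mu in Pa·s, k in W/(m·K)."""
    return cp * mu / k

cp_water, mu_water, k_water = 4182.0, 1.0e-3, 0.598  # near 20 C, approximate
print(f"Pr(water, ~20 C) ~ {prandtl(cp_water, mu_water, k_water):.1f}")  # ~7
```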
The mass transfer analog of the Prandtl number is the Schmidt number and the ratio of the Prandtl number and the Schmidt number is the Lewis number.
Experimental values
Typical values
For most gases over a wide range of temperature and pressure, Pr is approximately constant. Therefore, it can be used to determine the thermal conductivity of gases at high temperatures, where it is difficult to measure experimentally due to the formation of convection currents.
Typical values for Pr are:
0.003 for molten potassium at 975 K
around 0.015 for mercury
0.065 for molten lithium at 975 K
around 0.16–0.7 for mixtures of noble gases or noble gases with hydrogen
0.63 for oxygen
around 0.71 for air and many other gases
1.38 for gaseous ammonia
between 4 and 5 for R-12 refrigerant
around 7.56 for water (At 18 °C)
13.4 and 7.2 for seawater (At 0 °C and 20 °C respectively)
50 for n-butanol
between 100 and 40,000 for engine oil
1000 for glycerol
10,000 for polymer melts
around 10^25 for Earth's mantle.
Formula for the calculation of the Prandtl number of air and water
For air with a pressure of 1 bar, the Prandtl numbers in the temperature range between −100 °C and +500 °C can be calculated using the formula given below. The temperature is to be used in the unit degree Celsius. The deviations are a maximum of 0.1% from the literature values.
Pr = 10^9 / (1.1 θ^3 − 1200 θ^2 + 322000 θ + 1.393×10^9),
where θ is the temperature in degrees Celsius.
The Prandtl numbers for water (1 bar) can be determined in the temperature range between 0 °C and 90 °C using the formula given below. The temperature is to be used in the unit degree Celsius. The deviations are a maximum of 1% from the literature values:
Pr = 50000 / (θ^2 + 155 θ + 3700)
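A sketch implementing the two fits above, with the coefficients as given (air valid for −100 °C to +500 °C at 1 bar, water for 0 °C to 90 °C):

```python
def pr_air(theta):
    """Prandtl number of air at 1 bar; theta in degrees Celsius."""
    return 1e9 / (1.1 * theta**3 - 1200.0 * theta**2
                  + 322000.0 * theta + 1.393e9)

def pr_water(theta):
    """Prandtl number of water at 1 bar; theta in degrees Celsius."""
    return 50000.0 / (theta**2 + 155.0 * theta + 3700.0)

print(f"Pr(air, 20 C)   ~ {pr_air(20.0):.2f}")    # ~0.71, as listed above
print(f"Pr(water, 0 C)  ~ {pr_water(0.0):.1f}")   # ~13.5, cf. the list above
print(f"Pr(water, 90 C) ~ {pr_water(90.0):.2f}")  # ~1.9
```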
Physical interpretation
Small values of the Prandtl number, Pr ≪ 1, mean the thermal diffusivity dominates, whereas with large values, Pr ≫ 1, the momentum diffusivity dominates the behavior.
For example, the listed value for liquid mercury indicates that the heat conduction is more significant compared to convection, so thermal diffusivity is dominant.
However, engine oil with its high viscosity and low heat conductivity, has a higher momentum diffusivity as compared to thermal diffusivity.
The Prandtl numbers of gases are about 1, which indicates that both momentum and heat dissipate through the fluid at about the same rate. Heat diffuses very quickly in liquid metals (Pr ≪ 1) and very slowly in oils (Pr ≫ 1) relative to momentum. Consequently the thermal boundary layer is much thicker for liquid metals and much thinner for oils relative to the velocity boundary layer.
In heat transfer problems, the Prandtl number controls the relative thickness of the momentum and thermal boundary layers. When Pr is small, it means that the heat diffuses quickly compared to the velocity (momentum). This means that for liquid metals the thermal boundary layer is much thicker than the velocity boundary layer.
In laminar boundary layers, the ratio of the thermal to momentum boundary layer thickness over a flat plate is well approximated by
δt/δ ≈ Pr^(−1/3),
where δt is the thermal boundary layer thickness and δ is the momentum boundary layer thickness.
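A short sketch of this ratio for the fluids discussed above, with Prandtl numbers taken from the typical values listed earlier:

```python
# Laminar flat-plate estimate: delta_t / delta ~ Pr**(-1/3).
for fluid, pr in [("mercury", 0.015), ("air", 0.71),
                  ("water", 7.0), ("engine oil", 1000.0)]:
    print(f"{fluid:10s} Pr = {pr:8.3f}  delta_t/delta ~ {pr ** (-1.0 / 3):.2f}")
# Mercury's thermal layer is ~4x thicker than its velocity layer;
# engine oil's is ~10x thinner, matching the discussion above.
```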
For incompressible flow over a flat plate, the two Nusselt number correlations are asymptotically correct:
Nux = 0.339 Rex^(1/2) Pr^(1/3) as Pr → ∞
Nux = 0.565 (Rex Pr)^(1/2) as Pr → 0
where Rex is the Reynolds number. These two asymptotic solutions can be blended together using the concept of a norm.
See also
Turbulent Prandtl number
Magnetic Prandtl number
References
Further reading
Convection
Dimensionless numbers of fluid mechanics
Dimensionless numbers of thermodynamics
Fluid dynamics | Prandtl number | [
"Physics",
"Chemistry",
"Engineering"
] | 978 | [
"Transport phenomena",
"Thermodynamic properties",
"Physical phenomena",
"Physical quantities",
"Dimensionless numbers of thermodynamics",
"Chemical engineering",
"Convection",
"Thermodynamics",
"Piping",
"Fluid dynamics"
] |
58,285 | https://en.wikipedia.org/wiki/Nusselt%20number | In thermal fluid dynamics, the Nusselt number (, after Wilhelm Nusselt) is the ratio of total heat transfer to conductive heat transfer at a boundary in a fluid. Total heat transfer combines conduction and convection. Convection includes both advection (fluid motion) and diffusion (conduction). The conductive component is measured under the same conditions as the convective but for a hypothetically motionless fluid. It is a dimensionless number, closely related to the fluid's Rayleigh number.
A Nusselt number of order one represents heat transfer by pure conduction. A value between one and 10 is characteristic of slug flow or laminar flow. A larger Nusselt number corresponds to more active convection, with turbulent flow typically in the 100–1000 range.
A similar non-dimensional property is the Biot number, which concerns thermal conductivity for a solid body rather than a fluid. The mass transfer analogue of the Nusselt number is the Sherwood number.
Definition
The Nusselt number is the ratio of total heat transfer (convection + conduction) to conductive heat transfer across a boundary. The convection and conduction heat flows are parallel to each other and to the surface normal of the boundary surface, and are all perpendicular to the mean fluid flow in the simple case. It is defined as
$\mathrm{Nu} = \frac{h L}{k},$
where h is the convective heat transfer coefficient of the flow, L is the characteristic length, and k is the thermal conductivity of the fluid.
Selection of the characteristic length should be in the direction of growth (or thickness) of the boundary layer; some examples of characteristic length are: the outer diameter of a cylinder in (external) cross flow (perpendicular to the cylinder axis), the length of a vertical plate undergoing natural convection, or the diameter of a sphere. For complex shapes, the length may be defined as the volume of the fluid body divided by the surface area.
The thermal conductivity of the fluid is typically (but not always) evaluated at the film temperature, which for engineering purposes may be calculated as the mean-average of the bulk fluid temperature and wall surface temperature.
In contrast to the definition given above, known as average Nusselt number, the local Nusselt number is defined by taking the length to be the distance from the surface boundary to the local point of interest.
The mean, or average, number is obtained by integrating the expression over the range of interest, such as:
$\overline{\mathrm{Nu}} = \frac{1}{L}\int_0^L \mathrm{Nu}_x \, dx.$
Context
An understanding of convection boundary layers is necessary to understand convective heat transfer between a surface and a fluid flowing past it. A thermal boundary layer develops if the fluid free stream temperature and the surface temperatures differ. A temperature profile exists due to the energy exchange resulting from this temperature difference.
The heat transfer rate can be written using Newton's law of cooling as
$q = h A\,(T_s - T_\infty),$
where h is the heat transfer coefficient and A is the heat transfer surface area. Because heat transfer at the surface is by conduction, the same quantity can be expressed in terms of the thermal conductivity k:
$q = -k A \left.\frac{\partial T}{\partial y}\right|_{y=0}.$
These two terms are equal; thus
$h A\,(T_s - T_\infty) = -k A \left.\frac{\partial T}{\partial y}\right|_{y=0}.$
Rearranging,
$\frac{h}{k} = \frac{-\left.\partial T/\partial y\right|_{y=0}}{T_s - T_\infty}.$
Multiplying by a representative length L gives a dimensionless expression:
$\frac{h L}{k} = \frac{-\left.\partial T/\partial y\right|_{y=0}}{(T_s - T_\infty)/L}.$
The right-hand side is now the ratio of the temperature gradient at the surface to the reference temperature gradient, while the left-hand side is similar to the Biot modulus. This becomes the ratio of conductive thermal resistance to the convective thermal resistance of the fluid, otherwise known as the Nusselt number, Nu:
$\mathrm{Nu} = \frac{h L}{k}.$
Derivation
The Nusselt number may be obtained by a non-dimensional analysis of Fourier's law, since it is equal to the dimensionless temperature gradient at the surface:
$q = -k A \nabla T,$
where q is the heat transfer rate, k is the constant thermal conductivity and T the fluid temperature.
Indeed, if we set $\nabla' = L \nabla$ and $T' = \dfrac{T - T_h}{T_h - T_c},$
we arrive at
$q = -\frac{k A}{L}\,(T_h - T_c)\, \nabla' T'.$
Then we define
$h = \frac{q}{A\,(T_h - T_c)},$
so the equation becomes
$\mathrm{Nu} = \frac{h L}{k} = -\nabla' T'.$
By integrating over the surface of the body:
$\overline{\mathrm{Nu}} = -\frac{1}{S'} \oint_{S'} \nabla' T' \, dS',$
where $S' = S / L^2$.
Empirical correlations
Typically, for free convection, the average Nusselt number is expressed as a function of the Rayleigh number and the Prandtl number, written as
$\overline{\mathrm{Nu}} = f(\mathrm{Ra}, \mathrm{Pr}).$
Otherwise, for forced convection, the Nusselt number is generally a function of the Reynolds number and the Prandtl number, or
$\overline{\mathrm{Nu}} = f(\mathrm{Re}, \mathrm{Pr}).$
Empirical correlations for a wide variety of geometries are available that express the Nusselt number in the aforementioned forms.
See also the correlations listed at Heat transfer coefficient § Convective heat transfer correlations.
Free convection
Free convection at a vertical wall
Cited as coming from Churchill and Chu:
$\overline{\mathrm{Nu}}_L = \left( 0.825 + \frac{0.387\,\mathrm{Ra}_L^{1/6}}{\left(1 + (0.492/\mathrm{Pr})^{9/16}\right)^{8/27}} \right)^{2}.$
Free convection from horizontal plates
If the characteristic length is defined as
$L_c = \frac{A_s}{P},$
where $A_s$ is the surface area of the plate and P is its perimeter, then for the top surface of a hot object in a colder environment or the bottom surface of a cold object in a hotter environment, the correlation is commonly given as
$\overline{\mathrm{Nu}}_L = 0.54\,\mathrm{Ra}_L^{1/4} \qquad (10^4 \lesssim \mathrm{Ra}_L \lesssim 10^7),$
and for the bottom surface of a hot object in a colder environment or the top surface of a cold object in a hotter environment as
$\overline{\mathrm{Nu}}_L = 0.27\,\mathrm{Ra}_L^{1/4} \qquad (10^5 \lesssim \mathrm{Ra}_L \lesssim 10^{10}).$
Free convection from enclosure heated from below
Cited as coming from Bejan:
$\overline{\mathrm{Nu}} = 0.069\,\mathrm{Ra}^{1/3}\,\mathrm{Pr}^{0.074}.$
This equation "holds when the horizontal layer is sufficiently wide so that the effect of the short vertical sides is minimal."
It was empirically determined by Globe and Dropkin in 1959: "Tests were made in cylindrical containers having copper tops and bottoms and insulating walls." The containers used were around 5" in diameter and 2" high.
Flat plate in laminar flow
The local Nusselt number for laminar flow over a flat plate, at a distance x downstream from the edge of the plate, is given by
$\mathrm{Nu}_x = 0.332\,\mathrm{Re}_x^{1/2}\,\mathrm{Pr}^{1/3}.$
The average Nusselt number for laminar flow over a flat plate, from the edge of the plate to a downstream distance x, is given by
$\overline{\mathrm{Nu}}_x = 0.664\,\mathrm{Re}_x^{1/2}\,\mathrm{Pr}^{1/3}.$
Sphere in convective flow
In some applications, such as the evaporation of spherical liquid droplets in air, the following correlation (the Ranz–Marshall form) is used:
$\mathrm{Nu} = 2 + 0.6\,\mathrm{Re}_D^{1/2}\,\mathrm{Pr}^{1/3}.$
Forced convection in turbulent pipe flow
Gnielinski correlation
Gnielinski's correlation for turbulent flow in tubes:
$\mathrm{Nu} = \frac{(f/8)\,(\mathrm{Re} - 1000)\,\mathrm{Pr}}{1 + 12.7\,(f/8)^{1/2}\left(\mathrm{Pr}^{2/3} - 1\right)},$
where f is the Darcy friction factor that can either be obtained from the Moody chart or, for smooth tubes, from the correlation developed by Petukhov:
$f = \left(0.790 \ln \mathrm{Re} - 1.64\right)^{-2}.$
The Gnielinski correlation is valid for
$0.5 \le \mathrm{Pr} \le 2000$ and $3000 \le \mathrm{Re} \le 5 \times 10^{6}.$
Dittus–Boelter equation
The Dittus–Boelter equation (for turbulent flow), as introduced by W. H. McAdams, is an explicit function for calculating the Nusselt number. It is easy to solve but is less accurate when there is a large temperature difference across the fluid. It is tailored to smooth tubes, so use for rough tubes (most commercial applications) is cautioned. The Dittus–Boelter equation is
$\mathrm{Nu} = 0.023\,\mathrm{Re}^{4/5}\,\mathrm{Pr}^{n},$
where:
D is the inside diameter of the circular duct
Pr is the Prandtl number
n = 0.4 for the fluid being heated, and n = 0.3 for the fluid being cooled.
The Dittus–Boelter equation is valid for
$0.6 \le \mathrm{Pr} \le 160, \quad \mathrm{Re} \ge 10^{4}, \quad L/D \ge 10.$
The Dittus–Boelter equation is a good approximation where temperature differences between the bulk fluid and the heat transfer surface are minimal, avoiding equation complexity and iterative solving. For water with a moderate difference between the bulk fluid average temperature and the heat transfer surface temperature, the viscosity correction factor can be obtained as 1.45; this increases to 3.57 at a higher heat transfer surface temperature, making a significant difference to the Nusselt number and the heat transfer coefficient.
Sieder–Tate correlation
The Sieder–Tate correlation for turbulent flow is an implicit function, as it analyzes the system as a nonlinear boundary value problem. The Sieder–Tate result can be more accurate as it takes into account the change in viscosity ($\mu$ and $\mu_s$) due to the temperature change between the bulk fluid average temperature and the heat-transfer surface temperature, respectively. The Sieder–Tate correlation is normally solved by an iterative process, as the viscosity factor will change as the Nusselt number changes:
$\mathrm{Nu} = 0.027\,\mathrm{Re}^{4/5}\,\mathrm{Pr}^{1/3}\left(\frac{\mu}{\mu_s}\right)^{0.14},$
where:
$\mu$ is the fluid viscosity at the bulk fluid temperature
$\mu_s$ is the fluid viscosity at the heat-transfer boundary surface temperature.
The Sieder–Tate correlation is valid for
$0.7 \le \mathrm{Pr} \le 16{,}700, \quad \mathrm{Re} \ge 10^{4}, \quad L/D \ge 10.$
Forced convection in fully developed laminar pipe flow
For fully developed internal laminar flow, the Nusselt numbers tend towards a constant value for long pipes.
For internal flow:
$\mathrm{Nu} = \frac{h D_h}{k_f},$
where:
Dh = Hydraulic diameter
kf = thermal conductivity of the fluid
h = convective heat transfer coefficient
Convection with uniform temperature for circular tubes
From Incropera & DeWitt,
$\mathrm{Nu} = 3.66.$
An OEIS sequence gives this value to higher precision.
Convection with uniform heat flux for circular tubes
For the case of constant surface heat flux,
$\mathrm{Nu} = 4.36.$
See also
Sherwood number (mass transfer Nusselt number)
Churchill–Bernstein equation
Biot number
Reynolds number
Convective heat transfer
Heat transfer coefficient
Thermal conductivity
References
External links
Simple derivation of the Nusselt number from Newton's law of cooling (Accessed 23 September 2009)
Convection
Dimensionless numbers of fluid mechanics
Dimensionless numbers of thermodynamics
Fluid dynamics
Heat transfer | Nusselt number | [
"Physics",
"Chemistry",
"Engineering"
] | 1,792 | [
"Transport phenomena",
"Thermodynamic properties",
"Physical phenomena",
"Heat transfer",
"Physical quantities",
"Dimensionless numbers of thermodynamics",
"Chemical engineering",
"Convection",
"Thermodynamics",
"Piping",
"Fluid dynamics"
] |
58,287 | https://en.wikipedia.org/wiki/Grashof%20number | In fluid mechanics (especially fluid thermodynamics), the Grashof number (, after Franz Grashof) is a dimensionless number which approximates the ratio of the buoyancy to viscous forces acting on a fluid. It frequently arises in the study of situations involving natural convection and is analogous to the Reynolds number ().
Definition
Heat transfer
Free convection is caused by a change in density of a fluid due to a temperature change or gradient. Usually the density decreases due to an increase in temperature and causes the fluid to rise. This motion is caused by the buoyancy force. The major force that resists the motion is the viscous force. The Grashof number is a way to quantify the opposing forces.
The Grashof number is:
$\mathrm{Gr}_L = \frac{g\,\beta\,(T_s - T_\infty)\,L^3}{\nu^2}$ for vertical flat plates
$\mathrm{Gr}_D = \frac{g\,\beta\,(T_s - T_\infty)\,D^3}{\nu^2}$ for pipes and bluff bodies
where:
g is gravitational acceleration due to Earth
β is the coefficient of volume expansion (equal to approximately 1/T for ideal gases, with T the absolute temperature)
T_s is the surface temperature
T_∞ is the bulk temperature
L is the vertical length
D is the diameter
ν is the kinematic viscosity.
The L and D subscripts indicate the length scale basis for the Grashof number.
The transition to turbulent flow occurs in the range $10^8 < \mathrm{Gr}_L < 10^9$ for natural convection from vertical flat plates. At higher Grashof numbers, the boundary layer is turbulent; at lower Grashof numbers, in the range $\mathrm{Gr}_L < 10^8$, the boundary layer is laminar.
Mass transfer
There is an analogous form of the Grashof number used in cases of natural convection mass transfer problems. In the case of mass transfer, natural convection is caused by concentration gradients rather than temperature gradients:
$\mathrm{Gr}_c = \frac{g\,\beta_c\,(C_{a,s} - C_{a,\infty})\,L^3}{\nu^2},$
where
$\beta_c = -\frac{1}{\rho}\left(\frac{\partial \rho}{\partial C_a}\right)_{T,p}$
and:
g is gravitational acceleration due to Earth
C_{a,s} is the concentration of species a at the surface
C_{a,∞} is the concentration of species a in the ambient medium
L is the characteristic length
ν is the kinematic viscosity
ρ is the fluid density
C_a is the concentration of species a
T is the temperature (constant)
p is the pressure (constant).
Relationship to other dimensionless numbers
The Rayleigh number, defined as the product of the Grashof and Prandtl numbers,
$\mathrm{Ra}_x = \mathrm{Gr}_x\,\mathrm{Pr},$
is a dimensionless number that characterizes convection problems in heat transfer. A critical value exists for the Rayleigh number, above which fluid motion occurs.
The ratio of the Grashof number to the square of the Reynolds number may be used to determine if forced or free convection may be neglected for a system, or if there's a combination of the two. This characteristic ratio is known as the Richardson number, $\mathrm{Ri} = \mathrm{Gr}/\mathrm{Re}^2$. If the ratio is much less than one, then free convection may be ignored. If the ratio is much greater than one, forced convection may be ignored. Otherwise, the regime is combined forced and free convection.
Derivation
The first step to deriving the Grashof number is manipulating the volume expansion coefficient,
$\beta = \frac{1}{v}\left(\frac{\partial v}{\partial T}\right)_p = -\frac{1}{\rho}\left(\frac{\partial \rho}{\partial T}\right)_p.$
The $v$ in the equation above, which represents specific volume, is not the same as the $v$ in the subsequent sections of this derivation, which will represent a velocity. This partial relation of the volume expansion coefficient, $\beta$, with respect to fluid density, $\rho$, given constant pressure, can be rewritten as
$\rho = \rho_0\,(1 - \beta \Delta T),$
where:
$\rho_0$ is the bulk fluid density
$\rho$ is the boundary layer density
$\Delta T = (T - T_0)$, the temperature difference between boundary layer and bulk fluid.
There are two different ways to find the Grashof number from this point. One involves the energy equation while the other incorporates the buoyant force due to the difference in density between the boundary layer and bulk fluid.
Energy equation
This discussion involving the energy equation is with respect to rotationally symmetric flow. This analysis will take into consideration the effect of gravitational acceleration on flow and heat transfer. The mathematical equations to follow apply both to rotational symmetric flow as well as two-dimensional planar flow.
where:
x is the rotational direction, i.e. the direction parallel to the surface
u is the tangential velocity, i.e. the velocity parallel to the surface
y is the planar direction, i.e. the direction normal to the surface
v is the normal velocity, i.e. the velocity normal to the surface
r is the radius.
In this equation the superscript n is to differentiate rotationally symmetric flow from planar flow. The following characteristics of this equation hold true.
n = 1: rotationally symmetric flow
n = 0: planar, two-dimensional flow
g is gravitational acceleration
This equation expands to the following with the addition of physical fluid properties:
From here we can further simplify the momentum equation by setting the bulk fluid velocity to 0 ($u = 0$):
$\frac{\partial p}{\partial x} = -\rho_0\, g.$
This relation shows that the pressure gradient is simply a product of the bulk fluid density and the gravitational acceleration. The next step is to plug the pressure gradient into the momentum equation.
where the volume expansion coefficient–density relationship found above and the kinematic viscosity relationship $\nu = \mu / \rho$ were substituted into the momentum equation.
To find the Grashof number from this point, the preceding equation must be non-dimensionalized. This means that every variable in the equation should have no dimension and should instead be a ratio characteristic to the geometry and setup of the problem. This is done by dividing each variable by corresponding constant quantities. Lengths are divided by a characteristic length, $L_c$. Velocities are divided by an appropriate reference velocity, $V$, which, considering the Reynolds number, gives $V = \mathrm{Re}\,\nu / L_c$. Temperatures are divided by the appropriate temperature difference, $(T_s - T_0)$. These dimensionless parameters look like the following:
$x^* = x / L_c$,
$y^* = y / L_c$,
$u^* = u / V$,
$v^* = v / V$, and
$T^* = (T - T_0) / (T_s - T_0)$.
The asterisks represent dimensionless parameters. Combining these dimensionless equations with the momentum equations gives the following simplified equation.
where:
$T_s$ is the surface temperature
$T_0$ is the bulk fluid temperature
$L_c$ is the characteristic length.
The dimensionless parameter enclosed in the brackets in the preceding equation is known as the Grashof number:
$\mathrm{Gr} = \frac{g\,\beta\,(T_s - T_0)\,L_c^3}{\nu^2}.$
Buckingham π theorem
Another form of dimensional analysis that will result in the Grashof number is known as the Buckingham π theorem. This method takes into account the buoyancy force per unit volume, $F_b = (\rho_0 - \rho)\,g,$ due to the difference in density between the boundary layer and the bulk fluid.
Using the expansion-coefficient relation above, this equation can be manipulated to give
$F_b = \rho\, g\, \beta\, \Delta T.$
The list of variables that are used in the Buckingham π method is listed below, along with their symbols and dimensions.
With reference to the Buckingham π theorem, the variables listed above combine into a set of dimensionless groups. Choosing a reference set of variables and solving the resulting groups yields dimensionless combinations of the remaining variables; from two of these groups, the product
$\pi_1 \pi_2 = \frac{g\,\beta\,\Delta T\, L^3 \rho^2}{\mu^2}$
forms the Grashof number.
Taking $\nu = \mu / \rho$, the preceding equation can be rendered as the same result obtained from deriving the Grashof number from the energy equation,
$\mathrm{Gr} = \frac{g\,\beta\,\Delta T\, L^3}{\nu^2}.$
In forced convection the Reynolds number governs the fluid flow. But, in natural convection the Grashof number is the dimensionless parameter that governs the fluid flow. Using the energy equation and the buoyant force combined with dimensional analysis provides two different ways to derive the Grashof number.
Physical Reasoning
It is also possible to derive the Grashof number by physical definition of the number as follows:
However, the above expression, especially the final part at the right-hand side, is slightly different from the Grashof number appearing in the literature. A dimensionally correct scale in terms of dynamic viscosity can be used to arrive at the final form.
Writing the above scale into Gr gives the form quoted earlier,
$\mathrm{Gr} = \frac{g\,\beta\,\Delta T\, L^3}{\nu^2}.$
Physical reasoning is helpful to grasp the meaning of the number. On the other hand, the following velocity definition can be used as a characteristic velocity value for making certain velocities nondimensional.
Effects of Grashof number on the flow of different fluids
Recent research on convection-driven flow of different fluids over various surfaces has examined the effects of the Grashof number. Using the slope of the linear regression line through the data points, it was concluded that an increase in the value of the Grashof number, or of any buoyancy-related parameter, implies an increase in the wall temperature; this weakens the bonds between fluid molecules, decreases the strength of the internal friction, and strengthens the effect of gravity (i.e., makes the specific weight appreciably different between the immediate fluid layers adjacent to the wall). The effects of the buoyancy parameter are highly significant in the laminar flow within the boundary layer formed on a vertically moving cylinder. This is only achievable when the prescribed surface temperature (PST) and prescribed wall heat flux (WHF) are considered. The buoyancy parameter has a negligible positive effect on the local Nusselt number, but only when the magnitude of the Prandtl number is small or the prescribed wall heat flux (WHF) is considered. The Sherwood number, Bejan number, entropy generation, Stanton number and pressure gradient are increasing properties of the buoyancy-related parameter, while the concentration profiles, frictional force, and motile microorganisms are decreasing properties.
Notes
References
Further reading
Buoyancy
Convection
Dimensionless numbers of fluid mechanics
Dimensionless numbers of thermodynamics
Fluid dynamics
Heat transfer | Grashof number | [
"Physics",
"Chemistry",
"Engineering"
] | 1,767 | [
"Transport phenomena",
"Thermodynamic properties",
"Physical phenomena",
"Heat transfer",
"Physical quantities",
"Dimensionless numbers of thermodynamics",
"Chemical engineering",
"Convection",
"Thermodynamics",
"Piping",
"Fluid dynamics"
] |
17,490,047 | https://en.wikipedia.org/wiki/Slow%20vertex%20response | The slow vertex response (also called SVR or V potential) is an electrochemical signal associated with electrophysiological recordings of the auditory system, specifically Auditory evoked potentials (AEPs). The SVR of a normal human being recorded with surface electrodes can be found at the end of a recorded AEP waveform between the latencies 50-500ms. Detection of SVR is used to estimate thresholds for hearing pathways.
References
Physiology | Slow vertex response | [
"Biology"
] | 94 | [
"Physiology"
] |
17,490,264 | https://en.wikipedia.org/wiki/Pseudo%20Stirling%20cycle | The pseudo Stirling cycle, also known as the adiabatic Stirling cycle, is a thermodynamic cycle with an adiabatic working volume and isothermal heater and cooler, in contrast to the ideal Stirling cycle with an isothermal working space. The working fluid has no bearing on the maximum thermal efficiencies of the pseudo Stirling cycle.
Practical Stirling engines usually follow an adiabatic Stirling cycle, as the ideal Stirling cycle cannot be practically implemented.
The nomenclature (practical engines and the ideal cycle are both named "Stirling") and a lack of specificity (omitting "ideal" or "adiabatic" before "Stirling cycle") can cause confusion.
History
The pseudo Stirling cycle was designed to address predictive shortcomings in the ideal isothermal Stirling cycle. Specifically, the ideal cycle does not give usable figures or criteria for judging the performance of real-world Stirling engines.
See also
Stirling engine
Stirling cycle
References
External links
Abstract of "The Pseudo Stirling cycle - A suitable performance criterion"
Brief History of Stirling Machines p. 4 and on
Thermodynamic cycles | Pseudo Stirling cycle | [
"Physics",
"Chemistry"
] | 212 | [
"Thermodynamics stubs",
"Physical chemistry stubs",
"Thermodynamics"
] |
17,494,135 | https://en.wikipedia.org/wiki/High-efficiency%20hybrid%20cycle | The high-efficiency hybrid cycle (HEHC) is a new 4-stroke thermodynamic cycle combining elements of the Otto cycle, Diesel cycle, Atkinson cycle and Rankine cycle.
HEHC engines
The third-generation LiquidPiston engine design, currently in development, is the only engine designed around the HEHC. It is a rotary combustion engine.
References
External links
LiquidPiston Inc. – The company designing the first HEHC-based engine
MIT News article: "Small engine packs a punch" (December 5, 2014)
Thermodynamic cycles | High-efficiency hybrid cycle | [
"Physics",
"Chemistry"
] | 117 | [
"Thermodynamics stubs",
"Physical chemistry stubs",
"Thermodynamics"
] |
1,032,998 | https://en.wikipedia.org/wiki/Gas%20centrifuge | A gas centrifuge is a device that performs isotope separation of gases. A centrifuge relies on the principles of centrifugal force accelerating molecules so that particles of different masses are physically separated in a gradient along the radius of a rotating container. A prominent use of gas centrifuges is for the separation of uranium-235 (235U) from uranium-238 (238U). The gas centrifuge was developed to replace the gaseous diffusion method of 235U extraction. High degrees of separation of these isotopes relies on using many individual centrifuges arranged in series that achieve successively higher concentrations. This process yields higher concentrations of 235U while using significantly less energy compared to the gaseous diffusion process.
History
Suggested in 1919, the centrifugal process was first successfully performed in 1934. American scientist Jesse Beams and his team at the University of Virginia developed the process by separating two chlorine isotopes through a vacuum ultracentrifuge. It was one of the initial isotopic separation means pursued during the Manhattan Project, more particularly by Harold Urey and Karl P. Cohen, but research was discontinued in 1944 as it was felt that the method would not produce results by the end of the war, and that other means of uranium enrichment (gaseous diffusion and electromagnetic separation) had a better chance of success in the short term. This method was successfully used in the Soviet nuclear program, making the Soviet Union the most effective supplier of enriched uranium. Franz Simon, Rudolf Peierls, Klaus Fuchs and Nicholas Kurti made important contributions to the centrifugal process.
Paul Dirac made important theoretical contributions to the centrifugal process during World War II; Dirac developed the fundamental theory of separation processes that underlies the design and analysis of modern uranium enrichment plants. In the long term, especially with the development of the Zippe-type centrifuge, the gas centrifuge has become a very economical mode of separation, using considerably less energy than other methods and having numerous other advantages.
Research in the physical performance of centrifuges was carried out by the Pakistani scientist Abdul Qadeer Khan in the 1970s–80s, using vacuum methods for advancing the role of centrifuges in the development of nuclear fuel for Pakistan's atomic bomb. Many of the theorists working with Khan were unsure that either the gas centrifuge process or enriched uranium would be feasible on time. One scientist recalled: "No one in the world has used the [gas] centrifuge method to produce military-grade uranium.... This was not going to work. He was simply wasting time." In spite of skepticism, the program was quickly proven to be feasible. Enrichment via centrifuge has been used in experimental physics, and the method was smuggled to at least three different countries by the end of the 20th century.
Centrifugal process
The centrifuge relies on the force resulting from centrifugal acceleration to separate molecules according to their mass and can be applied to most fluids. The dense (heavier) molecules move towards the wall, and the lighter ones remain close to the center. The centrifuge consists of a rigid-body rotor rotating at high speed. Concentric gas tubes located on the axis of the rotor are used to introduce feed gas into the rotor and extract the heavier and lighter separated streams. For 235U production, the heavier stream is the waste stream and the lighter stream is the product stream. Modern Zippe-type centrifuges are tall cylinders spinning on a vertical axis. A vertical temperature gradient can be applied to create a convective circulation rising in the center and descending at the periphery of the centrifuge. Such a countercurrent flow can also be stimulated mechanically by the scoops that take out the enriched and depleted fractions. Diffusion between these opposing flows increases the separation by the principle of countercurrent multiplication.
In practice, since there are limits to how tall a single centrifuge can be made, several such centrifuges are connected in series. Each centrifuge receives one input line and produces two output lines, corresponding to light and heavy fractions. The input of each centrifuge is the product stream of the previous centrifuge. This produces an almost pure light fraction from the product stream of the last centrifuge and an almost pure heavy fraction from the waste stream of the first centrifuge.
Gas centrifugation process
The gas centrifugation process uses a unique design that allows gas to constantly flow in and out of the centrifuge. Unlike most centrifuges which rely on batch processing, the gas centrifuge uses continuous processing, allowing cascading in which multiple identical processes occur in succession. The gas centrifuge consists of a cylindrical rotor, a casing, an electric motor, and three lines for material to travel. The gas centrifuge is designed with a casing that completely encloses the centrifuge. The cylindrical rotor is located inside the casing, which is evacuated of all air to produce a near frictionless rotation when operating. The motor spins the rotor, creating the centrifugal force on the components as they enter the cylindrical rotor. This force acts to separate the molecules of the gas, with heavier molecules moving towards the wall of the rotor and the lighter molecules towards the central axis. There are two output lines, one for the fraction enriched in the desired isotope (in uranium separation, this is 235U), and one depleted of that isotope. The output lines take these separations to other centrifuges to continue the centrifugation process. The process begins when the rotor is balanced in three stages. Most of the technical details on gas centrifuges are difficult to obtain because they are shrouded in "nuclear secrecy".
The early gas centrifuges used in the UK had an alloy body wrapped in epoxy-impregnated glass fibre. Dynamic balancing of the assembly was accomplished by adding small traces of epoxy at the locations indicated by the balancing test unit. The motor was usually a pancake type located at the bottom of the cylinder. The early units were typically around 2 metres long, but subsequent developments gradually increased the length; the present generation is over 4 metres in length. The bearings are gas-based devices, as mechanical bearings would not survive at the normal operating speeds of these centrifuges.
A section of centrifuges would be fed with variable-frequency alternating current from an electronic (bulk) inverter, which would slowly ramp them up to the required speed, generally in excess of 50,000 rpm. One precaution was to quickly get past frequencies at which the cylinder was known to suffer resonance problems. The inverter is a high-frequency unit capable of operating at frequencies around 1 kilohertz. The whole process is normally silent; if a noise is heard coming from a centrifuge, it is a warning of failure (which normally occurs very quickly). The design of the cascade normally allows for the failure of at least one centrifuge unit without compromising the operation of the cascade. The units are normally very reliable, with early models having operated continuously for over 30 years.
Later models have steadily increased the rotation speed of the centrifuges, as it is the velocity of the centrifuge wall that has the most effect on the separation efficiency. A feature of the cascade system of centrifuges is that it is possible to increase plant throughput incrementally, by adding cascade "blocks" to the existing installation at suitable locations, rather than having to install a completely new line of centrifuges.
Concurrent and countercurrent centrifuges
The simplest gas centrifuge is the concurrent centrifuge, where separative effect is produced by the centrifugal effects of the rotor's rotation. In these centrifuges, the heavy fraction is collected at the periphery of the rotor and the light fraction from nearer the axis of rotation.
Inducing a countercurrent flow uses countercurrent multiplication to enhance the separative effect. A vertical circulating current is set up, with the gas flowing axially along the rotor walls in one direction and a return flow closer to the center of the rotor. The centrifugal separation continues as before (heavier molecules preferentially moving outwards), which means that the heavier molecules are collected by the wall flow, and the lighter fraction collects at the other end. In a centrifuge with downward wall flow, the heavier molecules collect at the bottom. The outlet scoops are then placed at the ends of the rotor cavity, with the feed mixture injected along the axis of the cavity (ideally, the injection point is at the point where the mixture in the rotor is equal to the feed).
This countercurrent flow can be induced mechanically or thermally, or a combination. In mechanically induced countercurrent flow, the arrangement of the (stationary) scoops and internal rotor structures are used to generate the flow. A scoop interacts with the gas by slowing it, which tends to draw it into the centre of the rotor. The scoops at each end induce opposing currents, so one scoop is protected from the flow by a "baffle": a perforated disc within the rotor which rotates along with the gas—at this end of the rotor, the flow will be outwards, towards the rotor wall. Thus, in a centrifuge with a baffled top scoop, the wall flow is downwards, and heavier molecules are collected at the bottom. Thermally induced convection currents can be created by heating the bottom of the centrifuge and/or cooling the top end.
Separative work units
The separative work unit (SWU) is a measure of the amount of work done by the centrifuge and has units of mass (typically kilogram separative work unit). The work $W_\mathrm{SWU}$ necessary to separate a mass $F$ of feed of assay $x_F$ into a mass $P$ of product of assay $x_P$, and tails of mass $W$ and assay $x_W$, is expressed in terms of the number of separative work units needed, given by the expression
$W_\mathrm{SWU} = P\,V(x_P) + W\,V(x_W) - F\,V(x_F),$
where $V(x)$ is the value function, defined as
$V(x) = (1 - 2x)\,\ln\!\left(\frac{1 - x}{x}\right).$
Practical application of centrifugation
Separation of uranium-235 from uranium-238
The separation of uranium requires the material in a gaseous form; uranium hexafluoride (UF6) is used for uranium enrichment. Upon entering the centrifuge cylinder, the UF6 gas is rotated at a high speed. The rotation creates a strong centrifugal force that draws more of the heavier gas molecules (containing the 238U) toward the wall of the cylinder, while the lighter gas molecules (containing the 235U) tend to collect closer to the center. The stream that is slightly enriched in 235U is withdrawn and fed into the next higher stage, while the slightly depleted stream is recycled back into the next lower stage.
Separation of zinc isotopes
For some uses in nuclear technology, the content of zinc-64 in zinc metal has to be lowered in order to prevent formation of radioisotopes by its neutron activation. Diethyl zinc is used as the gaseous feed medium for the centrifuge cascade. An example of a resulting material is depleted zinc oxide, used as a corrosion inhibitor.
See also
Nuclear technology
Nuclear power
Nuclear fuel
Notes
References
"Basics of Centrifugation." Cole-Parmer Technical Lab. 14 Mar. 2008
"Gas Centrifuge Uranium Enrichment." Global Security.Org. 27 Apr. 2005. 13 Mar. 2008
"What is a Gas Centrifuge?" 2003. Institute for Science and International Security. 10 Oct. 2013
External links
Annotated bibliography on the gas centrifuge from the Alsos Digital Library
History of the Centrifuge
What is a Gas Centrifuge?
Agreement between the Government of the United States of America and the Four Governments of the French Republic, the United Kingdom of Great Britain and Northern Ireland, the Kingdom of the Netherlands, and the Federal Republic of Germany Regarding the Establishment, Construction and Operation of Uranium Enriching Installations Using Gas Centrifuge Technology in the United States of America United States Department of State
Centrifuges
Isotope separation
Nuclear chemistry
Nuclear proliferation
Uranium
| Gas centrifuge | [
"Physics",
"Chemistry",
"Engineering"
] | 2,540 | [
"Centrifugation",
"Chemical equipment",
"Nuclear chemistry",
"nan",
"Nuclear physics",
"Centrifuges"
] |
1,033,045 | https://en.wikipedia.org/wiki/Marcinkiewicz%20interpolation%20theorem | In mathematics, the Marcinkiewicz interpolation theorem, discovered by , is a result bounding the norms of non-linear operators acting on Lp spaces.
Marcinkiewicz' theorem is similar to the Riesz–Thorin theorem about linear operators, but also applies to non-linear operators.
Preliminaries
Let f be a measurable function with real or complex values, defined on a measure space (X, F, ω). The distribution function of f is defined by
$\lambda_f(t) = \omega\left(\{x \in X : |f(x)| > t\}\right).$
Then f is called weak $L^1$ if there exists a constant C such that the distribution function of f satisfies the following inequality for all t > 0:
$\lambda_f(t) \le \frac{C}{t}.$
The smallest constant C in the inequality above is called the weak $L^1$ norm and is usually denoted by $\|f\|_{1,w}$ or $\|f\|_{1,\infty}$. Similarly the space is usually denoted by $L^{1,w}$ or $L^{1,\infty}$.
(Note: This terminology is a bit misleading since the weak norm does not satisfy the triangle inequality, as one can see by considering the sum of the functions on $(0,1)$ given by $1/x$ and $1/(1-x)$, which has norm 4 not 2.)
Any function belonging to $L^1$ also belongs to $L^{1,w}$, and in addition one has the inequality
$\|f\|_{1,w} \le \|f\|_1.$
This is nothing but Markov's inequality (a.k.a. Chebyshev's inequality). The converse is not true. For example, the function 1/x belongs to $L^{1,w}$ but not to $L^1$.
Similarly, one may define the weak $L^p$ space as the space of all functions f such that $|f|^p$ belongs to $L^{1,w}$, and the weak $L^p$ norm using
$\|f\|_{p,w} = \left\| |f|^p \right\|_{1,w}^{1/p}.$
More directly, the $L^{p,w}$ norm is defined as the best constant C in the inequality
$\lambda_f(t) \le \frac{C^p}{t^p}$
for all t > 0.
Formulation
Informally, Marcinkiewicz's theorem is
Theorem. Let T be a bounded linear operator from $L^p$ to $L^{p,w}$ and at the same time from $L^q$ to $L^{q,w}$. Then T is also a bounded operator from $L^r$ to $L^r$ for any r between p and q.
In other words, even if one only requires weak boundedness on the extremes p and q, regular boundedness still holds. To make this more formal, one has to explain that T is bounded only on a dense subset and can be completed. See Riesz-Thorin theorem for these details.
Where Marcinkiewicz's theorem is weaker than the Riesz–Thorin theorem is in the estimates of the norm. The theorem gives bounds for the $L^r$ norm of T but this bound increases to infinity as r converges to either p or q. Specifically, suppose that
$\|Tf\|_{p,w} \le N_p \|f\|_p, \qquad \|Tf\|_{q,w} \le N_q \|f\|_q,$
so that the operator norm of T from $L^p$ to $L^{p,w}$ is at most $N_p$, and the operator norm of T from $L^q$ to $L^{q,w}$ is at most $N_q$. Then the following interpolation inequality holds for all r between p and q and all f ∈ $L^r$:
$\|Tf\|_r \le \gamma\, N_p^{\delta} N_q^{1-\delta} \|f\|_r,$
where $\delta$ is determined by
$\frac{1}{r} = \frac{\delta}{p} + \frac{1 - \delta}{q}$
and
$\gamma = 2 \left( \frac{r\,(q - p)}{(r - p)(q - r)} \right)^{1/r}.$
The constants δ and γ can also be given for q = ∞ by passing to the limit.
A version of the theorem also holds more generally if T is only assumed to be a quasilinear operator in the following sense: there exists a constant C > 0 such that T satisfies
$|T(f + g)(x)| \le C\,\left(|Tf(x)| + |Tg(x)|\right)$
for almost every x. The theorem holds precisely as stated, except with γ replaced by $C\gamma$.
An operator T (possibly quasilinear) satisfying an estimate of the form
$\|Tf\|_{q,w} \le C \|f\|_p$
is said to be of weak type (p,q). An operator is simply of type (p,q) if T is a bounded transformation from $L^p$ to $L^q$:
$\|Tf\|_q \le C \|f\|_p.$
A more general formulation of the interpolation theorem is as follows:
If T is a quasilinear operator of weak type (p0, q0) and of weak type (p1, q1), where q0 ≠ q1, then for each θ ∈ (0,1), T is of type (p,q), for p and q with p ≤ q of the form
$\frac{1}{p} = \frac{1 - \theta}{p_0} + \frac{\theta}{p_1}, \qquad \frac{1}{q} = \frac{1 - \theta}{q_0} + \frac{\theta}{q_1}.$
The latter formulation follows from the former through an application of Hölder's inequality and a duality argument.
Applications and examples
A famous application example is the Hilbert transform. Viewed as a multiplier, the Hilbert transform of a function f can be computed by first taking the Fourier transform of f, then multiplying by the sign function, and finally applying the inverse Fourier transform.
Hence Parseval's theorem easily shows that the Hilbert transform is bounded from $L^2$ to $L^2$. A much less obvious fact is that it is bounded from $L^1$ to $L^{1,w}$. Hence Marcinkiewicz's theorem shows that it is bounded from $L^p$ to $L^p$ for any 1 < p < 2. Duality arguments show that it is also bounded for 2 < p < ∞. In fact, the Hilbert transform is really unbounded for p equal to 1 or ∞.
Another famous example is the Hardy–Littlewood maximal function, which is only a sublinear operator rather than a linear one. While $L^p \to L^p$ bounds can be derived immediately from the weak $L^1$ estimate by a clever change of variables, Marcinkiewicz interpolation is a more intuitive approach. Since the Hardy–Littlewood maximal function is trivially bounded from $L^\infty$ to $L^\infty$, strong boundedness for all $p > 1$ follows immediately from the weak (1,1) estimate and interpolation. The weak (1,1) estimate can be obtained from the Vitali covering lemma.
History
The theorem was first announced by Józef Marcinkiewicz (1939), who showed this result to Antoni Zygmund shortly before he died in World War II. The theorem was almost forgotten by Zygmund, and was absent from his original works on the theory of singular integral operators. Later, Zygmund realized that Marcinkiewicz's result could greatly simplify his work, at which time he published his former student's theorem together with a generalization of his own.
In 1964 Richard A. Hunt and Guido Weiss published a new proof of the Marcinkiewicz interpolation theorem.
See also
Interpolation space
References
Fourier analysis
Theorems in functional analysis
Lp spaces | Marcinkiewicz interpolation theorem | [
"Mathematics"
] | 1,170 | [
"Theorems in mathematical analysis",
"Theorems in functional analysis"
] |
1,033,664 | https://en.wikipedia.org/wiki/Morava%20K-theory | In stable homotopy theory, a branch of mathematics, Morava K-theory is one of a collection of cohomology theories introduced in algebraic topology by Jack Morava in unpublished preprints in the early 1970s. For every prime number p (which is suppressed in the notation), it consists of theories K(n) for each nonnegative integer n, each a ring spectrum in the sense of homotopy theory. published the first account of the theories.
Details
The theory K(0) agrees with singular homology with rational coefficients, whereas K(1) is a summand of mod-p complex K-theory. The theory K(n) has coefficient ring
$\mathbb{F}_p[v_n, v_n^{-1}],$
where $v_n$ has degree $2(p^n - 1)$. In particular, Morava K-theory is periodic with this period, in much the same way that complex K-theory has period 2.
These theories have several remarkable properties.
They have Künneth isomorphisms for arbitrary pairs of spaces: that is, for X and Y CW complexes, we have
$K(n)_*(X \times Y) \cong K(n)_*(X) \otimes_{K(n)_*} K(n)_*(Y).$
They are "fields" in the category of ring spectra. In other words every module spectrum over K(n) is free, i.e. a wedge of suspensions of K(n).
They are complex oriented (at least after being periodified by taking the wedge sum of $(p^n - 1)$ shifted copies), and the formal group they define has height n.
Every finite p-local spectrum X has the property that K(n)∗(X) = 0 if and only if n is less than a certain number N, called the type of the spectrum X. By a theorem of Devinatz–Hopkins–Smith, every thick subcategory of the category of finite p-local spectra is the subcategory of type-n spectra for some n.
See also
Chromatic homotopy theory
Morava E-theory
References
Hovey-Strickland, "Morava K-theory and localisation"
Algebraic topology
Cohomology theories | Morava K-theory | [
"Mathematics"
] | 424 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
1,033,666 | https://en.wikipedia.org/wiki/Complex%20cobordism | In mathematics, complex cobordism is a generalized cohomology theory related to cobordism of manifolds. Its spectrum is denoted by MU. It is an exceptionally powerful cohomology theory, but can be quite hard to compute, so often instead of using it directly one uses some slightly weaker theories derived from it, such as Brown–Peterson cohomology or Morava K-theory, that are easier to compute.
The generalized homology and cohomology complex cobordism theories were introduced by using the Thom spectrum.
Spectrum of complex cobordism
The complex bordism of a space is roughly the group of bordism classes of manifolds over with a complex linear structure on the stable normal bundle. Complex bordism is a generalized homology theory, corresponding to a spectrum MU that can be described explicitly in terms of Thom spaces as follows.
The space $MU(n)$ is the Thom space of the universal $n$-plane bundle over the classifying space $BU(n)$ of the unitary group $U(n)$. The natural inclusion from $U(n)$ into $U(n+1)$ induces a map from the double suspension $\Sigma^2 MU(n)$ to $MU(n+1)$. Together these maps give the spectrum $MU$; namely, it is the homotopy colimit of the $MU(n)$.
Examples: $MU(0)$ is the sphere spectrum. $MU(1)$ is the desuspension $\Sigma^{-2}\mathbb{CP}^{\infty}$ of $\mathbb{CP}^{\infty}$.
The nilpotence theorem states that, for any ring spectrum $R$, the kernel of the Hurewicz map $\pi_* R \to MU_*(R)$ consists of nilpotent elements. The theorem implies in particular that, if $S$ is the sphere spectrum, then for any $n > 0$, every element of $\pi_n S$ is nilpotent (a theorem of Goro Nishida). (Proof: if $x$ is in $\pi_n S$, then $x$ is torsion, but its image in $MU_*$, the Lazard ring, cannot be torsion since $MU_*$ is a polynomial ring. Thus, $x$ must be in the kernel.)
Formal group laws
Milnor and Novikov showed that the coefficient ring $\pi_*(MU)$ (equal to the complex cobordism of a point, or equivalently the ring of cobordism classes of stably complex manifolds) is a polynomial ring $\mathbb{Z}[x_1, x_2, \ldots]$ on infinitely many generators $x_i \in \pi_{2i}(MU)$ of positive even degrees.
Write $\mathbb{CP}^{\infty}$ for infinite dimensional complex projective space, which is the classifying space for complex line bundles, so that tensor product of line bundles induces a map $\mu : \mathbb{CP}^{\infty} \times \mathbb{CP}^{\infty} \to \mathbb{CP}^{\infty}.$ A complex orientation on an associative commutative ring spectrum E is an element x in $E^2(\mathbb{CP}^{\infty})$ whose restriction to
$E^2(\mathbb{CP}^{1})$ is 1, if the latter ring is identified with the coefficient ring of E. A spectrum E with such an element x is called a complex oriented ring spectrum.
If E is a complex oriented ring spectrum, then
$E^*(\mathbb{CP}^{\infty}) = E^*(\mathrm{point})[[x]],$
$E^*(\mathbb{CP}^{\infty} \times \mathbb{CP}^{\infty}) = E^*(\mathrm{point})[[x \otimes 1,\, 1 \otimes x]],$
and $\mu^*(x)$ is a formal group law over the ring $E^*(\mathrm{point})$.
Complex cobordism has a natural complex orientation. Quillen showed that there is a natural isomorphism from its coefficient ring to Lazard's universal ring, making the formal group law of complex cobordism into the universal formal group law. In other words, for any formal group law F over any commutative ring R, there is a unique ring homomorphism from MU*(point) to R such that F is the pullback of the formal group law of complex cobordism.
Brown–Peterson cohomology
Complex cobordism over the rationals can be reduced to ordinary cohomology over the rationals, so the main interest is in the torsion of complex cobordism. It is often easier to study the torsion one prime at a time by localizing MU at a prime p; roughly speaking this means one kills off torsion prime to p. The localization MUp of MU at a prime p splits as a sum of suspensions of a simpler cohomology theory called Brown–Peterson cohomology, first described by . In practice one often does calculations with Brown–Peterson cohomology rather than with complex cobordism. Knowledge of the Brown–Peterson cohomologies of a space for all primes p is roughly equivalent to knowledge of its complex cobordism.
Conner–Floyd classes
The ring $MU^*(BU)$ is isomorphic to the formal power series ring $MU^*(\mathrm{point})[[cf_1, cf_2, \ldots]],$ where the elements cf are called Conner–Floyd classes. They are the analogues of Chern classes for complex cobordism. They were introduced by Conner and Floyd.
Similarly $MU_*(BU)$ is isomorphic to the polynomial ring $MU_*(\mathrm{point})[\beta_1, \beta_2, \ldots].$
Cohomology operations
The Hopf algebra MU*(MU) is isomorphic to the polynomial algebra R[b1, b2, ...], where R is the reduced bordism ring of a 0-sphere.
The coproduct is given by
where the notation $(\,)_{2i}$ means take the piece of degree 2i. This can be interpreted as follows. The map
$x \mapsto x + b_1 x^2 + b_2 x^3 + \cdots$
is a continuous automorphism of the ring of formal power series in x, and the coproduct of MU*(MU) gives the composition of two such automorphisms.
See also
Adams–Novikov spectral sequence
List of cohomology theories
Algebraic cobordism
Notes
References
External links
Complex bordism at the manifold atlas
Algebraic topology | Complex cobordism | [
"Mathematics"
] | 990 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
1,033,847 | https://en.wikipedia.org/wiki/Transient%20receptor%20potential%20channel | Transient receptor potential channels (TRP channels) are a group of ion channels located mostly on the plasma membrane of numerous animal cell types. Most of these are grouped into two broad groups: Group 1 includes TRPC ( "C" for canonical), TRPV ("V" for vanilloid), TRPVL ("VL" for vanilloid-like), TRPM ("M" for melastatin), TRPS ("S" for soromelastatin), TRPN ("N" for mechanoreceptor potential C), and TRPA ("A" for ankyrin). Group 2 consists of TRPP ("P" for polycystic) and TRPML ("ML" for mucolipin). Other less-well categorized TRP channels exist, including yeast channels and a number of Group 1 and Group 2 channels present in non-animals. Many of these channels mediate a variety of sensations such as pain, temperature, different kinds of taste, pressure, and vision. In the body, some TRP channels are thought to behave like microscopic thermometers and used in animals to sense hot or cold. Some TRP channels are activated by molecules found in spices like garlic (allicin), chili pepper (capsaicin), wasabi (allyl isothiocyanate); others are activated by menthol, camphor, peppermint, and cooling agents; yet others are activated by molecules found in cannabis (i.e., THC, CBD and CBN) or stevia. Some act as sensors of osmotic pressure, volume, stretch, and vibration. Most of the channels are activated or inhibited by signaling lipids and contribute to a family of lipid-gated ion channels.
These ion channels have a relatively non-selective permeability to cations, including sodium, calcium and magnesium.
TRP channels were initially discovered in the so-called "transient receptor potential" mutant (trp-mutant) strain of the fruit fly Drosophila, hence their name (see History of Drosophila TRP channels below). Later, TRP channels were found in vertebrates where they are ubiquitously expressed in many cell types and tissues. Most TRP channels are composed of 6 membrane-spanning helices with intracellular N- and C-termini. Mammalian TRP channels are activated and regulated by a wide variety of stimuli and are expressed throughout the body.
Families
In the animal TRP superfamily there are currently 9 proposed families split into two groups, each family containing a number of subfamilies. Group one consists of TRPC, TRPV, TRPVL, TRPA, TRPM, TRPS, and TRPN, while group two contains TRPP and TRPML. There is an additional family labeled TRPY that is not always included in either of these groups. All of these sub-families are similar in that they are molecular sensing, non-selective cation channels that have six transmembrane segments, however, each sub-family is unique and shares little structural homology with one another. This uniqueness gives rise to the various sensory perception and regulation functions that TRP channels have throughout the body. Group one and group two vary in that both TRPP and TRPML of group two have a much longer extracellular loop between the S1 and S2 transmembrane segments. Another differentiating characteristic is that all the group one sub-families either contain an N-terminal intracellular ankyrin repeat sequence, a C-terminal TRP domain sequence, or both—whereas both group two sub-families have neither. Below are members of the sub-families and a brief description of each:
TRPA
TRPA, A for "ankyrin", is named for the large amount of ankyrin repeats found near the N-terminus. TRPA is primarily found in afferent nociceptive nerve fibers and is associated with the amplification of pain signaling as well as cold pain hypersensitivity. These channels have been shown to be both mechanical receptors for pain and chemosensors activated by various chemical species, including isothiocyanates (pungent chemicals in substances such as mustard oil and wasabi), cannabinoids, general and local analgesics, and cinnamaldehyde.
While TRPA1 is expressed in a wide variety of animals, a variety of other TRPA channels exist outside of vertebrates. TRPA5, painless, pyrexia, and waterwitch are distinct phylogenetic branches within the TRPA clade, and are only evidenced to be expressed in crustaceans and insects, while HsTRPA arose as a Hymenoptera-specific duplication of waterwitch. Like TRPA1 and other TRP channels, these function as ion channels in a number of sensory systems. TRPA- or TRPA1-like channels also exist in a variety of species as a phylogenetically distinct clade, but these are less well understood.
TRPC
TRPC, C for "canonical", is named for being the most closely related to Drosophila TRP, the namesake of TRP channels. The phylogeny of TRPC channels has not been resolved in detail, but they are present across animal taxa. Only six TRPC channels are expressed in humans, because TRPC2 is expressed solely in mice and is considered a pseudogene in humans; this reflects, in part, the role of TRPC2 in detecting pheromones, a sense far more developed in mice than in humans. Mutations in TRPC channels have been associated with respiratory diseases along with focal segmental glomerulosclerosis in the kidneys. All TRPC channels are activated either by phospholipase C (PLC) or diacylglycerol (DAG).
TRPML
TRPML, ML for "mucolipin", gets its name from the neurodevelopmental disorder mucolipidosis IV. Mucolipidosis IV was first discovered in 1974 by E.R. Berman who noticed abnormalities in the eyes of an infant. These abnormalities soon became associated with mutations to the MCOLN1 gene which encodes for the TRPML1 ion channel. TRPML is still not highly characterized. The three known vertebrate copies are restricted to jawed vertebrates, with some exceptions (e.g. Xenopus tropicalis).
TRPM
TRPM, M for "melastatin", was found during a comparative genetic analysis between benign nevi and malignant nevi (melanoma). Mutations within TRPM channels have been associated with hypomagnesemia with secondary hypocalcemia. TRPM channels have also become known for their cold-sensing mechanisms, such is the case with TRPM8. Comparative studies have shown that the functional domains and critical amino acids of TRPM channels are highly conserved across species.
Phylogenetics has shown that TRPM channels are split into two major clades, αTRPM and βTRPM. αTRPMs include vertebrate TRPM1, TRPM3, and the "chanzymes" TRPM6 and TRPM7, as well as the only insect TRPM channel, among others. βTRPMs include, but are not limited to, vertebrate TRPM2, TRPM4, TRPM5, and TRPM8 (the cold and menthol sensor). Two additional major clades have been described: TRPMc, which is present only in a variety of arthropods, and a basal clade, which has since been proposed to be a distinct and separate TRP channel family (TRPS).
TRPN
TRPN was originally described in Drosophila melanogaster and Caenorhabditis elegans as nompC, a mechanically gated ion channel. Only a single TRPN, N for "no mechanoreceptor potential C," or "nompC", is known to be broadly expressed in animals (although some Cnidarians have more), and it is notably only a pseudogene in amniote vertebrates. Despite TRPA being named for ankyrin repeats, TRPN channels are thought to have the most of any TRP channel, typically around 28, and these are highly conserved across taxa. Since its discovery, Drosophila nompC has been implicated in mechanosensation (including mechanical stimulation of the cuticle and sound detection) and cold nociception.
TRPP
TRPP, P for "polycystin", is named for polycystic kidney disease, which is associated with these channels. These channels are also referred to as PKD (polycystic kidney disease) ion channels.
PKD2-like genes (examples include TRPP2, TRPP3, and TRPP5) encode canonical TRP channels. PKD1-like genes encode much larger proteins with 11 transmembrane segments, which do not have all the features of other TRP channels. However, 6 of the transmembrane segments of PKD1-like proteins have substantial sequence homology with TRP channels, indicating they may simply have diversified greatly from other closely related proteins.
Insects have a third sub-family of TRPP, called brividos, which participate in cold sensing.
TRPS
TRPS, S for Soromelastatin, was named as it forms a sister group to TRPM. TRPS is broadly present in animals, but notably absent in vertebrates and insects (among others). TRPS has not yet been well described functionally, though it is known that the C. elegans TRPS, known as CED-11, is a calcium channel which participates in apoptosis.
TRPV
TRPV, V for "vanilloid", was originally discovered in Caenorhabditis elegans, and is named for the vanilloid chemicals that activate some of these channels. These channels have been made famous for their association with molecules such as capsaicin (a TRPV1 agonist). In addition to the 6 known vertebrate paralogues, 2 major clades are known outside of the deuterostomes: nanchung and Iav. Mechanistic studies of these latter clades have been largely restricted to Drosophila, but phylogenetic analyses have placed a number of other genes from Placozoa, Annelida, Cnidaria, Mollusca, and other arthropods within them. TRPV channels have also been described in protists.
TRPVL
TRPVL has been proposed to be a sister clade to TRPV, and is limited to the cnidarians Nematostella vectensis and Hydra magnipapillata, and the annelid Capitella teleta. Little is known concerning these channels.
TRPY
TRPY, Y for "yeast", is highly localized to the yeast vacuole, which is the functional equivalent of a lysosome in a mammalian cell, and acts as a mechanosensor for vacuolar osmotic pressure. Patch clamp techniques and hyperosmotic stimulation have illustrated that TRPY plays a role in intracellular calcium release. Phylogenetic analysis has shown that TRPY1 does not group with the other metazoan TRP channels of groups one and two, and is suggested to have evolved after the divergence of metazoans and fungi. Others have indicated that TRPY channels are more closely related to TRPP.
Structure
TRP channels are composed of 6 membrane-spanning helices (S1-S6) with intracellular N- and C-termini. Mammalian TRP channels are activated and regulated by a wide variety of stimuli, including many post-translational mechanisms such as phosphorylation, G-protein receptor coupling, ligand-gating, and ubiquitination. The receptors are found in almost all cell types and are largely localized in cell and organelle membranes, modulating ion entry.
Most TRP channels form homo- or heterotetramers when completely functional. The ion selectivity filter, pore, is formed by the complex combination of p-loops in the tetrameric protein, which are situated in the extracellular domain between the S5 and S6 transmembrane segments. As with most cation channels, TRP channels have negatively charged residues within the pore to attract the positively charged ions.
Group 1 Characteristics
Each channel in this group is structurally unique, which adds to the diversity of functions that TRP channels possess; however, there are some commonalities that distinguish this group from others. Starting from the intracellular N-terminus, there are varying lengths of ankyrin repeats (except in TRPM) that aid with membrane anchoring and other protein interactions. Shortly following S6 on the C-terminal end, there is a highly conserved TRP domain (except in TRPA) which is involved with gating modulation and channel multimerization. Other C-terminal modifications, such as alpha-kinase domains in TRPM7 and M8, have been seen in this group as well.
Group 2 Characteristics
Group two's most distinguishable trait is the long extracellular span between the S1 and S2 transmembrane segments. Members of group two also lack ankyrin repeats and a TRP domain. They have been shown, however, to have endoplasmic reticulum (ER) retention sequences towards the C-terminal end, illustrating possible interactions with the ER.
Function
TRP channels modulate ion entry driving forces and Ca2+ and Mg2+ transport machinery in the plasma membrane, where most of them are located. TRPs have important interactions with other proteins and often form signaling complexes, the exact pathways of which are unknown. TRP channels were initially discovered in the trp mutant strain of the fruit fly Drosophila which displayed transient elevation of potential in response to light stimuli and were so named transient receptor potential channels. TRPML channels function as intracellular calcium release channels and thus serve an important role in organelle regulation. Importantly, many of these channels mediate a variety of sensations like the sensations of pain, temperature, different kinds of taste, pressure, and vision. In the body, some TRP channels are thought to behave like microscopic thermometers and are used in animals to sense hot or cold. TRPs act as sensors of osmotic pressure, volume, stretch, and vibration. TRPs have been seen to have complex multidimensional roles in sensory signaling. Many TRPs function as intracellular calcium release channels.
Pain and temperature sensation
TRP ion channels convert energy into action potentials in somatosensory nociceptors. Thermo-TRP channels have a C-terminal domain that is responsible for thermosensation, with a specific interchangeable region that allows them to sense temperature stimuli and that is tied to ligand regulatory processes. Although most TRP channels are modulated by changes in temperature, some have a crucial role in temperature sensation. There are at least 6 different Thermo-TRP channels, and each plays a different role. For instance, TRPM8 relates to mechanisms of sensing cold, TRPV1 and TRPM3 contribute to heat and inflammation sensations, and TRPA1 facilitates many signaling pathways like sensory transduction, nociception, inflammation and oxidative stress.
Taste
TRPM5 is involved in taste signaling of sweet, bitter and umami tastes by modulating the signal pathway in type II taste receptor cells. TRPM5 is activated by the sweet glycosides found in the stevia plant.
Several other TRP channels play a significant role in chemosensation through sensory nerve endings in the mouth that are independent from taste buds. TRPA1 responds to mustard oil (allyl isothiocyanate), wasabi, and cinnamon; TRPA1 and TRPV1 respond to garlic (allicin); TRPV1 responds to chilli pepper (capsaicin); TRPM8 is activated by menthol, camphor, peppermint, and cooling agents; TRPV2 is activated by molecules (THC, CBD and CBN) found in marijuana.
TRP-like channels in insect vision
The trp-mutant fruit flies, which lack a functional copy of the trp gene, are characterized by a transient response to light, unlike wild-type flies, which demonstrate sustained photoreceptor cell activity in response to light.
A distantly related isoform of TRP channel, TRP-like channel (TRPL), was later identified in Drosophila photoreceptors, where it is expressed at approximately 10- to 20-fold lower levels than TRP protein. A mutant fly, trpl, was subsequently isolated. Apart from structural differences, the TRP and TRPL channels differ in cation permeability and pharmacological properties.
TRP/TRPL channels are solely responsible for depolarization of insect photoreceptor plasma membrane in response to light. When these channels open, they allow sodium and calcium to enter the cell down the concentration gradient, which depolarizes the membrane. Variations in light intensity affect the total number of open TRP/TRPL channels, and, therefore, the degree of membrane depolarization. These graded voltage responses propagate to photoreceptor synapses with second-order retinal neurons and further to the brain.
It is important to note that the mechanism of insect photoreception is dramatically different from that in mammals. Excitation of rhodopsin in mammalian photoreceptors leads to the hyperpolarization of the receptor membrane but not to depolarization as in the insect eye. In Drosophila and, it is presumed, other insects, a phospholipase C (PLC)-mediated signaling cascade links photoexcitation of rhodopsin to the opening of the TRP/TRPL channels. Although numerous activators of these channels such as phosphatidylinositol-4,5-bisphosphate (PIP2) and polyunsaturated fatty acids (PUFAs) were known for years, a key factor mediating chemical coupling between PLC and TRP/TRPL channels remained a mystery until recently. It was found that breakdown of a lipid product of PLC cascade, diacylglycerol (DAG), by the enzyme diacylglycerol lipase, generates PUFAs that can activate TRP channels, thus initiating membrane depolarization in response to light. This mechanism of TRP channel activation may be well-preserved among other cell types where these channels perform various functions.
Clinical significance
Mutations in TRPs have been linked to neurodegenerative disorders, skeletal dysplasia, and kidney disorders, and may play an important role in cancer. TRPs are potentially important therapeutic targets. TRPV1, TRPV2, TRPV3 and TRPM8 are clinically significant as thermoreceptors, and TRPV4 and TRPA1 as mechanoreceptors; reduction of chronic pain may be possible by targeting ion channels involved in thermal, chemical, and mechanical sensation to reduce their sensitivity to stimuli. For instance, the use of TRPV1 agonists would potentially inhibit nociception at TRPV1, particularly in pancreatic tissue where TRPV1 is highly expressed. The TRPV1 agonist capsaicin, found in chili peppers, has been indicated to relieve neuropathic pain.
Role in cancer
Altered expression of TRP proteins often leads to tumorigenesis, as reported for TRPV1, TRPV6, TRPC1, TRPC6, TRPM4, TRPM5, and TRPM8. TRPV1 and TRPV2 have been implicated in breast cancer. TRPV1 expression in aggregates found at the endoplasmic reticulum or Golgi apparatus and/or surrounding these structures in breast cancer patients confers worse survival.
The TRPM family of ion channels is particularly associated with prostate cancer, where TRPM2 (and its long noncoding RNA TRPM2-AS), TRPM4, and TRPM8 are overexpressed in prostate cancers with more aggressive outcomes. TRPM3 has been shown to promote growth and autophagy in clear cell renal cell carcinoma, TRPM4 is overexpressed in diffuse large B-cell lymphoma and associated with poorer survival, while TRPM5 has oncogenic properties in melanoma.
TRP channels also modulate chemotherapy resistance in breast cancer. Some TRP channels, such as TRPA1 and TRPC5, are tightly associated with drug resistance during cancer treatment; TRPC5-mediated high Ca2+ influx activates the transcription factor NFATC3 (Nuclear Factor of Activated T Cells, Cytoplasmic 3), which triggers p-glycoprotein (p-gp) transcription. The overexpression of p-gp is widely recognized as a major factor in chemoresistance in cancer cells, as it functions as an active efflux pump that can remove various foreign substances, including chemotherapeutic agents, from within the cell.
Contrarily, other TRP channels, such as TRPV1 and TRPV2, have been demonstrated to potentiate the anti-tumorigenic effects of certain chemotherapeutic agents and TRPV2 is a potential biomarker and therapeutic target in triple negative breast cancer.
Role in inflammatory responses
In addition to TLR4-mediated pathways, certain members of the transient receptor potential ion channel family recognize LPS. LPS-mediated activation of TRPA1 was shown in mice and Drosophila melanogaster flies. At higher concentrations, LPS activates other members of the sensory TRP channel family as well, such as TRPV1, TRPM3 and, to some extent, TRPM8. LPS is recognized by TRPV4 on epithelial cells. TRPV4 activation by LPS was necessary and sufficient to induce nitric oxide production with a bactericidal effect.
History of Drosophila TRP channels
The original trp mutant in Drosophila was first described by Cosens and Manning in 1969 as "a mutant strain of D. melanogaster which, though behaving phototactically positive in a T-maze under low ambient light, is visually impaired and behaves as though blind". It also showed an abnormal electroretinogram (ERG) response of photoreceptors to light, which was transient rather than sustained as in the wild type. It was investigated subsequently by Baruch Minke, a post-doc in the group of William Pak, and named TRP according to its behavior in the ERG. The identity of the mutated protein was unknown until it was cloned in 1989 by Craig Montell, a post-doctoral researcher in Gerald Rubin's research group, who noted its predicted structural relationship to channels known at the time; Roger Hardie and Baruch Minke provided evidence in 1992 that it is an ion channel that opens in response to light stimulation. The TRPL channel was cloned and characterized in 1992 by the research group of Leonard Kelly. In 2013, Montell and his research group found that the TRPL (TRP-like) cation channel was a direct target for tastants in gustatory receptor neurons and could be reversibly down-regulated.
See also
Endocannabinoid system
Transient receptor potential channel-interacting protein database (2010)
References
External links
Membrane biology
Ion channels
Voltage-gated ion channels | Transient receptor potential channel | [
"Chemistry"
] | 4,994 | [
"Neurochemistry",
"Membrane biology",
"Ion channels",
"Molecular biology"
] |
1,034,009 | https://en.wikipedia.org/wiki/Solanine | Solanine is a glycoalkaloid poison found in species of the nightshade family within the genus Solanum, such as the potato (Solanum tuberosum). It can occur naturally in any part of the plant, including the leaves, fruit, and tubers. Solanine has pesticidal properties, and it is one of the plant's natural defenses. Solanine was first isolated in 1820 from the berries of the European black nightshade (Solanum nigrum), after which it was named. It belongs to the chemical family of saponins.
Solanine poisoning
Symptoms
Solanine poisoning is primarily displayed by gastrointestinal and neurological disorders. Symptoms include nausea, diarrhea, vomiting, stomach cramps, burning of the throat, cardiac dysrhythmia, nightmares, headache, dizziness, itching, eczema, thyroid problems, and inflammation and pain in the joints. In more severe cases, hallucinations, loss of sensation, paralysis, fever, jaundice, dilated pupils, hypothermia, and death have been reported.
Ingestion of solanine in moderate amounts can cause death. One study suggests that doses of 2 to 5 mg/kg of body weight can cause toxic symptoms, and doses of 3 to 6 mg/kg of body weight can be fatal.
Symptoms usually occur 8 to 12 hours after ingestion, but may occur as rapidly as 10 minutes after eating high-solanine foods.
Correlation with birth defects
Some studies show a correlation between the consumption of potatoes suffering from late blight (which increases solanine and other glycoalkaloid levels) and the incidence of spina bifida in humans. However, other studies have shown no correlation between potato consumption and the incidence of birth defects.
Livestock poisoning
Livestock can also be susceptible to glycoalkaloids. High concentrations of solanine are necessary to cause death in mammals. The gastrointestinal tract absorbs solanine inefficiently, which reduces its toxicity in the mammalian body. Livestock can also hydrolyze solanine and excrete the breakdown products, diminishing its presence in the body.
Mechanism of action
There are several proposed mechanisms of how solanine causes toxicity in humans, but the true mechanism of action is not well understood. Solanum glycoalkaloids have been shown to inhibit cholinesterase, disrupt cell membranes, and cause birth defects. One study suggests that the toxic mechanism of solanine is caused by the chemical's interaction with mitochondrial membranes. Experiments show that solanine exposure opens the potassium channels of mitochondria, increasing their membrane potential. This, in turn, leads to Ca2+ being transported from the mitochondria into the cytoplasm, and this increased concentration of Ca2+ in the cytoplasm triggers cell damage and apoptosis. Potato, tomato, and eggplant glycoalkaloids like solanine have also been shown to affect active transport of sodium across cell membranes. This cell membrane disruption is likely the cause of many of the symptoms of solanine toxicity, including burning sensations in the mouth, nausea, vomiting, abdominal cramps, diarrhea, internal hemorrhaging, and stomach lesions.
Biosynthesis
Solanine is a glycoalkaloid poison created by various plants in the genus Solanum, such as the potato plant. When the plant's stem, tubers, or leaves are exposed to sunlight, it stimulates the biosynthesis of solanine and other glycoalkaloids as a defense mechanism so it is not eaten. It is therefore considered to be a natural pesticide.
Though the structures of the intermediates in this biosynthetic pathway are known, many of the specific enzymes involved in these chemical processes are not. However, it is known that in the biosynthesis of solanine, cholesterol is first converted into the steroidal alkaloid solanidine. This is accomplished through a series of hydroxylation, transamination, oxidation, cyclization, dehydration, and reduction reactions. Specifically, solanidine formation involves sequential hydroxylation, transamination, and cyclization reactions. The solanidine is then converted into solanine through a series of glycosylation reactions catalyzed by specific glycosyltransferases.
Plants like the potato and tomato constantly synthesize low levels of glycoalkaloids like solanine. However, under stress, such as the presence of a pest or herbivore, they increase the synthesis of compounds like solanine as a natural chemical defense. This rapid increase in glycoalkaloid concentration gives the potatoes a bitter taste, and stressful stimuli like light also stimulate photosynthesis and the accumulation of chlorophyll. As a result, the potatoes turn green, and are thus unattractive to pests. Other stressors that can stimulate increased solanine biosynthesis include mechanical damage, improper storage conditions, improper food processing, and sprouting. The largest concentration of solanine in response to stress is on the surface in the peel, making it an even better defense mechanism against pests trying to consume it.
Safety
Suggested limits on consumption of solanine
Toxicity typically occurs when people ingest potatoes containing high levels of solanine. The average consumption of potatoes in the U.S. is estimated to be about 167 g of potatoes per day per person. There is variation in glycoalkaloid levels in different types of potatoes, but potato farmers aim to keep solanine levels below 0.2 mg/g. Signs of solanine poisoning have been linked to eating potatoes with solanine concentrations of between 0.1 and 0.4 mg per gram of potato. The average potato has 0.075 mg solanine/g potato, which corresponds to a daily intake of about 0.18 mg of solanine per kg of body weight, based on average daily potato consumption.
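To make the arithmetic behind these figures explicit, here is a minimal sketch combining them; the 70 kg body weight and the function name are illustrative assumptions, while the consumption, content, and threshold figures come from the text above and below.

```python
# Rough estimate of daily solanine intake versus the reported toxic threshold.
# Figures from the text: 167 g potato/day, 0.075 mg solanine/g potato,
# toxic symptoms reported from ~2 mg solanine per kg body weight.
# The 70 kg body weight is an illustrative assumption.

DAILY_POTATO_G = 167.0          # average U.S. consumption (g/day)
AVG_SOLANINE_MG_PER_G = 0.075   # average potato content (mg/g)
TOXIC_DOSE_MG_PER_KG = 2.0      # lower bound of the reported toxic range

def daily_intake_mg_per_kg(body_weight_kg: float) -> float:
    """Daily solanine intake normalized by body weight (mg/kg)."""
    return DAILY_POTATO_G * AVG_SOLANINE_MG_PER_G / body_weight_kg

intake = daily_intake_mg_per_kg(70.0)
print(f"{intake:.3f} mg/kg/day")  # ~0.179 mg/kg, matching the ~0.18 mg/kg above
print(f"margin: ~{TOXIC_DOSE_MG_PER_KG / intake:.0f}x below the toxic dose")
```

On these assumptions, average consumption sits roughly an order of magnitude below the reported toxic threshold, which is consistent with poisoning being linked to unusually high-solanine potatoes rather than typical ones.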
Calculations have shown that 2 to 5 mg/kg of body weight is the likely toxic dose of glycoalkaloids like solanine in humans, with 3 to 6 mg/kg constituting the fatal dose. Other studies have shown that symptoms of toxicity were observed with consumption of even 1 mg/kg.
Storage of potatoes
Various storage conditions can have an impact on the level of solanine in potatoes. Glycoalkaloid levels increase when potatoes are exposed to light because light increases synthesis of glycoalkaloids like solanine. Potatoes stored in a dark place avoid increased solanine synthesis. Potatoes that have turned green due to increased chlorophyll and photosynthesis are indicative of increased light exposure and are therefore associated with high levels of solanine. Synthesis of solanine is also stimulated by mechanical injury because glycoalkaloids are synthesized at cut surfaces of potatoes. Storage of potatoes for extended periods of time has also been associated with increased solanine content. A study found that solanine levels in Kufri Jyoti and Kufri Giriraj potatoes increased by 0.232 mg/g and 0.252 mg/g respectively after poor storage in a heap.
Effects of cooking on solanine levels
Most home processing methods like boiling, cooking, and frying potatoes have been shown to have minimal effects on solanine levels. For example, boiling potatoes reduces the α-chaconine and α-solanine levels by only 3.5% and 1.2% respectively, while microwaving potatoes reduces the alkaloid content by 15%. Deep frying at lower temperatures also does not result in any measurable change; alkaloids like solanine only begin to decompose and degrade at higher frying temperatures, and deep-frying potatoes at such temperatures for 10 minutes causes a loss of ~40% of the solanine. Freeze-drying and dehydrating potatoes have a very minimal effect on solanine content.
The majority (30–80%) of the solanine in potatoes is found in the outer layer of the potato. Therefore, peeling potatoes before cooking them reduces the glycoalkaloid intake from potato consumption. Fried potato peels have been shown to have 1.4–1.5 mg solanine/g, which is seven times the recommended upper safety limit of 0.2 mg/g. Chewing a small piece of the raw potato peel before cooking can help determine the level of solanine contained in the potato; bitterness indicates high glycoalkaloid content. If the potato has more than 0.2 mg/g of solanine, an immediate burning sensation will develop in the mouth.
Recorded human poisonings
Though fatalities from solanine poisoning are rare, there have been several notable cases of human solanine poisonings. Between 1865 and 1983, there were around 2000 documented human cases of solanine poisoning, with most recovering fully and 30 deaths. Because the symptoms are similar to those of food poisoning, it is possible that there are many undiagnosed cases of solanine toxicity.
In 1899, 56 German soldiers fell ill due to solanine poisoning after consuming cooked potatoes containing 0.24 mg of solanine per gram of potato. There were no fatalities, but a few soldiers were left partially paralyzed and jaundiced. In 1918, there were 41 cases of solanine poisoning in people who had eaten a bad crop of potatoes with 0.43 mg solanine/g potato with no recorded fatalities.
In Scotland in 1918, there were 61 cases of solanine poisoning after consumption of potatoes containing 0.41 mg of solanine per gram of potato, resulting in the death of a five-year-old child.
A case report from 1925 reported that 7 family members who ate green potatoes fell ill from solanine poisoning two days later, resulting in the deaths of the 45-year-old mother and 16-year-old daughter. The other family members recovered fully. In another case report from 1959, four members of a British family exhibited symptoms of solanine poisoning after eating jacket potatoes containing 0.5 mg of solanine per gram of potato.
There was a mass solanine poisoning incident in 1979 in the U.K., when 78 adolescent boys at a boarding school exhibited symptoms after eating potatoes that had been stored improperly over the summer. Seventeen of them ended up hospitalized, but they all recovered. The potatoes were determined to have between 0.25 and 0.3 mg of solanine per gram of potato.
Another mass poisoning was reported in Canada in 1984, after 61 schoolchildren and teachers showed symptoms of solanine toxicity after consuming baked potatoes with 0.5 mg of solanine per gram of potato.
In potatoes
Potatoes naturally produce solanine and chaconine, a related glycoalkaloid, as a defense mechanism against insects, disease, and herbivores. Potato leaves, stems, and shoots are naturally high in glycoalkaloids.
When potato tubers are exposed to light, they turn green and increase glycoalkaloid production. This is a natural defense to help prevent the uncovered tuber from being eaten. The green colour is from chlorophyll, and is itself harmless. However, it is an indication that increased level of solanine and chaconine may be present. In potato tubers, 30–80% of the solanine develops in and close to the skin, and some potato varieties have high levels of solanine.
Some potato diseases, such as late blight, can dramatically increase the levels of glycoalkaloids present in potatoes. Tubers damaged in harvesting and/or transport also produce increased levels of glycoalkaloids; this is believed to be a natural reaction of the plant in response to disease and damage.
The tuber glycoalkaloids (such as solanine) can also be affected by some chemical fertilization. For example, different studies have reported that glycoalkaloid content increases with increasing concentration of nitrogen fertilizer.
Green colouring under the skin strongly suggests solanine build-up in potatoes, although each process can occur without the other. A bitter taste in a potato is another – potentially more reliable – indicator of toxicity. Because of the bitter taste and appearance of such potatoes, solanine poisoning is rare outside conditions of food shortage. The symptoms are mainly vomiting and diarrhea, and the condition may be misdiagnosed as gastroenteritis. Most potato poisoning victims recover fully, although fatalities are known, especially when victims are undernourished or do not receive suitable treatment.
The United States National Institutes of Health's information on solanine strongly advises against eating potatoes that are green below the skin.
In other plants
Fatalities are also known from solanine poisoning from other plants in the nightshade family, such as the berries of Solanum dulcamara (woody nightshade).
Some, such as the California Poison Control Center, have claimed that unripe tomatoes and tomato leaves contain solanine. However, Mendel Friedman of the United States Department of Agriculture contradicts this claim, stating that tomatine, a relatively benign alkaloid, is the tomato alkaloid while solanine is found in potatoes. Food science writer Harold McGee has found scant evidence for tomato toxicity in the medical and veterinary literature.
In popular culture
Dorothy L. Sayers's short story "The Leopard Lady", in the 1939 collection In the Teeth of the Evidence, features a child poisoned by potato berries injected with solanine to increase their toxicity.
See also
Lenape (potato)
Solanidine
References
External links
a-Chaconine and a-Solanine, Review of Toxicological Literature
– "Green tubers and sprouts"
Steroidal alkaloids
Alkaloid glycosides
Steroidal alkaloids found in Solanaceae
Nitrogen heterocycles
Saponins
Plant toxins | Solanine | [
"Chemistry"
] | 2,927 | [
"Biomolecules by chemical classification",
"Chemical ecology",
"Natural products",
"Steroidal alkaloids",
"Plant toxins",
"Alkaloids by chemical classification",
"Saponins"
] |
1,034,012 | https://en.wikipedia.org/wiki/Sufentanil | Sufentanil, sold under the brand names Sufenta among others, is a synthetic opioid analgesic drug approximately 5 to 10 times as potent as its parent drug, fentanyl, and 500 to 1,000 times as potent as morphine. Structurally, sufentanil differs from fentanyl through the addition of a methoxymethyl group on the piperidine ring (which increases potency but is believed to reduce duration of action), and the replacement of the phenyl ring by thiophene. Sufentanil first was synthesized at Janssen Pharmaceutica in 1974.
Medical uses
Sufentanil offers properties of sedation and can be used as analgesic component of anesthetic regimen during an operation.
Because of its extremely high potency, it is often used in surgery and post-operative pain management for patients who are heavily opioid-dependent or opioid-tolerant because of long-term opiate use for chronic pain or illicit opiate use. It is also used for surgical and post-operative pain control in people taking high-dose buprenorphine for chronic pain, because its potency and binding affinity are strong enough to displace buprenorphine from the opioid receptors in the central nervous system and provide analgesia.
In 2018, the Food and Drug Administration (FDA) approved Dsuvia, a sublingual tablet form of the drug, that was developed in a collaboration between AcelRx Pharmaceuticals and the United States Department of Defense for use in battlefield settings where intravenous (IV) treatments may not be readily available. The decision to approve this new potent synthetic opioid came under criticism from politicians and from the chair of the FDA advisory committee, who fear that the tablets will be easily diverted to the illegal drug market. Dsuvia has since been withdrawn from the market due to "unresolvable manufacturing constraints."
Overdose
Management
Because sufentanil is very potent, practitioners must be prepared to reverse the effects of the drug should the patient exhibit symptoms of overdose such as respiratory depression or respiratory arrest. As for all other opioid-based medications, naloxone (trade name Narcan) is the definitive antidote for overdose. Depending on the amount administered, it can reverse the respiratory depression and, if enough is administered, completely reverse the effects of sufentanil.
Society and culture
Brand names
Sufentanil is marketed under various brand names including Dsuvia, Dzuveo, Sufenta, and Sufentil.
References
Anilides
Belgian inventions
Ethers
Fentanyl
General anesthetics
Janssen Pharmaceutica
Mu-opioid receptor agonists
Opioids
Piperidines
Propionamides
Thiophenes | Sufentanil | [
"Chemistry"
] | 577 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
1,034,358 | https://en.wikipedia.org/wiki/Chirplet%20transform | In signal processing, the chirplet transform is an inner product of an input signal with a family of analysis primitives called chirplets.
Similar to the wavelet transform, chirplets are usually generated from (or can be expressed as being from) a single mother chirplet (analogous to the so-called mother wavelet of wavelet theory).
Definitions
The term chirplet transform was coined by Steve Mann, as the title of the first published paper on chirplets. The term chirplet itself (apart from chirplet transform) was also used by Steve Mann, Domingo Mihovilovic, and Ronald Bracewell to describe a windowed portion of a chirp function.
The chirplet transform thus represents a rotated, sheared, or otherwise transformed tiling of the time–frequency plane. Although chirp signals have been known for many years in radar, pulse compression, and the like, the first published reference to the chirplet transform described specific signal representations based on families of functions related to one another by time–varying frequency modulation or frequency varying time modulation, in addition to time and frequency shifting, and scale changes. In that paper, the Gaussian chirplet transform was presented as one such example, together with a successful application to ice fragment detection in radar (improving target detection results over previous approaches). The term chirplet (but not the term chirplet transform) was also proposed for a similar transform, apparently independently, by Mihovilovic and Bracewell later that same year.
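The inner-product definition can be made concrete with a short numerical sketch: a signal is correlated against a small family of Gaussian chirplets, and the best-matching parameters are reported. The parameter grid, window width, and function names here are illustrative assumptions, not the notation of the original papers.

```python
import numpy as np

def gaussian_chirplet(t, tc, fc, c, dt):
    """Unit-norm Gaussian chirplet: Gaussian window centered at tc with
    width dt, and instantaneous frequency fc + c*(t - tc) (c = chirp rate)."""
    g = np.exp(-0.5 * ((t - tc) / dt) ** 2) * np.exp(
        2j * np.pi * (fc * (t - tc) + 0.5 * c * (t - tc) ** 2))
    return g / np.linalg.norm(g)

# Test signal: a chirp sweeping 50 -> 150 Hz over one second.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
signal = np.cos(2 * np.pi * (50 * t + 50 * t ** 2))

# Chirplet "transform" restricted to a small parameter grid: the inner
# product of the signal with each chirplet in the family.
best = max(
    ((tc, fc, c, abs(np.vdot(gaussian_chirplet(t, tc, fc, c, 0.05), signal)))
     for tc in (0.25, 0.5, 0.75)
     for fc in (50, 100, 150)
     for c in (0, 50, 100)),
    key=lambda r: r[-1],
)
print("best (tc, fc, chirp rate):", best[:3])  # expect chirp rate ~100 Hz/s
```

The signal's instantaneous frequency is 50 + 100t Hz, so the largest inner product lands on the chirplet whose chirp rate is 100 Hz/s and whose center frequency matches the sweep at its center time.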
Applications
The first practical application of the chirplet transform was in water-human-computer interaction (WaterHCI) for marine safety, to assist vessels in navigating through ice-infested waters, using marine radar to detect growlers (small iceberg fragments too small to be visible on conventional radar, yet large enough to damage a vessel).
Other applications of the chirplet transform in WaterHCI include the SWIM (Sequential Wave Imprinting Machine).
More recently, other practical applications have been developed, including image processing (e.g. where there is periodic structure imaged through projective geometry), the excision of chirp-like interference in spread spectrum communications, EEG processing, and Chirplet Time Domain Reflectometry.
Extensions
The warblet transform is a particular example of the chirplet transform introduced by Mann and Haykin in 1992 and now widely used. It provides a signal representation based on cyclically varying frequency modulated signals (warbling signals).
See also
Time–frequency representation
Other time–frequency transforms
Fractional Fourier transform
Short-time Fourier transform
Wavelet transform
References
LEM, Logon Expectation Maximization: introduces Logon Expectation Maximization (LEM) and Radial Basis Functions (RBF) in time–frequency space.
Osaka Kyoiku, Gabor, wavelet and chirplet transforms...(PDF)
J. "Richard" Cui, etal, Time–frequency analysis of visual evoked potentials using chirplet transform , IEE Electronics Letters, vol. 41, no. 4, pp. 217–218, 2005.
Florian Bossmann, Jianwei Ma, Asymmetric chirplet transform—Part 2: phase, frequency, and chirp rate, Geophysics, 2016, 81 (6), V425-V439.
Florian Bossmann, Jianwei Ma, Asymmetric chirplet transform for sparse representation of seismic data, Geophysics, 2015, 80 (6), WD89-WD100.
External links
DiscreteTFDs - software for computing chirplet decompositions and time–frequency distributions
The Chirplet Transform (web tutorial and info).
Transforms
Fourier analysis
Time–frequency analysis
Image processing
Radar signal processing | Chirplet transform | [
"Physics",
"Mathematics"
] | 783 | [
"Functions and mappings",
"Spectrum (physical sciences)",
"Time–frequency analysis",
"Frequency-domain analysis",
"Mathematical objects",
"Mathematical relations",
"Transforms"
] |
1,034,470 | https://en.wikipedia.org/wiki/Scattering%20amplitude | In quantum physics, the scattering amplitude is the probability amplitude of the outgoing spherical wave relative to the incoming plane wave in a stationary-state scattering process. At large distances from the centrally symmetric scattering center, the plane wave is described by the wavefunction

$$\psi(\mathbf{r}) = e^{ikz} + f(\theta)\frac{e^{ikr}}{r} \,,$$

where $\mathbf{r}$ is the position vector; $r = |\mathbf{r}|$; $e^{ikz}$ is the incoming plane wave with the wavenumber $k$ along the $z$ axis; $e^{ikr}/r$ is the outgoing spherical wave; $\theta$ is the scattering angle (angle between the incident and scattered direction); and $f(\theta)$ is the scattering amplitude. The dimension of the scattering amplitude is length. The scattering amplitude is a probability amplitude; the differential cross-section as a function of scattering angle is given as its modulus squared,

$$\frac{d\sigma}{d\Omega} = |f(\theta)|^2 \,.$$
The asymptotic form of the wave function in an arbitrary external field takes the form

$$\psi \approx e^{ik\,\mathbf{n}\cdot\mathbf{r}} + f(\mathbf{n},\mathbf{n}')\frac{e^{ikr}}{r} \,,$$

where $\mathbf{n}$ is the direction of incident particles and $\mathbf{n}' = \mathbf{r}/r$ is the direction of scattered particles.
Unitary condition
When conservation of the number of particles holds true during scattering, it leads to a unitarity condition for the scattering amplitude. In the general case, we have

$$f(\mathbf{n},\mathbf{n}') - f^*(\mathbf{n}',\mathbf{n}) = \frac{ik}{2\pi}\int f(\mathbf{n},\mathbf{n}'')\, f^*(\mathbf{n}',\mathbf{n}'')\, d\Omega'' \,.$$

The optical theorem follows from here by setting $\mathbf{n}' = \mathbf{n}$:

$$\operatorname{Im} f(\mathbf{n},\mathbf{n}) = \frac{k}{4\pi}\,\sigma \,.$$

In the centrally symmetric field, the unitarity condition becomes

$$\operatorname{Im} f(\theta) = \frac{k}{4\pi}\int f(\gamma)\, f^*(\gamma')\, d\Omega'' \,,$$

where $\gamma$ and $\gamma'$ are the angles between $\mathbf{n}$ and $\mathbf{n}'$ and some direction $\mathbf{n}''$. This condition puts a constraint on the allowed form for $f(\theta)$, i.e., the real and imaginary parts of the scattering amplitude are not independent in this case. For example, if $|f(\theta)|$ in $f = |f|e^{i\alpha}$ is known (say, from the measurement of the cross section), then $\alpha(\theta)$ can be determined such that $f(\theta)$ is uniquely determined within the alternative $f(\theta) \to -f^*(\theta)$.
Partial wave expansion
In the partial wave expansion the scattering amplitude is represented as a sum over the partial waves,

$$f(\theta) = \sum_{l=0}^{\infty} (2l+1)\, f_l\, P_l(\cos\theta) \,,$$

where $f_l$ is the partial scattering amplitude and $P_l$ are the Legendre polynomials. The partial amplitude can be expressed via the partial wave S-matrix element $S_l$ ($= e^{2i\delta_l}$) and the scattering phase shift $\delta_l$ as

$$f_l = \frac{S_l - 1}{2ik} = \frac{e^{2i\delta_l} - 1}{2ik} = \frac{e^{i\delta_l}\sin\delta_l}{k} = \frac{1}{k\cot\delta_l - ik} \,.$$

Then the total cross section

$$\sigma = \int |f(\theta)|^2 \, d\Omega \,,$$

can be expanded as

$$\sigma = \sum_{l=0}^{\infty} \sigma_l \,, \qquad \sigma_l = \frac{4\pi}{k^2}(2l+1)\sin^2\delta_l \,,$$

where $\sigma_l$ is the partial cross section. The total cross section is also equal to $\sigma = (4\pi/k)\operatorname{Im} f(0)$ due to the optical theorem.

For $\theta \neq 0$, we can write

$$f(\theta) = \frac{1}{2ik}\sum_{l=0}^{\infty} (2l+1)\, S_l\, P_l(\cos\theta) \,.$$
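As a numerical check of these relations, the following sketch assumes an arbitrary, made-up set of phase shifts for the lowest three partial waves, computes the total cross section from the partial-wave sum, and confirms it equals the optical-theorem value $(4\pi/k)\operatorname{Im} f(0)$.

```python
import numpy as np

# Illustrative (arbitrary) phase shifts for l = 0, 1, 2; all higher
# partial waves are assumed to vanish.
k = 1.5                      # wavenumber (arbitrary units)
deltas = [0.8, 0.3, 0.05]    # phase shifts delta_l in radians

# Partial amplitudes f_l = (e^{2i delta_l} - 1) / (2ik)
f_l = [(np.exp(2j * d) - 1) / (2j * k) for d in deltas]

# Forward amplitude f(0): P_l(1) = 1 for all l.
f0 = sum((2 * l + 1) * fl for l, fl in enumerate(f_l))

# Total cross section from the partial-wave sum ...
sigma = sum(4 * np.pi / k**2 * (2 * l + 1) * np.sin(d) ** 2
            for l, d in enumerate(deltas))

# ... must agree with the optical theorem sigma = (4 pi / k) Im f(0).
print(sigma, 4 * np.pi / k * f0.imag)  # the two numbers coincide
```

The agreement holds for any choice of phase shifts, since $\operatorname{Im} f_l = \sin^2\delta_l / k$ term by term.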
X-rays
The scattering length for X-rays is the Thomson scattering length or classical electron radius, $r_0$.
Neutrons
The nuclear neutron scattering process involves the coherent neutron scattering length, often described by $b$.
Quantum mechanical formalism
A quantum mechanical approach is given by the S matrix formalism.
Measurement
The scattering amplitude can be determined by the scattering length in the low-energy regime.
See also
Levinson's theorem
Veneziano amplitude
Plane wave expansion
References
Neutron
X-rays
Electron
Scattering
Diffraction
Quantum mechanics | Scattering amplitude | [
"Physics",
"Chemistry",
"Materials_science"
] | 485 | [
"Electron",
"Molecular physics",
"Spectrum (physical sciences)",
"X-rays",
"Theoretical physics",
"Quantum mechanics",
"Electromagnetic spectrum",
"Scattering",
"Diffraction",
"Crystallography",
"Particle physics",
"Condensed matter physics",
"Nuclear physics",
"Spectroscopy"
] |
1,034,699 | https://en.wikipedia.org/wiki/Constitutive%20equation | In physics and engineering, a constitutive equation or constitutive relation is a relation between two or more physical quantities (especially kinetic quantities as related to kinematic quantities) that is specific to a material or substance or field, and approximates its response to external stimuli, usually as applied fields or forces. They are combined with other equations governing physical laws to solve physical problems; for example in fluid mechanics the flow of a fluid in a pipe, in solid state physics the response of a crystal to an electric field, or in structural analysis, the connection between applied stresses or loads to strains or deformations.
Some constitutive equations are simply phenomenological; others are derived from first principles. A common approximate constitutive equation frequently is expressed as a simple proportionality using a parameter taken to be a property of the material, such as electrical conductivity or a spring constant. However, it is often necessary to account for the directional dependence of the material, and the scalar parameter is generalized to a tensor. Constitutive relations are also modified to account for the rate of response of materials and their non-linear behavior. See the article Linear response function.
Mechanical properties of matter
The first constitutive equation (constitutive law) was developed by Robert Hooke and is known as Hooke's law. It deals with the case of linear elastic materials. Following this discovery, this type of equation, often called a "stress-strain relation" in this example, but also called a "constitutive assumption" or an "equation of state" was commonly used. Walter Noll advanced the use of constitutive equations, clarifying their classification and the role of invariance requirements, constraints, and definitions of terms
like "material", "isotropic", "aeolotropic", etc. The class of "constitutive relations" of the form stress rate = f (velocity gradient, stress, density) was the subject of Walter Noll's dissertation in 1954 under Clifford Truesdell.
In modern condensed matter physics, the constitutive equation plays a major role. See Linear constitutive equations and Nonlinear correlation functions.
Definitions
Deformation of solids
Friction
Friction is a complicated phenomenon. Macroscopically, the friction force F between the interface of two materials can be modelled as proportional to the reaction force R at a point of contact between two interfaces through a dimensionless coefficient of friction μf, which depends on the pair of materials:

$$F = \mu_\mathrm{f}\, R \,.$$
This can be applied to static friction (friction preventing two stationary objects from slipping on their own), kinetic friction (friction between two objects scraping/sliding past each other), or rolling (frictional force which prevents slipping but causes a torque to be exerted on a round object).
Stress and strain
The stress-strain constitutive relation for linear materials is commonly known as Hooke's law. In its simplest form, the law defines the spring constant (or elasticity constant) k in a scalar equation, stating the tensile/compressive force is proportional to the extended (or contracted) displacement x:

$$F = -k x \,,$$

meaning the material responds linearly. Equivalently, in terms of the stress σ, Young's modulus E, and strain ε (dimensionless):

$$\sigma = E \varepsilon \,.$$
In general, forces which deform solids can be normal to a surface of the material (normal forces), or tangential (shear forces); this can be described mathematically using the stress tensor:

$$\sigma_{ij} = C_{ijkl}\, \varepsilon_{kl} \,, \qquad \varepsilon_{ij} = S_{ijkl}\, \sigma_{kl} \,,$$

where C is the elasticity tensor and S is the compliance tensor.
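A brief sketch of the tensor form for the isotropic case, where the elasticity tensor reduces to two independent constants; the Lamé parameters and strain values below are illustrative assumptions.

```python
import numpy as np

# Isotropic elasticity tensor C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk),
# built from illustrative Lamé parameters (units: GPa).
lam, mu = 55.0, 26.0
d = np.eye(3)
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

# Small strain: 0.1% uniaxial extension along x plus a small xy shear.
eps = np.array([[1e-3, 5e-4, 0.0],
                [5e-4, 0.0,  0.0],
                [0.0,  0.0,  0.0]])

# Hooke's law in tensor form: sigma_ij = C_ijkl eps_kl
sigma = np.einsum('ijkl,kl->ij', C, eps)
print(sigma)  # normal stresses from lam*tr(eps) + 2*mu*eps; shear from 2*mu*eps_xy
```

For this isotropic C the contraction reproduces the familiar closed form $\sigma = \lambda\,\mathrm{tr}(\varepsilon)\, I + 2\mu\,\varepsilon$.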
Solid-state deformations
Several classes of deformations in elastic materials are the following:
Plastic The applied force induces non-recoverable deformations in the material when the stress (or elastic strain) reaches a critical magnitude, called the yield point.
Elastic The material recovers its initial shape after deformation.
Viscoelastic If the time-dependent resistive contributions are large, and cannot be neglected. Rubbers and plastics have this property, and certainly do not satisfy Hooke's law. In fact, elastic hysteresis occurs.
Anelastic If the material is close to elastic, but the applied force induces additional time-dependent resistive forces (i.e. depend on rate of change of extension/compression, in addition to the extension/compression). Metals and ceramics have this characteristic, but it is usually negligible, although not so much when heating due to friction occurs (such as vibrations or shear stresses in machines).
Hyperelastic The applied force induces displacements in the material following a strain energy density function.
Collisions
The relative speed of separation vseparation of an object A after a collision with another object B is related to the relative speed of approach vapproach by the coefficient of restitution, defined by Newton's experimental impact law:

$$e = \frac{|\mathbf{v}|_\text{separation}}{|\mathbf{v}|_\text{approach}} \,,$$

which depends on the materials A and B are made from, since the collision involves interactions at the surfaces of A and B. Usually $0 \le e \le 1$, in which $e = 1$ for completely elastic collisions, and $e = 0$ for completely inelastic collisions. It is possible for $e \ge 1$ to occur – for superelastic (or explosive) collisions.
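A minimal numeric illustration of the impact law; the approach speed and the coefficient values (including the "typical" steel-on-steel value) are illustrative assumptions.

```python
# Newton's experimental impact law: v_separation = e * v_approach.
# The speed and coefficient values below are illustrative assumptions.

def separation_speed(v_approach: float, e: float) -> float:
    """Relative separation speed after impact, for restitution coefficient e."""
    return e * v_approach

v_approach = 4.0  # m/s, relative speed before impact
for label, e in [("perfectly elastic", 1.0),
                 ("steel on steel (assumed typical)", 0.6),
                 ("perfectly inelastic", 0.0)]:
    print(f"{label}: e = {e} -> separation at {separation_speed(v_approach, e)} m/s")
```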
Deformation of fluids
The drag equation gives the drag force D on an object of cross-section area A moving through a fluid of density ρ at velocity v (relative to the fluid):

$$D = \frac{1}{2}\, c_d\, \rho\, A\, v^2 \,,$$

where the drag coefficient (dimensionless) cd depends on the geometry of the object and the drag forces at the interface between the fluid and object.
For a Newtonian fluid of viscosity μ, the shear stress τ is linearly related to the strain rate (transverse flow velocity gradient) ∂u/∂y (units s−1). In a uniform shear flow:

$$\tau = \mu \frac{\partial u}{\partial y} \,,$$

with u(y) the variation of the flow velocity u in the cross-flow (transverse) direction y. In general, for a Newtonian fluid, the relationship between the elements τij of the shear stress tensor and the deformation of the fluid is given by

$$\tau_{ij} = 2\mu \left( e_{ij} - \frac{\Delta}{3}\, \delta_{ij} \right)$$

with

$$e_{ij} = \frac{1}{2} \left( \frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i} \right) \qquad \text{and} \qquad \Delta = \sum_k e_{kk} = \nabla \cdot \mathbf{v} \,,$$

where vi are the components of the flow velocity vector in the corresponding xi coordinate directions, eij are the components of the strain rate tensor, Δ is the volumetric strain rate (or dilatation rate) and δij is the Kronecker delta.
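The following sketch evaluates this relation for an assumed velocity gradient and recovers the uniform-shear-flow result above; the viscosity and gradient values are illustrative assumptions.

```python
import numpy as np

# Newtonian constitutive relation: tau_ij = 2*mu*(e_ij - (Delta/3)*delta_ij),
# with e_ij the strain-rate tensor and Delta its trace (dilatation rate).
mu = 1.0e-3  # dynamic viscosity, roughly water at room temperature (Pa*s)

# Illustrative velocity gradient grad_v[i, j] = dv_i/dx_j (units 1/s):
grad_v = np.array([[0.0, 10.0, 0.0],   # pure shear, du/dy = 10 1/s
                   [0.0,  0.0, 0.0],
                   [0.0,  0.0, 0.0]])

e = 0.5 * (grad_v + grad_v.T)          # strain-rate tensor e_ij
delta = np.trace(e)                    # volumetric strain rate (0: incompressible)
tau = 2 * mu * (e - delta / 3 * np.eye(3))
print(tau[0, 1])  # = mu * du/dy = 0.01 Pa, the uniform-shear-flow result
```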
The ideal gas law is a constitutive relation in the sense the pressure p and volume V are related to the temperature T, via the number of moles n of gas:

$$pV = nRT \,,$$

where R is the gas constant (J⋅K−1⋅mol−1).
Electromagnetism
Constitutive equations in electromagnetism and related areas
In both classical and quantum physics, the precise dynamics of a system form a set of coupled differential equations, which are almost always too complicated to be solved exactly, even at the level of statistical mechanics. In the context of electromagnetism, this remark applies to not only the dynamics of free charges and currents (which enter Maxwell's equations directly), but also the dynamics of bound charges and currents (which enter Maxwell's equations through the constitutive relations). As a result, various approximation schemes are typically used.
For example, in real materials, complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker–Planck equation or the Navier–Stokes equations. For example, see magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, plasma modeling. An entire physical apparatus for dealing with these matters has developed. See for example, linear response theory, Green–Kubo relations and Green's function (many-body theory).
These complex theories provide detailed formulas for the constitutive relations describing the electrical response of various materials, such as permittivities, permeabilities, conductivities and so forth.
It is necessary to specify the relations between displacement field D and E, and the magnetic H-field H and B, before doing calculations in electromagnetism (i.e. applying Maxwell's macroscopic equations). These equations specify the response of bound charge and current to the applied fields and are called constitutive relations.
Determining the constitutive relationship between the auxiliary fields D and H and the E and B fields starts with the definition of the auxiliary fields themselves:

$$\mathbf{D}(\mathbf{r}, t) = \varepsilon_0 \mathbf{E}(\mathbf{r}, t) + \mathbf{P}(\mathbf{r}, t) \,, \qquad \mathbf{H}(\mathbf{r}, t) = \frac{1}{\mu_0} \mathbf{B}(\mathbf{r}, t) - \mathbf{M}(\mathbf{r}, t) \,,$$
where P is the polarization field and M is the magnetization field which are defined in terms of microscopic bound charges and bound current respectively. Before getting to how to calculate M and P it is useful to examine the following special cases.
Without magnetic or dielectric materials
In the absence of magnetic or dielectric materials, the constitutive relations are simple:

$$\mathbf{D} = \varepsilon_0 \mathbf{E} \,, \qquad \mathbf{H} = \mathbf{B} / \mu_0 \,,$$

where ε0 and μ0 are two universal constants, called the permittivity of free space and permeability of free space, respectively.
Isotropic linear materials
In an (isotropic) linear material, where P is proportional to E, and M is proportional to B, the constitutive relations are also straightforward. In terms of the polarization P and the magnetization M they are:

$$\mathbf{P} = \varepsilon_0 \chi_e \mathbf{E} \,, \qquad \mathbf{M} = \chi_m \mathbf{H} \,,$$

where χe and χm are the electric and magnetic susceptibilities of a given material respectively. In terms of D and H the constitutive relations are:

$$\mathbf{D} = \varepsilon \mathbf{E} \,, \qquad \mathbf{H} = \mathbf{B} / \mu \,,$$

where ε and μ are constants (which depend on the material), called the permittivity and permeability, respectively, of the material. These are related to the susceptibilities by:

$$\varepsilon = \varepsilon_0 (1 + \chi_e) \,, \qquad \mu = \mu_0 (1 + \chi_m) \,.$$
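A short numeric sketch of these relations; the susceptibility and field values are illustrative assumptions (roughly water-like electrically, weakly diamagnetic).

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)
MU0 = 4e-7 * np.pi       # vacuum permeability (H/m)

# Illustrative susceptibilities (assumed values).
chi_e = 79.0             # roughly water-like static electric susceptibility
chi_m = -9.0e-6          # weak diamagnetism

eps = EPS0 * (1 + chi_e)  # permittivity
mu = MU0 * (1 + chi_m)    # permeability

E = np.array([0.0, 0.0, 100.0])  # applied E field (V/m), assumed
H = np.array([0.0, 50.0, 0.0])   # applied H field (A/m), assumed

D = eps * E               # displacement field (C/m^2)
B = mu * H                # magnetic flux density (T)
P = EPS0 * chi_e * E      # polarization, consistent with D = eps0*E + P
assert np.allclose(D, EPS0 * E + P)
print(D, B)
```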
General case
For real-world materials, the constitutive relations are not linear, except approximately. Calculating the constitutive relations from first principles involves determining how P and M are created from a given E and B. These relations may be empirical (based directly upon measurements), or theoretical (based upon statistical mechanics, transport theory or other tools of condensed matter physics). The detail employed may be macroscopic or microscopic, depending upon the level necessary to the problem under scrutiny.
In general, the constitutive relations can usually still be written:

$$\mathbf{D} = \varepsilon \mathbf{E} \,, \qquad \mathbf{B} = \mu \mathbf{H} \,,$$

but ε and μ are not, in general, simple constants, but rather functions of E, B, position and time, and tensorial in nature. Examples include dispersion and absorption, where ε and μ are functions of frequency; nonlinearity, where ε and μ are functions of E and B; anisotropy, where ε and μ are tensors; and dependence of P and M on E and B at other locations and times.
As a variation of these examples, in general materials are bianisotropic where D and B depend on both E and H, through the additional coupling constants ξ and ζ:

$$\mathbf{D} = \varepsilon \mathbf{E} + \xi \mathbf{H} \,, \qquad \mathbf{B} = \mu \mathbf{H} + \zeta \mathbf{E} \,.$$
In practice, some material properties have a negligible impact in particular circumstances, permitting neglect of small effects. For example: optical nonlinearities can be neglected for low field strengths; material dispersion is unimportant when frequency is limited to a narrow bandwidth; material absorption can be neglected for wavelengths for which a material is transparent; and metals with finite conductivity often are approximated at microwave or longer wavelengths as perfect metals with infinite conductivity (forming hard barriers with zero skin depth of field penetration).
Some man-made materials such as metamaterials and photonic crystals are designed to have customized permittivity and permeability.
Calculation of constitutive relations
The theoretical calculation of a material's constitutive equations is a common, important, and sometimes difficult task in theoretical condensed-matter physics and materials science. In general, the constitutive equations are theoretically determined by calculating how a molecule responds to the local fields through the Lorentz force. Other forces may need to be modeled as well such as lattice vibrations in crystals or bond forces. Including all of the forces leads to changes in the molecule which are used to calculate P and M as a function of the local fields.
The local fields differ from the applied fields due to the fields produced by the polarization and magnetization of nearby material; an effect which also needs to be modeled. Further, real materials are not continuous media; the local fields of real materials vary wildly on the atomic scale. The fields need to be averaged over a suitable volume to form a continuum approximation.
These continuum approximations often require some type of quantum mechanical analysis such as quantum field theory as applied to condensed matter physics. See, for example, density functional theory, Green–Kubo relations and Green's function.
A different set of homogenization methods (evolving from a tradition in treating materials such as conglomerates and laminates) are based upon approximation of an inhomogeneous material by a homogeneous effective medium (valid for excitations with wavelengths much larger than the scale of the inhomogeneity).
The theoretical modeling of the continuum-approximation properties of many real materials often rely upon experimental measurement as well. For example, ε of an insulator at low frequencies can be measured by making it into a parallel-plate capacitor, and ε at optical-light frequencies is often measured by ellipsometry.
Thermoelectric and electromagnetic properties of matter
These constitutive equations are often used in crystallography, a field of solid-state physics.
Photonics
Refractive index
The (absolute) refractive index of a medium n (dimensionless) is an inherently important property of geometric and physical optics defined as the ratio of the luminal speed in vacuum c0 to that in the medium c:

$$n = \frac{c_0}{c} = \sqrt{\frac{\varepsilon \mu}{\varepsilon_0 \mu_0}} = \sqrt{\varepsilon_r \mu_r} \,,$$

where ε is the permittivity and εr the relative permittivity of the medium, likewise μ is the permeability and μr the relative permeability of the medium. The vacuum permittivity is ε0 and vacuum permeability is μ0. In general, n (also εr) are complex numbers.
The relative refractive index is defined as the ratio of the two refractive indices. Absolute is for one material, relative applies to every possible pair of interfaces:

$$n_{AB} = \frac{n_A}{n_B} \,.$$
Speed of light in matter
As a consequence of the definition, the speed of light in matter is

$$c = \frac{1}{\sqrt{\varepsilon \mu}} \,;$$

for the special case of vacuum, ε = ε0 and μ = μ0, so

$$c = c_0 = \frac{1}{\sqrt{\varepsilon_0 \mu_0}} \,.$$
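A minimal sketch applying these definitions; the material values (εr = 2.25 for a typical glass, n ≈ 1.33 for water) are illustrative assumptions.

```python
# Refractive index from relative permittivity/permeability and the resulting
# speed of light in the medium; the material values are illustrative.
C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def refractive_index(eps_r: float, mu_r: float = 1.0) -> float:
    """n = sqrt(eps_r * mu_r) for a lossless, non-dispersive medium."""
    return (eps_r * mu_r) ** 0.5

n_glass = refractive_index(2.25)   # eps_r = 2.25 -> n = 1.5 (typical glass)
print(n_glass, C0 / n_glass)       # speed of light inside the glass

# Relative refractive index for a glass-water interface (n_water ~ 1.33 assumed):
print(n_glass / 1.33)
```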
Piezooptic effect
The piezooptic effect relates the stresses in solids σ to the dielectric impermeability a, which are coupled by a fourth-rank tensor called the piezooptic coefficient Π (units Pa−1):

$$a_{ij} = \Pi_{ijkl}\, \sigma_{kl} \,.$$
Transport phenomena
Definitions
Definitive laws
There are several laws which describe the transport of matter, or properties of it, in an almost identical way. In every case, in words they read:
Flux (density) is proportional to a gradient, the constant of proportionality is the characteristic of the material.
In general the constant must be replaced by a 2nd rank tensor, to account for directional dependences of the material.
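This gradient-law pattern can be sketched with Fourier heat conduction as a concrete instance, using an anisotropic conductivity tensor to show why the material constant generally becomes a 2nd-rank tensor; all numeric values below are illustrative assumptions.

```python
import numpy as np

# Generic gradient transport law: flux = -K @ grad(field), where the
# material constant K becomes a 2nd-rank tensor for anisotropic media.
# Concrete instance here: Fourier's law of heat conduction, q = -K grad T.

grad_T = np.array([100.0, 0.0, 0.0])  # temperature gradient (K/m), assumed

# Anisotropic thermal conductivity tensor (W/(m*K)), illustrative values:
K = np.array([[400.0,  0.0,  0.0],
              [0.0,   80.0,  0.0],
              [0.0,    0.0, 80.0]])

q = -K @ grad_T  # heat flux density (W/m^2)
print(q)         # flux is along -x here, but in general not parallel to grad T
```

The same two lines, with K reinterpreted as a diffusivity or conductivity tensor, express Fick's law for mass transport or Ohm's law for charge transport.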
See also
Defining equation (physical chemistry)
Governing equation
Principle of material objectivity
Rheology
Notes
References
Elasticity (physics)
Equations of physics
Continuum mechanics
Electric and magnetic fields in matter | Constitutive equation | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 2,987 | [
"Physical phenomena",
"Equations of physics",
"Elasticity (physics)",
"Continuum mechanics",
"Deformation (mechanics)",
"Mathematical objects",
"Classical mechanics",
"Electric and magnetic fields in matter",
"Equations",
"Materials science",
"Condensed matter physics",
"Physical properties"
] |
3,088,675 | https://en.wikipedia.org/wiki/Profinet | Profinet (usually styled as PROFINET, as a portmanteau for Process Field Network) is an industry technical standard for data communication over Industrial Ethernet, designed for collecting data from, and controlling equipment in industrial systems, with a particular strength in delivering data under tight time constraints. The standard is maintained and supported by Profibus and Profinet International, an umbrella organization headquartered in Karlsruhe, Germany.
Functionalities
Overview
Profinet implements the interfacing to peripheral devices. It defines the communication with field-connected peripheral devices. Its basis is a cascading real-time concept. Profinet defines the entire data exchange between controllers (called "IO-Controllers") and the devices (called "IO-Devices"), as well as parameter setting and diagnosis. IO-Controllers are typically a PLC, DCS, or IPC, whereas IO-Devices can be varied: I/O blocks, drives, sensors, or actuators. The Profinet protocol is designed for fast data exchange between Ethernet-based field devices and follows the provider-consumer model. Field devices on a subordinate Profibus line can be integrated into the Profinet system seamlessly via an IO-Proxy (a representative of a subordinate bus system).
Conformance Classes (CC)
Applications with Profinet can be divided according to the international standard IEC 61784-2 into four conformance classes:
In Conformance Class A (CC-A), only the devices are certified. A manufacturer certificate is sufficient for the network infrastructure. This is why structured cabling or a wireless local area network for mobile subscribers can also be used. Typical applications can be found in infrastructure (e.g. motorway or railway tunnels) or in building automation.
Conformance Class B (CC-B) stipulates that the network infrastructure also includes certified products and is structured according to the guidelines of Profinet. Shielded cables increase robustness and switches with management functions facilitate network diagnostics and allow the network topology to be captured as desired for controlling a production line or machine. Process automation requires increased availability, which can be achieved through media and system redundancy. For a device to adhere to Conformance Class B, it must communicate successfully via Profinet, have two ports (integrated switch), and support SNMP.
With Conformance Class C (CC-C), positioning systems can be implemented with additional bandwidth reservation and application synchronization. Conformance Class C devices additionally communicate via Profinet IRT.
For Conformance Class D (CC-D), Profinet is used via Time-Sensitive Networking (TSN). The same functions can be achieved as with CC-C. In contrast to CC-A and CC-B, the complete communication (cyclic and acyclic) between controller and device takes place on Ethernet layer 2. The Remote Service Interface (RSI) was introduced for this purpose.
Device types
A Profinet system consists of the following devices:
The IO-Controller, which controls the automation task.
The IO-Device, which is a field device, monitored and controlled by an IO-Controller. An IO-Device may consist of several modules and sub-modules.
The IO-Supervisor is software typically based on a PC for setting parameters and diagnosing individual IO-Devices.
System structure
A minimal Profinet IO-System consists of at least one IO-Controller that controls one or more IO-Devices. In addition, one or more IO-Supervisors can optionally be switched on temporarily for the engineering of the IO-Devices if required.
If two IO-Systems are in the same IP network, the IO-Controllers can also share an input signal as shared input, in which they have read access to the same submodule in an IO-Device. This simplifies the combination of a PLC with a separate safety controller or motion control. Likewise, an entire IO-Device can be shared as a shared device, in which individual submodules of an IO-Device are assigned to different IO-Controllers.
Each automation device with an Ethernet interface can simultaneously fulfill the functionality of an IO-Controller and an IO-Device. If a controller for a partner controller acts as an IO-Device and simultaneously controls its periphery as an IO-Controller, the tasks between controllers can be coordinated without additional devices.
Relations
An Application Relation (AR) is established between an IO-Controller and an IO-Device. These ARs are used to define Communication Relations (CR) with different characteristics for the transfer of parameters, cyclic exchange of data and handling of alarms.
Engineering
The project engineering of an IO system is nearly identical to that of Profibus in terms of "look and feel":
The properties of an IO-Device are described by the device manufacturer in a GSD file (General Station Description). The language used for this is GSDML (GSD Markup Language) - an XML-based language. The GSD file serves an engineering environment as a basis for planning the configuration of a Profinet IO system.
All Profinet field devices determine their neighbors. This means that field devices can be exchanged in the event of a fault without additional tools and prior knowledge. By reading out this information, the plant topology can be displayed graphically for better clarity.
The engineering can be supported by tools such as PROFINET Commander or PRONETA.
Dependability
Profinet is also increasingly being used in critical applications. There is always a risk that the required functions cannot be fulfilled. This risk can be reduced by specific measures identified by a dependability analysis. The following objectives are in the foreground:
Safety: Ensuring functional safety. The system should go into a safe state in the event of a fault.
Availability: Increasing the availability. In the event of a fault, the system should still be able to perform the minimum required function.
Security: Information security is to ensure the integrity of the system.
These goals can interfere with or complement each other.
Functional safety: Profisafe
Profisafe defines how safety-related devices (emergency stop buttons, light grids, overfill prevention devices, ...) communicate with safety controllers via Profinet in such a safe way that they can be used in safety-related automation tasks up to Safety Integrity Level 3 (SIL) according to IEC 61508, Performance Level "e" (PL) according to ISO 13849, or Category 4 according to EN 954-1.
Profisafe implements safe communication via a profile, i.e. via a special format of the user data and a special protocol. It is designed as a separate layer on top of the fieldbus application layer to reduce the probability of data transmission errors. The Profisafe messages use standard fieldbus cables and messages. They do not depend on the error detection mechanisms of the underlying transmission channels, and thus support securing of whole communication paths, including backplanes inside controllers or remote I/O. The Profisafe protocol uses error and failure detection mechanisms such as:
Consecutive numbering
Timeout monitoring
Source/destination authentication
Cyclic redundancy checking (CRC)
and is defined in the IEC 61784-3-3 standard.
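The measures listed above can be illustrated with a toy receiver-side check that combines a consecutive number, a watchdog timeout, and a CRC. This is a didactic sketch only: the actual Profisafe frame layout, CRC polynomials, and state machine are defined in IEC 61784-3-3 and are not reproduced here; the one-byte sequence field, the plain CRC-32, and the watchdog value below are all assumptions.

```python
import binascii
import time

# Toy illustration of Profisafe-style checks: consecutive numbering,
# timeout monitoring, and CRC protection. Frame layout, CRC choice
# (plain CRC-32 here), and watchdog time are illustrative assumptions.

WATCHDOG_S = 0.1  # assumed watchdog time in seconds

def build(seq: int, data: bytes) -> bytes:
    """Build a toy frame: [seq byte | data | CRC-32 over seq+data]."""
    payload = bytes([seq % 256]) + data
    return payload + binascii.crc32(payload).to_bytes(4, "big")

def accept(frame: bytes, expected_seq: int, last_rx_time: float) -> bool:
    """Accept a frame only if CRC, consecutive number, and watchdog pass."""
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if binascii.crc32(payload) != crc:
        return False                      # corrupted in transit
    if payload[0] != expected_seq % 256:  # consecutive number in byte 0
        return False                      # lost, repeated, or reordered frame
    if time.monotonic() - last_rx_time > WATCHDOG_S:
        return False                      # too late: fail to the safe state
    return True

t0 = time.monotonic()
print(accept(build(1, b"\x00\x01"), expected_seq=1, last_rx_time=t0))  # True
print(accept(build(2, b"\x00\x01"), expected_seq=1, last_rx_time=t0))  # False: wrong seq
```

In a real safety device, any failed check drives the application to its configured safe state rather than simply discarding the frame.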
Increased availability
High availability is one of the most important requirements in industrial automation, both in factory and process automation. The availability of an automation system can be increased by adding redundancy for critical elements. A distinction can be made between system and media redundancy.
System redundancy
System redundancy can also be implemented with Profinet to increase availability. In this case, two IO-Controllers that control the same IO-Device are configured. The active IO-Controller marks its output data as primary. Output data that is not marked is ignored by an IO-Device in a redundant IO-System. In the event of an error, the second IO-Controller can therefore take control of all IO-Devices without interruption by marking its output data as primary. How the two IO-Controllers synchronize their tasks is not defined in Profinet and is implemented differently by the various manufacturers offering redundant control systems.
Media redundancy
Profinet offers two media redundancy solutions. The Media Redundancy Protocol (MRP) allows the creation of a protocol-independent ring topology with a switching time of less than 50 ms. This is often sufficient for standard real-time communication with Profinet. To switch over the redundancy in the event of an error without time delay, the "Media Redundancy for Planned Duplication" (MRPD) must be used as a seamless media redundancy concept. In the MRPD, the cyclic real-time data is transmitted in both directions in the ring-shaped topology. A time stamp in the data packet allows the receiver to remove the redundant duplicates.
Security
The IT security concept for Profinet assumes a defense-in-depth approach. In this approach, the production plant is protected against attacks, particularly from outside, by a multi-level perimeter, including firewalls. In addition, further protection is possible within the plant by dividing it into zones using firewalls. In addition, a security component test ensures that the Profinet components are resistant to overload to a defined extent. This concept is supported by organizational measures in the production plant within the framework of a security management system according to ISO 27001.
Application Profiles
For a smooth interaction of the devices involved in an automation solution, they must correspond in their basic functions and services. Standardization is achieved by "profiles" with binding specifications for functions and services. The possible functions of communication with Profinet are restricted and additional specifications regarding the function of the field device are prescribed. These can be cross-device class properties such as a safety-relevant behavior (Common Application Profiles) or device class specific properties (Specific Application Profiles). A distinction is made between
Device profiles for e.g. robots, drives (PROFIdrive), process devices, encoders, pumps
Industry Profiles for e.g. laboratory technology or rail vehicles
Integration Profiles for the integration of subsystems such as IO-Link systems
Drives
PROFIdrive is the modular device profile for drive devices. It was jointly developed by manufacturers and users in the 1990s and since then, in conjunction with Profibus and, from version 4.0, also with Profinet, it has covered the entire range from the simplest to the most demanding drive solutions.
Energy
Another profile is PROFIenergy, which includes services for real-time monitoring of energy demand. It was requested in 2009 by the AIDA group of German automotive manufacturers (Audi, BMW, Mercedes-Benz, Porsche and Volkswagen), who wished to have a standardised way of actively managing energy usage in their plants. High-energy devices and sub-systems such as robots, lasers and even paint lines are the targets for this profile, which helps reduce a plant's energy costs by intelligently switching devices into 'sleep' modes to take account of production breaks, both foreseen (e.g. weekends and shut-downs) and unforeseen (e.g. breakdowns).
Process automation
Modern process devices have their own intelligence and can take over part of the information processing, or even the overall functionality, in automation systems. Integration into a Profinet system requires a two-wire Ethernet physical layer in addition to increased availability.
Process devices
The PA Devices profile defines, for different classes of process devices, all functions and parameters typically used in the signal flow from the sensor signal to the pre-processed process value, which is read out to the control system together with a measured-value status. The PA Devices profile contains device data sheets for
Pressure and differential pressure
Level, temperature and flow rate
Analog and digital inputs and outputs
Valves and actuators
Analysis equipment
Advanced Physical Layer
Ethernet Advanced Physical Layer (Ethernet-APL) describes a physical layer for Ethernet communication developed especially for the requirements of the process industries. The development of Ethernet-APL was driven by the need for communication at high speeds and over long distances, the supply of power and communication signals via a common single twisted-pair (2-wire) cable, and protective measures for safe use within explosion-hazardous areas. Ethernet-APL opens the possibility for Profinet to be incorporated into process instruments.
Technology
Profinet protocols
Profinet uses the following protocols in the different layers of the OSI model:
Layers 1-2: Mainly full-duplex connections with 100 MBit/s, electrical (100BASE-TX) or optical (100BASE-FX) according to IEEE 802.3, are recommended as device connections. Autocrossover is mandatory for all connections so that the use of crossover cables can be avoided. From IEEE 802.1Q, VLAN priority tagging is used: all real-time data are given priority 6, the highest used, and are therefore forwarded by a switch with a minimum delay.
The Profinet protocol can be recorded and displayed with any Ethernet analysis tool. Wireshark is capable of decoding Profinet telegrams.
The Link Layer Discovery Protocol (LLDP) has been extended with additional parameters, so that in addition to the detection of neighbors, the propagation time of the signals on the connection lines can be communicated.
Layers 3-6: Either the Remote Service Interface (RSI) protocol or the Remote Procedure Call (RPC) protocol is used for the connection setup and the acyclic services. The RPC protocol is used via User Datagram Protocol (UDP) and Internet Protocol (IP) with the use of IP addresses. The Address Resolution Protocol (ARP) is extended for this purpose with the detection of duplicate IP addresses. The Discovery and basic Configuration Protocol (DCP) is mandatory for the assignment of IP addresses. Optionally, the Dynamic Host Configuration Protocol (DHCP) can also be used for this purpose. No IP addresses are used with the RSI protocol. Thus, IP can be used in the operating system of the field device for other protocols such as OPC Unified Architecture (OPC UA).
Layer 7: Various protocols are defined to access the services of the Fieldbus Application Layer (FAL): the RT (Real-Time) protocol for class A and B applications with cycle times in the range of 1-10 ms, and the IRT (Isochronous Real-Time) protocol for application class C, which allows cycle times below 1 ms for drive technology applications. The same services can also be provided via Time-Sensitive Networking (TSN).
Technology of Conformance Classes
The functionalities of Profinet IO are realized with different technologies and protocols:
Technology of Class A (CC-A)
The basic function of Profinet is the cyclic data exchange between the IO-Controller as producer and several IO-Devices as consumers of the output data, and the IO-Devices as producers and the IO-Controller as consumer of the input data. Each IO data communication relationship (IO data CR) between the IO-Controller and an IO-Device defines the amount of data and the cycle times.
All Profinet IO-Devices must support device diagnostics and the safe transmission of alarms via the communication relation for alarms Alarm CR.
In addition, device parameters can be read and written with each Profinet device via the acyclic communication relation Record Data CR. The data set for the unique identification of an IO-Device, the Identification and Maintenance Data Set 0 (I&M 0), must be installed by all Profinet IO-Devices. Optionally, further information can be stored in a standardized format as I&M 1-4.
For real-time data (cyclic data and alarms), the Profinet Real-Time (RT) telegrams are transmitted directly via Ethernet. UDP/IP is used for the transmission of acyclic data.
Management of the Application Relations (AR)
The Application Relation (AR) is established between an IO-Controller and every IO-Device to be controlled. The required CRs are defined within the AR. The Profinet AR life cycle consists of address resolution, connection establishment, parameterization, process IO data exchange / alarm handling, and termination; a rough state-machine sketch follows the description of these phases.
Address resolution: A Profinet IO-Device is identified on the Profinet network by its station name. Connection establishment, parameterization and alarm handling are implemented with User Datagram Protocol (UDP), which requires that the device also be assigned an IP address. After identifying the device by its station name, the IO-Controller assigns the pre-configured IP address to the device.
Connection establishment: Connection establishment starts with the IO-Controller sending a connect request to the IO-Device. The connect request establishes an Application Relationship (AR) containing a number of Communication Relationships (CRs) between the IO-Controller and IO-Device. In addition to the AR and CRs, the connect request specifies the modular configuration of the IO-Device, the layout of the process IO data frames, the cyclic rate of IO data exchange and the watchdog. Acknowledgement of the connect request by the IO-Device allows parameterization to follow. From this point forward, both the IO-Device and IO-Controller start exchanging cyclic process I/O data frames. The process I/O data frames don't contain valid data at this point, but they start serving as keep-alive to keep the watchdog from expiring.
Parameterization: The IO-Controller writes parameterization data to each IO-Device sub-module in accordance with the General Station Description Mark-up Language (GSDML) file. Once all sub-modules have been configured, the IO-Controller signals that parameterization has ended. The IO-Device responds by signaling application readiness, which allows process IO data exchange and alarm handling to ensue.
Process IO data exchange / alarm handling: The IO-Device followed by the IO-Controller start to cyclically refresh valid process I/O data. The IO-Controller processes the inputs and controls the outputs of the IO-Device. Alarm notifications are exchanged acyclically between the IO-Controller and IO-Device as events and faults occur.
Termination: The connection between the IO-Device and IO-Controller terminates when the watchdog expires. Watchdog expiry is the result of a failure to refresh cyclic process I/O data by the IO-Controller or the IO-Device. Unless the connection was intentionally terminated at the IO-Controller, the IO-Controller will try to restart the Profinet Application Relation.
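The sketch announced above: a bare-bones state machine tracing the AR phases just described. The state names follow the prose, while the method names and the absence of error paths are simplifications of this writer's own choosing.

```python
from enum import Enum, auto

class ARState(Enum):
    ADDRESS_RESOLUTION = auto()
    CONNECTING = auto()
    PARAMETERIZING = auto()
    DATA_EXCHANGE = auto()
    TERMINATED = auto()

class ApplicationRelation:
    def __init__(self):
        self.state = ARState.ADDRESS_RESOLUTION

    def resolved(self):
        # station name found via DCP, pre-configured IP address assigned
        self.state = ARState.CONNECTING

    def connect_ack(self):
        # IO-Device acknowledged the connect request; AR and CRs established
        self.state = ARState.PARAMETERIZING

    def application_ready(self):
        # all sub-modules parameterized according to the GSDML file
        self.state = ARState.DATA_EXCHANGE

    def watchdog_expired(self):
        # cyclic frames stopped refreshing; a controller would normally
        # attempt to re-establish the AR from here
        self.state = ARState.TERMINATED
```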
Technology of Class B (CC-B)
In addition to the basic Class A functions, Class B devices must support additional functionalities. These functionalities primarily support the commissioning, operation and maintenance of a Profinet IO system and are intended to increase the availability of the Profinet IO system.
Support of network diagnostics with the Simple Network Management Protocol (SNMP) is mandatory. Likewise, the Link Layer Discovery Protocol (LLDP) for neighborhood detection including the extensions for Profinet must be supported by all Class B devices. This also includes the collection and provision of Ethernet port-related statistics for network maintenance. With these mechanisms, the topology of a Profinet IO network can be read out at any time and the status of the individual connections can be monitored. If the network topology is known, automatic addressing of the nodes can be activated by their position in the topology. This considerably simplifies device replacement during maintenance, since no more settings need to be made.
High availability of the IO system is particularly important for applications in process automation and process engineering. For this reason, special procedures have been defined for Class B devices using the existing relationships and protocols. This allows system redundancy with two IO-Controllers accessing the same IO-Devices simultaneously. In addition, there is a prescribed procedure, Dynamic Reconfiguration (DR), which defines how the configuration of an IO-Device can be changed with the help of these redundant relationships without losing control over the IO-Device.
Technology of Class C (CC-C)
For the functionalities of Conformance Class C (CC-C) the Isochronous Real-Time (IRT) protocol is mainly used.
With bandwidth reservation, a part of the available transmission bandwidth of 100 MBit/s is reserved exclusively for real-time tasks, using a procedure similar to time multiplexing. The bandwidth is divided into fixed cycles, which in turn are divided into phases. The red phase is reserved exclusively for class C real-time data, in the orange phase the time-critical messages are transmitted, and in the green phase the other Ethernet messages are transparently passed through. To ensure that maximum-length Ethernet telegrams can still be passed through transparently, the green phase must be at least 125 μs long. Thus, cycle times under 250 μs are not possible in combination with unmodified Ethernet.
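A quick arithmetic check of the limit stated above, assuming only that the green phase must reserve at least 125 μs for a maximum-length Ethernet frame:

```python
GREEN_MIN_US = 125.0  # minimum green phase so a full-size frame passes uncut

def min_cycle_us(realtime_budget_us):
    """Smallest feasible send cycle for a given red+orange budget."""
    return realtime_budget_us + GREEN_MIN_US

# Even with a modest 125 us real-time budget, the cycle cannot go below
# 250 us as long as standard Ethernet frames must pass unfragmented:
print(min_cycle_us(125.0))  # -> 250.0
```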
In order to achieve shorter cycle times down to 31.25 μs, the Ethernet telegrams of the green phase are optionally broken down into fragments. These short fragments are now transmitted via the green phase. This fragmentation mechanism is transparent to the other participants on the Ethernet and therefore not recognizable.
In order to implement these bus cycles for bandwidth reservation, precise clock synchronization of all participating devices including the switches is required with a maximum deviation of 1 μs. This clock synchronization is implemented with the Precision Time Protocol (PTP) according to the IEEE 1588-2008 (1588 V2) standard. All devices involved in the bandwidth reservation must therefore be in the same time domain.
For position control applications for several axes or for positioning processes according to the PROFIdrive drive profile of application classes 4 - 6, not only must communication be timely, but the actions of the various drives on a Profinet must also be coordinated and synchronized. The clock synchronization of the application program to the bus cycle allows control functions to be implemented that are executed synchronously on distributed devices.
If several Profinet devices are connected in a line (daisy chain), the cyclic data exchange can be further optimised with Dynamic Frame Packing (DFP). For this purpose, the controller puts the output data for all devices into a single IRT frame. As the IRT frame passes along the line, each device extracts the data intended for it, so the frame becomes shorter and shorter. For the data from the devices to the controller, the IRT frame is dynamically assembled in the same way. The efficiency of DFP lies in the fact that the IRT frame is always only as long as necessary and that the data from the controller to the devices can be transmitted in full duplex simultaneously with the data from the devices to the controller.
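The toy model below illustrates the packing idea: one frame carries all output data down the line and shrinks at every hop. The data layout and names are invented for the example.

```python
def pass_through_line(frame, devices):
    """frame: dict mapping device name -> output bytes;
    devices: device names in line order.
    Returns what each device extracted from the passing frame."""
    extracted = {}
    for dev in devices:
        extracted[dev] = frame.pop(dev)   # frame gets shorter at each hop
    assert not frame                      # nothing left after the last device
    return extracted

print(pass_through_line({"d1": b"\x01", "d2": b"\x02"}, ["d1", "d2"]))
```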
Technology of Class D (CC-D)
Class D offers the same services to the user as Class C, with the difference that these services are provided using the mechanisms of Time-Sensitive Networking (TSN) defined by IEEE.
The Remote Service Interface (RSI) is used as a replacement for the Internet protocol suite. Application class D is thus implemented independently of IP addresses, and the protocol stack becomes smaller and independent of future Internet versions such as IPv6.
TSN is not a single, self-contained protocol definition, but a collection of different protocols with different characteristics that can be combined almost arbitrarily for each application. For use in industrial automation, a subset is compiled in the IEC/IEEE standard 60802 "Joint Profile TSN for Industrial Automation". A subset is used in the Profinet specification version 2.4 for implementing class D.
In this specification, a distinction is made between two applications:
isochronous, cyclic data exchange with short, limited latency time (Isochronous Cyclic Real Time) for applications in Motion Control and distributed control technology
Cyclic data exchange with limited latency time (Cyclic Real Time) for general automation tasks
For the isochronous data exchange the clocks of the participants must be synchronized. For this purpose, the specifications of the Precision Time Protocol according to IEC 61588 for time synchronization with TSN are adapted accordingly.
The telegrams are arranged in queues according to the priorities carried in the VLAN tag. The Time-Aware Shaper (TAS) specifies a schedule with which the individual queues are served in a switch. This leads to a time-slot procedure in which the isochronous, cyclic data is transmitted with the highest priority and the cyclic data with the second priority, ahead of all acyclic data. This reduces the latency and the jitter for the cyclic data. If a low-priority data telegram would take too long, it can be interrupted by a high-priority cyclic telegram and continued afterwards. This procedure is called Frame Preemption and is mandatory for CC-D.
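A simplified sketch of the shaper idea: per-priority queues served in fixed gate windows within one cycle. The priority values, window lengths and frame representation are assumptions, and frame preemption is not modelled.

```python
from collections import deque

# Queues keyed by VLAN priority; frames are (tx_time_us, payload) tuples.
queues = {7: deque(), 6: deque(), 0: deque()}

def run_cycle(schedule=((7, 50.0), (6, 75.0), (0, 125.0))):
    """Serve each queue only during its gate window within one cycle,
    so isochronous data (priority 7 here) always goes out first."""
    sent = []
    for prio, window_us in schedule:
        budget = window_us
        while queues[prio] and queues[prio][0][0] <= budget:
            tx_time_us, payload = queues[prio].popleft()
            budget -= tx_time_us
            sent.append((prio, payload))
    return sent

queues[0].append((100.0, "best effort"))
queues[7].append((10.0, "isochronous"))
print(run_cycle())   # isochronous frame leaves before the best-effort frame
```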
Implementation of Profinet interface
For the realization of a Profinet interface as a controller or device of classes CC-A and CC-B, there are no hardware requirements that cannot be met by a common Ethernet interface (100BASE-TX or 100BASE-FX). To enable a simpler line topology, integrating a 2-port switch into the device is recommended.
For the realization of class C (CC-C) devices, an extension of the hardware with time synchronization with the Precision Time Protocol (PTP) and the functionalities of bandwidth reservation is required. For class D (CC-D) devices, the hardware must support the required functionalities of Time-Sensitive Networking (TSN) according to IEEE standards.
The method of implementation depends on the design and performance of the device and the expected quantities. The alternatives are
Development in-house or with a service provider
Use of ready-made building blocks or individual design
Execution in fixed design ASIC, reconfigurable in FPGA technology, as plug-in module or as software component.
History
At the general meeting of the Profibus user organisation in 2000, the first concrete discussions for a successor to Profibus based on Ethernet took place. Just one year later, the first specification of Component Based Automation (CBA) was published and presented at the Hanover Fair. In 2002, the Profinet CBA became part of the international standard IEC 61158 / IEC 61784-1.
A Profinet CBA system consists of different automation components. One component comprises all mechanical, electrical and information technology variables. The component may have been created with the usual programming tools. To describe a component, a Profinet Component Description (PCD) file is created in XML. A planning tool loads these descriptions and allows the logical connections between the individual components to be created to implement a plant.
The basic idea behind Profinet CBA was that in many cases it is possible to divide an entire automation system into autonomously operating - and thus manageable - subsystems. The structure and functionality may well be found in several plants in identical or slightly modified form. Such so-called Profinet components are normally controlled by a manageable number of input signals. Within the component, a control program written by the user executes the required functionality and sends the corresponding output signals to another controller. The communication of a component-based system is planned instead of programmed. Communication with Profinet CBA was suitable for bus cycle times of approx. 50 to 100 ms.
Individual systems showed how these concepts could be successfully implemented in applications. However, Profinet CBA did not find the expected acceptance in the market and is no longer listed in the IEC 61784-1 standard from the 4th edition of 2014 onward.
In 2003 the first specification of Profinet IO (IO = Input Output) was published. The application interface of Profibus DP (DP = Decentralized Periphery), which was successful on the market, was adopted and supplemented with current protocols from the Internet. In the following year the extension with isochronous transmission followed, which made Profinet IO suitable for motion control applications. Profisafe was adapted so that it could also be used via Profinet. With the clear commitment of AIDA to Profinet in 2004, acceptance in the market was established. In 2006 Profinet IO became part of the international standard IEC 61158 / IEC 61784-2.
In 2007, according to a neutral count, 1 million Profinet devices had already been installed; in the following year this number doubled to 2 million. By 2019, a total of 26 million devices sold by the various manufacturers was reported.
In 2019, the specification for Profinet was completed with Time-Sensitive Networking (TSN), thus introducing the CC-D conformance class.
Further reading
Notes
References
External links
PROFIBUS & PROFINET International (PI)
PROFINET Technology Page
PROFIBUS International
PROFIsafe web portal
PROFINET University
wireshark PROFINET Wiki
PROFINET Community Stack
p-net - An open-source PROFINET device stack
Industrial Ethernet

Strecker amino acid synthesis

The Strecker amino acid synthesis, also known simply as the Strecker synthesis, is a method for the synthesis of amino acids by the reaction of an aldehyde with cyanide in the presence of ammonia. The condensation reaction yields an α-aminonitrile, which is subsequently hydrolyzed to give the desired amino acid. The method is used for the commercial production of racemic methionine from methional.
Primary and secondary amines also give N-substituted amino acids. Likewise, the usage of ketones, instead of aldehydes, gives α,α-disubstituted amino acids.
Reaction mechanism
In the first part of the reaction process, the carbonyl is converted to an iminium ion, to which a cyanide ion adds. First, the carbonyl oxygen of the aldehyde is protonated, followed by a nucleophilic attack of ammonia on the carbonyl carbon. After subsequent proton exchange, water is cleaved off to form the iminium ion intermediate. A cyanide ion then attacks the iminium carbon, yielding an aminonitrile.
In the second part of the reaction process, the nitrile is hydrolyzed. First, the nitrile nitrogen of the aminonitrile is protonated, and the nitrile carbon is attacked by a water molecule. A 1,2-diamino-diol is then formed after proton exchange and a nucleophilic attack of water on the former nitrile carbon. Ammonia is subsequently eliminated after the protonation of the amino group, and finally the deprotonation of a hydroxyl group produces the amino acid.
Asymmetric Strecker reactions
One example of the Strecker synthesis is a multikilogram-scale synthesis of an L-valine derivative starting from methyl isopropyl ketone:
The initial reaction product of 3-methyl-2-butanone with sodium cyanide and ammonia is resolved by application of L-tartaric acid. In contrast, asymmetric Strecker reactions require no resolving agent: by replacing ammonia with (S)-alpha-phenylethylamine as a chiral auxiliary, the ultimate reaction product was chiral alanine.
Catalytic asymmetric Strecker reaction can be effected using thiourea-derived catalysts. In 2012, a BINOL-derived catalyst was employed to generate chiral cyanide anion (see figure).
History
The German chemist Adolph Strecker discovered the series of chemical reactions that produce an amino acid from an aldehyde or ketone. Using ammonia or ammonium salts in this reaction gives unsubstituted amino acids. In the original Strecker reaction acetaldehyde, ammonia, and hydrogen cyanide combined to form after hydrolysis alanine. Using primary and secondary amines in place of ammonium was shown to yield N-substituted amino acids.
The classical Strecker synthesis gives racemic mixtures of α-amino acids as products, but several alternative procedures using asymmetric auxiliaries or asymmetric catalysts have been developed.
The asymmetric Strecker reaction was reported by Harada in 1963. The first reported asymmetric synthesis via a chiral catalyst was published in 1996. However, this was retracted in 2023.
Commercial syntheses of amino acids
Several methods exist to synthesize amino acids aside from the Strecker synthesis.
The commercial production of amino acids, however, usually relies on mutant bacteria that overproduce individual amino acids using glucose as a carbon source. Otherwise amino acids are produced by enzymatic conversions of synthetic intermediates. 2-Aminothiazoline-4-carboxylic acid is an intermediate in one industrial synthesis of L-cysteine. Aspartic acid is produced by the addition of ammonia to fumarate using a lyase.
References
See also
Bucherer–Bergs reaction
Multiple component reactions
Substitution reactions
Name reactions
Chemical synthesis of amino acids

Ultrasonic testing

Ultrasonic testing (UT) is a family of non-destructive testing techniques based on the propagation of ultrasonic waves in the object or material tested. In most common UT applications, very short ultrasonic pulse waves with centre frequencies ranging from 0.1 to 15 MHz, and occasionally up to 50 MHz, are transmitted into materials to detect internal flaws or to characterize materials. A common example is ultrasonic thickness measurement, which tests the thickness of the test object, for example, to monitor pipework corrosion and erosion. Ultrasonic testing is extensively used to detect flaws in welds.
Ultrasonic testing is often performed on steel and other metals and alloys, though it can also be used on concrete, wood and composites, albeit with less resolution. It is used in many industries including steel and aluminium construction, metallurgy, manufacturing, aerospace, automotive and other transportation sectors.
History
The first efforts to use ultrasonic testing to detect flaws in solid material occurred in the 1930s. On May 27, 1940, U.S. researcher Dr. Floyd Firestone of the University of Michigan applied for a U.S. invention patent for the first practical ultrasonic testing method. The patent was granted on April 21, 1942 as U.S. Patent No. 2,280,226, titled "Flaw Detecting Device and Measuring Instrument". Extracts from the first two paragraphs of the patent for this entirely new nondestructive testing method succinctly describe the basics of such ultrasonic testing. "My invention pertains to a device for detecting the presence of inhomogeneities of density or elasticity in materials. For instance, if a casting has a hole or a crack within it, my device allows the presence of the flaw to be detected and its position located, even though the flaw lies entirely within the casting and no portion of it extends out to the surface. ... The general principle of my device consists of sending high frequency vibrations into the part to be inspected and the determination of the time intervals of the arrival of the direct and reflected vibrations at one or more stations on the surface of the part."
James F. McNulty (U.S. radio engineer) of Automation Industries, Inc., then in El Segundo, California, an early improver of the many foibles and limits of this and other nondestructive testing methods, teaches in further detail on ultrasonic testing in his U.S. Patent 3,260,105 (application filed December 21, 1962, granted July 12, 1966, titled “Ultrasonic Testing Apparatus and Method”) that “Basically ultrasonic testing is performed by applying to a piezoelectric crystal transducer periodic electrical pulses of ultrasonic frequency. The crystal vibrates at the ultrasonic frequency and is mechanically coupled to the surface of the specimen to be tested. This coupling may be effected by immersion of both the transducer and the specimen in a body of liquid or by actual contact through a thin film of liquid such as oil. The ultrasonic vibrations pass through the specimen and are reflected by any discontinuities which may be encountered. The echo pulses that are reflected are received by the same or by a different transducer and are converted into electrical signals which indicate the presence of the defect.” To characterize microstructural features in the early stages of fatigue or creep damage, more advanced nonlinear ultrasonic tests can be employed. These nonlinear methods are based on the fact that an intense ultrasonic wave becomes distorted as it encounters micro-damage in the material. The intensity of the distortion is correlated with the level of damage, and it can be quantified by the acoustic nonlinearity parameter (β), which is related to the first and second harmonic amplitudes. These amplitudes can be measured by harmonic decomposition of the ultrasonic signal through fast Fourier transformation or wavelet transformation.
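As a hedged illustration of the FFT route, the following sketch reads the fundamental and second-harmonic amplitudes from a sampled waveform and forms the ratio A2/A1², to which β is proportional; the calibration constants (wavenumber, propagation distance) are deliberately omitted, and the test waveform is synthetic.

```python
import numpy as np

def nonlinearity_ratio(signal, fs, f0):
    """signal: sampled waveform; fs: sample rate [Hz]; f0: drive frequency [Hz].
    Returns A2 / A1**2, the quantity beta is proportional to."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    a1 = spectrum[np.argmin(np.abs(freqs - f0))]        # fundamental
    a2 = spectrum[np.argmin(np.abs(freqs - 2 * f0))]    # second harmonic
    return a2 / a1**2

# Slightly distorted 5 MHz tone sampled at 100 MHz:
fs, f0 = 100e6, 5e6
t = np.arange(2048) / fs
wave = np.sin(2 * np.pi * f0 * t) + 0.01 * np.sin(2 * np.pi * 2 * f0 * t)
print(nonlinearity_ratio(wave, fs, f0))
```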
How it works
In ultrasonic testing, an ultrasound transducer connected to a diagnostic machine is passed over the object being inspected. The transducer is typically separated from the test object by a couplant such as a gel, oil or water, as in immersion testing. However, when ultrasonic testing is conducted with an Electromagnetic Acoustic Transducer (EMAT) the use of couplant is not required.
There are two methods of receiving the ultrasound waveform: reflection and attenuation. In reflection (or pulse-echo) mode, the transducer performs both the sending and the receiving of the pulsed waves as the "sound" is reflected back to the device. Reflected ultrasound comes from an interface, such as the back wall of the object or from an imperfection within the object. The diagnostic machine displays these results in the form of a signal with an amplitude representing the intensity of the reflection and the distance representing the arrival time of the reflection. In attenuation (or through-transmission) mode, a transmitter sends ultrasound through one surface, and a separate receiver detects the amount that has reached it on another surface after travelling through the medium. Imperfections or other conditions in the space between the transmitter and receiver reduce the amount of sound transmitted, thus revealing their presence. Using a couplant increases the efficiency of the process by reducing losses in ultrasonic wave energy due to the separation between the surfaces.
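In reflection mode the thickness arithmetic is simply d = v·t/2, since the pulse crosses the thickness twice. A minimal sketch, assuming the common longitudinal sound velocity for steel of roughly 5,900 m/s:

```python
def thickness_mm(echo_time_us, velocity_m_s=5900.0):
    """Back-wall echo round-trip time [us] -> thickness [mm]."""
    return velocity_m_s * (echo_time_us * 1e-6) / 2.0 * 1000.0

print(thickness_mm(3.39))   # ~10 mm steel plate
```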
Examples
One example of using ultrasound to probe material properties is the measurement of the grain size of a specific material. Unlike destructive measurement, ultrasound offers a non-destructive way to measure grain size, with even higher detection efficiency. Grain size measurement using ultrasound can be accomplished by evaluating ultrasonic velocities, attenuation, and backscatter features. The theoretical foundation for the scattering attenuation model was developed by Stanke, Kino, and Weaver.
With constant frequency, the scattering attenuation coefficient depends mainly on the grain size; Zeng et al. found that in pure niobium, attenuation is linearly correlated with grain size through grain boundary scattering. This concept can be used to inversely resolve the grain size when the scattering attenuation coefficient is measured from testing data, providing a non-destructive way to predict a material's properties with rather simple instruments.
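A hedged sketch of that inverse step: calibrate a linear attenuation-versus-grain-size relation on reference samples, then invert it for an unknown specimen. All numbers here are invented for illustration.

```python
import numpy as np

grain_um = np.array([20.0, 40.0, 60.0, 80.0])      # reference grain sizes [um]
alpha_db_mm = np.array([0.11, 0.20, 0.31, 0.40])   # measured attenuation [dB/mm]

# Calibrate alpha = slope * D + intercept on the reference data:
slope, intercept = np.polyfit(grain_um, alpha_db_mm, 1)

def grain_size_from_attenuation(alpha):
    """Invert the calibrated linear relation for an unknown specimen."""
    return (alpha - intercept) / slope

print(grain_size_from_attenuation(0.25))   # -> about 49 um
```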
Features
Advantages
High penetrating power allows the detection of flaws deep in the part.
High sensitivity, permitting the detection of extremely small flaws.
Greater accuracy than other non-destructive methods in determining the depth of internal flaws and the thickness of parts with parallel surfaces.
Some capability of estimating the size, orientation, shape and nature of defects.
Some capability of estimating the structure of alloys of components with different acoustic properties.
Non-hazardous to operations or to nearby personnel and has no effect on equipment and materials in the vicinity.
Capable of portable, highly automated or remote operation.
Results are immediate, allowing on-the-spot decisions to be made.
It needs to access only one surface of the product that is being inspected.
Disadvantages
Manual operation requires careful attention by experienced technicians. The transducers alert to both normal structure of some materials, tolerable anomalies of other specimens (both termed “noise”) and to faults therein severe enough to compromise specimen integrity. These signals must be distinguished by a skilled technician, possibly requiring follow up with other nondestructive testing methods.
Extensive technical knowledge is required for the development of inspection procedures.
Rough surface finish, irregular geometry, small parts, thin thicknesses, or un-homogeneous material composition can make testing difficult.
Surface must be prepared by cleaning and removing loose scale, paint, etc., although paint that is properly bonded to a surface, may not need to be removed.
Couplants are needed to effectively transfer ultrasonic wave energy between transducers and parts being inspected unless a non-contact technique is used. Non-contact techniques include Laser and Electro Magnetic Acoustic Transducers (EMAT).
Equipment can be expensive.
Requires reference standards and calibration.
Standards
International Organization for Standardization (ISO)
ISO 2400: Non-destructive testing - Ultrasonic testing - Specification for calibration block No. 1 (2012)
ISO 7963: Non-destructive testing — Ultrasonic testing — Specification for calibration block No. 2 (2006)
ISO 10863: Non-destructive testing of welds -- Ultrasonic testing -- Use of time-of-flight diffraction technique (TOFD) (2011)
ISO 11666: Non-destructive testing of welds — Ultrasonic testing — Acceptance levels (2010)
ISO 16809: Non-destructive testing -- Ultrasonic thickness measurement (2012)
ISO 16831: Non-destructive testing -- Ultrasonic testing -- Characterization and verification of ultrasonic thickness measuring equipment (2012)
ISO 17640: Non-destructive testing of welds - Ultrasonic testing - Techniques, testing levels, and assessment (2010)
ISO 22825, Non-destructive testing of welds - Ultrasonic testing - Testing of welds in austenitic steels and nickel-based alloys (2012)
ISO 5577: Non-destructive testing -- Ultrasonic inspection -- Vocabulary (2000)
European Committee for Standardization (CEN)
EN 583, Non-destructive testing - Ultrasonic examination
EN 1330-4, Non destructive testing - Terminology - Part 4: Terms used in ultrasonic testing
EN 12668-1, Non-destructive testing - Characterization and verification of ultrasonic examination equipment - Part 1: Instruments
EN 12668-2, Non-destructive testing - Characterization and verification of ultrasonic examination equipment - Part 2: Probes
EN 12668-3, Non-destructive testing - Characterization and verification of ultrasonic examination equipment - Part 3: Combined equipment
EN 12680, Founding - Ultrasonic examination
EN 14127, Non-destructive testing - Ultrasonic thickness measurement
(Note: Part of CEN standards in Germany accepted as DIN EN, in Czech Republic as CSN EN.)
See also
Non-Contact Ultrasound
Phased array ultrasonics
Time-of-flight diffraction ultrasonics (TOFD)
Time-of-flight ultrasonic determination of 3D elastic constants (TOF)
Internal rotary inspection system (IRIS) ultrasonics for tubes
EMAT Electromagnetic Acoustic Transducer
ART (Acoustic Resonance Technology)
References
Further reading
Albert S. Birks, Robert E. Green, Jr., technical editors; Paul McIntire, editor. Ultrasonic testing, 2nd ed. Columbus, OH: American Society for Nondestructive Testing, 1991.
Josef Krautkrämer, Herbert Krautkrämer. Ultrasonic testing of materials, 4th fully rev. ed. Berlin; New York: Springer-Verlag, 1990.
J.C. Drury. Ultrasonic Flaw Detection for Technicians, 3rd ed., UK: Silverwing Ltd. 2004. (See Chapter 1 online (PDF, 61 kB)).
Nondestructive Testing Handbook, Third ed.: Volume 7, Ultrasonic Testing. Columbus, OH: American Society for Nondestructive Testing.
Detection and location of defects in electronic devices by means of scanning ultrasonic microscopy and the wavelet transform measurement, Volume 31, Issue 2, March 2002, Pages 77–91, L. Angrisani, L. Bechou, D. Dallet, P. Daponte, Y. Ousten
Nondestructive testing
Ultrasound
Welding

Intermediate Jacobian

In mathematics, the intermediate Jacobian of a compact Kähler manifold or Hodge structure is a complex torus that is a common generalization of the Jacobian variety of a curve, the Picard variety, and the Albanese variety. It is obtained by putting a complex structure on the torus H^n(M, R)/H^n(M, Z) for n odd. There are several different natural ways to put a complex structure on this torus, giving several different sorts of intermediate Jacobians, including one due to Weil and one due to Griffiths. The ones constructed by Weil have natural polarizations if M is projective, and so are abelian varieties, while the ones constructed by Griffiths behave well under holomorphic deformations.
A complex structure on a real vector space is given by an automorphism I with square I^2 = -1. The complex structures on H^n(M, R) are defined using the Hodge decomposition

H^n(M, R) ⊗ C = H^{n,0}(M) ⊕ ⋯ ⊕ H^{0,n}(M).

On H^{p,q} the Weil complex structure is multiplication by i^{p-q}, while the Griffiths complex structure is multiplication by i if p > q and by -i if p < q. Both these complex structures map H^n(M, R) into itself and so define complex structures on it.

For n = 1 the intermediate Jacobian is the Picard variety, and for n = 2 dim(M) - 1 it is the Albanese variety. In these two extreme cases the constructions of Weil and Griffiths are equivalent.
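For orientation, the Griffiths construction is nowadays often packaged via the Hodge filtration; the following display is a standard textbook formulation added here for reference, not a formula taken from this article:

```latex
% k-th Griffiths intermediate Jacobian of a compact Kähler manifold M,
% expressed with the Hodge filtration F^k on odd-degree cohomology:
J^k(M) = \frac{H^{2k-1}(M,\mathbb{C})}{F^k H^{2k-1}(M,\mathbb{C}) + H^{2k-1}(M,\mathbb{Z})}
```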
Clemens and Griffiths used intermediate Jacobians to show that non-singular cubic threefolds are not rational, even though they are unirational.
See also
Deligne cohomology
References
Hodge theory

Caesium-137

Caesium-137 (137Cs), cesium-137 (US), or radiocaesium, is a radioactive isotope of caesium that is formed as one of the more common fission products by the nuclear fission of uranium-235 and other fissionable isotopes in nuclear reactors and nuclear weapons. Trace quantities also originate from spontaneous fission of uranium-238. It is among the most problematic of the short-to-medium-lifetime fission products. Caesium-137 has a relatively low boiling point of 671 °C and easily becomes volatile when released suddenly at high temperature, as in the case of the Chernobyl nuclear accident and with atomic explosions, and can travel very long distances in the air. After being deposited onto the soil as radioactive fallout, it moves and spreads easily in the environment because of the high water solubility of caesium's most common chemical compounds, which are salts. Caesium-137 was discovered by Glenn T. Seaborg and Margaret Melhase.
Decay
Caesium-137 has a half-life of about 30.05 years.
About 94.6% decays by beta emission to a metastable nuclear isomer of barium: barium-137m (137mBa, Ba-137m). The remainder directly populates the ground state of 137Ba, which is stable. Barium-137m has a half-life of about 153 seconds, and is responsible for all of the gamma ray emissions in samples of 137Cs. Barium-137m decays to the ground state by emission of photons having energy 0.6617 MeV. A total of 85.1% of 137Cs decay generates gamma ray emission in this manner. One gram of 137Cs has an activity of 3.215 terabecquerel (TBq).
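The quoted 3.215 TBq per gram can be cross-checked from A = λN, with λ = ln 2 / t½ and N the number of atoms in one gram; only standard constants are assumed:

```python
import math

N_A = 6.02214e23                       # Avogadro's number [1/mol]
M = 136.907                            # molar mass of Cs-137 [g/mol]
t_half = 30.05 * 365.25 * 24 * 3600    # half-life [s]

N = N_A / M                            # atoms in one gram
activity = math.log(2) / t_half * N    # decays per second [Bq]
print(f"{activity / 1e12:.3f} TBq per gram")   # -> ~3.215
```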
Uses
Caesium-137 has a number of practical uses. In small amounts, it is used to calibrate radiation-detection equipment. In medicine, it is used in radiation therapy. In industry, it is used in flow meters, thickness gauges, moisture-density gauges (for density readings, with americium-241/beryllium providing the moisture reading), and in borehole logging devices.
Caesium-137 is not widely used for industrial radiography because it is hard to obtain a material of very high specific activity with a well-defined (and small) shape, as caesium from used nuclear fuel contains stable caesium-133 and also long-lived caesium-135, and isotope separation is too costly compared with the alternatives. In addition, the higher-specific-activity caesium sources tend to be made from highly soluble caesium chloride (CsCl); as a result, if a radiography source were damaged, it would increase the spread of contamination. It is possible to make water-insoluble caesium sources with various ferrocyanide compounds such as ammonium ferric hexacyanoferrate (AFCF, Giese salt), but their specific activity will be much lower. Other chemically inert caesium compounds include caesium aluminosilicate glasses akin to the natural mineral pollucite; the latter have been used in demonstrations of chemically stable, water-insoluble forms of nuclear waste for disposal in deep geological repositories. A large emitting volume harms the image quality in radiography. The isotopes iridium-192 and cobalt-60 are preferred for radiography, since iridium and cobalt are chemically non-reactive metals and can be obtained with much higher specific activities by the activation of stable iridium-191 and cobalt-59 in high-flux reactors. However, while caesium-137 is a waste product produced in great quantities in nuclear fission reactors, cobalt-60 and iridium-192 are specifically produced in commercial and research reactors, and their life cycle entails the destruction of the high-value elements involved. Cobalt-60 decays to stable nickel, whereas iridium-192 can decay to either stable osmium or platinum. Due to the residual radioactivity and legal hurdles, the resulting material is not commonly recovered even from "spent" radioactive sources, meaning in essence that the entire mass is "lost" for non-radioactive uses.
As an almost purely synthetic isotope, caesium-137 has been used to date wine and detect counterfeits and as a relative-dating material for assessing the age of sedimentation occurring after 1945.
Caesium-137 is also used as a radioactive tracer in geologic research to measure soil erosion and deposition; its affinity for fine sediments is useful in this application.
Health risks
Caesium-137 reacts with water, producing a water-soluble compound (caesium hydroxide). The biological behaviour of caesium is similar to that of potassium and rubidium. After entering the body, caesium gets more or less uniformly distributed throughout the body, with the highest concentrations in soft tissue. However, unlike group 2 radionuclides like radium and strontium-90, caesium does not bioaccumulate and is excreted relatively quickly. The biological half-life of caesium is about 70 days.
A 1961 experiment showed that mice dosed with 21.5 μCi/g had a 50% fatality within 30 days (implying an LD50 of 245 μg/kg). A similar experiment in 1972 showed that when dogs are subjected to a whole body burden of 3800 μCi/kg (140 MBq/kg, or approximately 44 μg/kg) of caesium-137 (and 950 to 1400 rads), they die within 33 days, while animals with half of that burden all survived for a year.
Research has shown a remarkable concentration of 137Cs in the exocrine cells of the pancreas, which are those most affected by cancer. In 2003, in autopsies performed on six children who died in the polluted area near Chernobyl (of causes not directly linked to the Chernobyl disaster, mostly sepsis), where a higher incidence of pancreatic tumors had also been reported, Bandazhevsky found a concentration of 137Cs 3.9 times higher than in their livers (1359 vs. 347 Bq/kg, equivalent to 36 and 9.3 nCi/kg in these organs; 600 Bq/kg = 16 nCi/kg in the body according to measurements), demonstrating that pancreatic tissue strongly accumulates radioactive caesium and secretes it into the intestine.
Accidental ingestion of caesium-137 can be treated with Prussian blue (Fe4[Fe(CN)6]3), which binds to it chemically and reduces the biological half-life to 30 days.
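The effect of the shortened biological half-life can be put into numbers with the usual effective-half-life relation 1/T_eff = 1/T_phys + 1/T_bio; because the physical half-life is so long, T_eff is dominated by excretion:

```python
T_PHYS_DAYS = 30.05 * 365.25   # physical half-life of Cs-137 in days

def effective_half_life(t_bio_days):
    """Combine radioactive decay and biological excretion."""
    return 1.0 / (1.0 / T_PHYS_DAYS + 1.0 / t_bio_days)

print(effective_half_life(70.0))   # ~69.6 days untreated
print(effective_half_life(30.0))   # ~29.9 days with Prussian blue
```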
Environmental contamination
Caesium-137, along with other radioactive isotopes caesium-134, iodine-131, xenon-133, and strontium-90, were released into the environment during nearly all nuclear weapon tests and some nuclear accidents, most notably the Chernobyl disaster and the Fukushima Daiichi disaster.
Caesium-137 in the environment is substantially anthropogenic (human-made). Caesium-137 is produced from the nuclear fission of plutonium and uranium, and decays into barium-137. By observing the characteristic gamma rays emitted by this isotope, one can determine whether the contents of a given sealed container were made before or after the first atomic bomb explosion (Trinity test, 16 July 1945), which spread some of it into the atmosphere, quickly distributing trace amounts of it around the globe. This procedure has been used by researchers to check the authenticity of certain rare wines, most notably the purported "Jefferson bottles". Surface soils and sediments are also dated by measuring the activity of 137Cs.
Nuclear bomb fallout
Bombs tested in the Arctic area of Novaya Zemlya and bombs detonated in or near the stratosphere released caesium-137 that landed in upper Lapland, Finland. Measurements of caesium-137 there in the 1960s reportedly reached 45,000 becquerels; figures from 2011 have a mid-range of about 1,100 becquerels. Cancer cases are nonetheless no more common there than elsewhere.
Chernobyl disaster
As of today and for the next few hundred years or so, caesium-137 and strontium-90 continue to be the principal source of radiation in the zone of alienation around the Chernobyl nuclear power plant, and pose the greatest risk to health, owing to their approximately 30 year half-life and biological uptake. The mean contamination of caesium-137 in Germany following the Chernobyl disaster was 2000 to 4000 Bq/m2. This corresponds to a contamination of 1 mg/km2 of caesium-137, totaling about 500 grams deposited over all of Germany. In Scandinavia, some reindeer and sheep exceeded the Norwegian legal limit (3000 Bq/kg) 26 years after Chernobyl. As of 2016, the Chernobyl caesium-137 has decayed by half, but could have been locally concentrated by much larger factors.
Fukushima Daiichi disaster
In April 2011, elevated levels of caesium-137 were also being found in the environment after the Fukushima Daiichi nuclear disasters in Japan. In July 2011, meat from 11 cows shipped to Tokyo from Fukushima Prefecture was found to have 1,530 to 3,200 becquerels per kilogram of 137Cs, considerably exceeding the Japanese legal limit of 500 becquerels per kilogram at that time. In March 2013, a fish caught near the plant had a record 740,000 becquerels per kilogram of radioactive caesium, above the 100 becquerels per kilogram government limit. A 2013 paper in Scientific Reports found that for a forest site 50 km from the stricken plant, 137Cs concentrations were high in leaf litter, fungi and detritivores, but low in herbivores. By the end of 2014, "Fukushima-derived radiocaesium had spread into the whole western North Pacific Ocean", transported by the North Pacific current from Japan to the Gulf of Alaska. It has been measured in the surface layer down to 200 meters and south of the current area down to 400 meters.
Cesium-137 is reported to be the major health concern in Fukushima. A number of techniques are being considered that will be able to strip out 80% to 95% of the caesium from contaminated soil and other materials efficiently and without destroying the organic material in the soil. These include hydrothermal blasting. The caesium precipitated with ferric ferrocyanide (Prussian blue) would be the only waste requiring special burial sites. The aim is to get annual exposure from the contaminated environment down to 1 mSv above background. The most contaminated area where radiation doses are greater than 50 mSv/year must remain off limits, but some areas that are currently less than 5 mSv/year may be decontaminated, allowing 22,000 residents to return.
Incidents and accidents
Caesium-137 gamma sources have been involved in several radiological accidents and incidents.
1987 Goiânia, Goiás, Brazil
In the Goiânia accident of 1987, an improperly disposed of radiation therapy system from an abandoned clinic in Goiânia, Brazil, was removed, then cracked to be sold in junkyards. The glowing caesium salt was then to be sold to curious, unadvised buyers. This led to four confirmed deaths and several serious injuries from radiation contamination.
1989 Kramatorsk, Ukraine
The Kramatorsk radiological accident happened in 1989 when a small capsule 8x4 mm in size of caesium-137 was found inside the concrete wall of an apartment building in Kramatorsk, Ukrainian SSR. It is believed that the capsule, originally a part of a measurement device, was lost in the late 1970s and ended up mixed with gravel used to construct the building in 1980. Over 9 years, two families had lived in the apartment. By the time the capsule was discovered, 6 residents of the building had died, 4 from leukemia and 17 more receiving varying doses of radiation.
1994 Tammiku, Estonia
The 1994 Tammiku incident involved the theft of radioactive material from a nuclear waste storage facility in Männiku, Saku Parish, Harju County, Estonia. Three brothers, unaware of the facility's nature, broke into a shed while scavenging for scrap metal. One of the brothers received a 4,000 rad whole-body dose from a caesium-137 source that had been released from a damaged container, succumbing to radiation poisoning 12 days later.
1997 Georgia
In 1997, several Georgian soldiers suffered radiation poisoning and burns. They were eventually traced back to training sources left abandoned, forgotten, and unlabeled after the dissolution of the Soviet Union. One was a caesium-137 pellet in a pocket of a shared jacket that released about 130,000 times the level of background radiation at 1 meter distance.
1998 Los Barrios, Cádiz, Spain
In the Acerinox accident of 1998, the Spanish recycling company Acerinox accidentally melted down a mass of radioactive caesium-137 that came from a gamma-ray generator.
2009 Tongchuan, Shaanxi, China
In 2009, a Chinese cement company (in Tongchuan, Shaanxi Province) was demolishing an old, unused cement plant and did not follow standards for handling radioactive materials. This caused some caesium-137 from a measuring instrument to be included with eight truckloads of scrap metal on its way to a steel mill, where the radioactive caesium was melted down into the steel.
2015 University of Tromsø, Norway
In March 2015, the Norwegian University of Tromsø lost 8 radioactive samples, including samples of caesium-137, americium-241, and strontium-90. The samples were moved out of a secure location to be used for education. When the samples were supposed to be returned, the university was unable to find them, and they remain missing.
2016 Helsinki, Finland
On 3 and 4 March 2016, unusually high levels of caesium-137 were detected in the air in Helsinki, Finland. According to STUK, the country's nuclear regulator, measurements showed 4,000 μBq/m3 – about 1,000 times the usual level. An investigation by the agency traced the source to a building from which STUK and a radioactive waste treatment company operate.
2019 Seattle, Washington, United States
Thirteen people were exposed to caesium-137 in May 2019 at the Research and Training building in the Harborview Medical Center complex. A contract crew was transferring the caesium from the lab to a truck when the powder was spilled. Five people were decontaminated and released, but 8 who were more directly exposed were taken to the hospital while the research building was evacuated.
2023 Western Australia, Australia
Public health authorities in Western Australia issued an emergency alert for a stretch of road measuring about 1,400 km after a capsule containing caesium-137 was lost in transport on 25 January 2023. The 8 mm capsule contained a small quantity of the radioactive material when it disappeared from a truck. The State Government immediately launched a search, with the WA Department of Health's chief health officer Andrew Robertson warning an exposed person could expect to receive the equivalent of "about 10 X-rays an hour". Experts warned, if the capsule were found, the public should stay at least 5 metres away. The capsule was found on 1 February 2023.
2023 Prachin Buri, Thailand
A caesium-137 capsule went missing from a steam power plant in Prachin Buri province, Thailand on 23 February 2023, triggering a search by officials from Thailand's Office of Atoms for Peace (OAP) and the Prachin Buri provincial administration. However, the Thai public was not notified until 14 March.
On 20 March, the Secretary-General of the OAP and the governor of Prachin Buri held a press conference stating that they had found caesium-137 contaminated furnace dust at a steel melting plant in Kabin Buri district.
2024 Khabarovsk, Russia
On Friday, 5 April an emergency regime was introduced in the Russian city of Khabarovsk after a local resident accidentally discovered that radiation levels had jumped sharply in one of the industrial areas of the city. According to volunteers of the dosimetric control group, the dosimeter at the NP site showed up to 800 microsieverts, which is 1600 times the safe value.
Employees of the Ministry of Emergency Situations fenced off the area where they found a capsule with caesium from a defectoscope. The find was placed in a protective container and taken away for disposal. This was first reported by Novaya Gazeta.
See also
Commonly used gamma-emitting isotopes
References
Bibliography
External links
NLM Hazardous Substances Databank – Cesium, Radioactive
Cesium-137 dirty bombs by Theodore Liolios
Isotopes of caesium
Fission products
Radioisotope fuels
Radioactive contamination

Passenger car equivalent

Passenger car equivalent (PCE) or passenger car unit (PCU) is a metric used in transportation engineering to assess traffic-flow rate on a highway.
A passenger car equivalent is essentially the impact that a mode of transport has on traffic variables (such as headway, speed, density) compared to a single car.
Traffic studies or analyses must be done to obtain the number of trips, which are then converted to PCUs based on the applicable standards; each region has its own manual of PCU equivalence factors. Highway capacity is measured in PCE per hour.
A common method used in the US is the density method. However, the PCU values derived from the density method are based on underlying homogeneous traffic concepts such as strict lane discipline, car following, and a vehicle fleet that does not vary greatly in width.
On the other hand, highways in India carry heterogeneous traffic, where road space is shared among many traffic modes with different physical dimensions. Loose lane discipline prevails; car following is not the norm. This complicates the computation of PCE.
Using multiple heuristic techniques, transportation engineers convert a mixed traffic stream into a hypothetical passenger-car stream.
Methods
Many methods exist for determining passenger car units (PCUs).
Examples:
homogenization coefficient,
semi-empirical method,
Walker's method,
headway method,
multiple linear regression method
simulation method.
Transport for London recommend the following PCU values in an urban context (a worked conversion using these values appears after the list):
Pedal cycle 0.2
Motorcycle 0.4
Car or light goods vehicle 1.0
Medium goods vehicle 1.5
Bus or coach 2.0
Heavy goods vehicle (HGV) 2.3
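The worked conversion referenced above: an hourly mixed-traffic count is turned into a single PCU flow by a weighted sum. The counts are invented for the example.

```python
TFL_PCU = {
    "pedal_cycle": 0.2,
    "motorcycle": 0.4,
    "car_or_lgv": 1.0,
    "medium_goods": 1.5,
    "bus_or_coach": 2.0,
    "hgv": 2.3,
}

# Hypothetical hourly counts on one urban approach:
hourly_counts = {"car_or_lgv": 900, "pedal_cycle": 150,
                 "bus_or_coach": 20, "hgv": 35}

pcu_per_hour = sum(TFL_PCU[mode] * n for mode, n in hourly_counts.items())
print(pcu_per_hour)   # 900 + 30 + 40 + 80.5 = 1050.5 PCU/h
```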
It may be appropriate to use different values for the same vehicle type according to circumstances. For example, in the UK in the 1960s and 1970s, bicycles were evaluated thus:
on rural roads 0.5
on urban roads 0.33
on roundabouts 0.5
at traffic lights 0.2.
References
Transportation engineering
Equivalent units

ASTM A500

ASTM A500 is a standard specification published by the ASTM for cold-formed welded and seamless carbon steel structural tubing in round, square, and rectangular shapes. It is commonly specified in the US for hollow structural sections, but the more stringent CSA G40.21 is preferred in Canada. Another related standard is ASTM A501, which is a hot-formed version of A500. ASTM A500 defines four grades of carbon steel based primarily on material strength.
This is a standard set by the standards organization ASTM International, a voluntary standards development organization that sets technical standards for materials, products, systems, and services.
Density
Like other carbon steels, A500 and A501 steels have a specific gravity of approximately 7.85, and therefore a density of approximately 7850 kg/m3 (0.284 pounds per cubic inch).
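That density figure is what a weight-per-length estimate rests on. A small sketch for a rectangular hollow section, approximated as outer area minus inner area (corner radii ignored, so this slightly overestimates):

```python
DENSITY = 7850.0   # kg/m^3 for A500/A501 carbon steel

def mass_per_metre(b_m, h_m, t_m):
    """Mass per metre of a rectangular tube b x h with wall thickness t."""
    outer = b_m * h_m
    inner = (b_m - 2 * t_m) * (h_m - 2 * t_m)
    return DENSITY * (outer - inner)

# 100 x 100 x 6 mm square tube:
print(mass_per_metre(0.10, 0.10, 0.006))   # ~17.7 kg/m
```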
Grades
A500 cold-formed tubing comes in four grades based on chemical composition, tensile strength, and heat treatment. The yield strength requirements are higher for square and rectangular than for round tubing. The minimum copper content is optional. Grade D must be heat treated.
Mechanical Properties
Shaped structural tubing
References
Steels
ASTM standards
Structural engineering standards

CBX1

Chromobox protein homolog 1 is a protein that in humans is encoded by the CBX1 gene.
Function
The protein is localized at heterochromatin sites, where it mediates gene silencing.
Interactions
CBX1 has been shown to interact with:
C11orf30,
CBX3,
CBX5, and
SUV39H1.
See also
Heterochromatin protein 1
References
Further reading
External links
Transcription factors
Genes mutated in mice

CBX3

Chromobox protein homolog 3 is a protein that is encoded by the CBX3 gene in humans.
At the nuclear envelope, the nuclear lamina and heterochromatin are adjacent to the inner nuclear membrane. The protein encoded by this gene binds DNA and is a component of heterochromatin. This protein can also bind the lamin B receptor, an integral membrane protein found in the inner nuclear membrane. The dual binding functions of the encoded protein may explain the association of heterochromatin with the inner nuclear membrane. Two transcript variants encoding the same protein but differing in the 5' UTR have been found for this gene.
Interactions
CBX3 has been shown to interact with PIM1, Ki-67, Lamin B receptor, CBX5 and CBX1.
See also
Heterochromatin protein 1
References
Further reading
External links
Transcription factors | CBX3 | [
"Chemistry",
"Biology"
] | 182 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
11,991,342 | https://en.wikipedia.org/wiki/Magnetobiology | Magnetobiology is the study of biological effects of mainly weak static and low-frequency magnetic fields, which do not cause heating of tissues. Magnetobiological effects have unique features that clearly distinguish them from thermal effects; often they are observed for alternating magnetic fields only in particular frequency and amplitude intervals. They also depend on simultaneously present static magnetic or electric fields and their polarization.
Magnetobiology is a subset of bioelectromagnetics. Bioelectromagnetism and biomagnetism are the study of the production of electromagnetic and magnetic fields by biological organisms. The sensing of magnetic fields by organisms is known as magnetoreception.
Biological effects of weak low-frequency magnetic fields, less than about 0.1 millitesla (or 1 gauss) and 100 Hz respectively, constitute a physics problem. The effects look paradoxical, for the energy quantum of these electromagnetic fields is many orders of magnitude smaller than the energy scale of an elementary chemical act. On the other hand, the field intensity is not enough to cause any appreciable heating of biological tissues or to irritate nerves by the induced electric currents.
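The scale of this paradox can be checked directly. The sketch below compares the energy quantum of a 100 Hz field with thermal energy at body temperature and with a nominal ~1 eV chemical bond; the constants are standard CODATA values, and the ~1 eV bond scale is an assumed round figure.

```python
# Order-of-magnitude check of the "paradox" described above: the energy
# quantum of a 100 Hz field versus thermal energy at body temperature and
# a typical ~1 eV chemical bond energy.

h = 6.62607015e-34    # Planck constant, J*s
kB = 1.380649e-23     # Boltzmann constant, J/K
eV = 1.602176634e-19  # J per electronvolt

E_quantum = h * 100.0    # energy quantum of a 100 Hz field
E_thermal = kB * 310.0   # kT at roughly body temperature
E_chemical = 1.0 * eV    # assumed scale of an elementary chemical act

print(f"100 Hz quantum : {E_quantum:.2e} J")
print(f"kT at 310 K    : {E_thermal:.2e} J  ({E_thermal/E_quantum:.1e}x larger)")
print(f"~1 eV bond     : {E_chemical:.2e} J  ({E_chemical/E_quantum:.1e}x larger)")
```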
Effects
An example of a magnetobiological effect is the magnetic navigation by migrant animals by means of magnetoreception.
Many animal orders, such as certain birds, marine turtles, reptiles, amphibians and salmonid fishes, are able to detect small variations of the geomagnetic field and its magnetic inclination to find their seasonal habitats. They are said to use an "inclination compass". Certain crustaceans, spiny lobsters, bony fish, insects and mammals have been found to use a "polarity compass", whereas in snails and cartilaginous fish the type of compass is as yet unknown. Little is known about other vertebrates and arthropods. Their perception can be on the order of tens of nanoteslas.
Magnetic intensity as a component of the navigational 'map' of pigeons has been discussed since the late nineteenth century. One of the earliest publications to prove that birds use magnetic information was a 1972 study on the compass of European robins by Wolfgang Wiltschko. A 2014 double-blind study showed that European robins exposed to low-level electromagnetic noise between about 20 kHz and 20 MHz could not orient themselves with their magnetic compass. When they entered aluminium-screened huts, which attenuated electromagnetic noise in the frequency range from 50 kHz to 5 MHz by approximately two orders of magnitude, their orientation reappeared.
For human health effects see electromagnetic radiation and health.
Magnetoreception
Several neurobiological models on the primary process which mediates the magnetic input have been proposed:
radical pair mechanism: direction-specific interactions of radical pairs with the ambient magnetic field.
processes involving permanently magnetic (iron-bearing) material like magnetite in tissues
magnetically induced changes in physical/chemical properties of liquid water.
In the radical pair mechanism a photopigment absorbs a photon, which elevates it to the singlet state. It forms singlet radical pairs with antiparallel spin, which, by singlet–triplet interconversion, may turn into triplet pairs with parallel spin. Because the magnetic field alters the transitions between spin states, the number of triplets depends on how the photopigment is aligned within the magnetic field. Cryptochromes, a class of photopigments known from plants and animals, appear to be the receptor molecules.
The induction model would only apply to marine animals, because only salt water is conductive enough to serve as the required surrounding medium. Evidence for this model has been lacking.
The magnetite model arose with the discovery of chains of single-domain magnetite in certain bacteria in the 1970s. Histological evidence of magnetite has since been found in a large number of species belonging to all major phyla. Honey bees carry magnetic material in the front part of the abdomen, while in vertebrates it is found mostly in the ethmoid region of the head. Experiments show that the input from magnetite-based receptors in birds and fish is sent over the ophthalmic branch of the trigeminal nerve to the central nervous system.
Safety standards
The practical significance of magnetobiology arises from the growing level of background electromagnetic exposure of people. Some electromagnetic fields at chronic exposures may pose a threat to human health. The World Health Organization considers an enhanced level of electromagnetic exposure in the workplace to be a stress factor. Present electromagnetic safety standards, worked out by many national and international institutions, differ by factors of tens or hundreds for certain EMF ranges; this situation reflects the lack of research in the area of magnetobiology and electromagnetobiology. Today, most of the standards take into account only biological effects from heating by electromagnetic fields, and peripheral nerve stimulation from induced currents.
Medical approach
Practitioners of magnet therapy attempt to treat pain or other medical conditions with relatively weak electromagnetic fields. These methods are not supported by clinical evidence meeting accepted standards of evidence-based medicine. Most institutions regard the practice as pseudoscientific.
See also
Bioelectrochemistry
Magnetoelectrochemistry
Electromagnetic radiation and health
Transcranial magnetic stimulation
References
Further reading
Presman A.S. Electromagnetic Fields and Life, Plenum, New York, 1970.
Kirschvink J.L., Jones D.S., MacFadden B.J. (Eds.) Magnetite Biomineralization and Magnetoreception in Organisms. A New Biomagnetism, Plenum, New York, 1985.
Binhi V.N. Magnetobiology: Underlying Physical Problems. — Academic Press, San Diego, 2002. — 473 p. —
Binhi V.N., Savin A.V. Effects of weak magnetic fields on biological systems: Physical aspects. Physics – Uspekhi, V.46(3), Pp. 259–291, 2003.
Scientific journals
Bioelectromagnetics
Electromagnetic Biology and Medicine
Biomedical Radioelectronics
Biophysics
Radiobiology | Magnetobiology | [
"Chemistry",
"Biology"
] | 1,216 | [
"Radiobiology",
"Radioactivity"
] |
11,991,996 | https://en.wikipedia.org/wiki/Hyalophane | Hyalophane or jaloallofane is a crystalline mineral, part of the feldspar group of tectosilicates. It is considered a barium-rich potassium feldspar. Its chemical formula is (K,Ba)[Al(Si,Al)Si2O8], and it has a Mohs hardness of 6 to 6.5. The name hyalophane comes from the Greek hyalos, meaning "glass", and phainesthai, meaning "to appear".
An occurrence of hyalophane was discovered in 1855 in Lengenbach Quarry, Imfield, Binn valley, municipality of Binn, Canton of Valais, Switzerland. The mineral is found predominantly in Europe, with occurrences in Switzerland, Australia, Bosnia, Germany, Japan, New Jersey, and the west coast of North America. Hyalophane may be found in manganese deposits in compact metamorphic zones.
Hyalophane has a monoclinic crystallography, with cell properties a = 8.52 Å, b = 12.95 Å, c = 7.14 Å, and β = 116°. Optically, the material exhibits biaxial birefringence, with refractive index values of nα = 1.542, nβ = 1.545, and nγ = 1.547 and a maximum birefringence of δ = 0.005. It has weak dispersion and low surface relief.
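For readers who want to re-derive the figures, the snippet below computes the monoclinic cell volume, V = a·b·c·sin(β), and the maximum birefringence, δ = nγ − nα, from the values quoted above.

```python
# Deriving two of the quantities quoted above from the raw cell and
# optical data: monoclinic cell volume V = a*b*c*sin(beta), and maximum
# birefringence delta = n_gamma - n_alpha.

import math

a, b, c = 8.52, 12.95, 7.14          # cell edges in angstroms
beta = math.radians(116.0)           # monoclinic angle

volume = a * b * c * math.sin(beta)  # ~708 cubic angstroms
delta = 1.547 - 1.542                # n_gamma - n_alpha = 0.005

print(f"Cell volume   : {volume:.1f} A^3")
print(f"Birefringence : {delta:.3f}")
```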
Hyalophane has sometimes been used as a gemstone.
References
Tectosilicates
Barium minerals
Feldspar
Gemstones
Monoclinic minerals
Minerals in space group 12 | Hyalophane | [
"Physics"
] | 319 | [
"Materials",
"Gemstones",
"Matter"
] |
11,993,641 | https://en.wikipedia.org/wiki/Hydrocollator | The hydrocollator, first introduced in 1947 by the Chattanooga Pharmaceutical Company, consists of a thermostatically controlled water bath holding bentonite-filled cloth heating pads. When the pads are removed from the bath, they are wrapped in covers and applied to the patient. The device is primarily used by athletic trainers and physical therapists.
Research
The evidence behind the use of the hydrocollator primarily concerns the rapid heating of tissue achieved by the more efficient transfer of energy through water as compared to air. There is some concern that hydrocollator treatment may be less effective with overweight or obese patients.
Heating methods are used commonly in patients with acute pain. It is recommended that heating pads be used at home on acute injuries for short term pain relief.
References
Medical equipment | Hydrocollator | [
"Biology"
] | 160 | [
"Medical equipment",
"Medical technology"
] |
11,993,854 | https://en.wikipedia.org/wiki/Common%20Data%20Link | Common Data Link (CDL) is a secure U.S. military communication protocol. It was established by the U.S. Department of Defense in 1991 as the military's primary protocol for imagery and signals intelligence. CDL operates at data rates up to 274 Mbit/s and allows for full duplex data exchange. CDL signals are transmitted, received, synchronized, routed, and simulated by Common Data Link Interface Boxes (CIBs).
The FY06 Authorization Act (Public Law ) requires use of CDL for all imagery, unless a waiver is granted. The primary reason waivers are granted is the inability to carry a 300-pound radio on a small (30-pound) aircraft. Emerging technology was expected to yield a 2-pound version by the end of the decade (2010).
The Tactical Common Data Link (TCDL) is a secure data link being developed by the U.S. military to send secure data and streaming video links from airborne platforms to ground stations. The TCDL can accept data from many different sources, then encrypt, multiplex, encode, transmit, demultiplex, and route this data at high speeds. It uses a Ku narrowband uplink that is used for both payload and vehicle control, and a wideband downlink for data transfer.
The TCDL uses both directional and omnidirectional antennas to transmit and receive the Ku-band signal. The TCDL was designed for UAVs, specifically the MQ-8B Fire Scout, as well as crewed non-fighter environments. The TCDL transmits radar, imagery, video, and other sensor information at rates from 1.544 Mbit/s to 10.7 Mbit/s over ranges of 200 km. It has a bit error rate of 10⁻⁶ with COMSEC and 10⁻⁸ without COMSEC. It is also intended that the TCDL will in time support the required higher CDL rates of 45, 137, and 274 Mbit/s.
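Some rough link arithmetic follows from the figures above; the sketch below computes the expected number of errored bits per second at a given bit error rate and the time to move a payload at the top CDL rate. The 1 GB payload size is an assumption for illustration.

```python
# Rough link arithmetic for the rates quoted above: expected errored bits
# per second at a given bit error rate, and the time to move a payload at
# the top CDL rate. The 1 GB payload is hypothetical.

def errored_bits_per_second(rate_bps: float, ber: float) -> float:
    return rate_bps * ber

def transfer_seconds(payload_bytes: float, rate_bps: float) -> float:
    return payload_bytes * 8.0 / rate_bps

print(f"{errored_bits_per_second(10.7e6, 1e-6):.1f} errored bits/s")       # ~10.7
print(f"{transfer_seconds(1e9, 274e6):.1f} s to move 1 GB at 274 Mbit/s")  # ~29.2
```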
References
L-3 business segments
Avionics Systems Standardisation Committee
Secure communication
Military communications | Common Data Link | [
"Engineering"
] | 442 | [
"Military communications",
"Telecommunications engineering"
] |
11,994,387 | https://en.wikipedia.org/wiki/Niederdorla | Niederdorla is a village and a former municipality in the Unstrut-Hainich-Kreis district of Thuringia, Germany. One of the possible geographical centres of Germany is within its area. The nearest city is Erfurt, which also is the capital city of Thuringia. Since 31 December 2012, it has been part of the municipality of Vogtei.
Geographical centre of Germany
Niederdorla claims to be the most central municipality in Germany. A plaque was erected and a lime tree planted at the spot after the 1990 German reunification. The point was confirmed as the centroid of the extreme coordinates by the Dresden University of Technology. Niederdorla also contains the centre of gravity (equilibrium point), which lies a short distance to the southwest.
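The "centroid of the extreme coordinates" is simply the midpoint of the bounding box spanned by a territory's extreme points. The sketch below contrasts it with the plain average of a set of points, using approximate values for Germany's extreme coordinates; the resulting midpoint lands near Niederdorla.

```python
# The "centroid of the extreme coordinates" is the midpoint of the bounding
# box of the extreme points, which generally differs from the average of
# all points. The coordinates below are approximate (lat, lon) values for
# Germany's southern, northern, western and eastern extreme points.

def bounding_box_centre(points):
    lats = [p[0] for p in points]
    lons = [p[1] for p in points]
    return ((min(lats) + max(lats)) / 2.0, (min(lons) + max(lons)) / 2.0)

def mean_point(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

extremes = [(47.27, 10.0), (55.06, 8.4), (51.0, 5.87), (51.27, 15.04)]
print("bounding-box centre:", bounding_box_centre(extremes))  # ~(51.17, 10.46)
print("mean of points     :", mean_point(extremes))
```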
People from Niederdorla
Matthias Weckmann, born c. 1616 in Niederdorla, died 1674 in Hamburg, Baroque organist and composer
See also
Central Germany (geography)
References
Former municipalities in Thuringia
Geographical centres | Niederdorla | [
"Physics",
"Mathematics"
] | 207 | [
"Point (geometry)",
"Geometric centers",
"Geographical centres",
"Symmetry"
] |
11,996,151 | https://en.wikipedia.org/wiki/Sympathetic%20cooling | Sympathetic cooling is a process in which particles of one type cool particles of another type.
Typically, atomic ions that can be directly laser cooled are used to cool nearby ions or atoms, by way of their mutual Coulomb interaction. This technique is used to cool ions and atoms that cannot be cooled directly by laser cooling, which includes most molecular ion species, especially large organic molecules. However, sympathetic cooling is most efficient when the mass/charge ratios of the sympathetic- and laser-cooled ions are similar.
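A toy model gives some intuition for the mass-matching requirement: in a head-on elastic collision, the fraction of kinetic energy transferred from a moving particle to a stationary one is 4·m1·m2/(m1+m2)², which peaks at unity for equal masses. This hard-sphere picture is only a rough analogy for the long-range Coulomb interaction in a trap, but the trend is the same, as the sketch below shows.

```python
# Idealized two-body picture of why similar masses help: in a head-on
# elastic collision the fraction of kinetic energy handed from a moving
# particle to a stationary one is 4*m1*m2/(m1+m2)**2, peaking at 1.0 for
# equal masses. Toy model only; real sympathetic cooling proceeds through
# the mutual Coulomb interaction rather than hard-sphere collisions.

def energy_transfer_fraction(m1: float, m2: float) -> float:
    return 4.0 * m1 * m2 / (m1 + m2) ** 2

for m2 in (1.0, 2.0, 10.0, 100.0):
    print(f"mass ratio 1:{m2:<5g} -> {energy_transfer_fraction(1.0, m2):.3f}")
```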
The cooling of neutral atoms in this manner was first demonstrated by Christopher Myatt et al. in 1997. Here, a technique using electric and magnetic fields was employed, in which atoms with spin in one direction were more weakly confined than those with spin in the opposite direction. Weakly confined atoms with high kinetic energy could escape more easily, lowering the total kinetic energy and thereby cooling the strongly confined atoms.
Myatt et al. also showed the utility of their version of sympathetic cooling for the creation of Bose–Einstein condensates.
References
Atomic, molecular, and optical physics
Cooling technology
Thermodynamics | Sympathetic cooling | [
"Physics",
"Chemistry",
"Mathematics"
] | 229 | [
"Dynamical systems",
"Nuclear and atomic physics stubs",
" molecular",
"Thermodynamics",
"Nuclear physics",
"Atomic",
" and optical physics"
] |
11,996,219 | https://en.wikipedia.org/wiki/Gooseneck%20%28piping%29 | A gooseneck (or goose neck) is a 180° pipe fitting at the top of a vertical pipe that prevents entry of water. Common implementations of goosenecks are ventilator piping or ducting for bathroom and kitchen exhaust fans, ship holds, landfill methane vent pipes, or any other piping implementation exposed to the weather where water ingress would be undesired. It is so named because of the similarity of the fitting's shape to the bend in a goose's neck.
Gooseneck may also refer to a style of kitchen or bathroom faucet with a long vertical pipe terminating in a 180° bend.
To avoid hydrocarbon accumulation, a thermosiphon should be installed at the low point of the gooseneck.
Gooseneck, Lead (pigtail)
Leaded goosenecks are short sections of lead pipe (1 to 2 ft long) used from the early 1900s up to World War Two in supplying water to a customer. These lead tubes could be easily bent, and allowed for a flexible connection between rigid service piping. The bent segments of pipe often took the shape of a goose's neck, and are referred to as "lead goosenecks." Lead is no longer permitted in new water systems or new building construction.
Goosenecks (also referred to as pigtails) are in-line components of a water service (i.e. piping, valves, fittings, tubing, and accessories) running from the distribution system water main to a meter or building inlet. The valve used to connect a small-diameter service line to a water main is called a corporation stop (also called a tap, or corp stop). One gooseneck joins the corporation stop to the water service pipe work. A second gooseneck links the supply pipeline to a water meter located outside the building.
See also
Swan neck duct
Swan neck flask
Trap (plumbing)
References
Piping | Gooseneck (piping) | [
"Chemistry",
"Engineering"
] | 390 | [
"Piping",
"Chemical engineering",
"Mechanical engineering",
"Building engineering"
] |
11,996,462 | https://en.wikipedia.org/wiki/Kashiwazaki-Kariwa%20Nuclear%20Power%20Plant | The Kashiwazaki-Kariwa Nuclear Power Plant is a large, modern nuclear power plant, housing the world's first advanced boiling water reactor (ABWR). The campus spans the towns of Kashiwazaki and Kariwa in Niigata Prefecture, Japan, on the coast of the Sea of Japan, where it gets cooling water. The plant is owned and operated by Tokyo Electric Power Company (TEPCO), and it is the largest nuclear generating station in the world by net electrical power rating.
On 16 July 2007, the Chūetsu offshore earthquake took place, with its epicenter located only about 19 km from the plant. The earthquake registered Mw 6.6, ranking it among the strongest earthquakes to occur in the immediate range of a nuclear power plant. This shook the plant beyond its design basis and initiated an extended shutdown for inspection, which indicated that greater earthquake-proofing was needed before operation could be resumed. The plant was completely shut down for 21 months following the earthquake. Unit 7 was restarted after seismic upgrades on 19 May 2009, followed later by units 1, 5, and 6. (Units 2, 3, and 4 were not restarted by the time of the March 2011 earthquake.)
The four restarted and operating units at the plant were not affected by the 11 March 2011 earthquake, but thereupon all units were shut down to carry out safety improvements. TEPCO regained permission to restart units 6 and 7 from the Nuclear Regulation Authority (NRA) in 2017, but throughout 2023, all units remained idle. In December 2023, the NRA finally approved the reloading of fuel at the plant, citing improvements in the safety management system. As of 2024, TEPCO is seeking permission from local authorities to restart the plant again.
Reactors
There are seven reactor units spread along the campus coastline. Numbering starts with Unit 1, the southernmost unit, and runs through Unit 4; there is then a large green space between Units 4 and 7, after which the line continues with Units 6 and 5.
The power installation costs for units at this site reflect the general trend in costs of nuclear plants: capital costs increased through the 1980s but have come down for more recent builds. The last two units were the first Advanced Boiling Water Reactors (ABWRs) ever built.
Performance
Operating a single large plant comprising this many reactors has several economic advantages. One such benefit is the limited impact of single-reactor refueling outages during the replacement cycle; one dormant reactor has minimal impact on the plant's net power production. A smooth transition was seen in the power production history of the plant up through the time the last two units were built. Currently, however, there are no active reactors at the Kashiwazaki-Kariwa plant. TEPCO has outlined plans to restart Reactors 6 and 7 and is awaiting approval from the government and citizens before they may be restarted.
Partial shutdowns
In February 1991, Unit 2 was automatically shut down following a sudden drop in oil pressure inside the steam turbine.
On 18 July 1997, radioactive steam leaked from a gauge within Unit 7 of the Kashiwazaki-Kariwa plant. In May, a burst tube had delayed trial runs at the plant, and earlier in July smoke had been found coming from plant machinery.
In January 1998, Unit 1 was shut down after increasing radiation levels in the steam driving the turbine triggered alarms. The levels were reportedly 270 times the expected operating level.
Reactors at the plant were shut down one by one following the 2002 discovery that TEPCO had deliberately falsified data surrounding safety inspections. The first reactor was taken offline 9 September 2002, and the final reactor was taken offline 27 January 2003. The newest units, the more inherently safe ABWRs, were taken back online the quickest and suffered the smallest effect. Units 1, 2, and 3, on the other hand, generated no electricity during the fiscal year of 2003.
Complete shutdowns
Units 1-4 were completely shut down in 2008. Only Unit 1 was temporarily restarted in 2010–2011. Unit 5 was temporarily restarted between 2010 and 2012 after a shut down in 2007.
Following the Fukushima disaster in 2011, Unit 1 was shut down again in 2012 along with units 5–7. As of May 2022, the plant remains idle.
Fuel
All reactors continue to use low-enriched uranium as the nuclear fuel; however, plans have been drafted by TEPCO to use MOX fuel in some of the reactors with the permission of the Japanese Atomic Energy Commission (JAEC). A public referendum in the Kariwa village in 2001 voted 53% against use of the new fuel. After the 2002 TEPCO data fabrication scandals, the president at the time announced that plans to use the MOX fuel at the KK plant would be suspended indefinitely.
Earthquakes
Earthquake resistant design features
Sand at the sites was removed and the reactor was built on firm ground. Adjacent soil was backfilled. Basements of the reactor buildings extend several levels down (maximum of 42 m below grade). These underground elements stabilize the reactor buildings, making them less likely to suffer sway due to resonance vibrations during an earthquake. As with other Japanese power plants, reactors at the plant were built according to earthquake-resistance standards, which are regulated by law and the JAEC.
In 2006 safety standards for earthquake resistance in Japan's nuclear plants were modified and tightened. After the 2007 earthquake suspicions arose that another fault line may be closer to the plant than originally thought, possibly running straight through the site.
2007 Chūetsu offshore earthquake
The KK plant was 19 kilometers away from the epicenter of the magnitude 6.6 2007 Chūetsu offshore earthquake, which took place 10:13 a.m., 16 July 2007. Peak ground acceleration of 6.8 m/s2 (0.69 g) was recorded in Unit 1 in the east–west direction, above the design specification for safe shutdown of 4.5 m/s2, and well above the rapid restart specification for key equipment in the plant of 2.73 m/s2. Units 5 and 6 also recorded shaking over this limit. Shaking of 20.58 m/s2 was recorded in the turbine building of Unit 3.
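For clarity, the snippet below converts the recorded peak accelerations into units of g and compares them against the quoted safe-shutdown design value.

```python
# Converting the recorded peak ground accelerations above into units of g
# and comparing them with the quoted safe-shutdown design value.

G = 9.80665  # standard gravity, m/s^2

readings = {  # m/s^2, from the paragraph above
    "Unit 1 (east-west)": 6.8,
    "Unit 3 turbine building": 20.58,
}
design_safe_shutdown = 4.5  # m/s^2

for where, a in readings.items():
    print(f"{where}: {a} m/s^2 = {a / G:.2f} g, "
          f"{a / design_safe_shutdown:.1f}x the safe-shutdown design value")
```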
Those nearby saw black smoke, which was later confirmed to come from an electrical transformer that had caught fire at Unit 3. The fire was put out by noon on the day of the quake, about 2 hours after it started. The 3-story transformer building was extensively charred.
Reactor units 3, 4, and 7 all automatically powered down safely in response to the quake. Unit 2 was in startup mode and not online. Units 1, 5, and 6 were already shut down for inspection at the time. TEPCO was ready to restart some of the units as of the next day, but the trade ministry ordered the plant to remain idle until additional safety checks could be completed. On Wednesday, 18 July, the mayor of Kashiwazaki ordered operations at the plant to be halted until its safety could be confirmed. The Nikkei reported that government safety checks could delay the restart for over a year, without stating the source of the information. For comparison, in 2005, a reactor at the Onagawa Nuclear Power Plant was closed for five months following an earthquake.
IAEA inspections
The International Atomic Energy Agency (IAEA) offered to inspect the plant, which was initially declined. The governor of Niigata prefecture then sent a petition to Shinzo Abe. On Sunday, 22 July 2007, the Nuclear and Industrial Safety Agency (NISA) announced that it would allow inspectors from the United Nations to review the damage.
A team from the IAEA carried out a four-day inspection, as investigations by Japan's Nuclear and Industrial Safety Agency (NISA), Nuclear Safety Commission (NSC) and the Tokyo Electric Power Company (TEPCO) continued. The team of the IAEA confirmed that the plant had "shut down safely" and that "damage appears less than expected." On 19 August, the IAEA reported that, for safety-related and nuclear components, "no visible significant damage has been found" although "nonsafety related structures, systems and components were affected by significant damage".
The official report issued by the IAEA stated that the plant "behaved in a safe manner" after a 4-day inspection. Other observations were:
"Safety related structures, systems and components of the plant seem to be in a general condition, much better than might be expected for such a strong earthquake, and there is no visible significant damage"
Conservatisms introduced in the construction of the plant compensated for the magnitude of the earthquake being so much greater than planned for.
Recommendations included:
A re-evaluation of the seismic safety.
Detailed geophysical investigations
External inspections of the plant were planned to be completed by the end of July 2008. The schedule was confirmed on 10 July 2008 by the site superintendent, Akio Takahashi. On 15 July, Akira Amari said his ministry was also continuing its own tests. An IAEA workshop in June 2008 recognized that the earthquake exceeded the "seismic input" used in the design of that plant, and that regulations played a critical role in keeping the plant safe. However, TEPCO determined that significant upgrades were required to cope with the improved understanding of the seismic environment and possible shaking effects at the plant site.
The IAEA sent a team for a follow-up visit in January 2008. They concluded that much high-quality inspection work had been undertaken and noted the likely improvements to nuclear seismic design worldwide that may result from this process. An additional visit from an IAEA team of 10 experts occurred in December 2008, noting that the "unexpectedly large ground motions" were now well understood and could be protected against, and further confirming the safe performance of the plant during the quake.
Radioactivity releases
Initially, it was thought that some water (estimated to be about 1.5 L) from the spent fuel pool leaked into the Sea of Japan as a result of the quake. Later, more detailed reports confirmed a number of releases, though most of them were far less active than common natural radiation sources. According to the NISA, this was the first time a release of radioactive material happened as a result of an earthquake.
0.6 litres of slightly radioactive water leaked from the third floor of the Unit 6 reactor building, which contained 280 becquerels of radioactivity. (For reference, a household smoke detector typically contains tens of kilobecquerels of radioactivity, and a living adult human typically has around 8000 Bq of naturally occurring radioactivity inside their body).
0.9 litres of slightly radioactive water leaked from the inner third floor of the Unit 6 reactor building, containing 16,000 Bq of radioactivity.
From Unit 6, 1.3 cubic meters of water from the spent fuel pool leaked through a drainage pipe and ultimately into the Sea of Japan. The water contained 80 Bq/L, totaling 90,000 Bq in the release. For comparison, an onsen located in Misasa, Tottori, Japan uses water with a large concentration of radon, which gives it a radioactivity of 9300 Bq/L. The leaked water from the plant did not pose a health risk even before being diluted. Towels were used to mop up the water.
On Wednesday, 18 July 2007, at Unit 7, radioactive iodine was found leaking from an exhaust pipe by a government inspector; the leak began between Tuesday and Wednesday and was confirmed to have stopped by Thursday night. The amount of iodine released was estimated at 12 million Bq, and the total amount of particulate radioactivity released into the air was about 402,000,000 Bq. This was said to have been one ten-millionth of the legal limit. It is estimated that this caused an unintentional dose of 0.0002 nanosieverts (nSv) per person, distributed among around 10 million people. The limit for dose to the public from the operations of a nuclear plant in Japan in one year is 1100 nSv, and, for comparison, natural background radiation worldwide for humans is on average around 2,400,000 nSv/year (2.4 mSv/year). Regarding the cause, Yasuhisa Shiozaki said "This is an error of not implementing the manual," because the vent should have been closed.
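To put the release figures in perspective, the sketch below compares them against the everyday benchmarks mentioned in the text (a human body's natural activity and the Misasa onsen water). All numbers are taken from the paragraphs above.

```python
# Side-by-side comparison of the release figures quoted above against the
# everyday benchmarks from the same paragraphs.

releases_bq = {
    "Unit 6 floor leak": 280,
    "Unit 6 inner-floor leak": 16_000,
    "spent fuel pool water to sea": 90_000,
}
human_body_bq = 8_000    # typical adult, natural radioactivity
onsen_bq_per_l = 9_300   # Misasa hot spring water
leak_bq_per_l = 80       # leaked pool water concentration

for name, bq in releases_bq.items():
    print(f"{name}: {bq:,} Bq ({bq / human_body_bq:.2f}x one human body)")
print(f"Onsen water is {onsen_bq_per_l / leak_bq_per_l:.0f}x more radioactive "
      f"per litre than the leaked pool water")
```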
Other problems
About 400 drums containing low-level nuclear waste stored at the plant were knocked over by the aftershocks, 40 losing their lids. Company officials reported on 17 July that traces of the radioactive materials cobalt-60, iodine, and chromium-51 had been released into the atmosphere, presumably from the containers losing their lids.
Criticisms of the company's response to the event included the time it took the company to report events and the certainty with which it was able to locate the source of various problems. TEPCO's president commented that the site was a "mess" after visiting post-quake. While the reported amount of leaked radioactivity remained far below what poses a danger to the public, details changed multiple times in the few days after the quake and attracted significant media attention. After the quake, TEPCO was reportedly investigating 50 separate cases of "malfunctioning and trouble," a number that was later revised to 63 cases. Even the radioactivity sensors around the site encountered trouble: readings from these devices are normally available online, giving the public a direct measure of ambient radioactivity around the site, but due to damage sustained during the earthquake they stopped reporting to the website. The company published an apology on that page, and data from the devices covering the offline period was released later, showing no artificial abnormalities (note that the readings naturally fluctuate depending on whether it is raining or snowing and a host of other factors).
TEPCO's president maintained that fears of a leak of radioactive material were unfounded (since the amount leaked into the ocean was a billionth of the legal limit), but many international reporters expressed distrust of the company that has a history of cover-up controversies. The IAEA's Mohamed ElBaradei encouraged full transparency throughout the investigation of the accident so that lessons learned could be applied to nuclear plants elsewhere.
Impact
News of the earthquake, combined with the fact that replacement power sources (such as oil and gas) were at record high prices, caused TEPCO's stock to plummet 7.5%, the largest drop in seven years, which amounted to around US$4.4 billion lost in stock capitalization. This made the event even more costly to the company than the 2002 data falsification scandal. Additionally, TEPCO warned that the plant closure could cause a power shortage during the summer months. Trade minister Akira Amari requested that business users cut electricity use, and in August TEPCO was forced to reduce electricity supplies for industrial uses, the first time it had resorted to such measures in 17 years.
Reports of the leak caused thousands of cancellations at resorts and hotels along the Sea of Japan coast, even as far as Murakami, Niigata (140 km northeast) and Sado Island. Inn owners have said that rumors have been more damaging than direct effects of the earthquake.
The shutdown forced TEPCO to run natural gas plants in place of this plant, not only increasing Japan's demand for the fuel and raising its price internationally, but also increasing carbon dioxide output such that Japan would have difficulty meeting its Kyoto Protocol commitments.
Restart
After 16 months of comprehensive component-based assessment and upgrades on all seven reactors, this phase of post-earthquake response was almost complete, with Reactor 7 fully upgraded to cope with the seismic environment. On 8 November 2008, fuel loading in reactor Unit 7 started, preparatory to a period of system safety tests on that reactor. On 19 February 2009 TEPCO applied to the local governance to restart Unit 7 after having obtained approval from the national government and regulators. Local government agreement for restart was granted in May and electrical grid power was supplied from Unit 7 at 20% power on 19 May. The reactor was raised to 100% power on 5 June 2009 as part of a series of restart tests.
Unit 6 restarted on 26 August 2009 and reconnected to the grid on 31 August.
Unit 1 restarted on 31 May 2010 after loading with fuel (along with Unit 5) earlier in the year, and was generating grid power by 6 June 2010.
Unit 5 recommenced grid generation on 26 November 2010, in the same week that fuel loading for Unit 3 started.
Units 2, 3, and 4 were not restarted.
2011 Tōhoku earthquake
The reactors were shut down indefinitely following the 2011 Tōhoku earthquake and tsunami. Plans to restart units 6 and 7 were delayed after problems developed with the intruder detection system.
Facility improvements after Fukushima I nuclear accidents
On 21 April 2011, after the Fukushima Daiichi nuclear disaster, TEPCO announced a plan to build up the seawall to a height of 15 m (49.2 ft) above sea level and spanning more than 800 m (2,624 ft) in length for units 1–4, and more than 500 m (1,640 ft) for units 5–7 by June 2013. The height of a potential tsunami was assumed to be 3.3 m. Also, plans were made to rebuild the radioactive overflow storage pool to be completed by September 2012.
2011–2012: Survey on tsunamis in the past
On 10 November 2011, TEPCO announced a survey for signs of past tsunamis in this area. With drills, soil samples were to be taken of sediment layers dating from the year 1600 back to 7000 years ago, at nine locations around the plant on the coast of central Japan. This survey, the first TEPCO ever conducted on this subject, started on 15 November 2011 and was planned for completion in April 2012; it was done to examine the possibility of tsunamis higher than had been expected when the plant was designed and built.
On 26 April 2012, TEPCO said that it would recalculate the risks of earthquakes and tsunamis. This was done after reports, as published by four prefectures around the nuclear Plant, re-estimated the risks of potential earthquakes in the region:
Tottori Prefecture: a 220 kilometer long fault might trigger an 8.15 magnitude earthquake
Shimane Prefecture: 8.01 magnitude
Ishikawa Prefecture: 7.99 magnitude
The calculated earthquake magnitudes correspond to almost three times the energy assumed in TEPCO's safety assessments for the plant. These were based on a magnitude 7.85 quake caused by a 131 kilometer long fault near Sado Island in Niigata and a 3.3 meter-high tsunami. To withstand this, an embankment was under construction to resist tsunami waves up to 15 meters high. The recalculation could have consequences for the stress tests and safety assessments for the plant.
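The "almost three times the energy" comparison follows from the standard relation between moment magnitude and radiated seismic energy, E2/E1 = 10^(1.5·(M2 − M1)); the snippet below evaluates it for the three prefecture estimates against TEPCO's magnitude 7.85 design basis.

```python
# Energy ratio between two moment magnitudes: E2/E1 = 10**(1.5 * (M2 - M1)).

def energy_ratio(m_new: float, m_old: float) -> float:
    return 10.0 ** (1.5 * (m_new - m_old))

baseline = 7.85  # TEPCO's original design-basis quake
for m in (8.15, 8.01, 7.99):  # the prefecture re-estimates quoted above
    print(f"M{m} releases {energy_ratio(m, baseline):.2f}x the energy of M{baseline}")
# M8.15 -> 2.82x, i.e. "almost three times"
```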
After the planned revision of the safety standards in July 2013, some faults under the reactors were considered to be geologically active. This was found by the Japanese news agency Kyodo News on 23 January 2013 in papers and other material published by TEPCO. Under the new regulations, geologic faults would be considered active if they had moved within the last 400,000 years, instead of the less stringent standard of 120,000 years, as was formerly accepted.
Two faults, named "Alpha" and "Beta," are present under Reactors 1 and 2. Other faults are situated under Reactor 3 and Reactor 5, as well as underneath the building of Reactor 4. Under the new regulations, the beta-fault could be classified as active because it moved a ground layer including volcanic ash around 240,000 years ago. The outcome of the study might trigger a second survey by the newly installed Japanese regulator NRA. In January 2013, studies were conducted or planned on geological faults around six Japanese reactor sites. The Kashiwazaki-Kariwa plant would be number seven.
Current status
In 2017, TEPCO contemplated a restart of the plant between 2019 and 2021.
Kashiwazaki-Kariwa is one of the 44 nuclear power plants in Japan that have been rendered inactive in the years following the Fukushima Daiichi accident. By October 2020, the Japanese government had inspected the plant, and by January 2021, TEPCO had completed its improvements on Unit 7. The company outlined plans to restart the reactor as early as the end of the Japanese fiscal year ending 31 March 2022. However, the Nuclear Regulation Authority released a report in April 2021 indicating that there were serious security infractions and enacted an order that postponed the restart indefinitely.
Following the April 2021 NRA report, TEPCO admitted that its intruder detection system had been left broken in order to reduce costs, and confirmed that an unauthorized person had used a colleague's ID card to access the plant's central control room in September 2020. In response, TEPCO plans to implement anti-terrorism measures, install an intrusion detection system, and hire an additional 30 guards to protect nuclear material at the facility. The power company intends to invest ¥20 billion (US$165.4 million) in these security measures from 31 March 2023 to 31 March 2028.
According to a report from TEPCO, the NRA began an Additional Inspection (Phase II) to monitor the new security measures at the plant. In April 2022, it was confirmed that the security flaws revealed in the NRA's April 2021 report were limited to Kashiwazaki-Kariwa and not indicative of a widespread issue throughout the company's culture. TEPCO is planning to move nearly 40% of its nuclear division employees to Niigata Prefecture in preparation for its plans to restart Reactor 7 and to begin rebuilding trust with citizens, but the future of Kashiwazaki-Kariwa is still uncertain. As of 26 May 2022, the local government had yet to approve TEPCO's plans for a restart. According to a 2021 survey by Niigata Nippo, just over half of Niigata prefecture residents oppose a nuclear restart.
In October 2022, Japanese Prime Minister Kishida Fumio unveiled a new strategy for Japan's nuclear power plants regarding new construction projects and license extensions. Included in this strategy is a plan to restart units at the Kashiwazaki-Kariwa Nuclear Power Plant by the summer months of 2023. However, the feasibility of this timeline has been questioned by journalists given the number of safety issues that have come to light at the plant in the last few years. Most of these issues relate to security discrepancies, such as a worker who, having forgotten his ID, borrowed a colleague's card to enter crucial areas. A government inspection of Unit 7 in October 2020 concluded that the majority of construction had been finished by January the following year. TEPCO maintains that it is doing everything in its power to meet NRA guidelines.
In late 2023, the national regulator lifted the operational ban on the plant, allowing it to begin applying for permits from local governments to reopen.
On Monday, 8 April 2024, Japan's Nuclear Regulation Authority approved plans submitted by TEPCO to fuel reactor No. 7. TEPCO announced it would begin fueling reactor 7 starting around 4 pm on 14 April, a process which typically takes about two weeks. Operation of reactor 7 would still require completion of additional inspections and the approval of the Niigata Prefecture Governor. It has been reported that reactor 7 is scheduled to restart operation in October 2024 "under a base-case scenario".
See also
Katsuhiko Ishibashi
Pacific Ring of Fire
List of nuclear power plants in Japan
References
External links
Niigata Chuetsu Offshore earthquake
Niigata Chuetsu Offshore Earthquake impacts Japan Atomic Industrial Forum
Earthquake impacts Japan Nuclear Technology Institute
View on earthquake events Japan's Nuclear Safety Commission
Chairman's statement
Kashiwazaki-Kariwa Earthquake Japan's Citizens' Nuclear Information Center Report
Kashiwazaki nuclear plant report from the scene Greenpeace
Insight: Where not to build nuclear power stations New Scientist
Japan’s Quake-Prone Atomic Plant Prompts Wider Worry The New York Times
Entire plant
Tokyo Electric Company Official Site for Kashiwazaki-Kariwa 東京電力・柏崎刈羽原子力発電所 (in Japanese)
This shows output power, click on icons at top left to see three different radiation monitors.
Nuclear TEPCO-Power Plants (in English)
List of events at the plant (in English)
1980s establishments in Japan
Nuclear power stations using advanced boiling water reactors
Earthquake engineering
Buildings and structures in Niigata Prefecture
Nuclear power stations in Japan
Tokyo Electric Power Company
Kashiwazaki, Niigata
Kariwa, Niigata | Kashiwazaki-Kariwa Nuclear Power Plant | [
"Engineering"
] | 5,066 | [
"Structural engineering",
"Earthquake engineering",
"Civil engineering"
] |
5,624,034 | https://en.wikipedia.org/wiki/Chemical%20compound%20microarray | A chemical compound microarray is a collection of organic chemical compounds spotted on a solid surface, such as glass and plastic. This microarray format is very similar to DNA microarray, protein microarray and antibody microarray. In chemical genetics research, they are routinely used for searching proteins that bind with specific chemical compounds, and in general drug discovery research, they provide a multiplex way to search potential drugs for therapeutic targets.
There are three different forms of chemical compound microarrays based on the fabrication method. The first form covalently immobilizes the organic compounds on the solid surface with diverse linking techniques; this platform is usually called the Small Molecule Microarray, which was invented and advanced by Dr. Stuart Schreiber and colleagues. The second form spots and dries organic compounds on the solid surface without immobilization; this platform is commercially known as Micro Arrayed Compound Screening (μARCS) and was developed by scientists at Abbott Laboratories. The last form spots organic compounds in a homogeneous solution without immobilization or drying; this platform was developed by Dr. Dhaval Gosalia and Dr. Scott Diamond and was later commercialized as the DiscoveryDot technology by Reaction Biology Corporation.
Polymer Microarrays
Polymer microarrays have been developed to allow screening for new polymeric materials to direct different tissue lineages. Research has also been directed towards studying the surface chemistry of these arrays to determine which surface chemistries control cell adhesion, although concerns have been raised as to the influence of the substrate on measurements and the questionable statistical interpretation of results.
The lack of control in the production of many of these polymer arrays suggests that any practical application of these technologies will be limited. This is particularly true for the in situ polymerisation of acrylate monomers in minute volumes.
References
Uttamchandani, M. et al. (2005) "Small molecule microarrays, recent advances and applications". Curr Opin Chem Biol. 9, 4–13 .
Walsh, D.P. and Chang, Y.T. (2004) "Recent Advances in Small Molecule Microarrays, Applications and Technology". Comb Chem High Throughput Screen. 7, 557–564 .
Hoever, M. and Zbinden, P. (2004) "The evolution of microarrayed compound screening. Drug Discov". Today 9, 358–365.
Gosalia, DN and Diamond, SL. (2003) "Printing Chemical libraries on microarrays for fluid phase nanoliter reactions". Proc. Natl. Acad. Sci. USA, 100, 8721–8726 .
Ma, H. et al. (2005) "Nanoliter Homogenous Ultra High Throughput Screening Microarray for Lead Discoveries and IC50 Profiling". Assay Drug Dev. Technol. 3, 177–187 .
Horiuchi, K.Y. et al. (2005) "Microarrays for the functional analysis of the chemical-kinase interactome", accepted, J Biomol Screen. 11, 48–56 .
Ma, H. and Horiuchi, K.Y. (2006) "Chemical Microarray: a new tool for drug screening and discovery", Drug Discovery Today, 11, 661–668 .
Nanotechnology
Microarrays | Chemical compound microarray | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 710 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Materials science",
"Bioinformatics",
"Molecular biology techniques",
"Nanotechnology"
] |
5,625,309 | https://en.wikipedia.org/wiki/Energy%20gap | In solid-state physics, an energy gap or band gap is an energy range in a solid where no electron states exist, i.e. an energy range where the density of states vanishes.
Especially in condensed matter physics, an energy gap is often known more abstractly as a spectral gap, a term which need not be specific to electrons or solids.
Band gap
If an energy gap exists in the band structure of a material, it is called a band gap. The physical properties of semiconductors are to a large extent determined by their band gaps, but for insulators and metals as well, the band structure—and thus any possible band gaps—governs their electronic properties.
Superconductors
For superconductors the energy gap is a region of suppressed density of states around the Fermi energy, with the size of the energy gap much smaller than the energy scale of the band structure. The superconducting energy gap is a key aspect in the theoretical description of superconductivity and thus features prominently in BCS theory. Here, the size of the energy gap indicates the energy gain for two electrons upon formation of a Cooper pair. If a conventional superconducting material is cooled from its metallic state (at higher temperatures) into the superconducting state, then the superconducting energy gap is absent above the critical temperature Tc, it starts to open upon entering the superconducting state at Tc, and it grows upon further cooling.
BCS theory predicts that the size of the superconducting energy gap for conventional superconductors at zero temperature scales with their critical temperature Tc: Δ(T=0) ≈ 1.764 kB·Tc (with Boltzmann constant kB).
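As a numerical illustration of this relation, the snippet below evaluates Δ(0) ≈ 1.764·kB·Tc for two textbook elemental superconductors; the critical temperatures used are approximate literature values.

```python
# Weak-coupling BCS gap Delta(0) ~= 1.764 * kB * Tc, evaluated for two
# elemental superconductors (critical temperatures are approximate).

kB = 1.380649e-23     # J/K
eV = 1.602176634e-19  # J per electronvolt

for name, tc in [("aluminium", 1.2), ("niobium", 9.3)]:
    gap_joules = 1.764 * kB * tc
    print(f"{name:9s}: Tc = {tc} K -> Delta(0) ~ {gap_joules / eV * 1e3:.2f} meV")
```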
Pseudogap
If the density of states is suppressed near the Fermi energy but does not fully vanish, then this suppression is called a pseudogap. Pseudogaps are experimentally observed in a variety of material classes; a prominent example is the cuprate high-temperature superconductors.
Hard gap vs. soft gap
If the density of states vanishes over an extended energy range, then this is called a hard gap. If instead the density of states exactly vanishes only for a single energy value (while being suppressed, but not vanishing for nearby energy values), then this is called a soft gap. A prototypical example of a soft gap is the Coulomb gap that exists in localized electron states with Coulomb interaction.
References
Electronic band structures
Superconductivity | Energy gap | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 492 | [
"Electron",
"Physical quantities",
"Superconductivity",
"Materials science",
"Electronic band structures",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
5,629,366 | https://en.wikipedia.org/wiki/Yrast | Yrast is a technical term in nuclear physics that refers to a state of a nucleus with a minimum of energy (when it is least excited) for a given angular momentum. Yr is a Swedish adjective sharing the same root as the English whirl. Yrast is the superlative of yr and can be translated whirlingest, although it literally means "dizziest" or "most bewildered". The yrast levels are vital to understanding reactions, such as off-center heavy ion collisions, that result in high-spin states.
Yrare is the comparative of yr and is used to refer to the second-least energetic state of a given angular momentum.
Background
An unstable nucleus may decay in several different ways: it can eject a neutron, proton, alpha particle, or other fragment; it can emit a gamma ray; it can undergo beta decay. Because of the relative strengths of the fundamental interactions associated with those processes (the strong interaction, electromagnetism, and the weak interaction respectively), they usually occur with frequencies in that order. Theoretically, a nucleus has a very small probability of emitting a gamma ray even if it could eject a neutron, and beta decay rarely occurs unless both of the other two pathways are highly unlikely.
In some instances, however, predictions based on this model underestimate the total amount of energy released in the form of gamma rays; that is, nuclei appear to have more than enough energy to eject neutrons, but decay by gamma emission instead. This discrepancy is explained by the energy stored in nuclear angular momentum, and documentation and calculation of yrast levels for a given system may be used to analyze such a situation.
The energy stored in the angular momentum of an atomic nucleus can also be responsible for the emission of larger-than-expected particles, such as alpha particles over single nucleons, because they can carry away angular momentum more effectively. This is not the only reason alpha particles are preferentially emitted, though; another reason is simply that alpha particles (He-4 nuclei) are energetically very stable in and of themselves.
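The yrast line of a well-deformed nucleus is often approximated by a rigid rotor, for which the lowest energy at angular momentum J grows as E(J) = (ħ²/2I)·J(J+1). The sketch below uses an assumed ħ²/2I of 15 keV, a typical scale for rare-earth rotors, purely for illustration.

```python
# Rigid-rotor sketch of a "yrast line": the lowest excitation energy at
# angular momentum J grows as E(J) = A * J * (J + 1), A = hbar^2 / (2 * I).
# A = 15 keV is an assumed, typical scale for a well-deformed rare-earth
# nucleus, used here purely for illustration.

A_KEV = 15.0  # hbar^2 / 2I, assumed

def yrast_energy_kev(j: int) -> float:
    return A_KEV * j * (j + 1)

for j in (2, 4, 8, 16):
    print(f"J = {j:2d} -> E_yrast ~ {yrast_energy_kev(j) / 1000:.2f} MeV")
```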
Yrast isomers
Sometimes there is a large gap between two yrast states. For example, the nucleus 95Pd has a 21/2 state that lies below the lowest 19/2, 17/2, and 15/2 states. This state does not have enough energy to undergo strong particle decay, and because of the large spin difference, gamma decay from the 21/2 state to the 13/2 state below is very unlikely. The more likely decay option is beta decay; the state is therefore an isomer, with an unusually long half-life of 14 seconds.
An exceptional example is the J=9 state of tantalum-180, which is a very low-lying yrast state only 77 keV above the ground state. The ground state has J=1, and the spin difference of 8 is too large for gamma decay to occur. Alpha and beta decay are also suppressed, so strongly that the resulting isomer, tantalum-180m, is effectively stable for all practical purposes, and has never been observed to decay. Tantalum-180m is the only currently known yrast isomer to be observationally stable.
Some superheavy isotopes (such as copernicium-285) have longer-lived isomers with half-lives on the order of minutes. These may be yrast isomers, but the exact angular momentum and energy are often hard to determine for these nuclides.
References
Swedish words and phrases
Nuclear physics
Angular momentum | Yrast | [
"Physics",
"Mathematics"
] | 739 | [
"Physical quantities",
"Quantity",
"Nuclear physics",
"Angular momentum",
"Momentum",
"Moment (physics)"
] |
5,630,017 | https://en.wikipedia.org/wiki/Cadiot%E2%80%93Chodkiewicz%20coupling | The Cadiot–Chodkiewicz coupling in organic chemistry is a coupling reaction between a terminal alkyne and a haloalkyne catalyzed by a copper(I) salt such as copper(I) bromide and an amine base. The reaction product is a 1,3-diyne or di-alkyne.
The reaction mechanism involves deprotonation by base of the terminal alkyne proton followed by formation of a copper(I) acetylide. A cycle of oxidative addition and reductive elimination on the copper centre then creates a new carbon-carbon bond.
Scope
Unlike the related Glaser coupling the Cadiot–Chodkiewicz coupling proceeds selectively and will only couple the alkyne to the haloalkyne, giving a single product. By comparison the Glaser coupling would simply produce a distribution of all possible couplings.
In one study the Cadiot–Chodkiewicz coupling has been applied in the synthesis of acetylene macrocycles starting from cis-1,4-diethynyl-1,4-dimethoxycyclohexa-2,5-diene. This compound is also the starting material for the dibromide, obtained by treatment with N-bromosuccinimide (NBS) and silver nitrate.
The coupling reaction itself takes place in methanol with piperidine, hydroxylamine hydrochloride, and copper(I) bromide.
See also
Glaser coupling – Another alkyne coupling reaction catalysed by a copper(I) salt.
Sonogashira coupling – Pd/Cu catalysed coupling of an alkyne with an aryl or vinyl halide
Castro–Stephens coupling – A cross-coupling reaction between a copper(I) acetylide and an aryl halide
References
Substitution reactions
Carbon-carbon bond forming reactions
Name reactions | Cadiot–Chodkiewicz coupling | [
"Chemistry"
] | 390 | [
"Coupling reactions",
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
4,226,265 | https://en.wikipedia.org/wiki/Fluorine-18 | Fluorine-18 (18F, also called radiofluorine) is a fluorine radioisotope which is an important source of positrons. It has a mass of 18.0009380(6) u and its half-life is 109.771(20) minutes. It decays by positron emission 96.7% of the time and electron capture 3.3% of the time. Both modes of decay yield stable oxygen-18.
Natural occurrence
Fluorine-18 is a natural trace radioisotope produced by cosmic ray spallation of atmospheric argon as well as by the reaction of protons with natural oxygen: 18O + p → 18F + n.
Synthesis
In the radiopharmaceutical industry, fluorine-18 is made using either a cyclotron or a linear particle accelerator to bombard a target, usually of natural or enriched [18O]water, with high-energy protons (typically ~18 MeV). The fluorine produced is in the form of a water solution of [18F]fluoride, which is then used in a rapid chemical synthesis of various radiopharmaceuticals. The labeled organic molecule is not made before the fluorine-18 is produced, as high-energy protons would destroy such molecules (radiolysis). Radiopharmaceuticals using fluorine-18 must therefore be synthesized after the fluorine-18 has been produced.
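The time pressure this imposes is easy to quantify: with the 109.771-minute half-life quoted above, activity falls as A(t) = A0·2^(−t/T½), so every stage of synthesis and transport costs a predictable fraction of the product, as the sketch below shows.

```python
# Exponential decay of fluorine-18 activity: A(t) = A0 * 2**(-t / T_HALF).
# Illustrates why the chemical synthesis after production must be rapid.

T_HALF_MIN = 109.771  # fluorine-18 half-life in minutes

def fraction_remaining(minutes: float) -> float:
    return 2.0 ** (-minutes / T_HALF_MIN)

for delay in (30, 60, 110, 220):  # example delays in minutes
    print(f"after {delay:3d} min: {fraction_remaining(delay) * 100:5.1f}% of activity left")
```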
History
First published synthesis and report of properties of fluorine-18 were in 1937 by Arthur H. Snell, produced by the nuclear reaction of 20Ne(d,α)18F in the cyclotron laboratories of Ernest O. Lawrence.
Chemistry
Fluorine-18 is often substituted for a hydroxyl group in a radiotracer parent molecule, due to similar steric and electrostatic properties. This may, however, be problematic in certain applications due to possible changes in molecular polarity.
Applications
Fluorine-18 is one of the early tracers used in positron emission tomography (PET), having been in use since the 1960s.
Its significance is due to both its short half-life and the emission of positrons when decaying.
Major medical uses of fluorine-18 include positron emission tomography (PET) imaging of the brain and heart; imaging of the thyroid gland; radiotracer imaging of bones, seeking cancers that have metastasized from other locations in the body; and radiation therapy for treating internal tumors.
Tracers include sodium fluoride, which can be useful for skeletal imaging as it displays high and rapid bone uptake accompanied by very rapid blood clearance, which results in a high bone-to-background ratio in a short time, and fluorodeoxyglucose (FDG), where the 18F substitutes for a hydroxyl group.
New dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies, which allows for positron emission tomography (PET) imaging of cancer. A Human-Derived, Genetic, Positron-emitting and Fluorescent (HD-GPF) reporter system uses a human protein, PSMA and non-immunogenic, and a small molecule that is positron-emitting (18F) and fluorescent for dual modality PET and fluorescence imaging of genome modified cells, e.g. cancer, CRISPR/Cas9, or CAR T-cells, in an entire mouse. The dual-modality small molecule targeting PSMA was tested in humans and found the location of primary and metastatic prostate cancer, fluorescence-guided removal of cancer, and detects single cancer cells in tissue margins.
References
Isotopes of fluorine
Medicinal radiochemistry
Positron emitters
Medical isotopes | Fluorine-18 | [
"Chemistry"
] | 785 | [
"Medicinal radiochemistry",
"Isotopes of fluorine",
"Isotopes",
"Medicinal chemistry",
"Chemicals in medicine",
"Medical isotopes"
] |
11,009,466 | https://en.wikipedia.org/wiki/Uridine%20diphosphate%20galactose | Uridine diphosphate galactose (UDP-galactose) is an intermediate in the production of polysaccharides. It is important in nucleotide sugars metabolism, and is the substrate for the transferase B4GALT5.
Sugar metabolism
Uridine diphosphate (UDP)-galactose is relevant to glycolysis. UDP-galactose is the activated form of Gal, a crucial monosaccharide building block for human milk oligosaccharides (HMOs). As the activated form of galactose, it serves as a donor molecule in the conversion of UDP-galactose to UDP-glucose. This conversion is a rate-limiting step: the pace of UDP-glucose production determines the completion of glycosylation reactions.
To further explain, UDP-galactose is derived from a galactose molecule, an epimer of glucose, and via the Leloir pathway it can be used as a precursor for the metabolism of glucose into pyruvate. When lactose is hydrolyzed, D-galactose enters the liver via the bloodstream. There, galactokinase phosphorylates it to galactose-1-phosphate using ATP. This compound then engages in a "ping-pong" reaction with UDP-glucose, catalyzed by uridylyltransferase, yielding glucose-1-phosphate and UDP-galactose. This glucose-1-phosphate feeds into glycolysis, while UDP-galactose undergoes epimerization to regenerate UDP-glucose.
See also
Galactose
UDP galactose epimerase
Uridine diphosphate
References
Coenzymes
Nucleotides | Uridine diphosphate galactose | [
"Chemistry"
] | 388 | [
"Organic compounds",
"Coenzymes"
] |
11,009,758 | https://en.wikipedia.org/wiki/Realizability | In mathematical logic, realizability is a collection of methods in proof theory used to study constructive proofs and extract additional information from them. Formulas from a formal theory are "realized" by objects, known as "realizers", in a way that knowledge of the realizer gives knowledge about the truth of the formula. There are many variations of realizability; exactly which class of formulas is studied and which objects are realizers differ from one variation to another.
Realizability can be seen as a formalization of the Brouwer–Heyting–Kolmogorov (BHK) interpretation of intuitionistic logic. In realizability the notion of "proof" (which is left undefined in the BHK interpretation) is replaced with a formal notion of "realizer". Most variants of realizability begin with a theorem that any statement that is provable in the formal system being studied is realizable. The realizer, however, usually gives more information about the formula than a formal proof would directly provide.
Beyond giving insight into intuitionistic provability, realizability can be applied to prove the disjunction and existence properties for intuitionistic theories and to extract programs from proofs, as in proof mining. It is also related to topos theory via realizability topoi.
Example: Kleene's 1945-realizability
Kleene's original version of realizability uses natural numbers as realizers for formulas in Heyting arithmetic. A few pieces of notation are required: first, an ordered pair (n,m) is treated as a single number using a fixed primitive recursive pairing function; second, for each natural number n, φn is the computable function with index n. The following clauses are used to define a relation "n realizes A" between natural numbers n and formulas A in the language of Heyting arithmetic, known as Kleene's 1945-realizability relation:
Any number n realizes an atomic formula s=t if and only if s=t is true. Thus every number realizes a true equation, and no number realizes a false equation.
A pair (n,m) realizes a formula A∧B if and only if n realizes A and m realizes B. Thus a realizer for a conjunction is a pair of realizers for the conjuncts.
A pair (n,m) realizes a formula A∨B if and only if the following hold: n is 0 or 1; and if n is 0 then m realizes A; and if n is 1 then m realizes B. Thus a realizer for a disjunction explicitly picks one of the disjuncts (with n) and provides a realizer for it (with m).
A number n realizes a formula A→B if and only if, for every m that realizes A, φn(m) realizes B. Thus a realizer for an implication corresponds to a computable function that takes any realizer for the hypothesis and produces a realizer for the conclusion.
A pair (n,m) realizes a formula (∃ x)A(x) if and only if m is a realizer for A(n). Thus a realizer for an existential formula produces an explicit witness for the quantifier along with a realizer for the formula instantiated with that witness.
A number n realizes a formula (∀ x)A(x) if and only if, for all m, φn(m) is defined and realizes A(m). Thus a realizer for a universal statement is a computable function that produces, for each m, a realizer for the formula instantiated with m.
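A minimal Python sketch may make the shape of these clauses concrete. The tuple encoding of formulas, the use of Python callables in place of the indexed computable functions φn, and the restriction of the implication check to a finite list of sample realizers are all simplifications for illustration, not part of Kleene's definition (which quantifies over all realizers of the hypothesis):

```python
# Formulas: ('atom', truth), ('and', A, B), ('or', A, B), ('imp', A, B).

def realizes(r, formula, samples=()):
    kind = formula[0]
    if kind == 'atom':          # any realizer works for a true atomic formula
        return formula[1]
    if kind == 'and':           # a pair of realizers for the two conjuncts
        n, m = r
        return realizes(n, formula[1]) and realizes(m, formula[2])
    if kind == 'or':            # n explicitly picks a disjunct, m realizes it
        n, m = r
        return (n == 0 and realizes(m, formula[1])) or \
               (n == 1 and realizes(m, formula[2]))
    if kind == 'imp':           # r maps realizers of A to realizers of B,
        return all(             # checked here only on finitely many samples
            realizes(r(m), formula[2])
            for m in samples if realizes(m, formula[1]))
    raise ValueError(kind)

A, B = ('atom', True), ('atom', False)
print(realizes((1, 0), ('or', B, A)))                                     # True
print(realizes(lambda m: (0, m), ('imp', A, ('or', A, B)), samples=[0]))  # True
```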
With this definition, the following theorem is obtained:
Let A be a sentence of Heyting arithmetic (HA). If HA proves A then there is an n such that n realizes A.
On the other hand, there are classical theorems (even propositional formula schemas) that are realized but which are not provable in HA, a fact first established by Rose. So realizability does not exactly mirror intuitionistic reasoning.
Further analysis of the method can be used to prove that HA has the "disjunction and existence properties":
If HA proves a sentence (∃ x)A(x), then there is an n such that HA proves A(n).
If HA proves a sentence A∨B, then HA proves A or HA proves B.
More such properties are obtained involving Harrop formulas.
Later developments
Kreisel introduced modified realizability, which uses typed lambda calculus as the language of realizers. Modified realizability is one way to show that Markov's principle is not derivable in intuitionistic logic. In contrast, it allows one to constructively justify the principle of independence of premise:
(¬A → ∃x B(x)) → ∃x (¬A → B(x)).
Relative realizability is an intuitionist analysis of computable or computably enumerable elements of data structures that are not necessarily computable, such as computable operations on all real numbers when reals can be only approximated on digital computer systems.
Classical realizability was introduced by Krivine and extends realizability to classical logic. It furthermore realizes axioms of Zermelo–Fraenkel set theory. Understood as a generalization of Cohen’s forcing, it was used to provide new models of set theory.
Linear realizability extends realizability techniques to linear logic. The term was coined by Seiller to encompass several constructions, such as geometry of interaction models, ludics, interaction graphs models.
Use in proof mining
Realizability is one of the methods used in proof mining to extract concrete "programs" from seemingly non-constructive mathematical proofs. Program extraction using realizability is implemented in some proof assistants such as Coq.
See also
Curry–Howard correspondence
Dialectica interpretation
Harrop formula
Notes
References
Kreisel G. (1959). "Interpretation of Analysis by Means of Constructive Functionals of Finite Types", in: Constructivity in Mathematics, edited by A. Heyting, North-Holland, pp. 101–128.
Kleene, S. C. (1973). "Realizability: a retrospective survey" from , pp. 95–112.
External links
Realizability Collection of links to recent papers on realizability and related topics.
Proof theory
Constructivism (mathematics) | Realizability | [
"Mathematics"
] | 1,317 | [
"Mathematical logic",
"Constructivism (mathematics)",
"Proof theory"
] |
11,012,831 | https://en.wikipedia.org/wiki/Bioenergetic%20systems | Bioenergetic systems are metabolic processes that relate to the flow of energy in living organisms. Those processes convert energy into adenosine triphosphate (ATP), which is the form suitable for muscular activity. There are two main forms of synthesis of ATP: aerobic, which uses oxygen from the bloodstream, and anaerobic, which does not. Bioenergetics is the field of biology that studies bioenergetic systems.
Overview
The process that converts the chemical energy of food into ATP (which can release energy) is not dependent on oxygen availability. During exercise, the supply and demand of oxygen available to muscle cells is affected by duration and intensity and by the individual's cardiorespiratory fitness level. It is also affected by the type of activity; for instance, during isometric activity the contracted muscles restrict blood flow (leaving oxygen and blood-borne fuels unable to be delivered to muscle cells adequately for oxidative phosphorylation). Three systems can be selectively recruited, depending on the amount of oxygen available, as part of the cellular respiration process to generate ATP for the muscles: the ATP–CP system, the anaerobic system and the aerobic system.
Adenosine triphosphate
ATP is the only usable form of chemical energy for musculoskeletal activity. It is stored in most cells, particularly in muscle cells. Other forms of chemical energy, such as those available from oxygen and food, must be transformed into ATP before they can be utilized by the muscle cells.
Coupled reactions
Since energy is released when ATP is broken down, energy is required to rebuild or resynthesize it. The building blocks of ATP synthesis are the by-products of its breakdown; adenosine diphosphate (ADP) and inorganic phosphate (Pi). The energy for ATP resynthesis comes from three different series of chemical reactions that take place within the body. Two of the three depend upon the food eaten, whereas the other depends upon a chemical compound called phosphocreatine. The energy released from any of these three series of reactions is utilized in reactions that resynthesize ATP. The separate reactions are functionally linked in such a way that the energy released by one is used by the other.
Three processes can synthesize ATP:
ATP–CP system (phosphagen system) – At maximum intensity, this system is used for up to 10–15 seconds. The ATP–CP system neither uses oxygen nor produces lactic acid (even when oxygen is unavailable) and is thus called alactic anaerobic. This is the primary system behind very short, powerful movements like a golf swing, a 100 m sprint or powerlifting.
Anaerobic system – This system predominates in supplying energy for intense exercise lasting less than two minutes. It is also known as the glycolytic system. An example of an activity of the intensity and duration that this system works under would be a 400 m sprint.
Aerobic system – This is the long-duration energy system. After five minutes of exercise, the O2 system is dominant. In a 1 km run, this system is already providing approximately half the energy; in a marathon run it provides 98% or more. Around mile 20 of a marathon, runners typically "hit the wall," having depleted their glycogen reserves; they then attain a "second wind," which is entirely aerobic metabolism, fueled primarily by free fatty acids.
Aerobic and anaerobic systems usually work concurrently. When describing activity, it is not a question of which energy system is working, but which predominates.
Anaerobic and aerobic metabolism
The term metabolism refers to the various series of chemical reactions that take place within the body. Aerobic refers to the presence of oxygen, whereas anaerobic means with a series of chemical reactions that does not require the presence of oxygen. The ATP-CP series and the lactic acid series are anaerobic, whereas the oxygen series is aerobic.
Anaerobic metabolism
ATP–CP: the phosphagen system
Creatine phosphate (CP), like ATP, is stored in muscle cells. When it is broken down, a considerable amount of energy is released. The energy released is coupled to the energy requirement necessary for the resynthesis of ATP.
The total muscular stores of both ATP and CP are small. Thus, the amount of energy obtainable through this system is limited. The phosphagen stored in the working muscles is typically exhausted in seconds of vigorous activity. However, the usefulness of the ATP-CP system lies in the rapid availability of energy rather than quantity. This is important with respect to the kinds of physical activities that humans are capable of performing.
The phosphagen system (ATP-PCr) occurs in the cytosol (a gel-like substance) of the sarcoplasm of skeletal muscle, and in the myocyte's cytosolic compartment of the cytoplasm of cardiac and smooth muscle.
During muscle contraction:
H2O + ATP → H+ + ADP + Pi (Mg2+ assisted, utilization of ATP for muscle contraction by ATPase)
H+ + ADP + CP → ATP + Creatine (Mg2+ assisted, catalyzed by creatine kinase, ATP is used again in the above reaction for continued muscle contraction)
2 ADP → ATP + AMP (catalyzed by adenylate kinase/myokinase when CP is depleted, ATP is again used for muscle contraction)
Muscle at rest:
ATP + Creatine → H+ + ADP + CP (Mg2+ assisted, catalyzed by creatine kinase)
ADP + Pi → ATP (during anaerobic glycolysis and oxidative phosphorylation)
When the phosphagen system has been depleted of phosphocreatine (creatine phosphate), the resulting AMP produced from the adenylate kinase (myokinase) reaction is primarily regulated by the purine nucleotide cycle.
Anaerobic glycolysis
This system is known as anaerobic glycolysis. "Glycolysis" refers to the breakdown of sugar. In this system, the breakdown of sugar supplies the necessary energy from which ATP is manufactured. When sugar is metabolized anaerobically, it is only partially broken down and one of the byproducts is lactic acid. This process creates enough energy to couple with the energy requirements to resynthesize ATP.
When H+ ions accumulate in the muscles causing the blood pH level to reach low levels, temporary muscle fatigue results. Another limitation of the lactic acid system that relates to its anaerobic quality is that only a few moles of ATP can be resynthesized from the breakdown of sugar. This system cannot be relied on for extended periods of time.
The lactic acid system, like the ATP-CP system, is important primarily because it provides a rapid supply of ATP energy. For example, exercises that are performed at maximum rates for between 1 and 3 minutes depend heavily upon the lactic acid system. In activities such as running 1500 meters or a mile, the lactic acid system is used predominantly for the "kick" at the end of the race.
Aerobic metabolism
Aerobic glycolysis
Glycolysis – The first stage is known as glycolysis, which produces 2 ATP molecules, 2 reduced molecules of nicotinamide adenine dinucleotide (NADH) and 2 pyruvate molecules that move on to the next stage – the Krebs cycle. Glycolysis takes place in the cytoplasm of normal body cells, or the sarcoplasm of muscle cells.
The Krebs cycle – This is the second stage, and the products of this stage of the aerobic system are a net production of one ATP, one carbon dioxide molecule, three reduced NAD+ molecules, and one reduced flavin adenine dinucleotide (FAD) molecule. (The molecules of NAD+ and FAD mentioned here are electron carriers, and if they are reduced, they have had one or two H+ ions and two electrons added to them.) These metabolite counts are per turn of the Krebs cycle. The Krebs cycle turns twice for each six-carbon molecule of glucose that passes through the aerobic system – as two three-carbon pyruvate molecules enter the Krebs cycle. Before pyruvate enters the Krebs cycle it must be converted to acetyl coenzyme A. During this link reaction, for each molecule of pyruvate converted to acetyl coenzyme A, an NAD+ is also reduced. This stage of the aerobic system takes place in the matrix of the cells' mitochondria.
Oxidative phosphorylation – The last stage of the aerobic system produces the largest yield of ATP – a total of 34 ATP molecules. It is called oxidative phosphorylation because oxygen is the final acceptor of electrons and hydrogen ions (hence oxidative) and an extra phosphate is added to ADP to form ATP (hence phosphorylation).
This stage of the aerobic system occurs on the cristae (infoldings of the membrane of the mitochondria). The reaction of each NADH in this electron transport chain provides enough energy for 3 molecules of ATP, while reaction of FADH2 yields 2 molecules of ATP. This means that 10 total NADH molecules allow the regeneration of 30 ATP, and 2 FADH2 molecules allow for 4 ATP molecules to be regenerated (in total 34 ATP from oxidative phosphorylation, plus 4 from the previous two stages, producing a total of 38 ATP in the aerobic system). NADH and FADH2 are oxidized to allow the NAD+ and FAD to be reused in the aerobic system, while electrons and hydrogen ions are accepted by oxygen to produce water, a harmless byproduct.
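The ATP bookkeeping in this classical account is easy to reproduce; the following sketch uses the per-carrier yields quoted above (3 ATP per NADH, 2 per FADH2), and the alternative call shows the modern estimates of roughly 2.5 and 1.5 ATP per carrier that give the 30–32 figure cited in the next section:

```python
def aerobic_atp(nadh=10, fadh2=2, substrate_level=4,
                atp_per_nadh=3.0, atp_per_fadh2=2.0):
    """Total ATP per glucose: oxidative phosphorylation plus the four
    substrate-level ATP from glycolysis (2) and the Krebs cycle (2)."""
    return nadh * atp_per_nadh + fadh2 * atp_per_fadh2 + substrate_level

print(aerobic_atp())                                     # 38.0, classical values
print(aerobic_atp(atp_per_nadh=2.5, atp_per_fadh2=1.5))  # 32.0, modern values
```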
Fatty acid oxidation
Triglycerides stored in adipose tissue and in other tissues, such as muscle and liver, release fatty acids and glycerol in a process known as lipolysis. Fatty acids are slower than glucose to convert into acetyl-CoA, as first it has to go through beta oxidation. It takes about 10 minutes for fatty acids to sufficiently produce ATP. Fatty acids are the primary fuel source at rest and in low to moderate intensity exercise. Though slower than glucose, its yield is much higher. One molecule of glucose produces through aerobic glycolysis a net of 30-32 ATP; whereas a fatty acid can produce through beta oxidation a net of approximately 100 ATP depending on the type of fatty acid. For example, palmitic acid can produce a net of 106 ATP.
Amino acid degradation
Normally, amino acids do not provide the bulk of fuel substrates. However, in times of glycolytic or ATP crisis, amino acids can convert into pyruvate, acetyl-CoA, and citric acid cycle intermediates. This is useful during strenuous exercise or starvation as it provides faster ATP than fatty acids; however, it comes at the expense of risking protein catabolism (such as the breakdown of muscle tissue) to maintain the free amino acid pool.
Purine nucleotide cycle
The purine nucleotide cycle is used in times of glycolytic or ATP crisis, such as strenuous exercise or starvation. It produces fumarate, a citric acid cycle intermediate, which enters the mitochondrion through the malate-aspartate shuttle, and from there produces ATP by oxidative phosphorylation.
Ketolysis
During starvation or while consuming a low-carb/ketogenic diet, the liver produces ketones. Ketones are needed as fatty acids cannot pass the blood-brain barrier, blood glucose levels are low and glycogen reserves depleted. Ketones also convert to acetyl-CoA faster than fatty acids. After the ketones convert to acetyl-CoA in a process known as ketolysis, it enters the citric acid cycle to produce ATP by oxidative phosphorylation.
The longer that the person's glycogen reserves have been depleted, the higher the blood concentration of ketones, typically due to starvation or a low carb diet (βHB 3 - 5 mM). Prolonged high-intensity aerobic exercise, such as running 20 miles, where individuals "hit the wall" can create post-exercise ketosis; however, the level of ketones produced are smaller (βHB 0.3 - 2 mM).
Ethanol metabolism
Ethanol (alcohol) is first converted into acetaldehyde, consuming NAD+ twice, before being converted into acetate. The acetate is then converted into acetyl-CoA. When alcohol is consumed in small quantities, the NADH/NAD+ ratio remains balanced enough for the acetyl-CoA to be used by the Krebs cycle for oxidative phosphorylation. However, even moderate amounts of alcohol (1–2 drinks) result in more NADH than NAD+, which inhibits oxidative phosphorylation.
When the NADH/NAD+ ratio is disrupted (far more NADH than NAD+), this is called pseudohypoxia. The Krebs cycle needs NAD+ as well as oxygen, for oxidative phosphorylation. Without sufficient NAD+, the impaired aerobic metabolism mimics hypoxia (insufficient oxygen), resulting in excessive use of anaerobic glycolysis and a disrupted pyruvate/lactate ratio (low pyruvate, high lactate). The conversion of pyruvate into lactate produces NAD+, but only enough to maintain anaerobic glycolysis. In chronic excessive alcohol consumption (alcoholism), the microsomal ethanol oxidizing system (MEOS) is used in addition to alcohol dehydrogenase.
See also
Hitting the wall (muscle fatigue due to glycogen depletion)
Second wind (increased ATP synthesis primarily from free fatty acids)
References
Further reading
Exercise Physiology for Health, Fitness and Performance. Sharon Plowman and Denise Smith. Lippincott Williams & Wilkins; Third edition (2010). .
Ch. 38. Hormonal Regulation of Energy Metabolism. Berne and Levy Physiology, 6th ed (2008)
The effects of increasing exercise intensity on muscle fuel utilisation in humans. Van Loon et al. Journal of Physiology (2001)
(OTEP) Open Textbook of Exercise Physiology. Edited by Brian R. MacIntosh (2023)
ATP metabolism | Bioenergetic systems | [
"Chemistry",
"Biology"
] | 3,041 | [
"Exercise biochemistry",
"Biochemistry",
"Chemical energy sources"
] |
11,014,361 | https://en.wikipedia.org/wiki/American%20Society%20for%20Mass%20Spectrometry | The American Society for Mass Spectrometry (ASMS) is a professional association based in the United States that supports the scientific field of mass spectrometry. As of 2018, the society had approximately 10,000 members primarily from the US, but also from around the world. The society holds a large annual meeting, typically in late May or early June as well as other topical conferences and workshops. The society publishes the Journal of the American Society for Mass Spectrometry.
Awards
The Society recognizes achievements and promotes academic research through four annual awards. The Biemann Medal and the John B. Fenn Award for a Distinguished Contribution in Mass Spectrometry both are awarded in recognition of singular achievements or contributions in fundamental or applied mass spectrometry, with the Biemann Medal being focused on individuals who are early in their careers. The Ronald A. Hites Award is awarded for outstanding original research demonstrated in papers published in the Journal of the American Society for Mass Spectrometry. The Research Awards are given to young scientists in mass spectrometry, based on the evaluation of their proposed research.
Publications
Journal of the American Society for Mass Spectrometry
Measuring Mass: From Positive Rays to Proteins
Past presidents
The past presidents of ASMS are:
Conferences
The Society holds an annual conference in late May or early June as well as topical conferences (at Asilomar State Beach in California and Sanibel Island, Florida) and a fall workshop, which is also focused on a single topic. Conferences on Mass Spectrometry and Allied Topics have been held yearly since 1953.
See also
International Mass Spectrometry Foundation
List of female mass spectrometrists
References
External links
ASMS website
Chemistry societies
Mass spectrometry
Organizations established in 1969
1969 establishments in the United States | American Society for Mass Spectrometry | [
"Physics",
"Chemistry"
] | 364 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"nan",
"Chemistry societies",
"Matter"
] |
11,015,555 | https://en.wikipedia.org/wiki/Finite%20element%20exterior%20calculus | Finite element exterior calculus (FEEC) is a mathematical framework that formulates finite element methods using chain complexes. Its main application has been a comprehensive theory for finite element methods in computational electromagnetism, computational solid and fluid mechanics. FEEC was developed in the early 2000s by Douglas N. Arnold, Richard S. Falk and Ragnar Winther,
among others.
Finite element exterior calculus is sometimes cited as an example of a compatible discretization technique, and bears similarities with discrete exterior calculus, although they are distinct theories.
One starts with the recognition that the differential operators used are often part of complexes: successive application results in zero. The differential operators of the relevant differential equations, together with the relevant boundary conditions, are then phrased as a Hodge Laplacian, whose terms are split using the Hodge decomposition. A related variational saddle-point formulation for mixed quantities is then generated. Discretization to a mesh-related subcomplex requires a collection of projection operators that commute with the differential operators. One can then prove uniqueness and optimal convergence as a function of mesh density.
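The complex property (successive application gives zero) can be checked symbolically for the grad, curl and div operators of the de Rham complex mentioned below; the scalar and vector fields here are arbitrary smooth examples chosen only for illustration:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

def div(V):
    return sum(sp.diff(V[i], v) for i, v in enumerate((x, y, z)))

f = sp.sin(x) * y**2 + x * sp.exp(z)      # arbitrary smooth scalar field
print(sp.simplify(curl(grad(f))))         # zero vector: curl of grad vanishes
print(sp.simplify(div(curl(sp.Matrix([y*z, x*z**2, sp.cos(x*y)])))))  # 0
```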
FEEC is of immediate relevance to diffusion, elasticity, electromagnetism, and Stokes flow.
For the important de Rham complex, pertaining to the grad, curl and div operators, suitable families of elements have been generated not only for tetrahedra, but also for other element shapes such as bricks. Moreover, conforming with these, prism- and pyramid-shaped elements have been generated; for the latter, uniquely, the shape functions are not polynomial. The quantities are 0-forms (scalars), 1-forms (gradients), 2-forms (fluxes), and 3-forms (densities). Diffusion, electromagnetism, elasticity, Stokes flow, general relativity, and in fact all known complexes can be phrased in terms of the de Rham complex. For Navier–Stokes, there may be possibilities too.
References
Finite element method | Finite element exterior calculus | [
"Mathematics"
] | 411 | [
"Applied mathematics",
"Applied mathematics stubs"
] |
11,018,121 | https://en.wikipedia.org/wiki/Quaternion-K%C3%A4hler%20symmetric%20space | In differential geometry, a quaternion-Kähler symmetric space or Wolf space is a quaternion-Kähler manifold which, as a Riemannian manifold, is a Riemannian symmetric space. Any quaternion-Kähler symmetric space with positive Ricci curvature is compact and simply connected, and is a Riemannian product of quaternion-Kähler symmetric spaces associated to compact simple Lie groups.
For any compact simple Lie group G, there is a unique such space G/H obtained as a quotient of G by a subgroup H = K·Sp(1).
Here, Sp(1) is the compact form of the SL(2)-triple associated with the highest root of G, and K is its centralizer in G. These spaces, the Wolf spaces, have been completely classified.
The twistor spaces of quaternion-Kähler symmetric spaces are the homogeneous holomorphic contact manifolds, classified by Boothby: they are the adjoint varieties of the complex semisimple Lie groups.
These spaces can be obtained by taking a projectivization of a minimal nilpotent orbit of the respective complex Lie group. The holomorphic contact structure is apparent, because the nilpotent orbits of semisimple Lie groups are equipped with the Kirillov–Kostant holomorphic symplectic form. This argument also explains how one can associate a unique Wolf space to each of the simple complex Lie groups.
See also
Quaternionic discrete series representation
References
Differential geometry
Structures on manifolds
Riemannian geometry
Homogeneous spaces
Lie groups | Quaternion-Kähler symmetric space | [
"Physics",
"Mathematics"
] | 325 | [
"Lie groups",
"Mathematical structures",
"Group actions",
"Homogeneous spaces",
"Space (mathematics)",
"Topological spaces",
"Algebraic structures",
"Geometry",
"Symmetry"
] |
11,018,224 | https://en.wikipedia.org/wiki/Comparative%20Tracking%20Index | The Comparative Tracking Index (CTI) is used to measure the electrical breakdown (tracking) properties of an insulating material. Tracking is an electrical breakdown on the surface of an insulating material wherein an initial exposure to electrical arcing heat carbonizes the material. The carbonized areas are more conductive than the pristine insulator, increasing current flow, resulting in increased heat generation, and eventually the insulation becomes completely conductive.
Details
A large voltage difference gradually creates a conductive leakage path across the surface of the material by forming a carbonized track. The testing method is specified in IEC standard 60112 and ASTM D3638.
To measure tracking, 50 drops of a 0.1% ammonium chloride solution are dropped onto the material, and the voltage measured for a 3 mm thickness is considered representative of the material's performance. The term PTI (Proof Tracking Index) is also used; it is the voltage at which all five tested samples pass the test with no failures.
Performance Level Categories (PLC) were introduced to avoid excessive implied precision and bias.
The CTI value is used for electrical safety assessment of electrical apparatus, as for instance carried out by testing and certification laboratories. The minimum required creepage distances over an insulating material between electrically conducting parts in apparatus, especially between parts with a high voltage and parts that can be touched by human users, is dependent on the insulator's CTI value. Also for internal distances in an apparatus by maintaining CTI based distances, the risk of fire is reduced.
Creepage distance requirements depend on the CTI. Materials whose CTI is unknown are classified in group IIIb. There are no CTI requirements for glass, ceramics, and other inorganic materials, which do not break down on the surface.
The better the insulation, the higher the CTI (positive relationship). In terms of clearance, a higher CTI value means a lower minimum creepage distance required, and the closer two conductive parts can be.
In the design of medical products, the CTI is treated differently. Material groups are classified as shown below, per IEC 60601-1:2005, International Standard published by the International Electrotechnical Commission (IEC):
Material group I: 600 ≤ CTI
Material group II: 400 ≤ CTI < 600
Material group IIIa: 175 ≤ CTI < 400
Material group IIIb: 100 ≤ CTI < 175
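This grouping can be expressed as a simple threshold function (a sketch using the boundaries listed above; the function name is illustrative):

```python
def material_group(cti):
    """Map a Comparative Tracking Index value (in volts) to its material group."""
    if cti >= 600:
        return 'I'
    if cti >= 400:
        return 'II'
    if cti >= 175:
        return 'IIIa'
    if cti >= 100:
        return 'IIIb'
    raise ValueError('CTI below the range covered by the material groups')

print(material_group(450))  # 'II'
```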
The test method does not work well for voltages below 125VAC as the solution does not evaporate between successive drops. The test method has an upper limit of 600VAC; higher voltages are currently not covered by the standard.
In the recent version of the standard, the evaluation of the test method at higher voltages (above 600 V) is stated as a target for the future. In principle, tests at higher voltages should be possible, as the breakdown voltage of air at 50 Hz is larger than 40 kV at 4 mm. However, arcing between the electrodes might increase at higher voltages and might have an impact on the test result, so this needs to be evaluated further before the maximum voltage in the standard can be increased. In addition, dependent standards such as IEC 60664 would need to be changed as well if a new material class for higher voltages is introduced in IEC 60112.
References
External links
Underwriters Laboratory definition of CTI
IEC 60112 Method for the determination of the proof and the comparative tracking indices of solid insulating materials
IEC 60601-1 Medical electrical equipment - Part 1: General requirements for basic safety and essential performance
Electrical breakdown
Units of measurement | Comparative Tracking Index | [
"Physics",
"Mathematics"
] | 687 | [
"Physical phenomena",
"Quantity",
"Electrical phenomena",
"Electrical breakdown",
"Units of measurement"
] |
11,020,249 | https://en.wikipedia.org/wiki/Multiplicity%20%28chemistry%29 | In spectroscopy and quantum chemistry, the multiplicity of an energy level is defined as 2S+1, where S is the total spin angular momentum. States with multiplicity 1, 2, 3, 4, 5 are respectively called singlets, doublets, triplets, quartets and quintets.
In the ground state of an atom or molecule, the unpaired electrons usually all have parallel spin. In this case the multiplicity is also equal to the number of unpaired electrons plus one.
Atoms
The multiplicity is often equal to the number of possible orientations of the total spin relative to the total orbital angular momentum L, and therefore to the number of near–degenerate levels that differ only in their spin–orbit interaction energy.
For example, the ground state of a carbon atom is 3P (Term symbol). The superscript three (read as triplet) indicates that the multiplicity 2S+1 = 3, so that the total spin S = 1. This spin is due to two unpaired electrons, as a result of Hund's rule which favors the single filling of degenerate orbitals. The triplet consists of three states with spin components +1, 0 and –1 along the direction of the total orbital angular momentum, which is also 1 as indicated by the letter P. The total angular momentum quantum number J can vary from L+S = 2 to L–S = 0 in integer steps, so that J = 2, 1 or 0.
However, the multiplicity equals the number of spin orientations only if S ≤ L. When S > L, there are only 2L+1 possible orientations of total angular momentum, ranging from S+L down to S−L. The ground state of the nitrogen atom is a 4S state, a quartet state for which 2S + 1 = 4, with S = 3/2 due to three unpaired electrons. For an S state, L = 0, so that J can only be 3/2 and there is only one level even though the multiplicity is 4.
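These counting rules are simple enough to capture in a few lines of Python; the carbon and nitrogen ground states discussed above serve as checks:

```python
from fractions import Fraction

def multiplicity(S):
    """Spin multiplicity 2S+1."""
    return int(2 * S + 1)

def j_values(L, S):
    """Allowed total angular momentum quantum numbers J = |L-S|, ..., L+S."""
    J, out = abs(L - S), []
    while J <= L + S:
        out.append(J)
        J += 1
    return out

# Carbon ground state 3P: S = 1, L = 1 -> multiplicity 3 and J = 0, 1, 2
print(multiplicity(1), j_values(1, 1))
# Nitrogen ground state 4S: S = 3/2, L = 0 -> multiplicity 4, one level J = 3/2
print(multiplicity(Fraction(3, 2)), j_values(0, Fraction(3, 2)))
```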
Molecules
Most stable organic molecules have complete electron shells with no unpaired electrons and therefore have singlet ground states. This is true also for inorganic molecules containing only main-group elements. Important exceptions are dioxygen (O2) as well as methylene (CH2) and other carbenes.
However, higher spin ground states are very common in coordination complexes of transition metals. A simple explanation of the spin states of such complexes is provided by crystal field theory.
Dioxygen
The highest occupied orbital energy level of dioxygen is a pair of antibonding π* orbitals. In the ground state of dioxygen, this energy level is occupied by two electrons of the same spin, as shown in the molecular orbital diagram. The molecule, therefore, has two unpaired electrons and is in a triplet state.
In contrast, the first and second excited states of dioxygen are both states of singlet oxygen. Each has two electrons of opposite spin in the π* level so that S = 0 and the multiplicity is 2S + 1 = 1 in consequence.
In the first excited state, the two π* electrons are paired in the same orbital, so that there are no unpaired electrons. In the second excited state, however, the two π* electrons occupy different orbitals with opposite spin. Each is therefore an unpaired electron, but the total spin is zero and the multiplicity is 2S + 1 = 1 despite the two unpaired electrons. The multiplicity of the second excited state is therefore not equal to the number of its unpaired electrons plus one, and the rule which is usually true for ground states is invalid for this excited state.
Carbenes
In organic chemistry, carbenes are molecules which have carbon atoms with only six electrons in their valence shells and therefore disobey the octet rule. Carbenes generally split into singlet carbenes and triplet carbenes, named for their spin multiplicities. Both have two non-bonding electrons; in singlet carbenes these exist as a lone pair and have opposite spins so that there is no net spin, while in triplet carbenes these electrons have parallel spins.
See also
Quantum numbers
Principal quantum number
Azimuthal quantum number
Magnetic quantum number
Spin quantum number
Exchange interaction
Term symbol
Slater's rules
Effective nuclear charge
Shielding effect
References
Bibliography
Quantum chemistry | Multiplicity (chemistry) | [
"Physics",
"Chemistry"
] | 917 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
" and optical physics"
] |
11,021,990 | https://en.wikipedia.org/wiki/Phytoecdysteroid | Phytoecdysteroids are plant-derived ecdysteroids. Phytoecdysteroids are a class of chemicals that plants synthesize for defense against phytophagous (plant eating) insects. These compounds are mimics of hormones used by arthropods in the molting process known as ecdysis. It is presumed that these chemicals act as endocrine disruptors for insects, so that when insects eat the plants with these chemicals they may prematurely molt, lose weight, or suffer other metabolic damage and die.
Chemically, phytoecdysteroids are classed as triterpenoids, the group of compounds that includes triterpene saponins, phytosterols, and phytoecdysteroids. Plants, but not animals, synthesize phytoecdysteroids from mevalonic acid in the mevalonate pathway of the plant cell using acetyl-CoA as a precursor.
Some ecdysteroids, including ecdysone and 20-hydroxyecdysone (20E), are produced by both plants and arthropods. Besides those, over 250 ecdysteroid analogs have been identified so far in plants, and it has been theorized that there are over 1,000 possible structures which might occur in nature. Many more plants have the ability to "turn on" the production of phytoecdysteroids under stress, animal attack or other conditions.
The term phytoecdysteroid can also apply to ecdysteroids found in fungi, even though fungi are not plants. The more precise term mycoecdysteroid has been applied to these chemicals.
Some plants and fungi that produce phytoecdysteroids include Achyranthes bidentata, Tinospora cordifolia, Pfaffia paniculata, Leuzea carthamoides, Rhaponticum uniflorum, Serratula coronata, Cordyceps, and Asparagus.
Effect on arthropods
It is generally believed that phytoecdysteroids exert a negative effect on pests. Indeed, phytoecdysteroids sprayed onto plants have been shown to reduce infestation by nematodes and insects.
However, in very limited scenarios, phytoecdysteroids may end up becoming beneficial for the insect. For example, ginsenosides are able to activate the ecdysteroid receptor in fruit flies, but this activation happens to compensate for age-related reduction in 20E levels.
Effect on plants
Phytoecdysteroids have also been reported to influence the germination of other plants, making it an allelochemical. The plant producing phytoecdysteroids may also be affected by ecdysteroids, mainly by increasing the rate of photosynthesis.
Effect on mammals
They are not toxic to mammals and occur in the human diet. 20-hydroxyecdysone is a drug candidate, but this does not mean dietary amounts have any effect.
See also
Phytoandrogen
Phytoestrogen
Plant defense against herbivory
References
Phytochemicals
Steroids
Insect ecology
Chemical ecology | Phytoecdysteroid | [
"Chemistry",
"Biology"
] | 669 | [
"Biochemistry",
"Chemical ecology"
] |
11,022,628 | https://en.wikipedia.org/wiki/List%20of%20space%20groups | There are 230 space groups in three dimensions, given by a number index, and a full name in Hermann–Mauguin notation, and a short name (international short symbol). The long names are given with spaces for readability. The groups each have a point group of the unit cell.
Symbols
In Hermann–Mauguin notation, space groups are named by a symbol combining the point group identifier with the uppercase letters describing the lattice type. Translations within the lattice in the form of screw axes and glide planes are also noted, giving a complete crystallographic space group.
These are the Bravais lattices in three dimensions:
P primitive
I body centered (from the German Innenzentriert)
F face centered (from the German Flächenzentriert)
A centered on A faces only
B centered on B faces only
C centered on C faces only
R rhombohedral
A reflection plane m within the point groups can be replaced by a glide plane, labeled as a, b, or c depending on which axis the glide is along. There is also the n glide, which is a glide along the half of a diagonal of a face, and the d glide, which is along a quarter of either a face or space diagonal of the unit cell. The d glide is often called the diamond glide plane as it features in the diamond structure.
a, b, or c: glide translation along half the lattice vector of this face
n: glide translation along half the diagonal of this face
d: glide planes with translation along a quarter of a face diagonal
e: two glides with the same glide plane and translation along two (different) half-lattice vectors.
A gyration point can be replaced by a screw axis denoted by a number, n, where the angle of rotation is 360°/n. The degree of translation is then added as a subscript showing how far along the axis the translation is, as a portion of the parallel lattice vector. For example, 21 is a 180° (twofold) rotation followed by a translation of 1/2 of the lattice vector. 31 is a 120° (threefold) rotation followed by a translation of 1/3 of the lattice vector. The possible screw axes are: 21, 31, 32, 41, 42, 43, 61, 62, 63, 64, and 65.
Wherever there is both a rotation or screw axis n and a mirror or glide plane m along the same crystallographic direction, they are represented as a fraction or n/m. For example, 41/a means that the crystallographic axis in question contains both a 41 screw axis as well as a glide plane along a.
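A small helper can decode these screw-axis symbols; the 'n_m' string format is just an illustrative convention for the subscript notation:

```python
from fractions import Fraction

def screw_axis(symbol):
    """Decode a screw axis 'n_m' (e.g. '2_1', '4_3') into its rotation angle
    in degrees and its translation as a fraction of the lattice vector."""
    n, m = (int(s) for s in symbol.split('_'))
    return 360 / n, Fraction(m, n)

print(screw_axis('2_1'))  # (180.0, Fraction(1, 2))
print(screw_axis('3_1'))  # (120.0, Fraction(1, 3))
print(screw_axis('4_3'))  # (90.0, Fraction(3, 4))
```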
In Schoenflies notation, the symbol of a space group is represented by the symbol of the corresponding point group with an additional superscript. The superscript doesn't give any additional information about symmetry elements of the space group, but is instead related to the order in which Schoenflies derived the space groups. This is sometimes supplemented with a symbol that specifies the Bravais lattice by its lattice system and centering type.
In Fedorov symbol, the type of space group is denoted as s (symmorphic), h (hemisymmorphic), or a (asymmorphic). The number is related to the order in which Fedorov derived space groups. There are 73 symmorphic, 54 hemisymmorphic, and 103 asymmorphic space groups.
Symmorphic
The 73 symmorphic space groups can be obtained as combinations of Bravais lattices with corresponding point groups. These groups contain the same symmetry elements as the corresponding point groups. Example for point group 4/mmm: the symmorphic space groups are P4/mmm (36s) and I4/mmm (37s).
Hemisymmorphic
The 54 hemisymmorphic space groups contain only the axial combination of symmetry elements from the corresponding point groups. Example for point group 4/mmm: hemisymmorphic space groups contain the axial combination 422, but at least one mirror plane m is substituted with a glide plane, for example P4/mcc (35h), P4/nbm (36h), P4/nnc (37h), and I4/mcm (38h).
Asymmorphic
The remaining 103 space groups are asymmorphic. Example for point group 4/mmm: P4/mbm (54a), P42/mmc (60a), I41/acd (58a) – none of these groups contains the axial combination 422.
List of triclinic
List of monoclinic
List of orthorhombic
List of tetragonal
List of trigonal
List of hexagonal
List of cubic
Notes
References
External links
International Union of Crystallography
Point Groups and Bravais Lattices
Full list of 230 crystallographic space groups
Conway et al. on fibrifold notation
Symmetry
Crystallography | List of space groups | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,027 | [
"Materials science",
"Crystallography",
"Condensed matter physics",
"Geometry",
"Symmetry"
] |
9,528,907 | https://en.wikipedia.org/wiki/Nucleic%20acid%20quantitation | In molecular biology, quantitation of nucleic acids is commonly performed to determine the average concentrations of DNA or RNA present in a mixture, as well as their purity. Reactions that use nucleic acids often require particular amounts and purity for optimum performance. To date, there are two main approaches used by scientists to quantitate, or establish the concentration, of nucleic acids (such as DNA or RNA) in a solution. These are spectrophotometric quantification and UV fluorescence tagging in presence of a DNA dye.
Spectrophotometric analysis
One of the most commonly used practices to quantitate DNA or RNA is the use of spectrophotometric analysis using a spectrophotometer. A spectrophotometer is able to determine the average concentrations of the nucleic acids DNA or RNA present in a mixture, as well as their purity.
Spectrophotometric analysis is based on the principle that nucleic acids absorb ultraviolet light in a specific pattern. In the case of DNA and RNA, a sample is exposed to ultraviolet light at a wavelength of 260 nanometres (nm) and a photodetector measures the light that passes through the sample. Some of the ultraviolet light will pass through and some will be absorbed by the DNA/RNA. The more light absorbed by the sample, the higher the nucleic acid concentration in the sample. The resulting effect is that less light will strike the photodetector and this will produce a higher optical density (OD).
Using the Beer–Lambert law it is possible to relate the amount of light absorbed to the concentration of the absorbing molecule. At a wavelength of 260 nm, the average extinction coefficient for double-stranded DNA is 0.020 (μg/mL)−1 cm−1, for single-stranded DNA it is 0.027 (μg/mL)−1 cm−1, for single-stranded RNA it is 0.025 (μg/mL)−1 cm−1 and for short single-stranded oligonucleotides it is dependent on the length and base composition. Thus, an Absorbance (A) of 1 corresponds to a concentration of 50 μg/mL for double-stranded DNA. This method of calculation is valid for up to an A of at least 2. A more accurate extinction coefficient may be needed for oligonucleotides; these can be predicted using the nearest-neighbor model.
Calculations
The optical density is generated from equation:
Optical density= Log (Intensity of incident light / Intensity of Transmitted light)
In practical terms, a sample that contains no DNA or RNA should not absorb any of the ultraviolet light and therefore produce an OD of 0:
Optical density= Log (100/100)=0
When using spectrophotometric analysis to determine the concentration of DNA or RNA, the Beer–Lambert law is used to determine unknown concentrations without the need for standard curves. In essence, the Beer Lambert Law makes it possible to relate the amount of light absorbed to the concentration of the absorbing molecule. The following absorbance units to nucleic acid concentration conversion factors are used to convert OD to concentration of unknown nucleic acid samples:
A260 dsDNA = 50 μg/mL
A260 ssDNA = 33 μg/mL
A260 ssRNA = 40 μg/mL
Conversion factors
When using a 10 mm path length, simply multiply the OD by the conversion factor to determine the concentration. Example, a 2.0 OD dsDNA sample corresponds to a sample with a 100 μg/mL concentration.
When using a path length that is shorter than 10mm, the resultant OD will be reduced by a factor of 10/path length. Using the example above with a 3 mm path length, the OD for the 100 μg/mL sample would be reduced to 0.6. To normalize the concentration to a 10mm equivalent, the following is done:
0.6 OD × (10/3) × 50 μg/mL = 100 μg/mL
Most spectrophotometers allow selection of the nucleic acid type and path length such that resultant concentration is normalized to the 10 mm path length which is based on the principles of Beer's law.
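These conversions are straightforward to script; a sketch following the conversion factors and path-length normalization described above:

```python
# A260 conversion factors for a 10 mm path, in ug/mL per OD unit
FACTORS = {'dsDNA': 50.0, 'ssDNA': 33.0, 'ssRNA': 40.0}

def concentration(od260, kind='dsDNA', path_mm=10.0):
    """Nucleic acid concentration in ug/mL, normalized to a 10 mm path."""
    return od260 * (10.0 / path_mm) * FACTORS[kind]

print(concentration(2.0, 'dsDNA'))             # 100.0
print(concentration(0.6, 'dsDNA', path_mm=3))  # 100.0, the example above
```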
A260 as quantity measurement
The "A260 unit" is used as a quantity measure for nucleic acids. One A260 unit is the amount of nucleic acid contained in 1 mL and producing an OD of 1. The same conversion factors apply, and therefore, in such contexts:
1 A260 unit dsDNA = 50 μg
1 A260 unit ssDNA = 33 μg
1 A260 unit ssRNA = 40 μg
Sample purity (260:280 / 260:230 ratios)
It is common for nucleic acid samples to be contaminated with other molecules (i.e. proteins, organic compounds, other). The secondary benefit of using spectrophotometric analysis for nucleic acid quantitation is the ability to determine sample purity using the 260 nm:280 nm calculation. The ratio of the absorbance at 260 and 280 nm (A260/280) is used to assess the purity of nucleic acids. For pure DNA, A260/280 is widely considered ~1.8, but it has been argued (due to numeric errors in the original Warburg paper) that this value corresponds to a mix of 60% protein and 40% DNA. The ratio for pure RNA A260/280 is ~2.0. These ratios are commonly used to assess the amount of protein contamination that is left from the nucleic acid isolation process, since proteins absorb at 280 nm.
The ratio of absorbance at 260 nm vs 280 nm is commonly used to assess DNA contamination of protein solutions, since proteins (in particular, the aromatic amino acids) absorb light at 280 nm. The reverse, however, is not true — it takes a relatively large amount of protein contamination to significantly affect the 260:280 ratio in a nucleic acid solution.
The 260:280 ratio thus has high sensitivity for nucleic acid contamination in protein, but lacks sensitivity for protein contamination in nucleic acids (figures quoted for RNA; for 100% DNA the ratio is approximately 1.8).
This difference is due to the much higher mass attenuation coefficient nucleic acids have at 260 nm and 280 nm, compared to that of proteins. Because of this, even for relatively high concentrations of protein, the protein contributes relatively little to the 260 and 280 absorbance. While the protein contamination cannot be reliably assessed with a 260:280 ratio, this also means that it contributes little error to DNA quantity estimation.
Contamination identification
Examination of sample spectra may be useful in identifying that a problem with sample purity exists.
Other common contaminants
Contamination by phenol, which is commonly used in nucleic acid purification, can significantly throw off quantification estimates. Phenol absorbs with a peak at 270 nm and a A260/280 of 1.2. Nucleic acid preparations uncontaminated by phenol should have a A260/280 of around 2. Contamination by phenol can significantly contribute to overestimation of DNA concentration.
Absorption at 230 nm can be caused by contamination by phenolate ion, thiocyanates, and other organic compounds. For a pure RNA sample, the A230:260:280 should be around 1:2:1, and for a pure DNA sample, the A230:260:280 should be around 1:1.8:1.
Absorption at 330 nm and higher indicates particulates contaminating the solution, causing scattering of light in the visible range. The value in a pure nucleic acid sample should be zero.
Negative values could result if an incorrect solution was used as blank. Alternatively, these values could arise due to fluorescence of a dye in the solution.
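The purity checks described in this section reduce to a few ratio computations; a sketch (the 320 nm background correction is a common laboratory convention, not something prescribed by the text above):

```python
def purity_ratios(a230, a260, a280, a320=0.0):
    """Background-correct the absorbances and return (A260/280, A260/230)."""
    a230, a260, a280 = a230 - a320, a260 - a320, a280 - a320
    return a260 / a280, a260 / a230

r280, r230 = purity_ratios(a230=0.55, a260=1.00, a280=0.55)
print(round(r280, 2), round(r230, 2))  # 1.82 1.82, near the ~1.8 of pure DNA
```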
Analysis with fluorescent dye tagging
An alternative method to assess DNA and RNA concentration is to tag the sample with a Fluorescent tag, which is a fluorescent dye used to measure the intensity of the dyes that bind to nucleic acids and selectively fluoresce when bound (e.g. Ethidium bromide). This method is useful for cases where concentration is too low to accurately assess with spectrophotometry and in cases where contaminants absorbing at 260 nm make accurate quantitation by that method impossible. The benefit of fluorescence quantitation of DNA and RNA is the improved sensitivity over spectrophotometric analysis. Although, that increase in sensitivity comes at the cost of a higher price per sample and a lengthier sample preparation process.
There are two main ways to approach this. "Spotting" involves placing a sample directly onto an agarose gel or plastic wrap. The fluorescent dye is either present in the agarose gel, or is added in appropriate concentrations to the samples on the plastic film. A set of samples with known concentrations are spotted alongside the sample. The concentration of the unknown sample is then estimated by comparison with the fluorescence of these known concentrations. Alternatively, one may run the sample through an agarose or polyacrylamide gel, alongside some samples of known concentration. As with the spot test, concentration is estimated through comparison of fluorescent intensity with the known samples.
If the sample volumes are large enough to use microplates or cuvettes, the dye-loaded samples can also be quantified with a fluorescence photometer. Minimum sample volume starts at 0.3 μL.
To date there is no fluorescence method to determine protein contamination of a DNA sample that is similar to the 260 nm/280 nm spectrophotometric version.
See also
Nucleic acid methods
Phenol–chloroform extraction
Column purification
Protein methods
References
External links
IDT online tool for predicting nucleotide UV absorption spectrum
Ambion guide to RNA quantitation
Hillary Luebbehusen, The significance of 260/230 Ratio in Determining Nucleic Acid Purity (pdf document)
double stranded, single stranded DNA and RNA quantification by 260nm absorption, Sauer lab at OpenWetWare
Absorbance to Concentration Web App @ DNA.UTAH.EDU
Nucleic Acid Quantification Accuracy and Reproducibility
Spectroscopy
Biochemistry methods
Nucleic acids | Nucleic acid quantitation | [
"Physics",
"Chemistry",
"Biology"
] | 2,092 | [
"Biochemistry methods",
"Biomolecules by chemical classification",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Biochemistry",
"Spectroscopy",
"Nucleic acids"
] |
9,536,737 | https://en.wikipedia.org/wiki/Kinematic%20chain | In mechanical engineering, a kinematic chain is an assembly of rigid bodies connected by joints to provide constrained motion that is the mathematical model for a mechanical system. As the word chain suggests, the rigid bodies, or links, are constrained by their connections to other links. An example is the simple open chain formed by links connected in series, like the usual chain, which is the kinematic model for a typical robot manipulator.
Mathematical models of the connections, or joints, between two links are termed kinematic pairs. Kinematic pairs model the hinged and sliding joints fundamental to robotics, often called lower pairs and the surface contact joints critical to cams and gearing, called higher pairs. These joints are generally modeled as holonomic constraints. A kinematic diagram is a schematic of the mechanical system that shows the kinematic chain.
The modern use of kinematic chains includes compliance that arises from flexure joints in precision mechanisms, link compliance in compliant mechanisms and micro-electro-mechanical systems, and cable compliance in cable robotic and tensegrity systems.
Mobility formula
The degrees of freedom, or mobility, of a kinematic chain is the number of parameters that define the configuration of the chain.
A system of n rigid bodies moving in space has 6n degrees of freedom measured relative to a fixed frame. This frame is included in the count of bodies, so that mobility does not depend on the link that forms the fixed frame. This means the degree-of-freedom of this system is M = 6(N − 1), where N is the number of moving bodies plus the fixed body.
Joints that connect bodies impose constraints. Specifically, hinges and sliders each impose five constraints and therefore remove five degrees of freedom. It is convenient to define the number of constraints c that a joint imposes in terms of the joint's freedom f, where c = 6 − f. In the case of a hinge or slider, which are one-degree-of-freedom joints, f = 1 and therefore c = 6 − 1 = 5.
The result is that the mobility of a kinematic chain formed from n moving links and j joints, each with freedom fi, i = 1, ..., j, is given by
M = 6n − Σi (6 − fi) = 6(N − 1 − j) + Σi fi.
Recall that N includes the fixed link.
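The formula is a one-liner in code; the serial 6R arm example is an illustrative check (six moving links plus the ground frame, six one-degree-of-freedom hinges):

```python
def mobility(N, joint_freedoms):
    """Spatial mobility M = 6(N - 1 - j) + sum(f_i), with N counting the
    moving bodies plus the fixed frame and j the number of joints."""
    j = len(joint_freedoms)
    return 6 * (N - 1 - j) + sum(joint_freedoms)

# A serial chain of six links joined by six hinges to the ground: M = 6
print(mobility(7, [1] * 6))
```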
Analysis of kinematic chains
The constraint equations of a kinematic chain couple the range of movement allowed at each joint to the dimensions of the links in the chain, and form algebraic equations that are solved to determine the configuration of the chain associated with specific values of input parameters, called degrees of freedom.
The constraint equations for a kinematic chain are obtained using rigid transformations to characterize the relative movement allowed at each joint and separate rigid transformations to define the dimensions of each link. In the case of a serial open chain, the result is a sequence of rigid transformations alternating joint and link transformations from the base of the chain to its end link, which is equated to the specified position for the end link. A chain of n links connected in series has the kinematic equations
[T] = [Z1][X1][Z2][X2]…[Xn−1][Zn],
where [T] is the transformation locating the end-link—notice that the chain includes a "zeroth" link consisting of the ground frame to which it is attached. These equations are called the forward kinematics equations of the serial chain.
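A concrete sketch with 4×4 homogeneous transforms, using a two-link planar arm (link lengths and joint angles chosen arbitrarily for illustration):

```python
import numpy as np

def rot_z(theta):
    """Joint transform: rotation by theta (radians) about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def trans_x(a):
    """Link transform: translation by the link length a along the x axis."""
    T = np.eye(4)
    T[0, 3] = a
    return T

# Alternate joint and link transforms from the base to the end link
T = rot_z(np.radians(30)) @ trans_x(1.0) @ rot_z(np.radians(45)) @ trans_x(0.5)
print(T[:3, 3])  # end-link position, approximately [0.995, 0.983, 0.0]
```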
Kinematic chains of a wide range of complexity are analyzed by equating the kinematics equations of serial chains that form loops within the kinematic chain. These equations are often called loop equations.
The complexity (in terms of calculating the forward and inverse kinematics) of the chain is determined by the following factors:
Its topology: a serial chain, a parallel manipulator, a tree structure, or a graph.
Its geometrical form: how are neighbouring joints spatially connected to each other?
Explanation
Two or more rigid bodies in space are collectively called a rigid body system. We can hinder the motion of these independent rigid bodies with kinematic constraints. Kinematic constraints are constraints between rigid bodies that result in a decrease of the degrees of freedom of the rigid body system.
Synthesis of kinematic chains
The constraint equations of a kinematic chain can be used in reverse to determine the dimensions of the links from a specification of the desired movement of the system. This is termed kinematic synthesis.
Perhaps the most developed formulation of kinematic synthesis is for four-bar linkages, which is known as Burmester theory.
Ferdinand Freudenstein is often called the father of modern kinematics for his contributions to the kinematic synthesis of linkages beginning in the 1950s. His use of the newly developed computer to solve Freudenstein's equation became the prototype of computer-aided design systems.
This work has been generalized to the synthesis of spherical and spatial mechanisms.
See also
Assur group
Denavit–Hartenberg parameters
Chebychev–Grübler–Kutzbach criterion
Configuration space
Machine (mechanical)
Mechanism (engineering)
Six-bar linkage
Simple machines
Six degrees of freedom
Superposition principle
References
Computer graphics
3D computer graphics
Computational physics
Robot kinematics
Virtual reality
Mechanisms (engineering)
Diagrams
Classical mechanics | Kinematic chain | [
"Physics",
"Engineering"
] | 1,007 | [
"Robotics engineering",
"Classical mechanics",
"Computational physics",
"Mechanics",
"Robot kinematics",
"Mechanical engineering",
"Mechanisms (engineering)"
] |
9,538,207 | https://en.wikipedia.org/wiki/Mitigation%20of%20seismic%20motion | Mitigation of seismic motion is an important factor in earthquake engineering and construction in earthquake-prone areas. The destabilizing action of an earthquake on constructions may be direct (seismic motion of the ground) or indirect (earthquake-induced landslides, liquefaction of the foundation soils and waves of tsunami).
Knowledge of local amplification of the seismic motion from the bedrock is very important in order to choose the suitable design solutions. Local amplification can be anticipated from the presence of particular stratigraphic conditions, such as soft soil overlapping the bedrock, or where morphological settings (e.g. crest zones, steep slopes, valleys, or endorheic basins) may produce focalization of the seismic event.
The identification of areas potentially affected by earthquake-induced landslides and by soil liquefaction can be made by geological survey and by analysis of historical documents. Even quiescent and stabilized landslide areas may be reactivated by a severe earthquake. Young soils may be particularly susceptible to liquefaction.
See also
Base isolation
Seismic hazard
Seismic performance
Tuned mass damper
Vibration control
Crash testing
References
Building engineering
Earthquake and seismic risk mitigation | Mitigation of seismic motion | [
"Engineering"
] | 237 | [
"Structural engineering",
"Building engineering",
"Civil engineering",
"Earthquake and seismic risk mitigation",
"Architecture"
] |
13,550,433 | https://en.wikipedia.org/wiki/International%20VELUX%20Award%20for%20Students%20of%20Architecture | The International VELUX Award is for students of architecture in the theme of sunlight and daylight. The award is biennial and was first presented in 2004.
The award is for completed works on any scale from a small scale component to large urban contexts or abstract concepts and experimentation. The award is presented by VELUX in close cooperation with the International Union of Architects (UIA) and the European Association for Architectural Education (EAAE).
Description
“Light of Tomorrow” is the theme of the International VELUX Award. The award aims to challenge thinking about the future of daylight in the built environment. It contains no specific categories and is in no way restricted to the use of VELUX products. The jury of the International VELUX Award comprises internationally recognized architects and other building professionals. Any registered student of architecture – individual or team – from anywhere in the world may participate. The award aims to acknowledge not only the students but their teachers as well; therefore, all students must be backed and granted submission by a teacher from a school of architecture.
History
The first International VELUX Award took place in 2004. 760 students from 194 schools in 34 countries in Europe registered, and 258 students from 106 schools in 27 countries submitted their projects. The international jury led by Glenn Murcutt selected three winners and eight honourable mentions, who were announced at the Award event held in Paris.
2004 winners
In 2004 the first prize went to Norwegian student Claes Heske Ekornås for his project “Light as matter”, and ten winners were announced at the Award event in Paris.
2006 winners
In 2006, the award went global – inviting students from all over the world to participate. The number of submissions more than doubled reaching 557 projects from 225 schools in 53 countries. The international jury led by Per Olaf Fjeld decided to award three winners and 17 honourable mentions, and they were all celebrated at the Award event at the Guggenheim Museum in Bilbao, Spain.
Louise Groenlund of Denmark won the International VELUX Award for her project “A museum of photography”. Twenty winners and honourable mentions were announced at the Award event at the Guggenheim Bilbao in November 2006.
2008 winners
2010 winners
2016 winners
The continental winners of Daylight in Buildings
The continental winners of Daylight Investigations
2018 winners
The continental winners of Daylight Investigations
The continental winners of Daylight in Buildings
2020 winners
The continental winners of Daylight Investigations
The continental winners of Daylight in Buildings
2022 Winners
The continental winners of Daylight Investigations
The continental winners of Daylight in Buildings
Architecture awards
Architectural competitions
Architectural education
References | International VELUX Award for Students of Architecture | [
"Engineering"
] | 507 | [
"Architectural education",
"Architectural competitions",
"Architecture"
] |
13,553,180 | https://en.wikipedia.org/wiki/Photothermal%20microspectroscopy | Photothermal microspectroscopy (PTMS), alternatively known as photothermal temperature fluctuation (PTTF), is derived from two parent instrumental techniques: infrared spectroscopy and atomic force microscopy (AFM). In one particular type of AFM, known as scanning thermal microscopy (SThM), the imaging probe is a sub-miniature temperature sensor, which may be a thermocouple or a resistance thermometer. This same type of detector is employed in a PTMS instrument, enabling it to provide AFM/SThM images. However, the chief additional use of PTMS is to yield infrared spectra from sample regions smaller than a micrometer, as outlined below.
Technique
The AFM is interfaced with an infrared spectrometer. For work using Fourier transform infrared spectroscopy (FTIR), the spectrometer is equipped with a conventional black body infrared source. A particular region of the sample may first be chosen on the basis of the image obtained using the AFM imaging mode of operation. Then, when material at this location absorbs the electromagnetic radiation, heat is generated, which diffuses, giving rise to a decaying temperature profile. The thermal probe then detects the photothermal response of this region of the sample. The resultant measured temperature fluctuations provide an interferogram that replaces the interferogram obtained by a conventional FTIR setup, e.g., by direct detection of the radiation transmitted by a sample. The temperature profile can be sharpened by modulating the excitation beam. This results in the generation of thermal waves whose diffusion length is inversely proportional to the square root of the modulation frequency. An important advantage of the thermal approach is that it makes it possible to obtain depth-sensitive subsurface information from surface measurements, thanks to the dependence of the thermal diffusion length on the modulation frequency.
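To make the frequency dependence concrete, the standard one-dimensional thermal-wave result gives a diffusion length μ = √(D/(πf)) for thermal diffusivity D and modulation frequency f. The following short Python sketch (an illustration with an assumed polymer-like diffusivity, not a value from the article) shows how higher modulation frequencies probe shallower depths:

```python
import math

def thermal_diffusion_length(diffusivity, frequency):
    """Thermal-wave diffusion length mu = sqrt(D / (pi * f)),
    which falls off as 1 / sqrt(f)."""
    return math.sqrt(diffusivity / (math.pi * frequency))

D = 1e-7  # m^2/s, an assumed diffusivity typical of many polymers
for f in (10.0, 1_000.0, 100_000.0):
    mu = thermal_diffusion_length(D, f)
    print(f"f = {f:>9.0f} Hz -> mu = {mu * 1e6:6.2f} um")
```

Sweeping the modulation frequency therefore amounts to depth profiling: roughly 56 μm at 10 Hz down to about 0.6 μm at 100 kHz for this diffusivity.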
Applications
The two particular features of PTMS that have determined its applications so far are 1) spectroscopic mapping may be performed at a spatial resolution well below the diffraction limit of IR radiation, ultimately at a scale of 20-30 nm. In principle, this opens the way to sub-wavelength IR microscopy (see scanning probe microscopy) where the image contrast is to be determined by the thermal response of individual sample regions to particular spectral wavelengths and 2) in general, no special preparation technique is required when solid samples are to be studied. For most standard FTIR methods, this is not the case.
Related technique
This spectroscopic technique complements another recently developed method of chemical characterisation or fingerprinting, namely micro-thermal analysis (micro-TA). This also uses an “active” SThM probe, which acts as a heater as well as a thermometer, so as to inject evanescent temperature waves into a sample and to allow sub-surface imaging of polymers and other materials. The sub-surface detail detected corresponds to variations in heat capacity or thermal conductivity. Ramping the temperature of the probe, and thus the temperature of the small sample region in contact with it, allows localized thermal analysis and/or thermomechanometry to be performed.
References
Scanning probe microscopy
Spectroscopy | Photothermal microspectroscopy | [
"Physics",
"Chemistry",
"Materials_science"
] | 656 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Scanning probe microscopy",
"Microscopy",
"Nanotechnology",
"Spectroscopy"
] |
2,243,424 | https://en.wikipedia.org/wiki/Single%20crystal | In materials science, a single crystal (or single-crystal solid or monocrystalline solid) is a material in which the crystal lattice of the entire sample is continuous and unbroken to the edges of the sample, with no grain boundaries. The absence of the defects associated with grain boundaries can give monocrystals unique properties, particularly mechanical, optical and electrical, which can also be anisotropic, depending on the type of crystallographic structure. These properties, in addition to making some gems precious, are industrially used in technological applications, especially in optics and electronics.
Because entropic effects favor the presence of some imperfections in the microstructure of solids, such as impurities, inhomogeneous strain and crystallographic defects such as dislocations, perfect single crystals of meaningful size are exceedingly rare in nature. The necessary laboratory conditions often add to the cost of production. On the other hand, imperfect single crystals can reach enormous sizes in nature: several mineral species such as beryl, gypsum and feldspars are known to have produced crystals several meters across.
The opposite of a single crystal is an amorphous structure, where the atomic position is limited to short-range order only. In between the two extremes exist polycrystalline phases, which are made up of a number of smaller crystals known as crystallites, and paracrystalline phases. Single crystals will usually have distinctive plane faces and some symmetry, where the angles between the faces dictate the crystal's ideal shape. Gemstones are often single crystals artificially cut along crystallographic planes to take advantage of refractive and reflective properties.
Production methods
Although current methods are extremely sophisticated with modern technology, the origins of crystal growth can be traced back to salt purification by crystallization in 2500 BCE. A more advanced method using an aqueous solution was started in 1600 CE while the melt and vapor methods began around 1850 CE.
Basic crystal growth methods can be separated into four categories based on what they are artificially grown from: melt, solid, vapor, and solution. Specific techniques to produce large single crystals (aka boules) include the Czochralski process (CZ), Floating zone (or Zone Movement), and the Bridgman technique. Dr. Teal and Dr. Little of Bell Telephone Laboratories were the first to use the Czochralski method to create Ge and Si single crystals. Other methods of crystallization may be used, depending on the physical properties of the substance, including hydrothermal synthesis, sublimation, or simply solvent-based crystallization. For example, a modified Kyropoulos method can be used to grow high quality 300 kg sapphire single crystals. The Verneuil method, also called the flame-fusion method, was used in the early 1900s to make rubies before CZ. The diagram on the right illustrates most of the conventional methods. There have been new breakthroughs such as chemical vapor depositions (CVD) along with different variations and tweaks to the existing methods. These are not shown in the diagram.
In the case of metal single crystals, fabrication techniques also include epitaxy and abnormal grain growth in solids. Epitaxy is used to deposit very thin (micrometer to nanometer scale) layers of the same or different materials on the surface of an existing single crystal. Applications of this technique lie in the areas of semiconductor production, with potential uses in other nanotechnological fields and catalysis.
It is extremely difficult to grow single crystals of polymers, mainly because the polymer chains are of different lengths and for various entropic reasons. However, topochemical reactions are among the easier methods for obtaining polymer single crystals.
Applications
Semiconductor industry
The single crystal most used in the semiconductor industry is silicon. The four main production methods for semiconductor single crystals from metallic solutions are liquid phase epitaxy (LPE), liquid phase electroepitaxy (LPEE), the traveling heater method (THM), and liquid phase diffusion (LPD). However, many single crystals other than inorganic ones are capable of semiconducting, including single-crystal organic semiconductors.
Monocrystalline silicon used in the fabrication of semiconductors and photovoltaics is the greatest use of single-crystal technology today. In photovoltaics, the most efficient crystal structure will yield the highest light-to-electricity conversion. On the quantum scale that microprocessors operate on, the presence of grain boundaries would have a significant impact on the functionality of field effect transistors by altering local electrical properties. Therefore, microprocessor fabricators have invested heavily in facilities to produce large single crystals of silicon. The Czochralski and floating-zone methods are popular for the growth of silicon crystals.
Other inorganic semiconducting single crystals include GaAs, GaP, GaSb, Ge, InAs, InP, InSb, CdS, CdSe, CdTe, ZnS, ZnSe, and ZnTe. Most of these can also be tuned with various doping for desired properties. Single-crystal graphene is also highly desired for applications in electronics and optoelectronics with its large carrier mobility and high thermal conductivity, and remains a topic of fervent research. One of the main challenges has been growing uniform single crystals of bilayer or multilayer graphene over large areas; epitaxial growth and the new CVD (mentioned above) are among the new promising methods under investigation.
Organic semiconducting single crystals differ from inorganic crystals. The weak intermolecular bonds mean lower melting temperatures, higher vapor pressures, and greater solubility. For single crystals to grow, the purity of the material is crucial, and the production of organic materials usually requires many steps to reach the necessary purity. Extensive research is under way to find materials that are thermally stable with high charge-carrier mobility. Past discoveries include naphthalene, tetracene, and 9,10-diphenylanthracene (DPA). Triphenylamine derivatives have shown promise, and in 2021 the single-crystal structure of α-phenyl-4′-(diphenylamino)stilbene (TPA) grown using the solution method exhibited even greater potential for semiconductor use with its anisotropic hole-transport property.
Optical application
Single crystals have unique physical properties because they consist of a single grain, with molecules in strict order and no grain boundaries. This includes optical properties: single-crystal silicon is also used for optical windows because of its transparency at specific infrared (IR) wavelengths, making it very useful for some instruments.
Sapphires: Known to scientists as the alpha phase of aluminum oxide (Al2O3), sapphire single crystals are widely used in hi-tech engineering. They can be grown from gaseous, solid, or solution phases. The diameter of the crystals resulting from a growth method is important when considering subsequent electronic uses. Sapphire crystals are used for lasers and nonlinear optics; notable uses include the window of a biometric fingerprint reader, optical disks for long-term data storage, and X-ray interferometers.
Indium Phosphide: These single crystals, with their large-diameter substrates, are particularly appropriate for combining optoelectronics with high-speed electronics in optical-fiber applications. Other photonic devices include lasers, photodetectors, avalanche photodiodes, optical modulators and amplifiers, signal-processing devices, and both optoelectronic and photonic integrated circuits.
Germanium: This was the material in the first transistor invented by Bardeen, Brattain, and Shockley in 1947. It is used in some gamma-ray detectors and infrared optics. Now it has become the focus of ultrafast electronic devices for its intrinsic carrier mobility.
Arsenides: Arsenic can be combined with various group III elements such as B, Al, Ga, and In, with the GaAs compound in high demand for wafers.
Cadmium Telluride: CdTe crystals have several applications as substrates for IR imaging, electrooptic devices, and solar cells. By alloying CdTe and ZnTe together, room-temperature X-ray and gamma-ray detectors can be made.
Electrical conductors
Metals can be produced in single-crystal form and provide a means to understand the ultimate performance of metallic conductors. It is vital for understanding the basic science such as catalytic chemistry, surface physics, electrons, and monochromators. Production of metallic single crystals have the highest quality requirements and are grown, or pulled, in the form of rods. Certain companies can produce specific geometries, grooves, holes, and reference faces along with varying diameters.
Of all the metallic elements, silver and copper have the best conductivity at room temperature, setting the bar for performance. The size of the market, and vagaries in supply and cost, have provided strong incentives to seek alternatives or find ways to use less of them by improving performance.
The conductivity of commercial conductors is often expressed relative to the International Annealed Copper Standard, according to which the purest copper wire available in 1914 measured around 100%. The purest modern copper wire is a better conductor, measuring over 103% on this scale. The gains are from two sources. First, modern copper is more pure. However, this avenue for improvement seems at an end. Making the copper purer still makes no significant improvement. Second, annealing and other processes have been improved. Annealing reduces the dislocations and other crystal defects which are sources of resistance. But the resulting wires are still polycrystalline. The grain boundaries and remaining crystal defects are responsible for some residual resistance. This can be quantified and better understood by examining single crystals.
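To make the IACS scale concrete, here is a small conversion sketch in Python (the 100% IACS reference conductivity of 5.8 × 10^7 S/m is the standard definition of the scale, stated here as an assumption rather than a figure from the article):

```python
# 100% IACS corresponds to a conductivity of 5.8e7 S/m (standard definition).
IACS_100 = 5.8e7  # S/m

def percent_iacs(conductivity_s_per_m):
    """Express an electrical conductivity as a percentage of the IACS reference."""
    return 100.0 * conductivity_s_per_m / IACS_100

# An assumed modern high-purity copper conductivity, consistent with the
# "over 103%" figure mentioned in the text.
print(f"{percent_iacs(5.98e7):.1f}% IACS")
```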
Single-crystal copper did prove to have better conductivity than polycrystalline copper.
However, the single-crystal copper not only became a better conductor than high purity polycrystalline silver, but with prescribed heat and pressure treatment could surpass even single-crystal silver. Although impurities are usually bad for conductivity, a silver single crystal with a small amount of copper substitutions proved to be the best.
As of 2009, no single-crystal copper is manufactured on a large scale industrially, but methods of producing very large individual crystal sizes for copper conductors are exploited for high performance electrical applications. These can be considered meta-single crystals with only a few crystals per meter of length.
Single-crystal turbine blades
Another application of single-crystal solids is in materials science in the production of high strength materials with low thermal creep, such as turbine blades. Here, the absence of grain boundaries actually gives a decrease in yield strength, but more importantly decreases the amount of creep which is critical for high temperature, close tolerance part applications. Researcher Barry Piearcey found that a right-angle bend at the casting mold would decrease the number of columnar crystals and later, scientist Giamei used this to start the single-crystal structure of the turbine blade.
In research
Single crystals are essential in research, especially in condensed-matter physics and all aspects of materials science such as surface science. The detailed study of the crystal structure of a material by techniques such as Bragg diffraction and helium atom scattering is easier with single crystals because it is possible to study the directional dependence of various properties and compare them with theoretical predictions. Furthermore, macroscopically averaging techniques such as angle-resolved photoemission spectroscopy or low-energy electron diffraction are only possible or meaningful on surfaces of single crystals. In superconductivity there have been cases of materials where superconductivity is only seen in single-crystalline specimens. They may be grown for this purpose, even when the material is otherwise only needed in polycrystalline form.
As such, numerous new materials are being studied in their single-crystal form. The young field of metal-organic frameworks (MOFs) is one of many in which single crystals are grown. In January 2021, Dr. Dong and Dr. Feng demonstrated how polycyclic aromatic ligands can be optimized to produce large 2D MOF single crystals up to 200 μm in size. This could mean scientists can fabricate single-crystal devices and determine intrinsic electrical conductivity and charge-transport mechanisms.
The field of photodriven transformations also involves single crystals, through single-crystal-to-single-crystal (SCSC) transformations. These provide direct observation of molecular movement and an understanding of mechanistic details. This photoswitching behavior has also been observed in cutting-edge research on intrinsically non-photo-responsive mononuclear lanthanide single-molecule magnets (SMMs).
See also
Engineering aspects of crystallisation
Fractional crystallization
Laser-heated pedestal growth
Micro-pulling-down
Recrystallization
Seed crystal
References
Further reading
"Small Molecule Crystallization" (PDF) at Illinois Institute of Technology website
Crystals | Single crystal | [
"Chemistry",
"Materials_science"
] | 2,673 | [
"Crystallography",
"Crystals"
] |
2,243,574 | https://en.wikipedia.org/wiki/Thermal%20Hall%20effect | In solid-state physics, the thermal Hall effect, also known as the Righi–Leduc effect, named after independent co-discoverers Augusto Righi and Sylvestre Anatole Leduc, is the thermal analog of the Hall effect. Given a thermal gradient across a solid, this effect describes the appearance of an orthogonal temperature gradient when a magnetic field is applied.
For conductors, a significant portion of the thermal current is carried by the electrons. In particular, the Righi–Leduc effect describes the heat flow resulting from a perpendicular temperature gradient and vice versa. The Maggi–Righi–Leduc effect describes changes in thermal conductivity when placing a conductor in a magnetic field.
A thermal Hall effect has also been measured in paramagnetic insulators, called the "phonon Hall effect". In this case, there are no charge currents in the solid, so the magnetic field cannot exert a Lorentz force. The phonon thermal Hall effect has been measured in various classes of non-magnetic insulating solids, but the exact mechanism giving rise to this phenomenon is largely unknown. An analogous thermal Hall effect for neutral particles exists in polyatomic gases, known as the Senftleben–Beenakker effect.
Measurements of the thermal Hall conductivity are used to distinguish between the electronic and lattice contributions to thermal conductivity. These measurements are especially useful when studying superconductors.
Description
Given a conductor or semiconductor with a temperature difference in the x-direction and a magnetic field B perpendicular to it in the z-direction, a temperature difference can occur in the transverse y-direction: ∂T/∂y = A_RL Bz ∂T/∂x, where A_RL is the thermal Hall (Righi–Leduc) coefficient.
The Righi–Leduc effect is a thermal analogue of the Hall effect. With the Hall effect, an externally applied electrical voltage causes an electrical current to flow. The mobile charge carriers (usually electrons) are transversely deflected by the magnetic field due to the Lorentz force. In the Righi–Leduc effect, the temperature difference causes the mobile charge carriers to flow from the warmer end to the cooler end. Here, too, the Lorentz force causes a transverse deflection. Since the electrons transport heat, one side is heated more than the other.
The thermal Hall coefficient A_RL (sometimes also called the Righi–Leduc coefficient) depends on the material and has units of tesla−1. It is related to the Hall coefficient R_H by the electrical conductivity σ, as A_RL = σ R_H.
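As a rough numerical check of these relations, the sketch below uses order-of-magnitude values for copper at room temperature (assumed for illustration, not taken from the article):

```python
# Order-of-magnitude values for copper at room temperature (assumptions).
sigma = 6.0e7    # electrical conductivity, S/m
R_H = -0.5e-10   # Hall coefficient, m^3/C

A_RL = sigma * R_H           # thermal Hall coefficient, units of 1/T
B_z = 1.0                    # applied magnetic field, T
dT_dx = 100.0                # longitudinal temperature gradient, K/m

# Transverse gradient from the Righi-Leduc relation dT/dy = A_RL * B_z * dT/dx.
dT_dy = A_RL * B_z * dT_dx
print(f"A_RL = {A_RL:.1e} 1/T, dT/dy = {dT_dy:.2f} K/m")
```

The effect is small but measurable: a 100 K/m longitudinal gradient in a 1 T field yields a transverse gradient of a few tenths of a kelvin per metre for these values.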
See also
Hall effect
References
Superconductivity
Thermal | Thermal Hall effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 502 | [
"Physical phenomena",
"Materials science stubs",
"Physical quantities",
"Hall effect",
"Superconductivity",
"Electric and magnetic fields in matter",
"Materials science",
"Electrical phenomena",
"Condensed matter physics",
"Condensed matter stubs",
"Solid state engineering",
"Electrical resist... |
2,243,964 | https://en.wikipedia.org/wiki/Titanium%20oxide | Titanium oxide may refer to:
Titanium dioxide (titanium(IV) oxide), TiO2
Titanium(II) oxide (titanium monoxide), TiO, a non-stoichiometric oxide
Titanium(III) oxide (dititanium trioxide), Ti2O3
Ti3O
Ti2O
δ-TiOx (x= 0.68–0.75)
TinO2n−1 where n ranges from 3–9 inclusive, e.g. Ti3O5, Ti4O7, etc.
Reduced titanium oxides
A common reduced titanium oxide is TiO, also known as titanium monoxide. It can be prepared from titanium dioxide and titanium metal at 1500 °C.
Ti3O5, Ti4O7, and Ti5O9 are non-stoichiometric oxides. These compounds are typically formed at high temperatures under oxygen-deficient (reducing) conditions. As a result, they exhibit unique structural and electronic properties, and have been studied for their potential use in various applications, including in gas sensors, lithium-ion batteries, and photocatalysis.
References
Dielectrics
Electronic engineering
High-κ dielectrics | Titanium oxide | [
"Physics",
"Technology",
"Engineering"
] | 240 | [
"Computer engineering",
"Electronic engineering",
"Materials",
"Electrical engineering",
"Dielectrics",
"Matter"
] |
2,244,316 | https://en.wikipedia.org/wiki/Risk%E2%80%93benefit%20ratio | A risk–benefit ratio (or benefit-risk ratio) is the ratio of the risk of an action to its potential benefits. Risk–benefit analysis (or benefit-risk analysis) is analysis that seeks to quantify the risk and benefits and hence their ratio.
Analyzing a risk can be heavily dependent on the human factor. A certain level of risk in our lives is accepted as necessary to achieve certain benefits. For example, driving an automobile is a risk many people take daily, in part because the danger is mitigated by their perception of their individual ability to manage the risk-creating situation. When individuals are exposed to involuntary risk (a risk over which they have no control), they make risk aversion their primary goal. Under these circumstances, individuals require the probability of risk to be as much as one thousand times smaller than for the same situation under their perceived control (a notable example being the common bias in the perception of risk in flying vs. driving).
Evaluations
Evaluations of future risk can be:
Real future risk, as disclosed by the fully matured future circumstances when they develop.
Statistical risk, as determined by currently available data, as measured actuarially for insurance premiums.
Projected risk, as analytically based on system models structured from historical studies.
Perceived risk, as intuitively seen by individuals.
Medical research
For research that involves more than minimal risk of harm to the subjects, the investigator must ensure that the amount of benefit clearly outweighs the amount of risk. Only if there is a favorable risk–benefit ratio may a study be considered ethical.
The Declaration of Helsinki, adopted by the World Medical Association, states that biomedical research cannot be done legitimately unless the importance of the objective is in proportion to the risk to the subject. The Helsinki Declaration and the CONSORT Statement stress a favorable risk–benefit ratio.
See also
Benefit shortfall
Cost–benefit analysis
Odds algorithm
Optimism bias
Reference class forecasting
References
Risk analysis
Ethics and statistics
Medical statistics | Risk–benefit ratio | [
"Technology"
] | 401 | [
"Ethics and statistics",
"Ethics of science and technology"
] |
2,244,399 | https://en.wikipedia.org/wiki/Newton-second | The newton-second (also newton second; symbol: N⋅s or N s) is the unit of impulse in the International System of Units (SI). It is dimensionally equivalent to the momentum unit kilogram-metre per second (kg⋅m/s). One newton-second corresponds to a one-newton force applied for one second.
It can be used to identify the resultant velocity of a mass if a force accelerates the mass for a specific time interval.
Definition
Momentum is given by the formula p = mv, where
p is the momentum in newton-seconds (N⋅s) or "kilogram-metres per second" (kg⋅m/s)
m is the mass in kilograms (kg)
v is the velocity in metres per second (m/s)
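As a short worked illustration (the numbers here are added arithmetic, not figures from the article): a constant force of 6 N applied for 2 s delivers an impulse of 6 N × 2 s = 12 N⋅s. Applied to a 4 kg mass initially at rest, this impulse produces a final velocity of v = p/m = (12 kg⋅m/s) / (4 kg) = 3 m/s.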
Examples
This table gives the magnitudes of some momenta for various masses and speeds.
See also
Power factor
Newton-metre – SI unit of torque
Orders of magnitude (momentum) – examples of momenta
References
Classical mechanics
SI derived units
Units of measurement | Newton-second | [
"Physics",
"Mathematics"
] | 204 | [
"Quantity",
"Classical mechanics stubs",
"Classical mechanics",
"Mechanics",
"Units of measurement"
] |
2,245,430 | https://en.wikipedia.org/wiki/Intrinsic%20semiconductor | An intrinsic semiconductor, also called a pure semiconductor, undoped semiconductor or i-type semiconductor, is a semiconductor without any significant dopant species present. The number of charge carriers is therefore determined by the properties of the material itself instead of the amount of impurities. In intrinsic semiconductors the number of excited electrons and the number of holes are equal: n = p. This may be the case even after doping the semiconductor, though only if it is doped with both donors and acceptors equally. In this case, n = p still holds, and the semiconductor remains intrinsic, though doped. This means that some semiconductors are both intrinsic and extrinsic, but only if n (electrons contributed by donor dopants or thermal excitation) is equal to p (holes contributed by acceptor dopants, which act as positive charges).
The electrical conductivity of chemically pure semiconductors can still be affected by crystallographic defects of technological origin (like vacancies), some of which can behave similarly to dopants. Their effect can often be neglected, though, and the number of electrons in the conduction band is then exactly equal to the number of holes in the valence band. Current conduction in an intrinsic semiconductor is enabled purely by electron excitation across the band gap; this excitation is usually small at room temperature, except in narrow-bandgap semiconductors.
The conductivity of a semiconductor can be modeled in terms of the band theory of solids. The band model of a semiconductor suggests that at ordinary temperatures there is a finite possibility that electrons can reach the conduction band and contribute to electrical conduction.
A silicon crystal is different from an insulator because at any temperature above absolute zero, there is a non-zero probability that an electron in the lattice will be knocked loose from its position, leaving behind an electron deficiency called a "hole". If a voltage is applied, then both the electron and the hole can contribute to a small current flow.
Electrons and holes
In an intrinsic semiconductor such as silicon at temperatures above absolute zero, there will be some electrons which are excited across the band gap into the conduction band, and these electrons can support charge flow. When the electron in pure silicon crosses the gap, it leaves behind an electron vacancy or "hole" in the regular silicon lattice. Under the influence of an external voltage, both the electron and the hole can move across the material. In an n-type semiconductor, the dopant contributes extra electrons, dramatically increasing the conductivity. In a p-type semiconductor, the dopant produces extra vacancies or holes, which likewise increase the conductivity. It is however the behavior of the p-n junction which is the key to the enormous variety of solid-state electronic devices.
Semiconductor current
The current which will flow in an intrinsic semiconductor consists of both electron and hole current. That is, the electrons which have been freed from their lattice positions into the conduction band can move through the material. In addition, other electrons can hop between lattice positions to fill the vacancies left by the freed electrons. This additional mechanism is called hole conduction because it is as if the holes are migrating across the material in the direction opposite to the free electron movement.
The current flow in an intrinsic semiconductor is influenced by the density of energy states which in turn influences the electron density in the conduction band. This current is highly temperature dependent.
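The strong temperature dependence can be made explicit with the standard textbook expression for the intrinsic carrier concentration, n_i = √(N_C N_V) exp(−E_g/(2k_BT)), where N_C and N_V are the effective densities of states of the conduction and valence bands and E_g is the band gap. The Python sketch below uses this textbook relation with typical approximate silicon parameters (assumed values, not figures from this article):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def intrinsic_carrier_density(n_c, n_v, e_gap, temp):
    """n_i = sqrt(N_C * N_V) * exp(-E_g / (2 k_B T)); in an intrinsic
    semiconductor n = p = n_i. N_C and N_V are held fixed here for
    simplicity, although they actually vary as T**1.5."""
    return math.sqrt(n_c * n_v) * math.exp(-e_gap / (2 * K_B * temp))

# Typical approximate values for silicon (assumed): cm^-3, cm^-3, eV.
N_C, N_V, E_G = 2.8e19, 1.04e19, 1.12
for T in (250.0, 300.0, 350.0):
    n_i = intrinsic_carrier_density(N_C, N_V, E_G, T)
    print(f"T = {T:.0f} K -> n_i ~ {n_i:.1e} cm^-3")
```

Even a 50 K change shifts n_i by roughly two orders of magnitude, which is why the current in an intrinsic semiconductor is so strongly temperature dependent.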
References
See also
Extrinsic semiconductor
N-type semiconductor
P-type semiconductor
Semiconductor material types
"Chemistry"
] | 731 | [
"Semiconductor material types",
"Semiconductor materials"
] |