| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
10,312,687 | https://en.wikipedia.org/wiki/Lubricity | Lubricity is the measure of the reduction in friction and/or wear by a lubricant. The study of lubrication and wear mechanisms is called tribology.
Measurement of lubricity
The lubricity of a substance is not a material property, and cannot be measured directly. Tests are performed to quantify a lubricant's performance for a specific system. This is often done by determining how much wear is caused to a surface by a given wear-inducing object in a given amount of time. Other factors such as surface size, temperature, and pressure are also specified. For two fluids with the same viscosity, the one that results in a smaller wear scar is considered to have higher lubricity. For this reason, lubricity is also termed a substance's anti-wear property.
Examples of tribometer test setups include "Ball-on-cylinder" and "Ball-on-three-discs" tests.
Lubricity in diesel engines
In a modern diesel engine, the fuel is part of the engine lubrication process. Diesel fuel naturally contains compounds that provide lubricity, but because of regulations in many countries (such as the US and the EU countries), sulphur must be removed from the fuel before it can be sold. The hydrotreatment of diesel fuel to remove sulphur also removes the compounds that provide lubricity. Reformulated diesel fuel that does not have biodiesel added has a lower lubricity and requires lubricity improving additives to prevent excessive engine wear.
See also
Boundary lubrication
Superlubricity
Tribology
References
Fluid dynamics
Tribology | Lubricity | [
"Chemistry",
"Materials_science",
"Engineering"
] | 342 | [
"Tribology",
"Chemical engineering",
"Materials science",
"Surface science",
"Mechanical engineering",
"Piping",
"Fluid dynamics"
] |
10,313,131 | https://en.wikipedia.org/wiki/Earle%20K.%20Plyler%20Prize | The Earle K. Plyler Prize for Molecular Spectroscopy and Dynamics is a prize that has been awarded annually by the American Physical Society since 1977. The recipient is chosen for "notable contributions to the field of molecular spectroscopy and dynamics". The prize is named after Earle K. Plyler, who was a leading experimenter in the field of infrared spectroscopy; as of 2024 it is valued at $10,000. The prize is currently sponsored by the AIP Journal of Chemical Physics.
Recipients
Source: American Physical Society
See also
List of physics awards
List of chemistry awards
References
External links
Earle K. Plyler Prize for Molecular Spectroscopy and Dynamics, American Physical Society
Awards of the American Physical Society
Chemistry awards
Spectroscopy
Awards established in 1977 | Earle K. Plyler Prize | [
"Physics",
"Chemistry",
"Technology"
] | 146 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Chemistry awards",
"Science and technology awards",
"Spectroscopy"
] |
436,077 | https://en.wikipedia.org/wiki/Diamond%20Light%20Source | Diamond Light Source (or Diamond) is the UK's national synchrotron light source science facility located at the Harwell Science and Innovation Campus in Oxfordshire.
Its purpose is to produce intense beams of light whose special characteristics are useful in many areas of scientific research. In particular, it can be used to investigate the structure and properties of a wide range of materials, from proteins (to provide information for designing new and better drugs) and engineering components (such as a fan blade from an aero-engine) to the conservation of archeological artifacts (for example, Henry VIII's flagship, the Mary Rose).
There are more than 50 light sources across the world. With an energy of 3 GeV, Diamond is a medium energy synchrotron currently operating with 32 beamlines.
Design, construction and finance
The Diamond synchrotron is the largest UK-funded scientific facility to be built in the UK since the Nimrod proton synchrotron which was sited at the Rutherford Appleton Laboratory in 1964. Nearby facilities include the ISIS Neutron and Muon Source, the Central Laser Facility, and the laboratories at Harwell and Culham (including the Joint European Torus (JET) project). It replaced the Synchrotron Radiation Source, a second-generation synchrotron at the Daresbury Laboratory in Cheshire.
Diamond produced its first user beam towards the end of January 2007, and was formally opened by Queen Elizabeth II on 19 October 2007.
Construction
A design study during the 1990s was completed in 2001 by scientists at Daresbury and construction began following the creation of the operating company, Diamond Light Source Ltd.
The construction costs of £260m covered the synchrotron building, the accelerators inside it, the first seven experimental stations (beamlines) and the adjacent office block, Diamond House.
Governance
The facility is operated by Diamond Light Source Ltd, a joint venture company established in March 2002. The company receives 86% of its funding from the UK Government via the Science and Technology Facilities Council (STFC) and 14% from the Wellcome Trust.
Synchrotron
Diamond generates synchrotron light at wavelengths ranging from X-rays to the far infrared. This is also known as synchrotron radiation and is the electromagnetic radiation emitted by charged particles travelling near the speed of light when their path deviates from a straight line. It is used in a huge variety of experiments to study the structure and behaviour of many different types of matter.
The particles Diamond uses are electrons travelling at an energy of 3 GeV round a 561.6 m circumference storage ring. This is not a true circle, but a 48-sided polygon with a bending magnet at each vertex and straight sections in between. The bending magnets are dipole magnets whose magnetic field deflects the electrons so as to steer them around the ring. As Diamond is a third generation light source it also uses special arrays of magnets called insertion devices. These cause the electrons to undulate, and it is this repeated sudden change of direction that makes them emit an exceptionally bright beam of electromagnetic radiation, brighter than the light produced at a single bend in a bending magnet. This is the synchrotron light used for experiments. Some beamlines, however, use light solely from a bending magnet without the need for an insertion device.
The electrons reach this high energy via a series of pre-accelerator stages before being injected into the 3 GeV storage ring:
an electron gun – 90 keV
a 100 MeV linear accelerator
a 100 MeV – 3 GeV booster synchrotron (158 m in circumference).
The Diamond synchrotron is housed in a silver toroidal building of 738 m in circumference, covering an area in excess of 43,300 square metres, or the area of over six football pitches. This contains the storage ring and a number of beamlines, with the linear accelerator and booster synchrotron housed in the centre of the ring. These beamlines are the experimental stations where the synchrotron light's interaction with matter is used for research purposes. Seven beamlines were available when Diamond became operational in 2007, with more coming online as construction continued. As of April 2019 there were 32 beamlines in operation. Diamond is intended ultimately to host about 33 beamlines, supporting the life, physical and environmental sciences.
Diamond is also home to eleven electron microscopes. Nine of these are cryo-electron microscopes specialising in life sciences including two provided for industry use in partnership with Thermo Fisher Scientific; the remaining two microscopes are dedicated to research of advanced materials.
Case studies
In September 2007, scientists from Cardiff University, led by Tim Wess, found that the Diamond synchrotron could be used to reveal the hidden content of ancient documents without opening them, by illuminating and penetrating the layers of parchment.
In November 2010 data collected at Diamond by Imperial College London formed the basis for a paper in the journal Nature advancing the understanding of how HIV and other retroviruses infect human and animal cells. The findings may enable improvements in gene therapy to correct gene malfunctions.
In June 2011 data from Diamond led to an article in the journal Nature detailing the 3D structure of the human Histamine H1 receptor protein. This led to the development of 'third generation' anti-histamines, drugs effective against some allergies without adverse side-effects.
In December 2017, the UK established the Synchrotron Techniques for African Research and Technology (START) programme, with £3.7 million of funding from UK Research and Innovation over three years. START aimed to provide African researchers with access to the facility, with a focus on energy materials and structural biology. The step is crucial for the inception of the first African Light Source.
In work published in the Proceedings of the National Academy of Sciences in April 2018, a five-institution collaboration including scientists from Diamond used three of Diamond's macromolecular beamlines to discover details of how a bacterium uses plastic as an energy source. High-resolution data allowed the researchers to determine the workings of an enzyme that degrades the plastic PET. Subsequently, computational modelling was carried out to investigate and thus improve this mechanism.
An article published in Nature in 2019 described how a worldwide multidisciplinary collaboration designed several ways to control metal nano-particles, including synthesis at a substantially reduced cost for use as catalysts for the production of everyday goods.
Research conducted at Diamond Light Source in 2020 helped determine the atomic structure of SARS‑CoV‑2, the virus responsible for COVID-19.
In 2023, Diamond Light Source scanned the Herculaneum papyri including scroll PHerc. Paris. 4 to facilitate non-invasive decipherment through machine learning.
See also
List of synchrotron radiation facilities
Synchrotron Radiation Source (SRS)
European Synchrotron Radiation Facility (ESRF)
MAX IV
BESSY
DESY
SOLEIL
Canadian Light Source (CLS)
Elettra Synchrotron
The African Light Source (AfLS)
References
External links
Diamond: Britain's answer to the Large Hadron Collider Guardian article describing the machine and its applications
Physics research institutes
Research institutes in Oxfordshire
Science and Technology Facilities Council
Synchrotron radiation facilities
Vale of White Horse
Wellcome Trust | Diamond Light Source | [
"Materials_science"
] | 1,478 | [
"Materials testing",
"Synchrotron radiation facilities"
] |
436,166 | https://en.wikipedia.org/wiki/Variable-frequency%20oscillator | A variable frequency oscillator (VFO) in electronics is an oscillator whose frequency can be tuned (i.e., varied) over some range. It is a necessary component in any tunable radio transmitter and in receivers that work by the superheterodyne principle. The oscillator controls the frequency to which the apparatus is tuned.
Purpose
In a simple superheterodyne receiver, the incoming radio frequency signal (at frequency f_RF) from the antenna is mixed with the VFO output signal tuned to f_LO, producing an intermediate frequency (IF) signal that can be processed downstream to extract the modulated information. Depending on the receiver design, the IF signal frequency f_IF is chosen to be either the sum of the two frequencies at the mixer inputs (up-conversion), f_IF = f_RF + f_LO, or more commonly, the difference frequency (down-conversion), f_IF = |f_RF - f_LO|.
In addition to the desired IF signal and its unwanted image (the mixing product of opposite sign above), the mixer output will also contain the two original frequencies, f_RF and f_LO, and various harmonic combinations of the input signals. These undesired signals are rejected by the IF filter. If a double balanced mixer is employed, the input signals appearing at the mixer outputs are greatly attenuated, reducing the required complexity of the IF filter.
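As a concrete illustration of the mixing arithmetic above, the following sketch lists the products of an ideal mixer in a down-converting receiver together with the image frequency that must be rejected. All frequencies are illustrative assumptions, not values from the article.

```python
# Hedged sketch: output frequencies of an ideal mixer in a down-converting
# superheterodyne stage. The RF and IF values below are assumed for illustration.
f_rf = 14.200e6       # incoming radio frequency signal (Hz)
f_if = 455e3          # chosen intermediate frequency (Hz)
f_lo = f_rf - f_if    # VFO / local-oscillator frequency for down-conversion

# An ideal mixer produces the sum and difference of its two input frequencies;
# a real mixer also leaks the original inputs and harmonic combinations.
products = {
    "difference (wanted IF)": abs(f_rf - f_lo),
    "sum": f_rf + f_lo,
    "leak-through RF": f_rf,
    "leak-through LO": f_lo,
}

# The image frequency is the other input that would also land on f_if.
f_image = f_lo - f_if   # LO is below the RF here, so the image sits below the LO

for name, freq in products.items():
    print(f"{name}: {freq / 1e3:.1f} kHz")
print(f"image frequency: {f_image / 1e6:.4f} MHz")
```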
The advantage of using a VFO as a heterodyning oscillator is that only a small portion of the radio receiver (the sections before the mixer such as the preamplifier) need to have a wide bandwidth. The rest of the receiver can be finely tuned to the IF frequency.
In a direct-conversion receiver, the VFO is tuned to the same frequency as the incoming radio frequency, so f_IF = 0 Hz. Demodulation takes place at baseband using low-pass filters and amplifiers.
In a radio frequency (RF) transmitter, VFOs are often used to tune the frequency of the output signal, often indirectly through a heterodyning process similar to that described above. Other uses include chirp generators for radar systems where the VFO is swept rapidly through a range of frequencies, timing signal generation for oscilloscopes and time domain reflectometers, and variable frequency audio generators used in musical instruments and audio test equipment.
Types
There are two main types of VFO in use: analog and digital.
Analog VFOs
An analog VFO is an electronic oscillator where the value of at least one of the passive components is adjustable under user control so as to alter its output frequency.
The passive component whose value is adjustable is usually a capacitor, but could be a variable inductor.
Tuning capacitor
The variable capacitor is a mechanical device in which the separation of a series of interleaved metal plates is physically altered to vary its capacitance. Adjustment of this capacitor is sometimes facilitated by a mechanical step-down gearbox to achieve fine tuning.
Varactor
A reverse-biased semiconductor diode exhibits capacitance. Since the width of its non-conducting depletion region depends on the magnitude of the reverse bias voltage, this voltage can be used to control the junction capacitance. The varactor bias voltage may be generated in a number of ways, and the final design may need no significant moving parts.
Varactors have a number of disadvantages including temperature drift and aging, electronic noise, low Q factor and non-linearity.
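To make the tuning mechanism concrete, here is a minimal sketch of a varactor-tuned LC tank. The abrupt-junction capacitance model and every component value are illustrative assumptions rather than details from the article; the point is simply that increasing the reverse bias reduces the junction capacitance and raises the resonant frequency.

```python
import math

# Hedged sketch: how reverse bias tunes an LC oscillator through a varactor.
# Abrupt-junction model and all component values are assumed for illustration.
C_j0 = 50e-12   # zero-bias junction capacitance (F)
V_bi = 0.7      # built-in potential (V)
m = 0.5         # grading coefficient (0.5 for an abrupt junction)
L = 5e-6        # tank inductance (H)

def varactor_capacitance(v_reverse):
    """Junction capacitance at reverse bias v_reverse (abrupt-junction model)."""
    return C_j0 / (1.0 + v_reverse / V_bi) ** m

def resonant_frequency(c):
    """Resonant frequency of the LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * c))

for v in (1.0, 4.0, 9.0):
    c = varactor_capacitance(v)
    print(f"V_R = {v:4.1f} V -> C = {c * 1e12:5.1f} pF, f0 = {resonant_frequency(c) / 1e6:5.2f} MHz")
```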
Digital VFOs
Modern radio receivers and transmitters usually use some form of digital frequency synthesis to generate their VFO signal.
The advantages include smaller designs, lack of moving parts, the higher stability of set frequency reference oscillators, and the ease with which preset frequencies can be stored and manipulated in the digital computer that is usually embedded in the design in any case.
It is also possible for the radio to become extremely frequency-agile in that the control computer could alter the radio's tuned frequency many tens, thousands or even millions of times a second.
This capability allows communications receivers effectively to monitor many channels at once, perhaps using digital selective calling (DSC) techniques to decide when to open an audio output channel and alert users to incoming communications.
Pre-programmed frequency agility also forms the basis of some military radio encryption and stealth techniques.
Extreme frequency agility lies at the heart of spread spectrum techniques that have gained mainstream acceptance in computer wireless networking such as Wi-Fi.
There are disadvantages to digital synthesis such as the inability of a digital synthesiser to tune smoothly through all frequencies, but with the channelisation of many radio bands, this can also be seen as an advantage in that it prevents radios from operating in between two recognised channels.
Digital frequency synthesis relies on stable crystal controlled reference frequency sources. Crystal-controlled oscillators are more stable than inductively and capacitively controlled oscillators. Their disadvantage is that changing frequency (more than a small amount) requires changing the crystal, but frequency synthesizer techniques have made this unnecessary in modern designs.
Digital frequency synthesis
The electronic and digital techniques involved in this include:
Direct digital synthesis (DDS): Enough data points for a mathematical sine function are stored in digital memory. These are recalled at the right speed and fed to a digital-to-analog converter where the required sine wave is built up (a short sketch follows this list).
Direct frequency synthesis: Early channelized communication radios had multiple crystals, one for each channel on which they could operate. This approach was later combined with the basic ideas of heterodyning and mixing described under Purpose above: multiple crystals can be mixed in various combinations to produce various output frequencies.
Phase locked loop (PLL): Using a varactor-controlled or voltage-controlled oscillator (VCO) (described above under analog VFO techniques) and a phase detector, a control loop can be set up so that the VCO's output is frequency-locked to a crystal-controlled reference oscillator. The phase detector's comparison is made between the outputs of the two oscillators after frequency division by different divisors. Then, by altering the frequency-division divisor(s) under computer control, a variety of actual (undivided) VCO output frequencies can be generated. The PLL technique dominates most radio VFO designs today.
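Referring back to the DDS entry above, a minimal sketch of the technique follows. The table size, accumulator width, clock rate and output frequency are all illustrative assumptions: a phase accumulator is advanced by a tuning word on every clock tick, and its top bits index a sine lookup table whose samples would be fed to the DAC.

```python
import math

# Hedged sketch of direct digital synthesis (DDS). All sizes and frequencies
# below are illustrative assumptions, not details from the article.
TABLE_BITS = 8
TABLE_SIZE = 1 << TABLE_BITS
ACC_BITS = 24
sine_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

f_clock = 1_000_000   # accumulator clock (Hz)
f_out = 12_345        # desired output frequency (Hz)
tuning_word = round(f_out * (1 << ACC_BITS) / f_clock)

phase = 0
samples = []
for _ in range(32):                                  # generate a few output samples
    index = phase >> (ACC_BITS - TABLE_BITS)         # top accumulator bits address the table
    samples.append(sine_table[index])                # value that would go to the DAC
    phase = (phase + tuning_word) & ((1 << ACC_BITS) - 1)

# Frequency actually synthesised, limited by the accumulator resolution:
print(tuning_word * f_clock / (1 << ACC_BITS))
```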
Performance
The quality metrics for a VFO include frequency stability, phase noise and spectral purity. All of these factors tend to be inversely proportional to the tuning circuit's Q factor. Since in general the tuning range is also inversely proportional to Q, these performance factors generally degrade as the VFO's frequency range is increased.
Stability
Stability is the measure of how far a VFO's output frequency drifts with time and temperature. To mitigate this problem, VFOs are generally "phase locked" to a stable reference oscillator. PLLs use negative feedback to correct for the frequency drift of the VFO allowing for both wide tuning range and good frequency stability.
Repeatability
Ideally, for the same control input to the VFO, the oscillator should generate exactly the same frequency. A change in the calibration of the VFO can change receiver tuning calibration; periodic re-alignment of a receiver may be needed. VFOs used as part of a phase-locked loop frequency synthesizer have less stringent requirements, since the system is as stable as the crystal-controlled reference frequency.
Purity
A plot of a VFO's amplitude vs. frequency may show several peaks, probably harmonically related. Each of these peaks can potentially mix with some other incoming signal and produce a spurious response. These spurii (sometimes spelled spuriae) can result in increased noise or two signals detected where there should only be one. Additional components can be added to a VFO to suppress high-frequency parasitic oscillations, should these be present.
In a transmitter, these spurious signals are generated along with the one desired signal. Filtering may be required to ensure the transmitted signal meets regulations for bandwidth and spurious emissions.
Phase noise
When examined with very sensitive equipment, the pure sine-wave peak in a VFO's frequency graph will most likely turn out not to be sitting on a flat noise-floor. Slight random 'jitters' in the signal's timing will mean that the peak is sitting on 'skirts' of phase noise at frequencies either side of the desired one.
These are also troublesome in crowded bands. They allow through unwanted signals that are fairly close to the expected one, but because of the random quality of these phase-noise 'skirts', the signals are usually unintelligible, appearing just as extra noise in the received signal. The effect is that what should be a clean signal in a crowded band can appear to be a very noisy signal, because of the effects of strong signals nearby.
The effect of VFO phase noise on a transmitter is that random noise is actually transmitted either side of the required signal. Again, this must be avoided for legal reasons in many cases.
Frequency reference
Digital or digitally controlled oscillators typically rely on constant single frequency references, which can be made to a higher standard than semiconductor and LC circuit-based alternatives. Most commonly a quartz crystal based oscillator is used, although in high accuracy applications such as TDMA cellular networks, atomic clocks such as the Rubidium standard are as of 2018 also common.
Because of the stability of the reference used, digital oscillators themselves tend to be more stable and more repeatable in the long term. This in part explains their huge popularity in low-cost and computer-controlled VFOs. In the shorter term the imperfections introduced by digital frequency division and multiplication (jitter), and the susceptibility of the common quartz standard to acoustic shocks, temperature variation, aging, and even radiation, limit the applicability of a naïve digital oscillator.
This is why higher-end VFOs, such as RF transmitters locked to atomic time, tend to combine multiple different references, often in complex ways. Some references, like rubidium or cesium clocks, provide better long-term stability, while others, like hydrogen masers, yield lower short-term phase noise. Lower-frequency (and so lower-cost) oscillators phase-locked to a digitally divided version of the master clock then deliver the eventual VFO output, smoothing out the noise induced by the division algorithms. Such an arrangement can give the long-term stability and repeatability of an exact reference, the benefits of exact digital frequency selection, and good short-term stability, imparted even onto an arbitrary-frequency analogue waveform: the best of all worlds.
See also
Numerically controlled oscillator
Resonance
Tuner (radio)
References
Electronic oscillators
Communication circuits
Radio electronics
Electronic design
Wireless tuning and filtering | Variable-frequency oscillator | [
"Engineering"
] | 2,210 | [
"Radio electronics",
"Wireless tuning and filtering",
"Telecommunications engineering",
"Electronic design",
"Electronic engineering",
"Design",
"Communication circuits"
] |
437,619 | https://en.wikipedia.org/wiki/Shear%20stress | Shear stress (often denoted by , Greek: tau) is the component of stress coplanar with a material cross section. It arises from the shear force, the component of force vector parallel to the material cross section. Normal stress, on the other hand, arises from the force vector component perpendicular to the material cross section on which it acts.
General shear stress
The formula to calculate average shear stress, or force per unit area, is:
τ = F / A
where F is the force applied and A is the cross-sectional area.
The area involved corresponds to the material face parallel to the applied force vector, i.e., with surface normal vector perpendicular to the force.
Other forms
Wall shear stress
Wall shear stress expresses the retarding force (per unit area) from a wall in the layers of a fluid flowing next to the wall. It is defined as:
τ_w = μ (∂u/∂y)|_(y=0)
where μ is the dynamic viscosity, u is the flow velocity, and y is the distance from the wall.
It is used, for example, in the description of arterial blood flow, where there is evidence that it affects the atherogenic process.
Pure
Pure shear stress is related to pure shear strain, denoted γ, by the equation
τ = γ G
where G is the shear modulus of the isotropic material, given by
G = E / (2(1 + ν)).
Here, E is Young's modulus and ν is Poisson's ratio.
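A short numerical illustration of the relations above, using assumed values roughly representative of structural steel; the strain is likewise an illustrative assumption.

```python
# Hedged numeric illustration of the pure-shear relations. Material values
# (approximately structural steel) and the strain are assumptions.
E = 200e9        # Young's modulus (Pa)
nu = 0.3         # Poisson's ratio
gamma = 0.001    # pure shear strain (dimensionless)

G = E / (2 * (1 + nu))   # shear modulus of an isotropic material
tau = G * gamma          # pure shear stress

print(f"G = {G / 1e9:.1f} GPa, tau = {tau / 1e6:.1f} MPa")
```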
Beam shear
Beam shear is defined as the internal shear stress of a beam caused by the shear force applied to the beam:
τ = V Q / (I t)
where V is the total shear force at the location in question, Q is the first moment of area above (or below) the point where the stress is evaluated, I is the second moment of area of the entire cross section, and t is the thickness of the material perpendicular to the shear.
The beam shear formula is also known as Zhuravskii shear stress formula after Dmitrii Ivanovich Zhuravskii, who derived it in 1855.
Semi-monocoque shear
Shear stresses within a semi-monocoque structure may be calculated by idealizing the cross-section of the structure into a set of stringers (carrying only axial loads) and webs (carrying only shear flows). Dividing the shear flow by the thickness of a given portion of the semi-monocoque structure yields the shear stress. Thus, the maximum shear stress will occur either in the web of maximum shear flow or minimum thickness.
Constructions in soil can also fail due to shear; e.g., the weight of an earth-filled dam or dike may cause the subsoil to collapse, like a small landslide.
Impact shear
The maximum shear stress created in a solid round bar subject to impact is given by the equation
τ = 2 √(U G / V)
where U is the change in kinetic energy, G is the shear modulus, and V is the volume of the rod.
Furthermore,
U = U_rotating + U_applied,
where U_rotating = (1/2) I ω² is the rotational kinetic energy and U_applied = T θ_displaced is the work done by the applied torque, with I the mass moment of inertia, ω the angular speed, T the applied torque, and θ_displaced the angular displacement.
Shear stress in fluids
Any real fluids (liquids and gases included) moving along a solid boundary will incur a shear stress at that boundary. The no-slip condition dictates that the speed of the fluid at the boundary (relative to the boundary) is zero; although at some height from the boundary, the flow speed must equal that of the fluid. The region between these two points is named the boundary layer. For all Newtonian fluids in laminar flow, the shear stress is proportional to the strain rate in the fluid, where the viscosity is the constant of proportionality. For non-Newtonian fluids, the viscosity is not constant. The shear stress is imparted onto the boundary as a result of this loss of velocity.
For a Newtonian fluid, the shear stress at a surface element parallel to a flat plate at the point y is given by
τ(y) = μ ∂u/∂y
where μ is the dynamic viscosity, u is the flow velocity along the boundary, and y is the height above the boundary.
Specifically, the wall shear stress is defined as
τ_w ≡ τ(y = 0) = μ (∂u/∂y)|_(y=0).
Newton's constitutive law, for any general geometry (including the flat plate mentioned above), states that the shear tensor (a second-order tensor) is proportional to the flow velocity gradient (the velocity is a vector, so its gradient is a second-order tensor):
τ(u) = μ ∇u.
The constant of proportionality is named the dynamic viscosity. For an isotropic Newtonian flow it is a scalar, while for anisotropic Newtonian flows it can be a second-order tensor. The fundamental aspect is that for a Newtonian fluid the dynamic viscosity is independent of flow velocity (i.e., the shear stress constitutive law is linear), while for non-Newtonian flows this is not true, and one should allow for the modification
τ(u) = μ(u) ∇u.
This is no longer Newton's law but a generic tensorial identity: one can always find an expression of the viscosity as a function of the flow velocity given any expression of the shear stress as a function of the flow velocity. On the other hand, given a shear stress as a function of the flow velocity, it represents a Newtonian flow only if it can be expressed as a constant multiplying the gradient of the flow velocity. The constant one finds in this case is the dynamic viscosity of the flow.
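As a hedged illustration of evaluating the wall shear stress of a Newtonian fluid, the sketch below assumes plane Couette flow (a linear velocity profile between a fixed and a moving plate) and water-like properties; none of the numbers come from the article.

```python
# Hedged sketch: wall shear stress from a known laminar velocity profile.
# Plane Couette flow and water-like viscosity are assumed for illustration.
mu = 1.0e-3      # dynamic viscosity (Pa*s), roughly water at 20 degC
U_plate = 0.5    # speed of the moving plate (m/s)
h = 2.0e-3       # gap between the plates (m)

# In plane Couette flow the velocity varies linearly, u(y) = U_plate * y / h,
# so the velocity gradient at the wall is simply U_plate / h.
dudy_wall = U_plate / h
tau_wall = mu * dudy_wall    # tau_w = mu * (du/dy) evaluated at y = 0

print(f"wall shear stress = {tau_wall:.3f} Pa")
```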
Example
Considering a 2D space in Cartesian coordinates (x, y), with flow velocity components (u, v) respectively, a shear stress matrix that can be written as the product of a viscosity tensor and the flow velocity gradient represents a Newtonian flow, provided that the viscosity tensor, even if it is nonuniform (depends on the space coordinates) and transient, is independent of the flow velocity. Such a flow is therefore Newtonian. On the other hand, a flow in which the viscosity depends on the flow velocity is non-Newtonian. If the viscosity matrix of such a non-Newtonian flow is proportional to the identity matrix, the flow is isotropic and the viscosity is simply a scalar.
Measurement with sensors
Diverging fringe shear stress sensor
This relationship can be exploited to measure the wall shear stress. If a sensor could directly measure the gradient of the velocity profile at the wall, then multiplying by the dynamic viscosity would yield the shear stress. Such a sensor was demonstrated by A. A. Naqwi and W. C. Reynolds. The interference pattern generated by sending a beam of light through two parallel slits forms a network of linearly diverging fringes that seem to originate from the plane of the two slits (see double-slit experiment). As a particle in a fluid passes through the fringes, a receiver detects the reflection of the fringe pattern. The signal can be processed, and from the fringe angle, the height and velocity of the particle can be extrapolated. The measured value of the wall velocity gradient is independent of the fluid properties, and as a result does not require calibration. Recent advancements in the micro-optic fabrication technologies have made it possible to use integrated diffractive optical elements to fabricate diverging fringe shear stress sensors usable both in air and liquid.
Micro-pillar shear-stress sensor
A further measurement technique is that of slender wall-mounted micro-pillars made of the flexible polymer polydimethylsiloxane, which bend in reaction to the drag forces applied in the vicinity of the wall. The sensor thereby belongs to the indirect measurement principles, relying on the relationship between near-wall velocity gradients and the local wall shear stress.
Electro-diffusional method
The electro-diffusional method measures the wall shear rate in the liquid phase from microelectrodes under limiting diffusion current conditions. A potential difference between an anode of broad surface (usually located far from the measuring area) and the small working electrode acting as a cathode leads to a fast redox reaction. The ion disappearance occurs only on the microprobe active surface, causing the development of a diffusion boundary layer, in which the fast electro-diffusion reaction rate is controlled only by diffusion. Solving the convective-diffusive equation in the near-wall region of the microelectrode leads to analytical solutions that rely on the characteristic length of the microprobes, the diffusional properties of the electrochemical solution, and the wall shear rate.
See also
Critical resolved shear stress
Direct shear test
Friction
Shear and moment diagrams
Shear rate
Shear strain
Shear strength
Tensile stress
Triaxial shear test
References
Continuum mechanics
Shear strength
Mechanical quantities | Shear stress | [
"Physics",
"Mathematics",
"Engineering"
] | 1,597 | [
"Structural engineering",
"Mechanical quantities",
"Physical quantities",
"Continuum mechanics",
"Quantity",
"Shear strength",
"Classical mechanics",
"Mechanics",
"Mechanical engineering"
] |
437,861 | https://en.wikipedia.org/wiki/Atomic%20spectroscopy | In physics, atomic spectroscopy is the study of the electromagnetic radiation absorbed and emitted by atoms. Since unique elements have unique emission spectra, atomic spectroscopy is applied for determination of elemental compositions. It can be divided by atomization source or by the type of spectroscopy used. In the latter case, the main division is between optical and mass spectrometry. Mass spectrometry generally gives significantly better analytical performance, but is also significantly more complex. This complexity translates into higher purchase costs, higher operational costs, more operator training, and a greater number of components that can potentially fail. Because optical spectroscopy is often less expensive and has performance adequate for many tasks, it is far more common. Atomic absorption spectrometers are one of the most commonly sold and used analytical devices.
Atomic spectroscopy
Electrons exist in energy levels (i.e. atomic orbitals) within an atom. Atomic orbitals are quantized, meaning they exist as defined values instead of being continuous (see: atomic orbitals). Electrons may move between orbitals, but in doing so they must absorb or emit energy equal to the energy difference between their atom's specific quantized orbital energy levels. In optical spectroscopy, energy absorbed to move an electron to a higher energy level (higher orbital) and/or the energy emitted as the electron moves to a lower energy level is absorbed or emitted in the form of photons (light particles). Because each element has a unique number of electrons, an atom will absorb/release energy in a pattern unique to its elemental identity (e.g. Ca, Na, etc.) and thus will absorb/emit photons in a correspondingly unique pattern. The type of atoms present in a sample, or the amount of atoms present in a sample can be deduced from measuring these changes in light wavelength and light intensity.
Atomic spectroscopy is further divided into atomic absorption spectroscopy and atomic emission spectroscopy. In atomic absorption spectroscopy, light of a predetermined wavelength is passed through a collection of atoms. If the wavelength of the source light has energy corresponding to the energy difference between two energy levels in the atoms, a portion of the light will be absorbed. The difference between the intensity of the light emitted from the source (e.g., lamp) and the light collected by the detector yields an absorbance value. This absorbance value can then be used to determine the concentration of a given element (or atoms) within the sample. The relationship between the concentration of atoms, the distance the light travels through the collection of atoms, and the portion of the light absorbed is given by the Beer–Lambert law. In atomic emission spectroscopy, the intensity of the emitted light is directly proportional to the concentration of atoms.
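A minimal sketch of the Beer–Lambert relationship mentioned above, turning a measured absorbance into a concentration. The molar absorptivity, path length, and intensities are illustrative assumptions, not data from the article.

```python
import math

# Hedged sketch of a Beer-Lambert calculation, A = epsilon * l * c, as used in
# atomic absorption spectroscopy. All numbers below are assumed for illustration.
I0 = 1.00          # source intensity (arbitrary units)
I = 0.62           # intensity reaching the detector
epsilon = 4.5e3    # molar absorptivity of the analyte (L mol^-1 cm^-1), assumed
path_length = 1.0  # optical path length through the atom cloud (cm), assumed

absorbance = -math.log10(I / I0)                       # A = -log10(I / I0)
concentration = absorbance / (epsilon * path_length)   # c = A / (epsilon * l)

print(f"A = {absorbance:.3f}, c = {concentration:.2e} mol/L")
```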
Ion and atom sources
Sources can be adapted in many ways, but the lists below give the general uses of a number of sources. Of these, flames are the most common due to their low cost and their simplicity. Although significantly less common, inductively-coupled plasmas, especially when used with mass spectrometers, are recognized for their outstanding analytical performance and their versatility.
For all atomic spectroscopy, a sample must be vaporized and atomized. For atomic mass spectrometry, a sample must also be ionized. Vaporization, atomization, and ionization are often, but not always, accomplished with a single source. Alternatively, one source may be used to vaporize a sample while another is used to atomize (and possibly ionize). An example of this is laser ablation inductively-coupled plasma atomic emission spectrometry, where a laser is used to vaporize a solid sample and an inductively-coupled plasma is used to atomize the vapor.
With the exception of flames and graphite furnaces, which are most commonly used for atomic absorption spectroscopy, most sources are used for atomic emission spectroscopy.
Liquid-sampling sources include flames and sparks (atom source), inductively-coupled plasma (atom and ion source), graphite furnace (atom source), microwave plasma (atom and ion source), and direct-current plasma (atom and ion source). Solid-sampling sources include lasers (atom and vapor source), glow discharge (atom and ion source), arc (atom and ion source), spark (atom and ion source), and graphite furnace (atom and vapor source). Gas-sampling sources include flame (atom source), inductively-coupled plasma (atom and ion source), microwave plasma (atom and ion source), direct-current plasma (atom and ion source), and glow discharge (atom and ion source).
Selection Rules
For any given atom, there are quantum numbers that can specify the wavefunction of that atom. Using the hydrogen atom as an example, four quantum numbers are required to fully describe the state of the system. Quantum numbers associated with operators that commute with the Hamiltonian correspond to conserved physical aspects of the system, and are called "good" quantum numbers for this reason. Once good quantum numbers have been found for a given atomic transition, the selection rules determine which changes in quantum numbers are allowed.
The electric dipole (E1) transition of a hydrogen atom can be described with the quantum numbers l (orbital angular momentum quantum number), ml (magnetic quantum number), ms (electron spin quantum number), and n (principal quantum number). When evaluating the effect of the electric dipole moment operator μ on the wavefunction of the system, the resulting transition moment is zero except when the changes in the quantum numbers follow a specific pattern.
For example, in an E1 transition, unless Δl = ±1, Δml = 0 or ±1, Δms = 0, and Δn = any integer, the transition moment is zero and the transition is known as a "forbidden transition". This would occur, for instance, when Δl = 2; such a transition is not allowed and is therefore much weaker than an allowed transition. These specific values for the changes in quantum numbers are known as the selection rules for the allowed transitions.
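The E1 selection rules quoted above can be expressed as a small check, sketched below for illustration (Δn is unrestricted and therefore not tested).

```python
# Hedged sketch: checking the E1 selection rules quoted above for a candidate
# transition described by its changes in quantum numbers.
def e1_allowed(delta_l, delta_ml, delta_ms):
    """Return True if the electric dipole (E1) selection rules are satisfied.

    delta_n may be any integer, so it is not checked here.
    """
    return (abs(delta_l) == 1 and
            delta_ml in (-1, 0, 1) and
            delta_ms == 0)

# Example: a 2p -> 1s transition (delta_l = -1) is allowed,
# while a 3d -> 1s transition (delta_l = -2) is forbidden.
print(e1_allowed(delta_l=-1, delta_ml=0, delta_ms=0))   # True
print(e1_allowed(delta_l=-2, delta_ml=0, delta_ms=0))   # False
```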
See also
Cold vapour atomic fluorescence spectroscopy
Atomic spectral line
References
External links
Prospects in Analytical Atomic Spectrometry – tendencies in five main branches of atomic spectrometry (absorption, emission, mass, fluorescence and ionization spectrometry)
Learning by Simulations – various atomic absorption and emission spectra
Atomic Spectroscopy: A Compendium of Basic Ideas, Notation, Data, and Formulas
Spectroscopy | Atomic spectroscopy | [
"Physics",
"Chemistry"
] | 1,356 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
438,040 | https://en.wikipedia.org/wiki/Cabaret%20Mechanical%20Theatre | Cabaret Mechanical Theatre is an English organisation that mounts exhibitions around the world of contemporary automata by a collective of artists. Founded by Sue Jackson, the group played a central role in the revival of automata from the 1970s onwards,
and Jackson championed the idea of automata as a form of contemporary art.
Cabaret Mechanical Theatre was started in 1979 in Falmouth, Cornwall, where Jackson encouraged local artists Peter Markey, Paul Spooner and Ron Fuller to manufacture automata for her craft shop, "Cabaret". The shop became an exhibition space and the collection moved to Covent Garden, central London, in 1984, remaining there until 2000
when rising rates forced it to close.
Part of the Cabaret Mechanical Theatre collection can be seen at the American Visionary Arts Museum in Baltimore, Maryland. Three machines made by Tim Hunkin, which were formerly part of the Covent Garden display, have been moved to Hunkin's own exhibition Novelty Automation in Holborn, London.
References
External links
Cabaret Mechanical Theatre website
Cabaret Mechanical Theatre on YouTube
1979 establishments in England
Arts organizations established in 1979
Arts organisations based in the United Kingdom
Art exhibitions in the United Kingdom
English contemporary art
Traveling exhibits
Automata (mechanical) | Cabaret Mechanical Theatre | [
"Engineering"
] | 240 | [
"Automata (mechanical)",
"Automation"
] |
438,602 | https://en.wikipedia.org/wiki/Rossby%20wave | Rossby waves, also known as planetary waves, are a type of inertial wave naturally occurring in rotating fluids. They were first identified by Sweden-born American meteorologist Carl-Gustaf Arvid Rossby in the Earth's atmosphere in 1939. They are observed in the atmospheres and oceans of Earth and other planets, owing to the rotation of Earth or of the planet involved. Atmospheric Rossby waves on Earth are giant meanders in high-altitude winds that have a major influence on weather. These waves are associated with pressure systems and the jet stream (especially around the polar vortices). Oceanic Rossby waves move along the thermocline: the boundary between the warm upper layer and the cold deeper part of the ocean.
Rossby wave types
Atmospheric waves
Atmospheric Rossby waves result from the conservation of potential vorticity and are influenced by the Coriolis force and pressure gradient. The image on the left sketches fundamental principles of the wave, e.g., its restoring force and westward phase velocity. The rotation causes fluids to turn to the right as they move in the northern hemisphere and to the left in the southern hemisphere. For example, a fluid that moves from the equator toward the north pole will deviate toward the east; a fluid moving toward the equator from the north will deviate toward the west. These deviations are caused by the Coriolis force and conservation of potential vorticity which leads to changes of relative vorticity. This is analogous to conservation of angular momentum in mechanics. In planetary atmospheres, including Earth, Rossby waves are due to the variation in the Coriolis effect with latitude.
One can identify a terrestrial Rossby wave as its phase velocity, marked by its wave crest, always has a westward component. However, the collected set of Rossby waves may appear to move in either direction with what is known as its group velocity. In general, shorter waves have an eastward group velocity and long waves a westward group velocity.
The terms "barotropic" and "baroclinic" are used to distinguish the vertical structure of Rossby waves. Barotropic Rossby waves do not vary in the vertical, and have the fastest propagation speeds. The baroclinic wave modes, on the other hand, do vary in the vertical. They are also slower, with speeds of only a few centimeters per second or less.
Most investigations of Rossby waves have been done on those in Earth's atmosphere.
Rossby waves in the Earth's atmosphere are easy to observe as (usually 4–6) large-scale meanders of the jet stream. When these deviations become very pronounced, masses of cold or warm air detach, and become low-strength cyclones and anticyclones, respectively, and are responsible for day-to-day weather patterns at mid-latitudes. The action of Rossby waves partially explains why eastern continental edges in the Northern Hemisphere, such as the Northeast United States and Eastern Canada, are colder than Western Europe at the same latitudes, and why the Mediterranean is dry during summer (Rodwell–Hoskins mechanism).
Poleward-propagating atmospheric waves
Deep convection (heat transfer) to the troposphere is enhanced over very warm sea surfaces in the tropics, such as during El Niño events. This tropical forcing generates atmospheric Rossby waves that have a poleward and eastward migration.
Poleward-propagating Rossby waves explain many of the observed statistical connections between low- and high-latitude climates. One such phenomenon is sudden stratospheric warming. Poleward-propagating Rossby waves are an important and unambiguous part of the variability in the Northern Hemisphere, as expressed in the Pacific North America pattern. Similar mechanisms apply in the Southern Hemisphere and partly explain the strong variability in the Amundsen Sea region of Antarctica. In 2011, a Nature Geoscience study using general circulation models linked Pacific Rossby waves generated by increasing central tropical Pacific temperatures to warming of the Amundsen Sea region, leading to winter and spring continental warming of Ellsworth Land and Marie Byrd Land in West Antarctica via an increase in advection.
Rossby waves on other planets
Atmospheric Rossby waves, like Kelvin waves, can occur on any rotating planet with an atmosphere. The Y-shaped cloud feature on Venus is attributed to Kelvin and Rossby waves.
Oceanic waves
Oceanic Rossby waves are large-scale waves within an ocean basin. They have a low amplitude, in the order of centimetres (at the surface) to metres (at the thermocline), compared with atmospheric Rossby waves which are in the order of hundreds of kilometres. They may take months to cross an ocean basin. They gain momentum from wind stress at the ocean surface layer and are thought to communicate climatic changes due to variability in forcing, due to both the wind and buoyancy. Off-equatorial Rossby waves are believed to propagate through eastward-propagating Kelvin waves that upwell against Eastern Boundary Currents, while equatorial Kelvin waves are believed to derive some of their energy from the reflection of Rossby waves against Western Boundary Currents.
Both barotropic and baroclinic waves cause variations of the sea surface height, although the length of the waves made them difficult to detect until the advent of satellite altimetry. Satellite observations have confirmed the existence of oceanic Rossby waves.
Baroclinic waves also generate significant displacements of the oceanic thermocline, often of tens of meters. Satellite observations have revealed the stately progression of Rossby waves across all the ocean basins, particularly at low- and mid-latitudes. Due to the beta effect, transit times of Rossby waves increase with latitude. In a basin like the Pacific, waves travelling at the equator may take months, while closer to the poles transit may take decades.
Rossby waves have been suggested as an important mechanism to account for the heating of the ocean on Europa, a moon of Jupiter.
Waves in astrophysical discs
Rossby wave instabilities are also thought to be found in astrophysical discs, for example, around newly forming stars.
Amplification of Rossby waves
It has been proposed that a number of regional weather extremes in the Northern Hemisphere associated with blocked atmospheric circulation patterns may have been caused by quasiresonant amplification of Rossby waves. Examples include the 2013 European floods, the 2012 China floods, the 2010 Russian heat wave, the 2010 Pakistan floods and the 2003 European heat wave. Even taking global warming into account, the 2003 heat wave would have been highly unlikely without such a mechanism.
Normally freely travelling synoptic-scale Rossby waves and quasistationary planetary-scale Rossby waves exist in the mid-latitudes with only weak interactions. The hypothesis, proposed by Vladimir Petoukhov, Stefan Rahmstorf, Stefan Petri, and Hans Joachim Schellnhuber, is that under some circumstances these waves interact to produce the static pattern. For this to happen, they suggest, the zonal (east-west) wave number of both types of wave should be in the range 6–8, the synoptic waves should be arrested within the troposphere (so that energy does not escape to the stratosphere) and mid-latitude waveguides should trap the quasistationary components of the synoptic waves. In this case the planetary-scale waves may respond unusually strongly to orography and thermal sources and sinks because of "quasiresonance".
A 2017 study by Mann, Rahmstorf, et al. connected the phenomenon of anthropogenic Arctic amplification to planetary wave resonance and extreme weather events.
Mathematical definitions
Free barotropic Rossby waves under a zonal flow with linearized vorticity equation
To start with, a zonal mean flow, U, can be considered to be perturbed, where U is constant in time and space. Let the total horizontal wind field be (u, v), where u and v are the components of the wind in the x- and y-directions, respectively. The total wind field can be written as a mean flow, U, with a small superimposed perturbation, u′ and v′:
u = U + u′,  v = v′
The perturbation is assumed to be much smaller than the mean zonal flow.
The relative vorticity ζ′ and the perturbations u′ and v′ can be written in terms of the stream function ψ (assuming non-divergent flow, for which the stream function completely describes the flow):
u′ = -∂ψ/∂y,  v′ = ∂ψ/∂x,  ζ′ = ∇²ψ
Considering a parcel of air that has no relative vorticity before perturbation (uniform U has no vorticity) but with planetary vorticity f as a function of the latitude, perturbation will lead to a slight change of latitude, so the perturbed relative vorticity must change in order to conserve potential vorticity. Also the above approximation U ≫ u′ ensures that the perturbation flow does not advect relative vorticity.
This gives the linearized vorticity equation
(∂/∂t + U ∂/∂x) ζ′ + β v′ = 0,
with β = ∂f/∂y, the meridional gradient of the planetary vorticity. Plug in the definition of the stream function to obtain:
(∂/∂t + U ∂/∂x) ∇²ψ + β ∂ψ/∂x = 0
Using the method of undetermined coefficients one can consider a traveling wave solution with zonal and meridional wavenumbers k and ℓ, respectively, and frequency ω:
ψ = ψ0 cos(kx + ℓy - ωt)
This yields the dispersion relation:
ω = Uk - βk / (k² + ℓ²)
The zonal (x-direction) phase speed and group velocity of the Rossby wave are then given by
c = ω/k = U - β / (k² + ℓ²),
cg = ∂ω/∂k = U - β (ℓ² - k²) / (k² + ℓ²)²,
where c is the phase speed, cg is the group speed, U is the mean westerly flow, β is the Rossby parameter, k is the zonal wavenumber, and ℓ is the meridional wavenumber. It is noted that the zonal phase speed of Rossby waves is always westward (traveling east to west) relative to mean flow U, but the zonal group speed of Rossby waves can be eastward or westward depending on wavenumber.
Rossby parameter
The Rossby parameter is defined as the rate of change of the Coriolis frequency f = 2ω sin φ along the meridional direction:
β = ∂f/∂y = (1/a) d(2ω sin φ)/dφ = 2ω cos φ / a
where φ is the latitude, ω is the angular speed of the Earth's rotation, and a is the mean radius of the Earth.
If β = 0, there will be no Rossby waves; Rossby waves owe their origin to the gradient of the tangential speed of the planetary rotation (planetary vorticity). A "cylinder" planet has no Rossby waves. It also means that at the equator of any rotating, sphere-like planet, including Earth, one will still have Rossby waves, despite the fact that f = 0, because β > 0 there. These are known as Equatorial Rossby waves.
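A short numerical illustration of the formulas above: it evaluates the Rossby parameter at 45° latitude and then the zonal phase and group speeds for a mid-latitude wave. The mean flow and wavelengths are illustrative assumptions, not values from the article.

```python
import math

# Hedged numeric illustration of the Rossby parameter and dispersion relation.
# The mean flow and the wavelengths are assumed for illustration.
Omega = 7.2921e-5     # Earth's rotation rate (rad/s)
a = 6.371e6           # mean Earth radius (m)
lat = math.radians(45.0)
beta = 2.0 * Omega * math.cos(lat) / a    # Rossby parameter (m^-1 s^-1)

U = 15.0                                  # mean zonal flow (m/s), assumed
k = 2.0 * math.pi / 6.0e6                 # zonal wavenumber (6000 km wavelength)
l = 2.0 * math.pi / 3.0e6                 # meridional wavenumber (3000 km wavelength)
K2 = k * k + l * l

c = U - beta / K2                               # zonal phase speed (westward relative to U)
cg = U - beta * (l * l - k * k) / (K2 * K2)     # zonal group speed

print(f"beta = {beta:.2e} m^-1 s^-1")
print(f"phase speed c = {c:.2f} m/s, group speed cg = {cg:.2f} m/s")
```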
See also
Atmospheric wave
Equatorial wave
Equatorial Rossby wave – mathematical treatment
Harmonic
Kelvin wave
Polar vortex
Rossby whistle
References
Bibliography
External links
Description of Rossby Waves from the American Meteorological Society
An introduction to oceanic Rossby waves and their study with satellite data
Rossby waves and extreme weather (Video)
Physical oceanography
Atmospheric dynamics
Fluid mechanics
Waves | Rossby wave | [
"Physics",
"Chemistry",
"Engineering"
] | 2,213 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Atmospheric dynamics",
"Waves",
"Motion (physics)",
"Civil engineering",
"Physical oceanography",
"Fluid mechanics",
"Fluid dynamics"
] |
439,171 | https://en.wikipedia.org/wiki/Bergmann%27s%20rule | Bergmann's rule is an ecogeographical rule that states that, within a broadly distributed taxonomic clade, populations and species of larger size are found in colder environments, while populations and species of smaller size are found in warmer regions. The rule derives from the relationship between size in linear dimensions meaning that both height and volume will increase in colder environments. Bergmann's rule only describes the overall size of the animals, but does not include body proportions like Allen's rule does.
Although originally formulated in relation to species within a genus, it has often been recast in relation to populations within a species. It is also often cast in relation to latitude. It is possible that the rule also applies to some plants, such as Rapicactus.
The rule is named after nineteenth century German biologist Carl Bergmann, who described the pattern in 1847, although he was not the first to notice it. Bergmann's rule is most often applied to mammals and birds which are endotherms, but some researchers have also found evidence for the rule in studies of ectothermic species, such as the ant Leptothorax acervorum. While Bergmann's rule appears to hold true for many mammals and birds, there are exceptions.
Larger-bodied animals tend to conform more closely to Bergmann's rule than smaller-bodied animals, at least up to certain latitudes. This perhaps reflects a reduced ability to avoid stressful environments, such as by burrowing. In addition to being a general pattern across space, Bergmann's rule has been reported in populations over historical and evolutionary time when exposed to varying thermal regimes. In particular, temporary, reversible dwarfing of mammals has been noted during two relatively brief upward excursions in temperature during the Paleogene: the Paleocene-Eocene thermal maximum and the Eocene Thermal Maximum 2.
Examples
Humans
Human populations near the poles, including the Inuit, Aleut, and Sami people, are on average heavier than populations from mid-latitudes, consistent with Bergmann's rule. They also tend to have shorter limbs and broader trunks, consistent with Allen's rule. According to Marshall T. Newman in 1953, Native American populations are generally consistent with Bergmann's rule although the cold climate and small body size combination of the Eastern Inuit, Canoe Nation, Yuki people, Andes natives and Harrison Lake Lillooet runs contrary to the expectations of Bergmann's rule. Newman contends that Bergmann's rule holds for the populations of Eurasia, but it does not hold for those of sub-Saharan Africa.
Human populations also show a decrease in stature with an increase in mean annual temperature. Bergmann's rule holds for Africans with the pygmy phenotype and other pygmy peoples. These populations show a shorter stature and smaller body size due to an adaptation to hotter and more humid environments. With elevated environmental humidity, evaporative cooling (sweating) is a less effective way to dissipate body heat, but a higher surface area to volume ratio should provide a slight advantage through passive convective heat loss.
Birds
A 2019 study of changes in the morphology of migratory birds used bodies of birds which had collided with buildings in Chicago from 1978 to 2016. The length of birds' lower leg bones (an indicator of body size) shortened by an average of 2.4% and their wings lengthened by 1.3%. A similar study published in 2021 used measurements of 77 nonmigratory bird species captured live for banding in lowland Amazon rainforest. Between 1979 and 2019, all study species have gotten smaller on average, by up to 2% per decade. The morphological changes are regarded as resulting from global warming, and may demonstrate an example of evolutionary change following Bergmann's rule.
Reptiles
Bergmann's rule has been reported to be vaguely followed by female crocodilians. However, for turtles or lizards the rule's validity has not been supported.
Invertebrates
Evidence of Bergmann's rule has been found in marine copepods.
Plants
Bergmann's rule cannot generally be applied to plants. Regarding Cactaceae, the case of the saguaro (Carnegiea gigantea), once described as "a botanical Bergmann trend", has instead been shown to depend on rainfall, particularly winter precipitation, and not temperature. Members of the genus Rapicactus are larger in cooler environments, as their stem diameter increases with altitude and particularly with latitude. However, since Rapicactus grow in a distributional area in which average precipitation tends to diminish at higher latitudes, and their body size is not conditioned by climatic variables, this could suggest a possible Bergmann trend.
Explanations
The earliest explanation, given by Bergmann when originally formulating the rule, is that larger animals have a lower surface area to volume ratio than smaller animals, so they radiate less body heat per unit of mass, and therefore stay warmer in cold climates. Warmer climates impose the opposite problem: body heat generated by metabolism needs to be dissipated quickly rather than stored within.
Thus, the higher surface area-to-volume ratio of smaller animals in hot and dry climates facilitates heat loss through the skin and helps cool the body. When analyzing Bergmann's rule in the field, the groups of populations being studied occupy different thermal environments, and have also been separated long enough to genetically differentiate in response to these thermal conditions. The relationship between stature and mean annual temperature can be explained by modeling any shape that is increasing in any dimension: as the height of a shape increases, its surface area-to-volume ratio decreases. Modeling a person's trunk and limbs as cylinders shows a 17% decrease in surface area-to-volume ratio from a person who is five feet tall to a person who is six feet tall, even at the same body mass index (BMI).
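A hedged sketch of the surface area-to-volume argument, using a single cylinder scaled at constant BMI. This is a simplification of the trunk-and-limbs model quoted above, so it illustrates the direction of the effect rather than the 17% figure; all dimensions are illustrative assumptions.

```python
import math

# Hedged sketch: surface area-to-volume ratio of a single cylinder "body"
# scaled from 5 ft to 6 ft at constant BMI (mass ~ height^2, so volume ~ height^2
# and radius ~ sqrt(height)). All dimensions are assumed for illustration.
def sa_to_v(height_m, radius_m):
    """Surface area-to-volume ratio of a cylinder, including end caps."""
    area = 2 * math.pi * radius_m * height_m + 2 * math.pi * radius_m ** 2
    volume = math.pi * radius_m ** 2 * height_m
    return area / volume

h_short, r_short = 1.524, 0.15                     # 5 ft tall, assumed 15 cm radius
h_tall = 1.829                                     # 6 ft tall
r_tall = r_short * math.sqrt(h_tall / h_short)     # constant-BMI scaling of the radius

ratio_short = sa_to_v(h_short, r_short)
ratio_tall = sa_to_v(h_tall, r_tall)
print(f"5 ft: {ratio_short:.3f} m^-1, 6 ft: {ratio_tall:.3f} m^-1")
print(f"decrease: {100 * (1 - ratio_tall / ratio_short):.1f} %")
```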
In marine crustaceans, it has been proposed that an increase in size with latitude is observed because decreasing temperature results in increased cell size and increased life span, both of which lead to an increase in maximum body size (continued growth throughout life is characteristic of crustaceans). The size trend has been observed in hyperiid and gammarid amphipods, copepods, stomatopods, mysids, and planktonic euphausiids, both in comparisons of related species as well as within widely distributed species. Deep-sea gigantism is observed in some of the same groups, possibly for the same reasons. An additional factor in aquatic species may be the greater dissolved oxygen concentration at lower temperature. This view is supported by the reduced size of crustaceans in high-altitude lakes. A further possible influence on invertebrates is reduced predation pressure at high latitude. A study of shallow water brachiopods found that predation was reduced in polar areas relative to temperate latitudes (the same trend was not found in deep water, where predation is also reduced, or in comparison of tropical and temperate brachiopods, perhaps because tropical brachiopods have evolved to smaller sizes to successfully evade predation).
Hesse's rule
In 1937 German zoologist and ecologist Richard Hesse proposed an extension of Bergmann's rule. Hesse's rule, also known as the heart–weight rule, states that species inhabiting colder climates have a larger heart in relation to body weight than closely related species inhabiting warmer climates.
Criticism
In a 1986 study, Valerius Geist claimed Bergmann's rule to be false: the correlation with temperature is spurious; instead, Geist found that body size is proportional to the duration of the annual productivity pulse, or food availability per animal during the growing season.
Because many factors can affect body size, there are many critics of Bergmann's rule. Some believe that latitude itself is a poor predictor of body mass. Examples of other selective factors that may contribute to body mass changes are the size of food items available, effects of body size on success as a predator, effects of body size on vulnerability to predation, and resource availability. For example, if an organism is adapted to tolerate cold temperatures, it may also tolerate periods of food shortage, due to correlation between cold temperature and food scarcity. A larger organism can rely on its greater fat stores to provide the energy needed for survival as well being able to procreate for longer periods.
Resource availability is a major constraint on the overall success of many organisms. Resource scarcity can limit the total number of organisms in a habitat, and over time can also cause organisms to adapt by becoming smaller in body size. Resource availability thus becomes a modifying restraint on Bergmann's Rule.
Some examinations of the fossil record have found contradictions to the rule. For example, during the Pleistocene, hippopotamuses in Europe tended to get smaller during colder and drier intervals. Further, a 2024 study found the size of dinosaurs did not increase at northern Arctic latitudes, and that the rule was "only applicable to a subset of homeothermic animals" with regard to temperature when all other climatic variables are ignored.
See also
Animal migration
Biogeography
Gene flow
Gigantothermy
References
Notes
Animal size
Ecogeographic rules
Laws of thermodynamics | Bergmann's rule | [
"Physics",
"Chemistry",
"Biology"
] | 1,879 | [
"Organism size",
"Biological rules",
"Ecogeographic rules",
"Thermodynamics",
"Animal size",
"Laws of thermodynamics"
] |
439,202 | https://en.wikipedia.org/wiki/Electron%20cyclotron%20resonance | Electron cyclotron resonance (ECR) is a phenomenon observed in plasma physics, condensed matter physics, and accelerator physics. It happens when the frequency of incident radiation coincides with the natural frequency of rotation of electrons in magnetic fields. A free electron in a static and uniform magnetic field will move in a circle due to the Lorentz force. The circular motion may be superimposed with a uniform axial motion, resulting in a helix, or with a uniform motion perpendicular to the field (e.g., in the presence of an electrical or gravitational field) resulting in a cycloid. The angular frequency (ω = 2πf ) of this cyclotron motion for a given magnetic field strength B is given (in SI units) by
ω = eB/me ,
where e is the elementary charge and me is the mass of the electron. For the commonly used microwave frequency 2.45 GHz and the bare electron charge and mass, the resonance condition is met when B ≈ 0.0875 T.
For an electron moving at a relativistic speed v, the formula needs to be adjusted according to the special theory of relativity to:
ω = eB / (γme) ,
where
γ = 1 / √(1 − v²/c²)
is the Lorentz factor and me is the electron rest mass.
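The non-relativistic resonance condition quoted above can be checked numerically. The following sketch uses standard CODATA constants; the function name and printed format are illustrative only:

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron rest mass, kg

def resonant_field(freq_hz: float) -> float:
    """Magnetic field (tesla) at which the electron cyclotron frequency
    equals the drive frequency: B = 2*pi*f*m_e / e."""
    return 2 * math.pi * freq_hz * M_ELECTRON / E_CHARGE

# For the common 2.45 GHz magnetron frequency:
print(f"{resonant_field(2.45e9):.4f} T")  # ~0.0875 T
```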
In plasma physics
An ionized plasma may be efficiently produced or heated by superimposing a static magnetic field and a high-frequency electromagnetic field at the electron cyclotron resonance frequency. In the toroidal magnetic fields used in magnetic fusion energy research, the magnetic field decreases with the major radius, so the location of the power deposition can be controlled within about a centimetre. Furthermore, the heating power can be rapidly modulated and is deposited directly into the electrons. These properties make electron cyclotron heating a very valuable research tool for energy transport studies. In addition to heating, electron cyclotron waves can be used to drive current. The inverse process of electron cyclotron emission can be used as a diagnostic of the radial electron temperature profile.
ECR ion sources
Since the early 1980s, following the award-winning pioneering work done by Dr. Richard Geller, Dr. Claude Lyneis, and Dr. H. Postma; respectively from French Atomic Energy Commission, Lawrence Berkeley National Laboratory and the Oak Ridge National Laboratory, the use of electron cyclotron resonance for efficient plasma generation, especially to obtain large numbers of multiply charged ions, has acquired a unique importance in various technological fields. Many diverse activities depend on electron cyclotron resonance technology, including
advanced cancer treatment, where ECR ion sources are crucial for proton therapy,
advanced semiconductor manufacturing, especially for high density DRAM memories, through plasma etching or other plasma processing technologies,
electric propulsion devices for spacecraft propulsion, where a broad range of devices (HiPEP, some ion thrusters, or electrodeless plasma thrusters),
for particle accelerators, on-line mass separation and radioactive ion charge breeding,
and, as a more mundane example, painting of plastic bumpers for cars.
The ECR ion source makes use of the electron cyclotron resonance to ionize a plasma. Microwaves are injected into a volume at the frequency corresponding to the electron cyclotron resonance, defined by the magnetic field applied to a region inside the volume. The volume contains a low pressure gas. The alternating electric field of the microwaves is set to be synchronous with the gyration period of the free electrons of the gas, and increases their perpendicular kinetic energy. Subsequently, when the energized free electrons collide with the gas in the volume they can cause ionization if their kinetic energy is larger than the ionization energy of the atoms or molecules. The ions produced correspond to the gas type used, which may be pure, a compound, or vapour of a solid or liquid material.
ECR ion sources are able to produce singly charged ions with high intensities (e.g. H+ and D+ ions of more than 100 mA (electrical) in DC mode using a 2.45 GHz ECR ion source).
For multiply charged ions, the ECR ion source has the advantages that it is able to confine the ions for long enough for multiple collisions and multiple ionization to take place, and the low gas pressure in the source avoids recombination. The VENUS ECR ion source at Lawrence Berkeley National Laboratory has produced an intensity of 0.25 mA (electrical) of Bi29+.
Some important industrial fields would not exist without the use of this fundamental technology, which makes electron cyclotron resonance ion and plasma sources one of the enabling technologies of today's world.
In condensed matter physics
Within a solid, the mass in the cyclotron frequency equation above is replaced with the effective mass tensor. Cyclotron resonance is therefore a useful technique to measure effective mass and Fermi surface cross-section in solids. In a sufficiently high magnetic field at low temperature in a relatively pure material,
ωcτ ≫ 1 and ħωc ≫ kBT,
where τ is the carrier scattering lifetime, kB is the Boltzmann constant and T is temperature. When these conditions are satisfied, an electron will complete its cyclotron orbit without engaging in a collision, at which point it is said to be in a well-defined Landau level.
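A rough numerical sketch of these observability conditions; the field, effective mass, scattering lifetime and temperature below are arbitrary illustrative assumptions:

```python
E = 1.602176634e-19       # elementary charge, C
M_EFF = 0.2 * 9.109e-31   # assumed effective mass (0.2 electron masses), kg
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
KB = 1.380649e-23         # Boltzmann constant, J/K

B = 10.0     # magnetic field, T (assumed)
TAU = 5e-12  # carrier scattering lifetime, s (assumed)
T = 4.2      # temperature, K (assumed, liquid-helium range)

omega_c = E * B / M_EFF                            # cyclotron angular frequency
print("omega_c * tau          =", omega_c * TAU)           # condition: >> 1
print("hbar*omega_c / (k_B*T) =", HBAR * omega_c / (KB * T))  # condition: >> 1
```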
See also
Cyclotron resonance
Cyclotron
ARC-ECRIS
Ion cyclotron resonance
Synchrotron
Gyrotron
De Haas–van Alphen effect
References
Further reading
"Personal Reminiscences of Cyclotron Resonance", G. Dresselhaus, Proceedings of ICPS-27 (2004). This paper describes the early history of cyclotron resonance in its heyday as a band structure determination technique.
Waves in plasmas
Condensed matter physics
Electric and magnetic fields in matter
Ion source
Particle accelerators | Electron cyclotron resonance | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,157 | [
"Waves in plasmas",
"Physical phenomena",
"Spectrum (physical sciences)",
"Plasma phenomena",
"Phases of matter",
"Electric and magnetic fields in matter",
"Ion source",
"Materials science",
"Waves",
"Mass spectrometry",
"Condensed matter physics",
"Matter"
] |
1,802,096 | https://en.wikipedia.org/wiki/Wake%20Shield%20Facility | Wake Shield Facility (WSF) was a NASA experimental science platform that was placed in low Earth orbit by the Space Shuttle. It was a diameter, free-flying stainless steel disk.
The WSF was deployed using the Space Shuttle's Canadarm. The WSF then used nitrogen gas thrusters to position itself about behind the Space Shuttle, which was at an orbital altitude of over , within the thermosphere, where the atmosphere is exceedingly tenuous. The WSF's orbital speed was at least three to four times faster than the speed of thermospheric gas molecules in the area, which resulted in a cone behind the WSF that was entirely free of gas molecules. The WSF thus created an ultrahigh vacuum in its wake. The resulting vacuum was used to study epitaxial film growth. The WSF operated at a distance from the Space Shuttle to avoid contamination from the Shuttle's rocket thrusters and water dumped overboard from the Shuttle's Waste Collection System (space toilet). After two days, the Space Shuttle would rendezvous with the WSF and again use its robotic arm to collect the WSF and to store it in the Shuttle's payload bay for return to Earth.
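A back-of-the-envelope comparison of the two speeds involved (a sketch only; the gas species, thermospheric temperature and orbital speed used here are assumptions, not figures from this article):

```python
import math

KB = 1.380649e-23     # Boltzmann constant, J/K
AMU = 1.66053907e-27  # atomic mass unit, kg

def mean_thermal_speed(mass_kg: float, temp_k: float) -> float:
    """Mean speed of a Maxwell-Boltzmann gas: sqrt(8*k*T / (pi*m))."""
    return math.sqrt(8 * KB * temp_k / (math.pi * mass_kg))

v_orbit = 7.7e3                               # typical LEO orbital speed, m/s (assumed)
v_gas = mean_thermal_speed(16 * AMU, 1000.0)  # atomic oxygen at ~1000 K (assumed)
print(f"orbital / thermal speed ratio ~ {v_orbit / v_gas:.1f}")
```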
The WSF was flown into space three times, aboard Shuttle flights STS-60 (WSF-1), STS-69 (WSF-2) and STS-80 (WSF-3). During STS-60, some hardware issues were experienced, and, as a result, the WSF-1 was only deployed at the end of the Shuttle's Canadarm. During the later missions, the WSF was deployed as a free-flying platform in the wake of the Shuttle.
These flights proved the vacuum wake concept and realized the space epitaxy concept by growing the first-ever crystalline semiconductor thin films in the vacuum of space. These included gallium arsenide (GaAs) and aluminum gallium arsenide (AlGaAs) depositions. These experiments have been used to develop better photocells and thin films. Among the potential resulting applications are artificial retinas made from tiny ceramic detectors.
Pre-flight calculations suggested that the pressure on the wake side could be decreased by about 6 orders of magnitude over the ambient pressure in low Earth orbit (from to Torr). Analysis of the pressure and temperature data gathered from the two flights concluded that the decrease was about 2 orders of magnitude (4 orders of magnitude less than expected).
The WSF was sponsored by the Space Processing Division in NASA's Office of Life and Microgravity Sciences and Applications. It was designed, built and operated by the Space Vacuum Epitaxy Center, since renamed the Center for Advanced Materials, at the University of Houston, a NASA Commercial Space Center in conjunction with its industrial partner, Space Industries, Inc., also in Houston, Texas.
The Wake Shield Facility spacecraft is being preserved at the Center for Advanced Materials.
See also
Space manufacturing
References
External links
Space Materials Science by the Center for Advanced Materials
Wake Shield Facility program by the Center for Advanced Materials (archive)
Wake Shield Facility program by the Space Vacuum Epitaxy Center (archive)
NASA programs
Space science experiments
Space manufacturing
Thin film deposition
Spacecraft launched in 1994
Spacecraft launched in 1995
Spacecraft launched in 1996
Spacecraft launched by the Space Shuttle
Space hardware returned to Earth intact | Wake Shield Facility | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 680 | [
"Thin film deposition",
"Coatings",
"Thin films",
"Planes (geometry)",
"Solid state engineering"
] |
1,802,169 | https://en.wikipedia.org/wiki/Continuous%20linear%20operator | In functional analysis and related areas of mathematics, a continuous linear operator or continuous linear mapping is a continuous linear transformation between topological vector spaces.
An operator between two normed spaces is a bounded linear operator if and only if it is a continuous linear operator.
Continuous linear operators
Characterizations of continuity
Suppose that is a linear operator between two topological vector spaces (TVSs).
The following are equivalent:
is continuous.
is continuous at some point
is continuous at the origin in
If is locally convex then this list may be extended to include:
for every continuous seminorm on there exists a continuous seminorm on such that
If and are both Hausdorff locally convex spaces then this list may be extended to include:
is weakly continuous and its transpose maps equicontinuous subsets of to equicontinuous subsets of
If is a sequential space (such as a pseudometrizable space) then this list may be extended to include:
is sequentially continuous at some (or equivalently, at every) point of its domain.
If is pseudometrizable or metrizable (such as a normed or Banach space) then we may add to this list:
is a bounded linear operator (that is, it maps bounded subsets of to bounded subsets of ).
If is seminormable space (such as a normed space) then this list may be extended to include:
maps some neighborhood of 0 to a bounded subset of
If and are both normed or seminormed spaces (with both seminorms denoted by ) then this list may be extended to include:
for every there exists some such that
If and are Hausdorff locally convex spaces with finite-dimensional then this list may be extended to include:
the graph of is closed in
Continuity and boundedness
Throughout, is a linear map between topological vector spaces (TVSs).
Bounded subset
The notion of a "bounded set" for a topological vector space is that of being a von Neumann bounded set.
If the space happens to also be a normed space (or a seminormed space) then a subset is von Neumann bounded if and only if it is , meaning that
A subset of a normed (or seminormed) space is called if it is norm-bounded (or equivalently, von Neumann bounded).
For example, the scalar field ( or ) with the absolute value is a normed space, so a subset is bounded if and only if is finite, which happens if and only if is contained in some open (or closed) ball centered at the origin (zero).
Any translation, scalar multiple, and subset of a bounded set is again bounded.
Function bounded on a set
If is a set then is said to be if is a bounded subset of which if is a normed (or seminormed) space happens if and only if
A linear map is bounded on a set if and only if it is bounded on for every (because and any translation of a bounded set is again bounded) if and only if it is bounded on for every non-zero scalar (because and any scalar multiple of a bounded set is again bounded).
Consequently, if is a normed or seminormed space, then a linear map is bounded on some (equivalently, on every) non-degenerate open or closed ball (not necessarily centered at the origin, and of any radius) if and only if it is bounded on the closed unit ball centered at the origin
Bounded linear maps
By definition, a linear map between TVSs is said to be bounded and is called a bounded linear operator if for every (von Neumann) bounded subset of its domain, its image is a bounded subset of its codomain; or said more briefly, if it is bounded on every bounded subset of its domain. When the domain is a normed (or seminormed) space then it suffices to check this condition for the open or closed unit ball centered at the origin. Explicitly, the map is a bounded linear operator if and only if the image of this unit ball is a bounded subset of the codomain; if the codomain is also a (semi)normed space then this happens if and only if the operator norm is finite. Every sequentially continuous linear operator is bounded.
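For the normed-space case this can be written out explicitly; the symbols F, X and Y below are placeholders introduced for illustration, since no notation is fixed in the text above:

```latex
% F : X \to Y linear between normed spaces; F is bounded iff its
% operator norm is finite:
\|F\| \;=\; \sup_{\|x\|_X \le 1} \|Fx\|_Y \;<\; \infty,
\qquad\text{in which case}\qquad
\|Fx\|_Y \;\le\; \|F\|\,\|x\|_X \quad \text{for all } x \in X.
```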
Function bounded on a neighborhood and local boundedness
In contrast, a map is said to be a point or if there exists a neighborhood of this point in such that is a bounded subset of
It is "" (of some point) if there exists point in its domain at which it is locally bounded, in which case this linear map is necessarily locally bounded at point of its domain.
The term "locally bounded" is sometimes used to refer to a map that is locally bounded at every point of its domain, but some functional analysis authors define "locally bounded" to instead be a synonym of "bounded linear operator"; these are related but not equivalent concepts. For this reason, this article will avoid the term "locally bounded" and instead say "locally bounded at every point" (there is no disagreement about the definition of "locally bounded at a point").
Bounded on a neighborhood implies continuous implies bounded
A linear map is "bounded on a neighborhood" (of some point) if and only if it is locally bounded at every point of its domain, in which case it is necessarily continuous (even if its domain is not a normed space) and thus also bounded (because a continuous linear operator is always a bounded linear operator).
For any linear map, if it is bounded on a neighborhood then it is continuous, and if it is continuous then it is bounded. The converse statements are not true in general but they are both true when the linear map's domain is a normed space. Examples and additional details are now given below.
Continuous and bounded but not bounded on a neighborhood
The next example shows that it is possible for a linear map to be continuous (and thus also bounded) but not bounded on any neighborhood. In particular, it demonstrates that being "bounded on a neighborhood" is not always synonymous with being "bounded".
: If is the identity map on some locally convex topological vector space then this linear map is always continuous (indeed, even a TVS-isomorphism) and bounded, but is bounded on a neighborhood if and only if there exists a bounded neighborhood of the origin in which is equivalent to being a seminormable space (which if is Hausdorff, is the same as being a normable space).
This shows that it is possible for a linear map to be continuous but not bounded on any neighborhood.
Indeed, this example shows that every locally convex space that is not seminormable has a linear TVS-automorphism that is not bounded on any neighborhood of any point.
Thus although every linear map that is bounded on a neighborhood is necessarily continuous, the converse is not guaranteed in general.
Guaranteeing converses
To summarize the discussion below, for a linear map on a normed (or seminormed) space, being continuous, being bounded, and being bounded on a neighborhood are all equivalent.
A linear map whose domain or codomain is normable (or seminormable) is continuous if and only if it is bounded on a neighborhood.
And a bounded linear operator valued in a locally convex space will be continuous if its domain is (pseudo)metrizable or bornological.
Guaranteeing that "continuous" implies "bounded on a neighborhood"
A TVS is said to be if there exists a neighborhood that is also a bounded set. For example, every normed or seminormed space is a locally bounded TVS since the unit ball centered at the origin is a bounded neighborhood of the origin.
If is a bounded neighborhood of the origin in a (locally bounded) TVS then its image under any continuous linear map will be a bounded set (so this map is thus bounded on this neighborhood ).
Consequently, a linear map from a locally bounded TVS into any other TVS is continuous if and only if it is bounded on a neighborhood.
Moreover, any TVS with this property must be a locally bounded TVS. Explicitly, if is a TVS such that every continuous linear map (into any TVS) whose domain is is necessarily bounded on a neighborhood, then must be a locally bounded TVS (because the identity function is always a continuous linear map).
Any linear map from a TVS into a locally bounded TVS (such as any linear functional) is continuous if and only if it is bounded on a neighborhood.
Conversely, if is a TVS such that every continuous linear map (from any TVS) with codomain is necessarily bounded on a neighborhood, then must be a locally bounded TVS.
In particular, a linear functional on an arbitrary TVS is continuous if and only if it is bounded on a neighborhood.
Thus when the domain or the codomain of a linear map is normable or seminormable, continuity is equivalent to being bounded on a neighborhood.
Guaranteeing that "bounded" implies "continuous"
A continuous linear operator is always a bounded linear operator.
But importantly, in the most general setting of a linear operator between arbitrary topological vector spaces, it is possible for a linear operator to be bounded but not continuous.
A linear map whose domain is pseudometrizable (such as any normed space) is bounded if and only if it is continuous.
The same is true of a linear map from a bornological space into a locally convex space.
Guaranteeing that "bounded" implies "bounded on a neighborhood"
In general, without additional information about either the linear map or its domain or codomain, the map being "bounded" is not equivalent to it being "bounded on a neighborhood".
If is a bounded linear operator from a normed space into some TVS then is necessarily continuous; this is because any open ball centered at the origin in is both a bounded subset (which implies that is bounded since is a bounded linear map) and a neighborhood of the origin in so that is thus bounded on this neighborhood of the origin, which (as mentioned above) guarantees continuity.
Continuous linear functionals
Every linear functional on a topological vector space (TVS) is a linear operator so all of the properties described above for continuous linear operators apply to them.
However, because of their specialized nature, we can say even more about continuous linear functionals than we can about more general continuous linear operators.
Characterizing continuous linear functionals
Let be a topological vector space (TVS) over the field ( need not be Hausdorff or locally convex) and let be a linear functional on
The following are equivalent:
is continuous.
is uniformly continuous on
is continuous at some point of
is continuous at the origin.
By definition, said to be continuous at the origin if for every open (or closed) ball of radius centered at in the codomain there exists some neighborhood of the origin in such that
If is a closed ball then the condition holds if and only if
It is important that be a closed ball in this supremum characterization. Assuming that is instead an open ball, then is a sufficient but not necessary condition for to be true (consider for example when is the identity map on and ), whereas the non-strict inequality is instead a necessary but not sufficient condition for to be true (consider for example and the closed neighborhood ). This is one of several reasons why many definitions involving linear functionals, such as polar sets for example, involve closed (rather than open) neighborhoods and non-strict (rather than strict) inequalities.
is bounded on a neighborhood (of some point). Said differently, is a locally bounded at some point of its domain.
Explicitly, this means that there exists some neighborhood of some point such that is a bounded subset of that is, such that This supremum over the neighborhood is equal to if and only if
Importantly, a linear functional being "bounded on a neighborhood" is in general not equivalent to being a "bounded linear functional", because (as described above) it is possible for a linear map to be bounded but not continuous. However, continuity and boundedness are equivalent if the domain is a normed or seminormed space; that is, for a linear functional on a normed space, being "bounded" is equivalent to being "bounded on a neighborhood".
is bounded on a neighborhood of the origin. Said differently, is a locally bounded at the origin.
The equality holds for all scalars and when then will be neighborhood of the origin. So in particular, if is a positive real number then for every positive real the set is a neighborhood of the origin and Using proves the next statement when
There exists some neighborhood of the origin such that
This inequality holds if and only if for every real which shows that the positive scalar multiples of this single neighborhood will satisfy the definition of continuity at the origin given in (4) above.
By definition of the set which is called the (absolute) polar of the inequality holds if and only if Polar sets, and so also this particular inequality, play important roles in duality theory.
is a locally bounded at every point of its domain.
The kernel of is closed in
Either or else the kernel of is dense in
There exists a continuous seminorm on such that
In particular, is continuous if and only if the seminorm is continuous.
The graph of is closed.
is continuous, where denotes the real part of
If and are complex vector spaces then this list may be extended to include:
The imaginary part of is continuous.
If the domain is a sequential space then this list may be extended to include:
is sequentially continuous at some (or equivalently, at every) point of its domain.
If the domain is metrizable or pseudometrizable (for example, a Fréchet space or a normed space) then this list may be extended to include:
is a bounded linear operator (that is, it maps bounded subsets of its domain to bounded subsets of its codomain).
If the domain is a bornological space (for example, a pseudometrizable TVS) and is locally convex then this list may be extended to include:
is a bounded linear operator.
is sequentially continuous at some (or equivalently, at every) point of its domain.
is sequentially continuous at the origin.
and if in addition is a vector space over the real numbers (which in particular, implies that is real-valued) then this list may be extended to include:
There exists a continuous seminorm on such that
For some real the half-space is closed.
For any real the half-space is closed.
If is complex then either all three of and are continuous (respectively, bounded), or else all three are discontinuous (respectively, unbounded).
Examples
Every linear map whose domain is a finite-dimensional Hausdorff topological vector space (TVS) is continuous. This is not true if the finite-dimensional TVS is not Hausdorff.
Every (constant) map between TVS that is identically equal to zero is a linear map that is continuous, bounded, and bounded on the neighborhood of the origin. In particular, every TVS has a non-empty continuous dual space (although it is possible for the constant zero map to be its only continuous linear functional).
Suppose is any Hausdorff TVS. Then every linear functional on it is necessarily continuous if and only if every vector subspace is closed. Every linear functional on it is necessarily a bounded linear functional if and only if every bounded subset is contained in a finite-dimensional vector subspace.
Properties
A locally convex metrizable topological vector space is normable if and only if every bounded linear functional on it is continuous.
A continuous linear operator maps bounded sets into bounded sets.
The proof uses the facts that the translation of an open set in a linear topological space is again an open set, and the equality
for any subset of and any which is true due to the additivity of
Properties of continuous linear functionals
If is a complex normed space and is a linear functional on then (where in particular, one side is infinite if and only if the other side is infinite).
Every non-trivial continuous linear functional on a TVS is an open map.
If is a linear functional on a real vector space and if is a seminorm on then if and only if
If is a linear functional and is a non-empty subset, then by defining the sets
the supremum can be written more succinctly as because
If is a scalar then
so that if is a real number and is the closed ball of radius centered at the origin then the following are equivalent:
See also
References
Functional analysis
Linear operators
Operator theory
Theory of continuous functions | Continuous linear operator | [
"Mathematics"
] | 3,386 | [
"Functions and mappings",
"Functional analysis",
"Theory of continuous functions",
"Mathematical objects",
"Linear operators",
"Topology",
"Mathematical relations"
] |
1,803,558 | https://en.wikipedia.org/wiki/3%2C4-Methylenedioxyphenylpropan-2-one | 3,4-Methylenedioxyphenylpropan-2-one or piperonyl methyl ketone (MDP2P or PMK) is a chemical compound consisting of a phenylacetone moiety substituted with a methylenedioxy functional group. It is commonly synthesized from either safrole (which, for comparison, is 3-[3,4-(methylenedioxy)phenyl]-2-propene) or its isomer isosafrole via oxidation using the Wacker oxidation or peroxyacid oxidation methods. MDP2P is unstable at room temperature and must be kept in the freezer in order to be preserved properly.
MDP2P is a precursor in the chemical synthesis of the methylenedioxyphenethylamine (MDxx) class of compounds, the classic example of which is 3,4-methylenedioxy-N-methylamphetamine (MDMA), and is also an intermediate between the MDxx family and their slightly more distant precursor safrole or isosafrole. On account of its relation to the MDxx chemical class, MDP2P, as well as safrole and isosafrole, are in the United States (U.S.) Drug Enforcement Administration (DEA) List I of Chemicals of the Controlled Substances Act (CSA) via the Chemical Diversion and Trafficking Act (CDTA). It is also considered a category 1 precursor in the European Union.
See also
Isosafrole
Phenylacetone
Safrole
References
Ketones
Benzodioxoles
Human drug metabolites | 3,4-Methylenedioxyphenylpropan-2-one | [
"Chemistry"
] | 337 | [
"Ketones",
"Chemicals in medicine",
"Functional groups",
"Human drug metabolites"
] |
1,803,894 | https://en.wikipedia.org/wiki/Coagulative%20necrosis | Coagulative necrosis is a type of accidental cell death typically caused by ischemia or infarction. In coagulative necrosis, the architectures of dead tissue are preserved for at least a couple of days. It is believed that the injury denatures structural proteins as well as lysosomal enzymes, thus blocking the proteolysis of the damaged cells. The lack of lysosomal enzymes allows it to maintain a "coagulated" morphology for some time. Like most types of necrosis, if enough viable cells are present around the affected area, regeneration will usually occur. Coagulative necrosis occurs in most bodily organs, excluding the brain. Different diseases are associated with coagulative necrosis, including acute tubular necrosis and acute myocardial infarction.
Coagulative necrosis can also be induced by high local temperature; it is a desired effect of treatments such as high intensity focused ultrasound applied to cancerous cells.
Causes
Coagulative necrosis is most commonly caused by conditions that do not involve severe trauma, toxins or an acute or chronic immune response. The lack of oxygen (hypoxia) causes cell death in a localized area which is perfused by blood vessels failing to deliver primarily oxygen, but also other important nutrients. While ischemia in most tissues of the body will cause coagulative necrosis, in the central nervous system ischemia causes liquefactive necrosis, as there is very little structural framework in neural tissue.
Pathology
Macroscopic
The macroscopic appearance of an area of coagulative necrosis is a pale segment of tissue contrasting against surrounding well vascularized tissue and is dry on cut surface. The tissue may later turn red due to inflammatory response. The surrounding surviving cells can aid in regeneration of the affected tissue unless they are stable or permanent.
Microscopic
Microscopically, coagulative necrosis causes cells to appear to have the same outline, but no nuclei. The nucleus is lost and there is cytoplasmic hypereosinophilia on H&E stain.(Protein denaturation results in exposure of hydrophobic regions normally sequestered within the three-dimensional center of the molecules and may explain why necrotic cells display an increased capacity to bind the hydrophobic Eosin pigment) Also, it is characteristic of coagulative necrosis to not have a zone in between necrotic cells and viable cells. There is an instant transition, lacking granulation tissue in between.
Treatments
Coagulative necrosis can be induced for treatments of cancers. Radiofrequency (RF) energy can be used in liver resection surgeries to produce coagulative necrosis, creating a coagulative necrosis zone. This coagulates the liver resection margins and is useful in liver resection surgeries for helping to stop bleeding within the resection margin, increasing the safety margin. To achieve coagulative necrosis in tumor tissue, it only takes around 20 minutes of application with the RF probe. Additionally, high-intensity focused ultrasound (HIFU) also induces coagulative necrosis in target tumors. Both of these treatments use coagulative necrosis in treatment of cancer.
Regeneration
As the majority of the structural remnants of the necrotic tissue remains, labile cells adjacent to the affected tissue will replicate and replace the cells that have been killed during the event. Labile cells are constantly undergoing mitosis and can therefore help reform the tissue, whereas nearby stable and permanent cells (e.g. neurons and cardiomyocytes) do not undergo mitosis and will not replace the tissue affected. Fibroblasts will also migrate to the affected area, depositing fibrous tissue producing fibrosis or scarring in areas where viable cells do not replicate and replace tissue.
References
Cellular processes
Necrosis | Coagulative necrosis | [
"Biology"
] | 778 | [
"Cellular processes",
"Necrosis"
] |
1,804,031 | https://en.wikipedia.org/wiki/Caseous%20necrosis | Caseous necrosis or caseous degeneration () is a unique form of cell death in which the tissue maintains a cheese-like appearance. Unlike with coagulative necrosis, tissue structure is destroyed. Caseous necrosis is enclosed within a granuloma. Caseous necrosis is most notably associated with tuberculoma. The dead tissue appears as a soft and white proteinaceous dead cell mass.
The term caseous means 'pertaining or related to cheese', and comes from caseus, the Latin word for 'cheese'.
Histology
In caseous necrosis no histological architecture is preserved (unlike with coagulative necrosis). On microscopic examination with H&E staining, the area is acellular, characterised by amorphous, roughly granular eosinophilic debris of now dead cells, also containing interspersed haematoxyphilic remnants of cell nucleus contents. This caseous necrotic center is enclosed within a granuloma.
Causes
Frequently caseous necrosis is characteristically associated with tuberculomas.
A similar appearance can be associated with histoplasmosis, cryptococcosis, and coccidioidomycosis.
Pathophysiology
This begins as infection is recognized by the body and macrophages begin walling off the microorganisms or pathogens. As macrophages release chemicals that digest cells, the cells begin to die. As the cells die they disintegrate but are not completely digested and the debris of the disintegrated cells clump together creating soft granular mass that has the appearance of cheese. As cell death begins, the granuloma forms and cell death continues the inflammatory response is mediated by a type IV hypersensitivity reaction.
Some data suggests that the epithelioid morphology and associated barrier function of host macrophages associated with granulomas may prevent effective immune clearance of mycobacteria.
References
External links
Microscope images of caseous necrosis
Image of a hilar lymph node demonstrating caseous necrosis
Image of a caseating granuloma of tuberculosis in the adrenal gland
Histopathology
Tuberculosis
Necrosis | Caseous necrosis | [
"Chemistry",
"Biology"
] | 446 | [
"Necrosis",
"Cellular processes",
"Histopathology",
"Microscopy"
] |
1,804,365 | https://en.wikipedia.org/wiki/Retrograde%20inversion | In music theory, retrograde inversion is a musical term that literally means "backwards and upside down": "The inverse of the series is sounded in reverse order." Retrograde reverses the order of the motif's pitches: what was the first pitch becomes the last, and vice versa. This is a technique used in music, specifically in twelve-tone technique, where the inversion and retrograde techniques are performed on the same tone row successively, "[t]he inversion of the prime series in reverse order from last pitch to first."
Conventionally, inversion is carried out first, and the inverted form is then taken backward to form the retrograde inversion, so that the untransposed retrograde inversion ends with the pitch that began the prime form of the series. In his late twelve-tone works, however, Igor Stravinsky preferred the opposite order, so that his row charts use inverse retrograde (IR) forms for his source sets, instead of retrograde inversions (RI), although he sometimes labeled them RI in his sketches.
For example, the forms of the row from Requiem Canticles are as follows:
P0:
R0:
I0:
RI0:
IR0:
Note that IR is a transposition of RI, the pitch class between the last pitches of P and I above RI.
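These row operations can be sketched as arithmetic on pitch classes mod 12; the example row below is arbitrary and is not the Requiem Canticles series, whose pitches are not reproduced above:

```python
def inversion(row):
    """Invert about the first pitch class: every interval is mirrored."""
    return [(2 * row[0] - p) % 12 for p in row]

def retrograde(row):
    """Reverse the order of the pitch classes."""
    return list(reversed(row))

def retrograde_inversion(row):
    """Invert first, then read the inverted form backwards (RI)."""
    return retrograde(inversion(row))

def inverse_retrograde(row):
    """Stravinsky's preferred order: retrograde first, then invert (IR)."""
    return inversion(retrograde(row))

prime = [0, 4, 7, 11, 2, 5, 9, 1, 3, 6, 8, 10]   # arbitrary example row
print("RI:", retrograde_inversion(prime))
print("IR:", inverse_retrograde(prime))          # a transposition of the RI form
```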
Other compositions that include retrograde inversions in its rows include works by Tadeusz Baird and Karel Goeyvaerts. One work in particular by the latter composer, Nummer 2, employs retrograde of the recurring twelve-tone row B–F–F–E–G–A–E–D–A–B–D–C in the piano part. It is performed in both styles, particularly in the outer sections of the piece. The final movement of Paul Hindemith's Ludus Tonalis, the Postludium, is an exact retrograde inversion of the work's opening Praeludium.
Sources
Musical symmetry
Serialism | Retrograde inversion | [
"Physics"
] | 408 | [
"Symmetry",
"Musical symmetry"
] |
1,804,451 | https://en.wikipedia.org/wiki/Sir%20Frank%20Whittle%20Medal | The Sir Frank Whittle Medal is awarded annually by the Royal Academy of Engineering to an engineer,
normally resident in the United Kingdom, for outstanding and sustained achievement which has contributed to the well-being of the nation. The field of activity in which the medal is awarded changes annually.
Named after Sir Frank Whittle, the award was instituted in 2001.
Previous winners:
References
Awards established in 2001
British science and technology awards
Engineering awards
Royal Academy of Engineering
Technology history of the United Kingdom | Sir Frank Whittle Medal | [
"Technology",
"Engineering"
] | 99 | [
"Science and technology awards",
"Royal Academy of Engineering",
"National academies of engineering",
"Engineering awards"
] |
1,804,746 | https://en.wikipedia.org/wiki/Distributed%20version%20control | In software development, distributed version control (also known as distributed revision control) is a form of version control in which the complete codebase, including its full history, is mirrored on every developer's computer. Compared to centralized version control (cf. monorepo), this enables automatic management branching and merging, speeds up most operations (except pushing and fetching), improves the ability to work offline, and does not rely on a single location for backups. Git, the world's most popular version control system, is a distributed version control system.
In 2010, software development author Joel Spolsky described distributed version control systems as "possibly the biggest advance in software development technology in the [past] ten years".
Distributed vs. centralized
Distributed version control systems (DVCS) use a peer-to-peer approach to version control, as opposed to the client–server approach of centralized systems. Distributed revision control synchronizes repositories by transferring patches from peer to peer. There is no single central version of the codebase; instead, each user has a working copy and the full change history.
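A toy model of this peer-to-peer synchronization (purely illustrative; it does not reflect how Git or any real DVCS stores or exchanges data):

```python
class Repo:
    """Each peer holds the *complete* history as a set of commit ids."""
    def __init__(self, commits=None):
        self.commits = set(commits or [])

    def commit(self, change_id):
        self.commits.add(change_id)      # record work locally, even offline

    def pull(self, other):
        self.commits |= other.commits    # fetch whatever the peer has that we lack

alice = Repo({"c1", "c2"})
bob = Repo(alice.commits)                # "clone": the full history is copied
bob.commit("c3")                         # independent local work on each peer
alice.commit("c4")
alice.pull(bob)                          # peers exchange changes directly
print(sorted(alice.commits))             # ['c1', 'c2', 'c3', 'c4']
```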
Advantages of DVCS (compared with centralized systems) include:
Allows users to work productively when not connected to a network.
Common operations (such as commits, viewing history, and reverting changes) are faster for DVCS, because there is no need to communicate with a central server. With DVCS, communication is necessary only when sharing changes among other peers.
Allows private work, so users can use their changes even for early drafts they do not want to publish.
Working copies effectively function as remote backups, which avoids relying on one physical machine as a single point of failure.
Allows various development models to be used, such as using development branches or a Commander/Lieutenant model.
Permits centralized control of the "release version" of the project
On FOSS software projects it is much easier to create a project fork from a project that is stalled because of leadership conflicts or design disagreements.
Disadvantages of DVCS (compared with centralized systems) include:
Initial checkout of a repository is slower as compared to checkout in a centralized version control system, because all branches and revision history are copied to the local machine by default.
The lack of locking mechanisms that is part of most centralized VCS and still plays an important role when it comes to non-mergeable binary files such as graphic assets or too complex single file binary or XML packages (e.g. office documents, PowerBI files, SQL Server Data Tools BI packages, etc.).
Additional storage required for every user to have a complete copy of the complete codebase history.
Increased exposure of the code base since every participant has a locally vulnerable copy.
Some originally centralized systems now offer some distributed features. Team Foundation Server and Visual Studio Team Services now host centralized and distributed version control repositories via hosting Git.
Similarly, some distributed systems now offer features that mitigate the issues of checkout times and storage costs, such as the Virtual File System for Git developed by Microsoft to work with very large codebases, which exposes a virtual file system that downloads files to local storage only as they are needed.
Work model
A distributed model is generally better suited for large projects with partly independent developers, such as the Linux Kernel. It allows developers to work in independent branches and apply changes that can later be committed, audited and merged (or rejected) by others. This model allows for better flexibility and permits for the creation and adaptation of custom source code branches (forks) whose purpose might differ from the original project. In addition, it permits developers to locally clone an existing code repository and work on such from a local environment where changes are tracked and committed to the local repository allowing for better tracking of changes before being committed to the master branch of the repository. Such an approach enables developers to work in local and disconnected branches, making it more convenient for larger distributed teams.
Central and branch repositories
In a truly distributed project, such as Linux, every contributor maintains their own version of the project, with different contributors hosting their own respective versions and pulling in changes from other users as needed, resulting in a general consensus emerging from multiple different nodes. This also makes the process of "forking" easy, as all that is required is for one contributor to stop accepting pull requests from other contributors and let the codebases gradually grow apart.
This arrangement, however, can be difficult to maintain, resulting in many projects choosing to shift to a paradigm in which one contributor is the universal "upstream", a repository from whom changes are almost always pulled. Under this paradigm, development is somewhat recentralized, as every project now has a central repository that is informally considered as the official repository, managed by the project maintainers collectively. While distributed version control systems make it easy for new developers to "clone" a copy of any other contributor's repository, in a central model, new developers always clone the central repository to create identical local copies of the code base. Under this system, code changes in the central repository are periodically synchronized with the local repository, and once the development is done, the change should be integrated into the central repository as soon as possible.
Organizations utilizing this centralized pattern often choose to host the central repository on a third party service like GitHub, which offers not only more reliable uptime than self-hosted repositories, but can also add centralized features like issue trackers and continuous integration.
Pull requests
Contributions to a source code repository that uses a distributed version control system are commonly made by means of a pull request, also known as a merge request. The contributor requests that the project maintainer pull the source code change, hence the name "pull request". The maintainer has to merge the pull request if the contribution should become part of the source base.
The developer creates a pull request to notify maintainers of a new change; a comment thread is associated with each pull request. This allows for focused discussion of code changes. Submitted pull requests are visible to anyone with repository access. A pull request can be accepted or rejected by maintainers.
Once the pull request is reviewed and approved, it is merged into the repository. Depending on the established workflow, the code may need to be tested before being included into official release. Therefore, some projects contain a special branch for merging untested pull requests. Other projects run an automated test suite on every pull request, using a continuous integration tool, and the reviewer checks that any new code has appropriate test coverage.
History
The first open-source DVCS systems included Arch, Monotone, and Darcs. However, open source DVCSs were never very popular until the release of Git and Mercurial.
BitKeeper was used in the development of the Linux kernel from 2002 to 2005. The development of Git, now the world's most popular version control system, was prompted by the decision of the company that made BitKeeper to rescind the free license that Linus Torvalds and some other Linux kernel developers had previously taken advantage of.
See also
References
External links
Essay on various revision control systems, especially the section "Centralized vs. Decentralized SCM"
Introduction to distributed version control systems - IBM Developer Works article
Version control
Free software projects
Free version control software
Distributed version control systems
"Engineering"
] | 1,536 | [
"Software engineering",
"Version control"
] |
1,804,889 | https://en.wikipedia.org/wiki/Acoustic%20network | An acoustic network is a method of positioning equipment using sound waves. It is primarily used in water, and can be as small or as large as required by the users specifications.
Size of network
The simplest acoustic network consists of one measurement resulting in a single range between sound source and sound receiver.
Bigger networks are only limited by the amount of equipment available, and computing power needed to resolve the resulting data. The latest acoustic networks used in the marine seismic industry can resolve a network of some 16,000 individual ranges in a matter of seconds.
The principle
The principle behind all acoustic networks is the same. Distance = speed x travel time. If the travel time and speed of the sound signal are known, we can calculate the distance between source and receiver. In most networks, the speed of the acoustic signal is assumed at a specific value. This value is either derived from measuring a signal between two known points, or by using specific equipment to calculate it from environmental conditions.
The basic operation of measuring a single range proceeds as follows.
At a specified time the processor issues a signal to the source, which then sends out the sound wave.
Once the sound wave is received another signal is received at the processor resulting in a time difference between transmission and reception. This gives the travel time.
Using the travel time and assumed speed of the signal, the processor can calculate the distance between source and receiver.
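In code, the single-range calculation is one multiplication; the sound speed and travel time below are example values, with 1500 m/s a common nominal figure for seawater:

```python
SOUND_SPEED = 1500.0  # assumed nominal speed of sound in seawater, m/s

def range_from_travel_time(travel_time_s: float) -> float:
    """Distance = speed x travel time."""
    return SOUND_SPEED * travel_time_s

print(range_from_travel_time(0.2))  # 0.2 s one-way travel time -> 300.0 m
```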
If the operator is using acoustic ranges to position items in unknown locations they will need to use more than the single range example shown above.
As there is only one measurement, the receiver could be anywhere on a circle with a radius equal to the calculated range and centered on the transmitter.
Acoustic Processing
If a second transmitter is added to the system the number of possible positions for the receiver is reduced to two.
It is only when three or more ranges are introduced into the system that the position of the receiver can be determined.
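With three or more ranges to transmitters at known positions, one way to solve for the receiver position is a linearized least-squares fit (an illustrative sketch, not software from any particular system; positions and ranges below are made-up example values):

```python
import numpy as np

def trilaterate(stations, ranges):
    """Least-squares 2-D position from ranges to known stations,
    linearized by subtracting the first station's range equation."""
    x0, y0 = stations[0]
    r0 = ranges[0]
    A, b = [], []
    for (x, y), r in zip(stations[1:], ranges[1:]):
        A.append([2 * (x - x0), 2 * (y - y0)])
        b.append(r0**2 - r**2 + x**2 + y**2 - x0**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

stations = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]  # transmitter positions, m
ranges = [500.0, 806.2, 670.8]                         # measured ranges, m
print(trilaterate(stations, ranges))                   # approximately [300, 400]
```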
References
Acoustics
Networks | Acoustic network | [
"Physics"
] | 385 | [
"Classical mechanics",
"Acoustics"
] |
1,804,907 | https://en.wikipedia.org/wiki/Woodchipper | A tree chipper or woodchipper is a machine used for reducing wood (generally tree limbs or trunks) into smaller woodchips. They are often portable, being mounted on wheels on frames suitable for towing behind a truck or van. Power is generally provided by an internal combustion engine from . There are also high power chipper models mounted on trucks and powered by a separate engine. These models usually also have a hydraulic winch.
Tree chippers are typically made of a hopper with a collar, the chipper mechanism itself, and an optional collection bin for the chips. A tree limb is inserted into the hopper (the collar serving as a partial safety mechanism to keep human body parts away from the chipping blades) and started into the chipping mechanism. The chips exit through a chute and can be directed into a truck-mounted container or onto the ground. Typical output is chips on the order of across in size. The resulting wood chips have various uses such as being spread as a ground cover or being fed into a digester during papermaking.
Most woodchippers rely on energy stored in a heavy flywheel to do their work (although some use drums). The chipping blades are mounted on the face of the flywheel, and the flywheel is accelerated by an electric motor or internal combustion engine.
Large woodchippers are frequently equipped with grooved rollers in the throat of their feed funnels. Once a branch has been gripped by the rollers, the rollers transport the branch to the chipping blades at a steady rate. These rollers are a safety feature and are generally reversible for situations where a branch gets caught on clothing.
History
The woodchipper was invented by Peter Jensen (Maasbüll, Germany) in 1884, the "Marke Angeln" soon became the core business of his company, which already produced and repaired communal- and woodworking-machinery.
Types
Disc
The original chipper design employs a steel disk with blades mounted upon it as the chipping mechanism. This technology dates back to an invention by German Heinrich Wigger, for which he obtained a patent in 1922. In this design, (usually) reversible hydraulically powered wheels draw the material from the hopper towards the disk, which is mounted perpendicularly to the incoming material. As the disk is turned by a motor, the blades mounted on the face of the disk cut the material into chips. These are thrown out the chute by flanges on the edges of the disk.
Commercial-grade disk-style chippers usually have a material diameter capacity of . Industrial-grade chippers (tub grinders) are available with discs as large as in diameter, requiring . One application of industrial disk chippers is to produce the wood chips used in the manufacture of particle board.
Drum
Drum chippers employ mechanisms consisting of a large steel drum powered by a motor. The drum is mounted parallel to the hopper and spins toward the chute. Blades mounted to the outer surface of the drum cut the material into chips and propel the chips into the discharge chute. Commercial-grade drum-style chippers usually have a material diameter capacity of .
Conventionally-fed drum chippers use the drum as the feed mechanism, drawing the material through as it chips it. These are colloquially known as "chuck-and-duck" chippers, due to the immediate speed attained by material dropped into the drum. Chippers of this type have many drawbacks and safety issues. If an operator becomes snagged on material being fed into the machine, injury or death is very likely. Hydraulically fed drum chippers have largely replaced conventionally-fed machines. These chippers use a set of hydraulically powered wheels to regulate the rate of feed of material into the chipper drum.
Other
Much larger machines for wood processing exist. "Whole tree chippers" and "Recyclers", which can typically handle material diameters of may employ drums, disks, or a combination of both. The largest machines used in wood processing, often called "Tub or Horizontal Grinders", may handle a material diameter of or greater, and use carbide tipped flail hammers to pulverize wood rather than cut it, producing a shredded wood rather than chip or chunk. These machines usually have a power of . Most are so heavy that they require a semi-trailer truck to be transported. Smaller models can be towed by a medium duty truck.
Blades
Although chippers vary greatly in size, type, and capacity, the blades processing the wood are similar in construction. They are rectangular in shape and are usually across by long. They vary in thickness from about . Chipper blades are made from high grade steel and usually contain a minimum of 8% chromium for hardness.
City services
Fallen branches, especially when it is suspected that they are infested by beetles or their larvae, are chipped to prevent further infestation. City governments acquire and operate chippers as needed, including for seasonal use.
Safety
Thirty-one people were killed in woodchipper accidents between 1992 and 2002 in the US, according to a 2005 report by the Journal of the American Medical Association.
In popular culture
Joel and Ethan Coen's film Fargo features an infamous scene in which Peter Stormare, as Gaear Grimsrud, feeds the remains of Steve Buscemi's character, Carl Showalter, into a woodchipper. The scene, according to the film's special edition DVD, was based on the 1986 murder of Helle Crafts. The woodchipper used in the scene is now a tourist attraction at the Fargo-Moorhead Visitors Center.
It was claimed that Saddam Hussein used chippers to murder dissident citizens of his country, although there was extremely little evidence to support this claim.
Horror films Tucker and Dale vs. Evil (2011) and Winnie-the-Pooh: Blood and Honey (2023) contain scenes depicting the use of a woodchipper as a murder weapon.
See also
References
External links
brush chipper for biomass, Vermeer India
Chipper/Shredder Safety, Kansas State University
Agricultural machinery
Organic farming
Gardening tools
Woodworking machines
Forestry equipment | Woodchipper | [
"Physics",
"Technology"
] | 1,348 | [
"Woodworking machines",
"Machines",
"Physical systems"
] |
1,805,832 | https://en.wikipedia.org/wiki/Kilocalorie%20per%20mole | The kilocalorie per mole is a unit to measure an amount of energy per number of molecules, atoms, or other similar particles. It is defined as one kilocalorie of energy (1000 thermochemical gram calories) per one mole of substance. The unit symbol is written kcal/mol or kcal⋅mol−1. As typically measured, one kcal/mol represents a temperature increase of one degree Celsius in one liter of water (with a mass of 1 kg) resulting from the reaction of one mole of reagents.
In SI units, one kilocalorie per mole is equal to 4.184 kilojoules per mole (kJ/mol), which comes to approximately 6.9 × 10−21 joules per molecule, or about 0.043 eV per molecule. At room temperature (25 °C, 77 °F, or 298.15 K), one kilocalorie per mole is approximately equal to 1.688 kT per molecule.
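The conversions quoted above can be checked directly from standard physical constants:

```python
AVOGADRO = 6.02214076e23   # molecules per mole
EV = 1.602176634e-19       # joules per electronvolt
KB = 1.380649e-23          # Boltzmann constant, J/K

kcal_per_mol_in_joules = 4184.0                    # 1 thermochemical kcal = 4184 J
per_molecule_j = kcal_per_mol_in_joules / AVOGADRO

print(per_molecule_j)                  # ~6.95e-21 J per molecule
print(per_molecule_j / EV)             # ~0.0434 eV per molecule
print(per_molecule_j / (KB * 298.15))  # ~1.69 kT at room temperature
```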
Even though it is not an SI unit, the kilocalorie per mole is still widely used in chemistry and biology for thermodynamical quantities such as thermodynamic free energy, heat of vaporization, heat of fusion and ionization energy. This is due to a variety of factors, including the ease with which it can be calculated based on the units of measure typically employed in quantifying a chemical reaction, especially in aqueous solution. In addition, for many important biological processes, thermodynamic changes are on a convenient order of magnitude when expressed in kcal/mol. For example, for the reaction of glucose with ATP to form glucose-6-phosphate and ADP, the free energy of reaction is −4.0 kcal/mol using the pH = 7 standard state.
References
Energy (physics)
Thermodynamics
Heat transfer
Units of chemical measurement | Kilocalorie per mole | [
"Physics",
"Chemistry",
"Mathematics"
] | 400 | [
"Transport phenomena",
"Thermodynamics stubs",
"Physical phenomena",
"Heat transfer",
"Physical quantities",
"Quantity",
"Chemical quantities",
"Energy (physics)",
"Thermodynamics",
"Units of chemical measurement",
"Wikipedia categories named after physical quantities",
"Physical chemistry stu... |
20,541,712 | https://en.wikipedia.org/wiki/PITZ | The Photo Injector Test Facility at the DESY location in Zeuthen (PITZ) was built in 2002 in order to test and optimize sources of high-brightness electron beams for future free-electron lasers (FELs) and linear colliders. The focus at PITZ is on the production of intense electron beams with very small transverse emittance and reasonably small longitudinal emittance which are required in order to meet the high-gain conditions of FEL operation. This challenge is met by applying the most advanced techniques in combination with key parameters of projects based on TESLA technology, such as FLASH and the European XFEL. The PITZ collaboration involves several accelerator centres and institutes from around the world.
References
External links
PITZ website
DESY website
FLASH website
European XFEL website
Experimental particle physics | PITZ | [
"Physics"
] | 167 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
20,542,082 | https://en.wikipedia.org/wiki/Internal%20environment | The internal environment (or milieu intérieur in French; ) was a concept developed by Claude Bernard, a French physiologist in the 19th century, to describe the interstitial fluid and its physiological capacity to ensure protective stability for the tissues and organs of multicellular organisms.
Etymology
Claude Bernard used the French phrase milieu intérieur (internal environment in English) in several works from 1854 until his death in 1878. He most likely adopted it from the histologist Charles Robin, who had employed the phrase "milieu de l’intérieur" as a synonym for the ancient hippocratic idea of humors. Bernard was initially only concerned with the role of the blood but he later included that of the whole body in ensuring this internal stability. He summed up his idea as follows:
Bernard's work regarding the internal environment of regulation was supported by work in Germany at the same time. While Rudolf Virchow placed the focus on the cell, others, such as Carl von Rokitansky (1804–1878) continued to study humoral pathology particularly the matter of microcirculation. Von Rokitansky suggested that illness originated in damage to this vital microcirculation or internal system of communication. Hans Eppinger (1879–1946), a professor of internal medicine in Vienna, further developed von Rokitansky's point of view and showed that every cell requires a suitable environment which he called the ground substance for successful microcirculation. This work of German scientists was continued in the 20th century by Alfred Pischinger (1899–1982) who defined the connections between the ground substance or extracellular matrix and both the hormonal and autonomic nervous systems and saw therein a complex system of regulation for the body as a whole and for cellular functioning, which he termed the ground regulatory (das System der Grundregulation).
History
Bernard created his concept to replace the ancient idea of life forces with that of a mechanistic process in which the body's physiology was regulated through multiple mechanical equilibrium adjustment feedbacks. Walter Cannon's later notion of homeostasis (while also mechanistic) lacked this concern, and was even advocated in the context of such ancient notions as vis medicatrix naturae.
Cannon, in contrast to Bernard, saw the self-regulation of the body as a requirement for the evolutionary emergence and exercise of intelligence, and further placed the idea in a political context: "What corresponds in a nation to the internal environment of the body? The closest analogue appears to be the whole intricate system of production and distribution of merchandise". He suggested, as an analogy to the body's own ability to ensure internal stability, that society should preserve itself with a technocratic bureaucracy, "biocracy".
The idea of milieu intérieur, it has been noted, led Norbert Wiener to the notion of cybernetics and negative feedback creating self-regulation in the nervous system and in nonliving machines, and that "today, cybernetics, a formalization of Bernard's constancy hypothesis, is viewed as one of the critical antecedents of contemporary cognitive science".
Early reception
Bernard's idea was initially ignored in the 19th century. This happened in spite of Bernard being highly honored as the founder of modern physiology (he indeed received the first French state funeral for a scientist). Even the 1911 edition of Encyclopædia Britannica does not mention it. His ideas about milieu intérieur only became central to the understanding of physiology in the early part of the 20th century. It was only with Joseph Barcroft, Lawrence J. Henderson, and particularly Walter Cannon and his idea of homeostasis, that it received its present recognition and status. The current 15th edition notes it as being Bernard's most important idea.
Idea of internal communication
In addition to providing the basis for understanding the internal physiology in terms of the interdependence of the cellular and extracellular matrix or ground system, Bernard's fruitful concept of the milieu intérieur has also led to significant research regarding the system of communication that allows for the complex dynamics of homeostasis.
Work by Szent-Györgyi
Initial work was conducted by Albert Szent-Györgyi, who concluded that organic communication could not be explained solely by the random collisions of molecules and studied energy fields as well as the connective tissue. He was aware of earlier work by Moglich and Schon (1938) and Jordan (1938) on non-electrolytic mechanisms of charge transfer in living systems. This was further explored and advanced by Szent-Györgyi in 1941 in a Koranyi Memorial Lecture in Budapest, published in both Science and Nature, wherein he proposed that proteins are semi-conductors and capable of rapid transfer of free electrons within an organism. This idea was received with skepticism, but it is now generally accepted that most if not all parts of the extracellular matrix have semiconductor properties. The Koranyi Lecture triggered a growing molecular-electronics industry, using biomolecular semiconductors in nanoelectronic circuits.
In 1988 Szent-Györgyi stated that "Molecules do not have to touch each other to interact. Energy can flow through... the electromagnetic field", which "along with water, forms the matrix of life." This water is also related to the surfaces of proteins, DNA and all living molecules in the matrix. This structured water provides stability for metabolic functioning, and is related to collagen, the major protein in the extracellular matrix, as well as to DNA. The structured water can form channels of energy flow for protons (unlike electrons, which flow through the protein structure to create bio-electricity). Mitchell (1976) refers to this flow as 'proticity'.
Work in Germany
Work in Germany over the last half-century has also focused on the internal communication system, in particular as it relates to the ground system. This work has led to their characterization of the ground system or extracellular matrix interaction with the cellular system as a 'ground regulatory system', seeing therein the key to homeostasis, a body-wide communication and support system, vital to all functions.
In 1953 a German doctor and scientist, Reinhold Voll, discovered that points used in acupuncture had different electrical properties from the surrounding skin, namely a lower resistance. Voll further discovered that the measurement of the resistances at the points gave valuable indications as to the state of the internal organs. Further research was done by Dr. Alfred Pischinger, the originator of the concept of the 'system of ground regulation', as well as Drs. Helmut Schimmel, and Hartmut Heine, using Voll's method of electro-dermal screening. This further research revealed that the gene is not so much the controller but the repository of blueprints on how cells and higher systems should operate, and that the actual regulation of biological activities (see Epigenetic cellular biology) lies in a 'system of ground regulation'. This system is built on the ground substance, a complex connective tissue between all the cells, often also called the extra-cellular matrix. This ground substance is made up of 'amorphous' and 'structural' ground substance. The former is "a transparent, half-fluid gel produced and sustained by the fibroblast cells of the connective tissues" consisting of highly polymerized sugar-protein complexes.
The ground substance, according to German research, determines what enters and exits the cell and maintains homeostasis, which requires a rapid communication system to respond to complex signals (see also Bruce Lipton).
This is made possible by the diversity of molecular structures of the sugar polymers of the ground substance, the ability to swiftly generate new such substances, and their high interconnectedness. This creates a redundance that makes possible the controlled oscillation of values above and below the dynamic homeostasis present in all living creatures. This is a kind of fast-responding, "short term memory" of the ground substance. Without this labile capacity, the system would quickly move to an energetic equilibrium, which would bring inactivity and death.
For its biochemical survival, every organism requires the ability to rapidly construct, destroy and reconstruct the constituents of the ground substance.
Between the molecules that make up the ground substance there are minimal surfaces of potential energy. The charging and discharging of the materials of the ground substance cause 'biofield oscillations' (photon fields). The interference of these fields creates short-lived (from 10⁻⁹ up to 10⁻⁵ seconds) tunnels through the ground substance. Through these tunnels, shaped like the hole through a donut, large chemicals may traverse from capillaries through the ground substance and into the functional cells of organs and back again. All metabolic processes depend upon this transport mechanism.
Major ordering energy structures in the body are created by the ground substance, such as collagen, which not only conducts energy but generates it, due to its piezoelectric properties.
Like quartz crystal, collagen in the ground substance and the more stable connective tissues (fascia, tendons, bones, etc.) transforms mechanical energy (pressure, torsion, stretch) into electromagnetic energy, which then resonates through the ground substance (Athenstaedt, 1974). However, if the ground substance is chemically imbalanced, the energy resonating through the body loses coherence.
This is what occurs in the adaptation response described by Hans Selye. When the ground regulation is out of balance, the probability of chronic illness increases. Research by Heine indicates that unresolved emotional traumas release the neurotransmitter substance P, which causes the collagen to take on a hexagonal structure that is more ordered than usual, putting the ground substance out of balance; he calls this an "emotional scar", providing "an important scientific verification that diseases can have psychological causes" (see also Bruce Lipton).
Work in the U.S.
While the initial work on identifying the importance of the ground regulatory system was done in Germany, more recent work examining the implications of inter and intra-cellular communication via the extra-cellular matrix has taken place in the U.S. and elsewhere.
Structural continuity between extracellular, cytoskeletal and nuclear components was discussed by Hay, Berezny et al. and Oschman. Historically, these elements have been referred to as ground substances, and because of their continuity, they act to form a complex, interlaced system that reaches into and contacts every part of the body. Even as early as 1851 it was recognized that the nerve and blood systems do not directly connect to the cell, but are mediated by and through an extracellular matrix.
Recent research regarding the electrical charges of the various glycol-protein components of the extracellular matrix shows that because of the high density of negative charges on glycosaminoglycans (provided by sulfate and carboxylate groups of the uronic acid residues) the matrix is an extensive redox system capable of absorbing and donating electrons at any point. This electron transfer function reaches into the interiors of cells as the cytoplasmic matrix is also strongly negatively charged. The entire extracellular and cellular matrix functions as a biophysical storage system or accumulator for electrical charge.
From thermodynamic, energetic and geometrical considerations, molecules of the ground substance are considered to form minimal physical and electrical surfaces, such that, based on the mathematics of minimal surfaces, minuscule changes can lead to significant changes in distant areas of the ground substance. This discovery is seen as having implications for many physiological and biochemical processes, including membrane transport, antigen–antibody interactions, protein synthesis, oxidation reactions, actin–myosin interactions, sol to gel transformations in polysaccharides.
One description of the charge transfer process in the matrix is, "highly vectoral electron transport along biopolymer pathways". Other mechanisms involve clouds of negative charge created around the proteoglycans in the matrix. There are also soluble and mobile charge transfer complexes in cells and tissues (e.g. Slifkin, 1971; Gutman, 1978; Mattay, 1994).
Rudolph A. Marcus of the California Institute of Technology found that when the driving force increases beyond a certain level, electron transfer will begin to slow down instead of speed up (Marcus, 1999) and he received a Nobel Prize in chemistry in 1992 for this contribution to the theory of electron transfer reactions in chemical systems. The implication of the work is that a vectoral electron transport process may be greater the smaller the potential, as in living systems.
Notes
Control theory
Organizational cybernetics
Homeostasis
Medical terminology
Physiology
Scientific terminology
Systems theory
French medical phrases | Internal environment | [
"Mathematics",
"Biology"
] | 2,643 | [
"Physiology",
"Applied mathematics",
"Control theory",
"Homeostasis",
"Dynamical systems"
] |
20,543,137 | https://en.wikipedia.org/wiki/Amanita%20persicina | Amanita persicina, commonly known as the peach-colored fly agaric, is a basidiomycete fungus of the genus Amanita with a peach-colored center. Until , the fungus was believed to be a variety of A. muscaria.
A. persicina is distributed in eastern North America. It is both poisonous and psychoactive.
Taxonomy
Amanita persicina was formerly treated as a variety of A. muscaria (the fly agaric) and it was classified as A. muscaria var. persicina. Recent DNA evidence, however, has indicated that A. persicina is better treated as a distinct species, and it was elevated to species status in 2015 by Tulloss & Geml.
Description
Cap
The cap is wide, hemispheric to convex when young, becoming plano-convex to plano-depressed in age. It is pinkish-melon-colored to peach-orange, sometimes pastel red towards the disc. The cap is slightly appendiculate. The volva is distributed over the cap as thin pale yellowish to pale tannish warts; it is otherwise smooth and subviscid, and the margin becomes slightly to moderately striate in age. The flesh is white and does not stain when cut or injured. The flesh has a pleasant taste and odor.
Gills
The gills are free, crowded, moderately broad, creamy with a pale pinkish tint, and have a very floccose edge. They are abruptly truncate.
Spores
Amanita persicina spores are white in deposit, ellipsoid to elongate, infrequently broadly ellipsoid, rarely cylindric, inamyloid, and are (8.0) 9.4–12.7 (18.0) x (5.5) 6.5–8.5 (11.1) μm.
Stipe
The stipe is 4–10.5 cm long, 1–2 cm wide, and more or less equal or narrowing upwards and slightly flaring at the apex. It is pale yellow in the superior region, tannish white below, and densely stuffed with a pith. The ring is fragile, white above and yellowish below, and poorly formed or absent. Remnants of the universal veil on the basal bulb, in the form of concentric rings, are fragile or absent.
Chemistry
This species contains variable amounts of the neurotoxic compound ibotenic acid and the psychoactive compound muscimol.
Distribution and habitat
A. persicina is found growing solitary or gregariously. It is mycorrhizal with conifers (Pine) and deciduous (Oak) trees in North America. It often fruits in the fall, but sometimes in the spring and summer in the southern states. The fungus is common in the southeast United States, from Texas to Georgia, and north to New Jersey.
Toxicity
A. persicina is both poisonous and psychoactive if not properly prepared by parboiling. Pending further research, it should not be eaten.
Gallery
References
Miller, O. K. Jr., D. T. Jenkins and P. Dery. 1986. Mycorrhizal synthesis of Amanita muscaria var. persicina with hard pines. Mycotaxon 26: 165–172.
Jenkins, D. T. 1977. A taxonomic and nomenclatural study of the genus Amanita section Amanita for North America. Biblioth. Mycol. 57: 126 pp.
External links
Amanita persicina page by Rod Tulloss
persicina
Fungi described in 1977
Poisonous fungi
Psychoactive fungi
Fungus species | Amanita persicina | [
"Biology",
"Environmental_science"
] | 764 | [
"Poisonous fungi",
"Fungi",
"Toxicology",
"Fungus species"
] |
20,545,991 | https://en.wikipedia.org/wiki/NGC%207424 | NGC 7424 is a barred spiral galaxy located 37.5 million light-years away in the southern constellation Grus (the Crane). Its size (about 100,000 light-years) makes it similar to our own galaxy, the Milky Way.
It is called a "grand design" galaxy because of its well defined spiral arms. Two supernovae and two ultraluminous X-ray sources have been discovered in NGC 7424.
Characteristics
NGC 7424 is intermediate between normal spirals (SA) and strongly barred galaxies (SB). Other features include the presence of a central ring-like structure and a relatively low core brightness relative to the arms. The redder color of the prominent bar indicates an older population of stars while the bright blue color of the loose arms indicates the presence of ionised hydrogen and clusters of massive young stars. NGC 7424 is listed as a member of the IC 1459 Grus Group of galaxies, but is suspected of being a "field galaxy"; that is, not gravitationally bound to any group.
Supernova 2001ig
SN 2001ig was a rare Type IIb supernova discovered by Australian amateur Robert Evans on the outer edge of NGC 7424 on 10 December 2001.
Type IIb supernovae (SNe) initially exhibit spectral lines of hydrogen (like typical Type II's), but these disappear after a short time to be replaced by lines of oxygen, calcium and magnesium (like typical Type Ib's and Ic's).
In 2006, Anglo-Australian Observatory astronomer Stuart Ryder et al. found what they argued could be the binary companion to SN 2001ig using the Gemini Observatory. It is a massive A or F class star that had an eccentric orbit around the progenitor, a Wolf-Rayet star. They believe that the companion periodically stripped the outer hydrogen-rich envelope of the progenitor, accounting for the observed spectral changes.
Princeton University fellow Alicia Soderberg et al. also believe that the progenitor was a Wolf-Rayet star, but suggest that the periodic mass loss was a result of the intense stellar wind these stars produce.
In a paper published in March 2018, Ryder et al. announced that the surviving companion had been observed with the Hubble Space Telescope in the ultraviolet.
This is the first time a companion to a Type IIb supernova has been imaged.
On 7 March 2017, Stuart Parker of New Zealand discovered a second supernova in the galaxy: SN 2017bzb (Type II, mag. 13).
Ultraluminous X-ray sources
In May and June 2002, Roberto Soria and his colleagues at the Harvard-Smithsonian Center for Astrophysics discovered two Ultraluminous X-ray sources (ULXs) with the Chandra X-ray Observatory. ULXs are objects that emit tremendous amounts of X-rays (> 10³² watts, or 10³⁹ erg/s), assuming they radiate isotropically (the same in all directions). This amount is larger than can be explained by currently understood stellar processes (including supernovae) but smaller than the amount of X-rays emitted by active galactic nuclei, which accounts for their alternate name, Intermediate-luminosity X-ray Objects (IXOs). The source designated ULX1 was found in a relatively empty interarm region, far from any bright clusters or star-forming complexes, and showed a 75% increase in X-ray luminosity over the course of 20 days. ULX2 was found in an exceptionally bright young stellar complex, and showed an order of magnitude increase over the same time period.
References
External links
Barred spiral galaxies
Grus (constellation)
7424
070096 | NGC 7424 | [
"Astronomy"
] | 752 | [
"Grus (constellation)",
"Constellations"
] |
20,547,156 | https://en.wikipedia.org/wiki/FDMNES | The FDMNES program calculates the spectra of different spectroscopies related to the real or virtual absorption of x-ray in material. It gives the absorption cross sections of photons around the ionization edge, that is in the energy range of XANES. The calculation is performed with all conditions of rectilinear or circular polarization. In the same way, it calculates the structure factors and intensities of anomalous or resonant diffraction spectra (DAFS or RXS).
The code uses two techniques of monoelectronic calculations. The first one is based on the Finite Difference Method (FDM) to solve the Schrödinger equation. In that way the shape of the potential is free and, in particular, the muffin-tin approximation is avoided. The second one uses the Green formalism (multiple scattering) on a muffin-tin potential. This approach can be less precise but is faster.
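To give a feel for the finite-difference idea in the simplest possible setting, here is a one-dimensional sketch. It is not the FDMNES implementation (which works on a three-dimensional grid around the absorbing atom with a realistic potential); the grid size, harmonic test potential, and atomic units below are assumptions chosen only for illustration.

```python
# Minimal 1-D finite-difference Schrodinger solver (illustration only, not FDMNES).
# Atomic units (hbar = m = 1); harmonic test potential V(x) = x^2 / 2.
import numpy as np

n, L = 1000, 10.0                      # number of grid points, half-width of the box
x = np.linspace(-L, L, n)
h = x[1] - x[0]
V = 0.5 * x**2                         # placeholder potential

# Discretize H = -1/2 d^2/dx^2 + V(x) with central differences.
diag = 1.0 / h**2 + V                  # on-diagonal terms
off = np.full(n - 1, -0.5 / h**2)      # nearest-neighbour coupling
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)       # dense solver; production codes use sparse methods
print(np.round(energies[:4], 3))       # close to 0.5, 1.5, 2.5, 3.5 for the harmonic test
```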
FDMNES is used as external program to calculate basic spectra for XANES fitting using FitIt.
It can also be used to calculate X-ray Raman scattering spectra.
References
External links
FDMNES home page
Physics software
Computational chemistry software | FDMNES | [
"Physics",
"Chemistry"
] | 246 | [
"Computational chemistry software",
"Chemistry software",
"Theoretical chemistry stubs",
"Computational physics",
"Computational chemistry stubs",
"Computational chemistry",
"Physical chemistry stubs",
"Computational physics stubs",
"Physics software"
] |
19,363,014 | https://en.wikipedia.org/wiki/Geometric%20calculus | In mathematics, geometric calculus extends geometric algebra to include differentiation and integration. The formalism is powerful and can be shown to reproduce other mathematical theories including vector calculus, differential geometry, and differential forms.
Differentiation
With a geometric algebra given, let and be vectors and let be a multivector-valued function of a vector. The directional derivative of along at is defined as
provided that the limit exists for all , where the limit is taken for scalar . This is similar to the usual definition of a directional derivative but extends it to functions that are not necessarily scalar-valued.
Next, choose a set of basis vectors and consider the operators, denoted , that perform directional derivatives in the directions of :
Then, using the Einstein summation notation, consider the operator:
which means
where the geometric product is applied after the directional derivative. More verbosely:
This operator is independent of the choice of frame, and can thus be used to define what in geometric calculus is called the vector derivative:
This is similar to the usual definition of the gradient, but it, too, extends to functions that are not necessarily scalar-valued.
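For concreteness, the standard explicit forms of these definitions can be written out as follows (the symbols F for the multivector-valued function, a and b for vectors, ε for a scalar, and {e_i}, {e^i} for a frame and its reciprocal are notation introduced here for the restatement):

\[
\partial_b F(a) \;=\; \lim_{\epsilon \to 0} \frac{F(a + \epsilon b) - F(a)}{\epsilon},
\qquad
\partial_i \;=\; \partial_{e_i},
\qquad
\nabla \;=\; e^i \partial_i ,
\qquad
\partial_a F \;=\; (a \cdot \nabla) F .
\]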
The directional derivative is linear regarding its direction, that is:
From this follows that the directional derivative is the inner product of its direction by the vector derivative. All that needs to be observed is that the direction can be written , so that:
For this reason, is often noted .
The standard order of operations for the vector derivative is that it acts only on the function closest to its immediate right. Given two functions and , then for example we have
Product rule
Although the partial derivative exhibits a product rule, the vector derivative only partially inherits this property. Consider two functions and :
Since the geometric product is not commutative with in general, we need a new notation to proceed. A solution is to adopt the overdot notation, in which the scope of a vector derivative with an overdot is the multivector-valued function sharing the same overdot. In this case, if we define
then the product rule for the vector derivative is
Interior and exterior derivative
Let be an -grade multivector. Then we can define an additional pair of operators, the interior and exterior derivatives,
In particular, if is grade 1 (vector-valued function), then we can write
and identify the divergence and curl as
Unlike the vector derivative, neither the interior derivative operator nor the exterior derivative operator is invertible.
Multivector derivative
The derivative with respect to a vector as discussed above can be generalized to a derivative with respect to a general multivector, called the multivector derivative.
Let be a multivector-valued function of a multivector. The directional derivative of with respect to in the direction , where and are multivectors, is defined as
where is the scalar product. With a vector basis and the corresponding dual basis, the multivector derivative is defined in terms of the directional derivative as
This equation is just expressing in terms of components in a reciprocal basis of blades, as discussed in the article section Geometric algebra#Dual basis.
A key property of the multivector derivative is that
where is the projection of onto the grades contained in .
The multivector derivative finds applications in Lagrangian field theory.
Integration
Let be a set of basis vectors that span an -dimensional vector space. From geometric algebra, we interpret the pseudoscalar to be the signed volume of the -parallelotope subtended by these basis vectors. If the basis vectors are orthonormal, then this is the unit pseudoscalar.
More generally, we may restrict ourselves to a subset of of the basis vectors, where , to treat the length, area, or other general -volume of a subspace in the overall -dimensional vector space. We denote these selected basis vectors by . A general -volume of the -parallelotope subtended by these basis vectors is the grade multivector .
Even more generally, we may consider a new set of vectors proportional to the basis vectors, where each of the is a component that scales one of the basis vectors. We are free to choose components as infinitesimally small as we wish as long as they remain nonzero. Since the outer product of these terms can be interpreted as a -volume, a natural way to define a measure is
The measure is therefore always proportional to the unit pseudoscalar of a -dimensional subspace of the vector space. Compare the Riemannian volume form in the theory of differential forms. The integral is taken with respect to this measure:
More formally, consider some directed volume of the subspace. We may divide this volume into a sum of simplices. Let be the coordinates of the vertices. At each vertex we assign a measure as the average measure of the simplices sharing the vertex. Then the integral of with respect to over this volume is obtained in the limit of finer partitioning of the volume into smaller simplices:
Fundamental theorem of geometric calculus
The reason for defining the vector derivative and integral as above is that they allow a strong generalization of Stokes' theorem. Let be a multivector-valued function of -grade input and general position , linear in its first argument. Then the fundamental theorem of geometric calculus relates the integral of a derivative over the volume to the integral over its boundary:
As an example, let for a vector-valued function and a ()-grade multivector . We find that
Likewise,
Thus we recover the divergence theorem,
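In conventional vector-calculus notation (a restatement; J denotes a differentiable vector field on the volume V, whose boundary ∂V has outward unit normal n), the recovered divergence theorem reads:

\[
\int_{V} (\nabla \cdot \mathbf{J}) \, \mathrm{d}V \;=\; \oint_{\partial V} \mathbf{J} \cdot \mathbf{n} \, \mathrm{d}S .
\]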
Covariant derivative
A sufficiently smooth -surface in an -dimensional space is deemed a manifold. To each point on the manifold, we may attach a -blade that is tangent to the manifold. Locally, acts as a pseudoscalar of the -dimensional space. This blade defines a projection of vectors onto the manifold:
Just as the vector derivative is defined over the entire -dimensional space, we may wish to define an intrinsic derivative , locally defined on the manifold:
(Note: The right hand side of the above may not lie in the tangent space to the manifold. Therefore, it is not the same as , which necessarily does lie in the tangent space.)
If is a vector tangent to the manifold, then indeed both the vector derivative and intrinsic derivative give the same directional derivative:
Although this operation is perfectly valid, it is not always useful because itself is not necessarily on the manifold. Therefore, we define the covariant derivative to be the forced projection of the intrinsic derivative back onto the manifold:
Since any general multivector can be expressed as a sum of a projection and a rejection, in this case
we introduce a new function, the shape tensor , which satisfies
where is the commutator product. In a local coordinate basis spanning the tangent surface, the shape tensor is given by
Importantly, on a general manifold, the covariant derivative does not commute. In particular, the commutator is related to the shape tensor by
Clearly the term is of interest. However it, like the intrinsic derivative, is not necessarily on the manifold. Therefore, we can define the Riemann tensor to be the projection back onto the manifold:
Lastly, if is of grade , then we can define interior and exterior covariant derivatives as
and likewise for the intrinsic derivative.
Relation to differential geometry
On a manifold, locally we may assign a tangent surface spanned by a set of basis vectors . We can associate the components of a metric tensor, the Christoffel symbols, and the Riemann curvature tensor as follows:
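In one standard convention (a restatement; the tangent frame is written {e_i} with reciprocal frame {e^i}, and sign conventions for the curvature vary between texts), the first two of these associations read:

\[
g_{ij} \;=\; e_i \cdot e_j ,
\qquad
\Gamma^{k}{}_{ij} \;=\; \left( \partial_i e_j \right) \cdot e^{k} ,
\]

with the Riemann curvature components then obtained from the commutator of the covariant derivatives acting on the frame.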
These relations embed the theory of differential geometry within geometric calculus.
Relation to differential forms
In a local coordinate system (), the coordinate differentials , ..., form a basic set of one-forms within the coordinate chart. Given a multi-index with for , we can define a -form
We can alternatively introduce a -grade multivector as
and a measure
Apart from a subtle difference in meaning for the exterior product with respect to differential forms versus the exterior product with respect to vectors (in the former the increments are covectors, whereas in the latter they represent scalars), we see the correspondences of the differential form
its derivative
and its Hodge dual
embed the theory of differential forms within geometric calculus.
History
Following is a diagram summarizing the history of geometric calculus.
References and further reading
Applied mathematics
Calculus
Geometric algebra | Geometric calculus | [
"Mathematics"
] | 1,693 | [
"Applied mathematics",
"Calculus"
] |
19,364,010 | https://en.wikipedia.org/wiki/Reverse%20vaccinology | Reverse vaccinology is an improvement of vaccinology that employs bioinformatics and reverse pharmacology practices, pioneered by Rino Rappuoli and first used against Serogroup B meningococcus. Since then, it has been used on several other bacterial vaccines.
Computational approach
The basic idea behind reverse vaccinology is that an entire pathogenic genome can be screened using bioinformatics approaches to find genes. The genes are screened for traits that may indicate antigenicity, such as coding for proteins with extracellular localization, signal peptides, and B-cell epitopes. Those genes are then filtered for desirable attributes that would make good vaccine targets, such as outer membrane proteins. Once the candidates are identified, they are produced synthetically and are screened in animal models of the infection.
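A minimal sketch of such a filtering step, assuming a hypothetical table of predicted protein annotations (the field names, thresholds, and toy entries below are invented for illustration and do not correspond to any specific published pipeline):

```python
# Toy illustration of antigen-candidate filtering in reverse vaccinology.
# The annotation fields and cut-offs are hypothetical; real pipelines use
# dedicated predictors (e.g. for signal peptides or subcellular localization).
proteins = [
    {"gene": "orf001", "localization": "cytoplasm",      "signal_peptide": False, "human_homology": 0.05},
    {"gene": "orf002", "localization": "outer membrane", "signal_peptide": True,  "human_homology": 0.02},
    {"gene": "orf003", "localization": "outer membrane", "signal_peptide": True,  "human_homology": 0.61},
]

def is_candidate(p):
    surface_exposed = p["localization"] in {"outer membrane", "extracellular"}
    # Exclude proteins resembling human ones, to avoid autoimmune cross-reactions.
    low_self_similarity = p["human_homology"] < 0.30
    return surface_exposed and p["signal_peptide"] and low_self_similarity

candidates = [p["gene"] for p in proteins if is_candidate(p)]
print(candidates)   # ['orf002']
```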
History
After Craig Venter published the genome of the first free-living organism in 1995, the genomes of other microorganisms became more readily available throughout the end of the twentieth century. Reverse vaccinology, designing vaccines using the pathogen's sequenced genome, came from this new wealth of genomic information, as well as technological advances. Reverse vaccinology is much more efficient than traditional vaccinology, which requires growing large amounts of specific microorganisms as well as extensive wet lab tests.
In 2000, Rino Rappuoli and the J. Craig Venter Institute developed the first vaccine using Reverse Vaccinology against Serogroup B meningococcus. The J. Craig Venter Institute and others then continued work on vaccines for A Streptococcus, B Streptococcus, Staphylococcus aureus, and Streptococcus pneumoniae.
Reverse vaccinology with Meningococcus B
Attempts at reverse vaccinology first began with Meningococcus B (MenB). Meningococcus B caused over 50% of meningococcal meningitis, and scientists had been unable to create a successful vaccine for the pathogen because of the bacterium's unique structure. This bacterium's polysaccharide shell is identical to that of a human self-antigen, but its surface proteins vary greatly; and the lack of information about the surface proteins caused developing a vaccine to be extremely difficult. As a result, Rino Rappuoli and other scientists turned towards bioinformatics to design a functional vaccine.
Rappuoli and others at the J. Craig Venter Institute first sequenced the MenB genome. Then, they scanned the sequenced genome for potential antigens. They found over 600 possible antigens, which were tested by expression in Escherichia coli. The most universally applicable antigens were used in the prototype vaccines. Several functioned successfully in mice; however, on their own these proteins did not induce a strong enough immune response in humans for protection to be achieved. This was remedied by adding outer membrane vesicles containing lipopolysaccharides, obtained by purifying blebs from gram-negative cultures. The addition of this adjuvant (previously identified using conventional vaccinology approaches) enhanced the immune response to the required level. The vaccine was later proven to be safe and effective in adult humans.
Subsequent reverse vaccinology research
During the development of the MenB vaccine, scientists adopted the same Reverse Vaccinology methods for other bacterial pathogens. A Streptococcus and B Streptococcus vaccines were two of the first Reverse Vaccines created. Because those bacterial strains induce antibodies that react with human antigens, the vaccines for those bacteria needed to not contain homologies to proteins encoded in the human genome in order to not cause adverse reactions, thus establishing the need for genome-based Reverse Vaccinology.
Later, Reverse Vaccinology was used to develop vaccines for antibiotic-resistant Staphylococcus aureus and Streptococcus pneumoniae.
Pros and cons
The major advantage of reverse vaccinology is finding vaccine targets quickly and efficiently. Traditional methods may take decades to unravel pathogens and antigens, diseases and immunity. In silico screening, by contrast, can be very fast, allowing new vaccine candidates to be identified for testing in only a few years. The downside is that only proteins can be targeted by this process, whereas conventional vaccinology approaches can find other biomolecular targets such as polysaccharides.
Available software
Though using bioinformatic technology to develop vaccines has become typical in the past ten years, general laboratories often do not have the advanced software that can do this. However, there are a growing number of programs making reverse vaccinology information more accessible. NERVE is one relatively new data processing program. Though it must be downloaded and does not include all epitope predictions, it does help save some time by combining the computational steps of reverse vaccinology into one program. Vaxign, an even more comprehensive program, was created in 2008. Vaxign is web-based and completely public-access.
Though Vaxign has been found to be extremely accurate and efficient, some scientists still utilize the online software RANKPEP for peptide binding predictions. Both Vaxign and RANKPEP employ PSSMs (Position Specific Scoring Matrices) when analyzing protein sequences or sequence alignments.
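A toy sketch of how a PSSM scores a peptide window (the matrix values and peptide below are invented for illustration; real PSSMs are derived from alignments of known binders):

```python
# Toy position-specific scoring matrix (PSSM) scoring of a 3-residue peptide.
pssm = [  # one dict of log-odds-style scores per motif position
    {"A": 1.2, "G": -0.5, "L": 0.3, "S": -0.1},
    {"A": -0.4, "G": 2.0, "L": -0.7, "S": 0.6},
    {"A": 0.1, "G": -0.2, "L": 1.5, "S": -0.9},
]

def score(peptide, default=-1.0):
    # Sum per-position scores; residues absent from a column get a default penalty.
    return sum(col.get(res, default) for col, res in zip(pssm, peptide))

print(score("AGL"))   # 1.2 + 2.0 + 1.5 = 4.7
```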
Computer-Aided bioinformatics projects are becoming extremely popular, as they help guide the laboratory experiments.
Other developments because of reverse vaccinology and bioinformatics
Reverse vaccinology has caused an increased focus on pathogenic biology.
Reverse vaccinology led to the discovery of pili in gram-positive pathogens such as A streptococcus, B streptococcus, and pneumococcus. Previously, all gram-positive bacteria were thought to not have any pili.
Reverse vaccinology also led to the discovery of factor H binding protein in meningococcus, which binds to complement factor H in humans. Binding of complement factor H allows meningococcus to grow in human blood while blocking the alternative complement pathway. This model does not fit many animal species, which do not have the same complement factor H as humans, indicating differentiation of meningococcus between differing species.
References
Vaccination | Reverse vaccinology | [
"Biology"
] | 1,282 | [
"Vaccination"
] |
19,370,181 | https://en.wikipedia.org/wiki/CENBOL | In astronomy, CENBOL (derived from "CENtrifugal pressure supported BOundary Layer) is a model developed by the astrophysicist Sandip Chakrabarti and collaborators to explain the region of an accretion flow around a black hole.
Centrifugal force dominated boundary layer
Because the centrifugal force (which goes as l²/r³, where l is the specific angular momentum) increases very rapidly compared to the gravitational force (which goes as 1/r²) as the distance r decreases, matter feels increasing centrifugal force as it approaches a black hole. Thus the matter initially slows down, typically through a shock transition, and then accelerates again to become a supersonic flow.
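In the simplest Newtonian picture (a rough order-of-magnitude estimate that ignores relativistic corrections close to the hole), the radius of this centrifugal barrier follows from equating the two terms, with M the black-hole mass:

\[
\frac{l^{2}}{r^{3}} \;\sim\; \frac{GM}{r^{2}}
\quad\Longrightarrow\quad
r_{\mathrm{c}} \;\sim\; \frac{l^{2}}{GM},
\]

which sets the scale at which the inflow is slowed and around which the shock bounding the CENBOL forms.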
The importance of CENBOL is that it behaves like a boundary layer of a black hole. This region is located between the shock and the innermost sonic point of an accretion flow. CENBOL becomes hot due to the sudden reduction of radial kinetic energy, and the flow is puffed up, since hot gas can support itself against gravity. In a certain sense it behaves like a thick accretion disk (an ion pressure supported torus if the accretion rate is low, or a radiation pressure supported torus if the accretion rate is high), except that it also has radial velocity, which the original models of thick disks did not have. Because it is hot, the electrons transfer their thermal energy to the photons. In other words, CENBOL inverse-Comptonizes low energy X-rays or soft photons (also called seed photons) and produces very high energy X-rays (also called hard photons). Just as a boundary layer, it also produces jets and outflows.
The observed spectrum of a black hole accretion disk in reality comes partly from a Keplerian disk (a viscous Shakura and Sunyaev (1973) type disk) in the form of multi-colour black body emission, but the power-law component primarily comes from the CENBOL. In the presence of a high accretion rate in the Keplerian component, the CENBOL can be cooled down so that the spectrum may be totally dominated by the low energy X-rays. The spectrum then goes to a so-called soft state. When the Keplerian rate is not high compared to the low-angular-momentum component, the CENBOL survives and the spectrum is dominated by the high energy X-rays. The source is then said to be in a so-called hard state.
In the presence of radiative or thermal cooling effects, the CENBOL may start to oscillate, especially when the infall time scale and the cooling time scale are comparable. In this case, the number of intercepted low energy photons is modulated. As a result, the number of high energy photons is also modulated. This effect produces what are known as quasi-periodic oscillations (or QPOs) in black hole candidates.
References
Black holes | CENBOL | [
"Physics",
"Astronomy"
] | 605 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects"
] |
3,423,123 | https://en.wikipedia.org/wiki/Water%20retention%20curve | Water retention curve is the relationship between the water content, θ, and the soil water potential, ψ. The soil moisture curve is characteristic for different types of soil, and is also called the soil moisture characteristic.
It is used to predict the soil water storage, water supply to the plants (field capacity) and soil aggregate stability. Due to the hysteretic effect of water filling and draining the pores, different wetting and drying curves may be distinguished.
The general features of a water retention curve can be seen in the figure, in which the volume water content, θ, is plotted against the matric potential, . At potentials close to zero, a soil is close to saturation, and water is held in the soil primarily by capillary forces. As θ decreases, binding of the water becomes stronger, and at small potentials (more negative, approaching wilting point) water is strongly bound in the smallest of pores, at contact points between grains and as films bound by adsorptive forces around particles.
Sandy soils will involve mainly capillary binding, and will therefore release most of the water at higher potentials, while clayey soils, with adhesive and osmotic binding, will release water at lower (more negative) potentials. At any given potential, peaty soils will usually display much higher moisture contents than clayey soils, which would be expected to hold more water than sandy soils. The water holding capacity of any soil is due to the porosity and the nature of the bonding in the soil.
Curve models
The shape of water retention curves can be characterized by several models, one of them known as the van Genuchten model:

θ(h) = θr + (θs − θr) / [1 + (α|h|)^n]^(1 − 1/n)

where
θ(h) is the water retention curve [L³L⁻³];
|h| is the suction pressure ([L] or cm of water);
θs is the saturated water content [L³L⁻³];
θr is the residual water content [L³L⁻³];
α is related to the inverse of the air entry suction ([L⁻¹], or cm⁻¹); and
n is a measure of the pore-size distribution (dimensionless).
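A short numerical sketch of this curve (the parameter values below are arbitrary illustrative numbers, not fitted to any particular soil):

```python
# Evaluate the van Genuchten water retention curve theta(h).
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction head h (positive, in cm)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

h = np.logspace(0, 4, 5)                  # suction heads from 1 to 10,000 cm
theta = van_genuchten(h, theta_r=0.05, theta_s=0.43, alpha=0.04, n=1.6)
print(np.round(theta, 3))                 # water content falls as suction increases
```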
Based on this parametrization a prediction model for the shape of the unsaturated hydraulic conductivity - saturation - pressure relationship was developed.
History
In 1907, Edgar Buckingham created the first water retention curve. It was measured and made for six soils varying in texture from sand to clay. The data came from experiments on soil columns 48 inches tall, in which a constant water level was maintained about 2 inches above the bottom through periodic addition of water from a side tube. The upper ends were closed to prevent evaporation.
Method
The van Genuchten parameters (α and n) can be determined through field or laboratory testing. One of the methods is the instantaneous profile method, in which water content (or effective saturation) is determined for a series of suction pressure measurements. Due to the non-linearity of the equation, numerical techniques such as the non-linear least-squares method can be used to solve for the van Genuchten parameters. The accuracy of the estimated parameters will depend on the quality of the acquired dataset (θ and h). Structural overestimation or underestimation can occur when water retention curves are fitted with non-linear least squares. In these cases, the representation of water retention curves can be improved in terms of accuracy and uncertainty by applying Gaussian process regression to the residuals obtained after non-linear least-squares fitting. This is mostly due to the correlation between the data points, which is accounted for with Gaussian process regression through the kernel function.
See also
Soil water (retention)
References
External links
UNSODA Model database of unsaturated soil hydraulic properties (UNSODA viewer)
SWRC Fit fit soil hydraulic models to soil water retention data
Soil physics
Hysteresis | Water retention curve | [
"Physics",
"Materials_science",
"Engineering"
] | 780 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Soil physics",
"Materials science",
"Hysteresis"
] |
3,423,593 | https://en.wikipedia.org/wiki/Disc%20permeameter | The disc permeameter is a field instrument used for measuring water infiltration in the soil, which is characterized by in situ saturated and unsaturated soil hydraulic properties. It is mainly used to provide estimates of the hydraulic conductivity of the soil near saturation.
History
Conventional techniques for measuring in-situ infiltration include the use of a single or double ring infiltrometer. Single and double ring infiltrometers only measure flow under ponded (saturated) conditions, and when used in soil with distinct macropores, preferential flow will dominate. (See: Poiseuille's law) This does not reflect infiltration under rainfall or sprinkler irrigation. Therefore, many authors attempted to impose a negative potential (tension) on the water flow, in order to exclude macropores from the flow process and hence measure only the soil matrix flow.
Willard Gardner and Walter Gardner developed a negative head permeameter as early as 1939. Dixon (1975) developed a closed-top ring infiltrometer to quantify macropores. Water is applied to a closed-top system, which permits the imposition of negative head or pressure on the ponded water surface. Negative tension can be considered as simulating a positive soil air pressure, created by a negative air pressure above ponded surface water. A simplification was made by Topp and Zebchuk (1985). The limitation of this device is the infiltration has to be started by ponding the closed-top infiltrometer (applying a positive head), then adjusted to a negative pressure. Little research effort was continued in this area, instead attention has been given mainly to the sorptivity apparatus of Dirksen (1975) which used a ceramic plate as a base. Based on this design, Brent Clothier and Ian White (1981) developed the sorptivity tube which can provide a constant negative potential (tension) on the soil surface. However, the sorptivity tube had many shortcomings, hence modifications to the design led to the development of the disc permeameter by Perroux and White (1988) from CSIRO. In the US it is known as the tension infiltrometer.
For more on the development of the first permeameter as told by Walter Gardner, visit (http://www.decagon.com/ag_research/hydro/history.php)
The Disc
The CSIRO disc permeameter of Perroux and White (1988) (not patented) comprises a nylon mesh supply membrane (with a very small diameter around 10–40 mm), a water reservoir and a bubbling tower. The bubbling tower is connected to the reservoir and is open to air. The bubbling tower controls the potential h0 applied to the membrane by adjusting the water height in the air-inlet tube. So essentially the soil pores need to have energy equivalent to h0 to overcome water that is held under tension in the reservoir. It can be used to supply potential ranging -200 mm to 0 mm, effectively excluding pores with diameter bigger than 0.075 mm.
Many different designs have evolved, including:
automated recording tension infiltrometer (Ankeny, Kaspar & Horton, 1988), patented by the Iowa State University (Soil Moisture Measurement https://web.archive.org/web/20060127085537/http://www.soilmeasurement.com/tension_infil.html)
mini-disc infiltrometer (Decagon Devices, http://www.decagon.com/products/lysimeters-and-infiltrometers/mini-disk-tension-infiltrometer)
hood infiltrometer (Umwelt-Geräte-Technik, http://www.ugt-online.de)
Mathematical analysis
Due to the three-dimensional water flow from the disc, a special formulation is needed to take into account the lateral absorption of water. The analyses are derived from the simple, steady-state analysis of Wooding (1968). For steady infiltration from a circular, shallow, inundated area, Wooding found that a remarkable feature of this curve is the fact that it never departs far from the straight line:
where Q* is the dimensionless flux, r is the radius of the disc [cm], and α [1/cm] is the sorptive number, the parameter of Gardner's (1958) hydraulic conductivity function:

K(h) = Ks exp(αh)

where K is the hydraulic conductivity [cm/h], Ks is the saturated conductivity and h is the soil water potential [cm]. In terms of the actual steady-state infiltration rate q∞ [cm/h]:
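One common way these relations appear in the tension-infiltrometer literature (an illustrative restatement; scaling conventions differ slightly between papers and should be checked against Wooding (1968) and Perroux and White (1988)):

\[
Q^{*} \;\approx\; 1 + \frac{4}{\pi \alpha r},
\qquad
q_{\infty} \;\approx\; K_s\, e^{\alpha h_0}\left(1 + \frac{4}{\pi \alpha r}\right),
\]

where h0 is the supply potential set by the bubbling tower.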
References
Clothier, B.E., White, I., 1981. Measurement of sorptivity and soil water diffusivity in the field. Soil Science Society of America Journal 45, 241-245.
Dirksen, C., 1975. Determination of soil water diffusivity by sorptivity measurements. Soil Science Society of America Proceedings 39, 22-27.
Dixon, R.M., 1975. Design and use of closed-top infiltrometers. Soil Science Society of America Proceedings 39, 755-763.
Topp, G.C., Zebchuk, W.D., 1985. A closed adjustable head infiltrometer. Canadian Agricultural Engineering 27, 99-104.
Perroux, K.M., White, I., 1988. Design for disc permeameters. Soil Science Society of America Journal, 52, 1205-1215.
Wooding, R.A., 1968. Steady infiltration from a shallow circular pond. Water Resources Research 4, 1259-1273.
Soil physics
Environmental instrumentation | Disc permeameter | [
"Physics",
"Technology",
"Engineering",
"Environmental_science"
] | 1,225 | [
"Environmental instrumentation",
"Applied and interdisciplinary physics",
"Measuring instruments",
"Soil physics"
] |
3,424,459 | https://en.wikipedia.org/wiki/Annealing%20%28materials%20science%29 | In metallurgy and materials science, annealing is a heat treatment that alters the physical and sometimes chemical properties of a material to increase its ductility and reduce its hardness, making it more workable. It involves heating a material above its recrystallization temperature, maintaining a suitable temperature for an appropriate amount of time and then cooling.
In annealing, atoms migrate in the crystal lattice and the number of dislocations decreases, leading to a change in ductility and hardness. As the material cools it recrystallizes. For many alloys, including carbon steel, the crystal grain size and phase composition, which ultimately determine the material properties, are dependent on the heating rate and cooling rate. Hot working or cold working after the annealing process alters the metal structure, so further heat treatments may be used to achieve the properties required. With knowledge of the composition and phase diagram, heat treatment can be used to adjust from harder and more brittle to softer and more ductile.
In the case of ferrous metals, such as steel, annealing is performed by heating the material (generally until glowing) for a while and then slowly letting it cool to room temperature in still air. Copper, silver and brass can be either cooled slowly in air, or quickly by quenching in water. In this fashion, the metal is softened and prepared for further work such as shaping, stamping, or forming.
Many other materials, including glass and plastic films, use annealing to improve the finished properties.
Thermodynamics
Annealing occurs by the diffusion of atoms within a solid material, so that the material progresses towards its equilibrium state. Heat increases the rate of diffusion by providing the energy needed to break bonds. The movement of atoms has the effect of redistributing and eradicating the dislocations in metals and (to a lesser extent) in ceramics. This alteration to existing dislocations allows a metal object to deform more easily, increasing its ductility.
The amount of process-initiating Gibbs free energy in a deformed metal is also reduced by the annealing process. In practice and industry, this reduction of Gibbs free energy is termed stress relief.
The relief of internal stresses is a thermodynamically spontaneous process; however, at room temperatures, it is a very slow process. The high temperatures at which annealing occurs serve to accelerate this process.
The reaction that facilitates returning the cold-worked metal to its stress-free state has many reaction pathways, mostly involving the elimination of lattice vacancy gradients within the body of the metal. The creation of lattice vacancies is governed by the Arrhenius equation, and the migration/diffusion of lattice vacancies is governed by Fick's laws of diffusion.
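For reference, these take the standard forms (generic textbook expressions, not specific to any particular alloy):

\[
\frac{n_v}{N} \;=\; \exp\!\left(-\frac{Q_v}{k_B T}\right),
\qquad
\mathbf{J} \;=\; -D\,\nabla c ,
\]

where n_v/N is the equilibrium fraction of vacant lattice sites, Q_v the vacancy formation energy, D the diffusion coefficient, and c the local vacancy concentration.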
In steel, there is a decarburization mechanism that can be described as three distinct events: the reaction at the steel surface, the interstitial diffusion of carbon atoms and the dissolution of carbides within the steel.
Stages
The three stages of the annealing process that proceed as the temperature of the material is increased are: recovery, recrystallization, and grain growth. The first stage is recovery, and it results in softening of the metal through removal of primarily linear defects called dislocations and the internal stresses they cause. Recovery occurs at the lower temperature stage of all annealing processes and before the appearance of new strain-free grains. The grain size and shape do not change. The second stage is recrystallization, where new strain-free grains nucleate and grow to replace those deformed by internal stresses. If annealing is allowed to continue once recrystallization has completed, then grain growth (the third stage) occurs. In grain growth, the microstructure starts to coarsen and may cause the metal to lose a substantial part of its original strength. This can however be regained with hardening.
Controlled atmospheres
The high temperature of annealing may result in oxidation of the metal's surface, resulting in scale. If scale must be avoided, annealing is carried out in a special atmosphere, such as with endothermic gas (a mixture of carbon monoxide, hydrogen gas, and nitrogen gas). Annealing is also done in forming gas, a mixture of hydrogen and nitrogen.
The magnetic properties of mu-metal (Espey cores) are introduced by annealing the alloy in a hydrogen atmosphere.
Setup and equipment
Typically, large ovens are used for the annealing process. The inside of the oven is large enough to place the workpiece in a position to receive maximum exposure to the circulating heated air. For high-volume process annealing, gas-fired conveyor furnaces are often used. For large workpieces or high-quantity parts, car-bottom furnaces are used so workers can easily move the parts in and out. Once the annealing process is successfully completed, workpieces are sometimes left in the oven so that the parts cool in a controllable way, while other materials and alloys are removed from the oven. Once removed, the workpieces are often quickly cooled off in a process known as quench hardening. Typical quench media include air, water, oil, and salt. Salt is used as a quenching medium usually in the form of brine (salt water). Brine provides faster cooling rates than water: when an object is quenched in plain water, steam bubbles form on its surface, reducing the surface area in contact with the water. The salt in brine suppresses the formation of steam bubbles on the object's surface, so a larger surface area of the object stays in contact with the water, facilitating better conduction of heat from the object to the surrounding water. Quench hardening is generally applicable to some ferrous alloys, but not copper alloys.
Diffusion annealing of semiconductors
In the semiconductor industry, silicon wafers are annealed to repair atomic level disorder from steps like ion implantation. In the process step, dopant atoms, usually boron, phosphorus or arsenic, move into substitutional positions in the crystal lattice, which allows these dopant atoms to function properly as dopants in the semiconducting material.
Specialized cycles
Normalization
Normalization is an annealing process applied to ferrous alloys to give the material a uniform fine-grained structure and to avoid excess softening in steel. It involves heating the steel to 20–50 °C above its upper critical point, soaking it for a short period at that temperature and then allowing it to cool in air. Heating the steel just above its upper critical point creates austenitic grains (much smaller than the previous ferritic grains), which during cooling, form new ferritic grains with a further refined grain size. The process produces a tougher, more ductile material, and eliminates columnar grains and dendritic segregation that sometimes occurs during casting. Normalizing improves machinability of a component and provides dimensional stability if subjected to further heat treatment processes.
Process annealing
Process annealing, also called intermediate annealing, subcritical annealing, or in-process annealing, is a heat treatment cycle that restores some of the ductility to a product being cold-worked so it can be cold-worked further without breaking.
The temperature range for process annealing ranges from 260 °C (500 °F) to 760 °C (1400 °F), depending on the alloy in question. This process is mainly suited for low-carbon steel. The material is heated up to a temperature just below the lower critical temperature of steel. Cold-worked steel normally tends to possess increased hardness and decreased ductility, making it difficult to work. Process annealing tends to improve these characteristics. This is mainly carried out on cold-rolled steel like wire-drawn steel, centrifugally cast ductile iron pipe etc.
Full annealing
A full annealing typically results in the second most ductile state a metal alloy can assume. Its purpose is to produce a uniform and stable microstructure that most closely resembles the metal's phase diagram equilibrium microstructure, thus letting the metal attain relatively low levels of hardness, yield strength and ultimate strength with high plasticity and toughness. To perform a full anneal on a steel for example, steel is heated to slightly above the austenitic temperature and held for sufficient time to allow the material to fully form austenite or an austenite-cementite grain structure. The material is then allowed to cool very slowly so that the equilibrium microstructure is obtained. In most cases this means the material is allowed to furnace cool (the furnace is turned off and the steel is left to cool down inside) but in some cases it is air cooled. The cooling rate of the steel has to be sufficiently slow so as to not let the austenite transform into bainite or martensite, but rather have it completely transform to pearlite and ferrite or cementite. This means that steels that are very hardenable (i.e. tend to form martensite under moderately low cooling rates) have to be furnace cooled. The details of the process depend on the type of metal and the precise alloy involved. In any case the result is a more ductile material but a lower yield strength and a lower tensile strength. This process is also called LP annealing for lamellar pearlite in the steel industry as opposed to a process anneal, which does not specify a microstructure and only has the goal of softening the material. Often the material to be machined is annealed, and then subjected to further heat treatment to achieve the final desired properties.
Short cycle anneal
Short cycle annealing is used for turning normal ferrite into malleable ferrite. It consists of heating, cooling and then heating again over a period of 4 to 8 hours.
Resistive heating
Resistive heating can be used to efficiently anneal copper wire; the heating system employs a controlled electrical short circuit. It can be advantageous because it does not require a temperature-regulated furnace like other methods of annealing.
The process consists of two conductive pulleys (step pulleys), which the wire passes across after it is drawn. The two pulleys have an electrical potential across them, which causes the wire to form a short circuit. The Joule effect causes the temperature of the wire to rise to approximately 400 °C. This temperature is affected by the rotational speed of the pulleys, the ambient temperature, and the voltage applied. Where t is the temperature of the wire, K is a constant, V is the voltage applied, r is the number of rotations of the pulleys per minute, and ta is the ambient temperature, the wire temperature is given approximately by

t = (K V^2)/r + ta.
The constant K depends on the diameter of the pulleys and the resistivity of the copper.
Purely in terms of the temperature of the copper wire, an increase in the speed of the wire through the pulley system has the same effect as a decrease in resistance.
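As an illustration of the relation above, a small script can estimate the wire temperature. The constant K and the operating values used here are placeholders chosen only to land near the roughly 400 °C figure quoted in the text, not measured data.

```python
# Illustrative estimate of annealed-wire temperature from t = K*V**2/r + ta.
# K, the voltage, and the pulley speed below are hypothetical values.

def wire_temperature(voltage_v, pulley_rpm, ambient_c, k_const):
    """Return the approximate wire temperature in degrees Celsius."""
    return k_const * voltage_v**2 / pulley_rpm + ambient_c

if __name__ == "__main__":
    t = wire_temperature(voltage_v=36.0, pulley_rpm=60.0, ambient_c=25.0, k_const=17.4)
    print(f"Estimated wire temperature: {t:.0f} deg C")   # ~401 deg C
```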
See also
Annealing (glass)
Annealing by short circuit
Hollomon–Jaffe parameter
Low hydrogen annealing
Simulated annealing
Tempering (metallurgy)
References
Further reading
Thesis of Degree, Cable Manufacture and Tests of General Use and Energy. Jorge Luis Pedraz (1994), UNI, Files, Peru.
"Dynamic annealing of the Copper wire by using a Controlled Short circuit." Jorge Luis Pedraz (1999), Peru: Lima, CONIMERA 1999, INTERCON 99,
External links
Annealing – efunda – engineering fundamentals
Annealing – Aluminum and Aircraft Metal Alloys
Metal heat treatments
Materials science
Polymers | Annealing (materials science) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,413 | [
"Applied and interdisciplinary physics",
"Metallurgical processes",
"Materials science",
"Polymer chemistry",
"nan",
"Polymers",
"Metal heat treatments"
] |
3,424,660 | https://en.wikipedia.org/wiki/Input%20shaping | In control theory, input shaping is an open-loop control technique for reducing vibrations in computer-controlled machines. The method works by creating a command signal that cancels its own vibration. That is, a vibration excited by previous parts of the command signal is cancelled by vibration excited by latter parts of the command. Input shaping is implemented by convolving a sequence of impulses, known as an input shaper, with any arbitrary command. The shaped command that results from the convolution is then used to drive the system. If the impulses in the shaper are chosen correctly, then the shaped command will excite less residual vibration than the unshaped command. The amplitudes and time locations of the impulses are obtained from the system's natural frequencies and damping ratios. Shaping can be made very robust to errors in the system parameters.
References
External links
Input shaping simulator demonstrates the filter principle on a gantry crane control problem.
Control theory
Cybernetics
Dynamics (mechanics)
Mechanical vibrations | Input shaping | [
"Physics",
"Mathematics",
"Engineering"
] | 204 | [
"Structural engineering",
"Physical phenomena",
"Applied mathematics",
"Control theory",
"Classical mechanics",
"Motion (physics)",
"Dynamics (mechanics)",
"Mechanics",
"Mechanical vibrations",
"Dynamical systems"
] |
3,424,781 | https://en.wikipedia.org/wiki/Aerospace%20architecture | Aerospace architecture is broadly defined to encompass architectural design of non-habitable and habitable structures and living and working environments in aerospace-related facilities, habitats, and vehicles. These environments include, but are not limited to: science platform aircraft and aircraft-deployable systems; space vehicles, space stations, habitats and lunar and planetary surface construction bases; and Earth-based control, experiment, launch, logistics, payload, simulation and test facilities. Earth analogs to space applications may include Antarctic, desert, high altitude, underground, undersea environments and closed ecological systems.
The American Institute of Aeronautics and Astronautics (AIAA) Design Engineering Technical Committee (DETC) meets several times a year to discuss policy, education, standards, and practice issues pertaining to aerospace architecture.
The role of Appearance in Aerospace architecture
"The role of design creates and develops concepts and specifications that seek to simultaneously and synergistically optimize function, production, value and appearance." In connection with, and with respect to, human presence and interactions, appearance is a component of human factors and includes considerations of human characteristics, needs and interests.
Appearance in this context refers to all visual aspects – the statics and dynamics of form(s), color(s), patterns, and textures with respect to all products, systems, services, and experiences. Appearance/esthetics affects humans both psychologically and physiologically and can affect and improve human efficiency, attitude, and well-being.
In reference to non-habitable design the influence of appearance is minimal if not non-existent. However, as the aerospace industry continues to grow rapidly and missions to put humans on Mars and return them to the Moon are announced, the role that appearance/esthetics plays in maintaining crew well-being and health over multi-month or multi-year missions becomes a monumental factor in mission success.
Habitable Structures within Earth's Atmosphere
Appearance/esthetics
Appearance/esthetics in aerospace design must at least co-exist, if not be synergistic, with the overall/societal fundamentals/metrics of aerospace engineering design. These metrics, for atmospheric flight, consist of overall/societal factors directed toward productivity, safety, environmental issues such as noise/emissions, and accessibility/affordability. Furthermore, technological parameters such as space, weight and drag minimization and propulsion efficiency highly dictate and restrain the boundaries of appearance/esthetic design. Major factors that need to be considered in atmospheric flight design include producibility, maintainability, reliability, flyability, inspectability, flexibility, repairability, operability, durability, and airport compatibility.
Habitable Structures outside of Low-Earth Orbit (LEO)
What is different concerning space in reference to human-centered design thinking is the nearly complete lack of human presence. Human-centered design influence wholly operates within the context of human interactions; how operations/ missions are run (operability) or how products, systems, services, or experiences (PSSE's) affect end users (usability). Currently the human presence involves the space station and the relatively few international rocket systems.
Human-Centered Design
Due to the space boom and technological advancements of the past decade, numerous countries and companies have announced that human expeditions into our solar system are far from done. With long-duration confinement in a limited interior space in micro-g and little-to-no real variability in the environment, attention to crew well-being and mental alertness will pose complex human-centered design issues. Mars transit vehicles and surface habitats will constitute highly confined, technical settings characterized by social, emotional and physical deprivation while affording little opportunity to experience privacy and environmental variation. Esthetic/appearance measures for human exploration will therefore emphasize "naturalistic countermeasures" to the innate/multitudinous stresses of such expeditions.
Although human wants, needs, and limitations, both physical and mental, need to be evaluated and addressed when designing for space, design decisions must at least co-exist, if not be synergistic, with the overall metrics of aerospace engineering design (for example, the International Space Station toilet). Human factors and habitability design are important topics for all working and living spaces. For space exploration, they are vital. While human factors and certain habitability issues have been integrated into the design process of crewed spacecraft, there is a crucial need to move from mere survivability to factors that support thriving. As of today, the risk of an incompatible vehicle or habitat design has already been identified by NASA as a recognized key risk to human health and performance in space. Habitability and human factors will become even more important determinants for the design of future long-term and commercial space facilities as larger and more diverse groups occupy off-earth habitats.
Past Examples
A study conducted in 1989 (reference 2) found that, when given multiple photographs and paintings as potential decoration for the International Space Station, test (crew) subjects all individually preferred those with naturalistic themes and a large depth of field. Other examples of human-centered design include using pastel paints on the International Space Station (ISS) to contrast and provide "up/down" cues in micro-g environments, or the concept of dynamically and spatially adjusting lighting color and intensities to conform to daily and even seasonal biorhythms similar to Earth's, to mitigate the societal separation effects experienced in space.
See also
Airborne observatory
Atmosphere of Venus
High Altitude Venus Operational Concept (HAVOC)
Colonization of Venus
Floating cities and islands in fiction
References
External links
American Institute of Aeronautics and Astronautics
Design Engineering Technical Committee of the AIAA
Spacearchitect.org
Sasakawa International Center for Space Architecture (SICSA)
MOTHER Aerospace Architecture consultancy
Architecture and Vision, Design Studio specializing on Aerospace Architecture and Technology Transfer
LIQUIFER Systems Group, interdisciplinary design team developing architecture, design and systems for Earth and Space
Synthesis, a fundamental design collaborative with experts from Space Architecture, Engineering and Industrial Design
Earth2Orbit, Satellite & Launch Services, Human Space Systems, Robotic Systems, Infrastructure and High-Tech Facilities, Consulting
The Galactic Suite Space Hotel
Galactic Suite Design Aerospace Architecture and Experiences
Architectural styles
Aerospace engineering
Architecture | Aerospace architecture | [
"Physics",
"Engineering"
] | 1,262 | [
"Spacetime",
"Space",
"Aerospace",
"Aerospace engineering"
] |
3,427,200 | https://en.wikipedia.org/wiki/Ytterbium-doped%20lutetium%20orthovanadate | Ytterbium-doped lutetium orthovanadate, typically abbreviated Yb:LuVO4, is an active laser medium. The peak absorption cross section for the pi-polarization is 8.42×10−20 cm2 at 985 nm, and the stimulated emission cross section at 1020 nm is 1.03×10−20 cm².
See also
List of laser types
Further reading
Laser gain media
Crystals
Ytterbium compounds
Lutetium compounds
Vanadates | Ytterbium-doped lutetium orthovanadate | [
"Chemistry",
"Materials_science"
] | 104 | [
"Crystallography",
"Crystals"
] |
3,427,657 | https://en.wikipedia.org/wiki/Hill%27s%20spherical%20vortex | Hill's spherical vortex is an exact solution of the Euler equations that is commonly used to model a vortex ring. The solution is also used to model the velocity distribution inside a spherical drop of one fluid moving at a constant velocity through another fluid at small Reynolds number. The vortex is named after Micaiah John Muller Hill who discovered the exact solution in 1894. The two-dimensional analogue of this vortex is the Lamb–Chaplygin dipole.
The solution is described in the spherical polar coordinate system (r, θ, φ) with corresponding velocity components (u_r, u_θ, u_φ). The velocity components are identified from the Stokes stream function ψ(r, θ) as follows:

u_r = 1/(r^2 sin θ) ∂ψ/∂θ,   u_θ = −1/(r sin θ) ∂ψ/∂r.
The Hill's spherical vortex is described by

ψ = −(3U/(4a^2)) r^2 (a^2 − r^2) sin^2 θ   for r ≤ a,
ψ = (U/2) (r^2 − a^3/r) sin^2 θ   for r ≥ a,
where U is a constant freestream velocity far away from the origin and a is the radius of the sphere within which the vorticity is non-zero. For r ≥ a, the vorticity is zero and the solution described above in that range is nothing but the potential flow past a sphere of radius a. The only non-zero vorticity component for r < a is the azimuthal component that is given by

ω_φ = −(15U/(2a^2)) r sin θ.
Note that here the parameters U and a can be scaled out by non-dimensionalization.
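The formulas above can be checked symbolically. The short sketch below (not part of the original text; symbol names mirror those used above) derives the velocity components from the stream function, verifies that they match across the vortex boundary r = a, and recovers the azimuthal vorticity inside the sphere.

```python
import sympy as sp

r, theta, U, a = sp.symbols("r theta U a", positive=True)

# Stokes stream functions quoted above (inner rotational region, outer potential flow).
psi_in = -sp.Rational(3, 4) * U / a**2 * r**2 * (a**2 - r**2) * sp.sin(theta)**2
psi_out = U / 2 * (r**2 - a**3 / r) * sp.sin(theta)**2

def velocities(psi):
    """Velocity components from a Stokes stream function in spherical coordinates."""
    u_r = sp.simplify(sp.diff(psi, theta) / (r**2 * sp.sin(theta)))
    u_t = sp.simplify(-sp.diff(psi, r) / (r * sp.sin(theta)))
    return u_r, u_t

ur_in, ut_in = velocities(psi_in)
ur_out, ut_out = velocities(psi_out)

# The velocity field is continuous across the vortex boundary r = a.
print(sp.simplify((ur_in - ur_out).subs(r, a)))   # 0
print(sp.simplify((ut_in - ut_out).subs(r, a)))   # 0

# Azimuthal vorticity inside: (1/r) * (d(r*u_theta)/dr - d(u_r)/dtheta)
omega_phi = sp.simplify((sp.diff(r * ut_in, r) - sp.diff(ur_in, theta)) / r)
print(omega_phi)   # -(15*U)/(2*a**2) * r*sin(theta)
```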
Hill's spherical vortex with a swirling motion
The Hill's spherical vortex with a swirling motion is provided by Keith Moffatt in 1969. Earlier discussion of similar problems are provided by William Mitchinson Hicks in 1899. The solution was also discovered by Kelvin H. Pendergast in 1956, in the context of plasma physics, as there exists a direct connection between these fluid flows and plasma physics (see the connection between Hicks equation and Grad–Shafranov equation). The motion in the axial (or, meridional) plane is described by the Stokes stream function as before. The azimuthal motion is given by
where
where and are the Bessel functions of the first kind. Unlike the Hill's spherical vortex without any swirling motion, the problem here contains an arbitrary parameter . A general class of solutions of the Euler's equation describing propagating three-dimensional vortices without change of shape is provided by Keith Moffatt in 1986.
References
Fluid dynamics | Hill's spherical vortex | [
"Chemistry",
"Engineering"
] | 423 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
3,428,881 | https://en.wikipedia.org/wiki/Electron%E2%80%93nuclear%20dynamics | Electron–nuclear dynamics (END) covers a set of quantum chemical methods not using the Born-Oppenheimer representation. It considers the motion of the nuclei and the electrons on the same time scales. The method therefore considers the molecular Hamiltonian as a whole without trying to solve separately the Schrödinger equation associated to the electronic molecular Hamiltonian. Though the method is non-adiabatic it is distinguishable from most non-adiabatic methods for treating the molecular dynamics, which typically use the Born-Oppenheimer representation, but become non-adiabatic by considering vibronic coupling explicitly.
Electron–nuclear dynamics is applied in the modelling of high-speed atomic collisions (keV energies and above), where the nuclear motion may be comparable or faster than the electronic motion.
The group of Yngve Öhrn in Gainesville, Florida, has been a pioneer in this field. He applied the method to the collision between two hydrogen atoms.
References
Quantum chemistry | Electron–nuclear dynamics | [
"Physics",
"Chemistry"
] | 199 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
"Physical chemistry stubs",
" and optical physics"
] |
3,428,935 | https://en.wikipedia.org/wiki/Density%20on%20a%20manifold | In mathematics, and specifically differential geometry, a density is a spatially varying quantity on a differentiable manifold that can be integrated in an intrinsic manner. Abstractly, a density is a section of a certain line bundle, called the density bundle. An element of the density bundle at x is a function that assigns a volume for the parallelotope spanned by the n given tangent vectors at x.
From the operational point of view, a density is a collection of functions on coordinate charts which become multiplied by the absolute value of the Jacobian determinant in the change of coordinates. Densities can be generalized into s-densities, whose coordinate representations become multiplied by the s-th power of the absolute value of the jacobian determinant. On an oriented manifold, 1-densities can be canonically identified with the n-forms on M. On non-orientable manifolds this identification cannot be made, since the density bundle is the tensor product of the orientation bundle of M and the n-th exterior product bundle of TM (see pseudotensor).
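The transformation rule just described can be made concrete with a small symbolic sketch (not from the article's sources), using the familiar Cartesian-to-polar change of chart on R^2; the function f and the exponent s below are generic placeholders.

```python
import sympy as sp

# How the coordinate representation of an s-density transforms under a chart change.
r, t, s = sp.symbols("r t s", positive=True)

# Chart change (r, t) |-> (x, y) = (r*cos t, r*sin t)
X = sp.Matrix([r * sp.cos(t), r * sp.sin(t)])
J = X.jacobian([r, t])
jac = sp.simplify(sp.Abs(J.det()))            # |det J| = r

# A 1-density represented by f(x, y) in the Cartesian chart is represented in the
# polar chart by f(r cos t, r sin t) * |det J|; an s-density picks up |det J|**s.
f = sp.Function("f")
density_polar = f(r * sp.cos(t), r * sp.sin(t)) * jac**s
print(jac)            # r
print(density_polar)  # r**s times f(r*cos(t), r*sin(t))
```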
Motivation (densities in vector spaces)
In general, there does not exist a natural concept of a "volume" for a parallelotope generated by vectors v1, ..., vn in an n-dimensional vector space V. However, if one wishes to define a function μ : V × ... × V → R that assigns a volume for any such parallelotope, it should satisfy the following properties:
If any of the vectors vk is multiplied by λ ∈ R, the volume should be multiplied by |λ|.
If any linear combination of the vectors v1, ..., vj−1, vj+1, ..., vn is added to the vector vj, the volume should stay invariant.
These conditions are equivalent to the statement that μ is given by a translation-invariant measure on V, and they can be rephrased as
μ(A v1, ..., A vn) = |det A| μ(v1, ..., vn),   A ∈ GL(V).
Any such mapping μ : V × ... × V → R is called a density on the vector space V. Note that if (v1, ..., vn) is any basis for V, then fixing μ(v1, ..., vn) will fix μ entirely; it follows that the set Vol(V) of all densities on V forms a one-dimensional vector space. Any n-form ω on V defines a density |ω| on V by
|ω|(v1, ..., vn) := |ω(v1, ..., vn)|.
Orientations on a vector space
The set Or(V) of all functions o : V × ... × V → R that satisfy
o(A v1, ..., A vn) = sign(det A) o(v1, ..., vn)
if v1, ..., vn are linearly independent and o(v1, ..., vn) = 0 otherwise
forms a one-dimensional vector space, and an orientation on V is one of the two elements o ∈ Or(V) such that |o(v1, ..., vn)| = 1 for any linearly independent v1, ..., vn. Any non-zero n-form ω on V defines an orientation o ∈ Or(V) such that
o(v1, ..., vn) |ω|(v1, ..., vn) = ω(v1, ..., vn),
and vice versa, any o ∈ Or(V) and any density μ ∈ Vol(V) define an n-form ω on V by
ω(v1, ..., vn) = o(v1, ..., vn) μ(v1, ..., vn).
In terms of tensor product spaces,
Or(V) ⊗ Vol(V) = Λ^n V*.
s-densities on a vector space
The s-densities on V are functions μ : V × ... × V → R such that
μ(A v1, ..., A vn) = |det A|^s μ(v1, ..., vn),   A ∈ GL(V).
Just like densities, s-densities form a one-dimensional vector space Vol^s(V), and any n-form ω on V defines an s-density |ω|^s on V by
|ω|^s(v1, ..., vn) := |ω(v1, ..., vn)|^s.
The product of an s1-density μ1 and an s2-density μ2 forms an (s1+s2)-density μ by
μ(v1, ..., vn) = μ1(v1, ..., vn) μ2(v1, ..., vn).
In terms of tensor product spaces this fact can be stated as
Vol^{s1}(V) ⊗ Vol^{s2}(V) = Vol^{s1+s2}(V).
Definition
Formally, the s-density bundle Vol^s(M) of a differentiable manifold M is obtained by an associated bundle construction, intertwining the one-dimensional group representation
ρ(A) = |det A|^(−s),   A ∈ GL(n, R),
of the general linear group with the frame bundle of M.
The resulting line bundle is known as the bundle of s-densities, and is denoted by Vol^s(M).
A 1-density is also referred to simply as a density.
More generally, the associated bundle construction also allows densities to be constructed from any vector bundle E on M.
In detail, if (Uα,φα) is an atlas of coordinate charts on M, then there is associated a local trivialization of
subordinate to the open cover Uα such that the associated GL(1)-cocycle satisfies
Integration
Densities play a significant role in the theory of integration on manifolds. Indeed, the definition of a density is motivated by how a measure dx changes under a change of coordinates .
Given a 1-density ƒ supported in a coordinate chart Uα, the integral is defined by
∫_Uα ƒ = ∫_φα(Uα) (ƒ ∘ φα^−1)(x) dx,
where the latter integral is with respect to the Lebesgue measure on R^n. The transformation law for 1-densities together with the Jacobian change of variables ensures compatibility on the overlaps of different coordinate charts, and so the integral of a general compactly supported 1-density can be defined by a partition of unity argument. Thus 1-densities are a generalization of the notion of a volume form that does not necessarily require the manifold to be oriented or even orientable. One can more generally develop a general theory of Radon measures as distributional sections of Vol(M) using the Riesz–Markov–Kakutani representation theorem.
The set of 1/p-densities ƒ such that ‖ƒ‖_p = (∫_M |ƒ|^p)^{1/p} < ∞ is a normed linear space whose completion is called the intrinsic L^p space of M.
Conventions
In some areas, particularly conformal geometry, a different weighting convention is used: the bundle of s-densities is instead associated with the character
ρ(A) = |det A|^(−s/n).
With this convention, for instance, one integrates n-densities (rather than 1-densities). Also in these conventions, a conformal metric is identified with a tensor density of weight 2.
Properties
The dual vector bundle of Vol^s(M) is Vol^{−s}(M).
Tensor densities are sections of the tensor product of a density bundle with a tensor bundle.
References
Differential geometry
Manifolds
Lp spaces | Density on a manifold | [
"Mathematics"
] | 1,132 | [
"Topological spaces",
"Topology",
"Manifolds",
"Space (mathematics)"
] |
4,623,947 | https://en.wikipedia.org/wiki/Singular%20cardinals%20hypothesis | In set theory, the singular cardinals hypothesis (SCH) arose from the question of whether the least cardinal number for which the generalized continuum hypothesis (GCH) might fail could be a singular cardinal.
According to Mitchell (1992), the singular cardinals hypothesis is:
If κ is any singular strong limit cardinal, then 2^κ = κ^+.
Here, κ^+ denotes the successor cardinal of κ.
Since SCH is a consequence of GCH, which is known to be consistent with ZFC, SCH is consistent with ZFC. The negation of SCH has also been shown to be consistent with ZFC, if one assumes the existence of a sufficiently large cardinal number. In fact, by results of Moti Gitik, ZFC + ¬SCH is equiconsistent with ZFC + the existence of a measurable cardinal κ of Mitchell order κ^++.
Another form of the SCH is the following statement:
2^cf(κ) < κ implies κ^cf(κ) = κ^+,
where cf denotes the cofinality function. Note that κ^cf(κ) = 2^κ for all singular strong limit cardinals κ. The second formulation of SCH is strictly stronger than the first version, since the first one only mentions strong limits. From a model in which the first version of SCH fails at ℵω and GCH holds above ℵω+2, we can construct a model in which the first version of SCH holds but the second version of SCH fails, by adding ℵω Cohen subsets to ℵn for some n.
Jack Silver proved that if κ is singular with uncountable cofinality and 2^λ = λ^+ for all infinite cardinals λ < κ, then 2^κ = κ^+. Silver's original proof used generic ultrapowers. The following important fact follows from Silver's theorem: if the singular cardinals hypothesis holds for all singular cardinals of countable cofinality, then it holds for all singular cardinals. In particular, then, if κ is the least counterexample to the singular cardinals hypothesis, then κ has countable cofinality.
The negation of the singular cardinals hypothesis is intimately related to violating the GCH at a measurable cardinal. A well-known result of Dana Scott is that if the GCH holds below a measurable cardinal κ on a set of measure one—i.e., there is a normal κ-complete ultrafilter D on κ such that {α < κ : 2^α = α^+} ∈ D, then 2^κ = κ^+. Starting with a supercompact cardinal, Silver was able to produce a model of set theory in which κ is measurable and in which 2^κ > κ^+. Then, by applying Prikry forcing to the measurable κ, one gets a model of set theory in which κ is a strong limit cardinal of countable cofinality and in which 2^κ > κ^+—a violation of the SCH. Gitik, building on work of Woodin, was able to replace the supercompact in Silver's proof with a measurable cardinal of Mitchell order κ^++. That established an upper bound for the consistency strength of the failure of the SCH. Gitik again, using results of inner model theory, was able to show that a measurable cardinal of Mitchell order κ^++ is also the lower bound for the consistency strength of the failure of SCH.
A wide variety of propositions imply SCH. As was noted above, GCH implies SCH. On the other hand, the proper forcing axiom, which implies 2^ℵ0 = ℵ2 and hence is incompatible with GCH, also implies SCH. Solovay showed that large cardinals almost imply SCH—in particular, if κ is a strongly compact cardinal, then the SCH holds above κ. On the other hand, the non-existence of (inner models for) various large cardinals (below a measurable cardinal of Mitchell order κ^++) also implies SCH.
References
Thomas Jech: Properties of the gimel function and a classification of singular cardinals, Fundamenta Mathematicae 81 (1974): 57–64.
William J. Mitchell, "On the singular cardinal hypothesis," Trans. Amer. Math. Soc., volume 329 (2): pp. 507–530, 1992.
Jason Aubrey, The Singular Cardinals Problem (PDF), VIGRE expository report, Department of Mathematics, University of Michigan.
Cardinal numbers | Singular cardinals hypothesis | [
"Mathematics"
] | 863 | [
"Numbers",
"Mathematical objects",
"Cardinal numbers",
"Infinity"
] |
4,623,969 | https://en.wikipedia.org/wiki/Calcium%20hydride | Calcium hydride is the chemical compound with the formula , an alkaline earth hydride. This grey powder (white if pure, which is rare) reacts vigorously with water, liberating hydrogen gas. is thus used as a drying agent, i.e. a desiccant.
CaH2 is a saline hydride, meaning that its structure is salt-like. The alkali metals and the alkaline earth metals heavier than beryllium all form saline hydrides. A well-known example is sodium hydride, which crystallizes in the NaCl motif. These species are insoluble in all solvents with which they do not react. CaH2 crystallizes in the PbCl2 (cotunnite) structure.
Preparation
Calcium hydride is prepared from its elements by direct combination of calcium and hydrogen at 300 to 400 °C.
Uses
Reduction of metal oxides
CaH2 is a reducing agent for the production of metal from the metal oxides of Ti, V, Nb, Ta, and U. It is proposed to operate via its decomposition to Ca metal:

CaH2 → Ca + H2
Hydrogen source
CaH2 has been used for hydrogen production. In the 1940s, it was available under the trade name "Hydrolith" as a source of hydrogen: "The trade name for this compound is 'hydrolith'; in cases of emergency, it can be used as a portable source of hydrogen, for filling airships. It is rather expensive for this use."
The reference to "emergency" probably refers to wartime use. The compound has, however, been widely used for decades as a safe and convenient means to inflate weather balloons. Likewise, it is regularly used in laboratories to produce small quantities of highly pure hydrogen for experiments. The moisture content of diesel fuel is estimated by the hydrogen evolved upon treatment with CaH.
Desiccant
The reaction of CaH2 with water can be represented as follows:

CaH2 + 2 H2O → Ca(OH)2 + 2 H2
The two hydrolysis products, gaseous H2 and Ca(OH)2, are readily separated from the dried solvent.
Calcium hydride is a relatively mild desiccant and, compared to molecular sieves, probably inefficient. Its use is safer than more reactive agents such as sodium metal or sodium-potassium alloy. Calcium hydride is widely used as a desiccant for basic solvents such as amines and pyridine. It is also used to dry alcohols.
Despite its convenience, has a few drawbacks:
It is insoluble in all solvents with which it does not react vigorously, in contrast to , thus the speed of its drying action can be slow.
Because CaH2 and Ca(OH)2 are almost indistinguishable in appearance, the quality of a sample of CaH2 is not obvious visually.
History
During the Battle of the Atlantic, German submarines used calcium hydride as a sonar decoy called bold.
Other calcium hydrides
Although the term calcium hydride almost always refers to CaH2, a number of molecular hydrides of calcium are known. One example is (Ca(μ-H)(thf)(nacnac)).
See also
Calcium monohydride
References
Metal hydrides
Calcium compounds
Desiccants
Hydrogen storage | Calcium hydride | [
"Physics",
"Chemistry"
] | 649 | [
"Inorganic compounds",
"Reducing agents",
"Desiccants",
"Materials",
"Metal hydrides",
"Matter"
] |
4,623,988 | https://en.wikipedia.org/wiki/Stemflow | In hydrology, stemflow is the flow of intercepted water down the trunk or stem of a plant. Stemflow, along with throughfall, is responsible for the transferral of precipitation and nutrients from the canopy to the soil. In tropical rainforests, where this kind of flow can be substantial, erosion gullies can form at the base of the trunk. However, in more temperate climates stemflow levels are low and have little erosional power.
Measurement
There are a variety of ways stemflow volume is measured in the field. The most common direct measurement currently used is the bonding of bisected PVC or other plastic tubing around the circumference of the tree trunk, connected and funneled into a graduated cylinder for manual or a tipping bucket rain gauge for automatic collection. At times the tubing is wrapped multiple times around the trunk in order to ensure more complete collection.
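A minimal sketch of turning tipping-bucket records from such a collar into a stemflow volume and an equivalent water depth is given below; the tip volume and crown area are hypothetical calibration values chosen for illustration, not measurements from the text.

```python
# Convert tipping-bucket counts from a stemflow collar into volume and depth.
TIP_VOLUME_ML = 5.0          # hypothetical calibrated volume per bucket tip
CROWN_AREA_M2 = 12.0         # hypothetical projected crown area of the sampled tree

def stemflow_summary(tip_count):
    volume_l = tip_count * TIP_VOLUME_ML / 1000.0
    depth_mm = volume_l / CROWN_AREA_M2      # 1 L spread over 1 m^2 = 1 mm of water
    return volume_l, depth_mm

print(stemflow_summary(420))   # (2.1 L, 0.175 mm) for a single rain event
```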
Determining factors
Precipitation
The primary meteorological characteristics of a rainfall event that influence stemflow are:
Rainfall continuity – the more frequent and extended are the gaps during the event where no rainfall occurs, the higher the likelihood that potential stemflow volume is lost to evapotranspiration; this is also governed by air temperature, relative humidity and most significantly, wind speed
Rainfall intensity – the amount of total stemflow is diminished when the amount of rain in a given period surpasses the capacity of the flow paths
Rain angle – stemflow generally starts earlier when rainfall is more horizontal; this is more of a determinant in an open forest with a lesser degree of crown closure
Species
The species of the tree affects the amount of timing and stemflow. The particular morphological characteristics that are key factors are:
Crown size – stemflow potential is greater as crown size relative to the diameter at breast height increases. However, the greater the DBH, the more incident rainfall is needed to start stemflow.
Leaf shape/orientation – leaves which are concave and elevated horizontally above the petiole are able to contribute to stemflow
Branch angle – stemflow potential heightens as the angle of the branches and twigs in relation to the trunk decreases.
Flow path obstructions – abnormalities on the flow path, such as detached pieces of bark or scars, on the underside of the branch can divert water from stemflow and become a component in throughfall
Bark – stemflow is affected by the degree of absorptive ability and smoothness of the bark alongside the branch and stem. This is measured by using the Bark Relief Index, or BRI, which is the difference between the circumference of the tree and what the circumference would be if the tree had no bark.
Stand characteristics
In addition to the effects of individual tree species, the overall structure of the forest stand also influences the amount of stemflow that will ultimately occur, these factors are:
Species composition – the total stemflow for the stand is determined by the contributions of individuals and their species-specific traits
Stand density – morphological characteristics such as branch angle and thickness are largely determined by the amount of density of competing trees in the stand
Canopy structure – individuals located in the understory in a stand with multiple vertically-stratified stories will have a lessened amount of total stemflow due to the interception of dominant and codominant individuals
Other
Seasonality – in the case of deciduous or mixed forests, stemflow rates are slightly higher in the dormant season when no leaves are present and evapotranspiration is reduced; this effect becomes more pronounced as the stem diameter increases
Diurnality – variations in branch weight influence the amount of stemflow; branches are heavier in the morning (with dew) and lighter in the afternoon
Influence on soil
Chemistry
Nutrients that have accumulated on the canopy from dry deposition or animal feces are able to directly enter the soil through stemflow. When precipitation occurs, canopy nutrients are leached into the water because of the differences in nutrient concentration between the tree and the rainfall. Conversely, nutrients are taken up by the tree when concentration is lower in the canopy than the rainfall, the presence of epiphytes or lichens also contributes to uptake. The nutrients that enter the soil can also reflect the particular environmental conditions around them, for example, plants located in industrialized areas exhibit higher rates of sulfur and nitrogen (from air pollution), whereas those located near the oceans have higher rates of sodium (from seawater).
Soil acidification can be seen around some stems, for example beech trees from dry deposition.
Precipitation and morphological factors that influence stemflow timing and volume also affect the chemical composition; in general, stemflow water becomes more dilute during the course of a storm event, and rough-barked species contain more nutrients than smooth-barked species.
Water distribution
In forested areas, stemflow is considered a point-source input of water into the soil, thus water is more able to effectively penetrate past the topsoil into deeper layers of the soil horizon along tree roots and their subsequent creation of macropores (termed preferential flow). The loosening of the soil can result in minor landslides.
See also
Throughfall
Canopy interception
Forest floor interception
References
Hydrology
Forest ecology | Stemflow | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,037 | [
"Hydrology",
"Environmental engineering"
] |
4,624,037 | https://en.wikipedia.org/wiki/Throughfall | In hydrology, throughfall is the process which describes how wet leaves shed excess water onto the ground surface. These drops have greater erosive power because they are heavier than rain drops. Furthermore, where there is a high canopy, falling drops may reach terminal velocity, about , thus maximizing the drop's erosive potential.
Rates of throughfall are higher in areas of forest where the leaves are broad-leaved. This is because the flat leaves allow water to collect. Drip-tips also facilitate throughfall. Rates of throughfall are lower in coniferous forests as conifers can only hold individual droplets of water on their needles.
Throughfall is a crucial process when designing pesticides for foliar application since it will condition their washing and the fate of potential pollutants in the environment.
See also
Stemflow
Canopy interception
Forest floor interception
Tree shape
Notes
Hydrology
Forest ecology | Throughfall | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 182 | [
"Hydrology",
"Environmental engineering"
] |
4,624,122 | https://en.wikipedia.org/wiki/Throughflow | In hydrology, throughflow, a subtype of interflow (percolation), is the lateral unsaturated flow of water in the soil zone, typically through a highly permeable geologic unit overlying a less permeable one. Water thus returns to the surface, as return flow, before or on entering a stream or groundwater. Once water infiltrates into the soil, it is still affected by gravity and infiltrates to the water table or if permeability varies laterally travels downslope. Throughflow usually occurs during peak hydrologic events (such as high precipitation). Flow rates are dependent on the hydraulic conductivity of the geologic medium.
References
Hydrology
Physical geography
Soil science
Hydrogeology | Throughflow | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 148 | [
"Hydrology",
"Hydrogeology",
"Environmental engineering"
] |
7,973,428 | https://en.wikipedia.org/wiki/Anfinsen%27s%20dogma | Anfinsen's dogma, also known as the thermodynamic hypothesis, is a postulate in molecular biology. It states that, at least for a small globular protein in its standard physiological environment, the native structure is determined only by the protein's amino acid sequence. The dogma was championed by the Nobel Prize Laureate Christian B. Anfinsen from his research on the folding of ribonuclease A. His research was based on previous studies by biochemist Lisa Steiner, whose superiors at the time did not recognize the significance. The postulate amounts to saying that, at the environmental conditions (temperature, solvent concentration and composition, etc.) at which folding occurs, the native structure is a unique, stable and kinetically accessible minimum of the free energy. In other words, there are three conditions for formation of a unique protein structure:
Uniqueness – Requires that the sequence does not have any other configuration with a comparable free energy. Hence the free energy minimum must be unchallenged.
Stability – Small changes in the surrounding environment cannot give rise to changes in the minimum configuration. This can be pictured as a free energy surface that looks more like a funnel (with the native state in the bottom of it) rather than like a soup plate (with several closely related low-energy states); the free energy surface around the native state must be rather steep and high, in order to provide stability.
Kinetic accessibility – Means that the path in the free energy surface from the unfolded to the folded state must be reasonably smooth or, in other words, that the folding of the chain must not involve highly complex changes in the shape (like knots or other high order conformations). Changes in a protein's shape also depend on its environment, with the molecule shifting shape to suit its surroundings; this gives biomolecules multiple configurations to shift between.
Challenges to Anfinsen's dogma
Protein folding in a cell is a highly complex process that involves transport of the newly synthesized proteins to appropriate cellular compartments through targeting, permanent misfolding, temporarily unfolded states, post-translational modifications, quality control, and formation of protein complexes facilitated by chaperones.
Some proteins need the assistance of chaperone proteins to fold properly. It has been suggested that this disproves Anfinsen's dogma. However, the chaperones do not appear to affect the final state of the protein; they seem to work primarily by preventing aggregation of several protein molecules prior to the final folded state of the protein. However, at least some chaperones are required for the proper folding of their subject proteins.
Many proteins can also undergo aggregation and misfolding. For example, prions are stable conformations of proteins which differ from the native folding state. In bovine spongiform encephalopathy, native proteins re-fold into a different stable conformation, which causes fatal amyloid buildup. Other amyloid diseases, including Alzheimer's disease and Parkinson's disease, are also exceptions to Anfinsen's dogma.
Some proteins have multiple native structures, and change their fold based on some external factors. For example, the KaiB protein complex switches fold throughout the day, acting as a clock for cyanobacteria. It has been estimated that around 0.5–4% of Protein Data Bank (PDB) proteins switch folds. The switching between alternative structures is driven by interactions of the protein with small ligands or other proteins, by chemical modifications (such as phosphorylation) or by changed environmental conditions, such as temperature, pH or membrane potential. Each alternative structure may either correspond to the global minimum of free energy of the protein at the given conditions or be kinetically trapped in a higher local minimum of free energy.
References
Further reading
Profiles in Science: The Christian B. Anfinsen Papers-Articles
Molecular biology
Protein structure
Hypotheses | Anfinsen's dogma | [
"Chemistry",
"Biology"
] | 797 | [
"Biochemistry",
"Protein structure",
"Structural biology",
"Molecular biology"
] |
7,974,982 | https://en.wikipedia.org/wiki/Inversion%20transformation | In mathematical physics, inversion transformations are a natural extension of Poincaré transformations to include all conformal, one-to-one transformations on coordinate space-time. They are less studied in physics because, unlike the rotations and translations of Poincaré symmetry, an object cannot be physically transformed by the inversion symmetry. Some physical theories are invariant under this symmetry, in these cases it is what is known as a 'hidden symmetry'. Other hidden symmetries of physics include gauge symmetry and general covariance.
Early use
In 1831 the mathematician Ludwig Immanuel Magnus began to publish on transformations of the plane generated by inversion in a circle of radius R. His work initiated a large body of publications, now called inversive geometry. The most prominently named mathematician became August Ferdinand Möbius once he reduced the planar transformations to complex number arithmetic. In the company of physicists employing the inversion transformation early on was Lord Kelvin, and the association with him leads it to be called the Kelvin transform.
Transformation on coordinates
In the following we shall use imaginary time (t → it) so that space-time is Euclidean and the equations are simpler. The Poincaré transformations are given by the coordinate transformation on space-time parametrized by the 4-vectors V,

x → x′ = M x + V,

where M is an orthogonal matrix and V is a 4-vector. Applying this transformation twice on a 4-vector gives a third transformation of the same form. The basic invariant under this transformation is the space-time length given by the distance between two space-time points given by 4-vectors x and y:

d(x, y)^2 = (x − y)·(x − y).
These transformations are subgroups of general 1-1 conformal transformations on space-time. It is possible to extend these transformations to include all 1-1 conformal transformations on space-time
We must also have an equivalent condition to the orthogonality condition of the Poincaré transformations:
Because one can divide the top and bottom of the transformation by we lose no generality by setting to the unit matrix. We end up with
Applying this transformation twice on a 4-vector gives a transformation of the same form. The new symmetry of 'inversion' is given by the 3-tensor This symmetry becomes Poincaré symmetry if we set When the second condition requires that is an orthogonal matrix. This transformation is 1-1 meaning that each point is mapped to a unique point only if we theoretically include the points at infinity.
Invariants
The invariants for this symmetry in 4 dimensions are unknown; however, it is known that the invariant requires a minimum of 4 space-time points. In one dimension, the invariant is the well known cross-ratio from Möbius transformations:

((x1 − x3)(x2 − x4)) / ((x2 − x3)(x1 − x4)).
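A quick symbolic check (illustrative only, not part of the original text) confirms that the one-dimensional cross-ratio written above is unchanged by a general Möbius transformation; the symbols a, b, c, d are generic transformation parameters.

```python
import sympy as sp

x1, x2, x3, x4, a, b, c, d = sp.symbols("x1 x2 x3 x4 a b c d")

def cross_ratio(p1, p2, p3, p4):
    return ((p1 - p3) * (p2 - p4)) / ((p2 - p3) * (p1 - p4))

def moebius(x):
    return (a * x + b) / (c * x + d)

before = cross_ratio(x1, x2, x3, x4)
after = cross_ratio(*[moebius(p) for p in (x1, x2, x3, x4)])
print(sp.simplify(before - after))   # 0, so the cross-ratio is invariant
```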
Because the only invariants under this symmetry involve a minimum of 4 points, this symmetry cannot be a symmetry of point particle theory. Point particle theory relies on knowing the lengths of paths of particles through space-time (e.g., from to ). The symmetry can be a symmetry of a string theory in which the strings are uniquely determined by their endpoints. The propagator for this theory for a string starting at the endpoints and ending at the endpoints is a conformal function of the 4-dimensional invariant. A string field in endpoint-string theory is a function over the endpoints.
Physical evidence
Although it is natural to generalize the Poincaré transformations in order to find hidden symmetries in physics and thus narrow down the number of possible theories of high-energy physics, it is difficult to experimentally examine this symmetry as it is not possible to transform an object under this symmetry. The indirect evidence of this symmetry is given by how accurately fundamental theories of physics that are invariant under this symmetry make predictions. Other indirect evidence is whether theories that are invariant under this symmetry lead to contradictions such as giving probabilities greater than 1. So far there has been no direct evidence that the fundamental constituents of the Universe are strings. The symmetry could also be a broken symmetry meaning that although it is a symmetry of physics, the Universe has 'frozen out' in one particular direction so this symmetry is no longer evident.
See also
Rotation group SO(3)
Coordinate rotations and reflections
Spacetime symmetries
CPT symmetry
Field (physics)
superstrings
References
Symmetry
Conservation laws
Functions and mappings | Inversion transformation | [
"Physics",
"Mathematics"
] | 852 | [
"Functions and mappings",
"Mathematical analysis",
"Equations of physics",
"Conservation laws",
"Mathematical objects",
"Mathematical relations",
"Geometry",
"Symmetry",
"Physics theorems"
] |
7,975,189 | https://en.wikipedia.org/wiki/Completely%20metrizable%20space | In mathematics, a completely metrizable space (metrically topologically complete space) is a topological space (X, T) for which there exists at least one metric d on X such that (X, d) is a complete metric space and d induces the topology T. The term topologically complete space is employed by some authors as a synonym for completely metrizable space, but sometimes also used for other classes of topological spaces, like completely uniformizable spaces or Čech-complete spaces.
Difference between complete metric space and completely metrizable space
The distinction between a completely metrizable space and a complete metric space lies in the words there exists at least one metric in the definition of completely metrizable space, which is not the same as there is given a metric (the latter would yield the definition of complete metric space). Once we make the choice of the metric on a completely metrizable space (out of all the complete metrics compatible with the topology), we get a complete metric space. In other words, the category of completely metrizable spaces is a subcategory of that of topological spaces, while the category of complete metric spaces is not (instead, it is a subcategory of the category of metric spaces). Complete metrizability is a topological property while completeness is a property of the metric.
Examples
The space (0,1), the open unit interval, is not a complete metric space with its usual metric inherited from R, but it is completely metrizable since it is homeomorphic to R (an explicit complete metric compatible with its topology is sketched after these examples).
The space Q of rational numbers with the subspace topology inherited from R is metrizable but not completely metrizable.
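For the open-interval example above, one explicit complete metric (a standard construction, not taken from the cited references) is obtained by pulling the usual metric of R back along the homeomorphism h(x) = tan(π(x − 1/2)):

```latex
% Complete metric on (0,1) induced by the homeomorphism h(x) = tan(pi (x - 1/2)).
\[
  d(x,y) \;=\; \bigl|\tan\bigl(\pi(x - \tfrac{1}{2})\bigr) - \tan\bigl(\pi(y - \tfrac{1}{2})\bigr)\bigr|,
  \qquad x, y \in (0,1).
\]
% Since h is a homeomorphism onto R and R with the absolute-value metric is
% complete, (0,1) with d is complete and d induces the usual topology.
```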
Properties
A topological space X is completely metrizable if and only if X is metrizable and a Gδ in its Stone–Čech compactification βX.
A subspace of a completely metrizable space X is completely metrizable if and only if it is a Gδ set in X.
A countable product of nonempty metrizable spaces is completely metrizable in the product topology if and only if each factor is completely metrizable. Hence, a product of nonempty metrizable spaces is completely metrizable if and only if at most countably many factors have more than one point and each factor is completely metrizable.
For every metrizable space there exists a completely metrizable space containing it as a dense subspace, since every metric space has a completion. In general, there are many such completely metrizable spaces, since completions of a topological space with respect to different metrics compatible with its topology can give topologically different completions.
Completely metrizable abelian topological groups
When talking about spaces with more structure than just topology, like topological groups, the natural meaning of the words “completely metrizable” would arguably be the existence of a complete metric that is also compatible with that extra structure, in addition to inducing its topology. For abelian topological groups and topological vector spaces, “compatible with the extra structure” might mean that the metric is invariant under translations.
However, no confusion can arise when talking about an abelian topological group or a topological vector space being completely metrizable: it can be proven that every abelian topological group (and thus also every topological vector space) that is completely metrizable as a topological space (i. e., admits a complete metric that induces its topology) also admits an invariant complete metric that induces its topology.
This implies e. g. that every completely metrizable topological vector space is complete. Indeed, a topological vector space is called complete iff its uniformity (induced by its topology and addition operation) is complete; the uniformity induced by a translation-invariant metric that induces the topology coincides with the original uniformity.
See also
Complete metric space
Completely uniformizable space
Metrizable space
Notes
References
General topology | Completely metrizable space | [
"Mathematics"
] | 790 | [
"General topology",
"Topology"
] |
7,976,609 | https://en.wikipedia.org/wiki/Sphygmograph | The sphygmograph ( ) was a mechanical device used to measure blood pressure in the mid-19th century. It was developed in 1854 by German physiologist Karl von Vierordt (1818–1884). It is considered the first external, non-intrusive device used to estimate blood pressure.
The device was a system of levers hooked to a scale-pan in which weights were placed to determine the amount of external pressure needed to stop blood flow in the radial artery. Although the instrument was cumbersome and its measurements imprecise, the basic concept of Vierordt's sphygmograph eventually led to the blood pressure cuff used today.
In 1863, Étienne-Jules Marey (1830–1904) improved the device by making it portable. Also he included a specialized instrument to be placed above the radial artery that was able to magnify pulse waves and record them on paper with an attached pen.
In 1872, Frederick Akbar Mahomed published a description of a modified sphygmograph. This modified version made the sphygmograph quantitative, so that it was able to measure arterial blood pressure.
In 1880, Samuel von Basch (1837–1905) invented the sphygmomanometer, which was then improved by Scipione Riva-Rocci (1863–1937) in the 1890s. In 1901 Harvey Williams Cushing improved it further, and Heinrich von Recklinghausen (1867–1942) used a wider cuff, and so it became the first accurate and practical instrument for measuring blood pressure.
References
External links
R.E. Dudgeon M.D. The sphygmograph : its history and use as an aid to diagnosis in ordinary practice (1882). The Medical Heritage Library.
Drawing of Vierordt's Sphygmograph.
Medical equipment
Blood pressure
Physiological instruments | Sphygmograph | [
"Technology",
"Engineering",
"Biology"
] | 381 | [
"Physiological instruments",
"Medical equipment",
"Measuring instruments",
"Medical technology"
] |
7,977,203 | https://en.wikipedia.org/wiki/Engineering%20economics | Engineering economics, previously known as engineering economy, is a subset of economics concerned with the use and "...application of economic principles" in the analysis of engineering decisions. As a discipline, it is focused on the branch of economics known as microeconomics in that it studies the behavior of individuals and firms in making decisions regarding the allocation of limited resources. Thus, it focuses on the decision making process, its context and environment. It is pragmatic by nature, integrating economic theory with engineering practice. But, it is also a simplified application of microeconomic theory in that it assumes elements such as price determination, competition and demand/supply to be fixed inputs from other sources. As a discipline though, it is closely related to others such as statistics, mathematics and cost accounting. It draws upon the logical framework of economics but adds to that the analytical power of mathematics and statistics.
Engineers seek solutions to problems, and along with the technical aspects, the economic viability of each potential solution is normally considered from a specific viewpoint that reflects its economic utility to a constituency.
Fundamentally, engineering economics involves formulating, estimating, and evaluating the economic outcomes when alternatives to accomplish a defined purpose are available.
In some U.S. undergraduate civil engineering curricula, engineering economics is a required course. It is a topic on the Fundamentals of Engineering examination, and questions might also be asked on the Principles and Practice of Engineering examination; both are part of the Professional Engineering registration process.
Considering the time value of money is central to most engineering economic analyses. Cash flows are discounted using an interest rate, except in the most basic economic studies.
For each problem, there are usually many possible alternatives. One option that must be considered in each analysis, and is often the choice, is the do nothing alternative. The opportunity cost of making one choice over another must also be considered. There are also non-economic factors to be considered, like color, style, public image, etc.; such factors are termed attributes.
Costs as well as revenues are considered, for each alternative, for an analysis period that is either a fixed number of years or the estimated life of the project. The salvage value is often forgotten, but is important, and is either the net cost or revenue for decommissioning the project.
Some other topics that may be addressed in engineering economics are inflation, uncertainty, replacements, depreciation, resource depletion, taxes, tax credits, accounting, cost estimations, or capital financing. All these topics are primary skills and knowledge areas in the field of cost engineering.
Since engineering is an important part of the manufacturing sector of the economy, engineering industrial economics is an important part of industrial or business economics. Major topics in engineering industrial economics are:
The economics of the management, operation, and growth and profitability of engineering firms;
Macro-level engineering economic trends and issues;
Engineering product markets and demand influences; and
The development, marketing, and financing of new engineering technologies and products.
Benefit–cost ratio
Examples of usage
Some examples of engineering economic problems range from value analysis to economic studies. Each of these is relevant in different situations, and most often used by engineers or project managers. For example, engineering economic analysis helps a company not only determine the difference between fixed and incremental costs of certain operations, but also calculates that cost, depending upon a number of variables. Further uses of engineering economics include:
Value analysis
Linear programming
Critical path economy
Interest and money - time relationships
Depreciation and valuation
Capital budgeting
Risk, uncertainty, and sensitivity analysis
Fixed, incremental, and sunk costs
Replacement studies
Minimum cost formulas
Various economic studies in relation to both public and private ventures
Each of the previous components of engineering economics is critical at certain junctures, depending on the situation, scale, and objective of the project at hand. Critical path economy, as an example, is necessary in most situations as it is the coordination and planning of material, labor, and capital movements in a specific project. The most critical of these "paths" are determined to be those that have an effect upon the outcome both in time and cost. Therefore, the critical paths must be determined and closely monitored by engineers and managers alike. Engineering economics helps provide the Gantt charts and activity-event networks to ascertain the correct use of time and resources.
Value Analysis
Proper value analysis finds its roots in the need for industrial engineers and managers not only to simplify and improve processes and systems, but also to logically simplify the designs of those products and systems. Though not directly related to engineering economy, value analysis is nonetheless important, and allows engineers to properly manage new and existing systems/processes to make them simpler and save money and time. Further, value analysis helps combat common "roadblock excuses" that may trip up managers or engineers. Sayings such as "The customer wants it this way" are countered by questions such as: has the customer been told of cheaper alternatives or methods? "If the product is changed, machines will be idle for lack of work" can be countered by: can management not find new and profitable uses for these machines? Questions like these are part of engineering economy, as they preface any real studies or analyses.
Linear Programming
Linear programming is the use of mathematical methods to find optimized solutions, whether they be minimized or maximized in nature. This method uses a series of linear constraints to bound a feasible region (a polygon in the two-variable case), and the optimum, the largest or smallest value of the objective, is found at a corner point of that region. Manufacturing operations often use linear programming to help mitigate costs and maximize profits or production.
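As an illustration of this kind of optimization, the following sketch (an added example; the products, coefficients, and resource limits are hypothetical) solves a small production-mix problem with SciPy's linear-programming routine.

```python
# Illustrative sketch (hypothetical products, coefficients, and limits):
# a small production-mix problem solved with SciPy's linear-programming routine.
from scipy.optimize import linprog

# Maximize 40*x1 + 30*x2 (profit) subject to
#   x1 + 2*x2 <= 40   (machine-hours)
#   3*x1 + x2 <= 45   (labour-hours)
#   x1, x2 >= 0
result = linprog(c=[-40, -30],                   # linprog minimizes, so negate profit
                 A_ub=[[1, 2], [3, 1]],
                 b_ub=[40, 45],
                 bounds=[(0, None), (0, None)],
                 method="highs")
x1, x2 = result.x
print(f"optimal mix: x1={x1:.1f}, x2={x2:.1f}, profit={-result.fun:.0f}")
```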
Interest and Money – Time Relationships
Considering the prevalence of capital to be lent for a certain period of time, with the understanding that it will be returned to the investor, money-time relationships analyze the costs associated with these types of actions. Capital itself must be divided into two different categories, equity capital and debt capital. Equity capital is money already at the disposal of the business, and mainly derived from profit, and therefore is not of much concern, as it has no owners that demand its return with interest. Debt capital does indeed have owners, and they require that its usage be returned with "profit", otherwise known as interest. The interest to be paid by the business is going to be an expense, while the capital lenders will take interest as a profit, which may confuse the situation. To add to this, each will change the income tax position of the participants.
Interest and money-time relationships come into play when the capital required to complete a project must be either borrowed or drawn from reserves. Borrowing brings about the question of interest and of the value created by the completion of the project, while taking capital from reserves denies its use on other projects that might yield better results. Interest, in the simplest terms, is defined by the multiplication of the principal, the units of time, and the interest rate. The complexity of interest calculations, however, becomes much higher as factors such as compound interest or annuities come into play.
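In symbols (using standard notation, added here for clarity rather than taken from the original text), the simple-interest relationship just described and its compound counterpart are:

```latex
I = P \, i \, n           % simple interest: principal P, rate i per period, n periods
F = P \, (1 + i)^{n}      % future value of P after n periods of compounding at rate i
```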
Engineers often utilize compound interest tables to determine the future or present value of capital. These tables can also be used to determine the effect annuities have on loans, operations, or other situations. All one needs to utilize a compound interest table are three things: the time period of the analysis, the minimum attractive rate of return (MARR), and the capital value itself. The table will yield a multiplication factor to be used with the capital value, which then gives the user the proper future or present value.
Examples of Present, Future, and Annuity Analysis
Using the compound interest tables mentioned above, an engineer or manager can quickly determine the value of capital over a certain time period. For example, a company wishes to borrow $5,000.00 to finance a new machine, and will need to repay that loan in 5 years at 7%. Using the table, 5 years and 7% gives the factor of 1.403, which will be multiplied by $5,000.00. This will result in $7,015.00. This is of course under the assumption that the company will make a lump payment at the conclusion of the five years, not making any payments prior.
A much more applicable example is one with a certain piece of equipment that will yield benefit for a manufacturing operation over a certain period of time. For instance, the machine benefits the company $2,500.00 every year, and has a useful life of 8 years. The MARR is determined to be roughly 5%. The compound interest tables yield a different factor for different types of analysis in this scenario. If the company wishes to know the Net Present Benefit (NPB) of these benefits, then the factor is the P/A for 8 years at 5%, which is 6.463. If the company wishes to know the future worth of these benefits, then the factor is the F/A for 8 years at 5%, which is 9.549. The former gives a NPB of $16,157.50, while the latter gives a future value of $23,872.50.
These scenarios are extremely simple in nature, and do not reflect the reality of most industrial situations. Thus, an engineer must begin to factor in costs and benefits, then find the worth of the proposed machine, expansion, or facility.
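The table factors quoted in the two examples above can be reproduced directly from the standard compound-interest formulas; the short sketch below (an added illustration, not part of the original text) does exactly that.

```python
# Sketch reproducing the table factors used in the two examples above;
# rates, lives, and dollar amounts are those given in the text.
def f_over_p(i, n):   # single-payment compound-amount factor (F/P)
    return (1 + i) ** n

def p_over_a(i, n):   # uniform-series present-worth factor (P/A)
    return ((1 + i) ** n - 1) / (i * (1 + i) ** n)

def f_over_a(i, n):   # uniform-series compound-amount factor (F/A)
    return ((1 + i) ** n - 1) / i

print(round(f_over_p(0.07, 5), 3))   # 1.403 -> 5,000 * 1.403 = 7,015.00
print(round(p_over_a(0.05, 8), 3))   # 6.463 -> 2,500 * 6.463 = 16,157.50
print(round(f_over_a(0.05, 8), 3))   # 9.549 -> 2,500 * 9.549 = 23,872.50
```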
Depreciation and Valuation
The fact that assets and material in the real world eventually wear down, and thence break, is a situation that must be accounted for. Depreciation itself is defined as the decrease in value of any given asset, though some exceptions do exist. Valuation can be considered the basis for depreciation in a basic sense, as any decrease in value would be based on an original value. The idea of depreciation becomes especially relevant to engineering and project management because capital equipment and assets used in operations slowly decrease in worth, which coincides with an increase in the likelihood of machine failure. Hence the recording and calculation of depreciation is important for two major reasons.
To give an estimate of "recovery capital" that has been put back into the property.
To enable depreciation to be charged against profits that, like other costs, can be used for income taxation purposes.
Both of these reasons, however, cannot make up for the "fleeting" nature of depreciation, which makes direct analysis somewhat difficult. To further add to the issues associated with depreciation, it must be broken down into three separate types, each having intricate calculations and implications.
Normal depreciation, due to physical or functional losses.
Price depreciation, due to changes in market value.
Depletion, due to the use of all available resources.
Calculation of depreciation also comes in a number of forms: straight-line, declining-balance, sum-of-the-years'-digits, and service-output. The first method is perhaps the easiest to calculate, while the remaining ones have varying levels of difficulty and utility. Most situations faced by managers in regards to depreciation can be solved using any of these formulas; however, company policy or individual preference may affect the choice of method.
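The sketch below (an added example with illustrative cost, salvage, and life values) shows how three of the methods just named produce different yearly charges for the same asset.

```python
# Illustrative sketch (hypothetical asset: cost 10,000, salvage 1,000, 5-year life)
# comparing yearly charges from three of the methods named above.
cost, salvage, life = 10_000.0, 1_000.0, 5

# Straight line: equal charge each year.
straight_line = [(cost - salvage) / life for _ in range(life)]

# Declining balance (double-declining rate), never depreciating below salvage value.
rate, book, declining = 2 / life, cost, []
for _ in range(life):
    charge = max(0.0, min(book * rate, book - salvage))
    declining.append(charge)
    book -= charge

# Sum-of-the-years'-digits: weights life, life-1, ..., 1 over their sum.
syd = life * (life + 1) // 2
sum_of_years = [(cost - salvage) * (life - year) / syd for year in range(life)]

print(straight_line)
print([round(c, 2) for c in declining])
print([round(c, 2) for c in sum_of_years])
```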
The main form of depreciation used inside the U.S. is the Modified Accelerated Cost Recovery System (MACRS), and it is based on a number of tables that give the class of asset, and its life. Certain classes are given certain lifespans, and these affect the value of an asset that can be depreciated each year. This does not necessarily mean that an asset must be discarded after its MACRS life is fulfilled, just that it can no longer be used for tax deductions.
Capital Budgeting
Capital budgeting, in relation to engineering economics, is the proper usage and utilization of capital to achieve project objectives. It can be fully defined by the statement: "... as the series of decisions by individuals and firms concerning how much and where resources will be obtained and expended to meet future objectives." This definition almost perfectly explains capital and its general relation to engineering, though some special cases may not lend themselves to such a concise explanation. The actual acquisition of that capital has many different routes, from equity to bonds to retained profits, each having unique strengths and weaknesses, especially in relation to income taxation. Factors such as the risk of capital loss, along with possible or expected returns, must also be considered when capital budgeting is underway. For example, if a company has $20,000 to invest in a number of high, moderate, and low risk projects, the decision would depend upon how much risk the company is willing to take on, and whether the returns offered by each category offset this perceived risk. Continuing with this example, if the high-risk project offered only a 20% return, while the moderate-risk project offered a 19% return, engineers and managers would most likely choose the moderate-risk project, as its return is far more favorable for its category. The high-risk project failed to offer proper returns to warrant its risk status. A more difficult decision may be between a moderate-risk project offering 15% and a low-risk project offering an 11% return. The decision here would be much more subject to factors such as company policy, extra available capital, and possible investors. "In general, the firm should estimate the project opportunities, including investment requirements and prospective rates of return for each, expected to be available for the coming period. Then the available capital should be tentatively allocated to the most favorable projects. The lowest prospective rate of return within the capital available then becomes the minimum acceptable rate of return for analyses of any projects during that period."
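The allocation rule quoted above can be sketched in a few lines; the project names, capital requirements, and returns below are hypothetical and only illustrate the ranking idea.

```python
# Illustrative sketch (hypothetical projects) of the allocation rule quoted above:
# tentatively fund the most favourable projects within the budget; the lowest
# accepted prospective return then acts as the minimum acceptable rate of return.
budget = 20_000
projects = [                      # (name, required capital, prospective return)
    ("high-risk expansion", 8_000, 0.20),
    ("moderate-risk retrofit", 7_000, 0.19),
    ("moderate-risk tooling", 5_000, 0.15),
    ("low-risk replacement", 6_000, 0.11),
]

accepted, remaining = [], budget
for name, capital, prospective_return in sorted(projects, key=lambda p: p[2], reverse=True):
    if capital <= remaining:
        accepted.append((name, prospective_return))
        remaining -= capital

marr = min(r for _, r in accepted)    # lowest prospective return among funded projects
print(accepted)
print(f"implied minimum acceptable rate of return: {marr:.0%}")
```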
Minimum Cost Formulas
One of the most important and integral operations in the engineering economic field is the minimization of cost in systems and processes. Time, resources, labor, and capital must all be minimized when placed into any system, so that revenue, product, and profit can be maximized. Hence the general equation, commonly written as:
C = ax + b/x + k

where C is total cost, a, b, and k are constants, and x is the variable number of units produced.
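If the equation takes the form given above, the cost-minimizing output follows from setting the derivative to zero; the short derivation below is an added note for completeness.

```latex
\frac{dC}{dx} = a - \frac{b}{x^{2}} = 0
\quad \Longrightarrow \quad
x^{*} = \sqrt{\frac{b}{a}},
\qquad
C_{\min} = 2\sqrt{ab} + k
```

In other words, the minimum total cost occurs at the output level where the ax and b/x terms are equal.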
There are a great number of cost analysis formulas, each suited to particular situations and warranted by the policies of the company in question or the preferences of the engineer at hand.
Economic Studies, both Private and Public in Nature
Economic studies, which are much more common outside of engineering economics, are still used from time to time to determine the feasibility and utility of certain projects. They do not, however, truly reflect the "common notion" of economic studies, which is fixated upon macroeconomics, something engineers have little interaction with. Therefore, the studies conducted in engineering economics are for specific companies and limited projects inside those companies. At most one may expect to find some feasibility studies done by private firms for the government or another business, but these again are in stark contrast to the overarching nature of true economic studies. Studies have a number of major steps that can be applied to almost every type of situation, as follows:
Planning and screening - Mainly reviewing objectives and issues that may be encountered.
Reference to standard economic studies - Consultation of standard forms.
Estimating - Speculating as to the magnitude of costs and other variables.
Reliability - The ability to properly estimate.
Comparison between actual and projected performance - Verify savings, review failures, to ensure that proposals were valid, and to add to future studies.
Objectivity of the analyst - To ensure the individual that advanced proposals or conducted analysis was not biased toward certain outcomes.
References
Further reading
Business economics
Cost engineering
Civil engineering | Engineering economics | [
"Engineering"
] | 3,144 | [
"Construction",
"Civil engineering",
"Engineering economics",
"Cost engineering"
] |
7,977,991 | https://en.wikipedia.org/wiki/Oily%20water%20separator%20%28marine%29 | An oily water separator (OWS) (marine) is a piece of equipment specific to the shipping or marine industry. It is used to separate oil and water mixtures into their separate components. This page refers exclusively to oily water separators aboard marine vessels. They are found on board ships where they are used to separate oil from oily waste water such as bilge water before the waste water is discharged into the environment. These discharges of waste water must comply with the requirements laid out in Marpol 73/78.
Bilge water is a nearly unavoidable byproduct of shipboard operations. Oil leaks from running machinery such as diesel generators, air compressors, and the main propulsion engine. Modern OWSs have alarms and automatic closure devices which are activated when the oil content of the waste water exceeds a certain limit (15 ppm, i.e. 15 cm³ of oil in 1 m³ of water).
Purpose
The primary purpose of a shipboard oily water separator (OWS) is to separate oil and other contaminants that could be harmful for the oceans. The International Maritime Organization (IMO) publishes regulations through the Marine Environment Protection Committee (MEPC). On July 18, 2003, the MEPC issued new regulations that each vessel built after this date had to follow. This document is known as MEPC 107(49) and it details revised guidelines and specifications for pollution prevention equipment for machinery space bilges of ships. Each OWS must be able to achieve clean bilge water under 15 ppm of type C oil or heavily emulsified oil, and any other contaminants that may be found. All oil content monitors (OCM) must be tamper-proof. Also whenever the OWS is being cleaned out the OCM must be active. An OWS must be able to clear out contaminants as well as oil. Some of these contaminating agents include lubricating oil, cleaning product, soot from combustion, fuel oil, rust, sewage, and several other things that can be harmful to the ocean environment.
Bilge content
The bilge area is the lowest area on a ship. The bilge water that collects here includes drain water or leftover water from the boilers, water collecting tanks, drinking water and other places where water cannot overflow. However, bilge water does not consist only of water drainage. Another system that drains into the bilge system comes from the propulsion area of the ship. Here fuels, lubricants, hydraulic fluid, antifreeze, solvents, and cleaning chemicals drain into the engine room bilges in small quantities. The OWS is intended to remove a large proportion of these contaminants before discharge to the environment (overboard to the sea).
Design and operation
All OWS equipment, new or old, can, in a laboratory setting, automatically separate oil and water to produce clean water for discharge overboard that contains no more than 15 parts per million oil. OWS equipment is approved by testing it with specific cocktails of mixed oil and water. Initially these combinations were very simple, basically no more than a mixture of clean water and diesel fuel, but they have become more sophisticated under MARPOL MEPC 107(49). The vast majority of these many equipment models, manufacturers, and types start with some sort of gravity separation of bilge water. Simply letting oil and water sit is called decanting, and this does not always meet the 15 ppm criterion, which is why each manufacturer has added additional features to his equipment to ensure that this criterion can be met. The separation that takes place inside the OWS allows oil that floats to the top to be automatically skimmed off to a sludge tank or dirty oil holding tank. There is no official standard for tank naming convention but there are some proposals for that.
An OWS needs to be fitted with an oil content meter (OCM) that samples the OWS overboard discharge water for oil content. If the oil content is less than 15 ppm, the OCM allows the water to be discharged overboard. If the oil content is higher than 15 ppm, the OCM will activate an alarm and move a three-way valve that, within a short period of time, will recirculate the overboard discharge water to a tank on the OWS suction side.
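The control behaviour just described amounts to a simple threshold rule; the sketch below (an added illustration with hypothetical function and constant names) captures it.

```python
# Illustrative sketch (hypothetical function and constant names) of the threshold
# behaviour described above: readings above 15 ppm trigger the alarm and the
# three-way valve recirculates the discharge instead of sending it overboard.
OIL_LIMIT_PPM = 15

def route_discharge(oil_content_ppm: float) -> str:
    if oil_content_ppm > OIL_LIMIT_PPM:
        return "alarm + recirculate to suction-side tank"
    return "discharge overboard"

for reading in (3.0, 12.5, 18.2):
    print(f"{reading:5.1f} ppm -> {route_discharge(reading)}")
```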
An OCM takes a trickle sample from the OWS overboard discharge line and shines a light through the sample to an optical sensor. Since small oil droplets will diffract and diffuse light, a change in signal at the sensor will indicate the presence of oil. At a certain signal setting that is roughly equivalent to 15 ppm, the sensor will conclude that there is too much oil going through the discharge line. This calibration generally takes place in a lab, but can be tested by use of a three-sample liquid aboard the vessel. If the OCM ends up sampling a certain amount of heavy oil, the OCM will be fouled and it will need to be flushed or cleaned.
The cleaning can be done by running fresh water through the OCM via a permanent connection or can be performed by opening the OCM sample area and scrubbing the sample area with a bottle brush.
The oil removed by the OWS flows to oil collecting spaces. There can be two stages. The first-stage filter removes physical impurities present and promotes some fine separation. The second-stage filter uses coalescer inserts to achieve the final de-oiling. Coalescence is the breakdown of surface tension between oil droplets in an oil/water mixture which causes them to join and increase in size. The oil from the collecting spaces is drained away automatically or manually. In most modern ships, the oil from collecting spaces is drained away automatically.
Oil record book
All cargo vessels to which the MARPOL Convention applies must have an oil record book, in which the chief engineer records all oil or sludge transfers and discharges within the vessel. This is necessary in order for authorities to be able to monitor whether a vessel's crew has performed any illegal oil discharges at sea.
When making entries in the oil record book Part I, the date, operational code, and item number are inserted in the appropriate columns and the required particulars shall be recorded in chronological order as they have been executed on board. Each operation is to be fully recorded without delay so that all the entries in the book appropriate to that operation are completed.
History of regulations for treated water discharge
In 1948 in the US, the Water Pollution Control Act (WPA) was passed by the federal government. This act gave the Surgeon General of the Public Health Service the authority to create programs to decrease the amount of pollution in the nation's waters. The main concern was to save water, protect fish, and have clean water for agricultural usage. The WPA also helped to start the process of building water treatment plants, to guard against sewage polluting drinking water. In 1972 the WPA was amended to include more requirements in order to ensure that the water is chemically sound. This amendment also strengthened regulations intended to ensure that the quality of the water was up to par. In 1987 the WPA was amended again to put even stricter controls on water supply pollution. With this new amendment, water sources had to fit a specific set of criteria to fight against pollution.
MARPOL
Marpol 73/78 is the International Convention for the Prevention of Pollution from Ships, 1973 as modified by the Protocol of 1978. ("Marpol" is short for marine pollution and 73/78 short for the years 1973 and 1978.)
Marpol 73/78 is one of the most important international marine environmental conventions. It was designed to minimize pollution of the seas, including dumping, oil and exhaust pollution. Its stated object is to preserve the marine environment through the complete elimination of pollution by oil and other harmful substances and the minimization of accidental discharge of such substances.
Current regulations
United States
The regulations in the Clean Water Act limit what may be discharged to sea from an OWS in US waters. Current limits are 15 mg/L of oil for discharges within 12 nautical miles of shore, or 100 mg/L outside that limit.
Europe and Canada
European countries and Canada have stricter rules on discharge and discharges must contain less than 5 mg/L of contaminants.
The discharge of oil contaminated waters are also subject to international controls such as the International Convention for the Prevention of Pollution from Ships (MARPOL), and International Maritime Organization (IMO). These organizations set strict limits to protect marine life and coastal environments. These agencies require logs to be kept of any discharges of contaminated water.
Types
Gravity plate separator
A gravity plate separator contains a series of plates through which the contaminated water flows. The oil in the water coalesces on the underside of the plate eventually forming droplets before coalescing into liquid oil which floats off the plates and accumulates at the top of the chamber. The oil accumulating at the top is then transferred to waste oil tank on the vessel where it is later discharged to a treatment facility ashore. This type of Oily Water Separator is common in ships, but it has flaws that decrease efficiency. Oil particles that are twenty micrometers or smaller are not separated. The variety of oily wastes in bilge water can limit removal efficiency especially when very dense and highly viscous oils such as bunker oil are present. Plates must be replaced when fouled, which increases the costs of operation.
Electrochemical
Wastewater purification of oils and contaminants by electrochemical emulsification is actively in research and development. Electrochemical emulsification involves the generation of electrolytic bubbles that attract pollutants such as sludge and carry them to the top of the treatment chamber. Once at the top of the treatment chamber the oil and other pollutants are transferred to a waste oil tank.
Bioremediation
Bioremediation is the use of microorganisms to treat contaminated water. A carefully managed environment is needed for the microorganisms which includes nutrients and hydrocarbons such as oil or other contaminants, and oxygen.
In pilot scale studies, bio-remediation was used as one stage in a multi-stage purification process involving a plate separator to remove the majority of the contaminants and was able to treat pollutants at very low concentrations including organic contaminants such as glycerol, solvents, jet fuel, detergents, and phosphates. After treatment of contaminated water, carbon dioxide, water and an organic sludge were the only residual products.
Centrifugal
A centrifugal water-oil separator, centrifugal oil-water separator or centrifugal liquid-liquid separator is a device designed to separate oil and water by centrifugation. It generally contains a cylindrical container that rotates inside a larger stationary container. The denser liquid, usually water, accumulates at the periphery of the rotating container and is collected from the side of the device, whereas the less dense liquid, usually oil, accumulates at the rotation axis and is collected from the center. Centrifugal oil-water separators are used for waste water processing and for the cleanup of oil spills at sea or on lakes. Centrifugal oil-water separators are also used for filtering diesel and lubricating oils by removing waste particles and impurities from them.
Problems
On a properly operated vessel only small amounts of bilge water would be present as long as there are no equipment failures. But even the best-operated vessels suffer equipment failures, which then quickly result in contaminated bilges. Sometimes these contaminants are massive and pose a serious challenge for the crew to deal with in a legal fashion.
An ideal OWS system will make it clear and easy for regulatory enforcement agencies to determine if OWS system regulations are being violated. At present, there is no clear and efficient method of determining whether regulations are violated or not. At the most basic level, the absolute absence of any type of standardization of OWS systems makes the initial investigation confusing, dirty, time-consuming and sometimes plain incorrect. In the marine industry there is a long-standing and important tradition of "jointness" in marine forensic investigations, where all parties at interest examine the same things at the same time. However, due to the criminal character of OWS violations the jointness concept is abandoned, which leads to very poor technical investigative methods and severe unnecessary disruptions to vessel operations.
Various efforts are being made to improve the overall OWS system approach. In 2015, at the MAX1 Studies Conference held in Wilmington, North Carolina, maritime leaders from many different sectors gathered to discuss problems and potential solutions regarding waste stream management.
See also
API oil-water separator
Wastewater treatment plant
Magic pipe
Oily water separators
Oil content meter
Marpol Annex I
Marpol 73/78
Oil discharge monitoring equipment
Port Reception Facilities
References
Watercraft components
Waste treatment technology
Liquid-liquid separation
Ocean pollution | Oily water separator (marine) | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,669 | [
"Ocean pollution",
"Separation processes by phases",
"Water treatment",
"Liquid-liquid separation",
"Water pollution",
"Environmental engineering",
"Waste treatment technology"
] |
7,978,745 | https://en.wikipedia.org/wiki/Aluminium%E2%80%93air%20battery | Aluminium–air batteries (Al–air batteries) produce electricity from the reaction of oxygen in the air with aluminium. They have one of the highest energy densities of all batteries, but they are not widely used because of problems with high anode cost and byproduct removal when using traditional electrolytes. This has restricted their use to mainly military applications. However, an electric vehicle with aluminium batteries has the potential for up to eight times the range of a lithium-ion battery with a significantly lower total weight.
Aluminium–air batteries are primary cells, i.e., non-rechargeable. Once the aluminium anode is consumed by its reaction with atmospheric oxygen at a cathode immersed in a water-based electrolyte to form hydrated aluminium oxide, the battery will no longer produce electricity. However, it is possible to mechanically recharge the battery with new aluminium anodes made from recycling the hydrated aluminium oxide. Such recycling would be essential if aluminium–air batteries were to be widely adopted.
Aluminium-powered vehicles have been under discussion for some decades. Hybridisation mitigates the costs, and in 1989 road tests of a hybridised aluminium–air/lead–acid battery in an electric vehicle were reported. An aluminium-powered plug-in hybrid minivan was demonstrated in Ontario in 1990.
In March 2013, Phinergy released a video demonstration of an electric car using aluminium–air cells driven 330 km using a special cathode and potassium hydroxide. On May 27, 2013, the Israeli Channel 10 evening news broadcast showed a car with a Phinergy battery in the back, claiming a long driving range before replacement of the aluminium anodes would be necessary.
Electrochemistry
The anode oxidation half-reaction is Al + 3OH⁻ → Al(OH)₃ + 3e⁻ (+2.31 V).
The cathode reduction half-reaction is O₂ + 2H₂O + 4e⁻ → 4OH⁻ (+0.40 V).
The total reaction is 4Al + 3O₂ + 6H₂O → 4Al(OH)₃ (+2.71 V).
About 1.2 volts potential difference is created by these reactions and is achievable in practice when potassium hydroxide is used as the electrolyte. Saltwater electrolyte achieves approximately 0.7 volts per cell.
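From the overall 2.71 V potential and Faraday's law, the theoretical specific energy of the aluminium anode can be estimated; the short sketch below (an added calculation, not from the original text) gives roughly 8 kWh per kilogram of aluminium.

```python
# Standard Faraday's-law arithmetic (an added estimate, not from the text):
# theoretical specific energy of the aluminium anode at the 2.71 V cell potential.
FARADAY = 96_485            # C per mole of electrons
MOLAR_MASS_AL = 26.98e-3    # kg per mole of aluminium
ELECTRONS_PER_AL = 3        # Al -> Al3+ + 3 e-
CELL_VOLTAGE = 2.71         # V, from the overall reaction above

energy_j_per_kg = ELECTRONS_PER_AL * FARADAY * CELL_VOLTAGE / MOLAR_MASS_AL
print(f"{energy_j_per_kg / 3.6e6:.1f} kWh per kg of aluminium")   # about 8.1 kWh/kg
```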
The specific voltage of the cell can vary depending upon the composition of the electrolyte as well as the structure and materials of the cathode.
Other metals can be used in a similar way, such as lithium-air, zinc-air, manganese-air, and sodium-air, some with a higher energy density. However, aluminium is attractive as the most stable metal.
Anode
Aluminium (Al) has been widely used as an anode material in metal-air batteries due to its high energy density, recyclability, and abundance. However, challenges with Al anodes include corrosion and passivation. Impurities in commercially available aluminium lead to the formation of layers that impair performance. Corrosion reactions produce hydrogen and form aluminium hydroxides, while the formation of an oxide film upon exposure to air or water further limits functionality.
Improving Al anode performance involves optimizing grain size and crystal orientation, as finer grain structures enhance corrosion resistance and electrochemical activity. The study done by Fan and Lu examined the relation between the grain size and the anode performance. In this study, aluminium anodes with finer grain sizes were created using a method called Equal Channel Angular Pressing (ECAP). As the number of extrusion passes increased, the grains became smaller and more uniform. However, the process had limitations due to heat from deformation causing some grain growth. The results showed that refining the grain size improved the anode’s electrochemical activity, reduced corrosion, and increased polarization and charge-transfer resistance. Tests confirmed that the anode with fine grains performed better than one with larger grains. The fine-grain structure also provided better anti-corrosion properties and enhanced battery performance in a 4 mol/L NaOH solution. At a current density of 10 mA/cm², the fine-grain anode showed a 41.5% increase in capacity density and a 55.5% increase in energy density compared to the coarse-grain anode. Besides microstructure optimization, processing of the anodes can also impact the performance. Anodes fabricated using laser sintering show increased capacity compared to non-sintered samples, which highlights the importance of processing of the anode in terms of the anode performance.
In addition to refining the microstructure and developing better processing methods, alloying Al with elements like Ga, Zn, and Sn helps mitigate corrosion and hydrogen evolution. Zinc, in particular, is widely recognized as a beneficial alloying element in Al-air battery anodes because it helps reduce the self-corrosion rate and increases the nominal cell voltage. However, study done by Park, Choi, and Kim highlights a drawback: the addition of Zn can actually decrease the discharge performance of the anode in alkaline solutions due to passivation effects during anodic polarization. Specifically, Zn promotes the formation of two types of oxidation films. The first, Type 1 film, is a porous layer composed of Zn(OH)₂ and defective ZnO, which forms when dissolved Zn(OH)₄²⁻ precipitates from the bulk electrolyte. The second, Type 2 film, is a compact, protective layer of ZnO that forms directly from the oxidation of the metal surface and is more stable. This Type 2 film creates a passivation layer that impairs the discharge performance of the Al-air battery. However, the addition of indium (In) helps break down and destabilize this Zn passive layer. The In ions repeatedly create defects within the Type 2 film through a cycle of breakdown and re-passivation, effectively weakening the protective barrier and enhancing the battery's discharge efficiency. As a result, using an Al-Zn-In ternary alloy anode, produced from commercially available aluminium rather than expensive high-purity aluminium, presents a cost-effective solution with improved performance.
Copper-deposited Al alloys have also shown promise as an anode material, forming protective layers that decrease hydrogen evolution and enhance discharge performance. A study done by Mutlu and Yazıcı shows that copper electrodeposition helps lower the charge-transfer resistance of aluminium anodes. This is because certain compounds (like Al(OH)₂⁺, Al₇(OH)₁₇⁴⁺, Al₂(OH)₂⁴⁺, and Al₁₃(OH)₃₄⁵⁺) build up on the surface and create resistance. In contrast, Al(OH)₃ dissolves in alkaline solutions, forming Al(OH)₄⁻, which has a lower dissolution rate and maintains a balance between Al(OH) and Al(OH)₄⁻. Copper helps remove these compounds from the surface, reducing resistance and improving discharge performance. Additionally, the resistance of aluminium oxide is higher than that of the copper-aluminium combination, so copper reduces the film’s resistance and makes it more durable. Overall, advancements in alloy composition and fabrication methods are critical for maximizing the efficiency and cost-effectiveness of Al anodes.
Commercialization
Issues
Aluminium as a "fuel" for vehicles has been studied by Yang and Knickle. In 2002, they concluded:
Technical problems remain to be solved to make Al–air batteries suitable for electric vehicles. Anodes made of pure aluminium are corroded by the electrolyte, so the aluminium is usually alloyed with tin or other elements. The hydrated alumina that is created by the cell reaction forms a gel-like substance at the anode and reduces the electricity output. This is an issue being addressed in the development work on Al–air cells. For example, additives that form the alumina as a powder rather than a gel have been developed.
Modern air cathodes consist of a reactive layer of carbon with a nickel-grid current collector, a catalyst (e.g., cobalt), and a porous hydrophobic polytetrafluoroethylene film that prevents electrolyte leakage. The oxygen in the air passes through the polytetrafluoroethylene then reacts with the water to create hydroxide ions. These cathodes work well, but they can be expensive.
Traditional Al–air batteries had a limited shelf life, because the aluminium reacted with the electrolyte and produced hydrogen when the battery was not in use; this is no longer the case with modern designs. The problem can be avoided by storing the electrolyte in a tank outside the battery and transferring it to the battery when it is required for use.
These batteries can be used as reserve batteries in telephone exchanges and as backup power sources.
Another problem is the cost of materials that need to be added to the battery to avoid a drop in power output. Aluminium is still very cheap compared to other elements used to build batteries. Aluminium costs $2.51 per kilogram while lithium and nickel cost $12.59 and $17.12 per kilogram respectively. However, one other element typically used in aluminium–air batteries, as a catalyst in the cathode, is silver, which costs about $922 per kilogram (2024 prices).
Aluminium–air batteries may become an effective solution for marine applications due to their high energy density, low cost, and the abundance of aluminium, with no emissions at the point of use in boats and ships.
AlumaPower, Phinergy Marine, Log 9 Materials, RiAlAiR and several other commercial companies are working on commercial and military applications in the marine environment.
Research and development is taking place on alternative, safer, and higher performance electrolytes such as organic solvents and ionic liquids. Others such as AlumaPower are focusing on mechanical methods to mitigate many of the historical issues with Al-air batteries. AlumaPower's patent illustrates a method that rotates the anode, which eliminates wear patterns and corrosion of the anode. The patent further claims that the design can use any scrap aluminium, including remelted soda cans and engine blocks.
See also
List of battery types
Zinc–air battery
Potassium-ion battery
Metal–air electrochemical cell
Aluminium-ion battery
Aluminium battery
References
External links
Aluminium battery from Stanford offers safe alternative to conventional batteries
Aluminium battery can charge phone in one minute, scientists say
Simple homemade aluminum-air battery
Electrochemical cells
Aluminium
Metal–air batteries
Disposable batteries | Aluminium–air battery | [
"Chemistry"
] | 2,141 | [
"Electrochemistry",
"Electrochemical cells"
] |
7,981,806 | https://en.wikipedia.org/wiki/Code%20rate | In telecommunication and information theory, the code rate (or information rate) of a forward error correction code is the proportion of the data-stream that is useful (non-redundant). That is, if the code rate is k/n, for every k bits of useful information the coder generates a total of n bits of data, of which n − k are redundant.
If R is the gross bit rate or data signalling rate (inclusive of redundant error coding), the net bit rate (the useful bit rate exclusive of error correction codes) is at most R·k/n.
For example: The code rate of a convolutional code will typically be 1/2, 2/3, 3/4, 5/6, 7/8, etc., corresponding to one redundant bit inserted after every single, second, third, etc., bit. The code rate of the octet oriented Reed Solomon block code denoted RS(204,188) is 188/204, meaning that 16 redundant octets (or bytes) are added to each block of 188 octets of useful information.
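These definitions reduce to simple arithmetic; the sketch below (an added illustration) computes the code rate and the corresponding net bit rate for the examples mentioned above.

```python
# Simple arithmetic behind the definitions above: code rate k/n and the net
# (useful) bit rate implied by a given gross bit rate R.
def code_rate(k: int, n: int) -> float:
    return k / n

def net_bit_rate(gross_bit_rate: float, k: int, n: int) -> float:
    return gross_bit_rate * code_rate(k, n)

print(code_rate(188, 204))              # RS(204,188): ~0.922, i.e. 16 parity octets per block
print(net_bit_rate(2_000_000, 1, 2))    # a rate-1/2 code on a 2 Mbit/s link leaves 1 Mbit/s useful
```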
A few error correction codes do not have a fixed code rate—rateless erasure codes.
Note that bit/s is a more widespread unit of measurement for the information rate, implying that it is synonymous with net bit rate or useful bit rate exclusive of error-correction codes.
See also
Entropy rate
Information rate
Punctured code
References
Information theory
Rates | Code rate | [
"Mathematics",
"Technology",
"Engineering"
] | 265 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science stubs",
"Computer science",
"Information theory",
"Computing stubs"
] |
14,296,821 | https://en.wikipedia.org/wiki/Global%20meteoric%20water%20line | The Global Meteoric Water Line (GMWL) describes the global annual average relationship between hydrogen and oxygen isotope (oxygen-18 [¹⁸O] and deuterium [²H]) ratios in natural meteoric waters. The GMWL was first developed in 1961 by Harmon Craig, and has subsequently been widely used to track water masses in environmental geochemistry and hydrogeology.
Development and definition of GMWL
When working on the global annual average isotopic composition of ¹⁸O and ²H in meteoric water, geochemist Harmon Craig observed a correlation between these two isotopes, and subsequently developed and defined the equation for the GMWL:

δ²H = 8 δ¹⁸O + 10 ‰

Where δ¹⁸O and δ²H (aka δD) are the ratios of heavy to light isotopes (e.g. ¹⁸O/¹⁶O, ²H/¹H).
The relationship of δ¹⁸O and δ²H in meteoric water is caused by mass dependent fractionation of oxygen and hydrogen isotopes between evaporation from ocean seawater and condensation from vapor. As oxygen isotopes (¹⁶O, ¹⁸O) and hydrogen isotopes (¹H, ²H) have different masses, they behave differently in the evaporation and condensation processes, and thus result in the fractionation between ¹⁸O and ¹⁶O as well as ²H and ¹H. Equilibrium fractionation causes the isotope ratios of δ¹⁸O and δ²H to vary between localities within the area. The fractionation processes can be influenced by a number of factors including: temperature, latitude, continentality, and most importantly, humidity.
Applications
Craig observed that the δ¹⁸O and δ²H isotopic composition of cold meteoric water from sea ice in the Arctic and Antarctica is much more negative than that of warm meteoric water from the tropics. A correlation between temperature (T) and δ¹⁸O was proposed later in the 1970s. Such a correlation is then applied to study surface temperature change over time. The δ¹⁸O of ancient meteoric water, preserved in ice cores, can also be collected and applied to reconstruct paleoclimate.
A meteoric water line can be calculated for a given area, named the local meteoric water line (LMWL), and used as a baseline within that area. A local meteoric water line can differ from the global meteoric water line in slope and intercept. Such deviation in slope and intercept results largely from humidity. In 1964, the concept of deuterium excess d (d = δ²H − 8δ¹⁸O) was proposed. Later, a parameter of deuterium excess as a function of humidity was established; as such, the isotopic composition of local meteoric water can be applied to trace local relative humidity, study local climate, and serve as a tracer of climate change.
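The deuterium-excess calculation defined above is straightforward to apply to measured samples; the sketch below (an added example with hypothetical δ values in per mil) illustrates it.

```python
# Illustrative sketch (hypothetical sample values, in per mil) of the
# deuterium-excess calculation defined above: d = delta2H - 8 * delta18O.
samples = {                 # name: (delta18O, delta2H)
    "tropical rain": (-3.0, -14.0),
    "polar snow": (-30.0, -230.0),
}

for name, (d18o, d2h) in samples.items():
    d_excess = d2h - 8.0 * d18o
    print(f"{name}: d-excess = {d_excess:+.1f} per mil")   # both fall on the GMWL (d = +10)
```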
In hydrogeology, the δ¹⁸O and δ²H of groundwater are often used to study the origin of groundwater and groundwater recharge.
It has been shown that, even taking into account the standard deviation related to instrumental errors and the natural variability of the amount-weighted precipitations, the LMWL calculated with the EIV (error in variable regression) method has no differences on the slope compared to classic OLSR (ordinary least square regression) or other regression methods. However, for certain purposes such as the evaluation of the shifts from the line of the geothermal waters, it would be more appropriate to calculate the so-called "prediction interval" or "error wings" related to LMWL.
See also
Isotope fractionation
Meteoric water
Water cycle
References
Precipitation
Deuterium
Isotopes of hydrogen
Isotopes of oxygen
Hydrology | Global meteoric water line | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 711 | [
"Hydrology",
"Isotopes of hydrogen",
"Isotopes",
"Environmental engineering",
"Isotopes of oxygen"
] |
14,297,190 | https://en.wikipedia.org/wiki/Robert%20N.%20Klein%20II | Robert Nicholas "Bob" Klein II is a stem cell advocate. He initiated California Proposition 71, which succeeded in establishing the California Institute for Regenerative Medicine, of which Klein was the chairman of the governing board.
Before getting involved in stem cell advocacy, he was a housing developer and lawyer. He lives in Portola Valley, California and works in Palo Alto, where he used to live.
Stem cell advocacy
He was a chief author of Proposition 71 and was the chair of the Yes on 71 campaign. He donated $3 million to the cause, the largest donation, and ran the campaign from the Klein Financial Corporation.
After the election, Proposition 71 became Article XXXV of the California Constitution and the Yes on 71 campaign became the California Research and Cures Coalition, a stem cell advocacy organization. Klein was the head of that organization until he took the position at the California Institute for Regenerative Medicine, the organization created by the ballot initiative. In 2005, he was named one of TIME Magazine's 100 Most Influential People, and that same year Scientific American named Klein one of “The Scientific American 50” as a leader shaping the future of science. Klein was honored at the 2010 BIO International Convention as the second annual Biotech Humanitarian. Also in 2010, Klein received the Research!America Gordon and Llura Gund Leadership Award for his advocacy of stem cell and diabetes research.
In 2020, the original funding for the Institute for Regenerative Medicine had run out, so Klein spearheaded another initiative to fund it, known as Proposition 14.
Early career
Klein has a Bachelor of Arts in History with Honors from Stanford University and a Juris Doctor from Stanford University Law School, 1970. Additional education includes: Executive Summer Finance Program at Stanford University Business School and an internship with the United Nations Economic and Social Council in Switzerland on Economic Development Policy.
Soon after graduating law school, he joined the firm of William Glikbarg, a Southern California housing developer who also taught housing law at Stanford.
He made his multimillion-dollar fortune primarily in the Modesto area, of the Central Valley, CA, developing low-income housing. He included market-rate units within subsidized projects to help generate financing for projects.
When Nixon administration housing secretary George W. Romney ended public housing subsidies in January 1973, Klein and an associate, Michael J. BeVier, successfully persuaded the California legislature to create the California Housing Finance Agency, which subsidizes housing developments with low-interest bonds. (Klein did not use CHFA money in his real estate deals to eliminate the potential for a conflict of interest.) BeVier wrote about this in the book "Politics Backstage."
Personal life
Robert lives in Portola Valley with his wife Danielle Guttman Klein, as well as her daughter Alyssa. He has two sons and a daughter: Robert, Jordan, and Lauren. Lauren and her husband, Daryl Baltazar have one son named Bennett. Robert cites his son Jordan's autoimmune-mediated (type 1) diabetes as a primary source of his involvement in stem cell research.
Klein's father Robert Klein Sr. (Harvard, UCLA) was an administrator of San Jose, Fresno, Santa Cruz and Menlo Park.
See also
Proposition 71
California Institute for Regenerative Medicine
References
External links
Bob Klein Public Policy Profile
Klein Financial Corporation
California Institute for Regenerative Medicine homepage
Stem cell research
Living people
People from Portola Valley, California
American health activists
Year of birth missing (living people) | Robert N. Klein II | [
"Chemistry",
"Biology"
] | 708 | [
"Translational medicine",
"Tissue engineering",
"Stem cell research"
] |
14,297,624 | https://en.wikipedia.org/wiki/Y%20box%20binding%20protein%201 | Y box binding protein 1 also known as Y-box transcription factor or nuclease-sensitive element-binding protein 1 is a protein that in humans is encoded by the YBX1 gene. YBX1 is an RNA binding protein that stabilises messenger RNAs modified with N6-methyladenosine.
Clinical significance
YBX1 is a potential drug target in cancer therapy. YB-1 helps the replication of adenovirus type 5, a commonly used vector in gene therapy. Thus, YB-1 can cause an "oncolytic" effect in YB-1 positive cancer cells treated with adenoviruses.
Interactions
Y box binding protein 1 has been shown to interact with:
ANKRD2,
CTCF,
P53,
PCNA,
RBBP6, and
SFRS9.
References
Further reading
External links
Transcription factors | Y box binding protein 1 | [
"Chemistry",
"Biology"
] | 180 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,300,597 | https://en.wikipedia.org/wiki/The%20Steam%20House | The Steam House () is an 1880 Jules Verne novel recounting the travels of a group of British colonists in the Raj in a wheeled house pulled by a steam-powered mechanical elephant. Verne uses the mechanical house as a plot device to have the reader travel in nineteenth-century India. The descriptions are interspersed with historical information and social commentary.
The book takes place in the aftermath of the Indian Rebellion of 1857 against British rule, with the passions and traumas aroused still very much alive among Indians and British alike. An alternate title by which the book was known—"The End of Nana Sahib"—refers to the appearance in the book of the historical figure—rebel leader Nana Sahib—who disappeared after the crushing of the rebellion, his ultimate fate unknown. Verne offers a fictional explanation to his disappearance.
Plot
Part I – The Demon of Cawnpore
In the summer of 1866, in Aurangabad, the British colonial government announces a bounty on the head of Nana Sahib, who is supposed to be hiding in that presidency. Nana Sahib, disguised as a sage, stalks and kills the man who claims to know Nana Sahib's face. Nana Sahib escapes from Aurangabad the same night and, taking his brother Bala Rao and his followers, hidden in the Ajanta and Ellora caves respectively, retreats to the Vindhyachal mountains to hide from colonial forces.
Nana Sahib, along with his brother and followers, hides in various small fortresses called pals, mostly inside the Pal of Tandil. His brother Bala Rao, who is extremely similar to Nana Sahib in physical appearance, inquires about the inhabitants of the fortress and learns from locals that no one except local outlaws, insurgents and a mad woman knows about the place. The mad woman is known as Rowing Flame, as she carries a burning torch and roams the wilderness in the valley of the Narmada. The locals respect the mysterious lady and feed and clothe her. From this hiding place, Nana Sahib launches an underground movement and secretly visits local chieftains to persuade them to join an uprising.
Meanwhile, in Calcutta, a group of Europeans is planning for a voyage through India. The group consist of Banks, a railroad engineer; Maucler, the French adventurer and narrator for most of the story; Captain Hood, a hunter craving his half century of tigers, retired Colonel Sir Edward Munro, whose motive behind joining this expedition is to find and kill Nana Sahib to avenge his wife, who supposedly died in the Cawnpore massacre. Servants accompanying them include Sergeant McNeil, Munro's faithful servant; Fox, the faithful servant of Captain Hood and fellow hunter, who has killed 37 tigers; Monsieur Parazard; a Negro cook of French origins; Storr, a British Engine driver; Kilouth, an Adivasi coal shoveler and Gotimi, the faithful Gurkha servant of Colonel Munro.
Banks, the engineer, introduces the machine he invented, a steam-powered mechanical elephant, which pulls two comfortable carriages having all the comforts of a 19th-century house. The machine can walk across land and float across rivers using embedded paddle wheels. The steam elephant is named Behemoth and, together with the two carriages, it is called the Steam House. The first carriage is used by the gentlemen, while the other is reserved for the servants. They start from Calcutta and travel past the French town of Chandannagar, then Burdawan, Patna and Chitra, reaching Gaya, where they visit various Hindu and Buddhist temples and bathing ghats. On the way to Banaras they are interrupted by Hindu fanatics who consider the Steam House to be the chariot of their deity. Banks frightens them away by directing steam exhaust at them. In Banaras, Banks and Maucler notice a man spying on them but resolve not to tell the Colonel. From Banaras, they travel to Allahabad, where they learn that Nana Sahib has been declared dead after a skirmish in the defiles of Satpura. Colonel Munro is shocked by this news, as he wanted to take revenge himself. At Munro's request, they decide to pass through Kanpur, where an emotional Colonel visits his old house and the well which is supposedly the grave of Mrs. Munro and other victims of the massacre. The group decides to journey towards a northern forest, and pass the monsoon season there, hunting wild animals. On the way to Terai, they defeat three elephants of an arrogant Gujarati prince in a competition with Behemoth. Near Terai, they are caught in a violent thunderstorm and Gautami narrowly survives after being struck by lightning. The man who was spying on the Steam House meets Nana in Bhopal and informs him of the further plans of the inhabitants of the Steam House. Nana orders his faithful follower Kalagni to infiltrate the Steam House and lure them near Nana Sahib's hiding place. While returning to their hiding place near the Pal of Tandil, Nana Sahib's band is ambushed by British forces, who were unwittingly guided there by the madwoman Rowing Flame. A body matching the description of Nana Sahib is found and he is declared dead by the British authorities.
Part II – Tigers and Traitors
The inhabitants of the Steam House camp on a plateau in Terai. During a hunting expedition, they rescue Mathias Van Guitt, an animal purveyor, from his own trap. They visit the kraal of Van Guitt, where Colonel Munro is saved from a poisonous snake by one of Van Guitt's servants, Kalagni. The Steam House dwellers frequently visit the kraal and invite Van Guitt to the Steam House. Van Guitt tries to capture animals, while the inhabitants of the Steam House hunt animals. One night, tigers and other predatory animals attack the kraal. The protagonists narrowly escape death but many Indian servants are killed. The buffaloes are either killed by animals or driven away into the jungle. Consequently, Van Guitt has the protagonists drag his caravan of cages to the nearest railway station. After reaching the station and loading his cargo, Van Guitt and the protagonists part ways. The protagonists employ Kalagni as a guide and head for Bombay through Central India. During the journey through the jungles, they encounter a herd of monkeys and a grain transport caravan. Kalagni meets an old acquaintance in the caravan and chats mysteriously with him. On their way to Jabalpur in the jungle, they are cornered and attacked by a herd of elephants, which results in the loss of the second carriage. To escape from the herd, Banks drives the Steam House into Lake Puturia. All the food and provisions are lost with the second carriage and after some time, the fuel is exhausted, leaving the Steam House floating in the middle of the lake. Kalagni volunteers to swim to shore and fetch help. Colonel Munro, suspecting him, sends his faithful servant Gautami with him. Both swim to shore while the Steam House slowly drifts in the fog. As soon as they reach the shore, Kalagni meets Nassim, a follower of Nana, and tries to attack Goumi, who swiftly escapes. With the morning breeze, the Steam House drifts towards the bank. As the protagonists land, they are attacked by a group of men led by Kalagni and Nassim, who kidnap Colonel Munro, leaving the others bound with ropes. Colonel Munro is taken to an abandoned fort, where Nana Sahib shows up and reveals the reality of the news of his death. The dead person who was identified as Nana Sahib was actually his look-alike brother, Bala Rao. Due to their physical similarity, the British authorities mistook Bala Rao for Nana Sahib. Nana Sahib proclaims death for Colonel Munro to avenge the death of his brother, the members of the royal family of the last Mughal emperor Bahadur Shah II, and the other victims of the British suppression of the Indian Rebellion of 1857. Colonel Munro is tied to the mouth of a large cannon, to be shot at sunrise. Nana leaves for a meeting in a nearby village. Near dawn, Munro is rescued by Goumi, who had hidden himself inside the cannon after running away from Lake Puturia and overhearing the plans of the rebels. As they are escaping, they encounter Rowing Flame. Colonel Munro recognizes her as his wife Lady Munro, but she has lost her sanity, does not recognize him, and refuses to go with him. Sparks from her torch cause the cannon to go off. Munro and Goumi escape with Lady Munro while the people in the fort are confused. But soon they are spotted by Kalagni and his men and encounter Nana Sahib on his way back to the fort. Goumi and Munro quickly overpower Nana and his assistant. As they are being chased by the men led by Kalagni, they are rescued by the other protagonists riding on Behemoth.
They take Nana Sahib prisoner and are chased through the jungle. Captain Hood and Sergeant McNeil shoot down many of their adversaries, including Kalagni. As they near a military outpost, Banks supercharges the boiler and the protagonists leave the Behemoth, leaving the bound Nana Sahib inside the machine. As the pursuing men approach the machine, the boiler bursts, leaving everyone near it dead, although Nana's body is not found. The protagonists are rescued by the stationed regiment as the rest of the insurgents flee into the interior. They head for Mumbai via railway and then to Calcutta. In the care of Colonel Munro, Lady Munro regains her sanity and memory. When Munro tells Hood that he was unable to achieve his target of killing 50 tigers, Hood replies that Kalagni was his 50th tiger.
Alternative titles
The novel is usually published in two volumes or parts.
Demon of Cawnpore (Part 1 of 2)
Demon of the Cawnpore (Part 1 of 2)
Steam House (Part I) The Demon of Cawnpore
Steam House (Part II) Tigers and Traitors
Tigers and Traitors (Part 2 of 2)
Tigers and Traitors, Steam House (Part 2 of 2)
The End of Nana Sahib
See also
History of steam road vehicles
History of The League of Extraordinary Gentlemen
External links
The End Of Nana Sahib The Steam House
Maison a vapeur - 1880
1880 French novels
Steam power
Novels about elephants
Novels about the Indian Rebellion of 1857
Novels by Jules Verne
Novels set in British India
Novels set in India
Novels set in Kolkata
Novels set in Patna
Novels set in Varanasi
Novels set in Mumbai
Aurangabad, Maharashtra
Kanpur
Chandannagar
Bardhaman
Gaya, India
Ghats of India
Bhopal
Jabalpur
Cultural depictions of Indian people
Animal trapping
Adivasi literature
Religion in science fiction
Steampunk novels | The Steam House | [
"Physics"
] | 2,167 | [
"Power (physics)",
"Steam power",
"Physical quantities"
] |
14,303,643 | https://en.wikipedia.org/wiki/C8H6 | The molecular formula C8H6 (molar mass: 102.13 g/mol, exact mass: 102.0470 u) may refer to:
Benzocyclobutadiene
Pentalene
Phenylacetylene
Calicene, or triapentafulvalene
Cubene
Molecular formulas | C8H6 | [
"Physics",
"Chemistry"
] | 67 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
6,074,076 | https://en.wikipedia.org/wiki/Viral%20entry | Viral entry is the earliest stage of infection in the viral life cycle, as the virus comes into contact with the host cell and introduces viral material into the cell. The major steps involved in viral entry are shown below. Despite the variation among viruses, there are several shared generalities concerning viral entry.
Reducing cellular proximity
How a virus enters a cell is different depending on the type of virus it is. A virus with a nonenveloped capsid enters the cell by attaching to the attachment factor located on a host cell. It then enters the cell by endocytosis or by making a hole in the membrane of the host cell and inserting its viral genome.
Cell entry by enveloped viruses is more complicated. Enveloped viruses enter the cell by attaching to an attachment factor located on the surface of the host cell. They then enter by endocytosis or a direct membrane fusion event. The fusion event is when the virus membrane and the host cell membrane fuse together allowing a virus to enter. It does this by attachment – or adsorption – onto a susceptible cell; a cell which holds a receptor that the virus can bind to, akin to two pieces of a puzzle fitting together. The receptors on the viral envelope effectively become connected to complementary receptors on the cell membrane. This attachment causes the two membranes to remain in mutual proximity, favoring further interactions between surface proteins. This is also the first requisite that must be satisfied before a cell can become infected. Satisfaction of this requisite makes the cell susceptible. Viruses that exhibit this behavior include many enveloped viruses such as HIV and herpes simplex virus.
These basic ideas extend to viruses that infect bacteria, known as bacteriophages (or simply phages). Typical phages have long tails used to attach to receptors on the bacterial surface and inject their viral genome.
Overview
Prior to entry, a virus must attach to a host cell. Attachment is achieved when specific proteins on the viral capsid or viral envelope bind to specific proteins called receptor proteins on the cell membrane of the target cell. A virus must now enter the cell, which is covered by a phospholipid bilayer, a cell's natural barrier to the outside world. The process by which this barrier is breached depends upon the virus. Types of entry are:
Membrane fusion or Hemifusion state: The cell membrane is punctured and made to further connect with the unfolding viral envelope.
Endocytosis: The host cell takes in the viral particle through the process of endocytosis, essentially engulfing the virus like it would a food particle.
Viral penetration: The viral capsid or genome is injected into the host cell's cytoplasm.
Through the use of green fluorescent protein (GFP), virus entry and infection can be visualized in real-time. Once a virus enters a cell, replication is not immediate and indeed takes some time (seconds to hours).
Entry via membrane fusion
The most well-known example is through membrane fusion. In a number of viruses with a viral envelope, viral receptors attach to the receptors on the surface of the cell and secondary receptors may be present to initiate the puncture of the membrane or fusion with the host cell. Following attachment, the viral envelope fuses with the host cell membrane, causing the virus to enter. Viruses that enter a cell in this manner included HIV, KSHV and herpes simplex virus.
In SARS-CoV-2 and similar viruses, entry occurs through membrane fusion mediated by the spike protein, either at the cell surface or in vesicles. Research efforts have focused on the spike protein's interaction with its cell-surface receptor, angiotensin-converting enzyme 2 (ACE2). The spike protein's evolved, high level of cell-to-cell fusion activity results in an enhanced fusion capacity. Current prophylaxis against SARS-CoV-2 infection targets the spike (S) proteins that harbor the capacity for membrane fusion. Vaccinations are based on blocking the interaction of the viral S glycoprotein with the cell, thus stopping the fusion of the virus and host cell membranes. The fusion mechanism is also studied as a potential target for antiviral development.
Entry via endocytosis
Viruses with no viral envelope enter the cell generally through endocytosis; they “trick” the host cell to ingest the virions through the cell membrane. Cells can take in resources from the environment outside of the cell, and these mechanisms may be exploited by viruses to enter a cell in the same manner as ordinary resources. Once inside the cell, the virus leaves the host vesicle by which it was taken up and thus gains access to the cytoplasm. Examples of viruses that enter this way include the poliovirus, hepatitis C virus, and foot-and-mouth disease virus.
Many enveloped viruses, such as SARS-CoV-2, also enter the cell through endocytosis. Entry via the endosome guarantees low pH and exposure to proteases which are needed to open the viral capsid and release the genetic material inside the host cytoplasm. Further, endosomes transport the virus through the cell and ensure that no trace of the virus is left on the surface, which could otherwise trigger immune recognition by the host.
Entry via genetic injection
A third method is by simply attaching to the surface of the host cell via receptors on the cell with the virus injecting only its genome into the cell, leaving the rest of the virus on the surface. This is restricted to viruses in which only the genome is required for infection of a cell (for example positive-strand RNA viruses because they can be immediately translated) and is further restricted to viruses that actually exhibit this behavior. The best studied example includes the bacteriophages; for example, when the tail fibers of the T2 phage land on a cell, its central sheath pierces the cell membrane and the phage injects DNA from the head capsid directly into the cell.
Outcomes
Once a virus is in a cell, it will activate formation of proteins (either by itself or using the host’s machinery) to gain full control of the host cell, if possible. Control mechanisms include the suppression of intrinsic cell defenses, suppression of cell signaling and suppression of host cellular transcription and translation. Often, these cytotoxic effects lead to the death and decline of a cell infected by a virus.
A cell is classified as susceptible to a virus if the virus is able to enter the cell. After the introduction of the viral particle, unpacking of the contents (viral proteins in the tegument and the viral genome via some form of nucleic acid) occurs as preparation of the next stage of viral infection: viral replication.
References
Virology
Viral life cycle | Viral entry | [
"Biology"
] | 1,390 | [
"Viral life cycle"
] |
6,074,570 | https://en.wikipedia.org/wiki/International%20Association%20of%20Oil%20%26%20Gas%20Producers | The International Association of Oil & Gas Producers (IOGP) is the petroleum industry's global forum in which members identify and share best practices to achieve improvements in health, safety, the environment, security, social responsibility, engineering and operations.
The association was formed in London in 1974 to develop effective communications between the upstream industry and the network of international regulators. Originally called the E&P Forum (for oil and gas exploration and production), in 1999 the current name was adopted. Most of the world’s leading publicly traded, private and state-owned oil & gas companies, oil & gas associations and major upstream service companies are members. The IOGP claims its members produce 40% of the world’s oil and gas.
Co-operation with other bodies
IOGP also represent the interests of the upstream industry before international regulators and legislators in UN bodies such as the International Maritime Organization and the Commission for Sustainable Development. IOGP also works with the World Bank and with the International Organization for Standardization (ISO). It is also accredited to a range of regional bodies that include OSPAR, the Helsinki Commission and the Barcelona Convention, and provides a conduit for advocacy and debate between the upstream industry and the European Union (EU). This involves regular contact with the European Commission and the European Parliament.
IOGP data reports
Every year, IOGP collects and publishes data on upstream operations worldwide, both onshore and offshore, from participating member companies and their contractor employees. The reports are free and publicly available. The data covers:
Occupational safety
Environmental performance
Process safety events
Health management
Land transport safety
Aviation safety
Occupational safety:
Since 1985, when IOGP started reporting annual trends in upstream safety data, there have been considerable improvements in industry performance.
Today, it is the industry’s largest database of safety performance, covering participating member company employees and their contractors onshore and offshore, worldwide.
Fatal incidents are analysed by incident category, activity and associated causal factors, and incident descriptions are provided for fatal incidents and high potential events.
Environmental performance:
IOGP has collected and published environmental data from its participating member companies on an annual basis since 2001. The objectives of this programme are to allow member companies to compare their performance with other companies in the sector; and increase transparency of industry operations.
The reports aggregate information at both global and regional levels, expressed within six environmental indicator categories:
Gaseous emissions
Energy consumption
Flaring
Aqueous discharges
Non-aqueous drilling fluids retained on cuttings discharged to sea
Spills of oil and chemicals
Process safety events: Process safety is a disciplined framework for managing the integrity of operating systems and processes that handle hazardous substances. It relies on good design principles, engineering and operating and maintenance practices. The process safety events (PSE) data are based on the numbers of Tier 1 and Tier 2 process safety events reported by participating IOGP member companies, separately for:
Onshore and offshore
Drilling and production
Activities
Consequences
Material released
The data are normalized against work hours associated with drilling and production activities to provide PSE rates.
Health management: IOGP (with IPIECA) has developed two tools to assess health leading performance indicators within individual companies. These enable performance comparison between different parts of a company and between participating companies. The annual health leading performance indicators report illustrates the results submitted by participating companies for both tools and includes actual anonymous results for the year by company, trends over time and the potential benefits to health management in the industry.
Land transport safety
In April 2005, IOGP published Report No. 365, Land transportation safety recommended practice, a guideline designed to be applicable to all land transportation activities in the upstream oil and gas industry, including operators, contractors and subcontractors. IOGP collects data on motor vehicle crashes and information submitted by participating member companies are published from 2008 onwards. Data are broken down by region and crash category. Data are further grouped to indicate the number of crashes that resulted in a rollover. This includes:
Number of Motor Vehicle Crash (MVC) fatalities
Number of Motor Vehicle Crashes for each reporting group and category
Motor vehicle crash rate (Motor Vehicle Crashes per million kilometres) for each reporting group and category.
Global Production Report
IOGP publishes a Global Production Report. First published in 2018 it is updated annually. It is based on the latest BP Statistical Review of World Energy and establishes an IOGP Production Indicator© (PI) – the level at which a region is able to meet its own oil or gas demand – for seven regions across the world. A PI higher than 100% means the region produces more than it needs to meet its own requirements and so can export.
The main conclusion of the report is that demand growth and the annual depletion rate of 6% of existing fields are driving the need for investment to gain additional volumes. Such investment will depend on regional and local policies that encourage responsible resource development.
European Petroleum Survey Group
In 2005, IOGP absorbed the European Petroleum Survey Group or EPSG (1986–2005) into its structure as the IOGP Geomatics Committee. EPSG was a scientific organization with ties to the European petroleum industry consisting of specialists working in applied geodesy, surveying, and cartography related to oil exploration. The EPSG Geodetic Parameter Dataset is a widely used database of Earth ellipsoids, geodetic datums, geographic and projected coordinate systems, units of measurement, etc., which was originally created by EPSG and still carries the EPSG initials to this day.
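For illustration, the EPSG identifiers maintained in this dataset are what GIS software uses to look up coordinate reference systems. A minimal sketch using the third-party pyproj library (not mentioned above; shown only as one common way of consuming the EPSG registry):

```python
from pyproj import CRS, Transformer  # pip install pyproj

# EPSG:4326 is the identifier of the WGS 84 geographic coordinate system
wgs84 = CRS.from_epsg(4326)
print(wgs84.name)  # "WGS 84"

# Reproject a longitude/latitude pair into EPSG:32631 (UTM zone 31N)
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32631", always_xy=True)
easting, northing = to_utm.transform(2.2945, 48.8584)
print(round(easting), round(northing))
```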
IOGP Outstanding Young Professional Award (OYPA)
The award, in association with the biennial SPE HSSE-SR International Conference, recognizes the achievements of an individual with fewer than 10 years of professional E&P experience, who demonstrates professional accomplishments and evidence of outstanding talent, dedication and leadership in at least one aspect of health, safety, security, the environment and/or social responsibility.
2020 winner:
David Ochanda, Biodiversity Coordinator, Total (Kampala, Uganda)
Finalists:
Kelly Giang, Subsea Engineer, BP (Houston, USA)
Delina Lyon, Ecotoxicologist, Shell (Houston, USA)
Saul Moorhouse, Technology Capability Lead, BP (London, UK)
Lauren Prigent, HSE Advisor, Neptune Energy (Aberdeen, UK)
Stephanie Seewald, Drilling & Completions HSE Team Lead, Chevron (Houston, USA)
Xeniya Yurkavets, Lead HSE Specialist, KazMunayGas (Nur-Sultan, Kazakhstan)
Extracts of their presentations can be viewed at IOGP's OYPA webpage.
2018 winner: Marcin Nazaruk, BP. Finalists: Mohammed A. Al-Ghazal, Saudi Aramco; Jessica Guzzetta-King, Genesis; Cedric Michel, Total; Natasha Sihota, Chevron; Josh R Townsend, BP.
2016 winner: Muriel Barnier, Schlumberger.
Finalists: Yu Chen, CNOOC; Bev Coleman, Chevron; Omar De Leon, ExxonMobil; and Emma Thomson, BP.
References
External links
IOGP website
EPSG Geodetic Parameter online Registry
International energy organizations
International organisations based in London
Organisations based in the City of London
Petroleum industry
Petroleum organizations
Lobbying organizations in Europe | International Association of Oil & Gas Producers | [
"Chemistry",
"Engineering"
] | 1,454 | [
"Petroleum industry",
"Petroleum",
"Petroleum organizations",
"International energy organizations",
"Chemical process engineering",
"Energy organizations"
] |
12,634,546 | https://en.wikipedia.org/wiki/Critical%20Reviews%20in%20Biomedical%20Engineering | Critical Reviews in Biomedical Engineering is a bimonthly peer-reviewed scientific journal published by Begell House covering biomedical engineering, bioengineering, clinical engineering, and related subjects. The editor-in-chief is Chenzhong Li.
External links
Biomedical engineering journals
Bimonthly journals
English-language journals
Begell House academic journals | Critical Reviews in Biomedical Engineering | [
"Engineering",
"Biology"
] | 68 | [
"Biological engineering",
"Bioengineering stubs",
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
12,636,409 | https://en.wikipedia.org/wiki/Nuclear%20pore%20glycoprotein%20p62 | Nuclear pore glycoprotein p62
is a protein complex associated with the nuclear envelope. The p62 protein remains associated with the nuclear pore complex-lamina fraction. p62 is synthesized as a soluble cytoplasmic precursor of 61 kDa followed by modification that involve addition of N-acetylglucosamine residues, followed by association with other complex proteins. In humans it is encoded by the NUP62 gene.
The nuclear pore complex is a massive structure that extends across the nuclear envelope, forming a gateway that regulates the flow of macromolecules between the nucleus and the cytoplasm. Nucleoporins are the main components of the nuclear pore complex in eukaryotic cells. The protein encoded by this gene is a member of the FG repeat containing nucleoporins and is localized to the nuclear pore central plug. This protein associates with the importin alpha/beta complex which is involved in the import of proteins containing nuclear localization signals. Multiple transcript variants of this gene encode a single protein isoform.
Structure
P62 is a serine/threonine-rich protein of ~520 amino acids, with tetrapeptide repeats on the amino terminus and a series of alpha-helical regions with hydrophobic heptad repeats forming a beta-propeller domain. P62 assembles into a complex containing three additional proteins, p60, p54 and p45, forming the p62 complex of ~235 kDa. O-GlcNAcylation appears to be involved in the assembly and disassembly of p62 into higher order complexes, and a serine/threonine-rich linker region between Ser270 and Thr294 appears to be regulatory. The p62 complex is localized to both the nucleoplasmic and cytoplasmic sides of the pore complex, and the diameter of the p62 complex relative to that of the nuclear pore complex suggests it participates in pore gating.
Function
P62 appears to interact with mRNA during transport out of the nucleus. P62 also interacts with a nuclear transport factor (NTF2) protein that is involved in trafficking proteins between cytoplasm and nucleus. Another protein, importin beta, binds to the helical rod section of p62, which also binds NTF2, suggesting the formation of a higher-order gating complex. Karyopherin beta2 (transportin), a riboprotein transporter, also interacts with p62. P62 also interacts with Nup93, and when Nup98 is depleted p62 fails to assemble with nuclear pore complexes. Mutant pores could not dock/transport proteins with nuclear localization signals or M9 import signals.
Pathology
Antibodies to p62 complex are involved in one or more autoimmune diseases. P62 glycosylation is increased in diabetes and may influence its association with other diseases. p62 is also more frequent in Stage IV primary biliary cirrhosis and is prognostic for severe disease.
Interactions
Nucleoporin 62 has been shown to interact with:
HSF2,
KPNB1,
NUTF2,
TRAF3, and
XPO1,
Nup93.
References
Further reading
Autoantigens
Glycoproteins | Nuclear pore glycoprotein p62 | [
"Chemistry"
] | 707 | [
"Glycoproteins",
"Glycobiology"
] |
12,637,359 | https://en.wikipedia.org/wiki/Carbon%20dioxide%20scrubber | A carbon dioxide scrubber is a piece of equipment that absorbs carbon dioxide (CO2). It is used to treat exhaust gases from industrial plants or from exhaled air in life support systems such as rebreathers or in spacecraft, submersible craft or airtight chambers. Carbon dioxide scrubbers are also used in controlled atmosphere (CA) storage and carbon capture and storage processes.
Technologies
Amine scrubbing
The primary application for CO2 scrubbing is for removal of CO2 from the exhaust of coal- and gas-fired power plants and from the enclosed atmosphere of nuclear submarines. The technology involves the use of various amines, e.g. monoethanolamine. Cold solutions of these organic compounds bind CO2, but the binding is reversed at higher temperatures:
CO2 + 2 HOCH2CH2NH2 ↔ HOCH2CH2NH3+ + HOCH2CH2NHCO2−
To date, this technology has only been lightly implemented in coal-fired power plants because of capital costs of installing the facility and the operating costs of utilizing it. However, the technology has been utilized as a primary part of atmosphere control in nuclear submarines since the late 1950s.
Minerals and zeolites
Several minerals and mineral-like materials reversibly bind CO2. Most often, these minerals are oxides or hydroxides, and often the CO2 is bound as carbonate. Carbon dioxide reacts with quicklime (calcium oxide) to form limestone (calcium carbonate), in a process called carbonate looping. Other minerals include serpentinite, a magnesium silicate hydroxide, and olivine. Molecular sieves also function in this capacity.
Various (cyclical) scrubbing processes have been proposed to remove CO2 from the air or from flue gases and release it in a controlled environment, regenerating the scrubbing agent. These usually involve using a variant of the Kraft process which may be based on sodium hydroxide. The CO2 is absorbed into such a solution, transfers to lime (via a process called causticization) and is released again through the use of a kiln. With some modifications to the existing processes (mainly changing to an oxygen-fired kiln) the resulting exhaust becomes a concentrated stream of CO2, ready for storage or use in fuels. An alternative to this thermo-chemical process is an electrical one, which releases the CO2 by electrolyzing the carbonate solution. While simpler, this electrical process consumes more energy, as electrolysis also splits water. Early incarnations of environmentally motivated CO2 capture used electricity as the energy source and were therefore dependent on green energy. Some thermal CO2 capture systems use heat generated on-site, which reduces the inefficiencies resulting from off-site electricity production, but it still needs a source of (green) heat, which nuclear power or concentrated solar power could provide.
Sodium hydroxide
Zeman and Lackner outlined a specific method of air capture.
First, CO2 is absorbed by an alkaline NaOH solution to produce dissolved sodium carbonate. The absorption reaction is a gas liquid reaction, strongly exothermic, here:
2 NaOH(aq) + CO2(g) → Na2CO3(aq) + H2O(l)
Na2CO3(aq) + Ca(OH)2(s) → 2 NaOH(aq) + CaCO3(s)
ΔH° = −114.7 kJ/mol
Causticization is performed ubiquitously in the pulp and paper industry and readily transfers 94% of the carbonate ions from the sodium to the calcium cation. Subsequently, the calcium carbonate precipitate is filtered from solution and thermally decomposed to produce gaseous CO2. The calcination reaction is the only endothermic reaction in the process and is shown here:
CaCO3(s) → CaO(s) + CO2(g)
ΔH° = +179.2 kJ/mol
The thermal decomposition of calcite is performed in a lime kiln fired with oxygen in order to avoid an additional gas separation step. Hydration of the lime (CaO) completes the cycle. Lime hydration is an exothermic reaction that can be performed with water or steam. Using water, it is a liquid/solid reaction as shown here:
CaO(s) + H2O(l) → Ca(OH)2(s)
ΔH° = −64.5 kJ/mol
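A quick consistency check of the cycle's energy balance, using only the enthalpy values quoted above. This is a hedged sketch: it assumes the −114.7 kJ/mol figure covers the combined absorption and causticization steps, which is how the values are laid out here.

```python
# Enthalpies quoted above, in kJ per mol of CO2 cycled
dh_absorb_causticize = -114.7   # NaOH absorption + causticization (exothermic)
dh_calcination = +179.2         # CaCO3 -> CaO + CO2 (endothermic, in the kiln)
dh_slaking = -64.5              # CaO + H2O -> Ca(OH)2 (exothermic)

net = dh_absorb_causticize + dh_calcination + dh_slaking
print(f"Net enthalpy over one closed cycle: {net:+.1f} kJ/mol")  # ~0, as expected
# Every reagent returns to its starting state over the cycle, so the quoted
# values should (and do) sum to roughly zero; the +179.2 kJ/mol calcination
# step is the heat that must be supplied at high temperature in the kiln.
```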
Lithium hydroxide
Other strong bases such as soda lime, sodium hydroxide, potassium hydroxide, and lithium hydroxide are able to remove carbon dioxide by chemically reacting with it. In particular, lithium hydroxide was used aboard spacecraft, such as in the Apollo program, to remove carbon dioxide from the atmosphere. It reacts with carbon dioxide to form lithium carbonate. Recently lithium hydroxide absorbent technology has been adapted for use in anesthesia machines. Anesthesia machines which provide life support and inhaled agents during surgery typically employ a closed circuit necessitating the removal of carbon dioxide exhaled by the patient. Lithium hydroxide may offer some safety and convenience benefits over the older calcium based products.
2 LiOH(s) + 2 H2O(g) → 2 LiOH·H2O(s)
2 LiOH·H2O(s) + CO2(g) → Li2CO3(s) + 3 H2O(g)
The net reaction being:
2 LiOH(s) + CO2(g) → Li2CO3(s) + H2O(g)
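The absorption capacity per unit mass follows directly from the stoichiometry of the net reaction. A minimal sketch, using approximate standard atomic weights (not taken from the article):

```python
# Approximate molar masses in g/mol (standard atomic weights)
M_LiOH = 6.94 + 15.999 + 1.008   # ~23.95
M_CO2 = 12.011 + 2 * 15.999      # ~44.01

# Net reaction: 2 LiOH + CO2 -> Li2CO3 + H2O, i.e. 2 mol LiOH per mol CO2
kg_co2_per_kg_lioh = M_CO2 / (2 * M_LiOH)
print(f"{kg_co2_per_kg_lioh:.2f} kg of CO2 absorbed per kg of LiOH")  # ~0.92
```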
Lithium peroxide can also be used as it absorbs more CO2 per unit weight with the added advantage of releasing oxygen.
In recent years lithium orthosilicate has attracted much attention for CO2 capture, as well as for energy storage. This material offers considerable performance advantages, although it requires high temperatures for the formation of carbonate to take place.
Regenerative carbon dioxide removal system
The regenerative carbon dioxide removal system (RCRS) on the Space Shuttle orbiter used a two-bed system that provided continuous removal of carbon dioxide without expendable products. Regenerable systems allowed a shuttle mission a longer stay in space without having to replenish its sorbent canisters. Older lithium hydroxide (LiOH)-based systems, which are non-regenerable, were replaced by regenerable metal-oxide-based systems. A system based on metal oxide primarily consisted of a metal oxide sorbent canister and a regenerator assembly. It worked by removing carbon dioxide using a sorbent material and then regenerating the sorbent material. The metal-oxide sorbent canister was regenerated by pumping air at approximately through it at a standard flow rate of for 10 hours.
Activated carbon
Activated carbon can be used as a carbon dioxide scrubber. Air with high carbon dioxide content, such as air from fruit storage locations, can be blown through beds of activated carbon and the carbon dioxide will adhere to the activated carbon (adsorption). Once the bed is saturated it must then be "regenerated" by blowing low-carbon-dioxide air, such as ambient air, through the bed. This will release the carbon dioxide from the bed, and it can then be used to scrub again, leaving the net amount of carbon dioxide in the air the same as when the process was started.
Metal-organic frameworks (MOFs)
Metal-organic frameworks are well-studied for carbon dioxide capture and sequestration via adsorption. No large-scale commercial technology exists. In one set of tests MOFs were able to separate 90% of the CO2 from the flue gas stream using a vacuum pressure swing process. The cost of energy is estimated to increase by 65% if MOFs were used vs an increase of 81% for amines as the capturing agent.
Extend air cartridge
An extend air cartridge (EAC) is a make or type of pre-loaded one-use absorbent canister that can be fitted into a recipient cavity in a suitably-designed rebreather.
Other methods
Many other methods and materials have been discussed for scrubbing carbon dioxide.
Adsorption
Regenerative carbon dioxide removal system (RCRS)
Algae filled bioreactors
Membrane gas separations
Reversing heat exchangers
See also
References
Scrubbers
Carbon dioxide
Space suit components
Spacecraft life support systems
Gas technologies
Carbon capture and storage | Carbon dioxide scrubber | [
"Chemistry",
"Engineering"
] | 1,661 | [
"Chemical equipment",
"Geoengineering",
"Scrubbers",
"Greenhouse gases",
"Carbon capture and storage",
"Carbon dioxide"
] |
12,637,384 | https://en.wikipedia.org/wiki/Morin%20transition | The Morin transition (also known as a spin-flop transition) is a magnetic phase transition in α-Fe2O3 hematite where the antiferromagnetic ordering is reorganized from being aligned perpendicular to the c-axis to be aligned parallel to the c-axis below TM.
TM = 260K for Fe3+ in α-Fe2O3.
A change in magnetic properties takes place at the Morin transition temperature.
See also
References
Magnetic ordering
Phase transitions | Morin transition | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 101 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic ordering",
"Condensed matter physics",
"Statistical mechanics",
"Matter"
] |
12,641,022 | https://en.wikipedia.org/wiki/Second-harmonic%20imaging%20microscopy | Second-harmonic imaging microscopy (SHIM) is based on a nonlinear optical effect known as second-harmonic generation (SHG). SHIM has been established as a viable microscope imaging contrast mechanism for visualization of cell and tissue structure and function. A second-harmonic microscope obtains contrasts from variations in a specimen's ability to generate second-harmonic light from the incident light while a conventional optical microscope obtains its contrast by detecting variations in optical density, path length, or refractive index of the specimen. SHG requires intense laser light passing through a material with a noncentrosymmetric molecular structure, either inherent or induced externally, for example by an electric field.
Second-harmonic light emerging from an SHG material is exactly half the wavelength (frequency doubled) of the light entering the material. While two-photon-excited fluorescence (TPEF) is also a two photon process, TPEF loses some energy during the relaxation of the excited state, while SHG is energy conserving. Typically, an inorganic crystal is used to produce SHG light such as lithium niobate (LiNbO3), potassium titanyl phosphate (KTP = KTiOPO4), or lithium triborate (LBO = LiB3O5). Though SHG requires a material to have specific molecular orientation in order for the incident light to be frequency doubled, some biological materials can be highly polarizable, and assemble into fairly ordered, large noncentrosymmetric structures. While some biological materials such as collagen, microtubules, and muscle myosin can produce SHG signals, even water can become ordered and produce second-harmonic signal under certain conditions, which allows SH microscopy to image surface potentials without any labeling molecules. The SHG pattern is mainly determined by the phase matching condition. A common setup for an SHG imaging system will have a laser scanning microscope with a titanium sapphire mode-locked laser as the excitation source. The SHG signal is propagated in the forward direction. However, some experiments have shown that objects on the order of about a tenth of the wavelength of the SHG produced signal will produce nearly equal forward and backward signals.
Advantages
SHIM offers several advantages for live cell and tissue imaging. SHG does not involve the excitation of molecules like other techniques such as fluorescence microscopy therefore, the molecules shouldn't suffer the effects of phototoxicity or photobleaching. Also, since many biological structures produce strong SHG signals, the labeling of molecules with exogenous probes is not required which can also alter the way a biological system functions. By using near infrared wavelengths for the incident light, SHIM has the ability to construct three-dimensional images of specimens by imaging deeper into thick tissues.
Difference and complementarity with two-photon fluorescence (2PEF)
Two-photon fluorescence (2PEF) is a very different process from SHG: it involves excitation of electrons to higher energy levels, and subsequent de-excitation by photon emission (unlike SHG, although it is also a two-photon process). Thus, 2PEF is an incoherent process, spatially (emitted isotropically) and temporally (broad, sample-dependent spectrum). It is also not specific to certain structures, unlike SHG.
It can therefore be coupled to SHG in multiphoton imaging to reveal some molecules that do produce autofluorescence, like elastin in tissues (while SHG reveals collagen or myosin for instance).
History
Before SHG was used for imaging, the first demonstration of SHG was performed in 1961 by P. A. Franken, G. Weinreich, C. W. Peters, and A. E. Hill at the University of Michigan, Ann Arbor using a quartz sample. In 1968, SHG from interfaces was discovered by Bloembergen and has since been used as a tool for characterizing surfaces and probing interface dynamics. In 1971, Fine and Hansen reported the first observation of SHG from biological tissue samples. In 1974, Hellwarth and Christensen first reported the integration of SHG and microscopy by imaging SHG signals from polycrystalline ZnSe. In 1977, Colin Sheppard imaged various SHG crystals with a scanning optical microscope. The first biological imaging experiments were done by Freund and Deutsch in 1986 to study the orientation of collagen fibers in rat tail tendon. In 1993, Lewis examined the second-harmonic response of styryl dyes in electric fields. He also showed work on imaging live cells. In 2006, Goro Mizutani's group developed a non-scanning SHG microscope that significantly shortens the time required for observation of large samples, even though a two-photon wide-field microscope had been published in 1996 and could have been used to detect SHG. The non-scanning SHG microscope was used for observation of plant starch, megamolecules, spider silk and so on. In 2010 SHG was extended to whole-animal in vivo imaging. In 2019, SHG applications widened when it was applied to selectively imaging agrochemicals directly on leaf surfaces, providing a way to evaluate the effectiveness of pesticides.
Quantitative measurements
Orientational anisotropy
SHG polarization anisotropy can be used to determine the orientation and degree of organization of proteins in tissues since SHG signals have well-defined polarizations. By using the anisotropy equation:
and acquiring the intensities of the polarizations in the parallel and perpendicular directions. A high value indicates an anisotropic orientation whereas a low value indicates an isotropic structure. In work done by Campagnola and Loew, it was found that collagen fibers formed well-aligned structures with an value.
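A minimal numerical sketch of such an anisotropy measurement. Since the equation itself is not reproduced above, the fluorescence-anisotropy-style form r = (I∥ − I⊥)/(I∥ + 2I⊥) is assumed here; the exact expression used by Campagnola and Loew should be checked against the original work.

```python
import numpy as np

def shg_anisotropy(i_par, i_perp):
    """Polarization anisotropy from SHG intensities measured parallel and
    perpendicular to the excitation polarization (assumed formula)."""
    i_par = np.asarray(i_par, dtype=float)
    i_perp = np.asarray(i_perp, dtype=float)
    return (i_par - i_perp) / (i_par + 2.0 * i_perp)

print(shg_anisotropy(1.0, 0.05))  # ~0.86: well-aligned, anisotropic structure
print(shg_anisotropy(1.0, 1.0))   # 0.0: disordered, isotropic structure
```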
Forward over backward SHG
SHG being a coherent process (spatially and temporally), it keeps information on the direction of the excitation and is not emitted isotropically. It is mainly emitted in forward direction (same as excitation), but can also be emitted in backward direction depending on the phase-matching condition. Indeed, the coherence length beyond which the conversion of the signal decreases is:
l_c = π/Δk
with Δk = k(2ω) − 2k(ω) in the forward direction, but Δk = k(2ω) + 2k(ω) in the backward direction, such that l_c,forward >> l_c,backward. Therefore, thicker structures will appear preferentially in forward, and thinner ones in backward: since the SHG conversion depends at first approximation on the square of the number of nonlinear converters, the signal will be higher if emitted by thick structures, thus the signal in forward direction will be higher than in backward. However, the tissue can scatter the generated light, and a part of the SHG in forward can be retro-reflected in the backward direction.
Then, the forward-over-backward ratio F/B can be calculated, and is a metric of the global size and arrangement of the SHG converters (usually collagen fibrils). It can also be shown that the higher the out-of-plane angle of the scatterer, the higher its F/B ratio.
Polarization-resolved SHG
The advantages of polarimetry were coupled to SHG in 2002 by Stoller et al. Polarimetry can measure the orientation and order at molecular level, and coupled to SHG it can do so with the specificity to certain structures like collagen: polarization-resolved SHG microscopy (p-SHG) is thus an expansion of SHG microscopy.
p-SHG defines another anisotropy parameter, as:
which is, like r, a measure of the principal orientation and disorder of the structure being imaged. Since it is often performed in long cylindrical filaments (like collagen), this anisotropy is often equal to
, where is the nonlinear susceptibility tensor and X the direction of the filament (or main direction of the structure), Y orthogonal to X and Z the propagation of the excitation light.
The orientation ϕ of the filaments in the plane XY of the image can also be extracted from p-SHG by FFT analysis, and put in a map.
Fibrosis quantization
Collagen (a particular case, but widely studied in SHG microscopy) can exist in various forms: 28 different types, of which 5 are fibrillar. One of the challenges is to determine and quantify the amount of fibrillar collagen in a tissue, to be able to see its evolution and relationship with other non-collagenous materials.
To that end, a SHG microscopy image has to be corrected to remove the small amount of residual fluorescence or noise that exists at the SHG wavelength. After that, a mask can be applied to quantify the collagen inside the image. Among other quantization techniques, it is probably the one with the highest specificity, reproducibility and applicability, despite being quite complex.
Others
It has also been used to prove that backpropagating action potentials invade dendritic spines without voltage attenuation, establishing a sound basis for future work on Long-term potentiation. Its use here was that it provided a way to accurately measure the voltage in the tiny dendritic spines with an accuracy unattainable with standard two-photon microscopy. Meanwhile, SHG can efficiently convert near-infrared light to visible light to enable imaging-guided photodynamic therapy, overcoming the penetration depth limitations.
Materials that can be imaged
SHG microscopy and its expansions can be used to study various tissues: some example images are reported in the figure below: collagen inside the extracellular matrix remains the main application. It can be found in tendon, skin, bone, cornea, aorta, fascia, cartilage, meniscus, intervertebral disks...
Myosin can also be imaged in skeletal muscle or cardiac muscle.
Coupling with THG microscopy
Third-Harmonic Generation (THG) microscopy can be complementary to SHG microscopy, as it is sensitive to the transverse interfaces, and to the 3rd order nonlinear susceptibility
Applications
Cancer progression, tumor characterization
The mammographic density is correlated with the collagen density, thus SHG can be used for identifying breast cancer. SHG is usually coupled to other nonlinear techniques such as Coherent anti-Stokes Raman Scattering or Two-photon excitation microscopy, as part of a routine called multiphoton microscopy (or tomography) that provides a non-invasive and rapid in vivo histology of biopsies that may be cancerous.
Breast cancer
The comparison of forward and backward SHG images gives insight into the microstructure of collagen, itself related to the grade and stage of a tumor, and its progression in the breast. Comparison of SHG and 2PEF can also show the change of collagen orientation in tumors.
Even if SHG microscopy has contributed a lot to breast cancer research, it is not yet established as a reliable technique in hospitals, or for diagnostic of this pathology in general.
Ovarian cancer
Healthy ovaries present in SHG a uniform epithelial layer and well-organized collagen in their stroma, whereas abnormal ones show an epithelium with large cells and a changed collagen structure. The r ratio is also used to show that the alignment of fibrils is slightly higher for cancerous than for normal tissues.
Skin cancer
SHG, again combined with 2PEF, is used to calculate the ratio:
where shg (resp. tpef) is the number of thresholded pixels in the SHG (resp. 2PEF) image, a high MFSI meaning a pure SHG image (with no fluorescence). The highest MFSI is found in cancerous tissues, which provides a contrast mode to differentiate from normal tissues.
SHG was also combined with Third-Harmonic Generation (THG) to show that backward THG is higher in tumors.
Pancreatic cancer
Changes in collagen ultrastructure in pancreatic cancer can be investigated by multiphoton fluorescence and polarization-resolved SHIM.
Other cancers
SHG microscopy was reported for the study of lung, colonic, esophageal stroma and cervical cancers.
Pathologies detection
Alterations in the organization or polarity of the collagen fibrils can be signs of pathology.
In particular, the anisotropic alignment of collagen fibers allowed the discrimination of healthy dermis from pathological scars in skin. Also, pathologies in cartilage such as osteoarthritis can be probed by polarization-resolved SHG microscopy. SHIM was later extended to fibro-cartilage (meniscus).
Tissue engineering
The ability of SHG to image specific molecules can reveal the structure of a certain tissue one material at a time, and at various scales (from macro to micro) using microscopy. For instance, collagen (type I) is specifically imaged in the extracellular matrix (ECM) of cells, or when it serves as a scaffold or connective material in tissues. SHG also reveals fibroin in silk, myosin in muscles and biosynthesized cellulose.
All of this imaging capability can be used to design artificial tissues, by targeting specific points of the tissue: SHG can indeed quantitatively measure some orientations, and material quantity and arrangement. Also, SHG coupled with other multiphoton techniques can serve to monitor the development of engineered tissues, although only when the sample is relatively thin. Finally, these methods can be used as a quality control of the fabricated tissues.
Structure of the eye
Cornea, at the surface of the eye, is considered to be made of a plywood-like structure of collagen, due to the self-organization properties of sufficiently dense collagen. Yet, the orientation of collagen in the lamellae is still under debate in this tissue.
Keratoconus cornea can also be imaged by SHG to reveal morphological alterations of the collagen.
Third-Harmonic Generation (THG) microscopy is moreover used to image the cornea, which is complementary to the SHG signal, as THG and SHG maxima in this tissue are often at different places.
See also
Nonlinear optics
Second-harmonic generation
Two-photon excitation microscopy
Sources
References
Microscopy
Cell imaging
Laboratory equipment
Optical microscopy | Second-harmonic imaging microscopy | [
"Chemistry",
"Biology"
] | 3,002 | [
"Optical microscopy",
"Cell imaging",
"Microscopy"
] |
2,497,875 | https://en.wikipedia.org/wiki/Alternating%20algebra | In mathematics, an alternating algebra is a -graded algebra for which for all nonzero homogeneous elements and (i.e. it is an anticommutative algebra) and has the further property that (nilpotence) for every homogeneous element of odd degree.
Examples
The differential forms on a differentiable manifold form an alternating algebra.
The exterior algebra is an alternating algebra.
The cohomology ring of a topological space is an alternating algebra.
Properties
The algebra formed as the direct sum of the homogeneous subspaces of even degree of an anticommutative algebra is a subalgebra contained in the centre of the algebra, and is thus commutative.
An anticommutative algebra over a (commutative) base ring in which 2 is not a zero divisor is alternating.
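A one-line justification of the last statement, written out in the notation of the definition above (a sketch of the standard argument, not quoted from a source):

```latex
% Let x be a homogeneous element of odd degree in an anticommutative algebra.
% Anticommutativity applied with y = x gives
x \cdot x \;=\; (-1)^{\deg(x)\deg(x)}\, x \cdot x \;=\; -\,x^{2},
\qquad\text{hence}\qquad 2x^{2} = 0 .
% If 2 is not a zero divisor in the base ring, this forces x^{2} = 0,
% which is exactly the nilpotence condition; the condition therefore only
% adds information when 2 is a zero divisor (e.g. in characteristic 2).
```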
See also
Alternating multilinear map
Exterior algebra
Graded-symmetric algebra
Supercommutative algebra
References
Algebraic geometry | Alternating algebra | [
"Mathematics"
] | 185 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
2,498,708 | https://en.wikipedia.org/wiki/Platinum%20hexafluoride | Platinum hexafluoride is the chemical compound with the formula PtF6, and is one of seventeen known binary hexafluorides. It is a dark-red volatile solid that forms a red gas. The compound is a unique example of platinum in the +6 oxidation state. With only four d-electrons, it is paramagnetic with a triplet ground state. PtF6 is a strong fluorinating agent and one of the strongest oxidants, capable of oxidising xenon and O2. PtF6 is octahedral in both the solid state and in the gaseous state. The Pt-F bond lengths are 185 picometers.
Synthesis
PtF6 was first prepared by reaction of fluorine with platinum metal. This route remains the method of choice.
Pt + 3 F2 → PtF6
PtF6 can also be prepared by disproportionation of the pentafluoride (PtF5), with the tetrafluoride (PtF4) as a byproduct. The required PtF5 can be obtained by fluorinating PtCl2:
2 PtCl2 + 5 F2 → 2 PtF5 + 2 Cl2
2 PtF5 → PtF6 + PtF4
Hexafluoroplatinates
Platinum hexafluoride can gain an electron to form the hexafluoroplatinate anion, [PtF6]−. It is formed by reacting platinum hexafluoride with relatively uncationisable elements and compounds, for example with xenon to form "XePtF6" (actually a mixture of several xenon fluoroplatinate salts), known as xenon hexafluoroplatinate. The discovery of this reaction in 1962 proved that noble gases form chemical compounds. Prior to the experiment with xenon, PtF6 had been shown to react with oxygen to form [O2]+[PtF6]−, dioxygenyl hexafluoroplatinate.
See also
Hexafluoride
Chloroplatinic acid
References
General reading
Holleman, A. F.; Wiberg, E. "Inorganic Chemistry" Academic Press: San Diego, 2001. .
Fluorides
Hexafluorides
Platinum group halides
Fluorinating agents
Octahedral compounds
Gases with color | Platinum hexafluoride | [
"Chemistry"
] | 484 | [
"Fluorinating agents",
"Reagents for organic chemistry"
] |
2,498,855 | https://en.wikipedia.org/wiki/Supermatrix | In mathematics and theoretical physics, a supermatrix is a Z2-graded analog of an ordinary matrix. Specifically, a supermatrix is a 2×2 block matrix with entries in a superalgebra (or superring). The most important examples are those with entries in a commutative superalgebra (such as a Grassmann algebra) or an ordinary field (thought of as a purely even commutative superalgebra).
Supermatrices arise in the study of super linear algebra where they appear as the coordinate representations of a linear transformations between finite-dimensional super vector spaces or free supermodules. They have important applications in the field of supersymmetry.
Definitions and notation
Let R be a fixed superalgebra (assumed to be unital and associative). Often one requires R be supercommutative as well (for essentially the same reasons as in the ungraded case).
Let p, q, r, and s be nonnegative integers. A supermatrix of dimension (r|s)×(p|q) is a matrix with entries in R that is partitioned into a 2×2 block structure
with r+s total rows and p+q total columns (so that the submatrix X00 has dimensions r×p and X11 has dimensions s×q). An ordinary (ungraded) matrix can be thought of as a supermatrix for which q and s are both zero.
A square supermatrix is one for which (r|s) = (p|q). This means that not only is the unpartitioned matrix X square, but the diagonal blocks X00 and X11 are as well.
An even supermatrix is one for which the diagonal blocks (X00 and X11) consist solely of even elements of R (i.e. homogeneous elements of parity 0) and the off-diagonal blocks (X01 and X10) consist solely of odd elements of R.
An odd supermatrix is one for which the reverse holds: the diagonal blocks are odd and the off-diagonal blocks are even.
If the scalars R are purely even there are no nonzero odd elements, so the even supermatrices are the block diagonal ones and the odd supermatrices are the off-diagonal ones.
A supermatrix is homogeneous if it is either even or odd. The parity, |X|, of a nonzero homogeneous supermatrix X is 0 or 1 according to whether it is even or odd. Every supermatrix can be written uniquely as the sum of an even supermatrix and an odd one.
Algebraic structure
Supermatrices of compatible dimensions can be added or multiplied just as for ordinary matrices. These operations are exactly the same as the ordinary ones with the restriction that they are defined only when the blocks have compatible dimensions. One can also multiply supermatrices by elements of R (on the left or right), however, this operation differs from the ungraded case due to the presence of odd elements in R.
Let Mr|s×p|q(R) denote the set of all supermatrices over R with dimension (r|s)×(p|q). This set forms a supermodule over R under supermatrix addition and scalar multiplication. In particular, if R is a superalgebra over a field K then Mr|s×p|q(R) forms a super vector space over K.
Let Mp|q(R) denote the set of all square supermatrices over R with dimension (p|q)×(p|q). This set forms a superring under supermatrix addition and multiplication. Furthermore, if R is a commutative superalgebra, then supermatrix multiplication is a bilinear operation, so that Mp|q(R) forms a superalgebra over R.
Addition
Two supermatrices of dimension (r|s)×(p|q) can be added just as in the ungraded case to obtain a supermatrix of the same dimension. The addition can be performed blockwise since the blocks have compatible sizes. It is easy to see that the sum of two even supermatrices is even and the sum of two odd supermatrices is odd.
Multiplication
One can multiply a supermatrix with dimensions (r|s)×(p|q) by a supermatrix with dimensions (p|q)×(k|l) as in the ungraded case to obtain a matrix of dimension (r|s)×(k|l). The multiplication can be performed at the block level in the obvious manner:
Note that the blocks of the product supermatrix Z = XY are given by
Z00 = X00Y00 + X01Y10, Z01 = X00Y01 + X01Y11,
Z10 = X10Y00 + X11Y10, Z11 = X10Y01 + X11Y11.
If X and Y are homogeneous with parities |X| and |Y| then XY is homogeneous with parity |X| + |Y|. That is, the product of two even or two odd supermatrices is even while the product of an even and odd supermatrix is odd.
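A small numerical check of the block-wise product, assuming purely even (ordinary real) scalars so that NumPy arrays can stand in for the blocks; this illustrates only the block bookkeeping, not the graded sign rules:

```python
import numpy as np

def blocks(m, r, p):
    """Split an (r+s) x (p+q) array into its four super-blocks."""
    return m[:r, :p], m[:r, p:], m[r:, :p], m[r:, p:]

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))   # dimension (2|2) x (2|2)
y = rng.standard_normal((4, 4))

x00, x01, x10, x11 = blocks(x, 2, 2)
y00, y01, y10, y11 = blocks(y, 2, 2)

z = np.block([
    [x00 @ y00 + x01 @ y10, x00 @ y01 + x01 @ y11],
    [x10 @ y00 + x11 @ y10, x10 @ y01 + x11 @ y11],
])

assert np.allclose(z, x @ y)  # block-wise product equals the ordinary product
```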
Scalar multiplication
Scalar multiplication for supermatrices is different than the ungraded case due to the presence of odd elements in R. Let X be a supermatrix. Left scalar multiplication by α ∈ R is defined by
where the internal scalar multiplications are the ordinary ungraded ones and denotes the grade involution in R. This is given on homogeneous elements by
Right scalar multiplication by α is defined analogously:
If α is even then and both of these operations are the same as the ungraded versions. If α and X are homogeneous then α⋅X and X⋅α are both homogeneous with parity |α| + |X|. Furthermore, if R is supercommutative then one has
As linear transformations
Ordinary matrices can be thought of as the coordinate representations of linear maps between vector spaces (or free modules). Likewise, supermatrices can be thought of as the coordinate representations of linear maps between super vector spaces (or free supermodules). There is an important difference in the graded case, however. A homomorphism from one super vector space to another is, by definition, one that preserves the grading (i.e. maps even elements to even elements and odd elements to odd elements). The coordinate representation of such a transformation is always an even supermatrix. Odd supermatrices correspond to linear transformations that reverse the grading. General supermatrices represent an arbitrary ungraded linear transformation. Such transformations are still important in the graded case, although less so than the graded (even) transformations.
A supermodule M over a superalgebra R is free if it has a free homogeneous basis. If such a basis consists of p even elements and q odd elements, then M is said to have rank p|q. If R is supercommutative, the rank is independent of the choice of basis, just as in the ungraded case.
Let Rp|q be the space of column supervectors—supermatrices of dimension (p|q)×(1|0). This is naturally a right R-supermodule, called the right coordinate space. A supermatrix T of dimension (r|s)×(p|q) can then be thought of as a right R-linear map
where the action of T on Rp|q is just supermatrix multiplication (this action is not generally left R-linear which is why we think of Rp|q as a right supermodule).
Let M be free right R-supermodule of rank p|q and let N be a free right R-supermodule of rank r|s. Let (ei) be a free basis for M and let (fk) be a free basis for N. Such a choice of bases is equivalent to a choice of isomorphisms from M to Rp|q and from N to Rr|s. Any (ungraded) linear map
can be written as a (r|s)×(p|q) supermatrix relative to the chosen bases. The components of the associated supermatrix are determined by the formula
The block decomposition of a supermatrix T corresponds to the decomposition of M and N into even and odd submodules:
Operations
Many operations on ordinary matrices can be generalized to supermatrices, although the generalizations are not always obvious or straightforward.
Supertranspose
The supertranspose of a supermatrix is the Z2-graded analog of the transpose. Let
X = [[A, B], [C, D]]
be a homogeneous (r|s)×(p|q) supermatrix. The supertranspose of X is the (p|q)×(r|s) supermatrix
X^st = [[A^t, (−1)^|X| C^t], [−(−1)^|X| B^t, D^t]]
where A^t denotes the ordinary transpose of A. This can be extended to arbitrary supermatrices by linearity. Unlike the ordinary transpose, the supertranspose is not generally an involution, but rather has order 4. Applying the supertranspose twice to a supermatrix X gives
(X^st)^st = [[A, −B], [−C, D]].
If R is supercommutative, the supertranspose satisfies the identity
(XY)^st = (−1)^(|X||Y|) Y^st X^st.
Parity transpose
The parity transpose of a supermatrix is a new operation without an ungraded analog. Let
X = [[A, B], [C, D]]
be a (r|s)×(p|q) supermatrix. The parity transpose of X is the (s|r)×(q|p) supermatrix
X^π = [[D, C], [B, A]]
That is, the (i,j) block of the transposed matrix is the (1−i,1−j) block of the original matrix.
The parity transpose operation obeys the identities
as well as
where st denotes the supertranspose operation.
Supertrace
The supertrace of a square supermatrix is the Z2-graded analog of the trace. It is defined on homogeneous supermatrices by the formula
str(X) = tr(X00) − (−1)^|X| tr(X11)
where tr denotes the ordinary trace.
If R is supercommutative, the supertrace satisfies the identity
str(XY) = (−1)^(|X||Y|) str(YX)
for homogeneous supermatrices X and Y.
Berezinian
The Berezinian (or superdeterminant) of a square supermatrix is the Z2-graded analog of the determinant. The Berezinian is only well-defined on even, invertible supermatrices over a commutative superalgebra R. In this case it is given by the formula
Ber(X) = det(X00 − X01 X11⁻¹ X10) det(X11)⁻¹
where det denotes the ordinary determinant (of square matrices with entries in the commutative algebra R0).
The Berezinian satisfies similar properties to the ordinary determinant. In particular, it is multiplicative and invariant under the supertranspose. It is related to the supertrace by the formula
Ber(e^X) = e^(str X).
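A numerical sketch over purely even scalars (ordinary real numbers). As noted earlier, the even supermatrices are then exactly the block-diagonal ones, for which str(X) = tr(A) − tr(D) and Ber(X) = det(A)/det(D); the multiplicativity and the exponential relation can be checked directly in this restricted setting (SciPy is assumed only for the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

def supertrace(m, p):
    return np.trace(m[:p, :p]) - np.trace(m[p:, p:])

def berezinian(m, p):
    a, b, c, d = m[:p, :p], m[:p, p:], m[p:, :p], m[p:, p:]
    return np.linalg.det(a - b @ np.linalg.inv(d) @ c) / np.linalg.det(d)

def even_supermatrix(a, d):
    """Over a purely even ring, even supermatrices are block diagonal."""
    return np.block([[a, np.zeros_like(a)], [np.zeros_like(d), d]])

p = 2
x = even_supermatrix(np.array([[2.0, 1.0], [0.0, 3.0]]),
                     np.array([[1.0, 4.0], [0.0, 2.0]]))
y = even_supermatrix(np.array([[1.0, 0.0], [2.0, 5.0]]),
                     np.array([[3.0, 1.0], [1.0, 2.0]]))

# Multiplicativity: Ber(XY) = Ber(X) Ber(Y)
assert np.isclose(berezinian(x @ y, p), berezinian(x, p) * berezinian(y, p))

# Relation to the supertrace: Ber(exp(X)) = exp(str(X))
assert np.isclose(berezinian(expm(x), p), np.exp(supertrace(x, p)))
```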
References
Matrices
Super linear algebra | Supermatrix | [
"Physics",
"Mathematics"
] | 2,261 | [
"Super linear algebra",
"Mathematical objects",
"Matrices (mathematics)",
"Supersymmetry",
"Symmetry"
] |
2,500,686 | https://en.wikipedia.org/wiki/Magnitude%20%28astronomy%29 | In astronomy, magnitude is a measure of the brightness of an object, usually in a defined passband. An imprecise but systematic determination of the magnitude of objects was introduced in ancient times by Hipparchus.
Magnitude values do not have a unit. The scale is logarithmic and defined such that a magnitude 1 star is exactly 100 times brighter than a magnitude 6 star. Thus each step of one magnitude corresponds to a brightness factor of 100^(1/5) ≈ 2.512: an object of a given magnitude is about 2.512 times brighter than one whose magnitude is 1 higher. The brighter an object appears, the lower the value of its magnitude, with the brightest objects reaching negative values.
Astronomers use two different definitions of magnitude: apparent magnitude and absolute magnitude. The apparent magnitude () is the brightness of an object and depends on an object's intrinsic luminosity, its distance, and the extinction reducing its brightness. The absolute magnitude () describes the intrinsic luminosity emitted by an object and is defined to be equal to the apparent magnitude that the object would have if it were placed at a certain distance, 10 parsecs for stars. A more complex definition of absolute magnitude is used for planets and small Solar System bodies, based on its brightness at one astronomical unit from the observer and the Sun.
The Sun has an apparent magnitude of −27 and Sirius, the brightest visible star in the night sky, −1.46. Venus at its brightest is -5. The International Space Station (ISS) sometimes reaches a magnitude of −6.
Amateur astronomers commonly express the darkness of the sky in terms of limiting magnitude, i.e. the apparent magnitude of the faintest star they can see with the naked eye. At a dark site, it is usual for people to see stars of 6th magnitude or fainter.
Apparent magnitude is really a measure of illuminance, which can also be measured in photometric units such as lux.
History
The Greek astronomer Hipparchus produced a catalogue which noted the apparent brightness of stars in the second century BCE. In the second century CE the Alexandrian astronomer Ptolemy classified stars on a six-point scale, and originated the term magnitude. To the unaided eye, a more prominent star such as Sirius or Arcturus appears larger than a less prominent star such as Mizar, which in turn appears larger than a truly faint star such as Alcor. In 1736, the mathematician John Keill described the ancient naked-eye magnitude system in this way:
The fixed Stars appear to be of different Bignesses, not because they really are so, but because they are not all equally distant from us. Those that are nearest will excel in Lustre and Bigness; the more remote Stars will give a fainter Light, and appear smaller to the Eye. Hence arise the Distribution of Stars, according to their Order and Dignity, into Classes; the first Class containing those which are nearest to us, are called Stars of the first Magnitude; those that are next to them, are Stars of the second Magnitude ... and so forth, 'till we come to the Stars of the sixth Magnitude, which comprehend the smallest Stars that can be discerned with the bare Eye. For all the other Stars, which are only seen by the Help of a Telescope, and which are called Telescopical, are not reckoned among these six Orders. Altho' the Distinction of Stars into six Degrees of Magnitude is commonly received by Astronomers; yet we are not to judge, that every particular Star is exactly to be ranked according to a certain Bigness, which is one of the Six; but rather in reality there are almost as many Orders of Stars, as there are Stars, few of them being exactly of the same Bigness and Lustre. And even among those Stars which are reckoned of the brightest Class, there appears a Variety of Magnitude; for Sirius or Arcturus are each of them brighter than Aldebaran or the Bull's Eye, or even than the Star in Spica; and yet all these Stars are reckoned among the Stars of the first Order: And there are some Stars of such an intermedial Order, that the Astronomers have differed in classing of them; some putting the same Stars in one Class, others in another. For Example: The little Dog was by Tycho placed among the Stars of the second Magnitude, which Ptolemy reckoned among the Stars of the first Class: And therefore it is not truly either of the first or second Order, but ought to be ranked in a Place between both.
Note that the brighter the star, the smaller the magnitude: Bright "first magnitude" stars are "1st-class" stars, while stars barely visible to the naked eye are "sixth magnitude" or "6th-class".
The system was a simple delineation of stellar brightness into six distinct groups but made no allowance for the variations in brightness within a group.
Tycho Brahe attempted to directly measure the "bigness" of the stars in terms of angular size, which in theory meant that a star's magnitude could be determined by more than just the subjective judgment described in the above quote. He concluded that first magnitude stars measured 2 arc minutes (2′) in apparent diameter (1/30 of a degree, or about 1/15 the diameter of the full moon), with second through sixth magnitude stars measuring 1 1/2′, 1 1/12′, 3/4′, 1/2′, and 1/3′, respectively. The development of the telescope showed that these large sizes were illusory—stars appeared much smaller through the telescope. However, early telescopes produced a spurious disk-like image of a star that was larger for brighter stars and smaller for fainter ones. Astronomers from Galileo to Jacques Cassini mistook these spurious disks for the physical bodies of stars, and thus into the eighteenth century continued to think of magnitude in terms of the physical size of a star. Johannes Hevelius produced a very precise table of star sizes measured telescopically, but now the measured diameters ranged from just over six seconds of arc for first magnitude down to just under 2 seconds for sixth magnitude. By the time of William Herschel, astronomers recognized that the telescopic disks of stars were spurious and a function of the telescope as well as the brightness of the stars, but still spoke in terms of a star's size more than its brightness. Even into the early nineteenth century, the magnitude system continued to be described in terms of six classes determined by apparent size.
However, by the mid-nineteenth century astronomers had measured the distances to stars via stellar parallax, and so understood that stars are so far away as to essentially appear as point sources of light. Following advances in understanding the diffraction of light and astronomical seeing, astronomers fully understood both that the apparent sizes of stars were spurious and how those sizes depended on the intensity of light coming from a star (this is the star's apparent brightness, which can be measured in units such as watts per square metre) so that brighter stars appeared larger.
Modern definition
Early photometric measurements (made, for example, by using a light to project an artificial “star” into a telescope's field of view and adjusting it to match real stars in brightness) demonstrated that first magnitude stars are about 100 times brighter than sixth magnitude stars.
Thus in 1856 Norman Pogson of Oxford proposed that a logarithmic scale with a ratio of the fifth root of 100 ≈ 2.512 be adopted between magnitudes, so five magnitude steps corresponded precisely to a factor of 100 in brightness. Every interval of one magnitude equates to a variation in brightness of 100^(1/5), or roughly 2.512 times. Consequently, a magnitude 1 star is about 2.5 times brighter than a magnitude 2 star, about 2.5² (roughly 6.3) times brighter than a magnitude 3 star, about 2.5³ (roughly 16) times brighter than a magnitude 4 star, and so on.
This is the modern magnitude system, which measures the brightness, not the apparent size, of stars. Using this logarithmic scale, it is possible for a star to be brighter than “first class”, so Arcturus or Vega are magnitude 0, and Sirius is magnitude −1.46.
Scale
As mentioned above, the scale appears to work 'in reverse', with objects with a negative magnitude being brighter than those with a positive magnitude. The more negative the value, the brighter the object.
On a number line of magnitudes, the brightest objects lie at the negative end of the scale and the dimmest objects at the positive end, with magnitude zero in between.
Apparent and absolute magnitude
Two of the main types of magnitudes distinguished by astronomers are:
Apparent magnitude, the brightness of an object as it appears in the night sky.
Absolute magnitude, which measures the luminosity of an object (or reflected light for non-luminous objects like asteroids); it is the object's apparent magnitude as seen from a specific distance, conventionally 10 parsecs (32.6 light years).
The difference between these concepts can be seen by comparing two stars. Betelgeuse (apparent magnitude 0.5, absolute magnitude −5.8) appears slightly dimmer in the sky than Alpha Centauri A (apparent magnitude 0.0, absolute magnitude 4.4) even though it emits thousands of times more light, because Betelgeuse is much farther away.
Apparent magnitude
Under the modern logarithmic magnitude scale, two objects, one of which is used as a reference or baseline, whose fluxes (i.e., brightnesses, a measure of power per unit area) in units such as watts per square metre (W m⁻²) are F1 and F2, will have magnitudes m1 and m2 related by

m1 − m2 = −2.5 log10(F1/F2)
Astronomers use the term "flux" for what is often called "intensity" in physics, in order to avoid confusion with the specific intensity. Using this formula, the magnitude scale can be extended beyond the ancient magnitude 1–6 range, and it becomes a precise measure of brightness rather than simply a classification system. Astronomers now measure differences as small as one-hundredth of a magnitude. Stars that have magnitudes between 1.5 and 2.5 are called second-magnitude; there are some 20 stars brighter than 1.5, which are first-magnitude stars (see the list of brightest stars). For example, Sirius is magnitude −1.46, Arcturus is −0.04, Aldebaran is 0.85, Spica is 1.04, and Procyon is 0.34. Under the ancient magnitude system, all of these stars might have been classified as "stars of the first magnitude".
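To illustrate this relation, here is a minimal Python sketch (the function names are illustrative only, not from any astronomy library) that converts a flux ratio into a magnitude difference and back:

import math

def magnitude_difference(flux1, flux2):
    # m1 - m2 for two measured fluxes in the same units (e.g. W per square metre)
    return -2.5 * math.log10(flux1 / flux2)

def flux_ratio(m1, m2):
    # F1 / F2 for two apparent magnitudes
    return 10 ** (-0.4 * (m1 - m2))

# A source 100 times brighter than the reference is 5 magnitudes brighter (more negative):
print(magnitude_difference(100.0, 1.0))   # -5.0
# Sirius (m = -1.46) compared with Vega (m of about 0.03): a flux ratio of roughly 4
print(flux_ratio(-1.46, 0.03))            # roughly 4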
Magnitudes can also be calculated for objects far brighter than stars (such as the Sun and Moon), and for objects too faint for the human eye to see (such as Pluto).
Absolute magnitude
Often, only apparent magnitude is mentioned since it can be measured directly. Absolute magnitude can be calculated from apparent magnitude and distance from:

m − M = 5 log10(d) − 5

because intensity falls off proportionally to distance squared. This is known as the distance modulus, where d is the distance to the star measured in parsecs, m is the apparent magnitude, and M is the absolute magnitude.
If the line of sight between the object and observer is affected by extinction due to absorption of light by interstellar dust particles, then the object's apparent magnitude will be correspondingly fainter. For A magnitudes of extinction, the relationship between apparent and absolute magnitudes becomes

m − M = 5 log10(d) − 5 + A
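A minimal Python sketch of the distance modulus, including the optional extinction term (the function and argument names are illustrative):

import math

def absolute_magnitude(m, distance_pc, extinction=0.0):
    # M = m - 5*log10(d) + 5 - A, with d in parsecs and A magnitudes of extinction
    return m - 5.0 * math.log10(distance_pc) + 5.0 - extinction

# Sirius: apparent magnitude -1.46 at roughly 2.64 pc gives an absolute magnitude near +1.4
print(absolute_magnitude(-1.46, 2.64))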
Stellar absolute magnitudes are usually designated with a capital M with a subscript to indicate the passband. For example, MV is the magnitude at 10 parsecs in the V passband. A bolometric magnitude (Mbol) is an absolute magnitude adjusted to take account of radiation across all wavelengths; it is typically smaller (i.e. brighter) than an absolute magnitude in a particular passband, especially for very hot or very cool objects. Bolometric magnitudes are formally defined based on stellar luminosity in watts, and are normalised to be approximately equal to MV for yellow stars.
Absolute magnitudes for Solar System objects are frequently quoted based on a distance of 1 AU. These are referred to with a capital H symbol. Since these objects are lit primarily by reflected light from the Sun, an H magnitude is defined as the apparent magnitude of the object at 1 AU from the Sun and 1 AU from the observer.
Examples
Apparent magnitudes of celestial objects and artificial satellites span an enormous range, from the Sun at the bright extreme to the faintest objects visible with the James Webb Space Telescope (JWST) at the faint extreme.
Other scales
Any magnitude system must be calibrated to define the brightness of magnitude zero. Many magnitude systems, such as the Johnson UBV system, assign the average brightness of several standard stars to a certain number by definition, and all other magnitude measurements are compared to that reference point. Other magnitude systems calibrate by measuring energy directly, without a reference point, and these are called "absolute" reference systems. Current absolute reference systems include the AB magnitude system, in which the reference is a source with a constant flux density per unit frequency, and the STMAG system, in which the reference source is instead defined to have constant flux density per unit wavelength.
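For example, in the AB system a magnitude follows directly from the measured flux density per unit frequency. A short Python sketch, assuming the flux density is expressed in janskys and using the conventional 3631 Jy zero point (the function name is illustrative):

import math

def ab_magnitude(flux_density_jy):
    # AB magnitude for a flux density per unit frequency, given in janskys
    return -2.5 * math.log10(flux_density_jy / 3631.0)

print(ab_magnitude(3631.0))  # 0.0 by construction
print(ab_magnitude(1.0))     # roughly 8.9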
Decibel
Another logarithmic measure for intensity is the level, in decibels. Although it is more commonly used for sound intensity, it is also used for light intensity. It is a parameter for photomultiplier tubes and similar camera optics for telescopes and microscopes. Each factor of 10 in intensity corresponds to 10 decibels. In particular, a multiplier of 100 in intensity corresponds to an increase of 20 decibels and also corresponds to a decrease in magnitude by 5. Generally, the change in level ΔL is related to a change in magnitude Δm by

ΔL = −4 Δm dB

For example, an object that is 1 magnitude larger (fainter) than a reference would produce a signal that is 4 dB smaller (weaker) than the reference, which might need to be compensated by an increase in the capability of the camera by as many decibels.
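A one-line Python check of this relation (the function name is illustrative):

def magnitude_change_to_decibels(delta_m):
    # level change in dB for a change of delta_m magnitudes (fainter gives a negative dB change)
    return -4.0 * delta_m

print(magnitude_change_to_decibels(5.0))   # -20.0 dB, i.e. a factor of 100 fainter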
See also
AB magnitude
Color–color diagram
List of brightest stars
Photometric-standard star
UBV photometric system
Notes
References
External links
Observational astronomy
Units of measurement in astronomy
Logarithmic scales of measurement
Concepts in astronomy
| Magnitude (astronomy) | [
"Physics",
"Astronomy",
"Mathematics"
] | 2,896 | [
"Units of measurement",
"Physical quantities",
"Concepts in astronomy",
"Quantity",
"Observational astronomy",
"Units of measurement in astronomy",
"Logarithmic scales of measurement",
"Astronomical sub-disciplines"
] |
2,501,388 | https://en.wikipedia.org/wiki/Graphite%20furnace%20atomic%20absorption | Graphite furnace atomic absorption spectroscopy (GFAAS), also known as electrothermal atomic absorption spectroscopy (ETAAS), is a type of spectrometry that uses a graphite-coated furnace to vaporize the sample. Briefly, the technique is based on the fact that free atoms will absorb light at frequencies or wavelengths characteristic of the element of interest (hence the name atomic absorption spectrometry). Within certain limits, the amount of light absorbed can be linearly correlated to the concentration of analyte present. Free atoms of most elements can be produced from samples by the application of high temperatures. In GFAAS, samples are deposited in a small graphite or pyrolytic carbon coated graphite tube, which can then be heated to vaporize and atomize the analyte. The atoms absorb ultraviolet or visible light and make transitions to higher electronic energy levels. Applying the Beer-Lambert law directly in AA spectroscopy is difficult due to variations in the atomization efficiency from the sample matrix, and nonuniformity of concentration and path length of analyte atoms (in graphite furnace AA). Concentration measurements are usually determined from a working curve after calibrating the instrument with standards of known concentration.
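As a rough illustration of the working-curve idea described above, the following Python sketch fits a straight line through the absorbance readings of calibration standards and then converts a sample's absorbance into a concentration (all values and names are invented for illustration; real instrument software performs this, together with background correction, internally):

# Least-squares line through calibration standards, then inversion for a sample reading.
concentrations = [0.0, 5.0, 10.0, 20.0]       # ppb, known standards (illustrative values)
absorbances    = [0.002, 0.051, 0.099, 0.201]  # measured peak absorbance (illustrative values)

n = len(concentrations)
mean_c = sum(concentrations) / n
mean_a = sum(absorbances) / n
slope = (sum((c - mean_c) * (a - mean_a) for c, a in zip(concentrations, absorbances))
         / sum((c - mean_c) ** 2 for c in concentrations))
intercept = mean_a - slope * mean_c

sample_absorbance = 0.120
sample_concentration = (sample_absorbance - intercept) / slope
print(f"approximately {sample_concentration:.1f} ppb")   # about 11.9 ppb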
The main advantages of the graphite furnace compared to aspiration atomic absorption are the following:
The detection limits for the graphite furnace fall in the ppb range for most elements
Interference problems are minimized with the development of improved instrumentation
The graphite furnace can determine most elements measurable by aspiration atomic absorption in a wide variety of matrices.
System components
GFAA spectrometry instruments have the following basic features:
1. a source of light (lamp) that emits resonance line radiation;
2. an atomization chamber (graphite tube) in which the sample is vaporized;
3. a monochromator for selecting only one of the characteristic wavelengths (visible or ultraviolet) of the element of interest;
4. a detector, generally a photomultiplier tube (a light detector useful in low-intensity applications), that measures the amount of absorption;
5. a signal processor-computer system (strip chart recorder, digital display, meter, or printer).
Mode of operation
Most currently available GFAAs are fully controlled from a personal computer that has Windows-compatible software. The software easily optimizes run parameters, such as ramping cycles or calibration dilutions. Aqueous samples should be acidified (typically with nitric acid, HNO3) to a pH of 2.0 or less. GFAAs are more sensitive than flame atomic absorption spectrometers, and have a smaller dynamic range. This makes it necessary to dilute aqueous samples into the dynamic range of the specific analyte. GFAAS with automatic software can also pre-dilute samples before analysis.
After the instrument has warmed up and been calibrated, a small aliquot (usually less than 100 microliters (μL) and typically 20 μL) is placed, either manually or through an automated sampler, into the opening in the graphite tube. The sample is vaporized in the heated graphite tube; the amount of light energy absorbed in the vapor is proportional to the atomic concentration. Analysis of each sample takes from 1 to 5 minutes, and the result for a sample is typically the average of triplicate analyses. Faster graphite furnace techniques have been developed utilising the injection of samples into a pre-heated graphite tube.
Standards
ASTM E1184-10: "Standard Practice for Determination of Elements by Graphite Furnace Atomic Absorption Spectrometry."
ASTM D3919-08: "Standard Practice for Measuring Trace Elements in Water by Graphite Furnace Atomic Absorption Spectrophotometry."
ASTM D6357-11: "Test Methods for Determination of Trace Elements in Coal, Coke, & Combustion Residues from Coal Utilization Processes by Inductively Coupled Plasma Atomic Emission, Inductively Coupled Plasma Mass, & Graphite Furnace Atomic Absorption Spectrometry."
See also
Atomic absorption spectroscopy
References
EPA Analytic Technology Encyclopedia
Research Group of Atomic Spectrometry
Absorption spectroscopy | Graphite furnace atomic absorption | [
"Physics",
"Chemistry"
] | 850 | [
"Spectroscopy",
"Spectrum (physical sciences)",
"Absorption spectroscopy"
] |
2,501,818 | https://en.wikipedia.org/wiki/Robotech%20Defenders | The Robotech Defenders are a line of scale model kits released by Revell during the early 1980s with an accompanying limited comic series published by DC Comics. Contrary to what their name seems to imply, the "Robotech Defenders" are not part of the Robotech anime universe adapted by Carl Macek and released by Harmony Gold USA, but they did adopt the same moniker and logo.
The "Robotech Defenders" were one of two "Robotech" lines released by Revell, the other being the "Robotech Changers". The "Robotech Changers" line initially consisted of three models based on the Valkyrie Variable fighter designs from Macross, and the NEBO model, based upon the Orguss of Super Dimension Century Orguss.
The "Robotech Defenders" model line was tied into a two-issue limited series of the same name, published by DC comics. It shares many common themes with other science fiction series of that time, including invading aliens, and giant mechanical war machines.
Model review
Seeking to capitalize on the Mecha craze of the early 1980s, Model Company Revell went to Japan to look for suitable mecha models prior to 1984. They eventually licensed a number of Takara's Fang of the Sun Dougram models for the "Defenders" line. These models were repackaged with the "Robotech" moniker, and released in North America and Europe.
The humanoid Mech models had an average size of 30 cm, the in-scale humans were about 2 cm.
One of the features of the models (excepting the Human and Grelon miniatures) was that they were not static, but had fully movable joints and removable equipment. Because of their complexity, level of detail, and number of parts, the kits demand adult skill levels even though they were sold with "ages 12 and up" on their packaging; even experienced modelers found them challenging, especially the Humans and Grelons.
In the North American market, the models met with some success, appealing to both fans of Robotech and the players of Battletech tabletop strategy game. In Europe, however, model sales were disappointing, possibly due to the non-existent background story included with the models, and the relatively high prices.
Model details
Robotech Defenders
Listed below are the Revell Robotech Defenders model kits by number and the source of the model (as well as the corresponding BattleTech name, if known):
1150 "Thoren" & 1151 "Zoltek" models are 1/48 scale though marked on the box as 1/72.
1152 "Condar" model kit was boxed in two versions, one stating scale as 1/72(wrong) and one as 1/48(correct) though both were same kit.
Robotech Defender "Exaxes" (1145: 1/48 Scale) is the Orguss RSG-21A-1 "Ishfon" walker - not transformable.
Robotech Defender "Decimax" (1146: 1/48 Scale) is the Orguss MBG-24C "Nikick" - not transformable.
Robotech Defender "Aqualo" (1148: 1/72 Scale) is the Dougram H404S "Mackerel" Marine Combat Armor.
Robotech Defender "Ziyon" (1149: 1/72 Scale) is the Dougram Soltic HT-128 "Bigfoot" Combat Armor-not transformable (Battletech "Battlemaster").
Robotech Defender "Thoren" (1150: 1/48 Scale) is the Dougram Soltic H8 "Roundfacer" Combatc Armor-not transformable (Battletech "Griffon").
Robotech Defender "Zoltek" (1151: 1/48 Scale) is the Dougram D7 "Dougram" Combat Armor-not transformable (Battletech "Shadow Hawk").
Robotech Defender "Condar" (1152: 1/48 Scale) is the Dougram Soltic H-102 "Bushman" Combat Armor-not transformable.
Robotech Defender "Talos" (1153: 1/48 Scale) is the Dougram T-10B "Blockhead" Combat Armor-not transformable (Battletech "Wolverine").
Robotech Defender "Gartan" (1154: 1/48 Scale) is the Dougram Ironfoot F4X "Hasty" Combat Armor-not transformable (Battletech "Thunderbolt").
Robotech Defender "Ice Rover" (1161: 1/48 Scale) is the Dougram Eastland ARH-52 "Groundsearch", an air-cushioned hovercraft vehicle - not transformable.
Robotech Defender "Terrattacker" (1162: 1/48 Scale) is the Dougram Bromry JRS "Native Dancer", a light 6-wheeled AFV/jeep - not transformable.
Robotech Defender "Sand Stalker" (1187: 1/72 Scale) is the Dougram Abitate F44S "Desert Gunner" 6-legged Walker Tank-not transformable.
Robotech Defender "Armored Combat Team" (1191: 1/72 Scale) is the Dougram Soltic H8 "Roundfacer" Combat Armor-not transformable, with infantry jeeps (Robotech Defender "Thoren").
Robotech Defender "Strike Force" (1192: 1/72 Scale) is a Dougram Ironfoot F4X "Hasty" Combat Armor and a Culailles MP-2 "Dewey" attack helicopter-not transformable (Robotech Defender "Gartan").
Robotech Defender "Assault Squad" (1193: 1/72 Scale) is a Dougram Abitate F35C "Blizzard Gunner" Walker Tank and an ARMC Instead AFV light attack vehicle (Walker Tank used as Battletech "Scorpion").
Robotech Defender "Robot Recovery Unit" (1194: 1/72 Scale) is a Dougram Bromry "Eyevan" DT-2 Trailer Truck.
Robotech Defender "Airborne Attacker" (1197: 1/72 Scale) is a Dougram Soltic H-102 "Bushman" Combat Armor and an Eastland WE-211 "Mavellic" cargo-lifting helicopter-not transformable (Robotech Defender "Condar").
Robotech Defender "Commando" (1199: 1/48 Scale) is the Dougram Abitate F44B "Tequila Gunner" 4-legged Walker Tank-not transformable.
Robotech Changers
Listed below are the Revell Robotech Changers model kits by number and the source of the model (as well as the corresponding BattleTech name, if known):
Robotech
Listed below are the Revell Robotech model kits by number and the source of the model (as well as the corresponding BattleTech name, if known):
Revell Robotech models from the Fang of the Sun Dougram line seem to be repacks of model kits made by Takara. The models from the Super Dimension Fortress Macross line seem to be repacks of model kits made by Imai. The models from the Super Dimension Century Orguss line seem to be repacks of model kits made by Arii.
Marketing confusion
Release of the "Robotech Defenders" and "Robotech Changers" model lines caused problems for media company Harmony Gold USA, who licensed the North American video rights to the Japanese Macross anime series, combining it with two other series to produce an 85-episode series they hoped to market direct to video. Since Revell was already distributing the models, Harmony Gold could not support the show with merchandising. In the end, both companies decided to enter into a co-licensing agreement and the name Robotech was eventually adopted for the syndicated television show that the home video line had transformed into.
Players of FASA's BattleTech tabletop strategy game universe will instantly recognize many of the Revell models as Mechs from the original Role Playing Game sourcebooks. The reason for this is that all of the original edition's 'Mech visuals were based on designs from a variety of anime series, including Macross, Dougram and Crusher Joe, from some of which the Revell kits were sourced. FASA eventually became embroiled in a lawsuit with Harmony Gold regarding the use of Macross images, after which FASA removed all Macross-related images along with any other images not created in house from their sourcebooks. Those 'Mechs would later be known by BattleTech fans as 'The Unseen'.
Comic books
The eponymous comic book, a two-issue mini-series, was published by DC Comics in 1984. It was originally intended to be a trilogy, but was reduced to the first normal-sized issue and a 32-page second issue with no advertisements. The universe of the "Robotech Defenders" comic book series bears no resemblance at all to the Robotech universe adapted by Harmony Gold USA. The Robotech Defenders comic predates the conception of the original Robotech cartoon show by about a year.
The story followed the battles of a team of pilots who fight a savage race of aliens, called "Grelons", who have invaded all planets of a star system using superior technology. They plan to colonize the planets, using their titanic war machines to eliminate all resistance. The heroes, a small combat unit, are losing badly when their leader accidentally activates one of the Robotech Defenders. She then learns of the existence of the other machines, which are scattered across the other pilots' home planets. Each of these units has a unique range of abilities and environmental specialties (e.g., Aqualo was built for diving and sea-based operations, Ziyon's element was cold and snow, Thoren's heat and magma, and Gartan's urban combat).
By the end of the first issue, the team have managed to recover all the robots and engage the enemy in battle, but are still defeated and captured. They escape by pushing a big red button which releases the Defenders' minds, unleashing the machines' full combat capabilities. The pilots then track down the controllers of the savage aliens. They defeat them by causing the aliens' energy siphon to drain the energy of the sun, making their space ship explode.
Revell comic
Revell's division in West Germany, Revell Plastic, GmbH, published a one-shot promotional issue of Robotech Defenders with a subtitle translating to "The Defenders of the Cosmos". Written by W. Spiegel with Artwork by W. Neugebauer, this original comic was not a reprint of the DC Comics series and was not connected to its continuity. It was translated to Swedish and packaged with the model. Like the DC Comics series, it also had no connection to the TV series.
References
External links
Robotech Defenders Model Sheets and some history
Stupidcomics #73 Describes the disaster the comic was.
Gundam.com Discussion a detailed list of the early Revell models, their details, and sources.
1985 comics debuts
DC Comics limited series
Scale modeling
| Robotech Defenders | [
"Physics"
] | 2,306 | [
"Scale modeling"
] |
2,503,009 | https://en.wikipedia.org/wiki/Parallel%20projection | In three-dimensional geometry, a parallel projection (or axonometric projection) is a projection of an object in three-dimensional space onto a fixed plane, known as the projection plane or image plane, where the rays, known as lines of sight or projection lines, are parallel to each other. It is a basic tool in descriptive geometry. The projection is called orthographic if the rays are perpendicular (orthogonal) to the image plane, and oblique or skew if they are not.
Overview
A parallel projection is a particular case of projection in mathematics and graphical projection in technical drawing. Parallel projections can be seen as the limit of a central or perspective projection, in which the rays pass through a fixed point called the center or viewpoint, as this point is moved towards infinity. Put differently, a parallel projection corresponds to a perspective projection with an infinite focal length (the distance between the lens and the focal point in photography) or "zoom". Further, in parallel projections, lines that are parallel in three-dimensional space remain parallel in the two-dimensionally projected image.
A perspective projection of an object is often considered more realistic than a parallel projection, since it more closely resembles human vision and photography. However, parallel projections are popular in technical applications, since the parallelism of an object's lines and faces is preserved, and direct measurements can be taken from the image. Among parallel projections, orthographic projections are seen as the most realistic, and are commonly used by engineers. On the other hand, certain types of oblique projections (for instance cavalier projection, military projection) are very simple to implement, and are used to create quick and informal pictorials of objects.
The term parallel projection is used in the literature to describe both the procedure itself (a mathematical mapping function) as well as the resulting image produced by the procedure.
Properties
Every parallel projection has the following properties:
It is uniquely defined by its projection plane Π and the direction of the (parallel) projection lines. The direction must not be parallel to the projection plane.
Any point of the space has a unique image in the projection plane Π, and the points of Π are fixed.
Any line not parallel to direction is mapped onto a line; any line parallel to is mapped onto a point.
Parallel lines are mapped on parallel lines, or on a pair of points (if they are parallel to ).
The ratio of the length of two line segments on a line stays unchanged. As a special case, midpoints are mapped on midpoints.
The length of a line segment parallel to the projection plane remains unchanged. The length of any line segment is shortened if the projection is an orthographic one.
Any circle that lies in a plane parallel to the projection plane is mapped onto a circle with the same radius. Any other circle is mapped onto an ellipse or a line segment (if direction is parallel to the circle's plane).
Angles in general are not preserved. But right angles with one line parallel to the projection plane remain unchanged.
Any rectangle is mapped onto a parallelogram or a line segment (if is parallel to the rectangle's plane).
Any figure in a plane that is parallel to the image plane is congruent to its image.
Types
Orthographic projection
Orthographic projection is derived from the principles of descriptive geometry, and is a type of parallel projection where the projection rays are perpendicular to the projection plane. It is the projection type of choice for working drawings. The term orthographic is sometimes reserved specifically for depictions of objects where the principal axes or planes of the object are also parallel with the projection plane (or the paper on which the orthographic or parallel projection is drawn). However, the term primary view is also used. In multiview projections, up to six pictures of an object are produced, with each projection plane perpendicular to one of the coordinate axes. However, when the principal planes or axes of an object are not parallel with the projection plane, but are rather tilted to some degree to reveal multiple sides of the object, they are called auxiliary views or pictorials. Sometimes, the term axonometric projection is reserved solely for these views, and is juxtaposed with the term orthographic projection. But axonometric projection might be more accurately described as being synonymous with parallel projection, and orthographic projection a type of axonometric projection.
The primary views include plans, elevations and sections; and the isometric, dimetric and trimetric projections could be considered auxiliary views. A typical (but non-obligatory) characteristic of multiview orthographic projections is that one axis of space usually is displayed as vertical.
When the viewing direction is perpendicular to the surface of the depicted object, regardless of the object's orientation, it is referred to as a normal projection. Thus, in the case of a cube oriented with a space's coordinate system, the primary views of the cube would be considered normal projections.
Oblique projection
In an oblique projection, the parallel projection rays are not perpendicular to the viewing plane, but strike the projection plane at an angle other than ninety degrees. In both orthographic and oblique projection, parallel lines in space appear parallel on the projected image. Because of its simplicity, oblique projection is used exclusively for pictorial purposes rather than for formal, working drawings. In an oblique pictorial drawing, the displayed angles separating the coordinate axes as well as the foreshortening factors (scaling) are arbitrary. The distortion created thereby is usually attenuated by aligning one plane of the imaged object to be parallel with the plane of projection, creating a truly-formed, full-size image of the chosen plane. Special types of oblique projections include military, cavalier and cabinet projection.
Analytic representation
If the image plane Π is given by the equation n · x = d and the direction of projection by a vector v (not parallel to Π, so that n · v ≠ 0), then the projection line through the point p is parametrized by

x = p + t v

with the parameter t running over the real numbers.

The image p′ of p is the intersection of this line with the plane Π; it is given by the equation

p′ = p − ((n · p − d) / (n · v)) v
In several cases, these formulas can be simplified.
(S1) If one can choose the vectors v and n such that n · v = 1, the formula for the image simplifies to

p′ = p − (n · p − d) v

(S2) In an orthographic projection, the vectors v and n are parallel. In this case, one can choose v = n with |n| = 1, and one gets

p′ = p − (n · p − d) n

(S3) If one can choose the vectors v and n such that n · v = 1, and if the image plane contains the origin (so d = 0), the parallel projection is a linear mapping:

p′ = p − (n · p) v = (E − v nᵀ) p

(Here E is the identity matrix and v nᵀ the outer product.)
From this analytic representation of a parallel projection one can deduce most of the properties stated in the previous sections.
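A small Python sketch of the general formula above (the function and variable names are illustrative, not from any graphics library):

def parallel_project(p, n, v, d):
    # Project point p along direction v onto the plane n . x = d (requires n . v != 0).
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    t = (d - dot(n, p)) / dot(n, v)
    return tuple(pi + t * vi for pi, vi in zip(p, v))

# Orthographic projection onto the plane z = 0 (n = v = (0, 0, 1), d = 0):
print(parallel_project((2.0, 3.0, 5.0), (0, 0, 1), (0, 0, 1), 0.0))   # (2.0, 3.0, 0.0)
# An oblique projection onto the same plane, with projection direction (1, 1, -1):
print(parallel_project((2.0, 3.0, 5.0), (0, 0, 1), (1, 1, -1), 0.0))  # (7.0, 8.0, 0.0)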
History
Axonometry originated in China. Its function in Chinese art was unlike the linear perspective in European art since its perspective was not objective, or looking from the outside. Instead, its patterns used parallel projections within the painting that allowed the viewer to consider both the space and the ongoing progression of time in one scroll. According to science author and Medium journalist Jan Krikke, axonometry, and the pictorial grammar that goes with it, had taken on a new significance with the introduction of visual computing and engineering drawing.
The concept of isometry had existed in a rough empirical form for centuries, well before Professor William Farish (1759–1837) of Cambridge University was the first to provide detailed rules for isometric drawing.
Farish published his ideas in the 1822 paper "On Isometric Perspective", in which he recognized the "need for accurate technical working drawings free of optical distortion. This would lead him to formulate isometry. Isometry means "equal measures" because the same scale is used for height, width, and depth".
From the middle of the 19th century, according to Jan Krikke (2006), isometry became an "invaluable tool for engineers, and soon thereafter axonometry and isometry were incorporated in the curriculum of architectural training courses in Europe and the U.S. The popular acceptance of axonometry came in the 1920s, when modernist architects from the Bauhaus and De Stijl embraced it". De Stijl architects like Theo van Doesburg used axonometry for their architectural designs, which caused a sensation when exhibited in Paris in 1923.
Since the 1920s axonometry, or parallel perspective, has provided an important graphic technique for artists, architects, and engineers. Like linear perspective, axonometry helps depict three-dimensional space on a two-dimensional picture plane. It usually comes as a standard feature of CAD systems and other visual computing tools.
Limitations
Objects drawn with parallel projection do not appear larger or smaller as they lie closer to or farther away from the viewer. While advantageous for architectural drawings, where measurements must be taken directly from the image, the result is a perceived distortion, since unlike perspective projection, this is not how human vision or photography normally works. It also can easily result in situations where depth and altitude are difficult to gauge.
This visual ambiguity has been exploited in op art, as well as "impossible object" drawings. Though not strictly parallel, M. C. Escher's Waterfall (1961) is a well-known image, in which a channel of water seems to travel unaided along a downward path, only to then paradoxically fall once again as it returns to its source. The water thus appears to disobey the law of conservation of energy. Oscar Reutersvard is credited with the discovery of the impossible object, an example being the impossible triangle.
See also
Projection (linear algebra)
References
Schaum's Outline: Descriptive Geometry, McGraw-Hill, (June 1, 1962),
Graphical projections | Parallel projection | [
"Mathematics"
] | 1,967 | [
"Mathematical objects",
"Functions and mappings",
"Graphical projections",
"Mathematical relations"
] |
20,556,903 | https://en.wikipedia.org/wiki/Higgs%20boson | The Higgs boson, sometimes called the Higgs particle, is an elementary particle in the Standard Model of particle physics produced by the quantum excitation of the Higgs field, one of the fields in particle physics theory. In the Standard Model, the Higgs particle is a massive scalar boson with zero spin, even (positive) parity, no electric charge, and no colour charge that couples to (interacts with) mass. It is also very unstable, decaying into other particles almost immediately upon generation.
The Higgs field is a scalar field with two neutral and two electrically charged components that form a complex doublet of the weak isospin SU(2) symmetry. Its "Sombrero potential" leads it to take a nonzero value everywhere (including otherwise empty space), which breaks the weak isospin symmetry of the electroweak interaction and, via the Higgs mechanism, gives a rest mass to all massive elementary particles of the Standard Model, including the Higgs boson itself. The existence of the Higgs field became the last unverified part of the Standard Model of particle physics, and for several decades was considered "the central problem in particle physics".
Both the field and the boson are named after physicist Peter Higgs, who in 1964, along with five other scientists in three teams, proposed the Higgs mechanism, a way for some particles to acquire mass. All fundamental particles known at the time should be massless at very high energies, but fully explaining how some particles gain mass at lower energies had been extremely difficult. If these ideas were correct, a particle known as a scalar boson (with certain properties) should also exist. This particle was called the Higgs boson and could be used to test whether the Higgs field was the correct explanation.
After a 40-year search, a subatomic particle with the expected properties was discovered in 2012 by the ATLAS and CMS experiments at the Large Hadron Collider (LHC) at CERN near Geneva, Switzerland. The new particle was subsequently confirmed to match the expected properties of a Higgs boson. Physicists from two of the three teams, Peter Higgs and François Englert, were awarded the Nobel Prize in Physics in 2013 for their theoretical predictions. Although Higgs's name has come to be associated with this theory, several researchers between about 1960 and 1972 independently developed different parts of it.
In the media, the Higgs boson has often been called the "God particle" after the 1993 book The God Particle by Nobel Laureate Leon Lederman. The name has been criticised by physicists, including Peter Higgs.
Introduction
Standard Model
Physicists explain the fundamental particles and forces of our universe in terms of the Standard Model – a widely accepted framework based on quantum field theory that predicts almost all known particles and forces aside from gravity with great accuracy. (A separate theory, general relativity, is used for gravity.) In the Standard Model, the particles and forces in nature (aside from gravity) arise from properties of quantum fields known as gauge invariance and symmetries. Forces in the Standard Model are transmitted by particles known as gauge bosons.
Gauge invariant theories and symmetries
"It is only slightly overstating the case to say that physics is the study of symmetry" – Philip Anderson, Nobel Prize Physics
Gauge invariant theories are theories which have a useful feature: some kinds of changes to the value of certain items do not make any difference to the outcomes or the measurements we make. For example, shifting every electrical potential in an electromagnet's circuit by +100 volts does not cause any change to the magnetic field it produces. Similarly, measuring the speed of light in vacuum seems to give the identical result, whatever the location in time and space, and whatever the local gravitational field.
In these kinds of theories, the gauge is an item whose value we can change. The fact that some changes leave the results we measure unchanged means it is a gauge invariant theory, and symmetries are the specific kinds of changes to the gauge which have the effect of leaving measurements unchanged. Symmetries of this kind are powerful tools for a deep understanding of the fundamental forces and particles of our physical world. Gauge invariance is therefore an important property within particle physics theory. They are closely connected to conservation laws and are described mathematically using group theory. Quantum field theory and the Standard Model are both gauge invariant theories – meaning they focus on properties of our universe, demonstrating this property of gauge invariance and the symmetries which are involved.
Gauge boson (rest) mass problem
Quantum field theories based on gauge invariance had been used with great success in understanding the electromagnetic and strong forces, but by around 1960, all attempts to create a gauge invariant theory for the weak force (and its combination with the electromagnetic force, known together as the electroweak interaction) had consistently failed. As a result of these failures, gauge theories began to fall into disrepute. The problem was that symmetry requirements for these two forces incorrectly predicted the weak force's gauge bosons (W and Z) would have "zero mass" (in the specialized terminology of particle physics, "mass" refers specifically to a particle's rest mass). But experiments showed the W and Z gauge bosons had non-zero (rest) mass.
Further, many promising solutions seemed to require the existence of extra particles known as Goldstone bosons, but evidence suggested these did not exist either. This meant either gauge invariance was an incorrect approach, or something unknown was giving the weak force's W and Z bosons their mass, and doing it in a way that did not create Goldstone bosons. By the late 1950s and early 1960s, physicists were at a loss as to how to resolve these issues, or how to create a comprehensive theory for particle physics.
Symmetry breaking
In the late 1950s, Yoichiro Nambu recognised that spontaneous symmetry breaking, a process where a symmetric system becomes asymmetric, could occur under certain conditions. Symmetry breaking is when some variable that previously didn't affect the measured results (it was originally a "symmetry") now does affect the measured results (it's now "broken" and no longer a symmetry). In 1962 physicist Philip Anderson, an expert in condensed matter physics, observed that symmetry breaking played a role in superconductivity, and suggested it could also be part of the answer to the problem of gauge invariance in particle physics.
Specifically, Anderson suggested that the Goldstone bosons that would result from symmetry breaking might instead, in some circumstances, be "absorbed" by the massless W and Z bosons. If so, perhaps the Goldstone bosons would not exist, and the W and Z bosons could gain mass, solving both problems at once. Similar behaviour was already theorised in superconductivity. In 1964, this was shown to be theoretically possible by physicists Abraham Klein and Benjamin Lee, at least for some limited (non-relativistic) cases.
Higgs mechanism
Following the 1963 and early 1964 papers, three groups of researchers independently developed these theories more completely, in what became known as the 1964 PRL symmetry breaking papers. All three groups reached similar conclusions and for all cases, not just some limited cases. They showed that the conditions for electroweak symmetry would be "broken" if an unusual type of field existed throughout the universe, and indeed, there would be no Goldstone bosons and some existing bosons would acquire mass.
The field required for this to happen (which was purely hypothetical at the time) became known as the Higgs field (after Peter Higgs, one of the researchers) and the mechanism by which it led to symmetry breaking became known as the Higgs mechanism. A key feature of the necessary field is that it would take less energy for the field to have a non-zero value than a zero value, unlike all other known fields; therefore, the Higgs field has a non-zero value (or vacuum expectation) everywhere. This non-zero value could in theory break electroweak symmetry. It was the first proposal capable of showing how the weak force gauge bosons could have mass despite their governing symmetry, within a gauge invariant theory.
Although these ideas did not gain much initial support or attention, by 1972 they had been developed into a comprehensive theory and proved capable of giving "sensible" results that accurately described particles known at the time, and which, with exceptional accuracy, predicted several other particles discovered during the following years. During the 1970s these theories rapidly became the Standard Model of particle physics.
Higgs field
To allow symmetry breaking, the Standard Model includes a field of the kind needed to "break" electroweak symmetry and give particles their correct mass. This field, which became known as the "Higgs Field", was hypothesized to exist throughout space, and to break some symmetry laws of the electroweak interaction, triggering the Higgs mechanism. It would therefore cause the W and Z gauge bosons of the weak force to be massive at all temperatures below an extremely high value. When the weak force bosons acquire mass, this affects the distance they can freely travel, which becomes very small, also matching experimental findings. Furthermore, it was later realised that the same field would also explain, in a different way, why other fundamental constituents of matter (including electrons and quarks) have mass.
Unlike all other known fields, such as the electromagnetic field, the Higgs field is a scalar field, and has a non-zero average value in vacuum.
The "central problem"
There was not yet any direct evidence that the Higgs field existed, but even without direct proof, the accuracy of its predictions led scientists to believe the theory might be true. By the 1980s, the question of whether the Higgs field existed, and therefore whether the entire Standard Model was correct, had come to be regarded as one of the most important unanswered questions in particle physics. The existence of the Higgs field became the last unverified part of the Standard Model of particle physics, and for several decades was considered "the central problem in particle physics".
For many decades, scientists had no way to determine whether the Higgs field existed because the technology needed for its detection did not exist at that time. If the Higgs field did exist, then it would be unlike any other known fundamental field, but it also was possible that these key ideas, or even the entire Standard Model, were somehow incorrect.
The hypothesised Higgs theory made several key predictions. One crucial prediction was that a matching particle, called the "Higgs boson", should also exist. Proving the existence of the Higgs boson would prove whether the Higgs field existed, and therefore finally prove whether the Standard Model's explanation was correct. Therefore, there was an extensive search for the Higgs boson as a way to prove the Higgs field itself existed.
Search and discovery
Although the Higgs field would exist everywhere, proving its existence was far from easy. In principle, it can be proved to exist by detecting its excitations, which manifest as Higgs particles (the Higgs boson), but these are extremely difficult to produce and detect due to the energy required to produce them and their very rare production even if the energy is sufficient. It was, therefore, several decades before the first evidence of the Higgs boson could be found. Particle colliders, detectors, and computers capable of looking for Higgs bosons took more than 30 years to develop. The importance of this fundamental question led to a 40-year search, and the construction of one of the world's most expensive and complex experimental facilities to date, CERN's Large Hadron Collider (LHC), in an attempt to create Higgs bosons and other particles for observation and study.
On 4 July 2012, the discovery of a new particle with a mass between 125 and 127 GeV/c² was announced; physicists suspected that it was the Higgs boson. Since then, the particle has been shown to behave, interact, and decay in many of the ways predicted for Higgs particles by the Standard Model, as well as having even parity and zero spin, two fundamental attributes of a Higgs boson. This also means it is the first elementary scalar particle discovered in nature.
By March 2013, the existence of the Higgs boson was confirmed, and therefore the concept of some type of Higgs field throughout space is strongly supported. The presence of the field, now confirmed by experimental investigation, explains why some fundamental particles have (a rest) mass, despite the symmetries controlling their interactions implying that they should be "massless". It also resolves several other long-standing puzzles, such as the reason for the extremely short distance travelled by the weak force bosons, and therefore the weak force's extremely short range. As of 2018, in-depth research shows the particle continuing to behave in line with predictions for the Standard Model's Higgs boson. More studies are needed to verify with higher precision that the discovered particle has all of the properties predicted or whether, as described by some theories, multiple Higgs bosons exist.
The nature and properties of this field are now being investigated further, using more data collected at the LHC.
Interpretation
Various analogies have been used to describe the Higgs field and boson, including analogies with well-known symmetry-breaking effects such as the rainbow and prism, electric fields, and ripples on the surface of water.
Other analogies based on the resistance of macro objects moving through media (such as people moving through crowds, or some objects moving through syrup or molasses) are commonly used but misleading, since the Higgs field does not actually resist particles, and the effect of mass is not caused by resistance.
Overview of Higgs boson and field properties
In the Standard Model, the Higgs boson is a massive scalar boson whose mass must be found experimentally. Its mass has been measured by the CMS (2022) and ATLAS (2023) experiments to be approximately 125 GeV/c². It is the only particle that remains massive even at very high energies. It has zero spin, even (positive) parity, no electric charge, and no colour charge, and it couples to (interacts with) mass. It is also very unstable, decaying into other particles almost immediately via several possible pathways.
The Higgs field is a scalar field, with two neutral and two electrically charged components that form a complex doublet of the weak isospin SU(2) symmetry. Unlike any other known quantum field, it has a Sombrero potential. This shape means that below extremely high energies, such as those seen during the first picosecond of the Big Bang, the Higgs field in its ground state takes less energy to have a nonzero vacuum expectation value than a zero value. Therefore in today's universe the Higgs field has a nonzero value everywhere (including otherwise empty space). This nonzero value in turn breaks the weak isospin SU(2) symmetry of the electroweak interaction everywhere. (Technically the non-zero expectation value converts the Lagrangian's Yukawa coupling terms into mass terms.) When this happens, three components of the Higgs field are "absorbed" by the SU(2) and U(1) gauge bosons (the "Higgs mechanism") to become the longitudinal components of the now-massive W and Z bosons of the weak force. The remaining electrically neutral component either manifests as a Higgs boson, or may couple separately to other particles known as fermions (via Yukawa couplings), causing these to acquire mass as well.
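As a sketch of the standard textbook parametrisation (not specific to any one of the original papers), the Sombrero-shaped potential and the resulting nonzero vacuum value can be written as

V(φ) = μ² φ†φ + λ (φ†φ)²,   with μ² < 0 and λ > 0,

so the minimum lies not at φ = 0 but at |φ|² = v²/2, where v = √(−μ²/λ) ≈ 246 GeV. This vacuum expectation value v sets the W boson mass (m_W = g v / 2, with g the weak coupling) and, through the Yukawa couplings y_f, the fermion masses (m_f = y_f v / √2).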
Significance
Evidence of the Higgs field and its properties has been extremely significant for many reasons. The importance of the Higgs boson largely is that it is able to be examined using existing knowledge and experimental technology, as a way to confirm and study the entire Higgs field theory. Conversely, proof that the Higgs field and boson did not exist would have also been significant.
Particle physics
Validation of the Standard Model
The Higgs boson validates the Standard Model through the mechanism of mass generation. As more precise measurements of its properties are made, more advanced extensions may be suggested or excluded. As experimental means to measure the field's behaviours and interactions are developed, this fundamental field may be better understood. If the Higgs field had not been discovered, the Standard Model would have needed to be modified or superseded.
Related to this, a belief generally exists among physicists that there is likely to be "new" physics beyond the Standard Model, and the Standard Model will at some point be extended or superseded. The Higgs discovery, as well as the many measured collisions occurring at the LHC, provide physicists a sensitive tool to search their data for any evidence that the Standard Model seems to fail, and could provide considerable evidence guiding researchers into future theoretical developments.
Symmetry breaking of the electroweak interaction
Below an extremely high temperature, electroweak symmetry breaking causes the electroweak interaction to manifest in part as the short-ranged weak force, which is carried by massive gauge bosons. In the history of the universe, electroweak symmetry breaking is believed to have happened about 10⁻¹² seconds (one picosecond) after the Big Bang, when the universe was at a temperature of roughly 10¹⁵ K. This symmetry breaking is required for atoms and other structures to form, as well as for nuclear reactions in stars, such as the Sun. The Higgs field is responsible for this symmetry breaking.
Particle mass acquisition
The Higgs field is pivotal in generating the masses of quarks and charged leptons (through Yukawa coupling) and the W and Z gauge bosons (through the Higgs mechanism).
The Higgs field does not "create" mass out of nothing (which would violate the law of conservation of energy), nor is the Higgs field responsible for the mass of all particles. For example, approximately 99% of the mass of baryons (composite particles such as the proton and neutron), is due instead to quantum chromodynamic binding energy, which is the sum of the kinetic energies of quarks and the energies of the massless gluons mediating the strong interaction inside the baryons. In Higgs-based theories, the property of "mass" is a manifestation of potential energy transferred to fundamental particles when they interact ("couple") with the Higgs field, which had contained that mass in the form of energy.
Scalar fields and extension of the Standard Model
The Higgs field is the only scalar (spin-0) field to be detected; all the other fundamental fields in the Standard Model are spin- fermions or spin-1 bosons.
According to Rolf-Dieter Heuer, director general of CERN when the Higgs boson was discovered, this existence proof of a scalar field is almost as important as the Higgs's role in determining the mass of other particles. It suggests that other hypothetical scalar fields suggested by other theories, from the inflaton to quintessence, could perhaps exist as well.
Cosmology
Inflaton
There has been considerable scientific research on possible links between the Higgs field and the inflaton, a hypothetical field suggested as the explanation for the expansion of space during the first fraction of a second of the universe (known as the "inflationary epoch"). Some theories suggest that a fundamental scalar field might be responsible for this phenomenon; the Higgs field is such a field, and its existence has led to papers analysing whether it could also be the inflaton responsible for this exponential expansion of the universe during the Big Bang. Such theories are highly tentative and face significant problems related to unitarity, but may be viable if combined with additional features such as large non-minimal coupling, a Brans–Dicke scalar, or other "new" physics, and they have received treatments suggesting that Higgs inflation models are still of interest theoretically.
Nature of the universe, and its possible fates
In the Standard Model, there exists the possibility that the underlying state of our universe – known as the "vacuum" – is long-lived, but not completely stable. In this scenario, the universe as we know it could effectively be destroyed by collapsing into a more stable vacuum state. This was sometimes misreported as the Higgs boson "ending" the universe. If the masses of the Higgs boson and top quark are known more precisely, and the Standard Model provides an accurate description of particle physics up to extreme energies of the Planck scale, then it is possible to calculate whether the vacuum is stable or merely long-lived. A Higgs mass of about 125 GeV/c² seems to be extremely close to the boundary for stability, but a definitive answer requires much more precise measurements of the pole mass of the top quark. New physics can change this picture.
If measurements of the Higgs boson suggest that our universe lies within a false vacuum of this kind, then it would imply, more than likely in many billions of years, that the universe's forces, particles, and structures could cease to exist as we know them (and be replaced by different ones), if a true vacuum happened to nucleate. It also suggests that the Higgs self-coupling λ and its β function could be very close to zero at the Planck scale, with "intriguing" implications, including theories of gravity and Higgs-based inflation. A future electron–positron collider would be able to provide the precise measurements of the top quark needed for such calculations.
Vacuum energy and the cosmological constant
More speculatively, the Higgs field has also been proposed as the energy of the vacuum, which at the extreme energies of the first moments of the Big Bang caused the universe to be a kind of featureless symmetry of undifferentiated, extremely high energy. In this kind of speculation, the single unified field of a Grand Unified Theory is identified as (or modelled upon) the Higgs field, and it is through successive symmetry breakings of the Higgs field, or some similar field, at phase transitions that the presently known forces and fields of the universe arise.
The relationship (if any) between the Higgs field and the presently observed vacuum energy density of the universe has also come under scientific study. As observed, the present vacuum energy density is extremely close to zero, but the energy densities predicted from the Higgs field, supersymmetry, and other current theories are typically many orders of magnitude larger. It is unclear how these should be reconciled. This cosmological constant problem remains a major unanswered problem in physics.
History
Theorisation
Particle physicists study matter made from fundamental particles whose interactions are mediated by exchange particles (gauge bosons) acting as force carriers. At the beginning of the 1960s a number of these particles had been discovered or proposed, along with theories suggesting how they relate to each other, some of which had already been reformulated as field theories in which the objects of study are not particles and forces, but quantum fields and their symmetries. However, attempts to produce quantum field models for two of the four known fundamental forces – the electromagnetic force and the weak nuclear force – and then to unify these interactions, were still unsuccessful.
One known problem was that gauge invariant approaches, including non-abelian models such as Yang–Mills theory (1954), which held great promise for unified theories, also seemed to predict known massive particles as massless. Goldstone's theorem, relating to continuous symmetries within some theories, also appeared to rule out many obvious solutions, since it appeared to show that zero-mass particles known as Goldstone bosons would also have to exist that simply were "not seen". According to Guralnik, physicists had "no understanding" how these problems could be overcome.
Particle physicist and mathematician Peter Woit summarised the state of research at the time:
The Higgs mechanism is a process by which vector bosons can acquire rest mass without explicitly breaking gauge invariance, as a byproduct of spontaneous symmetry breaking. Initially, the mathematical theory behind spontaneous symmetry breaking was conceived and published within particle physics by Yoichiro Nambu in 1960 (and somewhat anticipated by Ernst Stueckelberg in 1938), and the concept that such a mechanism could offer a possible solution for the "mass problem" was originally suggested in 1962 by Philip Anderson, who had previously written papers on broken symmetry and its outcomes in superconductivity. Anderson concluded in his 1963 paper on the Yang–Mills theory that "considering the superconducting analog ... [t]hese two types of bosons seem capable of canceling each other out ... leaving finite mass bosons", and in March 1964, Abraham Klein and Benjamin Lee showed that Goldstone's theorem could be avoided this way in at least some non-relativistic cases, and speculated it might be possible in truly relativistic cases.
These approaches were quickly developed into a full relativistic model, independently and almost simultaneously, by three groups of physicists: by François Englert and Robert Brout in August 1964; by Peter Higgs in October 1964; and by Gerald Guralnik, Carl Hagen, and Tom Kibble (GHK) in November 1964. Higgs also wrote a short, but important, response published in September 1964 to an objection by Gilbert, which showed that if calculating within the radiation gauge, Goldstone's theorem and Gilbert's objection would become inapplicable. Higgs later described Gilbert's objection as prompting his own paper. Properties of the model were further considered by Guralnik in 1965, by Higgs in 1966, by Kibble in 1967, and further by GHK in 1967. The original three 1964 papers demonstrated that when a gauge theory is combined with an additional charged scalar field that spontaneously breaks the symmetry, the gauge bosons may consistently acquire a finite mass.
In 1967, Steven Weinberg
and Abdus Salam
independently showed how a Higgs mechanism could be used to break the electroweak symmetry of Sheldon Glashow's unified model for the weak and electromagnetic interactions,
(itself an extension of work by Schwinger), forming what became the Standard Model of particle physics. Weinberg was the first to observe that this would also provide mass terms for the fermions.
At first, these seminal papers on spontaneous breaking of gauge symmetries were largely ignored, because it was widely believed that the (non-Abelian gauge) theories in question were a dead-end, and in particular that they could not be renormalised. In 1971–72, Martinus Veltman and Gerard 't Hooft proved renormalisation of Yang–Mills was possible in two papers covering massless, and then massive, fields. Their contribution, and the work of others on the renormalisation group (including "substantial" theoretical work by Russian physicists Ludvig Faddeev, Andrei Slavnov, Efim Fradkin, and Igor Tyutin) was eventually "enormously profound and influential", but even with all key elements of the eventual theory published there was still almost no wider interest. For example, Coleman found in a study that "essentially no-one paid any attention" to Weinberg's paper prior to 1971; that paper, now the most cited in particle physics, and its belated reception were discussed by David Politzer in his 2004 Nobel speech. Even in 1970, according to Politzer, Glashow's teaching of the weak interaction contained no mention of Weinberg's, Salam's, or Glashow's own work. In practice, Politzer states, almost everyone learned of the theory due to physicist Benjamin Lee, who combined the work of Veltman and 't Hooft with insights by others, and popularised the completed theory. In this way, from 1971, interest and acceptance "exploded" and the ideas were quickly absorbed in the mainstream.
The resulting electroweak theory and Standard Model have accurately predicted (among other things) weak neutral currents, three bosons, the top and charm quarks, and with great precision, the mass and other properties of some of these. Many of those involved eventually won Nobel Prizes or other renowned awards. A 1974 paper and comprehensive review in Reviews of Modern Physics commented that "while no one doubted the [mathematical] correctness of these arguments, no one quite believed that nature was diabolically clever enough to take advantage of them", adding that the theory had so far produced accurate answers that accorded with experiment, but it was unknown whether the theory was fundamentally correct. By 1986 and again in the 1990s it became possible to write that understanding and proving the Higgs sector of the Standard Model was "the central problem today in particle physics".
Summary and impact of the PRL papers
The three papers written in 1964 were each recognised as milestone papers during Physical Review Letters 50th anniversary celebration. Their six authors were also awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work. (A controversy also arose the same year, because in the event of a Nobel Prize only up to three scientists could be recognised, with six being credited for the papers.) Two of the three PRL papers (by Higgs and by GHK) contained equations for the hypothetical field that eventually would become known as the Higgs field and its hypothetical quantum, the Higgs boson. Higgs' subsequent 1966 paper showed the decay mechanism of the boson; only a massive boson can decay and the decays can prove the mechanism.
In the paper by Higgs the boson is massive, and in a closing sentence Higgs writes that "an essential feature" of the theory "is the prediction of incomplete multiplets of scalar and vector bosons". (Frank Close comments that 1960s gauge theorists were focused on the problem of massless vector bosons, and the implied existence of a massive scalar boson was not seen as important; only Higgs directly addressed it.) In the paper by GHK the boson is massless and decoupled from the massive states. In reviews dated 2009 and 2011, Guralnik states that in the GHK model the boson is massless only in a lowest-order approximation, but it is not subject to any constraint and acquires mass at higher orders, and adds that the GHK paper was the only one to show that there are no massless Goldstone bosons in the model and to give a complete analysis of the general Higgs mechanism. All three reached similar conclusions, despite their very different approaches: Higgs' paper essentially used classical techniques, Englert and Brout's involved calculating vacuum polarisation in perturbation theory around an assumed symmetry-breaking vacuum state, and GHK used operator formalism and conservation laws to explore in depth the ways in which Goldstone's theorem may be worked around. Some versions of the theory predicted more than one kind of Higgs field and boson, and alternative "Higgsless" models were considered until the discovery of the Higgs boson.
Experimental search
To produce Higgs bosons, two beams of particles are accelerated to very high energies and allowed to collide within a particle detector. Occasionally, although rarely, a Higgs boson will be created fleetingly as part of the collision byproducts. Because the Higgs boson decays very quickly, particle detectors cannot detect it directly. Instead the detectors register all the decay products (the decay signature) and from the data the decay process is reconstructed. If the observed decay products match a possible decay process (known as a decay channel) of a Higgs boson, this indicates that a Higgs boson may have been created. In practice, many processes may produce similar decay signatures. Fortunately, the Standard Model precisely predicts the likelihood of each of these, and each known process, occurring. So, if the detector detects more decay signatures consistently matching a Higgs boson than would otherwise be expected if Higgs bosons did not exist, then this would be strong evidence that the Higgs boson exists.
Because Higgs boson production in a particle collision is likely to be very rare (1 in 10 billion at the LHC),
and many other possible collision events can have similar decay signatures, the data of hundreds of trillions of collisions needs to be analysed and must "show the same picture" before a conclusion about the existence of the Higgs boson can be reached. To conclude that a new particle has been found, particle physicists require that the statistical analysis of two independent particle detectors each indicate that there is less than a one-in-a-million chance that the observed decay signatures are due to just background random Standard Model eventsi.e., that the observed number of events is more than five standard deviations (sigma) different from that expected if there was no new particle. More collision data allows better confirmation of the physical properties of any new particle observed, and allows physicists to decide whether it is indeed a Higgs boson as described by the Standard Model or some other hypothetical new particle.
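The "five sigma" criterion described above can be made concrete with a short calculation. The following sketch, a minimal illustration using only the Python standard library, converts a significance quoted in standard deviations into the corresponding one-sided probability that background fluctuations alone would produce at least as strong an excess; the conversion is standard Gaussian statistics and is not specific to the LHC analyses, whose published probabilities come from their own detailed statistical models.

```python
import math

def one_sided_p_value(n_sigma: float) -> float:
    """Probability that a standard normal variable exceeds n_sigma.

    This is the usual conversion between a quoted significance
    ("n sigma") and the chance that background alone would give
    at least as strong an excess.
    """
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

for n_sigma in (3.0, 5.0, 5.9):
    p = one_sided_p_value(n_sigma)
    print(f"{n_sigma:.1f} sigma -> p = {p:.2e} (about 1 in {1 / p:,.0f})")

# 5 sigma corresponds to roughly one chance in 3.5 million that background
# alone mimics the signal, comfortably past the discovery threshold.
```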
To find the Higgs boson, a powerful particle accelerator was needed, because Higgs bosons might not be seen in lower-energy experiments. The collider needed to have a high luminosity in order to ensure enough collisions were seen for conclusions to be drawn. Finally, advanced computing facilities were needed to process the vast amount of data (25 petabytes per year as of 2012) produced by the collisions. For the announcement of 4 July 2012, a new collider known as the Large Hadron Collider was constructed at CERN with a planned eventual collision energy of 14 TeV (over seven times that of any previous collider), and over 300 trillion LHC proton–proton collisions were analysed by the LHC Computing Grid, the world's largest computing grid (as of 2012), comprising over 170 computing facilities in a worldwide network across 36 countries.
Search before 4 July 2012
The first extensive search for the Higgs boson was conducted at the Large Electron–Positron Collider (LEP) at CERN in the 1990s. At the end of its service in 2000, LEP had found no conclusive evidence for the Higgs.
This implied that if the Higgs boson were to exist it would have to be heavier than .
The search continued at Fermilab in the United States, where the Tevatronthe collider that discovered the top quark in 1995 – had been upgraded for this purpose. There was no guarantee that the Tevatron would be able to find the Higgs, but it was the only supercollider that was operational since the Large Hadron Collider (LHC) was still under construction and the planned Superconducting Super Collider had been cancelled in 1993 and never completed. The Tevatron was only able to exclude further ranges for the Higgs mass, and was shut down on 30 September 2011 because it no longer could keep up with the LHC. The final analysis of the data excluded the possibility of a Higgs boson with a mass between and . In addition, there was a small (but not significant) excess of events possibly indicating a Higgs boson with a mass between and .
The Large Hadron Collider at CERN in Switzerland, was designed specifically to be able to either confirm or exclude the existence of the Higgs boson. Built in a 27 km tunnel under the ground near Geneva originally inhabited by LEP, it was designed to collide two beams of protons, initially at energies of per beam (7 TeV total), or almost 3.6 times that of the Tevatron, and upgradeable to (14 TeV total) in future. Theory suggested if the Higgs boson existed, collisions at these energy levels should be able to reveal it. As one of the most complicated scientific instruments ever built, its operational readiness was delayed for 14 months by a magnet quench event nine days after its inaugural tests, caused by a faulty electrical connection that damaged over 50 superconducting magnets and contaminated the vacuum system.
Data collection at the LHC finally commenced in March 2010. By December 2011 the two main particle detectors at the LHC, ATLAS and CMS, had narrowed down the mass range where the Higgs could exist to around (ATLAS) and (CMS). There had also already been a number of promising event excesses that had "evaporated" and proven to be nothing but random fluctuations. However, from around May 2011, both experiments had seen among their results, the slow emergence of a small yet consistent excess of gamma and 4-lepton decay signatures and several other particle decays, all hinting at a new particle at a mass around . By around November 2011, the anomalous data at was becoming "too large to ignore" (although still far from conclusive), and the team leaders at both ATLAS and CMS each privately suspected they might have found the Higgs. On 28 November 2011, at an internal meeting of the two team leaders and the director general of CERN, the latest analyses were discussed outside their teams for the first time, suggesting both ATLAS and CMS might be converging on a possible shared result at , and initial preparations commenced in case of a successful finding. While this information was not known publicly at the time, the narrowing of the possible Higgs range to around and the repeated observation of small but consistent event excesses across multiple channels at both ATLAS and CMS in the region (described as "tantalising hints" of around 2–3 sigma) were public knowledge with "a lot of interest". It was therefore widely anticipated around the end of 2011, that the LHC would provide sufficient data to either exclude or confirm the finding of a Higgs boson by the end of 2012, when their 2012 collision data (with slightly higher 8 TeV collision energy) had been examined.
Discovery of candidate boson at CERN
On 22 June 2012 CERN announced an upcoming seminar covering tentative findings for 2012, and shortly afterwards (from around 1 July 2012 according to an analysis of the spreading rumour in social media) rumours began to spread in the media that this would include a major announcement, but it was unclear whether this would be a stronger signal or a formal discovery. Speculation escalated to a "fevered" pitch when reports emerged that Peter Higgs, who proposed the particle, was to be attending the seminar, and that "five leading physicists" had been invited (generally believed to signify the five living 1964 authors), with Higgs, Englert, Guralnik, and Hagen attending and Kibble confirming his invitation (Brout having died in 2011).
On 4 July 2012 both of the CERN experiments announced they had independently made the same discovery: CMS of a previously unknown boson with mass and ATLAS of a boson with mass . Using the combined analysis of two interaction types (known as 'channels'), both experiments independently reached a local significance of 5 sigma, implying that the probability of getting at least as strong a result by chance alone is less than one in three million. When additional channels were taken into account, the CMS significance was reduced to 4.9 sigma.
The two teams had been working 'blinded' from each other from around late 2011 or early 2012, meaning they did not discuss their results with each other, providing additional certainty that any common finding was genuine validation of a particle. This level of evidence, confirmed independently by two separate teams and experiments, meets the formal level of proof required to announce a confirmed discovery.
On 31 July 2012, the ATLAS collaboration presented additional data analysis on the "observation of a new particle", including data from a third channel, which improved the significance to 5.9 sigma (1 in 588 million chance of obtaining at least as strong evidence by random background effects alone) and mass , and CMS improved the significance to 5-sigma and mass .
New particle tested as a possible Higgs boson
Following the 2012 discovery, it was still unconfirmed whether the particle was a Higgs boson. On one hand, observations remained consistent with the observed particle being the Standard Model Higgs boson, and the particle decayed into at least some of the predicted channels. Moreover, the production rates and branching ratios for the observed channels broadly matched the predictions by the Standard Model within the experimental uncertainties. However, the experimental uncertainties at the time still left room for alternative explanations, meaning an announcement of the discovery of a Higgs boson would have been premature. To allow more opportunity for data collection, the LHC's proposed 2012 shutdown and 2013–14 upgrade were postponed by seven weeks into 2013.
In November 2012, at a conference in Kyoto, researchers said evidence gathered since July was falling into line with the basic Standard Model more than its alternatives, with a range of results for several interactions matching that theory's predictions. Physicist Matt Strassler highlighted "considerable" evidence that the new particle is not a pseudoscalar negative parity particle (consistent with this required finding for a Higgs boson), "evaporation" or lack of increased significance for previous hints of non-Standard Model findings, expected Standard Model interactions with W and Z bosons, absence of "significant new implications" for or against supersymmetry, and in general no significant deviations to date from the results expected of a Standard Model Higgs boson. However some kinds of extensions to the Standard Model would also show very similar results; so commentators noted that based on other particles that are still being understood long after their discovery, it may take years to be sure, and decades to fully understand the particle that has been found.
These findings meant that as of January 2013, scientists were very sure they had found an unknown particle of mass ~ , and had not been misled by experimental error or a chance result. They were also sure, from initial observations, that the new particle was some kind of boson. The behaviours and properties of the particle, so far as examined since July 2012, also seemed quite close to the behaviours expected of a Higgs boson. Even so, it could still have been a Higgs boson or some other unknown boson, since future tests could show behaviours that do not match a Higgs boson, so as of December 2012 CERN still only stated that the new particle was "consistent with" the Higgs boson, and scientists did not yet positively say it was the Higgs boson. Despite this, in late 2012, widespread media reports announced (incorrectly) that a Higgs boson had been confirmed during the year.
In January 2013, CERN director-general Rolf-Dieter Heuer stated that based on data analysis to date, an answer could be possible 'towards' mid-2013, and the deputy chair of physics at Brookhaven National Laboratory stated in February 2013 that a "definitive" answer might require "another few years" after the collider's 2015 restart. In early March 2013, CERN Research Director Sergio Bertolucci stated that confirming spin-0 was the major remaining requirement to determine whether the particle is at least some kind of Higgs boson.
Confirmation of existence and current status
On 14 March 2013 CERN confirmed the following:
CMS and ATLAS have compared a number of options for the spin-parity of this particle, and these all prefer no spin and even parity [two fundamental criteria of a Higgs boson consistent with the Standard Model]. This, coupled with the measured interactions of the new particle with other particles, strongly indicates that it is a Higgs boson.
This also makes it the first elementary scalar particle to be discovered in nature.
The following are examples of tests used to confirm that the discovered particle is the Higgs boson:
Findings since 2013
In July 2017, CERN confirmed that all measurements still agree with the predictions of the Standard Model, and called the discovered particle simply "the Higgs boson". As of 2019, the Large Hadron Collider has continued to produce findings that confirm the 2013 understanding of the Higgs field and particle.
The LHC's experimental work since restarting in 2015 has included probing the Higgs field and boson to a greater level of detail, and confirming whether less common predictions were correct. In particular, exploration since 2015 has provided strong evidence of the predicted direct decay into fermions such as pairs of bottom quarks (3.6 σ), described as an "important milestone" in understanding its short lifetime and other rare decays, and also to confirm decay into pairs of tau leptons (5.9 σ). This was described by CERN as being "of paramount importance to establishing the coupling of the Higgs boson to leptons and represents an important step towards measuring its couplings to third generation fermions, the very heavy copies of the electrons and quarks, whose role in nature is a profound mystery". Published results as of 19 March 2018 at 13 TeV for ATLAS and CMS had their measurements of the Higgs mass at and respectively.
In July 2018, the ATLAS and CMS experiments reported observing the Higgs boson decay into a pair of bottom quarks, which makes up approximately 60% of all of its decays.
Theoretical issues
Theoretical need for the Higgs
Gauge invariance is an important property of modern particle theories such as the Standard Model, partly due to its success in other areas of fundamental physics such as electromagnetism and the strong interaction (quantum chromodynamics). However, before Sheldon Glashow extended the electroweak unification models in 1961, there were great difficulties in developing gauge theories for the weak nuclear force or a possible unified electroweak interaction. Fermions with a mass term would violate gauge symmetry and therefore cannot be gauge invariant. (This can be seen by examining the Dirac Lagrangian for a fermion in terms of left and right handed components; we find none of the spin-half particles could ever flip helicity as required for mass, so they must be massless.)
W and Z bosons are observed to have mass, but a boson mass term contains terms which clearly depend on the choice of gauge, and therefore these masses too cannot be gauge invariant. Therefore, it seems that none of the standard model fermions or bosons could "begin" with mass as an inbuilt property except by abandoning gauge invariance. If gauge invariance were to be retained, then these particles had to be acquiring their mass by some other mechanism or interaction.
Additionally, solutions based on spontaneous symmetry breaking appeared to fail, seemingly an inevitable result of Goldstone's theorem. Because there is no potential energy cost to moving around the complex plane's "circular valley" responsible for spontaneous symmetry breaking, the resulting quantum excitation is pure kinetic energy, and therefore a massless boson ("Goldstone boson"), which in turn implies a new long range force. But no new long range forces or massless particles were detected either. So whatever was giving these particles their mass had to not "break" gauge invariance as the basis for other parts of the theories where it worked well, and had to not require or predict unexpected massless particles or long-range forces which did not actually seem to exist in nature.
A solution to all of these overlapping problems came from the discovery of a previously unnoticed borderline case hidden in the mathematics of Goldstone's theorem,
that under certain conditions it might theoretically be possible for a symmetry to be broken without disrupting gauge invariance and without any new massless particles or forces, and having "sensible" (renormalisable) results mathematically. This became known as the Higgs mechanism.
The Standard Model hypothesises a field which is responsible for this effect, called the Higgs field (symbol: ), which has the unusual property of a non-zero amplitude in its ground state; i.e., a non-zero vacuum expectation value. It can have this effect because of its unusual "Mexican hat" shaped potential whose lowest "point" is not at its "centre". In simple terms, unlike all other known fields, the Higgs field requires less energy to have a non-zero value than a zero value, so it ends up having a non-zero value everywhere. Below a certain extremely high energy level the existence of this non-zero vacuum expectation spontaneously breaks electroweak gauge symmetry which in turn gives rise to the Higgs mechanism and triggers the acquisition of mass by those particles interacting with the field. This effect occurs because scalar field components of the Higgs field are "absorbed" by the massive bosons as degrees of freedom, and couple to the fermions via Yukawa coupling, thereby producing the expected mass terms. When symmetry breaks under these conditions, the Goldstone bosons that arise interact with the Higgs field (and with other particles capable of interacting with the Higgs field) instead of becoming new massless particles. The intractable problems of both underlying theories "neutralise" each other, and the residual outcome is that elementary particles acquire a consistent mass based on how strongly they interact with the Higgs field. It is the simplest known process capable of giving mass to the gauge bosons while remaining compatible with gauge theories. Its quantum would be a scalar boson, known as the Higgs boson.
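The essential point of the preceding paragraph, a potential whose lowest-energy point lies at a non-zero field value, can be illustrated with a deliberately simplified toy model. The sketch below uses a single real scalar field with the potential V(φ) = −μ²φ² + λφ⁴ rather than the full SU(2) doublet, and the values chosen for μ² and λ are arbitrary illustrative assumptions; it simply confirms numerically that the minimum sits at φ = ±√(μ²/2λ) rather than at zero.

```python
import math

# Toy "Mexican hat" potential for a single real scalar field.
# mu2 (standing for mu^2) and lam are arbitrary illustrative values.
mu2 = 1.0
lam = 0.25

def V(phi: float) -> float:
    """Toy potential V(phi) = -mu^2 * phi^2 + lambda * phi^4."""
    return -mu2 * phi ** 2 + lam * phi ** 4

# Scan the potential on a fine grid and find where it is lowest.
grid = [-3.0 + 0.001 * i for i in range(6001)]
phi_min = min(grid, key=V)

analytic = math.sqrt(mu2 / (2 * lam))
print(f"numerical minimum near phi = {phi_min:+.3f}")
print(f"analytic minima at phi = ±{analytic:.3f}")
print(f"V(0) = {V(0.0):+.3f}  versus  V(minimum) = {V(analytic):+.3f}")
# The lowest-energy configuration is at a non-zero field value, which is
# the toy-model analogue of the Higgs field's non-zero vacuum expectation
# value everywhere in space.
```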
Simple explanation of the theory, from its origins in superconductivity
The proposed Higgs mechanism arose as a result of theories proposed to explain observations in superconductivity. A superconductor does not allow penetration by external magnetic fields (the Meissner effect). This strange observation implies that somehow, the electromagnetic field becomes short ranged during this phenomenon. Successful theories arose to explain this during the 1950s, first for fermions (Ginzburg–Landau theory, 1950), and then for bosons (BCS theory, 1957).
In these theories, superconductivity is interpreted as arising from a charged condensate field. Initially, the condensate value does not have any preferred direction, implying it is scalar, but its phase is capable of defining a gauge, in gauge based field theories. To do this, the field must be charged. A charged scalar field must also be complex (or described another way, it contains at least two components, and a symmetry capable of rotating each into the other(s)). In naïve gauge theory, a gauge transformation of a condensate usually rotates the phase. But in these circumstances, it instead fixes a preferred choice of phase. However, it turns out that fixing the choice of gauge so that the condensate has the same phase everywhere also causes the electromagnetic field to gain an extra term. This extra term causes the electromagnetic field to become short range.
Once attention was drawn to this theory within particle physics, the parallels were clear. A change of the usually long range electromagnetic field to become short ranged, within a gauge invariant theory, was exactly the needed effect sought for the weak force bosons (because a long range force has massless gauge bosons, and a short ranged force implies massive gauge bosons, suggesting that a result of this interaction is that the field's gauge bosons acquired mass, or a similar and equivalent effect). The features of a field required to do this were also quite well defined – it would have to be a charged scalar field, with at least two components, and complex in order to support a symmetry able to rotate these into each other.
Alternative models
The Minimal Standard Model as described above is the simplest known model for the Higgs mechanism with just one Higgs field. However, an extended Higgs sector with additional Higgs particle doublets or triplets is also possible, and many extensions of the Standard Model have this feature. The non-minimal Higgs sector favoured by theory are the two-Higgs-doublet models (2HDM), which predict the existence of a quintet of scalar particles: two CP-even neutral Higgs bosons h0 and H0, a CP-odd neutral Higgs boson A0, and two charged Higgs particles H±. Supersymmetry ("SUSY") also predicts relations between the Higgs-boson masses and the masses of the gauge bosons, and could accommodate a neutral Higgs boson.
The key method to distinguish between these different models involves study of the particles' interactions ("coupling") and exact decay processes ("branching ratios"), which can be measured and tested experimentally in particle collisions. In the Type-I 2HDM model one Higgs doublet couples to up and down quarks, while the second doublet does not couple to quarks. This model has two interesting limits, in which the lightest Higgs couples to just fermions ("gauge-phobic") or just gauge bosons ("fermiophobic"), but not both. In the Type-II 2HDM model, one Higgs doublet only couples to up-type quarks, the other only couples to down-type quarks. The heavily researched Minimal Supersymmetric Standard Model (MSSM) includes a Type-II 2HDM Higgs sector, so it could be disproven by evidence of a Type-I 2HDM Higgs.
In other models the Higgs scalar is a composite particle. For example, in technicolour the role of the Higgs field is played by strongly bound pairs of fermions called techniquarks. Other models feature pairs of top quarks (see top quark condensate). In yet other models, there is no Higgs field at all and the electroweak symmetry is broken using extra dimensions.
Further theoretical issues and hierarchy problem
The Standard Model leaves the mass of the Higgs boson as a parameter to be measured, rather than a value to be calculated. This is seen as theoretically unsatisfactory, particularly as quantum corrections (related to interactions with virtual particles) should apparently cause the Higgs particle to have a mass immensely higher than that observed, but at the same time the Standard Model requires a mass of the order of to ensure unitarity (in this case, to unitarise longitudinal vector boson scattering). Reconciling these points appears to require explaining why there is an almost-perfect cancellation resulting in the visible mass of ~ , and it is not clear how to do this. Because the weak force is about 10^32 times stronger than gravity, and (linked to this) the Higgs boson's mass is so much less than the Planck mass or the grand unification energy, it appears that either there is some underlying connection or reason for these observations which is unknown and not described by the Standard Model, or some unexplained and extremely precise fine-tuning of parameters; however, at present neither of these explanations is proven. This is known as a hierarchy problem. More broadly, the hierarchy problem amounts to the worry that a future theory of fundamental particles and interactions should not have excessive fine-tunings or unduly delicate cancellations, and should allow masses of particles such as the Higgs boson to be calculable. The problem is in some ways unique to spin-0 particles (such as the Higgs boson), which can give rise to issues related to quantum corrections that do not affect particles with spin. A number of solutions have been proposed, including supersymmetry, conformal solutions and solutions via extra dimensions such as braneworld models.
There are also issues of quantum triviality, which suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar particles. Triviality constraints can be used to restrict or predict parameters such as the Higgs boson mass. This can also lead to a predictable Higgs mass in asymptotic safety scenarios.
Properties
Properties of the Higgs field
In the Standard Model, the Higgs field is a scalar tachyonic field: scalar meaning it does not transform under Lorentz transformations, and tachyonic meaning the field (but not the particle) has imaginary mass, and in certain configurations must undergo symmetry breaking. It consists of four components: two neutral ones and two charged component fields. Both of the charged components and one of the neutral fields are Goldstone bosons, which act as the longitudinal third-polarisation components of the massive W+, W−, and Z bosons. The quantum of the remaining neutral component corresponds to (and is theoretically realised as) the massive Higgs boson. This component can interact with fermions via Yukawa coupling to give them mass as well.
Mathematically, the Higgs field has imaginary mass and is therefore a tachyonic field. While tachyons (particles that move faster than light) are a purely hypothetical concept, fields with imaginary mass have come to play an important role in modern physics. Under no circumstances do any excitations ever propagate faster than light in such theories; the presence or absence of a tachyonic mass has no effect whatsoever on the maximum velocity of signals (there is no violation of causality). Instead of faster-than-light particles, the imaginary mass creates an instability: Any configuration in which one or more field excitations are tachyonic must spontaneously decay, and the resulting configuration contains no physical tachyons. This process is known as tachyon condensation, and is now believed to be the explanation for how the Higgs mechanism itself arises in nature, and therefore the reason behind electroweak symmetry breaking.
Although the notion of imaginary mass might seem troubling, it is only the field, and not the mass itself, that is quantised. Therefore, the field operators at spacelike separated points still commute (or anticommute), and information and particles still do not propagate faster than light. Tachyon condensation drives a physical system that has reached a local limit (and might naively be expected to produce physical tachyons) to an alternate stable state where no physical tachyons exist. Once a tachyonic field such as the Higgs field reaches the minimum of the potential, its quanta are not tachyons any more but rather are ordinary particles such as the Higgs boson.
Properties of the Higgs boson
Since the Higgs field is scalar, the Higgs boson has no spin. The Higgs boson is also its own antiparticle, is CP-even, and has zero electric and colour charge.
The Standard Model does not predict the mass of the Higgs boson. If that mass is between (consistent with empirical observations of ), then the Standard Model can be valid at energy scales all the way up to the Planck scale (). It should be the only particle in the Standard Model that remains massive even at high energies. Many theorists expect new physics beyond the Standard Model to emerge at the TeV-scale, based on unsatisfactory properties of the Standard Model.
The highest possible mass scale allowed for the Higgs boson (or some other electroweak symmetry breaking mechanism) is 1.4 TeV; beyond this point, the Standard Model becomes inconsistent without such a mechanism, because unitarity is violated in certain scattering processes.
It is also possible, although experimentally difficult, to estimate the mass of the Higgs boson indirectly: In the Standard Model, the Higgs boson has a number of indirect effects; most notably, Higgs loops result in tiny corrections to masses of the W and Z bosons. Precision measurements of electroweak parameters, such as the Fermi constant and masses of the W and Z bosons, can be used to calculate constraints on the mass of the Higgs. As of July 2011, the precision electroweak measurements tell us that the mass of the Higgs boson is likely to be less than about at 95% confidence level. These indirect constraints rely on the assumption that the Standard Model is correct. It may still be possible to discover a Higgs boson above these masses, if it is accompanied by other particles beyond those accommodated by the Standard Model.
The LHC cannot directly measure the Higgs boson's lifetime, due to its extreme brevity. It is predicted as based on the predicted decay width of . However it can be measured indirectly, based upon comparing masses measured from quantum phenomena occurring in the on shell production pathways and in the, much rarer, off shell production pathways, derived from Dalitz decay via a virtual photon . Using this technique, the lifetime of the Higgs boson was tentatively measured in 2021 as , at sigma 3.2 (1 in 1000) significance.
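The quoted inability to time individual decays follows directly from the relation between a particle's total decay width Γ and its mean lifetime, τ = ħ/Γ. The sketch below works this out numerically; the width used (about 4 MeV, the order of magnitude the Standard Model predicts for a Higgs boson near 125 GeV) is an assumed illustrative input rather than a measured value.

```python
# Convert a total decay width into a mean lifetime via tau = hbar / Gamma.
HBAR_GEV_S = 6.582e-25  # reduced Planck constant in GeV * s

def lifetime_from_width(width_gev: float) -> float:
    """Mean lifetime in seconds for a particle with the given total width in GeV."""
    return HBAR_GEV_S / width_gev

# Assumed illustrative width of about 4 MeV (= 4e-3 GeV), the rough
# Standard Model expectation for a Higgs boson near 125 GeV.
width = 4.0e-3
tau = lifetime_from_width(width)
print(f"width = {width * 1e3:.1f} MeV  ->  lifetime ~ {tau:.2e} s")
# A lifetime of order 1e-22 s is far too short for any detector to time
# directly, which is why the on-shell/off-shell comparison is used instead.
```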
Production
If Higgs particle theories are valid, then a Higgs particle can be produced much like other particles that are studied, in a particle collider. This involves accelerating a large number of particles to extremely high energies and extremely close to the speed of light, then allowing them to smash together. Protons and lead ions (the bare nuclei of lead atoms) are used at the LHC. In the extreme energies of these collisions, the desired esoteric particles will occasionally be produced and this can be detected and studied; any absence or difference from theoretical expectations can also be used to improve the theory. The relevant particle theory (in this case the Standard Model) will determine the necessary kinds of collisions and detectors. The Standard Model predicts that Higgs bosons could be formed in a number of ways, although the probability of producing a Higgs boson in any collision is always expected to be very smallfor example, only one Higgs boson per 10 billion collisions in the Large Hadron Collider. The most common expected processes for Higgs boson production are:
Gluon fusion If the collided particles are hadrons such as the proton or antiproton (as is the case in the LHC and Tevatron), then it is most likely that two of the gluons binding the hadron together collide. The easiest way to produce a Higgs particle is if the two gluons combine to form a loop of virtual quarks. Since the coupling of particles to the Higgs boson is proportional to their mass, this process is more likely for heavy particles. In practice it is enough to consider the contributions of virtual top and bottom quarks (the heaviest quarks). This process is the dominant contribution at the LHC and Tevatron, being about ten times more likely than any of the other processes.
Higgs Strahlung If an elementary fermion collides with an anti-fermion (e.g., a quark with an anti-quark or an electron with a positron), the two can merge to form a virtual W or Z boson which, if it carries sufficient energy, can then emit a Higgs boson. This process was the dominant production mode at the LEP, where an electron and a positron collided to form a virtual Z boson, and it was the second largest contribution for Higgs production at the Tevatron. At the LHC this process is only the third largest, because the LHC collides protons with protons, making a quark-antiquark collision less likely than at the Tevatron. Higgs Strahlung is also known as associated production.
Weak boson fusion Another possibility when two (anti-)fermions collide is that the two exchange a virtual W or Z boson, which emits a Higgs boson. The colliding fermions do not need to be the same type. So, for example, an up quark may exchange a Z boson with an anti-down quark. This process is the second most important for the production of Higgs particle at the LHC and LEP.
Top fusion The final process that is commonly considered is by far the least likely (by two orders of magnitude). This process involves two colliding gluons, which each decay into a heavy quark–antiquark pair. A quark and antiquark from each pair can then combine to form a Higgs particle.
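Returning to the rarity figure quoted at the start of this section, the numbers already given in the text allow a rough estimate of how many Higgs bosons a large collision dataset actually contains. The sketch below is an order-of-magnitude illustration only, combining the roughly one-Higgs-per-10-billion-collisions figure with the more than 300 trillion analysed collisions mentioned earlier; it is not an official cross-section calculation.

```python
# Back-of-the-envelope estimate of the number of Higgs bosons in the
# analysed dataset, using only figures quoted in the surrounding text.
collisions_analysed = 300e12       # over 300 trillion proton-proton collisions
higgs_per_collision = 1 / 10e9     # roughly 1 Higgs boson per 10 billion collisions

expected_higgs = collisions_analysed * higgs_per_collision
print(f"expected Higgs bosons produced: ~{expected_higgs:,.0f}")
# Of order tens of thousands of Higgs bosons in the full dataset, of which
# only a small fraction decay through the cleanest channels; hence the
# need for such enormous collision samples.
```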
Decay
Quantum mechanics predicts that if it is possible for a particle to decay into a set of lighter particles, then it will eventually do so. This is also true for the Higgs boson. The likelihood with which this happens depends on a variety of factors including: the difference in mass, the strength of the interactions, etc. Most of these factors are fixed by the Standard Model, except for the mass of the Higgs boson itself. For a Higgs boson with a mass of the SM predicts a mean life time of about .
Since it interacts with all the massive elementary particles of the SM, the Higgs boson has many different processes through which it can decay. Each of these possible processes has its own probability, expressed as the branching ratio: the fraction of the total number of decays that follows that process. The SM predicts these branching ratios as a function of the Higgs mass (see plot).
One way that the Higgs can decay is by splitting into a fermion–antifermion pair. As a general rule, the Higgs is more likely to decay into heavy fermions than light fermions, because the mass of a fermion is proportional to the strength of its interaction with the Higgs. By this logic the most common decay should be into a top–antitop quark pair. However, such a decay would only be possible if the Higgs were heavier than ~, twice the mass of the top quark. For a Higgs mass of the SM predicts that the most common decay is into a bottom–antibottom quark pair, which happens 57.7% of the time. The second most common fermion decay at that mass is a tau–antitau pair, which happens only about 6.3% of the time.
Another possibility is for the Higgs to split into a pair of massive gauge bosons. The most likely possibility is for the Higgs to decay into a pair of W bosons (the light blue line in the plot), which happens about 21.5% of the time for a Higgs boson with a mass of . The W bosons can subsequently decay either into a quark and an antiquark or into a charged lepton and a neutrino. The decays of W bosons into quarks are difficult to distinguish from the background, and the decays into leptons cannot be fully reconstructed (because neutrinos are impossible to detect in particle collision experiments). A cleaner signal is given by decay into a pair of Z-bosons (which happens about 2.6% of the time for a Higgs with a mass of ), if each of the bosons subsequently decays into a pair of easy-to-detect charged leptons (electrons or muons).
Decay into massless gauge bosons (i.e., gluons or photons) is also possible, but requires an intermediate loop of virtual heavy quarks (top or bottom) or massive gauge bosons. The most common such process is the decay into a pair of gluons through a loop of virtual heavy quarks. This process, which is the reverse of the gluon fusion process mentioned above, happens approximately 8.6% of the time for a Higgs boson with a mass of . Much rarer is the decay into a pair of photons mediated by a loop of W bosons or heavy quarks, which happens only twice for every thousand decays. However, this process is very relevant for experimental searches for the Higgs boson, because the energy and momentum of the photons can be measured very precisely, giving an accurate reconstruction of the mass of the decaying particle.
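The branching ratios quoted in the preceding paragraphs can be collected into a simple table and cross-checked. The sketch below is a small bookkeeping example that reuses the percentages given above for a Higgs boson near 125 GeV; the "other" entry is only a rough remainder standing in for channels not discussed here (such as charm pairs, muon pairs and Zγ), so it should not be read as a precise prediction.

```python
# Approximate Standard Model branching ratios for a Higgs boson near
# 125 GeV, taken from the figures quoted in the surrounding text.
branching_ratios = {
    "b bbar":       0.577,
    "W W":          0.215,
    "gluon gluon":  0.086,
    "tau tau":      0.063,
    "Z Z":          0.026,
    "gamma gamma":  0.002,
}
# Rough remainder for channels not listed above (illustrative only).
branching_ratios["other"] = 1.0 - sum(branching_ratios.values())

for channel, fraction in sorted(branching_ratios.items(), key=lambda kv: -kv[1]):
    print(f"{channel:>12s}: {fraction:6.1%}")
print(f"{'total':>12s}: {sum(branching_ratios.values()):6.1%}")

# Expected decay counts for a given number of produced Higgs bosons.
n_produced = 30_000
for channel in ("b bbar", "gamma gamma"):
    print(f"expected {channel} decays out of {n_produced:,}: "
          f"{branching_ratios[channel] * n_produced:,.0f}")
```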
In 2021 the extremely rare Dalitz decay was tentatively observed, into two leptons (electrons or muons) and a photon (ℓℓγ), via virtual photon decay. This can happen in three ways: Higgs to virtual photon to ℓℓγ, in which the virtual photon (γ*) has a very small but nonzero mass; Higgs to Z boson to ℓℓγ; or Higgs to two leptons, one of which emits a final-state photon leading to ℓℓγ. ATLAS searched for evidence of the first of these at low di-lepton mass , where this process should dominate. The observation is at sigma 3.2 (1 in 1000) significance. This decay path is important because it facilitates measuring the on- and off-shell mass of the Higgs boson (allowing indirect measurement of decay time), and the decay into two charged particles allows exploration of charge conjugation and charge parity (CP) violation.
Public discussion
Naming
Names used by physicists
The names most strongly associated with the particle and field are the Higgs boson and the Higgs field. For some time the particle was known by a combination of its PRL author names (including at times Anderson), for example the Brout–Englert–Higgs particle, the Anderson–Higgs particle, or the Englert–Brout–Higgs–Guralnik–Hagen–Kibble mechanism, and these are still used at times. Fuelled in part by the issue of recognition and a potential shared Nobel Prize,
the most appropriate name was still occasionally a topic of debate until 2013.
Higgs himself preferred to call the particle either by an acronym of all those involved, or "the scalar boson", or "the so-called Higgs particle".
A considerable amount has been written on how Higgs' name came to be exclusively used. Two main explanations are offered. The first is that Higgs undertook a step which was either unique, clearer or more explicit in his paper in formally predicting and examining the particle. Of the PRL papers, only the one by Higgs explicitly offered as a prediction that a massive particle would exist and calculated some of its properties;
he was therefore "the first to postulate the existence of a massive particle" according to Nature.
Physicist and author Frank Close and physicist-blogger Peter Woit both comment that the paper by GHK was also completed after Higgs and Brout–Englert were submitted to Physical Review Letters,
and that Higgs alone had drawn attention to a predicted massive scalar boson, while all others had focused on the massive vector bosons.
In this way, Higgs' contribution also provided experimentalists with a crucial "concrete target" needed to test the theory.
However, in Higgs' view, Brout and Englert did not explicitly mention the boson since its existence is plainly obvious in their work, while according to Guralnik the GHK paper was a complete analysis of the entire symmetry breaking mechanism whose mathematical rigour is absent from the other two papers, and a massive particle may exist in some solutions. Higgs' paper also provided an "especially sharp" statement of the challenge and its solution according to science historian David Kaiser.
The alternative explanation is that the name was popularised in the 1970s due to its use as a convenient shorthand or because of a mistake in citing. Many accounts including Higgs' own credit the "Higgs" name to physicist Benjamin Lee.
Lee was a significant populariser of the theory in its early days, and habitually attached the name "Higgs" as a "convenient shorthand" for its components from 1972,
and in at least one instance from as early as 1966. Although Lee clarified in his footnotes that "'Higgs' is an abbreviation for Higgs, Kibble, Guralnik, Hagen, Brout, Englert",
his use of the term (and perhaps also Steven Weinberg's mistaken cite of Higgs' paper as the first in his seminal 1967 paper
) meant that by around 1975–1976 others had also begun to use the name "Higgs" exclusively as a shorthand.
In 2012, physicist Frank Wilczek, who was credited for naming the elementary particle, the axion (over an alternative proposal "Higglet", by Weinberg), endorsed the "Higgs boson" name, stating "History is complicated, and wherever you draw the line, there will be somebody just below it."
Nickname
The Higgs boson is often referred to as the "God particle" in popular media outside the scientific community. The nickname comes from the title of the 1993 book on the Higgs boson and particle physics, The God Particle: If the Universe Is the Answer, What Is the Question? by Physics Nobel Prize winner and Fermilab director Leon Lederman. Lederman wrote it in the context of failing US government support for the Superconducting Super Collider, a partially constructed titanic competitor to the Large Hadron Collider with planned collision energies of that was championed by Lederman since its 1983 inception and shut down in 1993. The book sought in part to promote awareness of the significance and need for such a project in the face of its possible loss of funding. Lederman, a leading researcher in the field, writes that he wanted to title his book The Goddamn Particle: If the Universe is the Answer, What is the Question? Lederman's editor decided that the title was too controversial and convinced him to change the title to The God Particle: If the Universe is the Answer, What is the Question?
While media use of this term may have contributed to wider awareness and interest, many scientists feel the name is inappropriate since it is sensational hyperbole and misleads readers; the particle also has nothing to do with any God, leaves open numerous questions in fundamental physics, and does not explain the ultimate origin of the universe. Higgs, an atheist, was reported to be displeased and stated in a 2008 interview that he found it "embarrassing" because it was "the kind of misuse[...] which I think might offend some people". The nickname has been satirised in mainstream media as well. Science writer Ian Sample stated in his 2010 book on the search that the nickname is "universally hate[d]" by physicists and perhaps the "worst derided" in the history of physics, but that (according to Lederman) the publisher rejected all titles mentioning "Higgs" as unimaginative and too unknown.
Lederman begins with a review of the long human search for knowledge, and explains that his tongue-in-cheek title draws an analogy between the impact of the Higgs field on the fundamental symmetries at the Big Bang, and the apparent chaos of structures, particles, forces and interactions that resulted and shaped our present universe, with the biblical story of Babel in which the primordial single language of early Genesis was fragmented into many disparate languages and cultures.
Lederman asks whether the Higgs boson was added just to perplex and confound those seeking knowledge of the universe, and whether physicists will be confounded by it as recounted in that story, or ultimately surmount the challenge and understand "how beautiful is the universe [God has] made".
Other proposals
A renaming competition by British newspaper The Guardian in 2009 resulted in their science correspondent choosing the name "the champagne bottle boson" as the best submission: "The bottom of a champagne bottle is in the shape of the Higgs potential and is often used as an illustration in physics lectures. So it's not an embarrassingly grandiose name, it is memorable, and [it] has some physics connection too."
The name Higgson was suggested as well, in an opinion piece in the Institute of Physics' online publication physicsworld.com.
Educational explanations and analogies
There has been considerable public discussion of analogies and explanations for the Higgs particle and how the field creates mass,
including coverage of explanatory attempts in their own right and a competition in 1993 for the best popular explanation by then-UK Minister for Science Sir William Waldegrave
and articles in newspapers worldwide.
An educational collaboration involving an LHC physicist and a High School Teachers at CERN educator suggests that dispersion of light (responsible for the rainbow and the dispersive prism) is a useful analogy for the Higgs field's symmetry breaking and mass-causing effect.
Matt Strassler uses electric fields as an analogy:
A similar explanation was offered by The Guardian:
The Higgs field's effect on particles was famously described by physicist David Miller as akin to a room full of political party workers spread evenly throughout: the crowd gravitates to and slows down famous people but does not slow down others.
He also drew attention to well-known effects in solid state physics where an electron's effective mass can be much greater than usual in the presence of a crystal lattice.
Analogies based on drag effects, including analogies of "syrup" or "molasses" are also well known, but can be somewhat misleading since they may be understood (incorrectly) as saying that the Higgs field simply resists some particles' motion but not others'; a simple resistive effect could also conflict with Newton's third law.
The Higgs boson is commonly misunderstood as being responsible for mass, when it is the Higgs field that plays that role, and as accounting for most of the mass in the universe; in fact, most of the mass of ordinary matter comes from the binding energy of quarks and gluons rather than from the Higgs mechanism.
Recognition and awards
There was considerable discussion prior to late 2013 of how to allocate the credit if the Higgs boson were proven, made more pointed because a Nobel Prize had been expected and because of the very wide range of people entitled to consideration. These included a range of theoreticians who made the Higgs mechanism theory possible, the theoreticians of the 1964 PRL papers (including Higgs himself), the theoreticians who derived from these a working electroweak theory and the Standard Model itself, and also the experimentalists at CERN and other institutions who made possible the proof of the Higgs field and boson in reality. The Nobel Prize has a limit of three persons to share an award, and some possible winners were already prize holders for other work, or were deceased (the prize is only awarded to persons in their lifetime). Existing prizes for works relating to the Higgs field, boson, or mechanism include:
Nobel Prize in Physics (1979) – Glashow, Salam, and Weinberg, for contributions to the theory of the unified weak and electromagnetic interaction between elementary particles
Nobel Prize in Physics (1999) – 't Hooft and Veltman, for elucidating the quantum structure of electroweak interactions in physics
J. J. Sakurai Prize for Theoretical Particle Physics (2010) – Hagen, Englert, Guralnik, Higgs, Brout, and Kibble, for elucidation of the properties of spontaneous symmetry breaking in four-dimensional relativistic gauge theory and of the mechanism for the consistent generation of vector boson masses (for the 1964 papers described above)
Wolf Prize (2004) – Englert, Brout, and Higgs
Special Breakthrough Prize in Fundamental Physics (2013) – Fabiola Gianotti and Peter Jenni, spokespersons of the ATLAS Collaboration, and Michel Della Negra, Tejinder Singh Virdee, Guido Tonelli, and Joseph Incandela, spokespersons, past and present, of the CMS collaboration, "For [their] leadership role in the scientific endeavour that led to the discovery of the new Higgs-like particle by the ATLAS and CMS collaborations at CERN's Large Hadron Collider".
Nobel Prize in Physics (2013) – Peter Higgs and François Englert, for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN's Large Hadron Collider
Englert's co-researcher Robert Brout had died in 2011 and the Nobel Prize is not ordinarily given posthumously.
Additionally, Physical Review Letters' 50-year review (2008) recognised the 1964 PRL symmetry breaking papers and Weinberg's 1967 paper A Model of Leptons (the most cited paper in particle physics, as of 2012) as "milestone Letters".
Following reported observation of the Higgs-like particle in July 2012, several Indian media outlets reported on the supposed neglect of credit to Indian physicist Satyendra Nath Bose after whose work in the 1920s the class of particles "bosons" is named
(although physicists have described Bose's connection to the discovery as tenuous).
Technical aspects and mathematical formulation
In the Standard Model, the Higgs field is a four-component scalar field that forms a complex doublet of the weak isospin SU(2) symmetry:
while the field has charge + under the weak hypercharge U(1) symmetry.
The Higgs part of the Lagrangian is
where and are the gauge bosons of the SU(2) and U(1) symmetries, and their respective coupling constants, are the Pauli matrices (a complete set of generators of the SU(2) symmetry), and and , so that the ground state breaks the SU(2) symmetry (see figure).
The ground state of the Higgs field (the bottom of the potential) is degenerate with different ground states related to each other by a SU(2) gauge transformation. It is always possible to pick a gauge such that in the ground state $\phi^1 = \phi^2 = \phi^3 = 0$. The expectation value of $\phi^0$ in the ground state (the vacuum expectation value or VEV) is then $\langle \phi^0 \rangle = v$, where $v = \mu / \sqrt{\lambda}$. The measured value of this parameter is ~246 GeV/c². It has units of mass, and is the only free parameter of the Standard Model that is not a dimensionless number. Quadratic terms in $W_\mu$ and $B_\mu$ arise, which give masses to the W and Z bosons:
$$m_W = \tfrac{1}{2} g v, \qquad m_Z = \tfrac{1}{2} \sqrt{g^2 + g'^2}\, v,$$
with their ratio determining the Weinberg angle, $\cos\theta_W = \tfrac{m_W}{m_Z} = \tfrac{g}{\sqrt{g^2 + g'^2}}$, and leave a massless U(1) photon, $\gamma$. The mass of the Higgs boson itself is given by
$$m_H = \sqrt{2\mu^2} = \sqrt{2\lambda}\, v.$$
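As a rough numerical illustration of these tree-level relations, the short sketch below assumes approximate input values g ≈ 0.65, g′ ≈ 0.36, v ≈ 246 GeV and λ ≈ 0.13; these numbers are indicative only, chosen so that the computed masses land near the measured ones.

```python
import math

# Assumed, approximate tree-level Standard Model inputs
g = 0.65        # SU(2) gauge coupling (approximate)
g_prime = 0.36  # U(1) hypercharge coupling (approximate)
v = 246.0       # Higgs vacuum expectation value, GeV (approximate)
lam = 0.13      # Higgs self-coupling (approximate)

# Tree-level mass relations from the broken Lagrangian above
m_W = 0.5 * g * v                             # W boson mass
m_Z = 0.5 * math.sqrt(g**2 + g_prime**2) * v  # Z boson mass
theta_W = math.acos(m_W / m_Z)                # Weinberg angle from the mass ratio
m_H = math.sqrt(2.0 * lam) * v                # Higgs boson mass

print(f"m_W ~ {m_W:.1f} GeV, m_Z ~ {m_Z:.1f} GeV")      # roughly 80 and 91 GeV
print(f"sin^2(theta_W) ~ {math.sin(theta_W)**2:.3f}")    # roughly 0.23
print(f"m_H ~ {m_H:.1f} GeV")                            # roughly 125 GeV
```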
The quarks and the leptons interact with the Higgs field through Yukawa interaction terms:
$$\mathcal{L}_{\text{Yukawa}} = -\lambda_u^{ij}\, \bar{Q}_L^i\, \tilde{\phi}\, u_R^j \;-\; \lambda_d^{ij}\, \bar{Q}_L^i\, \phi\, d_R^j \;-\; \lambda_e^{ij}\, \bar{L}_L^i\, \phi\, e_R^j \;+\; \text{h.c.},$$
where $Q_L^i, u_R^i, d_R^i$ and $L_L^i, e_R^i$ are the left-handed and right-handed quarks and leptons of the $i$th generation, $\lambda_{u,d,e}^{ij}$ are matrices of Yukawa couplings, $\tilde{\phi} = i\sigma^2 \phi^*$, and h.c. denotes the hermitian conjugate of all the preceding terms. In the symmetry-breaking ground state, only the terms containing $\phi^0$ remain, giving rise to mass terms for the fermions. Rotating the quark and lepton fields to the basis where the matrices of Yukawa couplings are diagonal, one gets
$$\mathcal{L}_{\text{mass}} = -m_u^i\, \bar{u}^i u^i - m_d^i\, \bar{d}^i d^i - m_e^i\, \bar{e}^i e^i,$$
where the masses of the fermions are $m_{u,d,e}^i = \lambda_{u,d,e}^i\, v / \sqrt{2}$, and $\lambda_{u,d,e}^i$ denote the eigenvalues of the Yukawa matrices.
See also
Standard Model
Standard Model fields overview
mass terms and the Higgs mechanism
Other
Composite Higgs models, an extension of the SM where the Higgs boson is made of smaller constituents
Particle Fever, a 2013 American documentary film following various LHC experiments and concluding with the identification of the Higgs boson
Explanatory notes
References
Sources
Further reading
External links
Popular science, mass media, and general coverage
Higgs Boson observation at CERN
Hunting the Higgs Boson at C.M.S. Experiment, at CERN
The Higgs Boson by the CERN exploratorium.
Particle Fever, documentary film about the search for the Higgs Boson.
The Atom Smashers, documentary film about the search for the Higgs Boson at Fermilab.
Collected Articles at the Guardian
Video (04:38) – CERN Announcement on 4 July 2012, of the discovery of a particle suspected to be a Higgs Boson.
Video1 (07:44) + Video2 (07:44) – Higgs Boson Explained by CERN Physicist, Dr. Daniel Whiteson (16 June 2011).
HowStuffWorks: What exactly is the Higgs Boson?
New York Times "behind the scenes" style article on the Higgs' search at ATLAS and CMS
The story of the Higgs theory by the authors of the PRL papers and others closely associated:
Guralnik, Gerald (2013). "Heretical Ideas that Provided the Cornerstone for the Standard Model of Particle Physics". SPG Mitteilungen March 2013, No. 39, (p. 14), and Talk at Brown University about the 1964 PRL papers
Philip Anderson (not one of the PRL authors) on symmetry breaking in superconductivity and its migration into particle physics and the PRL papers
Cartoon about the search
Higgs Boson, BBC Radio 4 discussion with Jim Al-Khalili, David Wark & Roger Cashmore (In Our Time, 18 November 2004)
Significant papers and other
Particle Data Group: Review of searches for Higgs Bosons.
2001, a spacetime odyssey: proceedings of the Inaugural Conference of the Michigan Center for Theoretical Physics : Michigan, 21–25 May 2001, (pp. 86–88), ed. Michael J. Duff, James T. Liu, , containing Higgs' story of the Higgs Boson.
example of a 1966 Russian paper on the subject.
The Department of Energy Explains ... the Higgs Boson
Introductions to the field
Electroweak Symmetry Breaking – A pedagogic introduction to electroweak symmetry breaking with step by step derivations of many key relations, by Robert D. Klauber, 15 January 2018 (archived at Wayback Machine)
Spontaneous symmetry breaking, gauge theories, the Higgs mechanism and all that (Bernstein, Reviews of Modern Physics Jan 1974) an introduction of 47 pages covering the development, history and mathematics of Higgs theories from around 1950 to 1974.
2012 in science
Bosons
Electroweak theory
Elementary particles
Mass
Phase transitions
Standard Model
Quantum field theory
Subatomic particles with spin 0
Force carriers | Higgs boson | [
"Physics",
"Chemistry",
"Mathematics"
] | 17,724 | [
"Physical phenomena",
"Physical quantities",
"Mass",
"Phases of matter",
"Quantum mechanics",
"Fundamental interactions",
"Statistical mechanics",
"Phase transitions",
"Particle physics",
"Wikipedia categories named after physical quantities",
"Subatomic particles",
"Scalar physical quantities... |
20,556,915 | https://en.wikipedia.org/wiki/Boson | In particle physics, a boson ( ) is a subatomic particle whose spin quantum number has an integer value (0, 1, 2, ...). Bosons form one of the two fundamental classes of subatomic particle, the other being fermions, which have odd half-integer spin (, , , ...). Every observed subatomic particle is either a boson or a fermion. Paul Dirac coined the name boson to commemorate the contribution of Satyendra Nath Bose, an Indian physicist.
Some bosons are elementary particles occupying a special role in particle physics, distinct from the role of fermions (which are sometimes described as the constituents of "ordinary matter"). Certain elementary bosons (e.g. gluons) act as force carriers, which give rise to forces between other particles, while one (the Higgs boson) contributes to the phenomenon of mass. Other bosons, such as mesons, are composite particles made up of smaller constituents.
Outside the realm of particle physics, multiple identical composite bosons (in this context sometimes known as 'bose particles') behave at high densities or low temperatures in a characteristic manner described by Bose–Einstein statistics: for example a gas of helium-4 atoms becomes a superfluid at temperatures close to absolute zero. Similarly, superconductivity arises because some quasiparticles, such as Cooper pairs, behave in the same way.
Name
The name boson was coined by Paul Dirac to commemorate the contribution of Satyendra Nath Bose, an Indian physicist. When Bose was a reader (later professor) at the University of Dhaka, Bengal (now in Bangladesh), he and Albert Einstein developed the theory characterising such particles, now known as Bose–Einstein statistics and Bose–Einstein condensate.
Elementary bosons
All observed elementary particles are either bosons (with integer spin) or fermions (with odd half-integer spin). Whereas the elementary particles that make up ordinary matter (leptons and quarks) are fermions, elementary bosons occupy a special role in particle physics. They act either as force carriers which give rise to forces between other particles, or in one case give rise to the phenomenon of mass.
According to the Standard Model of Particle Physics there are five elementary bosons:
One scalar boson (spin = 0)
Higgs boson – the particle that contributes to the phenomenon of mass via the Higgs mechanism
Four vector bosons (spin = 1) that act as force carriers. These are the gauge bosons:
Photon – the force carrier of the electromagnetic field
Gluons (eight different types) – force carriers that mediate the strong force
Neutral weak boson (Z) – the force carrier that mediates the weak force
Charged weak bosons, W+ and W− (two types) – also force carriers that mediate the weak force
A second order tensor boson (spin = 2) called the graviton (G) has been hypothesised as the force carrier for gravity, but so far all attempts to incorporate gravity into the Standard Model have failed.
Composite bosons
Composite particles (such as hadrons, nuclei, and atoms) can be bosons or fermions depending on their constituents. Since bosons have integer spin and fermions odd half-integer spin, any composite particle made up of an even number of fermions is a boson.
Composite bosons include:
All mesons of every type
Stable nuclei with even mass numbers such as deuterium, helium-4 (the alpha particle), carbon-12, lead-208, and many others.
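A minimal sketch of the counting rule stated above, using the standard constituent counts of the helium isotopes (protons, neutrons and electrons are all fermions):

```python
def is_composite_boson(n_fermion_constituents: int) -> bool:
    """A composite of an even number of fermions carries integer total spin, i.e. it is a boson."""
    return n_fermion_constituents % 2 == 0

# Helium-4 atom: 2 protons + 2 neutrons + 2 electrons = 6 fermions
print(is_composite_boson(2 + 2 + 2))   # True  -> obeys Bose-Einstein statistics

# Helium-3 atom: 2 protons + 1 neutron + 2 electrons = 5 fermions
print(is_composite_boson(2 + 1 + 2))   # False -> behaves as a fermion
```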
As quantum particles, the behaviour of multiple indistinguishable bosons at high densities is described by Bose–Einstein statistics. One characteristic which becomes important in superfluidity and other applications of Bose–Einstein condensates is that there is no restriction on the number of bosons that may occupy the same quantum state. As a consequence, when for example a gas of helium-4 atoms is cooled to temperatures very close to absolute zero and the kinetic energy of the particles becomes negligible, it condenses into a low-energy state and becomes a superfluid.
Quasiparticles
Certain quasiparticles are observed to behave as bosons and to follow Bose–Einstein statistics, including Cooper pairs, plasmons and phonons.
See also
Explanatory notes
References
Quantum field theory
Atomic physics
Condensed matter physics | Boson | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 918 | [
"Quantum field theory",
"Phases of matter",
"Quantum mechanics",
"Bosons",
"Materials science",
"Subatomic particles",
"Atomic physics",
" molecular",
"Condensed matter physics",
"Atomic",
"Matter",
" and optical physics"
] |
20,558,229 | https://en.wikipedia.org/wiki/FTCS%20scheme | In numerical analysis, the FTCS (forward time-centered space) method is a finite difference method used for numerically solving the heat equation and similar parabolic partial differential equations. It is a first-order method in time, explicit in time, and is conditionally stable when applied to the heat equation. When used as a method for advection equations, or more generally hyperbolic partial differential equations, it is unstable unless artificial viscosity is included. The abbreviation FTCS was first used by Patrick Roache.
The method
The FTCS method is based on the forward Euler method in time (hence "forward time") and central difference in space (hence "centered space"), giving first-order convergence in time and second-order convergence in space. For example, in one dimension, if the partial differential equation is
$$\frac{\partial u}{\partial t} = F\!\left(u, x, t, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}\right)$$
then, letting $u(i\,\Delta x, n\,\Delta t) = u_i^{n}$, the forward Euler method is given by:
$$\frac{u_i^{n+1} - u_i^{n}}{\Delta t} = F_i^{n}\!\left(u, x, t, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}\right)$$
The function $F$ must be discretized spatially with a central difference scheme. This is an explicit method, which means that $u_i^{n+1}$ can be explicitly computed (no need of solving a system of algebraic equations) if the values of $u$ at the previous time level $n$ are known. The FTCS method is computationally inexpensive since the method is explicit.
Illustration: one-dimensional heat equation
The FTCS method is often applied to diffusion problems. As an example, for the 1D heat equation,
$$\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2},$$
the FTCS scheme is given by:
$$\frac{u_i^{n+1} - u_i^{n}}{\Delta t} = \frac{\alpha}{(\Delta x)^2}\left(u_{i+1}^{n} - 2 u_i^{n} + u_{i-1}^{n}\right)$$
or, letting $r = \frac{\alpha\,\Delta t}{(\Delta x)^2}$:
$$u_i^{n+1} = u_i^{n} + r\left(u_{i+1}^{n} - 2 u_i^{n} + u_{i-1}^{n}\right)$$
Stability
As derived using von Neumann stability analysis, the FTCS method for the one-dimensional heat equation is numerically stable if and only if the following condition is satisfied:
$$r = \frac{\alpha\,\Delta t}{(\Delta x)^2} \le \frac{1}{2}.$$
Which is to say that the choice of $\Delta x$ and $\Delta t$ must satisfy the above condition for the FTCS scheme to be stable. In two dimensions, the condition becomes
$$\frac{\alpha\,\Delta t}{(\Delta x)^2} + \frac{\alpha\,\Delta t}{(\Delta y)^2} \le \frac{1}{2}.$$
If we choose $\Delta x = \Delta y = \Delta z$, then the stability conditions become $\Delta t \le \frac{(\Delta x)^2}{2\alpha}$, $\Delta t \le \frac{(\Delta x)^2}{4\alpha}$, and $\Delta t \le \frac{(\Delta x)^2}{6\alpha}$ for one-, two-, and three-dimensional applications, respectively.
A major drawback of the FTCS method is that for problems with large diffusivity , satisfactory step sizes can be too small to be practical.
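A minimal sketch of the scheme above for the one-dimensional heat equation, assuming illustrative values for the diffusivity and grid, a sinusoidal initial profile, and fixed (Dirichlet) boundary values:

```python
import numpy as np

# Illustrative parameters (assumed values)
alpha = 1.0                      # diffusivity
L, T = 1.0, 0.1                  # domain length and final time
nx = 51                          # number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha         # chosen so that r <= 1/2
r = alpha * dt / dx**2
assert r <= 0.5, "FTCS is unstable for r > 1/2"

x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)            # initial condition u(x, 0)

for _ in range(round(T / dt)):
    # u_i^{n+1} = u_i^n + r (u_{i+1}^n - 2 u_i^n + u_{i-1}^n), interior points only
    u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0], u[-1] = 0.0, 0.0       # Dirichlet boundary conditions

# For this profile the exact solution decays like exp(-pi^2 * alpha * T) ~ 0.37
print(u.max())
```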
For hyperbolic partial differential equations, the linear test problem is the constant coefficient
advection equation, as opposed to the heat equation (or diffusion equation), which is the correct choice for a parabolic differential equation.
It is well known that for these hyperbolic problems, any choice of $\Delta t$ results in an unstable scheme.
See also
Partial differential equations
Crank–Nicolson method
Finite-difference time-domain method
References
Numerical differential equations
Computational fluid dynamics | FTCS scheme | [
"Physics",
"Chemistry"
] | 497 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
20,564,439 | https://en.wikipedia.org/wiki/Supercritical%20adsorption | Supercritical adsorption also referred to as the adsorption of supercritical fluids, is the adsorption at above-critical temperatures. There are different tacit understandings of supercritical fluids. For example, “a fluid is considered to be ‘supercritical’ when its temperature and pressure exceed the temperature and pressure at the critical point”. In the studies of supercritical extraction, however, “supercritical fluid” is applied for a narrow temperature region of 1-1.2 or to +10 K, which is called the supercritical region. ( is the critical temperature)
History
Observations of supercritical adsorption reported before 1930 were covered in studies by McBain and Britton. All of the important articles on this subject published between 1930 and 1966 have been reviewed by Menon. During the last 20 years, a growing interest in supercritical adsorption research under the impetus of the quest for clean alternative fuels has been observed. Considerable progress has been made in both adsorption measurement techniques and molecular simulation of adsorption on computers, rendering new insights into the nature of supercritical adsorption.
Properties
According to the adsorption behavior, the adsorption of gases on solids can be classified into three temperature ranges relative to the critical temperature $T_c$:
1. Subcritical region ($T < T_c$)
2. Near-critical region ($T_c < T < T_c + 10$)
3. The region $T > T_c + 10$
Isotherms in the first region will show the feature of subcritical adsorption. Isotherms in the second region will show the feature of mechanism transition. Isotherms in the third region will show the feature of supercritical adsorption. The transition will be continuous if the isotherms on both sides of the critical temperature belong to the same type, such as adsorption on microporous activated carbon. However, a discontinuous transition could be observed on isotherms in the second region if there is a transformation of isotherm types, such as adsorption on mesoporous silica gel. The decisive factor in such a classification of adsorption is merely temperature, irrespective of pressure. This is because a fluid cannot undergo a transition to a liquid phase at above-critical temperature, regardless of the pressure applied. This fundamental law determines the different adsorption mechanisms for the subcritical and supercritical regions. For the subcritical region, the highest equilibrium pressure of adsorption is the saturation pressure $p_s$ of the adsorbate; beyond $p_s$, condensation happens. Adsorbate in the adsorbed phase is largely in the liquid state, based on which different adsorption and thermodynamic theories as well as their applications were developed. For the supercritical region, condensation cannot happen, no matter how great the pressure is.
Acquisition of supercritical adsorption isotherms
An adsorption isotherm depicts the relation between the quantity of adsorbate adsorbed and the bulk phase pressure (or density) at equilibrium for a constant temperature. It is a dataset of specified adsorption equilibria. Such equilibrium data are required for the optimal design of processes relying on adsorption and are considered fundamental information for theoretical studies.
Measurement of gas-solid adsorption equilibria
Volumetric method
The volumetric method was used in the early days of adsorption studies by Langmuir, Dubinin and others. It basically comprises a gas expansion process from a storage vessel (reference cell) to an adsorption chamber including adsorbent (adsorption cell) through a controlling valve C, as schematically shown in Figure 1. The reference cell with volume $V_r$ is kept at a constant temperature $T_r$. The value of $V_r$ includes the volume of the tube between the reference cell and valve C. The adsorption cell is kept at the specified equilibrium temperature $T_a$. The volume of the connecting tube between the adsorption cell and valve C is divided into two parts: one part with volume $V_1$ at the same temperature as the reference cell; the other part is buried in an atmosphere of temperature $T_a$, and its volume is added to the volume $V_a$ of the adsorption cell.
The amount adsorbed can be calculated from the pressure readings before and after opening valve C based on the p-V-T relationship of real gases. A dry and degassed adsorbent sample of known weight is enclosed in the adsorption cell. An amount of gas is let into $V_r$ to maintain a pressure $p_1$. The moles of gas confined in $V_r$ are calculated as:
$$n_1 = \frac{p_1 V_r}{z_1 R T_r}$$
The pressure drops to $p_2$ after opening valve C. The amounts of gas maintained in $V_r$, $V_1$, and $V_a$ are respectively:
$$n_r = \frac{p_2 V_r}{z_2 R T_r}, \qquad n_v = \frac{p_2 V_1}{z_2 R T_r}, \qquad n_a = \frac{p_2 V_a}{z_a R T_a}$$
The amount adsorbed, or the excess adsorption $N$, is then obtained:
$$N = \left(n_1 + n_v^0 + n_a^0\right) - \left(n_r + n_v + n_a\right)$$
where $n_v^0$ and $n_a^0$ are the moles of the gas remaining in $V_1$ and $V_a$ before opening valve C. All of the compressibility factor ($z$) values are calculated by a proper equation of state, which can generate appropriate $z$ values for temperatures not close to the critical zone.
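A minimal sketch of this mole balance for a single dose into an initially evacuated cell (so no gas remains in $V_1$ or $V_a$ beforehand), using placeholder volumes, temperatures and pressure readings; the compressibility factors are set to 1 here, whereas in practice they would come from an equation of state:

```python
R = 8.314  # J/(mol K)

def moles(p, V, z, T):
    """Moles of real gas in volume V at pressure p, from pV = z n R T."""
    return p * V / (z * R * T)

# Placeholder apparatus data and readings (assumed values only)
V_r, V_1, V_a = 50e-6, 2e-6, 30e-6   # volumes, m^3
T_r, T_a = 298.0, 253.0              # reference and adsorption temperatures, K
p_1, p_2 = 5.0e6, 2.5e6              # pressure before/after opening valve C, Pa
z = 1.0                              # compressibility factor (ideal-gas assumption)

n_dosed = moles(p_1, V_r, z, T_r)              # gas charged into the reference cell
n_remaining = (moles(p_2, V_r, z, T_r)         # gas left in V_r
               + moles(p_2, V_1, z, T_r)       # ... in the connecting volume V_1
               + moles(p_2, V_a, z, T_a))      # ... in the adsorption cell V_a
n_excess = n_dosed - n_remaining               # excess adsorption for this dose
print(f"excess adsorption: {n_excess * 1000:.2f} mmol")
```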
The main advantages of this method are simplicity in procedure, commercial availability of instruments, and the large ranges of pressure and temperature in which this method can be realized. The disadvantage of volumetric measurements is the considerable amount of adsorbent sample needed to overcome adsorption effects on the walls of the vessels. However, this may be a positive aspect if the sample is adequate. A larger amount of sample results in considerable adsorption and usually provides a larger void space in the adsorption cell, rendering the effect of uncertainty in “dead space” to a minimum.
Gravimetric method
In gravimetric method, the weight change of the adsorbent sample in the gravity field due to adsorption from the gas phase is recorded. Various types of sensitive microbalance have been developed for this purpose. A continuous-flow gravimetric technique coupled with wavelet rectification allows for higher precision, especially in the near-critical region.
Major advantages of the gravimetric method include sensitivity, accuracy, and the possibility of checking the state of activation of an adsorbent sample. However, consideration must be given to buoyancy correction in gravimetric measurement. A counterpart is used for this purpose. The solid sample is placed in a sample holder on one arm of the microbalance while the counterpart is loaded on the other arm. Care must be taken to keep the volume of the sample and the counterpart as close as possible to reduce the buoyancy effect. The system is vacuumed and the balance is zeroed before starting experiments. Buoyancy is measured by introducing helium and pressurizing up to the highest pressure of the experiment. It is assumed that helium does not adsorb and any weight change ($\Delta W$) is due to buoyancy. Knowing the density of helium ($\rho_{He}$), one can determine the difference in volume ($\Delta V$) between the sample and the counterpart:
$$\Delta V = \frac{\Delta W}{\rho_{He}}$$
The measured weight can be corrected for the buoyancy effect at a specified temperature and pressure:
$$W = W_0 + \rho_g(T, p)\,\Delta V$$
where $W_0$ is the weight reading before correction and $\rho_g(T, p)$ is the density of the bulk gas.
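A small sketch of this correction, assuming ideal-gas densities and placeholder balance readings:

```python
R = 8.314  # J/(mol K)

def gas_density(p, T, molar_mass, z=1.0):
    """Bulk gas density in kg/m^3 from p V = z n R T (z = 1 assumes ideal behaviour)."""
    return p * molar_mass / (z * R * T)

T = 298.0                                    # K
# 1) Helium calibration: helium is assumed not to adsorb, so the apparent
#    weight change dW_he is attributed to buoyancy alone.
rho_he = gas_density(2.0e6, T, 4.0e-3)       # helium density at 2 MPa, ~3.2 kg/m^3
dW_he = 3.2e-6                               # apparent weight change, kg (placeholder)
dV = dW_he / rho_he                          # volume mismatch sample vs. counterpart, m^3

# 2) Measurement with the working gas (nitrogen here), corrected for buoyancy:
rho_gas = gas_density(2.0e6, T, 28.0e-3)     # bulk nitrogen density at the same p, T
W_reading = 1.2345e-3                        # balance reading, kg (placeholder)
W_corrected = W_reading + rho_gas * dV       # buoyancy-corrected weight
print(f"corrected weight: {W_corrected * 1e6:.1f} mg")
```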
Generating isotherms by molecular simulation of adsorption
Monte Carlo and molecular dynamics approaches became useful tools for theoretical calculations aiming at predictions of adsorption equilibria and diffusivities in small pores of various simple geometries. The interactions between adsorbate molecules are represented by the Lennard-Jones potential:
$$U(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]$$
where $r$ is the interparticle distance, $\sigma$ is the point at which the potential is zero, and $\varepsilon$ is the well depth.
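A direct transcription of this potential; the σ and ε values below are assumed, roughly nitrogen-like numbers used only for illustration:

```python
import numpy as np

def lennard_jones(r, sigma, epsilon):
    """Lennard-Jones pair potential U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

sigma = 0.36     # nm, separation at which the potential crosses zero (assumed)
epsilon = 95.0   # K (well depth expressed as epsilon / k_B, assumed)

r = np.linspace(0.9 * sigma, 3.0 * sigma, 500)
u = lennard_jones(r, sigma, epsilon)

# The minimum sits at r = 2**(1/6) * sigma with depth -epsilon.
print(f"minimum near r = {r[np.argmin(u)]:.3f} nm, depth ~ {u.min():.1f} K")
```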
Experimental isotherms of the supercritical region
Li Zhou and coworkers used a volumetric apparatus to measure the adsorption equilibria of hydrogen and methane on activated carbon (Figure 2, 3). They also measured the adsorption equilibria of nitrogen on microporous activated carbon (Figure 4) and on a mesoporous silica gel (Figure 5) for both the subcritical and supercritical regions. Figure 6 shows the isotherms of methane on silica gel.
Future problems
Adsorption of fluids at above-critical temperatures and elevated pressures is a field of growing importance in both science and engineering. It is the physicochemical basis of many engineering processes and potential industrial applications, for example the separation or purification of light hydrocarbons, storage of fuel gases in microporous solids, adsorption from supercritical gases in extraction processes, and chromatography. In addition, knowledge of gas/solid interface phenomena at high pressures is fundamental to heterogeneous catalysis. However, the limited number of reliable high-pressure adsorption data has hampered the progress of theoretical study.
At least two problems have to be solved before a consistent system of theories for supercritical adsorption can be established: first, how to define a thermodynamic standard state for the supercritical adsorbed phase, so that the adsorption potential for supercritical adsorption can be evaluated; second, how to determine the total amount in the adsorbed phase from experimentally measured equilibrium data. Determination of the absolute adsorption is needed for establishing thermodynamic theory because, as a reflection of the statistical behavior of molecules, thermodynamic rules must rely on the total, not part, of the material confined in the system studied.
From recent studies of supercritical adsorption, there seems to be an end in the high-pressure direction for supercritical adsorption. However, adsorbed-phase density is the decisive factor for the existence of this end. The state of adsorbate at the “end” provides the standard state of the supercritical adsorbed phase just like the saturated liquid, which is the end state of adsorbate in the subcritical adsorption. So the “end state” has to be precisely defined. To establish a definite relationship for the adsorbed phase density at the end state, abundant and reliable experimental data are still required.
References
József Tóth (2002). Adsorption: Theory, Modeling, and Analysis. CRC Press ,
Jyh-Ping Hsu (1999). Interfacial Forces and Fields: Theory and Applications. CRC Press ,
Eldred H. Chimowitz (2005). Introduction to Critical Phenomena in Fluids. Oxford University Press US ,
Jacques P. Fraissard, Curt W. Conner (1997). Physical Adsorption: Experiment, Theory, and Applications. Springer ,
Li Zhou (2006). Adsorption Progress in Fundamental and Application Research. World Scientific
Y Zhou, Y Sun, L Zhou. An experimental study on the adsorption behavior of gases on crossing the critical temperature. The 7th International Conference on Fundamentals of Adsorption, Nagasaki, 2001
Peng B, Yu YX, A Density Functional Theory for Lennard-Jones Fluids in Cylindrical Pores and Its Applications to Adsorption of Nitrogen on MCM-41 Materials. Langmuir, 24 (2008) 12431-12439
Estella J, Echeverria JC, Laguna M, et al. Effect of supercritical drying conditions in ethanol on the structural and textural properties of silica aerogels. Journal of Porous Materials, 15 (2008) 705-713
Li M, Pham PJ, Pittman CU, et al. Selective Solid-Phase Extraction of a-Tocopherol by Functionalized Ionic Liquid-modified Mesoporous SBA-15 Adsorbent. Analytical Sciences, 24 (2008) 1245-1250
Ottiger S, Pini R, Storti G, et al. Competitive adsorption equilibria of CO2 and CH4 on a dry coal. Adsorption-Journal of the International Adsorption Society. 14 (2008) 539-556
Vedaraman N, Srinivasakannan C, Brunner G, et al. Kinetics of cholesterol extraction using supercritical carbon dioxide with cosolvents. Industrial & Engineering Chemistry Research, 47 (2008) 6727-6733
Chen Y, Koberstein JT. Fabrication of block copolymer monolayers by adsorption from supercritical fluids: A versatile concept for modification and functionalization of polymer surfaces. Langmuir, 24 (2008) 10488-10493
Surface science | Supercritical adsorption | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,519 | [
"Condensed matter physics",
"Surface science"
] |
740,501 | https://en.wikipedia.org/wiki/Elastin | Elastin is a protein encoded by the ELN gene in humans and several other animals. Elastin is a key component in the extracellular matrix of gnathostomes (jawed vertebrates). It is highly elastic and present in connective tissue of the body to resume its shape after stretching or contracting. Elastin helps skin return to its original position whence poked or pinched. Elastin is also in important load-bearing tissue of vertebrates and used in places where storage of mechanical energy is required.
Function
The ELN gene encodes a protein that is one of the two components of elastic fibers. The encoded protein is rich in hydrophobic amino acids such as glycine and proline, which form mobile hydrophobic regions bounded by crosslinks between lysine residues. Multiple transcript variants encoding different isoforms have been found for this gene. Elastin's soluble precursor is tropoelastin.
Mechanism of elastic recoil
The characterization of disorder is consistent with an entropy-driven mechanism of elastic recoil. It is concluded that conformational disorder is a constitutive feature of elastin structure and function.
Clinical significance
Deletions and mutations in this gene are associated with supravalvular aortic stenosis (SVAS) and the autosomal dominant cutis laxa. Other associated defects in elastin include Marfan syndrome, emphysema caused by α1-antitrypsin deficiency, atherosclerosis, Buschke–Ollendorff syndrome, Menkes syndrome, pseudoxanthoma elasticum, and Williams syndrome.
Elastosis
Elastosis is the buildup of elastin in tissues, and is a form of degenerative disease. There are a multitude of causes, but the most common cause is actinic elastosis of the skin, also known as solar elastosis, which is caused by prolonged and excessive sun exposure, a process known as photoaging. Uncommon causes of skin elastosis include elastosis perforans serpiginosa, perforating calcific elastosis and linear focal elastosis.
Composition
In the body, elastin is usually associated with other proteins in connective tissues. Elastic fiber in the body is a mixture of amorphous elastin and fibrous fibrillin. Both components are primarily made of smaller amino acids such as glycine, valine, alanine, and proline. The total elastin ranges from 58 to 75% of the weight of the dry defatted artery in normal canine arteries. Comparison between fresh and digested tissues shows that, at 35% strain, a minimum of 48% of the arterial load is carried by elastin, and a minimum of 43% of the change in stiffness of arterial tissue is due to the change in elastin stiffness.
Tissue distribution
Elastin serves an important function in arteries as a medium for pressure wave propagation to help blood flow and is particularly abundant in large elastic blood vessels such as the aorta. Elastin is also very important in the lungs, elastic ligaments, elastic cartilage, the skin, and the bladder. It is present in jawed vertebrates.
Characteristics
Elastin is a very long-lived protein, with a half-life of over 78 years in humans.
Clinical research
The feasibility of using recombinant human tropoelastin to enable elastin fiber production to improve skin flexibility in wounds and scarring has been studied. After subcutaneous injections of recombinant human tropoelastin into fresh wounds, it was found that there was no improvement in scarring or in the flexibility of the eventual scar.
Biosynthesis
Tropoelastin precursors
Elastin is made by linking together many small soluble precursor tropoelastin protein molecules (50-70 kDa), to make the final massive, insoluble, durable complex. The unlinked tropoelastin molecules are not normally available in the cell, since they become crosslinked into elastin fibres immediately after their synthesis by the cell and export into the extracellular matrix.
Each tropoelastin consists of a string of 36 small domains, each weighing about 2 kDa in a random coil conformation. The protein consists of alternating hydrophobic and hydrophilic domains, which are encoded by separate exons, so that the domain structure of tropoelastin reflects the exon organization of the gene. The hydrophilic domains contain Lys-Ala (KA) and Lys-Pro (KP) motifs that are involved in crosslinking during the formation of mature elastin. In the KA domains, lysine residues occur as pairs or triplets separated by two or three alanine residues (e.g. AAAKAAKAA) whereas in KP domains the lysine residues are separated mainly by proline residues (e.g. KPLKP).
Aggregation
Tropoelastin aggregates at physiological temperature due to interactions between hydrophobic domains in a process called coacervation. This process is reversible and thermodynamically controlled and does not require protein cleavage. The coacervate is made insoluble by irreversible crosslinking.
Crosslinking
To make mature elastin fibres, the tropoelastin molecules are cross-linked via their lysine residues with desmosine and isodesmosine cross-linking molecules. The enzyme that performs the crosslinking is lysyl oxidase, using an in vivo Chichibabin pyridine synthesis reaction.
Molecular biology
In mammals, the genome only contains one gene for tropoelastin, called ELN. The human ELN gene is a 45 kb segment on chromosome 7, and has 34 exons interrupted by almost 700 introns, with the first exon being a signal peptide assigning its extracellular localization. The large number of introns suggests that genetic recombination may contribute to the instability of the gene, leading to diseases such as SVAS. The expression of tropoelastin mRNA is highly regulated under at least eight different transcription start sites.
Tissue specific variants of elastin are produced by alternative splicing of the tropoelastin gene. There are at least 11 known human tropoelastin isoforms. These isoforms are under developmental regulation, however there are minimal differences among tissues at the same developmental stage.
See also
Cutis laxa
Elastic fibers
Elastin receptor
Resilin: an invertebrate protein
Williams syndrome
References
Further reading
External links
GeneReviews/NIH/NCBI/UW entry on Williams or Williams-Beuren Syndrome
The Elastin Protein
Microfibril
Aging-related proteins
Biomaterials
Elastomers
Extracellular matrix proteins
Structural proteins | Elastin | [
"Physics",
"Chemistry",
"Biology"
] | 1,430 | [
"Biomaterials",
"Synthetic materials",
"Elastomers",
"Senescence",
"Materials",
"Aging-related proteins",
"Matter",
"Medical technology"
] |
740,818 | https://en.wikipedia.org/wiki/Siding%20%28construction%29 | Siding or wall cladding is the protective material attached to the exterior side of a wall of a house or other building. Along with the roof, it forms the first line of defense against the elements, most importantly sun, rain/snow, heat and cold, thus creating a stable, more comfortable environment on the interior side. The siding material and style also can enhance or detract from the building's beauty. There is a wide and expanding variety of materials to side with, both natural and artificial, each with its own benefits and drawbacks. Masonry walls as such do not require siding, but any wall can be sided. Walls that are internally framed, whether with wood, or steel I-beams, however, must always be sided.
Most siding consists of pieces of weather-resistant material that are smaller than the wall they cover, to allow for expansion and contraction of the materials due to moisture and temperature changes. There are various styles of joining the pieces, from board and batten, where the butt joints between panels are covered with a thin strip (usually 25 to 50 mm wide) of wood, to a variety of clapboard, also called lap siding, in which planks are laid horizontally across the wall starting from the bottom and building up, the board below overlapped by the board above it. These techniques of joinery are designed to prevent water from entering the walls. Siding that does not consist of pieces joined would include stucco, which is widely used in the Southwestern United States. It is a plaster-like siding and is applied over a lattice, just like plaster. However, because of the lack of joints, it eventually cracks and is susceptible to water damage. Rainscreen construction is used to improve siding's ability to keep walls dry.
Wood siding
Wood siding is very versatile in style and can be used on a wide variety of building structures. It can be painted or stained in any color palette desired.
Though installation and repair are relatively simple, wood siding requires more maintenance than other popular solutions, requiring treatment every four to nine years depending on the severity of the elements to which it is exposed. Ants and termites are a threat to many types of wood siding, requiring extra treatment and maintenance that can significantly increase the cost in some pest-infested areas.
Wood is a moderately renewable resource and is biodegradable. However, most paints and stains used to treat wood are not environmentally friendly and can be toxic. Wood siding can provide some minor insulation and structural properties as compared to thinner cladding materials.
Shingles
Wood shingles or irregular cedar "shake" siding was used in early New England construction, and was revived in Shingle Style and Queen Anne style architecture in the late 19th century.
Clapboards
Wood siding in overlapping horizontal rows or "courses" is called clapboard, weatherboard (British English), or bevel siding which is made with beveled boards, thin at the top edge and thick at the butt.
In colonial North America, Eastern white pine was the most common material. Wood siding can also be made of naturally rot-resistant woods such as redwood or cedar.
Drop siding
Jointed horizontal siding (also called "drop" siding or novelty siding) may be shiplapped or tongue and grooved (though less common). Drop siding comes in a wide variety of face finishes, including Dutch Lap (also called German or Cove Lap) and log siding (milled with curve).
Vertical boards
Vertical siding may have a cover over the joint: board and batten, popular in American wooden Carpenter Gothic houses; or less commonly behind the joint called batten and board or reversed board and batten.
Wooden sheet siding
Plywood sheet siding is sometimes used on inexpensive buildings, sometimes with grooves to imitate vertical shiplap siding. One example of such grooved plywood siding is the type called Texture 1–11, T1-11, or T111 ("tee-one-eleven"). There is also a product known as reverse board-and-batten (RBB) that looks similar but has deeper grooves. Some of these products may be thick enough and rated for structural applications if properly fastened to studs. Both T1-11 and RBB sheets are quick and easy to install as long as they are installed with compatible flashing at butt joints.
Stone siding
Slate shingles may be simple in form but many buildings with slate siding are highly decorative.
Plastic siding
Wood clapboard is often imitated using vinyl siding or uPVC weatherboarding. It is usually produced in units twice as high as clapboard. Plastic imitations of wood shingle and wood shakes also exist.
Since plastic siding is a manufactured product, it may come in unlimited color choices and styles. Historically vinyl sidings would fade, crack and buckle over time, requiring the siding to be replaced. However, newer vinyl options have improved and resist damage and wear better. Vinyl siding is sensitive to direct heat from grills, barbecues or other sources. Unlike wood, vinyl siding does not provide additional insulation for the building, unless an insulation material (e.g., foam) has been added to the product. It has also been criticized by some fire safety experts for its heat sensitivity. This sensitivity makes it easier for a house fire to jump to neighboring houses in comparison to materials such as brick, metal or masonry.
Vinyl siding has a potential environmental cost. While vinyl siding can be recycled, it cannot be burned (due to toxic dioxin gases that would be released). If dumped in a landfill, plastic siding does not break down quickly.
Vinyl siding is also considered one of the more unattractive siding choices by many. Although newer options and proper installation can eliminate this complaint, vinyl siding often has visible seam lines between panels and generally does not have the quality appearance of wood, brick, or masonry. The fading and cracking of older types of plastic siding compound this issue. In many areas of newer housing development, particularly in North America, entire neighbourhoods are often built with all houses clad in vinyl siding, giving an unappealing uniformity. Some cities now campaign for house developers to incorporate varied types of siding during construction.
Imitation brick or stone–asphalt siding
A predecessor to modern maintenance free sidings was asphalt brick siding. Asphalt impregnated panels (about ) give the appearance of brick or even stone. Many buildings have this siding, especially old sheds and garages. If the panels are straight and level and not damaged, the only indication that they are not real brick may be seen at the corner caps. Trademarked names included Insulbrick, Insulstone, Insulwood. Commonly used names now are faux brick, lick-it-and-stick-it brick, and ghetto brick. Often such siding is now covered with newer metal or plastic siding. Today thin panels of real brick are manufactured for veneer or siding.
Insulated siding
Insulated siding has emerged as a new siding category in recent years. Considered an improvement over vinyl siding, insulated siding is custom fit with expanded polystyrene foam (EPS) that is fused to the back of the siding, which fills the gap between the home and the siding.
Products provide environmental advantages by reducing energy use by up to 20 percent. On average, insulated siding products have an R-value of 3.96, triple that of other exterior cladding materials. Insulated siding products are typically Energy Star qualified, engineered in compliance with environmental standards set by the U.S. Department of Energy and the United States Environmental Protection Agency.
In addition to reducing energy consumption, insulated siding is a durable exterior product, designed to last more than 50 years, according to manufacturers. The foam provides rigidity for a more ding- and wind-resistant siding, maintaining a quality look for the life of the products. The foam backing also creates straighter lines when hung, providing a look more like that of wood siding, while remaining low maintenance.
Manufacturers report that insulated siding is permeable or "breathable", allowing water vapor to escape, which can protect against rot, mold and mildew, and help maintain healthy indoor air quality.
Metal siding
Metal siding comes in a variety of metals, styles, and colors. It is most often associated with modern, industrial, and retro buildings. Utilitarian buildings often use corrugated galvanized steel sheet siding or cladding, which often has a coloured vinyl finish. Corrugated aluminium cladding is also common where a more durable finish is required, while also being lightweight for easy shaping and installing making it a popular metal siding choice.
Formerly, imitation wood clapboard was made of aluminium (aluminium siding). That role is typically played by vinyl siding today. Aluminium siding is ideal for homes in coastal areas with much moisture and salt, since aluminium reacts with air to form aluminium oxide, an extremely hard coating that seals the aluminium surface from further degradation. In contrast, steel forms rust, which can weaken the structure of the material, and corrosion-resistant coatings for steel, such as zinc, sometimes fail around the edges as years pass. However, an advantage of steel siding can be its dent-resistance, which is excellent for regions with severe storms—especially if the area is prone to hail.
The first architectural application of aluminium was the mounting of a small grounding cap on the Washington Monument in 1884. Sheet-iron or steel clapboard siding units had been patented in 1903, and Sears, Roebuck & Company had been offering embossed steel siding in stone and brick patterns in their catalogues for several years by the 1930s. Alcoa began promoting the use of aluminium in architecture by the 1920s when it produced ornamental spandrel panels for the Cathedral of Learning and the Chrysler and Empire State Buildings in New York. The exterior of the A.O. Smith Corporation Building in Milwaukee was clad entirely in aluminium by 1930, and siding panels of Duralumin sheet from Alcoa sheathed an experimental exhibit house for the Architectural League of New York in 1931. Most architectural applications of aluminium in the 1930s were on a monumental scale, and it was another six years before it was put to use on residential construction.
In the first few years after World War II, manufacturers began developing and widely distributing aluminium siding. Among them, Indiana businessman Frank Hoess was credited with the invention of the configuration seen on modern aluminium siding. His experiments began in 1937 with steel siding in imitation of wooden clapboards. Other types of sheet metal and steel siding on the market at the time presented problems with warping, creating openings through which water could enter, introducing rust. Hoess remedied this problem through the use of a locking joint, which was formed by a small flap at the top of each panel that joined with a U-shaped flange on the lower edge of the previous panel, thus forming a watertight horizontal seam. After he had received a patent for his siding in 1939, Hoess produced a small housing development of about forty-four houses covered in his clapboard-style steel siding for blue-collar workers in Chicago. His operations were curtailed when war plants commandeered the industry. In 1946 Hoess allied with Metal Building Products of Detroit, a corporation that promoted and sold Hoess siding of Alcoa aluminium. Their product was used on large housing projects in the northeast and was purportedly the siding of choice for a 1947 Pennsylvania development, the first subdivision to solely use aluminium siding. Products such as unpainted aluminium panels, starter strips, corner pieces and specialized application clips were assembled in the Indiana shop of the Hoess brothers. Siding could be applied over conventional wooden clapboards, or it could be nailed to studs via special clips affixed to the top of each panel. Insulation was placed between studs. While the Hoess Brothers company continued to function for about twelve more years after the dissolution of the Metal Building Products Corporation in 1948, they were less successful than rising siding companies like Reynolds Metals.
Thatch siding
Thatch is an ancient and very widespread building material used on roofs and walls. Thatch siding is made with dry vegetation such as longstraw, water reeds, or combed wheat reed. The materials are overlapped and weaved in patterns designed to deflect and direct water.
Masonry siding
Stone and masonry veneer, sometimes considered siding, are varied and can accommodate a variety of styles—from formal to rustic. Though masonry can be painted or tinted to match many color palettes, it is most suited to neutral earth tones, and coatings such as roughcast and pebbledash. Masonry has excellent durability (over 100 years), and minimal maintenance is required. The primary drawback to masonry siding is the initial cost.
Precipitation can threaten the structure of buildings, so it is important that the siding will be able to withstand the weather conditions in the local region. For rainy regions, exterior insulation finishing systems (EIFS) have been known to suffer underlying wood rot problems with excessive moisture exposure.
The environmental impact of masonry depends on the type of material used. In general, concrete and concrete based materials are intensive energy materials to produce. However, the long durability and minimal maintenance of masonry sidings mean that less energy is required over the life of the siding.
Composite siding
Various composite materials are also used for siding: asphalt shingles, asbestos, fiber cement, aluminium (ACM), fiberboard, hardboard, etc. They may be in the form of shingles or boards, in which case they are sometimes called clapboard.
Composite sidings are available in many styles and can mimic the other siding options. Composite materials are ideal for achieving a certain style or 'look' that may not be suited to the local environment (e.g., corrugated aluminium siding in an area prone to severe storms; steel in coastal climates; wood siding in termite-infested regions).
Costs of composites tend to be lower than wood options, but vary widely as do installation, maintenance and repair requirements. Not surprisingly, the durability and environmental impact of composite sidings depends on the specific materials used in the manufacturing process.
Fiber cement siding is a class of composite siding that is usually made from a combination of cement, cellulose (wood), sand, and water. They are either coated or painted in the factory or installed and then painted after installation. Fiber cement is popular for its realistic look, durability, low-maintenance properties, fire resistance, and its lightweight properties compared to traditional wood siding. Composite siding products containing cellulose (wood fibers) have been shown to have problems with deterioration, delamination, or loss of coating adhesion in certain climates or under certain environmental conditions.
A younger class of non-wood synthetic siding has sprouted in the past 15 years. These products are usually made from a combination of non-wood materials such as polymeric resins, fiberglass, stone, sand, and fly ash and are chosen for their durability, curb appeal, and ease of maintenance. Given the newness of such technologies, product lifespan can only be estimated, varieties are limited, and distribution is sporadic.
See also
Sod house
Log building
Exterior insulation finishing system
Stucco
Masonry
Brick
Concrete masonry unit
Dry stone
References
Building materials
Types of wall
Building engineering
Timber framing | Siding (construction) | [
"Physics",
"Technology",
"Engineering"
] | 3,092 | [
"Structural engineering",
"Timber framing",
"Building engineering",
"Architecture",
"Structural system",
"Construction",
"Materials",
"Types of wall",
"Civil engineering",
"Matter",
"Building materials"
] |
740,871 | https://en.wikipedia.org/wiki/Anglesite | Anglesite is a lead sulfate mineral with the chemical formula PbSO4. It occurs as an oxidation product of primary lead sulfide ore, galena. Anglesite occurs as prismatic orthorhombic crystals and earthy masses, and is isomorphous with barite and celestine. It contains 74% of lead by mass and therefore has a high specific gravity of 6.3. Anglesite's color is white or gray with pale yellow streaks. It may be dark gray if impure.
It was first recognized as a mineral species by William Withering in 1783, who discovered it in the Parys copper-mine in Anglesey; the name anglesite, from this locality, was given by F. S. Beudant in 1832. The crystals from Anglesey, which were formerly found abundantly on a matrix of dull limonite, are small in size and simple in form, being usually bounded by four faces of a prism and four faces of a dome; they are brownish-yellow in colour owing to a stain of limonite. Crystals from some other localities, notably from Monteponi in Sardinia, are transparent and colourless, possessed of a brilliant adamantine lustre, and usually modified by numerous bright faces. The variety of combinations and habits presented by the crystals is very extensive, nearly two hundred distinct forms being figured by V. von Lang in his monograph of the species; without measurement of the angles the crystals are frequently difficult to decipher. There are distinct cleavages parallel to the faces of the prism (110) and the basal plane (001), but these are not so well developed as in the isomorphous minerals barite and celestite.
Anglesite is a mineral of secondary origin, having been formed by the oxidation of galena in the upper parts of mineral lodes where these have been affected by weathering processes. At Monteponi the crystals encrust cavities in glistening granular galena; and from Leadhills, in Scotland, pseudomorphs of anglesite after galena are known. At most localities it is found as isolated crystals in the lead-bearing lodes, but at some places, in Australia and Mexico, it occurs as large masses, and is then mined as an ore of lead.
Anglesite is sometimes used as a gemstone.
Gallery
See also
Lead(II) sulfate
References
Bibliography
Palache, P.; Berman H.; Frondel, C. (1960). "Dana's System of Mineralogy, Volume II: Halides, Nitrates, Borates, Carbonates, Sulfates, Phosphates, Arsenates, Tungstates, Molybdates, Etc. (Seventh Edition)" John Wiley and Sons, Inc., New York, pp. 420–424.
External links
Lead minerals
Sulfate minerals
Orthorhombic minerals
Minerals in space group 62
Luminescent minerals
Gemstones
Baryte group
Minerals described in 1832 | Anglesite | [
"Physics",
"Chemistry"
] | 605 | [
"Luminescence",
"Luminescent minerals",
"Materials",
"Gemstones",
"Matter"
] |
740,872 | https://en.wikipedia.org/wiki/Calcium%20aluminosilicate | Calcium aluminosilicate, an aluminosilicate compound with calcium cations, most typically has formula CaAl2Si2O8.
In minerals, as a feldspar, it can be found as anorthite, an end-member of the plagioclase series.
Uses
As a food additive, it is sometimes designated E556. The FDA recognizes it, at under 2% by weight, as an anti-caking agent for table salt, and as an ingredient in vanilla powder.
References
Aluminosilicates
Calcium compounds
E-number additives | Calcium aluminosilicate | [
"Chemistry"
] | 124 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
740,874 | https://en.wikipedia.org/wiki/Barium%20sulfate | Barium sulfate (or sulphate) is the inorganic compound with the chemical formula BaSO4. It is a white crystalline solid that is odorless and insoluble in water. It occurs in nature as the mineral barite, which is the main commercial source of barium and materials prepared from it. Its opaque white appearance and its high density are exploited in its main applications.
Uses
Drilling fluids
About 80% of the world's barium sulfate production, mostly purified mineral, is consumed as a component of oil well drilling fluid. It increases the density of the fluid, increasing the hydrostatic pressure in the well and reducing the chance of a blowout.
Radiocontrast agent
Barium sulfate in suspension is often used medically as a radiocontrast agent for X-ray imaging and other diagnostic procedures. It is most often used in imaging of the GI tract during what is colloquially known as a "barium meal". It is administered orally, or by enema, as a suspension of fine particles in a thick milk-like solution (often with sweetening and flavoring agents added). Although barium is a heavy metal, and its water-soluble compounds are often highly toxic, the low solubility of barium sulfate protects the patient from absorbing harmful amounts of the metal. Barium sulfate is also readily removed from the body, unlike Thorotrast, which it replaced. Due to the relatively high atomic number (Z = 56) of barium, its compounds absorb X-rays more strongly than compounds derived from lighter nuclei.
Pigment
The majority of synthetic barium sulfate is used as a component of white pigment for paints. In oil paint, barium sulfate is almost transparent, and is used as a filler or to modify consistency. One major manufacturer of artists' oil paint sells "permanent white" that contains a mixture of titanium white pigment (TiO2) and barium sulfate. The combination of barium sulfate and zinc sulfide (ZnS) is the inorganic pigment called lithopone. In photography it is used as a coating for certain photographic papers.
It is also used as a coating to diffuse light evenly.
Light-reflecting paint for cooling
Barium sulfate is highly reflective, of both visible and ultraviolet light. Researchers used it as an ingredient in paint that reflects 98.1% of solar radiation, allowing surfaces to which it has been applied to stay cooler in sunlit conditions. Commercially available white paints only reflect 80 - 90% of solar radiation. By using hexagonal nanoplatelet boron nitride, the thickness of a coat of this type of paint was reduced to 0.15 mm.
Paper brightener
A thin layer of barium sulfate called baryta is first coated on the base surface of most photographic paper to increase the reflectiveness of the image, with the first such paper introduced in 1884 in Germany. The light-sensitive silver halide emulsion is then coated over the baryta layer. The baryta coating limits the penetration of the emulsion into the fibers of the paper and makes the emulsion more even, resulting in more uniform blacks. Further coatings may then be present for fixing and protection of the image. Baryta has also been used to brighten papers intended for ink-jet printing.
Plastics filler
Barium sulfate is commonly used as a filler for plastics to increase the density of the polymer in vibrational mass damping applications. In polypropylene and polystyrene plastics, it is used as a filler in proportions up to 70%. It has an effect of increasing acid and alkali resistance and opacity. Such composites are also used as X-ray shielding materials due to their enhanced radio-opacity. In cases where machinability and weight are a concern, composites with high mass fraction (70–80%) of barium sulfate may be preferred to the more commonly used steel shields.
Barium sulfate can also be used to enhance the material properties of HDPE, although typically in relatively low concentrations, and often in combination with other fillers like calcium carbonate or titanium oxide.
Niche uses
Barium sulfate is used in soil testing. Tests for soil pH and other qualities of soil use colored indicators, and small particles (usually clay) from the soil can cloud the test mixture and make it hard to see the color of the indicator. Barium sulfate added to the mixture binds with these particles, making them heavier so they fall to the bottom, leaving a clearer solution.
In colorimetry, barium sulfate is used as a near-perfect diffuser when measuring light sources.
In metal casting, the moulds used are often coated with barium sulfate in order to prevent the molten metal from bonding with the mould.
It is also used in brake linings, anacoustic foams, powder coatings, and root canal filling.
Barium sulfate is an ingredient in the "rubber" pellets used by Chilean police. This together with silica helps the pellet attain a 96.5 Shore A hardness.
Catalyst support
Barium sulfate is used as a catalyst support when selectively hydrogenating functional groups that are sensitive to overreduction. With a low surface area, the contact time of the substrate with the catalyst is shorter and thus selectivity is achieved. Palladium on barium sulfate is also used as a catalyst in the Rosenmund reduction.
Pyrotechnics
As barium compounds emit a characteristic green light when heated at high temperature, barium salts are often used in green pyrotechnic formulas, although nitrate and chlorate salts are more common. Barium sulfate is commonly used as a component of "strobe" pyrotechnic compositions.
Copper industry
As barium sulfate has a high melting point and is insoluble in water, it is used as a release material in casting of copper anode plates. The anode plates are cast in copper molds, so to avoid the direct contact of the liquid copper with the solid copper mold, a suspension of fine barium sulfate powder in water is used as a coating on the mold surface. Thus, when the molten copper solidifies in form of an anode plate it can be easily released from its mold.
Radiometric measurements
Barium sulfate is sometimes used, besides polytetrafluoroethylene (PTFE), to coat the interior of integrating spheres due to the high reflectance of the material and near Lambertian characteristics.
3D printing of firearms
Barium sulfate is listed among the materials acceptable to the Bureau of Alcohol, Tobacco, Firearms and Explosives (BATFE) for the manufacturing of firearms and/or components that are made of plastic, to achieve compliance with the U.S. federal requirement that an X-ray machine must be able to accurately depict the shape of the plastic firearm or component.
Production
Almost all of the barium consumed commercially is obtained from barite, which is often highly impure. Barite is processed by thermo-chemical sulfate reduction (TSR), also known as carbothermal reduction (heating with coke) to give barium sulfide:
BaSO4 + 4 C → BaS + 4 CO
In contrast to barium sulfate, barium sulfide is soluble in water and readily converted to the oxide, carbonate, and halides. To produce highly pure barium sulfate, the sulfide or chloride is treated with sulfuric acid or sulfate salts:
BaS + H2SO4 → BaSO4 + H2S
Barium sulfate produced in this way is often called blanc fixe, which is French for "permanent white". Blanc fixe is the form of barium encountered in consumer products, such as paints.
In the laboratory barium sulfate is generated by combining solutions of barium ions and sulfate salts. Because barium sulfate is the least toxic salt of barium due to its insolubility, wastes containing barium salts are sometimes treated with sodium sulfate to immobilize (detoxify) the barium. Barium sulfate is one of the most insoluble salts of sulfate. Its low solubility is exploited in qualitative inorganic analysis as a test for Ba2+ ions, as well as for sulfate.
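The practical meaning of this insolubility can be illustrated with a short solubility calculation. The sketch below (Python) assumes a literature solubility product of roughly Ksp ≈ 1.1 × 10⁻¹⁰ at 25 °C; the exact value varies between sources.

```python
# Estimate the solubility of BaSO4 in pure water from an assumed
# solubility product Ksp ~ 1.1e-10 at 25 C (literature values vary).
KSP = 1.1e-10           # assumed solubility product of BaSO4
MOLAR_MASS = 233.39     # g/mol for BaSO4

molar_solubility = KSP ** 0.5                              # [Ba2+] = [SO4 2-] = s, so Ksp = s**2
mass_solubility = molar_solubility * MOLAR_MASS * 1000     # mg per litre

print(f"molar solubility ~ {molar_solubility:.2e} mol/L")  # ~1.0e-5 mol/L
print(f"mass solubility  ~ {mass_solubility:.1f} mg/L")    # roughly 2-3 mg/L
```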
Untreated raw materials, such as natural baryte formed under hydrothermal conditions, may contain many impurities, among them quartz or even amorphous silica.
History
Barium sulfate is reduced to barium sulfide by carbon. The accidental discovery of this conversion many centuries ago led to the discovery of the first synthetic phosphor. The sulfide, unlike the sulfate, is water-soluble.
During the early part of the 20th century, during the Japanese colonization period, hokutolite was found to exist naturally in the Beitou hot-springs area near Taipei City, Taiwan. Hokutolite is a radioactive mineral composed mostly of PbSO4 and BaSO4, but also containing traces of uranium, thorium and radium. The Japanese harvested these elements for industrial uses, and also developed dozens of “therapeutic hot-spring baths” in the area.
Safety aspects
Although soluble salts of barium are moderately toxic to humans, barium sulfate is nontoxic due to its insolubility. The most common means of inadvertent barium poisoning arises from the consumption of soluble barium salts mislabeled as BaSO4. In the Celobar incident (Brazil, 2003), nine patients died from improperly prepared radiocontrast agent. In regards to occupational exposures, the Occupational Safety and Health Administration set a permissible exposure limit at 15 mg/m3, while the National Institute for Occupational Safety and Health has a recommended exposure limit at 10 mg/m3. For respiratory exposures, both agencies have set an occupational exposure limit at 5 mg/m3.
See also
Baryte
List of inorganic pigments
References
Barium compounds
Sulfates
Inorganic pigments
Radiocontrast agents | Barium sulfate | [
"Chemistry"
] | 2,033 | [
"Inorganic pigments",
"Sulfates",
"Inorganic compounds",
"Salts"
] |
740,875 | https://en.wikipedia.org/wiki/Barium%20oxide | Barium oxide, also known as baria, is a white hygroscopic non-flammable compound with the formula BaO. It has a cubic structure and is used in cathode-ray tubes, crown glass, and catalysts. It is harmful to human skin, and if swallowed in large quantities it causes irritation; excessive quantities of barium oxide may lead to death.
It is prepared by heating barium carbonate with coke, carbon black or tar or by thermal decomposition of barium nitrate.
Uses
Barium oxide is used as a coating for hot cathodes, for example, those in cathode-ray tubes. It replaced lead(II) oxide in the production of certain kinds of glass such as optical crown glass. While lead oxide raised the refractive index, it also raised the dispersive power, which barium oxide does not alter. Barium oxide also has use as an ethoxylation catalyst in the reaction of ethylene oxide and alcohols, which takes place between 150 and 200 °C.
It is also a source of pure oxygen through heat cycling. It readily oxidises to BaO2 by formation of a peroxide ion. The complete peroxidation of BaO to BaO2 occurs at moderate temperatures, but the increased entropy of the O2 molecule at high temperatures means that BaO2 decomposes to O2 and BaO at about 1175 K.
The reaction was used as a large scale method to produce oxygen before air separation became the dominant method in the beginning of the 20th century. The method was named the Brin process, after its inventors.
Preparation
Barium oxide is made by heating barium carbonate at temperatures of 1000–1450 °C. It may also be prepared by thermal decomposition of barium nitrate. Likewise, it is often formed through the decomposition of other barium salts.
2 Ba + O2 → 2 BaO
BaCO3 → BaO + CO2
Safety issues
Barium oxide is an irritant. If it contacts the skin or the eyes or is inhaled it causes pain and redness. However, it is more dangerous when ingested. It can cause nausea and diarrhea, muscle paralysis, cardiac arrhythmia, and can cause death. If ingested, medical attention should be sought immediately.
Barium oxide should not be released environmentally; it is harmful to aquatic organisms.
See also
References
External links
International Chemical Safety Card 0778
Barium compounds
Oxides
Rock salt crystal structure | Barium oxide | [
"Chemistry"
] | 502 | [
"Oxides",
"Salts"
] |
740,885 | https://en.wikipedia.org/wiki/Calcium%20fluoride | Calcium fluoride is the inorganic compound of the elements calcium and fluorine with the formula CaF2. It is a white solid that is practically insoluble in water. It occurs as the mineral fluorite (also called fluorspar), which is often deeply coloured owing to impurities.
Chemical structure
The compound crystallizes in a cubic motif called the fluorite structure.
Ca2+ centres are eight-coordinate, being centred in a cube of eight F− centres. Each F− centre is coordinated to four Ca2+ centres in the shape of a tetrahedron. Although perfectly packed crystalline samples are colorless, the mineral is often deeply colored due to the presence of F-centers.
The same crystal structure is found in numerous ionic compounds with formula AB2, such as CeO2, cubic ZrO2, UO2, ThO2, and PuO2. In the corresponding anti-structure, called the antifluorite structure, anions and cations are swapped, such as Be2C.
Gas phase
The gas phase is noteworthy for failing the predictions of VSEPR theory; the molecule is not linear like MgF2, but bent with a bond angle of approximately 145°; the strontium and barium dihalides also have a bent geometry. It has been proposed that this is due to the fluoride ligands interacting with the electron core or the d-subshell of the calcium atom.
Preparation
The mineral fluorite is abundant, widespread, and mainly of interest as a precursor to HF. Thus, little motivation exists for the industrial production of CaF2. High purity CaF2 is produced by treating calcium carbonate with hydrofluoric acid:
CaCO3 + 2 HF → CaF2 + CO2 + H2O
Applications
Naturally occurring CaF2 is the principal source of hydrogen fluoride, a commodity chemical used to produce a wide range of materials.
Calcium fluoride in the fluorite state is of significant commercial importance as a fluoride source. Hydrogen fluoride is liberated from the mineral by the action of concentrated sulfuric acid:
CaF2 + H2SO4 → CaSO4(solid) + 2 HF
Others
Calcium fluoride is used to manufacture optical components such as windows and lenses, used in thermal imaging systems, spectroscopy, telescopes, and excimer lasers (used for photolithography in the form of a fused lens). It is transparent over a broad range from ultraviolet (UV) to infrared (IR) frequencies. Its low refractive index reduces the need for anti-reflection coatings. Its insolubility in water is convenient as well. It also allows much smaller wavelengths to pass through.
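The effect of the low refractive index on reflection losses can be illustrated with the normal-incidence Fresnel formula R = ((n − 1)/(n + 1))². In the sketch below (Python), the index of about 1.43 for calcium fluoride in the visible range and 1.52 for a typical crown glass are assumed, illustrative values rather than reference data.

```python
# Normal-incidence reflection loss per surface, R = ((n - 1) / (n + 1))**2,
# for assumed refractive indices (illustrative visible-range values).
def reflectance(n: float) -> float:
    return ((n - 1.0) / (n + 1.0)) ** 2

n_caf2 = 1.43    # assumed index of calcium fluoride
n_crown = 1.52   # assumed index of a typical crown glass, for comparison

print(f"CaF2        : {reflectance(n_caf2) * 100:.1f} % per surface")   # ~3.1 %
print(f"crown glass : {reflectance(n_crown) * 100:.1f} % per surface")  # ~4.3 %
```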
Doped calcium fluoride, like natural fluorite, exhibits thermoluminescence and is used in thermoluminescent dosimeters. It forms when fluorine combines with calcium.
Safety
CaF2 is classified as "not dangerous", although reacting it with sulfuric acid produces hydrofluoric acid, which is highly corrosive and toxic. With regards to inhalation, the NIOSH-recommended concentration of fluorine-containing dusts is 2.5 mg/m3 in air.
See also
List of laser types
Photolithography
Skeletal fluorosis
References
External links
NIST webbook thermochemistry data
Charles Townes on the history of lasers
National Pollutant Inventory - Fluoride and compounds fact sheet
Crystran Material Data
MSDS (University of Oxford)
Calcium compounds
Crystals
Fluorides
Fluorite
Alkaline earth metal halides
Optical materials
Fluorite crystal structure | Calcium fluoride | [
"Physics",
"Chemistry",
"Materials_science"
] | 746 | [
"Salts",
"Materials",
"Optical materials",
"Crystallography",
"Crystals",
"Fluorides",
"Matter"
] |
741,020 | https://en.wikipedia.org/wiki/Rotary%20encoder | A rotary encoder, also called a shaft encoder, is an electro-mechanical device that converts the angular position or motion of a shaft or axle to analog or digital output signals.
There are two main types of rotary encoder: absolute and incremental. The output of an absolute encoder indicates the current shaft position, making it an angle transducer. The output of an incremental encoder provides information about the motion of the shaft, which typically is processed elsewhere into information such as position, speed and distance.
Rotary encoders are used in a wide range of applications that require monitoring or control, or both, of mechanical systems, including industrial controls, robotics, photographic lenses, computer input devices such as optomechanical mice and trackballs, controlled stress rheometers, and rotating radar platforms.
Technologies
Mechanical: Also known as conductive encoders. A series of circumferential copper tracks etched onto a PCB is used to encode the information via contact brushes sensing the conductive areas. Mechanical encoders are economical but susceptible to mechanical wear. They are common in human interfaces such as digital multimeters.
Optical: This uses a light shining onto a photodiode through slits in a metal or glass disc. Reflective versions also exist. This is one of the most common technologies. Optical encoders are very sensitive to dust.
On-Axis Magnetic: This technology typically uses a specially magnetized 2 pole neodymium magnet attached to the motor shaft. Because it can be fixed to the end of the shaft, it can work with motors that only have 1 shaft extending out of the motor body. The accuracy can vary from a few degrees to under 1 degree. Resolutions can be as low as 1 degree or as high as 0.09 degree (4000 CPR, Count per Revolution). Poorly designed internal interpolation can cause output jitter, but this can be overcome with internal sample averaging.
Off-Axis Magnetic: This technology typically employs the use of rubber bonded ferrite magnets attached to a metal hub. This offers flexibility in design and low cost for custom applications. Due to the flexibility in many off axis encoder chips they can be programmed to accept any number of pole widths so the chip can be placed in any position required for the application. Magnetic encoders operate in harsh environments where optical encoders would fail to work.
Basic types
Absolute
An absolute encoder maintains position information when power is removed from the encoder. The position of the encoder is available immediately on applying power. The relationship between the encoder value and the physical position of the controlled machinery is set at assembly; the system does not need to return to a calibration point to maintain position accuracy.
An absolute encoder has multiple code rings with various binary weightings which provide a data word representing the absolute position of the encoder within one revolution. This type of encoder is often referred to as a parallel absolute encoder.
A multi-turn absolute rotary encoder includes additional code wheels and toothed wheels. A high-resolution wheel measures the fractional rotation, and lower-resolution geared code wheels record the number of whole revolutions of the shaft.
Incremental
An incremental encoder will immediately report changes in position, which is an essential capability in some applications. However, it does not report or keep track of absolute position. As a result, the mechanical system monitored by an incremental encoder may have to be homed (moved to a fixed reference point) to initialize absolute position measurements.
Absolute encoder
Absolute rotary encoder
Construction
Digital absolute encoders produce a unique digital code for each distinct angle of the shaft. They come in two basic types: optical and mechanical.
Mechanical absolute encoders
A metal disc containing a set of concentric rings of openings is fixed to an insulating disc, which is rigidly fixed to the shaft. A row of sliding contacts is fixed to a stationary object so that each contact wipes against the metal disc at a different distance from the shaft. As the disc rotates with the shaft, some of the contacts touch metal, while others fall in the gaps where the metal has been cut out. The metal sheet is connected to a source of electric current, and each contact is connected to a separate electrical sensor. The metal pattern is designed so that each possible position of the axle creates a unique binary code in which some of the contacts are connected to the current source (i.e. switched on) and others are not (i.e. switched off).
Brush-type contacts are susceptible to wear, and consequently mechanical encoders are typically found in low-speed applications such as manual volume or tuning controls in a radio receiver.
Optical absolute encoders
The optical encoder's disc is made of glass or plastic with transparent and opaque areas. A light source and photo detector array reads the optical pattern that results from the disc's position at any one time.
The Gray code is often used.
This code can be read by a controlling device, such as a microprocessor or microcontroller to determine the angle of the shaft.
The absolute analog type produces a unique dual analog code that can be translated into an absolute angle of the shaft.
Magnetic absolute encoders
The magnetic encoder uses a series of magnetic poles (2 or more) to represent the encoder position to a magnetic sensor (typically magneto-resistive or Hall Effect). The magnetic sensor reads the magnetic pole positions.
This code can be read by a controlling device, such as a microprocessor or microcontroller to determine the angle of the shaft, similar to an optical encoder.
The absolute analog type produces a unique dual analog code that can be translated into an absolute angle of the shaft (by using a special algorithm).
Due to the nature of recording magnetic effects, these encoders may be optimal to use in conditions where other types of encoders may fail due to dust or debris accumulation. Magnetic encoders are also relatively insensitive to vibrations, minor misalignment, or shocks.
Brushless motor commutation
Built-in rotary encoders are used to indicate the angle of the motor shaft in permanent magnet brushless motors, which are commonly used on CNC machines, robots, and other industrial equipment. In such cases, the encoder serves as a feedback device that plays a vital role in proper equipment operation. Brushless motors require electronic commutation, which often is implemented in part by using rotor magnets as a low-resolution absolute encoder (typically six or twelve pulses per revolution). The resulting shaft angle information is conveyed to the servo drive to enable it to energize the proper stator winding at any moment in time.
Capacitive absolute encoders
An asymmetrical shaped disc is rotated within the encoder. This disc will change the capacitance between two electrodes which can be measured and calculated back to an angular value.
Absolute multi-turn encoder
A multi-turn encoder can detect and store more than one revolution. The term absolute multi-turn encoder is generally used if the encoder will detect movements of its shaft even if the encoder is not provided with external power.
Battery-powered multi-turn encoder
This type of encoder uses a battery for retaining the counts across power cycles. It uses energy conserving electrical design to detect the movements.
Geared multi-turn encoder
These encoders use a train of gears to mechanically store the number of revolutions. The position of the single gears is detected with one of the above-mentioned technologies.
Self-powered multi-turn encoder
These encoders use the principle of energy harvesting to generate energy from the moving shaft. This principle, introduced in 2007, uses a Wiegand sensor to produce electricity sufficient to power the encoder and write the turns count to non-volatile memory.
Ways of encoding shaft position
Standard binary encoding
An example of a binary code, in an extremely simplified encoder with only three contacts, is shown below.
In general, where there are n contacts, the number of distinct positions of the shaft is 2n. In this example, n is 3, so there are 2³ or 8 positions.
In the above example, the contacts produce a standard binary count as the disc rotates. However, this has the drawback that if the disc stops between two adjacent sectors, or the contacts are not perfectly aligned, it can be impossible to determine the angle of the shaft. To illustrate this problem, consider what happens when the shaft angle changes from 179.9° to 180.1° (from sector 3 to sector 4). At some instant, according to the above table, the contact pattern changes from off-on-on to on-off-off. However, this is not what happens in reality. In a practical device, the contacts are never perfectly aligned, so each switches at a different moment. If contact 1 switches first, followed by contact 3 and then contact 2, for example, the actual sequence of codes is:
off-on-on (starting position)
on-on-on (first, contact 1 switches on)
on-on-off (next, contact 3 switches off)
on-off-off (finally, contact 2 switches off)
Now look at the sectors corresponding to these codes in the table. In order, they are 3, 7, 6 and then 4. So, from the sequence of codes produced, the shaft appears to have jumped from sector 3 to sector 7, then gone backwards to sector 6, then backwards again to sector 4, which is where we expected to find it. In many situations, this behaviour is undesirable and could cause the system to fail. For example, if the encoder were used in a robot arm, the controller would think that the arm was in the wrong position, and try to correct the error by turning it through 180°, perhaps causing damage to the arm.
Gray encoding
To avoid the above problem, Gray coding is used. This is a system of binary counting in which any two adjacent codes differ by only one bit position. For the three-contact example given above, the Gray-coded version would be as follows.
In this example, the transition from sector 3 to sector 4, like all other transitions, involves only one of the contacts changing its state from on to off or vice versa. This means that the sequence of incorrect codes shown in the previous illustration cannot happen.
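The reflected Gray code used for such discs can be generated from an ordinary binary count with a single exclusive-or operation, and decoded back again. The sketch below (Python) is purely illustrative; the helper names do not come from any particular encoder library.

```python
# Minimal sketch of binary <-> Gray code conversion for an n-bit encoder disc.
def binary_to_gray(value: int) -> int:
    """Reflected Gray code of an ordinary binary count."""
    return value ^ (value >> 1)

def gray_to_binary(gray: int) -> int:
    """Inverse conversion: recover the binary count from its Gray code."""
    binary = gray
    mask = gray >> 1
    while mask:
        binary ^= mask
        mask >>= 1
    return binary

# Adjacent positions of a 3-contact disc differ in exactly one bit,
# including the wrap-around from the last sector back to sector 0.
codes = [binary_to_gray(sector) for sector in range(8)]
for sector in range(8):
    changed_bits = bin(codes[sector] ^ codes[(sector + 1) % 8]).count("1")
    assert changed_bits == 1
    assert gray_to_binary(codes[sector]) == sector
```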
Single-track Gray encoding
If the designer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be omitted, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder with a single ring.
It is possible to arrange several sensors around a single track (ring) so that consecutive positions differ at only a single sensor; the result is the single-track Gray code encoder.
Data output methods
Depending on the device and manufacturer, an absolute encoder may use any of several signal types and communication protocols to transmit data, including parallel binary, analog signals (current or voltage), and serial bus systems such as SSI, BiSS, Heidenhain EnDat, Sick-Stegmann Hiperface, DeviceNet, Modbus, Profibus, CANopen and EtherCAT, which typically employ Ethernet or RS-422/RS-485 physical layers.
Incremental encoder
The rotary incremental encoder is the most widely used of all rotary encoders due to its ability to provide real-time position information. The measurement resolution of an incremental encoder is not limited in any way by its two internal, incremental movement sensors; incremental encoders with up to 10,000 counts per revolution, or more, are commercially available.
Rotary incremental encoders report position changes without being prompted to do so, and they convey this information at data rates which are orders of magnitude faster than those of most types of absolute shaft encoders. Because of this, incremental encoders are commonly used in applications that require precise measurement of position and velocity.
A rotary incremental encoder may use mechanical, optical or magnetic sensors to detect rotational position changes. The mechanical type is commonly employed as a manually operated "digital potentiometer" control on electronic equipment. For example, modern home and car stereos typically use mechanical rotary encoders as volume controls. Encoders with mechanical sensors require switch debouncing and consequently are limited in the rotational speeds they can handle. The optical type is used when higher speeds are encountered or a higher degree of precision is required.
A rotary incremental encoder has two output signals, A and B, which issue a periodic digital waveform in quadrature when the encoder shaft rotates. This is similar to sine encoders, which output sinusoidal waveforms in quadrature (i.e., sine and cosine), thus combining the characteristics of an encoder and a resolver. The waveform frequency indicates the speed of shaft rotation and the number of pulses indicates the distance moved, whereas the A-B phase relationship indicates the direction of rotation.
Some rotary incremental encoders have an additional "index" output (typically labeled Z), which emits a pulse when the shaft passes through a particular angle. Once every rotation, the Z signal is asserted, typically always at the same angle, until the next AB state change. This is commonly used in radar systems and other applications that require a registration signal when the encoder shaft is located at a particular reference angle.
Unlike absolute encoders, an incremental encoder does not keep track of, nor do its outputs indicate the absolute position of the mechanical system to which it is attached. Consequently, to determine the absolute position at any particular moment, it is necessary to "track" the absolute position with an incremental encoder interface which typically includes a bidirectional electronic counter.
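A common incremental-encoder interface performs so-called x4 quadrature decoding, in which every valid transition of the two-bit (A, B) state moves a bidirectional counter by one step, and a transition in which both bits change at once is flagged as an error. The sketch below (Python) is a minimal illustration of one such scheme; the class and table names are our own, not taken from any particular device.

```python
# Minimal sketch of "x4" quadrature decoding.  Each valid change of the
# two-bit (A, B) state moves the position counter by +1 or -1; a change of
# both bits at once means a sample was missed and is counted as an error.
TRANSITION = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

class QuadratureDecoder:
    def __init__(self, initial_a: int = 0, initial_b: int = 0):
        self.state = (initial_a << 1) | initial_b
        self.position = 0   # counts quarter-cycles of the A/B waveform
        self.errors = 0

    def update(self, a: int, b: int) -> int:
        new_state = (a << 1) | b
        if new_state != self.state:
            step = TRANSITION.get((self.state, new_state))
            if step is None:
                self.errors += 1            # both channels changed at once
            else:
                self.position += step
            self.state = new_state
        return self.position

decoder = QuadratureDecoder()
for a, b in [(0, 1), (1, 1), (1, 0), (0, 0)]:   # one full electrical cycle
    decoder.update(a, b)
print(decoder.position)    # -> 4 counts for one cycle in one direction
```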
Inexpensive incremental encoders are used in mechanical computer mice. Typically, two encoders are used: one to sense left-right motion and another to sense forward-backward motion.
Rotary (Angle) Pulse Encoder
A Rotary (Angle) Pulse Encoder has an SPDT switch for each direction, each of which operates only in its direction of travel. Each detent turned in one direction toggles only the SPDT switch associated with that direction.
Other pulse-output rotary encoders
Rotary encoders with a single output (i.e. tachometers) cannot be used to sense direction of motion but are suitable for measuring speed and for measuring position when the direction of travel is constant. In certain applications they may be used to measure distance of motion (e.g. feet of movement).
See also
Analogue devices that perform a similar function include the synchro, the resolver, the rotary variable differential transformer (RVDT), and the rotary potentiometer.
A linear encoder is similar to a rotary encoder, but measures position or motion in a straight line, rather than rotation. Linear encoders often use incremental encoding and are used in many machine tools.
Rotary switch
References
Further reading
(NB. Supersedes MIL-HDBK-231(AS) (1970-07-01).)
External links
"Choosing a code wheel: A detailed look at how encoders work" article by Steve Trahey 2008-03-25 describes "rotary encoders".
"Encoders provide a sense of place" article by Jack Ganssle 2005-07-19 describes "nonlinear encoders".
"Robot Encoders".
Introductory Tutorial on PWM and Quadrature Encoding.
Revotics - Understanding Quadrature Encoding - Covers details of rotary and quadrature encoding with a focus on robotic applications.
How Rotary Encoder Works - Video explanation how rotary encoder works, plus how to use it with an Arduino microcontroller.
Electromechanical engineering
Position sensors | Rotary encoder | [
"Engineering"
] | 3,440 | [
"Electrical engineering",
"Electromechanical engineering",
"Mechanical engineering by discipline"
] |
741,036 | https://en.wikipedia.org/wiki/Newton%20scale | The Newton scale is a temperature scale devised by Isaac Newton in 1701. He called his device a "thermometer", but he did not use the term "temperature", speaking of "degrees of heat" instead. Newton's publication represents the first attempt to introduce an objective way of measuring (what would come to be called) temperature (alongside the Rømer scale published at nearly the same time). Newton likely developed his scale for practical use rather than for a theoretical interest in thermodynamics; he had been appointed Warden of the Mint in 1695, and Master of the Mint in 1699, and his interest in the melting points of metals was likely inspired by his duties in connection with the Royal Mint.
Newton used linseed oil as the thermometric material and measured its change of volume against his reference points. He set as 0 on his scale "the heat of air in winter at which water begins to freeze", reminiscent of the zero point of the modern Celsius scale (i.e. 0 °N = 0 °C), but he gave no single second reference point; he does give the "heat at which water begins to boil" as 33, but this is not a defining reference. The values for body temperature and for the freezing and boiling points of water suggest a conversion factor between the Newton and Celsius scales of between about 3.08 (12 °N = 37 °C) and 3.03 (33 °N = 100 °C), but since the objectively verifiable reference points given result in irreconcilable data (especially for high temperatures), no unambiguous "conversion" between the scales is possible.
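The two conflicting scale factors implied by these reference points can be reproduced with a few lines of arithmetic; the sketch below (Python) simply restates the figures given above.

```python
# The two scale factors implied by Newton's reference points (values from the text).
references = {
    "body temperature": (12, 37),    # degrees Newton, degrees Celsius
    "boiling water":    (33, 100),
}
for name, (deg_newton, deg_celsius) in references.items():
    print(f"{name:16s}: {deg_celsius / deg_newton:.2f} deg C per deg N")
# Prints about 3.08 and 3.03 -- the factors disagree, so no exact
# conversion between the Newton and Celsius scales exists.
```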
The linseed thermometer could be used up to the melting point of tin. For higher temperatures, Newton used a "sufficiently thick piece of iron" that was heated until red-hot and then exposed to the wind. On this piece of iron, samples of metals and alloys were placed, which melted and then again solidified on cooling. Newton then determined the "degrees of heat" of these samples based on the solidification times, and tied this scale to the linseed one by measuring the melting point of tin in both systems. This second system of measurement led Newton to derive his law of convective heat transfer, also known as Newton's law of cooling.
In his publication, Newton gives 18 reference points (in addition to a range of meteorological air temperatures), which he labels by two systems, one in arithmetic progression and the other in geometric progression, as follows:
See also
Outline of metrology and measurement
Comparison of temperature scales
List of obsolete units of measurement
References
External links
Photo of an antique thermometer backing board c. 1758, marked in four scales; the first is Newton's.
Obsolete units of measurement
Scales of temperature | Newton scale | [
"Physics",
"Mathematics"
] | 580 | [
"Scales of temperature",
"Obsolete units of measurement",
"Physical quantities",
"Quantity",
"Units of measurement"
] |
741,104 | https://en.wikipedia.org/wiki/Intelligent%20control | Intelligent control is a class of control techniques that use various artificial intelligence computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, reinforcement learning, evolutionary computation and genetic algorithms.
Overview
Intelligent control can be divided into the following major sub-domains:
Neural network control
Machine learning control
Reinforcement learning
Bayesian control
Fuzzy control
Neuro-fuzzy control
Expert Systems
Genetic control
New control techniques are created continuously as new models of intelligent behavior are created and computational methods developed to support them.
Neural network controller
Neural networks have been used to solve problems in almost all spheres of science and technology. Neural network control basically involves two steps:
System identification
Control
It has been shown that a feedforward network with nonlinear, continuous and differentiable activation functions have universal approximation capability. Recurrent networks have also been used for system identification. Given, a set of input-output data pairs, system identification aims to form a mapping among these data pairs. Such a network is supposed to capture the dynamics of a system. For the control part, deep reinforcement learning has shown its ability to control complex systems.
Bayesian controllers
Bayesian probability has produced a number of algorithms that are in common use in many advanced control systems, serving as state space estimators of some variables that are used in the controller.
The Kalman filter and the Particle filter are two examples of popular Bayesian control components. The Bayesian approach to controller design often requires a significant effort in deriving the so-called system model and measurement model, which are the mathematical relationships linking the state variables to the sensor measurements available in the controlled system. In this respect, it is very closely linked to the system-theoretic approach to control design.
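As a minimal illustration of such a state estimator, the sketch below (Python) implements a scalar Kalman filter with a constant-state system model; the process and measurement noise variances, and the measurement sequence, are illustrative assumptions rather than data from any real plant.

```python
# Minimal sketch of a scalar (1-D) Kalman filter as a state estimator.
# System model: the true state is assumed constant up to process noise.
# The noise variances q, r and the measurement list are illustrative.
def kalman_step(x_est, p_est, measurement, q=1e-3, r=1e-1):
    # Predict step (constant-state model)
    x_pred, p_pred = x_est, p_est + q
    # Update step with the new sensor reading
    gain = p_pred / (p_pred + r)               # Kalman gain
    x_new = x_pred + gain * (measurement - x_pred)
    p_new = (1.0 - gain) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0                                # initial estimate and variance
for z in [0.9, 1.1, 1.05, 0.95, 1.02]:         # noisy readings of a value near 1.0
    x, p = kalman_step(x, p, z)
print(f"estimated state = {x:.3f}, variance = {p:.4f}")
```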
See also
Action selection
AI effect
Applications of artificial intelligence
Artificial intelligence systems integration
Function approximation
Hybrid intelligent system
Lists
List of emerging technologies
Outline of artificial intelligence
References
Further reading
Jeffrey T. Spooner, Manfredi Maggiore, Raúl Ordóñez, and Kevin M. Passino, Stable Adaptive Control and Estimation for Nonlinear Systems: Neural and Fuzzy Approximator Techniques, John Wiley & Sons, NY;
Control theory
Artificial intelligence
Applications of Bayesian inference | Intelligent control | [
"Mathematics"
] | 429 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
741,114 | https://en.wikipedia.org/wiki/Dalton%20%28program%29 | Dalton (named after John Dalton) is an ab initio quantum chemistry computer program suite, consisting of the Dalton and LSDalton programs. The Dalton suite is capable of calculating various molecular properties using the Hartree–Fock, MP2, MCSCF and coupled cluster theories. Version 2.0 of DALTON added support for density functional theory calculations. There are many authors, including Trygve Helgaker, Poul Jørgensen and Kenneth Ruud.
Dalton switched to the open source GNU LGPL licence in August 2017.
See also
Quantum chemistry software
Centre for Theoretical and Computational Chemistry
External links
Dalton project homepage
References
Computational chemistry software | Dalton (program) | [
"Chemistry"
] | 135 | [
"Computational chemistry software",
"Chemistry software",
"Theoretical chemistry stubs",
"Computational chemistry stubs",
"Computational chemistry",
"Physical chemistry stubs"
] |
742,288 | https://en.wikipedia.org/wiki/Faraday%27s%20law%20of%20induction | Faraday's law of induction (or simply Faraday's law) is a law of electromagnetism predicting how a magnetic field will interact with an electric circuit to produce an electromotive force (emf). This phenomenon, known as electromagnetic induction, is the fundamental operating principle of transformers, inductors, and many types of electric motors, generators and solenoids.
The Maxwell–Faraday equation (listed as one of Maxwell's equations) describes the fact that a spatially varying (and also possibly time-varying, depending on how a magnetic field varies in time) electric field always accompanies a time-varying magnetic field, while Faraday's law states that emf (electromagnetic work done on a unit charge when it has traveled one round of a conductive loop) appears on a conductive loop when the magnetic flux through the surface enclosed by the loop varies in time.
Once Faraday's law had been discovered, one aspect of it (transformer emf) was formulated as the Maxwell–Faraday equation. The equation of Faraday's law can be derived by the Maxwell–Faraday equation (describing transformer emf) and the Lorentz force (describing motional emf). The integral form of the Maxwell–Faraday equation describes only the transformer emf, while the equation of Faraday's law describes both the transformer emf and the motional emf.
History
Electromagnetic induction was discovered independently by Michael Faraday in 1831 and Joseph Henry in 1832. Faraday was the first to publish the results of his experiments.
Faraday's notebook on August 29, 1831 describes an experimental demonstration of electromagnetic induction (see figure) that wraps two wires around opposite sides of an iron ring (like a modern toroidal transformer). His assessment of newly-discovered properties of electromagnets suggested that when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. Indeed, a galvanometer's needle measured a transient current (which he called a "wave of electricity") on the right side's wire when he connected or disconnected the left side's wire to a battery. This induction was due to the change in magnetic flux that occurred when the battery was connected and disconnected. His notebook entry also noted that fewer wraps for the battery side resulted in a greater disturbance of the galvanometer's needle.
Within two months, Faraday had found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk").
Michael Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically. An exception was James Clerk Maxwell, who in 1861–62 used Faraday's ideas as the basis of his quantitative electromagnetic theory. In Maxwell's papers, the time-varying aspect of electromagnetic induction is expressed as a differential equation which Oliver Heaviside referred to as Faraday's law even though it is different from the original version of Faraday's law, and does not describe motional emf. Heaviside's version (see Maxwell–Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations.
Lenz's law, formulated by Emil Lenz in 1834, describes "flux through the circuit", and gives the direction of the induced emf and current resulting from electromagnetic induction (elaborated upon in the examples below).
According to Albert Einstein, much of the groundwork for his theory of special relativity was laid by this law of induction, presented by Faraday in 1834.
Faraday's law
The most widespread version of Faraday's law states:
Mathematical statement
For a loop of wire in a magnetic field, the magnetic flux ΦB is defined for any surface Σ whose boundary is the given loop. Since the wire loop may be moving, we write Σ(t) for the surface. The magnetic flux is the surface integral:
ΦB = ∬Σ(t) B(t) · dA
where dA is an element of area vector of the moving surface Σ(t), B is the magnetic field, and B · dA is a vector dot product representing the element of flux through dA. In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic field lines that pass through the loop.
When the flux changes—because B changes, or because the wire loop is moved or deformed, or both—Faraday's law of induction says that the wire loop acquires an emf ℰ, defined as the energy available from a unit charge that has traveled once around the wire loop. (Although some sources state the definition differently, this expression was chosen for compatibility with the equations of special relativity.) Equivalently, it is the voltage that would be measured by cutting the wire to create an open circuit, and attaching a voltmeter to the leads.
Faraday's law states that the emf is also given by the rate of change of the magnetic flux:
ℰ = −dΦB/dt
where ℰ is the electromotive force (emf) and ΦB is the magnetic flux.
The direction of the electromotive force is given by Lenz's law.
The laws of induction of electric currents in mathematical form was established by Franz Ernst Neumann in 1845.
Faraday's law contains the information about the relationships between both the magnitudes and the directions of its variables. However, the relationships between the directions are not explicit; they are hidden in the mathematical formula.
It is possible to find out the direction of the electromotive force (emf) directly from Faraday's law, without invoking Lenz's law. A left hand rule helps to do this, as follows:
Align the curved fingers of the left hand with the loop.
Stretch your thumb. The stretched thumb indicates the direction of n, the normal to the area enclosed by the loop.
Find the sign of ΔΦB, the change in flux. Determine the initial and final fluxes (whose difference is ΔΦB) with respect to the normal n, as indicated by the stretched thumb.
If the change in flux, ΔΦB, is positive, the curved fingers show the direction of the electromotive force.
If ΔΦB is negative, the direction of the electromotive force is opposite to the direction of the curved fingers.
For a tightly wound coil of wire, composed of N identical turns, each with the same ΦB, Faraday's law of induction states that
ℰ = −N dΦB/dt
where N is the number of turns of wire and ΦB is the magnetic flux through a single loop.
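As a short numerical illustration of this formula, the sketch below (Python) computes the average emf induced in a coil while a uniform field is ramped linearly; the coil geometry and field values are illustrative assumptions, not data from any particular device.

```python
# Minimal numerical sketch of Faraday's law for an N-turn coil:
# emf = -N * dPhi/dt, with the flux Phi = B * A through each turn.
# All numbers below are illustrative assumptions, not measured data.
N_TURNS = 100
AREA = 0.01                  # m^2, cross-sectional area of one turn
B_START, B_END = 0.0, 0.5    # T, uniform field before and after the ramp
RAMP_TIME = 0.1              # s, duration of the linear field ramp

d_flux = (B_END - B_START) * AREA        # change of flux through one turn, in webers
emf = -N_TURNS * d_flux / RAMP_TIME      # average induced emf, in volts
print(f"average induced emf = {emf:.2f} V")   # -> -5.00 V
```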
Maxwell–Faraday equation
The Maxwell–Faraday equation states that a time-varying magnetic field always accompanies a spatially varying (also possibly time-varying), non-conservative electric field, and vice versa. The Maxwell–Faraday equation is
∇ × E = −∂B/∂t
(in SI units) where ∇× is the curl operator and again E is the electric field and B is the magnetic field. These fields can generally be functions of position r and time t.
The Maxwell–Faraday equation is one of the four Maxwell's equations, and therefore plays a fundamental role in the theory of classical electromagnetism. It can also be written in an integral form by the Kelvin–Stokes theorem, thereby reproducing Faraday's law:
∮∂Σ E · dl = −∬Σ (∂B/∂t) · dA
where Σ is a surface bounded by the closed contour ∂Σ, dl is an infinitesimal vector element of the contour ∂Σ, and dA is an infinitesimal vector element of surface Σ. Its direction is orthogonal to that surface patch, the magnitude is the area of an infinitesimal patch of surface.
Both dl and dA have a sign ambiguity; to get the correct sign, the right-hand rule is used, as explained in the article Kelvin–Stokes theorem. For a planar surface Σ, a positive path element dl of curve ∂Σ is defined by the right-hand rule as one that points with the fingers of the right hand when the thumb points in the direction of the normal n to the surface Σ.
The line integral around ∂Σ is called circulation. A nonzero circulation of E is different from the behavior of the electric field generated by static charges. A charge-generated E-field can be expressed as the gradient of a scalar field that is a solution to Poisson's equation, and has a zero path integral. See gradient theorem.
The integral equation is true for any path ∂Σ through space, and any surface Σ for which that path is a boundary.
If the surface Σ is not changing in time, the equation can be rewritten:
∮∂Σ E · dl = −(d/dt) ∬Σ B · dA
The surface integral at the right-hand side is the explicit expression for the magnetic flux ΦB through Σ.
The electric vector field induced by a changing magnetic flux, the solenoidal component of the overall electric field, can be approximated in the non-relativistic limit by the volume integral equation
Es(r, t) ≈ −(1/4π) ∭V [(∂B(r′, t)/∂t) × (r − r′)] / |r − r′|³ d³r′
Proof
The four Maxwell's equations (including the Maxwell–Faraday equation), along with Lorentz force law, are a sufficient foundation to derive everything in classical electromagnetism. Therefore, it is possible to "prove" Faraday's law starting with these equations.
The starting point is the time-derivative of flux through an arbitrary surface (that can be moved or deformed) in space:
(by definition). This total time derivative can be evaluated and simplified with the help of the Maxwell–Faraday equation and some vector identities; the details are in the box below:
The result is:
where is the boundary (loop) of the surface , and is the velocity of a part of the boundary.
In the case of a conductive loop, emf (Electromotive Force) is the electromagnetic work done on a unit charge when it has traveled around the loop once, and this work is done by the Lorentz force. Therefore, emf is expressed as
where is emf and is the unit charge velocity.
In a macroscopic view, for charges on a segment of the loop, consists of two components in average; one is the velocity of the charge along the segment , and the other is the velocity of the segment (the loop is deformed or moved). does not contribute to the work done on the charge since the direction of is same to the direction of . Mathematically,
since is perpendicular to as and are along the same direction. Now we can see that, for the conductive loop, emf is same to the time-derivative of the magnetic flux through the loop except for the sign on it. Therefore, we now reach the equation of Faraday's law (for the conductive loop) as
where . With breaking this integral, is for the transformer emf (due to a time-varying magnetic field) and is for the motional emf (due to the magnetic Lorentz force on charges by the motion or deformation of the loop in the magnetic field).
Exceptions
It is tempting to generalize Faraday's law to state: If is any arbitrary closed loop in space whatsoever, then the total time derivative of magnetic flux through equals the emf around . This statement, however, is not always true and the reason is not just from the obvious reason that emf is undefined in empty space when no conductor is present. As noted in the previous section, Faraday's law is not guaranteed to work unless the velocity of the abstract curve matches the actual velocity of the material conducting the electricity. The two examples illustrated below show that one often obtains incorrect results when the motion of is divorced from the motion of the material.
One can analyze examples like these by taking care that the path moves with the same velocity as the material. Alternatively, one can always correctly calculate the emf by combining the Lorentz force law with the Maxwell–Faraday equation:
ℰ = ∮∂Σ (E + vm × B) · dl
where "it is very important to notice that (1) [vm] is the velocity of the conductor ... not the velocity of the path element dl and (2) in general, the partial derivative with respect to time cannot be moved outside the integral since the area is a function of time."
Faraday's law and relativity
Two phenomena
Faraday's law is a single equation describing two different phenomena: the motional emf generated by a magnetic force on a moving wire (see the Lorentz force), and the transformer emf generated by an electric force due to a changing magnetic field (described by the Maxwell–Faraday equation).
James Clerk Maxwell drew attention to this fact in his 1861 paper On Physical Lines of Force. In the latter half of Part II of that paper, Maxwell gives a separate physical explanation for each of the two phenomena.
A reference to these two aspects of electromagnetic induction is made in some modern textbooks. As Richard Feynman states:
Explanation based on four-dimensional formalism
In the general case, explanation of the motional emf appearance by action of the magnetic force on the charges in the moving wire or in the circuit changing its area is unsatisfactory. As a matter of fact, the charges in the wire or in the circuit could be completely absent, will then the electromagnetic induction effect disappear in this case? This situation is analyzed in the article, in which, when writing the integral equations of the electromagnetic field in a four-dimensional covariant form, in the Faraday’s law the total time derivative of the magnetic flux through the circuit appears instead of the partial time derivative. Thus, electromagnetic induction appears either when the magnetic field changes over time or when the area of the circuit changes. From the physical point of view, it is better to speak not about the induction emf, but about the induced electric field strength , that occurs in the circuit when the magnetic flux changes. In this case, the contribution to from the change in the magnetic field is made through the term , where is the vector potential. If the circuit area is changing in case of the constant magnetic field, then some part of the circuit is inevitably moving, and the electric field emerges in this part of the circuit in the comoving reference frame K’ as a result of the Lorentz transformation of the magnetic field , present in the stationary reference frame K, which passes through the circuit. The presence of the field in K’ is considered as a result of the induction effect in the moving circuit, regardless of whether the charges are present in the circuit or not. In the conducting circuit, the field causes motion of the charges. In the reference frame K, it looks like appearance of emf of the induction , the gradient of which in the form of , taken along the circuit, seems to generate the field .
Einstein's view
Reflection on this apparent dichotomy was one of the principal paths that led Albert Einstein to develop special relativity:
See also
References
Further reading
External links
A simple interactive tutorial on electromagnetic induction (click and drag magnet back and forth) National High Magnetic Field Laboratory
Roberto Vega. Induction: Faraday's law and Lenz's law – Highly animated lecture, with sound effects, Electricity and Magnetism course page
Notes from Physics and Astronomy HyperPhysics at Georgia State University
Tankersley and Mosca: Introducing Faraday's law
A free simulation on motional emf
Faraday's law of electromagnetic induction
Michael Faraday
Maxwell's equations | Faraday's law of induction | [
"Physics",
"Mathematics"
] | 3,130 | [
"Electrodynamics",
"Maxwell's equations",
"Equations of physics",
"Dynamical systems"
] |
742,319 | https://en.wikipedia.org/wiki/Faraday%27s%20laws%20of%20electrolysis | Faraday's laws of electrolysis are quantitative relationships based on the electrochemical research published by Michael Faraday in 1833.
First law
Michael Faraday reported that the mass (m) of a substance deposited or liberated at an electrode is directly proportional to the charge (Q, for which the SI unit is the ampere-second or coulomb):
m = ZQ
Here, the constant of proportionality, Z, is called the electro-chemical equivalent (ECE) of the substance. Thus, the ECE can be defined as the mass of the substance deposited or liberated per unit charge.
Second law
Faraday discovered that when the same amount of electric current is passed through different electrolytes connected in series, the masses of the substances deposited or liberated at the electrodes are directly proportional to their respective chemical equivalent/equivalent weight (E). This turns out to be the molar mass (M) divided by the valence (v):
E = M/v
Derivation
A monovalent ion requires one electron for discharge, a divalent ion requires two electrons for discharge and so on. Thus, if x electrons flow, x/v atoms are discharged.
Thus, the mass m discharged is
m = (xM)/(vNA) = (QM)/(vF)
where
NA is the Avogadro constant;
Q = xe is the total charge, equal to the number of electrons (x) times the elementary charge e;
F = NAe is the Faraday constant.
Mathematical form
Faraday's laws can be summarized by
m = (Q/F)·(M/z)
where M is the molar mass of the substance (usually given in SI units of grams per mole) and z is the valency of the ions.
For Faraday's first law, M, F, and z are constants; thus, the larger the value of Q, the larger m will be.
For Faraday's second law, Q and F are constants; thus, the larger the value of M/z (the equivalent weight), the larger m will be.
In the simple case of constant-current electrolysis, Q = It, leading to
m = (It/F)·(M/z)
and then to
n = It/(zF)
where:
n is the amount of substance ("number of moles") liberated: n = m/M
t is the total time the constant current was applied.
For the case of an alloy whose constituents have different valencies, we have
m = (Q/F) / Σi (wi zi/Mi)
where wi represents the mass fraction of the i-th element.
In the more complicated case of a variable electric current, the total charge Q is the electric current I(τ) integrated over time τ:
Q = ∫₀ᵗ I(τ) dτ
Here t is the total electrolysis time.
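As a worked illustration of the constant-current case, the sketch below (Python) evaluates m = (It/F)·(M/z) for an assumed copper-plating scenario; the current, time and choice of metal are illustrative, not reference data.

```python
# Constant-current form of Faraday's laws: m = (I * t / F) * (M / z).
# The plating current, time and metal below are illustrative assumptions.
FARADAY_CONSTANT = 96485.33   # C/mol

def deposited_mass(current_a: float, time_s: float, molar_mass: float, valence: int) -> float:
    """Mass (in grams) deposited by a constant current."""
    return (current_a * time_s / FARADAY_CONSTANT) * (molar_mass / valence)

# Example: copper deposition (Cu2+, M = 63.55 g/mol) at 2 A for one hour.
mass = deposited_mass(current_a=2.0, time_s=3600.0, molar_mass=63.55, valence=2)
print(f"copper deposited ~ {mass:.2f} g")   # about 2.4 g
```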
Applications
Electroplating – a process where a thin layer of metal is deposited onto the surface of an object using an electric current
Electrochemical cells – generates electrical energy from chemical reactions
Electrotyping – a process used to create metal copies of designs by depositing metal onto a mold using electroplating
Electrowinning – a process that extract metals from their solutions using an electric current
Electroforming – a process that deposits metal onto a mold or substrate to create metal parts
Anodization – a process that converts the surface of a metal into a durable corrosion-resistant oxide layer
Conductive polymers – organic polymers that conduct electricity
Water electrolysis – a process that uses an electric current to split water molecules into hydrogen and oxygen gases
Electrolytic capacitors – a type of capacitor that uses an electrolytic solution as one of its plates
See also
Electrolysis
Faraday's law of induction
Tafel equation
References
Further reading
Serway, Moses, and Moyer, Modern Physics, third edition (2005), principles of physics.
Experiment with Faraday's laws
Electrochemistry
Electrolysis
Electrochemical equations
Scientific laws
Michael Faraday | Faraday's laws of electrolysis | [
"Chemistry",
"Mathematics"
] | 678 | [
"Mathematical objects",
"Scientific laws",
"Equations",
"Electrochemistry",
"Electrolysis",
"Electrochemical equations"
] |
742,352 | https://en.wikipedia.org/wiki/Derivative%20test | In calculus, a derivative test uses the derivatives of a function to locate the critical points of a function and determine whether each point is a local maximum, a local minimum, or a saddle point. Derivative tests can also give information about the concavity of a function.
The usefulness of derivatives to find extrema is proved mathematically by Fermat's theorem of stationary points.
First-derivative test
The first-derivative test examines a function's monotonic properties (where the function is increasing or decreasing), focusing on a particular point in its domain. If the function "switches" from increasing to decreasing at the point, then the function will achieve a highest value at that point. Similarly, if the function "switches" from decreasing to increasing at the point, then it will achieve a least value at that point. If the function fails to "switch" and remains increasing or remains decreasing, then no highest or least value is achieved.
One can examine a function's monotonicity without calculus. However, calculus is usually helpful because there are sufficient conditions that guarantee the monotonicity properties above, and these conditions apply to the vast majority of functions one would encounter.
Precise statement of monotonicity properties
Stated precisely, suppose that f is a real-valued function defined on some open interval containing the point x and suppose further that f is continuous at x.
If there exists a positive number r > 0 such that f is weakly increasing on (x − r, x] and weakly decreasing on [x, x + r), then f has a local maximum at x.
If there exists a positive number r > 0 such that f is strictly increasing on (x − r, x] and strictly increasing on [x, x + r), then f is strictly increasing on (x − r, x + r) and does not have a local maximum or minimum at x.
Note that in the first case, f is not required to be strictly increasing or strictly decreasing to the left or right of x, while in the last case, f is required to be strictly increasing or strictly decreasing. The reason is that in the definition of local maximum and minimum, the inequality is not required to be strict: e.g. every value of a constant function is considered both a local maximum and a local minimum.
Precise statement of first-derivative test
The first-derivative test depends on the "increasing–decreasing test", which is itself ultimately a consequence of the mean value theorem. It is a direct consequence of the way the derivative is defined and its connection to decrease and increase of a function locally, combined with the previous section.
Suppose f is a real-valued function of a real variable defined on some interval containing the critical point a. Further suppose that f is continuous at a and differentiable on some open interval containing a, except possibly at a itself.
If there exists a positive number r > 0 such that for every x in (a − r, a) we have f′(x) ≥ 0 and for every x in (a, a + r) we have f′(x) ≤ 0, then f has a local maximum at a.
If there exists a positive number r > 0 such that for every x in (a − r, a) we have f′(x) ≤ 0 and for every x in (a, a + r) we have f′(x) ≥ 0, then f has a local minimum at a.
If there exists a positive number r > 0 such that for every x in (a − r, a) ∪ (a, a + r) we have f′(x) > 0, then f is strictly increasing at a and has neither a local maximum nor a local minimum there.
If none of the above conditions hold, then the test fails. (Such a condition is not vacuous; there are functions that satisfy none of the first three conditions, e.g. f(x) = x² sin(1/x)).
Again, corresponding to the comments in the section on monotonicity properties, note that in the first two cases, the inequality is not required to be strict, while in the third, strict inequality is required.
Applications
The first-derivative test is helpful in solving optimization problems in physics, economics, and engineering. In conjunction with the extreme value theorem, it can be used to find the absolute maximum and minimum of a real-valued function defined on a closed and bounded interval. In conjunction with other information such as concavity, inflection points, and asymptotes, it can be used to sketch the graph of a function.
Second-derivative test (single variable)
After establishing the critical points of a function, the second-derivative test uses the value of the second derivative at those points to determine whether such points are a local maximum or a local minimum. If the function f is twice-differentiable at a critical point x (i.e. a point where f′(x) = 0), then:
If f″(x) < 0, then f has a local maximum at x.
If f″(x) > 0, then f has a local minimum at x.
If f″(x) = 0, the test is inconclusive.
In the last case, Taylor's theorem may sometimes be used to determine the behavior of f near x using higher derivatives.
Proof of the second-derivative test
Suppose we have f″(x) > 0 (the proof for f″(x) < 0 is analogous). By assumption, f′(x) = 0. Then
f″(x) = lim(h → 0) [f′(x + h) − f′(x)]/h = lim(h → 0) f′(x + h)/h > 0.
Thus, for h sufficiently small we get
f′(x + h)/h > 0,
which means that f′(x + h) < 0 if h < 0 (intuitively, f is decreasing as it approaches x from the left), and that f′(x + h) > 0 if h > 0 (intuitively, f is increasing as we go right from x). Now, by the first-derivative test, f has a local minimum at x.
Concavity test
A related but distinct use of second derivatives is to determine whether a function is concave up or concave down at a point. It does not, however, provide information about inflection points. Specifically, a twice-differentiable function f is concave up if f″(x) > 0 and concave down if f″(x) < 0. Note that if f(x) = x⁴, then x = 0 has zero second derivative, yet is not an inflection point, so the second derivative alone does not give enough information to determine whether a given point is an inflection point.
Higher-order derivative test
The higher-order derivative test or general derivative test is able to determine whether a function's critical points are maxima, minima, or points of inflection for a wider variety of functions than the second-order derivative test. As shown below, the second-derivative test is mathematically identical to the special case of n = 1 in the higher-order derivative test.
Let f be a real-valued, sufficiently differentiable function on an interval I ⊂ ℝ, let c ∈ I, and let n ≥ 1 be a natural number. Also let all the derivatives of f at c be zero up to and including the n-th derivative, but with the (n + 1)-th derivative being non-zero:
f′(c) = ⋯ = f⁽ⁿ⁾(c) = 0 and f⁽ⁿ⁺¹⁾(c) ≠ 0.
There are four possibilities, the first two cases where c is an extremum, the second two where c is a (local) saddle point:
If n is odd and f⁽ⁿ⁺¹⁾(c) < 0, then c is a local maximum.
If n is odd and f⁽ⁿ⁺¹⁾(c) > 0, then c is a local minimum.
If n is even and f⁽ⁿ⁺¹⁾(c) < 0, then c is a strictly decreasing point of inflection.
If n is even and f⁽ⁿ⁺¹⁾(c) > 0, then c is a strictly increasing point of inflection.
Since n must be either odd or even, this analytical test classifies any stationary point of f, so long as a nonzero derivative shows up eventually.
Example
Say we want to perform the general derivative test on the function f(x) = x^6 at the point x = 0. To do this, we calculate the derivatives of the function and then evaluate them at the point of interest until the result is nonzero.
f′(x) = 6x^5, f′(0) = 0;
f′′(x) = 30x^4, f′′(0) = 0;
f′′′(x) = 120x^3, f′′′(0) = 0;
f^(4)(x) = 360x^2, f^(4)(0) = 0;
f^(5)(x) = 720x, f^(5)(0) = 0;
f^(6)(x) = 720, f^(6)(0) = 720.
As shown above, at the point x = 0 the function has all of its derivatives equal to 0, except for the 6th derivative, which is positive. Thus n = 5, and by the test, there is a local minimum at 0.
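The same bookkeeping can be automated. The sketch below (SymPy, assuming the example f(x) = x^6 reconstructed above) differentiates until the first non-vanishing derivative appears and then applies the four cases of the test.

```python
# Sketch of the general (higher-order) derivative test for f(x) = x**6 at c = 0.
import sympy as sp

x = sp.symbols('x')
f = x**6
c = 0

order, deriv = 0, f
while deriv.subs(x, c) == 0:        # differentiate until a non-zero value appears
    order += 1
    deriv = sp.diff(f, x, order)

value = deriv.subs(x, c)
n = order - 1                       # derivatives 1..n vanish, the (n+1)-th does not
print(f"first non-zero derivative: order {order}, value {value}, so n = {n}")
if n % 2 == 1:
    print("local minimum" if value > 0 else "local maximum")
else:
    print("point of inflection")
```

For f(x) = x^6 this finds the 6th derivative (value 720), so n = 5 and the test reports a local minimum, in agreement with the example above.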
Multivariable case
For a function of more than one variable, the second-derivative test generalizes to a test based on the eigenvalues of the function's Hessian matrix at the critical point. In particular, assuming that all second-order partial derivatives of f are continuous on a neighbourhood of a critical point x, then if the eigenvalues of the Hessian at x are all positive, then x is a local minimum. If the eigenvalues are all negative, then x is a local maximum, and if some are positive and some negative, then the point is a saddle point. If the Hessian matrix is singular, then the second-derivative test is inconclusive.
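A short sketch of this multivariable test, using an example function of my own choosing (f(x, y) = x^3 − 3x + y^2), classifies each critical point by the eigenvalues of its Hessian.

```python
# Sketch: classify critical points of a two-variable example function by the
# eigenvalues of the Hessian. The function is an illustrative choice only.
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 - 3*x + y**2

grad = [sp.diff(f, v) for v in (x, y)]
H = sp.hessian(f, (x, y))

for sol in sp.solve(grad, (x, y), dict=True):     # critical points: grad f = 0
    eigs = list(H.subs(sol).eigenvals())
    if all(ev > 0 for ev in eigs):
        kind = "local minimum"
    elif all(ev < 0 for ev in eigs):
        kind = "local maximum"
    elif any(ev > 0 for ev in eigs) and any(ev < 0 for ev in eigs):
        kind = "saddle point"
    else:
        kind = "inconclusive (singular Hessian)"
    print(sol, eigs, kind)
```

Here the critical point (1, 0) has eigenvalues 6 and 2 (a local minimum), while (−1, 0) has eigenvalues −6 and 2 (a saddle point).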
See also
Bordered Hessian
Convex function
Differentiability
Fermat's theorem (stationary points)
Inflection point
Karush–Kuhn–Tucker conditions
Maxima and minima
Optimization (mathematics)
Phase line – virtually identical diagram, used in the study of ordinary differential equations
Saddle point
Second partial derivative test
Stationary point
Further reading
References
External links
"Second Derivative Test" at Mathworld
Concavity and the Second Derivative Test
Thomas Simpson's use of Second Derivative Test to Find Maxima and Minima at Convergence
Differential calculus | Derivative test | [
"Mathematics"
] | 1,808 | [
"Differential calculus",
"Calculus"
] |
742,477 | https://en.wikipedia.org/wiki/Newton%20polygon | In mathematics, the Newton polygon is a tool for understanding the behaviour of polynomials over local fields, or more generally, over ultrametric fields.
In the original case, the ultrametric field of interest was essentially the field of formal Laurent series in the indeterminate X, i.e. the field of fractions of the formal power series ring K[[X]],
over K, where K was the real number or complex number field. This is still of considerable utility with respect to Puiseux expansions. The Newton polygon is an effective device for understanding the leading terms a·X^r
of the power series expansion solutions to equations P(F(X), X) = 0
where P is a polynomial with coefficients in K[X], the polynomial ring; that is, implicitly defined algebraic functions. The exponents r here are certain rational numbers, depending on the branch chosen; and the solutions themselves are power series in K[[Y]]
with Y = X^(1/d) for a denominator d corresponding to the branch. The Newton polygon gives an effective, algorithmic approach to calculating d.
After the introduction of the p-adic numbers, it was shown that the Newton polygon is just as useful in questions of ramification for local fields, and hence in algebraic number theory. Newton polygons have also been useful in the study of elliptic curves.
Definition
A priori, given a polynomial over a field, the behaviour of the roots (assuming it has roots) will be unknown. Newton polygons provide one technique for the study of the behaviour of the roots.
Let K be a field endowed with a non-archimedean valuation vK : K → R ∪ {∞}, and let
f(x) = a_n x^n + a_{n−1} x^{n−1} + ⋯ + a_1 x + a_0 ∈ K[x]
with a_0 a_n ≠ 0. Then the Newton polygon of f is defined to be the lower boundary of the convex hull of the set of points
P_i = (i, vK(a_i)),
ignoring the points with a_i = 0.
Restated geometrically, plot all of these points Pi on the xy-plane. Let us assume that the indices of the points increase from left to right (P0 is the leftmost point, Pn is the rightmost point). Then, starting at P0, draw a ray straight down parallel with the y-axis, and rotate this ray counter-clockwise until it hits the point Pk1 (not necessarily P1). Break the ray here. Now draw a second ray from Pk1 straight down parallel with the y-axis, and rotate this ray counter-clockwise until it hits the point Pk2. Continue until the process reaches the point Pn; the resulting polygon (containing the points P0, Pk1, Pk2, ..., Pkm, Pn) is the Newton polygon.
Another, perhaps more intuitive way to view this process is this : consider a rubber band surrounding all the points P0, ..., Pn. Stretch the band upwards, such that the band is stuck on its lower side by some of the points (the points act like nails, partially hammered into the xy plane). The vertices of the Newton polygon are exactly those points.
For a neat diagram of this see Ch6 §3 of "Local Fields" by JWS Cassels, LMS Student Texts 3, CUP 1986. It is on p99 of the 1986 paperback edition.
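The "rotating ray" (equivalently, lower convex hull) construction above is easy to carry out by machine. The following sketch computes the Newton polygon of an illustrative polynomial, x^2 + 5x + 25 with the 5-adic valuation; both the polynomial and the choice of valuation are my own examples, not taken from the text. It returns a single segment of slope −1, and indeed both roots of this polynomial have 5-adic valuation 1, the negative of the slope under the sign convention used in the next section.

```python
# Sketch: Newton polygon as the lower convex hull of the points (i, v(a_i)).
from fractions import Fraction

def padic_val(n, p):
    """Exponent of p in the non-zero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def newton_polygon(points):
    """Vertices of the lower convex hull of (i, v(a_i)) points, left to right."""
    hull = []
    for pt in sorted(points):
        # drop the previous vertex while it lies on or above the new lower edge
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (y2 - y1) * (pt[0] - x1) >= (pt[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(pt)
    return hull

# Example: f(x) = x^2 + 5x + 25 over the 5-adic numbers
coeffs = {0: 25, 1: 5, 2: 1}                                  # a_i indexed by i
pts = [(i, padic_val(a, 5)) for i, a in coeffs.items()]
vertices = newton_polygon(pts)
slopes = [Fraction(y2 - y1, x2 - x1)
          for (x1, y1), (x2, y2) in zip(vertices, vertices[1:])]
print("vertices:", vertices, "slopes:", slopes)               # one segment of slope -1
```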
Main theorem
With the notations in the previous section, the main result concerning the Newton polygon is the following theorem, which states that the valuations of the roots of f are entirely determined by its Newton polygon:
Let
μ1, μ2, ..., μr
be the slopes of the line segments of the Newton polygon of f(x) (as defined above) arranged in increasing order, and let
λ1, λ2, ..., λr
be the corresponding lengths of the line segments projected onto the x-axis (i.e. if we have a line segment stretching between the points (a, b) and (c, d) then the length is c − a).
The μi are distinct;
λ1 + λ2 + ⋯ + λr = n;
if α is a root of f in a splitting field (with the valuation extended accordingly), then v(α) ∈ {−μ1, ..., −μr};
for every i, the number of roots of f whose valuations are equal to −μi (counting multiplicities) is at most λi, with equality if f splits into the product of linear factors over K.
Corollaries and applications
With the notation of the previous sections, we denote, in what follows, by L the splitting field of f over K, and by vL an extension of vK to L.
Newton polygon theorem is often used to show the irreducibility of polynomials, as in the next corollary for example:
Suppose that the valuation vK is discrete and normalized, and that the Newton polygon of f contains only one segment whose slope is μ and whose projection on the x-axis is λ = n. If μ = a/n, with a coprime to n, then f is irreducible over K. In particular, since the Newton polygon of an Eisenstein polynomial consists of a single segment of slope −1/n connecting (0, 1) and (n, 0), Eisenstein criterion follows.
Indeed, by the main theorem, if is a root of ,
If were not irreducible over , then the degree of would be , and there would hold . But this is impossible since with coprime .
Another simple corollary is the following:
Assume that is Henselian. If the Newton polygon of fulfills for some , then has a root in .
Proof: By the main theorem, must have a single root whose valuation is In particular, is separable over .
If does not belong to , has a distinct Galois conjugate over , with , and is a root of , a contradiction.
More generally, the following factorization theorem holds:
Assume that is Henselian. Then , where , is monic for every , the roots of are of valuation , and .
Moreover, , and if is coprime to , is irreducible over .
Proof:
For every , denote by the product of the monomials such that is a root of and . We also denote the factorization of in into prime monic factors
Let be a root of . We can assume that is the minimal polynomial of over .
If is a root of , there exists a K-automorphism of that sends to , and we have since is Henselian. Therefore is also a root of .
Moreover, every root of of multiplicity is clearly a root of of multiplicity , since repeated roots share obviously the same valuation. This shows that divides
Let . Choose a root of . Notice that the roots of are distinct from the roots of . Repeat the previous argument with the minimal polynomial of over , assumed w.l.g. to be , to show that divides .
Continuing this process until all the roots of are exhausted, one eventually arrives to
, with . This shows that , monic.
But the are coprime since their roots have distinct valuations. Hence clearly , showing the main contention.
The fact that follows from the main theorem, and so does the fact that , by remarking that the Newton polygon of can have only one segment joining to . The condition for the irreducibility of follows from the corollary above. (q.e.d.)
The following is an immediate corollary of the factorization above, and constitutes a test for the reducibility of polynomials over Henselian fields:
Assume that is Henselian. If the Newton polygon does not reduce to a single segment then is reducible over .
Other applications of the Newton polygon come from the fact that a Newton polygon is sometimes a special case of a Newton polytope, and can be used to construct asymptotic solutions of two-variable polynomial equations like
Symmetric function explanation
In the context of a valuation, we are given certain information in the form of the valuations of elementary symmetric functions of the roots of a polynomial, and require information on the valuations of the actual roots, in an algebraic closure. This has aspects both of ramification theory and singularity theory. The valid inferences possible are to the valuations of power sums, by means of Newton's identities.
History
Newton polygons are named after Isaac Newton, who first described them and some of their uses in correspondence from the year 1676 addressed to Henry Oldenburg.
See also
F-crystal
Eisenstein's criterion
Newton–Okounkov body
Newton polytope
References
Gouvêa, Fernando: p-adic numbers: An introduction. Springer Verlag 1993. p. 199.
External links
Applet drawing a Newton Polygon
Algebraic number theory
Symmetric functions
Isaac Newton | Newton polygon | [
"Physics",
"Mathematics"
] | 1,649 | [
"Symmetry",
"Number theory",
"Symmetric functions",
"Algebraic number theory",
"Algebra"
] |
10,319,792 | https://en.wikipedia.org/wiki/Fragmentation%20%28cell%20biology%29 | Fragmentation describes the process of splitting into several pieces or fragments. In cell biology, fragmentation is useful for a cell during both DNA cloning and apoptosis. DNA cloning is important in asexual reproduction or creation of identical DNA molecules, and can be performed spontaneously by the cell or intentionally by laboratory researchers. Apoptosis is the programmed destruction of cells, and the DNA molecules within them, and is a highly regulated process. These two ways in which fragmentation is used in cellular processes describe normal cellular functions and common laboratory procedures performed with cells. However, problems within a cell can sometimes cause fragmentation that results in irregularities such as red blood cell fragmentation and sperm cell DNA fragmentation.
DNA Cloning
DNA cloning can be performed spontaneously by the cell for reproductive purposes. This is a form of asexual reproduction where an organism splits into fragments and then each of these fragments develop into mature, fully grown individuals that are clones of the original organism (See reproductive fragmentation).
DNA cloning can also be performed intentionally by laboratory researchers. Here, DNA fragmentation is a molecular genetic technique that permits researchers to use recombinant DNA technology to prepare large numbers of identical DNA molecules.
In order for DNA cloning to be completed, it is necessary to obtain discrete, small regions of an organism's DNA that constitute specific genes. Only relatively small DNA molecules can be cloned in any available vector. Therefore, the long DNA molecules that compose an organism's genome must be cleaved into fragments that can be inserted into the vector DNA. Two enzymes facilitate the production of such recombinant DNA molecules:
1. Restriction Enzymes
Restriction enzymes are endonucleases produced by bacteria that typically recognize small base pair sequences (called restriction sites) and then cleave both strands of DNA at this site. A restriction site is typically a palindromic sequence, which means that the restriction-site sequence is the same on each strand of DNA when read in the 5' to 3' direction.
For each restriction enzyme, bacteria also produce a modification enzyme so that a host bacterium's own DNA is protected from cleavage. This is done by modifying the host DNA at or near each potential cleavage site. The modification enzyme adds a methyl group to one or two bases, and the presence of this methyl group prevents the restriction endonuclease from cutting the DNA.
Many restriction enzymes make staggered cuts in the two DNA strands at their recognition site, which generates fragments with a single stranded "tail" that overhangs at both ends, called a sticky end. Restriction enzymes can also make straight cuts in the two DNA strands at their recognition site, which generates blunt ends.
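As a toy illustration of these ideas (not part of the original text), the sketch below checks that a recognition site is palindromic in the reverse-complement sense and cuts a made-up sequence the way an enzyme with a staggered cut would. EcoRI, which recognizes GAATTC and cuts after the G on each strand, is used as the example enzyme.

```python
# Sketch: palindromic restriction sites and in-silico digestion of the top strand.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def is_palindromic(site):
    """A restriction site is palindromic if it equals its reverse complement."""
    return site == reverse_complement(site)

def digest(seq, site="GAATTC", cut_offset=1):
    """Fragments of the top strand after cutting at every occurrence of the site."""
    fragments, start, pos = [], 0, seq.find(site)
    while pos != -1:
        fragments.append(seq[start:pos + cut_offset])   # e.g. ...G / AATTC...
        start = pos + cut_offset
        pos = seq.find(site, pos + 1)
    fragments.append(seq[start:])
    return fragments

dna = "TTACGAATTCGGCATGAATTCCTA"     # made-up sequence with two EcoRI sites
print(is_palindromic("GAATTC"))      # True
print(digest(dna))                   # ['TTACG', 'AATTCGGCATG', 'AATTCCTA']
```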
2. DNA ligase
During normal DNA replication, DNA ligase catalyzes end-to-end joining (ligation) of short fragments of DNA, called Okazaki fragments. For the purposes of DNA cloning, purified DNA ligase is used to covalently join the ends of a restriction fragment and vector DNA that have complementary ends. They are covalently ligated together through the standard 3' to 5' phosphodiester bonds of DNA.
DNA ligase can ligate complementary sticky and blunt ends, but blunt-end ligation is inefficient and requires a higher concentration of both DNA and DNA ligase than the ligation of sticky ends does. For this reason, most restriction enzymes used in DNA cloning make staggered cuts in the DNA strands to create sticky ends.
The key to cloning a DNA fragment is to link it to a vector DNA molecule that can replicate within a host cell. After a single recombinant DNA molecule (composed of a vector plus an inserted DNA fragment) is introduced into a host cell, the inserted DNA can be replicated along with the vector, generating a large number of identical DNA molecules.
The basic scheme for this can be summarized as follows:
Vector + DNA Fragment
↓
Recombinant DNA
↓
Replication of recombinant DNA within host cell
↓
Isolation, sequencing, and manipulation of purified DNA fragment
There are numerous experimental variations to this scheme, but these steps are essential to DNA cloning in a laboratory.
Apoptosis
Apoptosis refers to the demise of cells by a specific form of programmed cell death, characterized by a well-defined sequence of morphological changes. Cellular and nuclear shrinkage, chromatin condensation and fragmentation, formation of apoptotic bodies and phagocytosis by neighboring cells characterize the main morphological changes in the apoptosis process. Extensive morphological and biochemical changes during apoptosis ensure that dying cells leave minimal impact on neighboring cells and/or tissues.
Genes involved in controlling cell death encode proteins with three distinct functions:
"Killer" proteins are required for a cell to begin the apoptotic process
"Destruction" proteins do things such as digest DNA in a dying cell
"Engulfment" proteins are required for phagocytosis of the dying cell by another cell
The cleavage of chromosomal DNA into smaller fragments is an integral part, and biochemical hallmark, of apoptosis. Apoptosis involves the activation of endonucleases with subsequent cleavage of chromatin DNA into fragments of 180 base pairs or multiples of 180 base pairs (e.g. 360, 540). This pattern of fragmentation can be used to detect apoptosis in tests such as a DNA laddering assay with gel electrophoresis, a TUNEL assay, or a Nicoletti assay.
Apoptotic DNA fragmentation relies on an enzyme called Caspase-Activated DNase (CAD). CAD is usually inhibited by another protein in the cell, called Inhibitor of caspase-activated DNase (ICAD). In order for apoptosis to begin, an enzyme called caspase 3 cleaves ICAD so that CAD becomes activated. CAD then cleaves the DNA between nucleosomes, which occur in chromatin at 180 base pair intervals. The sites between nucleosomes are the only parts of the DNA that are exposed and accessible to CAD.
Irregularities
DNA fragmentation can occur under certain conditions in a few different cell types. This can lead to problems for a cell, or it may lead to a cell receiving a signal to undergo apoptosis. Below are a couple of examples of irregular fragmentation that can occur in cells.
1. Red blood cell fragmentation
A fragmented red blood cell is known as a schistocyte and is generally the result of an intravascular mechanical injury to the red blood cell. A wide variety of schistocytes may be observed. Schistocytes are usually seen in relatively low numbers and are associated with conditions in which the normally smooth endothelial lining, or endothelium, is roughened or irregular, and/or the vascular lumen is crossed by strands of fibrin. Schistocytes are commonly seen in patients that have hemolytic anemia. They are also a feature of advanced iron deficiency anemia, but in this case the observed fragmentation is most likely a result of the fragility of the cells produced under these conditions.
2. Sperm cell DNA fragmentation
In an average male, less than 4% of his sperm cells will contain fragmented DNA. However, partaking in behaviors such as smoking can significantly increase DNA fragmentation in sperm cells. There is a negative correlation between the percentage of DNA fragmentation and the motility, morphology, and concentration of sperm. There is also a negative association between the percentage of sperm that contain fragmented DNA and the fertilization rate and embryo cleavage rate.
References
Cell biology | Fragmentation (cell biology) | [
"Biology"
] | 1,547 | [
"Cell biology"
] |
10,320,661 | https://en.wikipedia.org/wiki/Fluid%20Dynamics%20Prize%20%28APS%29 | The Fluid Dynamics Prize is a prize that has been awarded annually by the American Physical Society (APS) since 1979. The recipient is chosen for "outstanding achievement in fluid dynamics research". The prize is currently valued at . In 2004, the Otto Laporte Award—another APS award on fluid dynamics—was merged into the Fluid Dynamics Prize.
Recipients
The Fluid Dynamics Prize has been awarded to:
2022: Elisabeth Charlaix
2021:
2020: Katepalli Sreenivasan
2019: Alexander Smits
2018: Keith Moffatt
2017: Detlef Lohse
2016: Howard A. Stone
2015: Morteza Gharib
2014: Geneviève Comte-Bellot
2013: Elaine Surick Oran
2012: John F. Brady
2011: Tony Maxworthy
2010: E. John Hinch
2009: Stephen B. Pope
2008: Julio M. Ottino
2007:
2006: Thomas S. Lundgren
2005: Ronald J. Adrian
2004:
2003:
2002: Gary Leal
2001: Howard Brenner
2000:
1999: Daniel D. Joseph
1998: Fazle Hussain
1997: Louis Norberg Howard
1996: Parviz Moin
1995: Harry L Swinney
1994: Stephen H. Davis
1993: Theodore Yao-tsu Wu
1992: William R. Sears
1991: Andreas Acrivos
1990: John L. Lumley
1989:
1988:
1987: Anatol Roshko
1986: Robert T. Jones
1985: Chia-Shun Yih
1984: George Carrier
1983: Stanley Corrsin
1982: Howard W. Emmons
1981:
1980: Hans Wolfgang Liepmann
1979: Chia Chiao Lin
See also
List of physics awards
References
External links
Fluid Dynamics Prize, American Physical Society
Fluid dynamics
Awards established in 1979
Awards of the American Physical Society | Fluid Dynamics Prize (APS) | [
"Chemistry",
"Engineering"
] | 362 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
10,325,676 | https://en.wikipedia.org/wiki/Hybrid%20functional | Hybrid functionals are a class of approximations to the exchange–correlation energy functional in density functional theory (DFT) that incorporate a portion of exact exchange from Hartree–Fock theory with the rest of the exchange–correlation energy from other sources (ab initio or empirical). The exact exchange energy functional is expressed in terms of the Kohn–Sham orbitals rather than the density, so is termed an implicit density functional. One of the most commonly used versions is B3LYP, which stands for "Becke, 3-parameter, Lee–Yang–Parr".
Origin
The hybrid approach to constructing density functional approximations was introduced by Axel Becke in 1993. Hybridization with Hartree–Fock (HF) exchange (also called exact exchange) provides a simple scheme for improving the calculation of many molecular properties, such as atomization energies, bond lengths and vibration frequencies, which tend to be poorly described with simple "ab initio" functionals.
Method
A hybrid exchange–correlation functional is usually constructed as a linear combination of the Hartree–Fock exact exchange functional
E_x^HF = −(1/2) Σ_i Σ_j ∫∫ ψ_i*(r1) ψ_j*(r2) (1/r12) ψ_j(r1) ψ_i(r2) dr1 dr2
and any number of exchange and correlation explicit density functionals. The parameters determining the weight of each individual functional are typically specified by fitting the functional's predictions to experimental or accurately calculated thermochemical data, although in the case of the "adiabatic connection functionals" the weights can be set a priori.
B3LYP
For example, the popular B3LYP (Becke, 3-parameter, Lee–Yang–Parr) exchange-correlation functional is
E_xc^B3LYP = E_x^LDA + a0 (E_x^HF − E_x^LDA) + ax (E_x^GGA − E_x^LDA) + E_c^LDA + ac (E_c^GGA − E_c^LDA),
where a0 = 0.20, ax = 0.72, and ac = 0.81. E_x^GGA and E_c^GGA are generalized gradient approximations: the Becke 88 exchange functional and the correlation functional of Lee, Yang and Parr for B3LYP, and E_c^LDA is the VWN local spin density approximation to the correlation functional.
The three parameters defining B3LYP have been taken without modification from Becke's original fitting of the analogous B3PW91 functional to a set of atomization energies, ionization potentials, proton affinities, and total atomic energies.
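The mixing itself is just the linear combination written above. The following toy sketch evaluates it from pre-computed component energies; the numerical inputs are placeholders, since in practice the components come from a DFT code.

```python
# Toy sketch of the B3LYP linear combination. The component energies passed in
# are made-up placeholders; in practice they come from a DFT calculation.
A0, AX, AC = 0.20, 0.72, 0.81   # Becke's three mixing parameters

def b3lyp_xc(e_x_lda, e_x_hf, e_x_gga, e_c_lda, e_c_gga):
    """E_xc = E_x^LDA + a0(E_x^HF - E_x^LDA) + ax(E_x^GGA - E_x^LDA)
              + E_c^LDA + ac(E_c^GGA - E_c^LDA)."""
    return (e_x_lda
            + A0 * (e_x_hf - e_x_lda)
            + AX * (e_x_gga - e_x_lda)
            + e_c_lda
            + AC * (e_c_gga - e_c_lda))

# Illustrative component energies in hartree (placeholders only)
print(b3lyp_xc(e_x_lda=-9.10, e_x_hf=-9.40, e_x_gga=-9.30,
               e_c_lda=-0.60, e_c_gga=-0.45))
```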
PBE0
The PBE0 functional
mixes the Perdew–Burke–Ernzerhof (PBE) exchange energy and Hartree–Fock exchange energy in a set 3:1 ratio, along with the full PBE correlation energy:
E_xc^PBE0 = (1/4) E_x^HF + (3/4) E_x^PBE + E_c^PBE,
where E_x^HF is the Hartree–Fock exact exchange functional, E_x^PBE is the PBE exchange functional, and E_c^PBE is the PBE correlation functional.
HSE
The HSE (Heyd–Scuseria–Ernzerhof) exchange–correlation functional uses an error-function-screened Coulomb potential to calculate the exchange portion of the energy in order to improve computational efficiency, especially for metallic systems:
E_xc^HSE = a E_x^HF,SR(ω) + (1 − a) E_x^PBE,SR(ω) + E_x^PBE,LR(ω) + E_c^PBE,
where a is the mixing parameter, and ω is an adjustable parameter controlling the short-rangeness of the interaction. Standard values of a = 1/4 and ω = 0.2 Å⁻¹ (usually referred to as HSE06) have been shown to give good results for most systems. The HSE exchange–correlation functional degenerates to the PBE0 hybrid functional for ω = 0. E_x^HF,SR(ω) is the short-range Hartree–Fock exact exchange functional, E_x^PBE,SR(ω) and E_x^PBE,LR(ω) are the short- and long-range components of the PBE exchange functional, and E_c^PBE is the PBE correlation functional.
Meta-hybrid GGA
The M06 suite of functionals is a set of four meta-hybrid GGA and meta-GGA DFT functionals. These functionals are constructed by empirically fitting their parameters, while being constrained to a uniform electron gas.
The family includes the functionals M06-L, M06, M06-2X and M06-HF, with a different amount of exact exchange for each one. M06-L is fully local without HF exchange (thus it cannot be considered hybrid), M06 has 27% HF exchange, M06-2X 54% and M06-HF 100%.
The advantages and usefulness of each functional are
M06-L: Fast, good for transition metals, inorganic and organometallics.
M06: For main group, organometallics, kinetics and non-covalent bonds.
M06-2X: Main group, kinetics.
M06-HF: Charge-transfer TD-DFT, systems where self-interaction is pathological.
The suite gives good results for systems containing dispersion forces, one of the biggest deficiencies of standard DFT methods.
Medvedev, Perdew, et al. say: "Despite their excellent performance for energies and geometries, we must suspect that modern highly parameterized functionals need further guidance from exact constraints, or exact density, or both"
References
Density functional theory | Hybrid functional | [
"Physics",
"Chemistry"
] | 974 | [
"Density functional theory",
"Quantum chemistry",
"Quantum mechanics"
] |
19,377,279 | https://en.wikipedia.org/wiki/Laser%20Doppler%20vibrometer | A laser Doppler vibrometer (LDV) is a scientific instrument that is used to make non-contact vibration measurements of a surface. The laser beam from the LDV is directed at the surface of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the reflected laser beam frequency due to the motion of the surface. The output of an LDV is generally a continuous analog voltage that is directly proportional to the target velocity component along the direction of the laser beam.
Some advantages of an LDV over similar measurement devices such as an accelerometer are that the LDV can be directed at targets that are difficult to access, or that may be too small or too hot to attach a physical transducer. Also, the LDV makes the vibration measurement without mass-loading the target, which is especially important for MEMS devices.
Principles of operation
A vibrometer is generally a two beam laser interferometer that measures the frequency (or phase) difference between an internal reference beam and a test beam. The most common type of laser in an LDV is the helium–neon laser, although laser diodes, fiber lasers, and Nd:YAG lasers are also used. The test beam is directed to the target, and scattered light from the target is collected and interfered with the reference beam on a photodetector, typically a photodiode. Most commercial vibrometers work in a heterodyne regime by adding a known frequency shift (typically 30–40 MHz) to one of the beams. This frequency shift is usually generated by a Bragg cell, or acousto-optic modulator.
A schematic of a typical laser vibrometer is shown above. The beam from the laser, which has a frequency fo, is divided into a reference beam and a test beam with a beamsplitter. The test beam then passes through the Bragg cell, which adds a frequency shift fb. This frequency shifted beam then is directed to the target. The motion of the target adds a Doppler shift to the beam given by fd = 2*v(t)*cos(α)/λ, where v(t) is the velocity of the target as a function of time, α is the angle between the laser beam and the velocity vector, and λ is the wavelength of the light.
Light scatters from the target in all directions, but some portion of the light is collected by the LDV and reflected by the beamsplitter to the photodetector. This light has a frequency equal to fo + fb + fd. This scattered light is combined with the reference beam at the photo-detector. The initial frequency of the laser is very high (> 10^14 Hz), which is higher than the response of the detector. The detector does respond, however, to the beat frequency between the two beams, which is at fb + fd (typically in the tens of MHz range).
The output of the photodetector is a standard frequency modulated (FM) signal, with the Bragg cell frequency as the carrier frequency, and the Doppler shift as the modulation frequency. This signal can be demodulated to derive the velocity vs. time of the vibrating target.
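The demodulation step can be sketched numerically. The simulation below builds the beat signal at the Bragg frequency with a Doppler phase of 4πx(t)/λ, recovers the instantaneous phase with an analytic-signal (Hilbert-transform) demodulator, and differentiates it to estimate the target velocity. All parameter values are illustrative choices, not values from the text.

```python
# Sketch of heterodyne demodulation for an LDV signal (illustrative parameters).
import numpy as np
from scipy.signal import hilbert

lam = 633e-9              # He-Ne wavelength (m)
f_b = 40e6                # Bragg-cell frequency shift (Hz)
f_vib, v0 = 1e3, 0.01     # target vibration frequency (Hz) and velocity amplitude (m/s)
fs = 200e6                # sampling rate (Hz)
t = np.arange(0, 1e-3, 1/fs)

velocity = v0 * np.cos(2*np.pi*f_vib*t)
displacement = v0/(2*np.pi*f_vib) * np.sin(2*np.pi*f_vib*t)

# Detector output: beat at f_b, phase-modulated by the Doppler term 4*pi*x(t)/lambda
signal = np.cos(2*np.pi*f_b*t + 4*np.pi*displacement/lam)

# Demodulation: analytic signal -> remove carrier -> unwrap phase -> differentiate
analytic = hilbert(signal) * np.exp(-2j*np.pi*f_b*t)
phase = np.unwrap(np.angle(analytic))
v_est = np.gradient(phase, t) * lam/(4*np.pi)

trim = slice(1000, -1000)  # discard Hilbert-transform edge effects
print("max velocity error (m/s):", np.max(np.abs(v_est[trim] - velocity[trim])))
```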
Applications
LDVs are used in a wide variety of scientific, industrial, and medical applications. Some examples are provided below:
Aerospace – LDVs are being used as tools in non-destructive inspection of aircraft components.
Acoustic – LDVs are standard tools for speaker design, and have also been used to diagnose the performance of musical instruments.
Architectural – LDVs are being used for bridge and structure vibration tests.
Automotive – LDVs have been used extensively in many automotive applications, such as structural dynamics, brake diagnostics, quantification of noise, vibration, and harshness (NVH), and accurate speed measurement.
Biological – LDVs have been used for diverse applications such as eardrum diagnostics and insect communication.
Calibration – Since LDVs measure motion that can be calibrated directly to the wavelength of light, they are frequently used to calibrate other types of transducers.
Hard disk drive diagnostics – LDVs have been used extensively in the analysis of hard disk drives, specifically in the area of head positioning.
Dental Devices - LDVs are used in the dental industry to measure the vibration signature of dental scalers to improve vibration quality.
Landmine detection – LDVs have shown great promise in the detection of buried landmines. The technique uses an audio source such as a loudspeaker to excite the ground, causing the ground to vibrate a very small amount with the LDV used to measure the amplitude of the ground vibrations. Areas above a buried mine show an enhanced ground velocity at the resonance frequency of the mine-soil system. Mine detection with single-beam scanning LDVs, an array of LDVs, and multi-beam LDVs has been demonstrated.
Security – Laser Doppler vibrometers (LDVs) as non-contact vibration sensors have the ability to acquire voice signals remotely. With the assistance of a visual sensor (camera), various targets in the environment where an audio event takes place can be selected as reflecting surfaces for collecting acoustic signals by an LDV. The performance of the LDV greatly depends on the vibration characteristics of the selected targets (surfaces) in the scene, on which the laser beam strikes and from which it returns.
Materials Research – Due to the non-contact method, Laser Vibrometers, especially Laser Scanning Vibrometers, can measure surface vibrations of modern materials like carbon plates. The vibration information can help identify and study defects as materials with defects will show a different vibration profile compared to materials without defect.
Types
Single-point vibrometers – This is the most common type of LDV. It can measure one directional out of plane movement.
Scanning vibrometers – A scanning LDV adds a set of X-Y scanning mirrors, allowing the single laser beam to be moved across the surface of interest.
Holographic laser Doppler vibrometry (HLDV) – An extended-illumination LDV that relies on digital holography for image rendering to capture the motion of a surface at many points simultaneously.
3-D vibrometers – A standard LDV measures the velocity of the target along the direction of the laser beam. To measure all three components of the target's velocity, a 3-D vibrometer measures a location with three independent beams, which strike the target from three different directions. This allows a determination of the complete in-plane and out-of-plane velocity of the target.
Rotational vibrometers – A rotational LDV is used to measure rotational or angular velocity.
Differential vibrometers – A differential LDV measures the out-of-plane velocity difference between two locations on the target.
Multi-beam vibrometers – A multi-beam LDV measures the target velocity at several locations simultaneously.
Self-mixing vibrometers – Simple LDV configuration with ultra-compact optical head. These are generally based on a laser diode with a built-in photodetector.
Continuous scan laser Doppler vibrometry (CSLDV) – A modified LDV that sweeps the laser continuously across the surface of the test specimen to capture the motion of a surface at many points simultaneously
See also
Fibre-optic gyroscope
Laser Doppler imaging
Laser Doppler velocimetry
Laser microphone
Laser scanning vibrometry
Laser turntable
Optical heterodyne detection
References
External links
Introduction to laser Doppler vibrometers and physical principles
Video of the basic principles of laser Doppler vibrometry
Laser applications
Doppler effects
Measurement
Measuring instruments | Laser Doppler vibrometer | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 1,598 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Astrophysics",
"Size",
"Measurement",
"Measuring instruments",
"Doppler effects"
] |
19,377,694 | https://en.wikipedia.org/wiki/Spin%20transition | The spin transition is an example of transition between two electronic states in molecular chemistry. The ability of an electron to transit from a stable to another stable (or metastable) electronic state in a reversible and detectable fashion, makes these molecular systems appealing in the field of molecular electronics.
In octahedral surroundings
When a transition metal ion with electronic configuration 3d^n, n = 4 to 7, is in octahedral surroundings, its ground state may be low spin (LS) or high spin (HS), depending to a first approximation on the magnitude of the energy gap Δ between the t2g and eg metal orbitals relative to the mean spin pairing energy P (see Crystal field theory). More precisely, for Δ > P, the ground state arises from the configuration where the electrons occupy first the t2g orbitals of lower energy, and if there are more than six electrons, the eg orbitals of higher energy. The ground state is then LS. On the other hand, for Δ < P, Hund's rule is obeyed. The HS ground state has the same multiplicity as the free metal ion. If the values of Δ and P are comparable, a LS↔HS transition may occur.
configurations
Among all the possible configurations of the metal ion, 3d^5 and 3d^6 are by far the most important. The spin transition phenomenon, in fact, was first observed in 1930 for tris(dithiocarbamato)iron(III) compounds. On the other hand, the iron(II) spin transition complexes were the most extensively studied: among these, two may be considered as archetypes of spin transition systems, namely Fe(NCS)2(bipy)2 and Fe(NCS)2(phen)2 (bipy = 2,2'-bipyridine and phen = 1,10-phenanthroline).
Iron(II) complexes
We discuss the mechanism of the spin transition by focusing on the specific case of iron(II) complexes. At the molecular scale the spin transition corresponds to an intraionic electron transfer with spin flip of the transferred electrons. For an iron(II) compound this transfer involves two electrons and the spin variation is ΔS = 2. The occupancy of the eg orbitals is higher in the HS state than in the LS state and these orbitals are more antibonding than the t2g. It follows that the average metal–ligand bond length is longer in the HS state than in the LS state. This difference is in the range 1.4–2.4 pm for iron(II) compounds.
To induce a spin transition
The most common way to induce a spin transition is to change the temperature of the system: the transition is then characterized by a curve of γHS versus T, where γHS is the molar fraction of molecules in the high-spin state. Several techniques are currently used to obtain such curves. The simplest method consists of measuring the temperature dependence of the molar magnetic susceptibility. Any other technique that provides different responses according to whether the state is LS or HS may also be used to determine γHS. Among these techniques, Mössbauer spectroscopy has been particularly useful in the case of iron compounds, showing two well resolved quadrupole doublets. One of these is associated with LS molecules, the other with HS molecules: the high-spin molar fraction may then be deduced from the relative intensities of the doublets.
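In the simplest treatment this reduces to a ratio of areas. The sketch below uses made-up doublet areas and assumes, purely for illustration, that the recoil-free fractions of the HS and LS sites are equal (in real analyses they generally are not).

```python
# Minimal sketch: high-spin molar fraction from the areas of the two Mössbauer
# quadrupole doublets. Areas are made up; equal recoil-free fractions assumed.
def high_spin_fraction(area_hs, area_ls):
    return area_hs / (area_hs + area_ls)

print(high_spin_fraction(area_hs=3.2, area_ls=1.1))   # about 0.74
```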
Types of transition
Various types of transition have been observed. This may be abrupt, occurring within a few kelvins range, or smooth, occurring within a large temperature range. It could also be incomplete both at low temperature and at high temperature, even if the latter is more often observed. Moreover, the curves may be strictly identical in the cooling or heating modes, or exhibit a hysteresis: in this case the system could assume two different electronic states in a certain range of temperature. Finally the transition may occur in two steps.
See also
Spin crossover
Quantum chemistry | Spin transition | [
"Physics",
"Chemistry"
] | 790 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
" and optical physics"
] |
19,378,200 | https://en.wikipedia.org/wiki/Representation%20theory | Representation theory is a branch of mathematics that studies abstract algebraic structures by representing their elements as linear transformations of vector spaces, and studies modules over these abstract algebraic structures. In essence, a representation makes an abstract algebraic object more concrete by describing its elements by matrices and their algebraic operations (for example, matrix addition, matrix multiplication).
The algebraic objects amenable to such a description include groups, associative algebras and Lie algebras. The most prominent of these (and historically the first) is the representation theory of groups, in which elements of a group are represented by invertible matrices such that the group operation is matrix multiplication.
Representation theory is a useful method because it reduces problems in abstract algebra to problems in linear algebra, a subject that is well understood. Representations of more abstract objects in terms of familiar linear algebra can elucidate properties and simplify calculations within more abstract theories. For instance, representing a group by an infinite-dimensional Hilbert space allows methods of analysis to be applied to the theory of groups. Furthermore, representation theory is important in physics because it can describe how the symmetry group of a physical system affects the solutions of equations describing that system.
Representation theory is pervasive across fields of mathematics. The applications of representation theory are diverse. In addition to its impact on algebra, representation theory
generalizes Fourier analysis via harmonic analysis,
is connected to geometry via invariant theory and the Erlangen program,
has an impact in number theory via automorphic forms and the Langlands program.
There are many approaches to representation theory: the same objects can be studied using methods from algebraic geometry, module theory, analytic number theory, differential geometry, operator theory, algebraic combinatorics and topology.
The success of representation theory has led to numerous generalizations. One of the most general is in category theory. The algebraic objects to which representation theory applies can be viewed as particular kinds of categories, and the representations as functors from the object category to the category of vector spaces. This description points to two natural generalizations: first, the algebraic objects can be replaced by more general categories; second, the target category of vector spaces can be replaced by other well-understood categories.
Definitions and concepts
Let V be a vector space over a field F. For instance, suppose V is R^n or C^n, the standard n-dimensional space of column vectors over the real or complex numbers, respectively. In this case, the idea of representation theory is to do abstract algebra concretely by using n × n matrices of real or complex numbers.
There are three main sorts of algebraic objects for which this can be done: groups, associative algebras and Lie algebras.
The set of all invertible n × n matrices is a group under matrix multiplication, and the representation theory of groups analyzes a group by describing ("representing") its elements in terms of invertible matrices.
Matrix addition and multiplication make the set of all n × n matrices into an associative algebra, and hence there is a corresponding representation theory of associative algebras.
If we replace matrix multiplication MN by the matrix commutator MN − NM, then the n × n matrices become instead a Lie algebra, leading to a representation theory of Lie algebras.
This generalizes to any field F and any vector space V over F, with linear maps replacing matrices and composition replacing matrix multiplication: there is a group GL(V, F) of automorphisms of V, an associative algebra EndF(V) of all endomorphisms of V, and a corresponding Lie algebra gl(V, F).
Definition
Action
There are two ways to define a representation. The first uses the idea of an action, generalizing the way that matrices act on column vectors by matrix multiplication.
A representation of a group G or (associative or Lie) algebra A on a vector space V is a map
Φ : G × V → V or Φ : A × V → V
with two properties.
For any g in G (or x in A), the map
v ↦ Φ(g, v)
is linear (over F).
If we introduce the notation g · v for Φ(g, v), then for any g1, g2 in G and v in V:
(2.1) e · v = v
(2.2) g1 · (g2 · v) = (g1g2) · v
where e is the identity element of G and g1g2 is the group product in G.
The definition for associative algebras is analogous, except that associative algebras do not always have an identity element, in which case equation (2.1) is omitted. Equation (2.2) is an abstract expression of the associativity of matrix multiplication. This doesn't hold for the matrix commutator and also there is no identity element for the commutator. Hence for Lie algebras, the only requirement is that for any x1, x2 in A and v in V:
(2.2′) x1 · (x2 · v) − x2 · (x1 · v) = [x1, x2] · v
where [x1, x2] is the Lie bracket, which generalizes the matrix commutator MN − NM.
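A concrete toy example of the group case (my own choice, not from the text): the cyclic group Z/4 represented on R^2 by quarter-turn rotations, with a numerical check of property (2.2).

```python
# Sketch: the cyclic group Z/4 acting on R^2 by rotations of k*90 degrees,
# with a check of the homomorphism property phi(g1 + g2) = phi(g1) @ phi(g2).
import numpy as np

def phi(k):
    """Representation of k in Z/4 as a rotation by k * 90 degrees."""
    theta = k * np.pi / 2
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

for g1 in range(4):
    for g2 in range(4):
        assert np.allclose(phi((g1 + g2) % 4), phi(g1) @ phi(g2))
print("phi is a group homomorphism Z/4 -> GL(2, R)")
```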
Mapping
The second way to define a representation focuses on the map φ sending g in G to a linear map φ(g): V → V, which satisfies
φ(g1g2) = φ(g1) ∘ φ(g2) for all g1, g2 in G,
and similarly in the other cases. This approach is both more concise and more abstract.
From this point of view:
a representation of a group G on a vector space V is a group homomorphism φ: G → GL(V,F);
a representation of an associative algebra A on a vector space V is an algebra homomorphism φ: A → EndF(V);
a representation of a Lie algebra 𝖆 on a vector space V is a Lie algebra homomorphism φ: 𝖆 → gl(V, F).
Terminology
The vector space V is called the representation space of φ and its dimension (if finite) is called the dimension of the representation (sometimes degree, as in ). It is also common practice to refer to V itself as the representation when the homomorphism φ is clear from the context; otherwise the notation (V,φ) can be used to denote a representation.
When V is of finite dimension n, one can choose a basis for V to identify V with Fn, and hence recover a matrix representation with entries in the field F.
An effective or faithful representation is a representation (V,φ), for which the homomorphism φ is injective.
Equivariant maps and isomorphisms
If V and W are vector spaces over F, equipped with representations φ and ψ of a group G, then an equivariant map from V to W is a linear map α: V → W such that
α(g · v) = g · α(v)
for all g in G and v in V. In terms of φ: G → GL(V) and ψ: G → GL(W), this means
α ∘ φ(g) = ψ(g) ∘ α
for all g in G; that is, the corresponding square diagram commutes.
Equivariant maps for representations of an associative or Lie algebra are defined similarly. If α is invertible, then it is said to be an isomorphism, in which case V and W (or, more precisely, φ and ψ) are isomorphic representations, also phrased as equivalent representations. An equivariant map is often called an intertwining map of representations. Also, in the case of a group G, it is on occasion called a G-map.
Isomorphic representations are, for practical purposes, "the same"; they provide the same information about the group or algebra being represented. Representation theory therefore seeks to classify representations up to isomorphism.
Subrepresentations, quotients, and irreducible representations
If (V, φ) is a representation of (say) a group G, and W is a linear subspace of V that is preserved by the action of G in the sense that g · w ∈ W for all g ∈ G and w ∈ W (Serre calls these W stable under G), then W is called a subrepresentation: by defining ψ(g) to be the restriction of φ(g) to W, (W, ψ) is a representation of G and the inclusion of W into V is an equivariant map. The quotient space V/W can also be made into a representation of G. If V has exactly two subrepresentations, namely the trivial subspace {0} and V itself, then the representation is said to be irreducible; if V has a proper nontrivial subrepresentation, the representation is said to be reducible.
The definition of an irreducible representation implies Schur's lemma: an equivariant map α: V → W between irreducible representations is either the zero map or an isomorphism, since its kernel and image are subrepresentations. In particular, when V = W, this shows that the equivariant endomorphisms of V form an associative division algebra over the underlying field F. If F is algebraically closed, the only equivariant endomorphisms of an irreducible representation are the scalar multiples of the identity.
Irreducible representations are the building blocks of representation theory for many groups: if a representation V is not irreducible then it is built from a subrepresentation and a quotient that are both "simpler" in some sense; for instance, if V is finite-dimensional, then both the subrepresentation and the quotient have smaller dimension. There are counterexamples where a representation has a subrepresentation, but only has one non-trivial irreducible component. For example, the additive group (R, +) has a two dimensional representation
φ(a) = [[1, a], [0, 1]]
(acting on column vectors). This group has the vector (1, 0) fixed by this homomorphism, but the complement subspace maps to
(0, 1) ↦ (a, 1),
giving only one irreducible subrepresentation. This is true for all unipotent groups.
Direct sums and indecomposable representations
If (V,φ) and (W,ψ) are representations of (say) a group G, then the direct sum of V and W is a representation, in a canonical way, via the equation
g · (v, w) = (g · v, g · w).
The direct sum of two representations carries no more information about the group G than the two representations do individually. If a representation is the direct sum of two proper nontrivial subrepresentations, it is said to be decomposable. Otherwise, it is said to be indecomposable.
Complete reducibility
In favorable circumstances, every finite-dimensional representation is a direct sum of irreducible representations: such representations are said to be semisimple. In this case, it suffices to understand only the irreducible representations. Examples where this "complete reducibility" phenomenon occurs (at least over fields of characteristic zero) include finite groups (see Maschke's theorem), compact groups, and semisimple Lie algebras.
In cases where complete reducibility does not hold, one must understand how indecomposable representations can be built from irreducible representations by using extensions of quotients by subrepresentations.
Tensor products of representations
Suppose φ1 and φ2 are representations of a group G, on vector spaces V1 and V2. Then we can form a representation of G acting on the tensor product vector space V1 ⊗ V2 as follows:
(φ1 ⊗ φ2)(g) = φ1(g) ⊗ φ2(g).
If φ1 and φ2 are representations of a Lie algebra, then the correct formula to use is
(φ1 ⊗ φ2)(X) = φ1(X) ⊗ I + I ⊗ φ2(X).
This product can be recognized as the coproduct on a coalgebra. In general, the tensor product of irreducible representations is not irreducible; the process of decomposing a tensor product as a direct sum of irreducible representations is known as Clebsch–Gordan theory.
In the case of the representation theory of the group SU(2) (or equivalently, of its complexified Lie algebra sl(2, C)), the decomposition is easy to work out. The irreducible representations are labeled by a parameter l that is a non-negative integer or half integer; the representation Vl then has dimension 2l + 1. Suppose we take the tensor product of two representations, with labels l1 and l2, where we assume l1 ≥ l2. Then the tensor product decomposes as a direct sum of one copy of each representation with label l, where l ranges from l1 − l2 to l1 + l2 in increments of 1. If, for example, l1 = l2 = 1, then the values of l that occur are 0, 1, and 2. Thus, the tensor product representation of dimension 3 × 3 = 9 decomposes as a direct sum of a 1-dimensional representation V0, a 3-dimensional representation V1 and a 5-dimensional representation V2.
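The dimension bookkeeping in this example is easy to verify mechanically. The sketch below lists the labels l appearing in the decomposition and checks that their dimensions add up to (2l1 + 1)(2l2 + 1); it only counts dimensions and does not construct the intertwining maps.

```python
# Sketch: labels and dimensions in the SU(2) Clebsch-Gordan decomposition.
from fractions import Fraction   # keeps half-integer labels exact

def clebsch_gordan_labels(j1, j2):
    """Labels l = |j1-j2|, |j1-j2|+1, ..., j1+j2 appearing in V_{j1} (x) V_{j2}."""
    j1, j2 = Fraction(j1), Fraction(j2)
    lo, hi = abs(j1 - j2), j1 + j2
    return [lo + k for k in range(int(hi - lo) + 1)]

j1, j2 = Fraction(1), Fraction(1)
labels = clebsch_gordan_labels(j1, j2)
dims = [int(2*l + 1) for l in labels]
assert sum(dims) == (2*j1 + 1) * (2*j2 + 1)   # 1 + 3 + 5 = 9 = 3 * 3
print(labels, dims)                            # [0, 1, 2] [1, 3, 5]
```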
Branches and topics
Representation theory is notable for the number of branches it has, and the diversity of the approaches to studying representations of groups and algebras. Although, all the theories have in common the basic concepts discussed already, they differ considerably in detail. The differences are at least 3-fold:
Representation theory depends upon the type of algebraic object being represented. There are several different classes of groups, associative algebras and Lie algebras, and their representation theories all have an individual flavour.
Representation theory depends upon the nature of the vector space on which the algebraic object is represented. The most important distinction is between finite-dimensional representations and infinite-dimensional ones. In the infinite-dimensional case, additional structures are important (for example, whether or not the space is a Hilbert space, Banach space, etc.). Additional algebraic structures can also be imposed in the finite-dimensional case.
Representation theory depends upon the type of field over which the vector space is defined. The most important cases are the field of complex numbers, the field of real numbers, finite fields, and fields of p-adic numbers. Additional difficulties arise for fields of positive characteristic and for fields that are not algebraically closed.
Finite groups
Group representations are a very important tool in the study of finite groups. They also arise in the applications of finite group theory to geometry and crystallography. Representations of finite groups exhibit many of the features of the general theory and point the way to other branches and topics in representation theory.
Over a field of characteristic zero, the representation of a finite group G has a number of convenient properties. First, the representations of G are semisimple (completely reducible). This is a consequence of Maschke's theorem, which states that any subrepresentation V of a G-representation W has a G-invariant complement. One proof is to choose any projection π from W to V and replace it by its average πG defined by
πG(x) = (1/|G|) Σ_{g ∈ G} g · π(g⁻¹ · x).
πG is equivariant, and its kernel is the required complement.
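A small numerical illustration of this averaging trick (the group, representation and starting projection are all example choices of mine): G = Z/3 acting on C^3 by cyclically permuting coordinates, V the line of constant vectors, and π an arbitrary non-equivariant projection onto V.

```python
# Sketch: averaging a projection over a finite group to make it equivariant.
import numpy as np

g = np.roll(np.eye(3), 1, axis=0)                 # cyclic permutation generating Z/3
group = [np.linalg.matrix_power(g, k) for k in range(3)]

pi = np.array([[1, 0, 0],                         # a projection onto V = span{(1,1,1)}
               [1, 0, 0],                         # (not equivariant as it stands)
               [1, 0, 0]], dtype=float)

pi_G = sum(h @ pi @ np.linalg.inv(h) for h in group) / len(group)

for h in group:                                   # pi_G now commutes with the action
    assert np.allclose(h @ pi_G, pi_G @ h)
print(pi_G)                                       # the matrix with all entries 1/3
```

The kernel of the averaged projection is the G-invariant plane of vectors whose coordinates sum to zero, which is exactly the required complement.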
The finite-dimensional G-representations can be understood using character theory: the character of a representation φ: G → GL(V) is the class function χφ: G → F defined by
χφ(g) = Tr(φ(g)),
where Tr is the trace. An irreducible representation of G is completely determined by its character.
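For instance (an illustrative computation, not from the text), the character of the 3-dimensional permutation representation of S3 is the number of fixed points of each permutation, since the trace of a permutation matrix counts its fixed points.

```python
# Sketch: character of the permutation representation of S3 as a trace.
import itertools
import numpy as np

def perm_matrix(p):
    """Permutation matrix sending e_j to e_{p(j)}."""
    n = len(p)
    m = np.zeros((n, n))
    m[list(p), list(range(n))] = 1
    return m

for p in itertools.permutations(range(3)):
    chi = np.trace(perm_matrix(p))        # equals the number of fixed points of p
    print(p, int(chi))
```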
Maschke's theorem holds more generally for fields of positive characteristic p, such as the finite fields, as long as the prime p is coprime to the order of G. When p and |G| have a common factor, there are G-representations that are not semisimple, which are studied in a subbranch called modular representation theory.
Averaging techniques also show that if F is the real or complex numbers, then any G-representation preserves an inner product ⟨·,·⟩ on V in the sense that
⟨g · v, g · w⟩ = ⟨v, w⟩
for all g in G and v, w in V. Hence any G-representation is unitary.
Unitary representations are automatically semisimple, since Maschke's result can be proven by taking the orthogonal complement of a subrepresentation. When studying representations of groups that are not finite, the unitary representations provide a good generalization of the real and complex representations of a finite group.
Results such as Maschke's theorem and the unitary property that rely on averaging can be generalized to more general groups by replacing the average with an integral, provided that a suitable notion of integral can be defined. This can be done for compact topological groups (including compact Lie groups), using Haar measure, and the resulting theory is known as abstract harmonic analysis.
Over arbitrary fields, another class of finite groups that have a good representation theory are the finite groups of Lie type. Important examples are linear algebraic groups over finite fields. The representation theory of linear algebraic groups and Lie groups extends these examples to infinite-dimensional groups, the latter being intimately related to Lie algebra representations. The importance of character theory for finite groups has an analogue in the theory of weights for representations of Lie groups and Lie algebras.
Representations of a finite group G are also linked directly to algebra representations via the group algebra F[G], which is a vector space over F with the elements of G as a basis, equipped with the multiplication operation defined by the group operation, linearity, and the requirement that the group operation and scalar multiplication commute.
Modular representations
Modular representations of a finite group G are representations over a field whose characteristic is not coprime to |G|, so that Maschke's theorem no longer holds (because |G| is not invertible in F and so one cannot divide by it). Nevertheless, Richard Brauer extended much of character theory to modular representations, and this theory played an important role in early progress towards the classification of finite simple groups, especially for simple groups whose characterization was not amenable to purely group-theoretic methods because their Sylow 2-subgroups were "too small".
As well as having applications to group theory, modular representations arise naturally in other branches of mathematics, such as algebraic geometry, coding theory, combinatorics and number theory.
Unitary representations
A unitary representation of a group G is a linear representation φ of G on a real or (usually) complex Hilbert space V such that φ(g) is a unitary operator for every g ∈ G. Such representations have been widely applied in quantum mechanics since the 1920s, thanks in particular to the influence of Hermann Weyl, and this has inspired the development of the theory, most notably through the analysis of representations of the Poincaré group by Eugene Wigner. One of the pioneers in constructing a general theory of unitary representations (for any group G rather than just for particular groups useful in applications) was George Mackey, and an extensive theory was developed by Harish-Chandra and others in the 1950s and 1960s.
A major goal is to describe the "unitary dual", the space of irreducible unitary representations of G. The theory is most well-developed in the case that G is a locally compact (Hausdorff) topological group and the representations are strongly continuous. For G abelian, the unitary dual is just the space of characters, while for G compact, the Peter–Weyl theorem shows that the irreducible unitary representations are finite-dimensional and the unitary dual is discrete. For example, if G is the circle group S1, then the characters are given by integers, and the unitary dual is Z.
For non-compact G, the question of which representations are unitary is a subtle one. Although irreducible unitary representations must be "admissible" (as Harish-Chandra modules) and it is easy to detect which admissible representations have a nondegenerate invariant sesquilinear form, it is hard to determine when this form is positive definite. An effective description of the unitary dual, even for relatively well-behaved groups such as real reductive Lie groups (discussed below), remains an important open problem in representation theory. It has been solved for many particular groups, such as SL(2,R) and the Lorentz group.
Harmonic analysis
The duality between the circle group S1 and the integers Z, or more generally, between a torus Tn and Zn is well known in analysis as the theory of Fourier series, and the Fourier transform similarly expresses the fact that the space of characters on a real vector space is the dual vector space. Thus unitary representation theory and harmonic analysis are intimately related, and abstract harmonic analysis exploits this relationship, by developing the analysis of functions on locally compact topological groups and related spaces.
A major goal is to provide a general form of the Fourier transform and the Plancherel theorem. This is done by constructing a measure on the unitary dual and an isomorphism between the regular representation of G on the space L2(G) of square integrable functions on G and its representation on the space of L2 functions on the unitary dual. Pontrjagin duality and the Peter–Weyl theorem achieve this for abelian and compact G respectively.
Another approach involves considering all unitary representations, not just the irreducible ones. These form a category, and Tannaka–Krein duality provides a way to recover a compact group from its category of unitary representations.
If the group is neither abelian nor compact, no general theory is known with an analogue of the Plancherel theorem or Fourier inversion, although Alexander Grothendieck extended Tannaka–Krein duality to a relationship between linear algebraic groups and tannakian categories.
Harmonic analysis has also been extended from the analysis of functions on a group G to functions on homogeneous spaces for G. The theory is particularly well developed for symmetric spaces and provides a theory of automorphic forms (discussed below).
Lie groups
A Lie group is a group that is also a smooth manifold. Many classical groups of matrices over the real or complex numbers are Lie groups. Many of the groups important in physics and chemistry are Lie groups, and their representation theory is crucial to the application of group theory in those fields.
The representation theory of Lie groups can be developed first by considering the compact groups, to which results of compact representation theory apply. This theory can be extended to finite-dimensional representations of semisimple Lie groups using Weyl's unitary trick: each semisimple real Lie group G has a complexification, which is a complex Lie group Gc, and this complex Lie group has a maximal compact subgroup K. The finite-dimensional representations of G closely correspond to those of K.
A general Lie group is a semidirect product of a solvable Lie group and a semisimple Lie group (the Levi decomposition). The classification of representations of solvable Lie groups is intractable in general, but often easy in practical cases. Representations of semidirect products can then be analysed by means of general results called Mackey theory, which is a generalization of the methods used in Wigner's classification of representations of the Poincaré group.
Lie algebras
A Lie algebra over a field F is a vector space over F equipped with a skew-symmetric bilinear operation called the Lie bracket, which satisfies the Jacobi identity. Lie algebras arise in particular as tangent spaces to Lie groups at the identity element, leading to their interpretation as "infinitesimal symmetries". An important approach to the representation theory of Lie groups is to study the corresponding representation theory of Lie algebras, but representations of Lie algebras also have an intrinsic interest.
Lie algebras, like Lie groups, have a Levi decomposition into semisimple and solvable parts, with the representation theory of solvable Lie algebras being intractable in general. In contrast, the finite-dimensional representations of semisimple Lie algebras are completely understood, after work of Élie Cartan. A representation of a semisimple Lie algebra 𝖌 is analysed by choosing a Cartan subalgebra, which is essentially a generic maximal subalgebra 𝖍 of 𝖌 on which the Lie bracket is zero ("abelian"). The representation of 𝖌 can be decomposed into weight spaces that are eigenspaces for the action of 𝖍 and the infinitesimal analogue of characters. The structure of semisimple Lie algebras then reduces the analysis of representations to easily understood combinatorics of the possible weights that can occur.
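A minimal worked example of this weight-space decomposition (an illustrative sketch, using the standard basis of the smallest semisimple Lie algebra) is \( \mathfrak{sl}_2(\mathbb{C}) \) with basis \( e, f, h \) satisfying
\[ [h,e] = 2e, \qquad [h,f] = -2f, \qquad [e,f] = h, \]
and Cartan subalgebra \( \mathfrak{h} = \mathbb{C}h \). The irreducible representation of dimension \( m+1 \) has a basis of weight vectors \( v_0, \dots, v_m \) with
\[ h \cdot v_k = (m - 2k)\, v_k , \]
so it decomposes into one-dimensional weight spaces with integer weights \( m, m-2, \dots, -m \), and this list of weights determines the representation completely.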
Infinite-dimensional Lie algebras
There are many classes of infinite-dimensional Lie algebras whose representations have been studied. Among these, an important class are the Kac–Moody algebras. They are named after Victor Kac and Robert Moody, who independently discovered them. These algebras form a generalization of finite-dimensional semisimple Lie algebras, and share many of their combinatorial properties. This means that they have a class of representations that can be understood in the same way as representations of semisimple Lie algebras.
Affine Lie algebras are a special case of Kac–Moody algebras, which have particular importance in mathematics and theoretical physics, especially conformal field theory and the theory of exactly solvable models. Kac discovered an elegant proof of certain combinatorial identities, Macdonald identities, which is based on the representation theory of affine Kac–Moody algebras.
Lie superalgebras
Lie superalgebras are generalizations of Lie algebras in which the underlying vector space has a Z2-grading, and skew-symmetry and Jacobi identity properties of the Lie bracket are modified by signs. Their representation theory is similar to the representation theory of Lie algebras.
Linear algebraic groups
Linear algebraic groups (or more generally, affine group schemes) are analogues in algebraic geometry of Lie groups, but over more general fields than just R or C. In particular, over finite fields, they give rise to finite groups of Lie type. Although linear algebraic groups have a classification that is very similar to that of Lie groups, their representation theory is rather different (and much less well understood) and requires different techniques, since the Zariski topology is relatively weak, and techniques from analysis are no longer available.
Invariant theory
Invariant theory studies actions on algebraic varieties from the point of view of their effect on functions, which form representations of the group. Classically, the theory dealt with the question of explicit description of polynomial functions that do not change, or are invariant, under the transformations from a given linear group. The modern approach analyses the decomposition of these representations into irreducibles.
Invariant theory of infinite groups is inextricably linked with the development of linear algebra, especially, the theories of quadratic forms and determinants. Another subject with strong mutual influence is projective geometry, where invariant theory can be used to organize the subject, and during the 1960s, new life was breathed into the subject by David Mumford in the form of his geometric invariant theory.
The representation theory of semisimple Lie groups has its roots in invariant theory and the strong links between representation theory and algebraic geometry have many parallels in differential geometry, beginning with Felix Klein's Erlangen program and Élie Cartan's connections, which place groups and symmetry at the heart of geometry. Modern developments link representation theory and invariant theory to areas as diverse as holonomy, differential operators and the theory of several complex variables.
Automorphic forms and number theory
Automorphic forms are a generalization of modular forms to more general analytic functions, perhaps of several complex variables, with similar transformation properties. The generalization involves replacing the modular group PSL2(R) and a chosen congruence subgroup by a semisimple Lie group G and a discrete subgroup Γ. Just as modular forms can be viewed as differential forms on a quotient of the upper half space H = PSL2(R)/SO(2), automorphic forms can be viewed as differential forms (or similar objects) on Γ\G/K, where K is (typically) a maximal compact subgroup of G. Some care is required, however, as the quotient typically has singularities. The quotient of a semisimple Lie group by a compact subgroup is a symmetric space and so the theory of automorphic forms is intimately related to harmonic analysis on symmetric spaces.
Before the development of the general theory, many important special cases were worked out in detail, including the Hilbert modular forms and Siegel modular forms. Important results in the theory include the Selberg trace formula and the realization by Robert Langlands that the Riemann–Roch theorem could be applied to calculate the dimension of the space of automorphic forms. The subsequent notion of "automorphic representation" has proved of great technical value for dealing with the case that G is an algebraic group, treated as an adelic algebraic group. As a result, an entire philosophy, the Langlands program has developed around the relation between representation and number theoretic properties of automorphic forms.
Associative algebras
In one sense, associative algebra representations generalize both representations of groups and Lie algebras. A representation of a group induces a representation of a corresponding group ring or group algebra, while representations of a Lie algebra correspond bijectively to representations of its universal enveloping algebra. However, the representation theory of general associative algebras does not have all of the nice properties of the representation theory of groups and Lie algebras.
Module theory
When considering representations of an associative algebra, one can forget the underlying field, and simply regard the associative algebra as a ring, and its representations as modules. This approach is surprisingly fruitful: many results in representation theory can be interpreted as special cases of results about modules over a ring.
Hopf algebras and quantum groups
Hopf algebras provide a way to improve the representation theory of associative algebras, while retaining the representation theory of groups and Lie algebras as special cases. In particular, the tensor product of two representations is a representation, as is the dual vector space.
The Hopf algebras associated to groups have a commutative algebra structure, and so general Hopf algebras are known as quantum groups, although this term is often restricted to certain Hopf algebras arising as deformations of groups or their universal enveloping algebras. The representation theory of quantum groups has added surprising insights to the representation theory of Lie groups and Lie algebras, for instance through the crystal basis of Kashiwara.
History
Generalizations
Set-theoretic representations
A set-theoretic representation (also known as a group action or permutation representation) of a group G on a set X is given by a function ρ from G to X^X, the set of functions from X to X, such that for all g1, g2 in G and all x in X:
\[ \rho(g_1 g_2)[x] = \rho(g_1)[\rho(g_2)[x]] . \]
This condition and the axioms for a group imply that ρ(g) is a bijection (or permutation) for all g in G. Thus we may equivalently define a permutation representation to be a group homomorphism from G to the symmetric group SX of X.
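For instance (an illustrative Python sketch, with the small example of Z/3 acting on a three-element set chosen purely for concreteness), the defining condition can be checked directly:

# Minimal sketch: the cyclic group Z/3 acting on X = {0, 1, 2} by rotation.
# rho(g) is the map x -> (x + g) mod 3, which is a bijection of X for every g.
X = [0, 1, 2]

def rho(g):
    """Return the permutation of X induced by the group element g of Z/3."""
    return lambda x: (x + g) % 3

# Check the defining condition rho(g1 g2)[x] = rho(g1)[rho(g2)[x]],
# where the group operation of Z/3 is addition modulo 3.
for g1 in range(3):
    for g2 in range(3):
        for x in X:
            assert rho((g1 + g2) % 3)(x) == rho(g1)(rho(g2)(x))
print("rho is a permutation representation of Z/3 on X")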
Representations in other categories
Every group G can be viewed as a category with a single object; morphisms in this category are just the elements of G. Given an arbitrary category C, a representation of G in C is a functor from G to C. Such a functor selects an object X in C and a group homomorphism from G to Aut(X), the automorphism group of X.
In the case where C is VectF, the category of vector spaces over a field F, this definition is equivalent to a linear representation. Likewise, a set-theoretic representation is just a representation of G in the category of sets.
For another example consider the category of topological spaces, Top. Representations in Top are homomorphisms from G to the homeomorphism group of a topological space X.
Three types of representations closely related to linear representations are:
projective representations: in the category of projective spaces. These can be described as "linear representations up to scalar transformations".
affine representations: in the category of affine spaces. For example, the Euclidean group acts affinely upon Euclidean space.
corepresentations of unitary and antiunitary groups: in the category of complex vector spaces with morphisms being linear or antilinear transformations.
Representations of categories
Since groups are categories, one can also consider representation of other categories. The simplest generalization is to monoids, which are categories with one object. Groups are monoids for which every morphism is invertible. General monoids have representations in any category. In the category of sets, these are monoid actions, but monoid representations on vector spaces and other objects can be studied.
More generally, one can relax the assumption that the category being represented has only one object. In full generality, this is simply the theory of functors between categories, and little can be said.
One special case has had a significant impact on representation theory, namely the representation theory of quivers. A quiver is simply a directed graph (with loops and multiple arrows allowed), but it can be made into a category (and also an algebra) by considering paths in the graph. Representations of such categories/algebras have illuminated several aspects of representation theory, for instance by allowing non-semisimple representation theory questions about a group to be reduced in some cases to semisimple representation theory questions about a quiver.
See also
Galois representation
Glossary of representation theory
Group representation
Itô's theorem
List of representation theory topics
List of harmonic analysis topics
Numerical analysis
Philosophy of cusp forms
Representation (mathematics)
Representation theorem
Universal algebra
Notes
References
External links
Alexander Kirillov Jr., An introduction to Lie groups and Lie algebras (2008). Textbook, preliminary version pdf downloadable from author's home page.
Kevin Hartnett, (2020), article on representation theory in Quanta magazine | Representation theory | [
"Mathematics"
] | 6,739 | [
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Algebraic structures",
"Representation theory"
] |
19,378,412 | https://en.wikipedia.org/wiki/Rotodynamic%20pump | A rotodynamic pump is a kinetic machine in which energy is continuously imparted to the pumped fluid by means of a rotating impeller, propeller, or rotor, in contrast to a positive-displacement pump in which a fluid is moved by trapping a fixed amount of fluid and forcing the trapped volume into the pump's discharge. Examples of rotodynamic pumps include centrifugal pumps, which add kinetic energy to the fluid to increase its velocity and pressure.
Introduction
A pump is a mechanical device generally used for raising liquid from a lower level to a higher one. This is achieved by creating low pressure at the inlet and high pressure at the outlet of the pump. Due to the low inlet pressure, the liquid rises from the place where it is stored or supplied. However, work has to be done by a prime mover, which imparts mechanical energy to the liquid; this energy is ultimately converted into pressure energy.
Considering the basic principle of operation, pumps can be classified into two categories:
Positive-displacement pumps.
Non-positive-displacement pumps.
Classification of pumps
Pumps are classified as follows:
Positive-displacement pumps
A positive-displacement pump operates by forcing a fixed volume of fluid from inlet pressure section of the pump into the discharge zone of the pump. It can be classified into two types:
Rotary-type positive-displacement pumps:
Internal gear pumps
Screw pumps
Reciprocating-type positive-displacement pumps:
Piston pumps
Diaphragm pumps
Rotary-type positive-displacement pumps
Positive-displacement rotary pumps move the fluid by using a rotating mechanism that creates a vacuum which captures and draws in the liquid. Rotary positive-displacement pumps can be classified into two main types:
Gear pumps
Rotary vane pumps
Reciprocating positive-displacement pump
Reciprocating pumps move the fluid using one or more oscillating pistons, plungers or membranes, while valves limit fluid motion to the desired direction.
Pumps in this category are simple, with one or more cylinders. They can be either single-acting, with suction during one direction of the piston motion and discharge on the other, or double-acting, with suction and discharge in both directions.
Non-positive-displacement pumps
With this pump type, the volume of liquid delivered per cycle depends on the resistance offered to the flow. The pump produces a force on the liquid that is constant for each particular speed of the pump. Resistance in the discharge line produces a force in the opposite direction. When these forces are equal, the liquid is in a state of equilibrium and does not flow. If the outlet of a non-positive-displacement pump is completely closed, the discharge pressure rises to the maximum attainable by the pump operating at maximum speed.
Centrifugal pumps
Centrifugal pumps employ centrifugal force to lift liquids from a lower level to a higher level by developing pressure. The simplest type of pump comprises an impeller fitted onto a shaft, rotating in a volute casing. Liquid is led into the centre of the impeller (known as the 'eye' of the impeller), is picked up by the vanes of the impeller and accelerated to a high velocity, and is discharged by centrifugal force into the casing and then out the discharge pipe. As liquid is forced away from the centre, a partial vacuum is created and more liquid is drawn in; the liquid receives energy from the vanes and gains both pressure energy and kinetic energy. Since a large amount of kinetic energy is not desirable at the impeller outlet, an arrangement is made in the design to convert the kinetic energy of the liquid to pressure energy before the liquid enters the discharge pipe.
Types of rotodynamic pumps
Rotodynamic pumps can be classified by various factors such as design, construction, applications, service etc.
By number of stages:
Single-stage pumps:
Also known as single impeller pumps
Simple and low-maintenance
Ideal for large flow rates and low-pressure installations
Two-stage pumps:
Two impellers in series
For medium-use applications
Multistage pumps:
Three or more impellers in series
For high-head applications
By type of case split:
Axial split:
In these types of pumps the volute casing is split axially and the split line at which the pump casing separates is at the shaft's centerline.
They are typically mounted horizontally due to ease in installation and maintenance.
Radial split:
The pump case is split radially, the volute casing split is perpendicular to shaft centre line.
By impeller design
Single-suction pumps:
It has single suction impeller which allows fluid to enter blades only through a single opening.
It has a simple design but the impeller has higher axial thrust imbalance due to flow coming through one side of impeller.
Double-suction pumps:
Double-suction impeller allows fluid to enter from both the sides of blades.
These are the most common types of pumps.
By number of volutes:
Single-volute pumps:
Usually used for low capacity pumps due to small volute size
Casting small volutes is difficult but results in good quality
Have higher radial loads
Double volute pumps:
Have two volutes placed 180 degrees apart
Good at balancing radial loads
The most commonly used design
By shaft orientation:
Horizontal centrifugal pumps:
Readily available
Easy to install, inspect, maintain and service
Suitable for low pressure
Vertical centrifugal pumps:
Require large headroom for installation, servicing and maintenance
Withstand higher pressure loads
More expensive than horizontal pumps
Working of a rotodynamic pump
The centrifugal pump is the most commonly used pumping device in hydraulics. Water enters from the tank at the centre of the impeller and exits at the top of the pump. The impeller, often called the heart of the pump, comes in three types: (1) open, (2) semi-open, and (3) enclosed, of which the enclosed impeller gives the best efficiency. An enclosed impeller has a series of backward-curved vanes fitted between two plates and must always remain submerged in the liquid. When the impeller starts to rotate, the fluid in which it is immersed also rotates, and a centrifugal force is induced in the fluid particles. Due to this centrifugal force, both the pressure energy and the kinetic energy of the fluid increase. At the same time, the pressure at the inlet nozzle (the suction side) falls below atmospheric pressure, and this low pressure helps to draw the fluid up from the storage tank. If the inlet nozzle is empty or filled with air, however, the difference between the suction pressure and atmospheric pressure is too small to lift the fluid, and the impeller can be damaged. The impeller is fitted inside a casing, so the fluid has to pass through the casing, which is designed to give maximum pressure at the exit: the casing cross-section is largest at the discharge nozzle and decreases gradually towards the interior. Because the flow area is largest at the discharge nozzle, the velocity there decreases and, since velocity and pressure are inversely related, the pressure increases. This increase in pressure is required to overcome the resistance of the pumping system.
If the pressure at the inlet nozzle (at the suction) falls below the vapour pressure of the fluid, vapour bubbles are created inside the casing. This situation is very dangerous for the pump because the fluid starts to boil and form bubbles; these bubbles strike the impeller and damage its material. This phenomenon is known as cavitation. To raise the pressure at the inlet nozzle (suction), the suction head has to be reduced.
The three types of impeller have different uses. If the fluid carries more clogging material, a semi-open or an open impeller is used, but the efficiency decreases correspondingly and the mechanical design of the pump becomes more difficult. The shaft connects the impeller to the motor and transfers the rotary motion to the impeller. Since the fluid pressure inside the casing is very high, a proper sealing arrangement is required.
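The conversion of velocity into pressure in the widening volute can be illustrated with Bernoulli's principle; in the Python sketch below the water density and the two velocities are assumed example values chosen only for illustration:

# Idealised, loss-free estimate of the static pressure rise in a pump volute
# via Bernoulli's principle. All numbers are assumed example values.
rho = 1000.0   # density of water, kg/m^3 (assumed)
v_in = 15.0    # fluid velocity leaving the impeller, m/s (assumed)
v_out = 5.0    # fluid velocity at the discharge nozzle, m/s (assumed)

# Bernoulli, neglecting elevation change and losses:
# p_out - p_in = 0.5 * rho * (v_in**2 - v_out**2)
delta_p = 0.5 * rho * (v_in**2 - v_out**2)
print(f"Static pressure rise in the volute: {delta_p / 1000:.1f} kPa")
# As the flow area widens towards the discharge nozzle, the velocity drops
# and the static pressure rises, which is the conversion described above.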
Applications
Main industries where rotodynamic pumps are used include:
General services: Cooling water, service water, firefighting, drainage
Agriculture: Irrigation, borehole, land drainage
Chemical/Petrochemical: Transfer
Construction/building services: Pressure boosting, drainage, hot water circulation, air conditioning, boiler feed
Dairy/Brewery: Transfer, ‘wort’, ‘wash’ to fermentation
Domestic: Hot water
Metal manufacture: Mill scale, furnace gas scrubbing, descaling
Mining/quarrying: Coal washing, ore washing, solids transport, dewatering, water jetting
Oil/gas production: Main oil line, tanker loading, water injection, seawater lift
Oil/gas refining: Hydrocarbon transfer, crude oil supply, tanker loading, product pipeline, reactor charge
Paper/pulp: Medium/low consistency stock, wood chips, liquors/condensate, stock to head box
Power generation: Large cooling water, ash handling, flue gas desulphurisation process, condensate extraction, boiler feed
Sugar manufacture: Milk of lime/syrup, beet tailings, juices, whole beets
Wastewater: Raw and settled sewage, grit laden flows, stormwater
Water supply: Raw water extraction, supply distribution, boosting
See also
Centrifugal pumps
Impeller
Roots blower
Shaft
Volute
Suction
Bernoulli's principle
References
External links
http://www.pumps.org/Pump_Fundamentals/Rotodynamic.aspx
http://shodhganga.inflibnet.ac.in/bitstream/10603/40703/8/08_chapter3.pdf
http://nptel.ac.in/courses/Webcourse-contents/IIT-KANPUR/machine/ui/Course_home-lec33.htm
https://www.introtopumps.com/pump-terms/rotodynamic/
https://link.springer.com/chapter/10.1007/978-1-4613-1217-8_1
https://www.brighthubengineering.com/fluid-mechanics-hydraulics/29394-the-basic-concept-construction-and-working-principle-of-hydraulic-pumps/
http://www.roymech.co.uk/Related/Pumps/Centrifugal%20Pumps.html
https://powerequipment.honda.com/pumps/pump-theory-1
https://www.castlepumps.com/info-hub/positive-displacement-vs-centrifugal-pumps
https://www.flowcontrolnetwork.com/piping-requirements-rotodynamic-pumps/
http://indjst.org/index.php/indjst/article/view/100938/73724
https://souzimport.ru/upload/files/auslegung-en-data.pdf
Pumps | Rotodynamic pump | [
"Physics",
"Chemistry"
] | 2,356 | [
"Physical systems",
"Hydraulics",
"Turbomachinery",
"Pumps"
] |
19,379,241 | https://en.wikipedia.org/wiki/Volumetric%20pipette | A volumetric pipette, bulb pipette, or belly pipette allows extremely accurate measurement (to four significant figures) of the volume of a solution. It is calibrated to deliver accurately a fixed volume of liquid.
These pipettes have a large bulb with a long narrow portion above with a single graduation mark as it is calibrated for a single volume (like a volumetric flask). Typical volumes are 1, 2, 5, 10, 20, 25, 50 and 100 mL. Volumetric pipettes are commonly used in analytical chemistry to make laboratory solutions from a base stock as well as to prepare solutions for titration.
ASTM standard E969 defines the standard tolerance for volumetric transfer pipettes. The tolerance depends on the size: a 0.5-mL pipette has a tolerance of ±0.006 mL, while a 50-mL pipette has a tolerance of ±0.05 mL. (These are for Class A pipettes; Class B pipettes are given a tolerance of twice that for the corresponding Class A.)
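A small illustrative sketch of the tolerance rule above, using only the two Class A values quoted in the text and doubling them for Class B:

# ASTM E969 tolerance rule sketch: Class B tolerance is twice Class A.
# Only the two Class A values quoted above are used here.
class_a_tolerance_ml = {0.5: 0.006, 50: 0.05}   # nominal volume (mL) -> +/- tolerance (mL)

def class_b_tolerance(volume_ml):
    """Return the Class B tolerance, defined as twice the Class A value."""
    return 2 * class_a_tolerance_ml[volume_ml]

for v in (0.5, 50):
    print(f"{v} mL pipette: Class A +/-{class_a_tolerance_ml[v]} mL, "
          f"Class B +/-{class_b_tolerance(v)} mL")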
A specialized example of a volumetric pipette is the microfluid pipette (capable of dispensing as little as 10 μL) designed with a circulating liquid tip that generates a self-confining volume in front of its outlet channels.
History
Pyrex started to make laboratory equipment in 1916 and became a favorite brand for the scientific community due to the natural properties of borosilicate glass, which include resistance to chemical attack, thermal shock, and mechanical stress.
References
External links
Helpful Hints on the Use of a Volumetric Pipet by Oliver Seely
Laboratory glassware
Laboratory equipment
Analytical chemistry
Volumetric instruments | Volumetric pipette | [
"Chemistry",
"Technology",
"Engineering"
] | 348 | [
"Measuring instruments",
"Volumetric instruments",
"nan",
"Analytical chemistry stubs"
] |
19,385,079 | https://en.wikipedia.org/wiki/Universal%20variable%20formulation | In orbital mechanics, the universal variable formulation is a method used to solve the two-body Kepler problem. It is a generalized form of Kepler's Equation, extending it to apply not only to elliptic orbits, but also parabolic and hyperbolic orbits common for spacecraft departing from a planetary orbit. It is also applicable to ejection of small bodies in the Solar System from the vicinity of massive planets, during which processes the approximating two-body orbits can have widely varying eccentricities, almost always \( e \geq 1 \).
Introduction
A common problem in orbital mechanics is the following: given a body in an orbit and a fixed original time \( t_0 \), find the position of the body at some later time \( t \). For elliptical orbits with a reasonably small eccentricity, solving Kepler's Equation by methods like Newton's method gives excellent results. However, as the orbit approaches an escape trajectory, it becomes more and more eccentric, and convergence of numerical iteration may become unusably sluggish, or fail to converge at all for \( e \geq 1 \).
Note that the conventional form of Kepler's equation cannot be applied to parabolic and hyperbolic orbits without special adaptions, to accommodate imaginary numbers, since its ordinary form is specifically tailored to sines and cosines; escape trajectories instead use \( \sinh \) and \( \cosh \) (hyperbolic functions).
Derivation
Although equations similar to Kepler's equation can be derived for parabolic and hyperbolic orbits, it is more convenient to introduce a new independent variable \( s \) to take the place of the eccentric anomaly, and to have a single equation that can be solved regardless of the eccentricity of the orbit. The new variable \( s \) is defined by the following differential equation:
\[ \frac{\mathrm{d}s}{\mathrm{d}t} = \frac{1}{r}, \]
where \( r = r(t) \) is the time-dependent scalar distance to the center of attraction.
(In all of the following formulas, carefully note the distinction between scalars in italics, and vectors in upright bold.)
We can regularize the fundamental equation
\[ \frac{\mathrm{d}^2\mathbf{r}}{\mathrm{d}t^2} + \mu\,\frac{\mathbf{r}}{r^3} = \mathbf{0}, \]
where \( \mu = G(m_1 + m_2) \) is the system gravitational scaling constant, by applying the change of variable from time \( t \) to \( s \), which yields
\[ \frac{\mathrm{d}^2\mathbf{r}}{\mathrm{d}s^2} + \alpha\,\mathbf{r} = -\mathbf{P}, \]
where \( \mathbf{P} \) is some to-be-determined constant vector and \( \alpha \) is the orbital energy parameter, defined by
\[ \alpha = \frac{2\mu}{r} - v^2 , \]
with \( v = |\dot{\mathbf{r}}| \) the speed; \( \alpha \) is constant along the orbit and equals \( -2\varepsilon \), twice the negative of the specific orbital energy \( \varepsilon \).
The equation is the same as the equation for the harmonic oscillator, a well-known equation in both physics and mathematics; however, the unknown constant vector is somewhat inconvenient. Taking the derivative again, we eliminate the constant vector at the price of getting a third-degree differential equation:
\[ \frac{\mathrm{d}^3\mathbf{r}}{\mathrm{d}s^3} + \alpha\,\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}s} = \mathbf{0} . \]
The family of solutions to this differential equation are for convenience written symbolically in terms of the three functions \( s\,c_1(\alpha s^2) \), \( s^2 c_2(\alpha s^2) \), and \( s^3 c_3(\alpha s^2) \), where the functions \( c_k(x) \), called Stumpff functions, are truncated generalizations of sine and cosine series. The change-of-variable equation \( \mathrm{d}t = r\,\mathrm{d}s \) gives the scalar integral equation
\[ t - t_0 = \int_0^{s} r\,\mathrm{d}s . \]
After extensive algebra and back-substitutions, its solution results in
\[ t - t_0 = r_0\, s\, c_1(\alpha s^2) + r_0 \dot{r}_0\, s^2 c_2(\alpha s^2) + \mu\, s^3 c_3(\alpha s^2), \]
where \( r_0 \) and \( \dot{r}_0 = \mathbf{r}_0 \cdot \mathbf{v}_0 / r_0 \) are the radial distance and radial velocity at the initial time \( t_0 \). This is the universal variable formulation of Kepler's equation.
There is no closed analytic solution, but this universal variable form of Kepler's equation can be solved numerically for \( s \), using a root-finding algorithm such as Newton's method or Laguerre's method, for a given time \( t \). The value of \( s \) so obtained is then used in turn to compute the \( c_2 \) and \( c_3 \) Stumpff functions and the \( f \) and \( g \) functions needed to find the current position and velocity:
\[ f = 1 - \frac{\mu}{r_0}\, s^2 c_2(\alpha s^2), \qquad g = t - t_0 - \mu\, s^3 c_3(\alpha s^2) . \]
The values of the \( f \) and \( g \) functions determine the position of the body at the time \( t \):
\[ \mathbf{r} = f\,\mathbf{r}_0 + g\,\mathbf{v}_0 . \]
In addition the velocity of the body at time \( t \) can be found using \( \dot{f} \) and \( \dot{g} \) as follows:
\[ \dot{f} = -\frac{\mu}{r\, r_0}\, s\, c_1(\alpha s^2), \qquad \dot{g} = 1 - \frac{\mu}{r}\, s^2 c_2(\alpha s^2), \qquad \mathbf{v} = \dot{f}\,\mathbf{r}_0 + \dot{g}\,\mathbf{v}_0 , \]
where \( \mathbf{r} \) and \( \mathbf{v} \) are respectively the position and velocity vectors at time \( t \), and \( \mathbf{r}_0 \) and \( \mathbf{v}_0 \) are the position and velocity at the arbitrary initial time \( t_0 \).
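The procedure just described can be sketched in Python as follows; the series cut-off in the Stumpff functions, the Newton starting guess, the convergence tolerance, and the example orbit are illustrative assumptions rather than prescribed choices:

import math

# Sketch of the numerical procedure described above (illustrative, not a
# reference implementation). Units: km and s; mu is the gravitational parameter.

def stumpff_c2_c3(x):
    """Stumpff functions c2(x) and c3(x) for elliptic (x > 0), near-parabolic
    (x ~ 0) and hyperbolic (x < 0) cases."""
    if x > 1e-6:
        sx = math.sqrt(x)
        return (1.0 - math.cos(sx)) / x, (sx - math.sin(sx)) / sx**3
    if x < -1e-6:
        sx = math.sqrt(-x)
        return (math.cosh(sx) - 1.0) / (-x), (math.sinh(sx) - sx) / sx**3
    # series expansions near x = 0 (parabolic limit)
    return 0.5 - x / 24.0, 1.0 / 6.0 - x / 120.0

def propagate(r0, v0, dt, mu, tol=1e-10, max_iter=50):
    """Propagate position r0 and velocity v0 by dt using the universal variable s."""
    r0n = math.sqrt(sum(c * c for c in r0))
    v0sq = sum(c * c for c in v0)
    rdot0 = sum(a * b for a, b in zip(r0, v0)) / r0n    # radial velocity at t0
    alpha = 2.0 * mu / r0n - v0sq                       # = mu / a

    s = dt / r0n                                        # crude starting guess
    for _ in range(max_iter):
        x = alpha * s * s
        c2, c3 = stumpff_c2_c3(x)
        c1, c0 = 1.0 - x * c3, 1.0 - x * c2
        # universal Kepler equation F(s) = 0; its derivative dF/ds equals r(s)
        F = r0n * s * c1 + r0n * rdot0 * s**2 * c2 + mu * s**3 * c3 - dt
        r = r0n * c0 + r0n * rdot0 * s * c1 + mu * s**2 * c2
        ds = F / r
        s -= ds
        if abs(ds) < tol:
            break

    x = alpha * s * s
    c2, c3 = stumpff_c2_c3(x)
    c1, c0 = 1.0 - x * c3, 1.0 - x * c2
    r = r0n * c0 + r0n * rdot0 * s * c1 + mu * s**2 * c2
    f = 1.0 - (mu / r0n) * s**2 * c2
    g = dt - mu * s**3 * c3
    fdot = -(mu / (r * r0n)) * s * c1
    gdot = 1.0 - (mu / r) * s**2 * c2
    pos = [f * a + g * b for a, b in zip(r0, v0)]
    vel = [fdot * a + gdot * b for a, b in zip(r0, v0)]
    return pos, vel

# Example: a circular low Earth orbit propagated over a quarter period
# (assumed illustrative values); the result is the starting point rotated by 90 degrees.
mu_earth = 398600.4418                                   # km^3/s^2
r0 = [7000.0, 0.0, 0.0]                                  # km
v0 = [0.0, math.sqrt(mu_earth / 7000.0), 0.0]            # km/s (circular speed)
period = 2.0 * math.pi * math.sqrt(7000.0**3 / mu_earth)
print(propagate(r0, v0, period / 4.0, mu_earth))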
References
Orbits
Equations of astronomy | Universal variable formulation | [
"Physics",
"Astronomy"
] | 686 | [
"Concepts in astronomy",
"Equations of astronomy"
] |
7,984,007 | https://en.wikipedia.org/wiki/Traced%20monoidal%20category | In category theory, a traced monoidal category is a category with some extra structure which gives a reasonable notion of feedback.
A traced symmetric monoidal category is a symmetric monoidal category C together with a family of functions
\[ \mathrm{Tr}^U_{X,Y} : \mathbf{C}(X \otimes U, Y \otimes U) \to \mathbf{C}(X, Y) \]
called a trace, satisfying the following conditions:
naturality in \( X \): for every \( f : X \otimes U \to Y \otimes U \) and \( g : X' \to X \),
\[ \mathrm{Tr}^U_{X',Y}\big(f \circ (g \otimes 1_U)\big) = \mathrm{Tr}^U_{X,Y}(f) \circ g , \]
naturality in \( Y \): for every \( f : X \otimes U \to Y \otimes U \) and \( g : Y \to Y' \),
\[ \mathrm{Tr}^U_{X,Y'}\big((g \otimes 1_U) \circ f\big) = g \circ \mathrm{Tr}^U_{X,Y}(f) , \]
dinaturality in \( U \): for every \( f : X \otimes U \to Y \otimes U' \) and \( g : U' \to U \),
\[ \mathrm{Tr}^U_{X,Y}\big((1_Y \otimes g) \circ f\big) = \mathrm{Tr}^{U'}_{X,Y}\big(f \circ (1_X \otimes g)\big) , \]
vanishing I: for every \( f : X \otimes I \to Y \otimes I \),
\[ \mathrm{Tr}^I_{X,Y}(f) = \rho_Y \circ f \circ \rho_X^{-1} \]
(with \( \rho \) being the right unitor),
vanishing II: for every \( f : X \otimes U \otimes V \to Y \otimes U \otimes V \),
\[ \mathrm{Tr}^{U \otimes V}_{X,Y}(f) = \mathrm{Tr}^U_{X,Y}\big(\mathrm{Tr}^V_{X \otimes U, Y \otimes U}(f)\big) , \]
superposing: for every \( f : X \otimes U \to Y \otimes U \) and \( g : W \to Z \),
\[ \mathrm{Tr}^U_{W \otimes X, Z \otimes Y}(g \otimes f) = g \otimes \mathrm{Tr}^U_{X,Y}(f) , \]
yanking:
\[ \mathrm{Tr}^U_{U,U}(\gamma_{U,U}) = 1_U \]
(where \( \gamma_{U,U} \) is the symmetry of the monoidal category).
Properties
Every compact closed category admits a trace.
Given a traced monoidal category C, the Int construction generates the free (in some bicategorical sense) compact closure Int(C) of C.
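As an illustration of the first property listed above: in the compact closed category of finite-dimensional vector spaces a trace is given by the partial trace over U. A minimal NumPy sketch with small assumed dimensions, which also checks naturality in X numerically:

import numpy as np

# In finite-dimensional vector spaces (a compact closed category) the trace of
# a morphism f : X (x) U -> Y (x) U is the partial trace over the U factor:
# Tr_U(f)[y, x] = sum over u of f[(y, u), (x, u)].
dim_X, dim_Y, dim_U = 2, 3, 4          # assumed example dimensions

def trace_U(f, dim_X, dim_Y, dim_U):
    """Partial trace over U of a linear map X (x) U -> Y (x) U."""
    f4 = f.reshape(dim_Y, dim_U, dim_X, dim_U)   # indices (y, u, x, u')
    return np.einsum('yuxu->yx', f4)             # set u' = u and sum

rng = np.random.default_rng(0)
f = rng.standard_normal((dim_Y * dim_U, dim_X * dim_U))
g = rng.standard_normal((dim_X, dim_X))          # a morphism X -> X

# Naturality in X:  Tr_U(f o (g (x) 1_U)) == Tr_U(f) o g
lhs = trace_U(f @ np.kron(g, np.eye(dim_U)), dim_X, dim_Y, dim_U)
rhs = trace_U(f, dim_X, dim_Y, dim_U) @ g
print(np.allclose(lhs, rhs))                     # True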
References
Monoidal categories | Traced monoidal category | [
"Mathematics"
] | 165 | [
"Monoidal categories",
"Mathematical structures",
"Category theory",
"Category theory stubs"
] |
7,984,781 | https://en.wikipedia.org/wiki/Three-phase%20traffic%20theory | Three-phase traffic theory is a theory of traffic flow developed by Boris Kerner between 1996 and 2002. It focuses mainly on the explanation of the physics of traffic breakdown and resulting congested traffic on highways. Kerner describes three phases of traffic, while the classical theories based on the fundamental diagram of traffic flow have two phases: free flow and congested traffic. Kerner’s theory divides congested traffic into two distinct phases, synchronized flow and wide moving jam, bringing the total number of phases to three:
Free flow (F)
Synchronized flow (S)
Wide moving jam (J)
The word "wide" is used even though it is the length of the traffic jam that is being referred to.
A phase is defined as a state in space and time.
Free flow (F)
In free traffic flow, empirical data show a positive correlation between the flow rate (in vehicles per unit time) and vehicle density (in vehicles per unit distance). This relationship stops at the maximum free flow rate, which is reached at a corresponding critical density. (See Figure 1.)
Congested traffic
Data show a weaker relationship between flow and density in congested conditions. Therefore, Kerner argues that the fundamental diagram, as used in classical traffic theory, cannot adequately describe the complex dynamics of vehicular traffic. He instead divides congestion into synchronized flow and wide moving jams.
In congested traffic, the vehicle speed is lower than the lowest vehicle speed encountered in free flow, i.e., the line with the slope of the minimal speed in free flow (dotted line in Figure 2) divides the empirical data on the flow-density plane into two regions: on the left side data points of free flow and on the right side data points corresponding to congested traffic.
Definitions [J] and [S] of the phases J and S in congested traffic
In Kerner's theory, the phases J and S in congested traffic are observed outcomes in universal spatial-temporal features of real traffic data. The phases J and S are defined through the definitions [J] and [S] as follows:
The "wide moving jam" phase [J]
A so-called "wide moving jam" moves upstream through any highway bottlenecks. While doing so, the mean velocity of the downstream front is maintained. This is the characteristic feature of the wide moving jam that defines the phase J.
The term wide moving jam is meant to reflect the characteristic feature of the jam to propagate through any other state of traffic flow and through any bottleneck while maintaining the velocity of the downstream jam front. The phrase moving jam reflects the jam propagation as a whole localized structure on a road. To distinguish wide moving jams from other moving jams, which do not characteristically maintain the mean velocity of the downstream jam front, Kerner used the term wide. The term wide reflects the fact that if a moving jam has a width (in the longitudinal road direction) considerably greater than the widths of the jam fronts, and if the vehicle speed inside the jam is zero, the jam always exhibits the characteristic feature of maintaining the velocity of the downstream jam front (see Sec. 7.6.5 of the book).
Thus the term wide has nothing to do with the width across the jam, but actually refers to its length being considerably more than the transition zones at its head and tail. Historically, Kerner used the term wide from a qualitative analogy of a wide moving jam in traffic flow with wide autosolitons occurring in many systems of natural science (like gas plasma, electron-hole plasma in semiconductors, biological systems, and chemical reactions): Both the wide moving jam and a wide autosoliton exhibit some characteristic features, which do not depend on initial conditions at which these localized patterns have occurred.
The "synchronized flow" phase [S]
In "synchronized flow," the downstream front, where the vehicles accelerate to free flow, does not show this characteristic feature of the wide moving jam. Specifically, the downstream front of the synchronized flow is often fixed at a bottleneck.
The term "synchronized flow" is meant to reflect the following features of this traffic phase: (i) It is a continuous traffic flow with no significant stoppage, as often occurs inside a wide moving jam. The term "flow" reflects this feature. (ii) There is a tendency towards synchronization of vehicle speeds across different lanes on a multilane road in this flow. In addition, there is a tendency towards synchronization of vehicle speeds in each of the road lanes (bunching of vehicles) in synchronized flow. This is due to a relatively low probability of passing. The term "synchronized" reflects this speed synchronization effect.
Explanation of the traffic phase definitions based on measured traffic data
Measured data of averaged vehicle speeds (Figure 3 (a)) illustrate the phase definitions [J] and [S]. There are two spatial-temporal patterns of congested traffic with low vehicle speeds in Figure 3 (a). One pattern propagates upstream with an almost constant velocity of the downstream front, moving straight through the freeway bottleneck. According to the definition [J], this pattern of congestion belongs to the "wide moving jam" phase. In contrast, the downstream front of the other pattern is fixed at a bottleneck. According to the definition [S], this pattern belongs to the "synchronized flow" phase (Figure 3 (a) and (b)). Other empirical examples of the validation of the traffic phase definitions [J] and [S] can be found in the books and in the article, as well as in an empirical study of floating car data (floating car data is also called probe vehicle data).
Traffic phase definition based on empirical single-vehicle data
In Sec. 6.1 of the book it has been shown that the traffic phase definitions [S] and [J] are the origin of most hypotheses of three-phase theory and of the related three-phase microscopic traffic flow models. The traffic phase definitions [J] and [S] are non-local macroscopic ones, and they are applicable only after macroscopic data has been measured in space and time, i.e., in an "off-line" study. This is because the definitive distinction of the phases J and S through the definitions [J] and [S] requires a study of the propagation of traffic congestion through a bottleneck. This is often considered a drawback of the traffic phase definitions [S] and [J]. However, there are local microscopic criteria for the distinction between the phases J and S that do not require a study of the propagation of congested traffic through a bottleneck. The microscopic criteria are as follows (see Sec. 2.6 in the book): if in single-vehicle (microscopic) data related to congested traffic a "flow-interruption interval" is observed, i.e., a time headway between two vehicles following each other that is much longer than the mean time delay in vehicle acceleration from a wide moving jam (the latter is about 1.3–2.1 s), then the related flow-interruption interval corresponds to the wide moving jam phase. After all wide moving jams have been found through this criterion in congested traffic, all remaining congested states are related to the synchronized flow phase.
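A minimal sketch of this criterion in Python; the numerical threshold for "much longer" than the 1.3–2.1 s acceleration delay and the data format are illustrative assumptions:

# Flag time headways that greatly exceed the mean time delay in vehicle
# acceleration from a wide moving jam (about 1.3-2.1 s, as quoted above).
# The threshold and the input format are illustrative assumptions.
JAM_HEADWAY_THRESHOLD_S = 6.0   # "much longer" than ~2 s; assumed value

def classify_congested_headways(headways_s):
    """Label each headway in congested single-vehicle data as wide moving jam (J)
    or synchronized flow (S) according to the local microscopic criterion."""
    return ['J' if h > JAM_HEADWAY_THRESHOLD_S else 'S' for h in headways_s]

# Example with made-up headways (in seconds) between successive vehicles:
print(classify_congested_headways([1.8, 2.2, 15.0, 2.0, 9.5, 1.6]))
# -> ['S', 'S', 'J', 'S', 'J', 'S']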
Kerner’s hypothesis about two-dimensional (2D) states of traffic flow
Steady states of synchronized flow
Homogeneous synchronized flow is a hypothetical state of synchronized flow of identical vehicles and drivers in which all vehicles move with the same time-independent speed and have the same space gaps (a space gap is the distance between one vehicle and the one behind it), i.e., this synchronized flow is homogeneous in time and space.
Kerner’s hypothesis is that homogeneous synchronized flow can occur anywhere in a two-dimensional region (2D) of the flow-density plane (2D-region S in Figure 4(a)). The set of possible free flow states (F) overlaps in vehicle density with the set of possible states of homogeneous synchronized flow. The free flow states on a multi-lane road and states of homogeneous synchronized flow are separated by a gap in the flow rate and, therefore, by a gap in the speed at a given density: at each given density the synchronized flow speed is lower than the free flow speed.
In accordance with this hypothesis of Kerner’s three-phase theory, at a given speed in synchronized flow, the driver can make an arbitrary choice as to the space gap to the preceding vehicle, within the range associated with the 2D region of homogeneous synchronized flow (Figure 4(b)): the driver accepts different space gaps at different times and does not use one unique gap.
The hypothesis of Kerner’s three-phase traffic theory about the 2D region of steady states of synchronized flow is contrary to the hypothesis of earlier traffic flow theories involving the fundamental diagram of traffic flow, which supposes a one-dimensional relationship between vehicle density and flow rate.
Car following in three-phase traffic theory
In Kerner’s three-phase theory, a vehicle accelerates when the space gap g to the preceding vehicle is greater than a synchronization space gap G, i.e., at g > G (labelled by acceleration in Figure 5); the vehicle decelerates when the gap g is smaller than a safe space gap g_safe, i.e., at g < g_safe (labelled by deceleration in Figure 5).
If the gap g is less than G, the driver tends to adapt his speed to the speed of the preceding vehicle without caring what the precise gap is, so long as this gap is not smaller than the safe space gap g_safe (labelled by speed adaptation in Figure 5). Thus the space gap in car following in the framework of Kerner’s three-phase theory can be any space gap within the range g_safe ≤ g ≤ G.
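A highly simplified sketch of this rule in Python; the linear safe-gap rule, the synchronization-gap factor, and the acceleration values are illustrative assumptions and not parameters of the actual Kerner–Klenov model:

# Simplified three-phase car-following rule: accelerate when the gap exceeds the
# synchronization gap G, decelerate when it is below the safe gap, and otherwise
# adapt speed to the leader. All parameters are illustrative assumptions.
def update_speed(v, v_lead, gap, dt=1.0,
                 a_max=1.0, b_max=1.0, v_free=30.0,
                 k_sync=3.0, t_safe=1.0):
    """Return the new speed (m/s) of a vehicle following a leader.

    v      -- current speed of the vehicle (m/s)
    v_lead -- speed of the preceding vehicle (m/s)
    gap    -- space gap to the preceding vehicle (m)
    """
    g_safe = t_safe * v        # assumed safe space gap (simple time-headway rule)
    G = k_sync * g_safe        # assumed synchronization space gap, G > g_safe
    if gap > G:
        # large gap: accelerate towards the free-flow speed
        return min(v + a_max * dt, v_free)
    if gap < g_safe:
        # too small a gap: decelerate
        return max(v - b_max * dt, 0.0)
    # g_safe <= gap <= G: speed adaptation -- move towards the leader's speed
    # without caring about the precise gap (the 2D region of synchronized flow)
    dv = max(-b_max * dt, min(a_max * dt, v_lead - v))
    return max(v + dv, 0.0)

# Example: a follower at 20 m/s, a leader at 15 m/s, and a 40 m gap
print(update_speed(v=20.0, v_lead=15.0, gap=40.0))   # 19.0: adapting towards 15 m/s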
Autonomous driving in the framework of three-phase traffic theory
In the framework of the three-phase theory the hypothesis about 2D regions of states of synchronized flow has also been applied for the development of a model of autonomous driving vehicle (called also automated driving, self-driving or autonomous vehicle).
Traffic breakdown – a F → S phase transition
In measured data, congested traffic most often occurs in the vicinity of highway bottlenecks, e.g., on-ramps, off-ramps, or roadwork. A transition from free flow to congested traffic is known as traffic breakdown.
In Kerner’s three-phase traffic theory, traffic breakdown is explained by a phase transition from free flow to synchronized flow (called an F → S phase transition). This explanation is supported by available measurements, because in measured traffic data after a traffic breakdown at a bottleneck the downstream front of the congested traffic is fixed at the bottleneck. Therefore, the resulting congested traffic after a traffic breakdown satisfies the definition [S] of the "synchronized flow" phase.
Empirical spontaneous and induced F → S transitions
Kerner notes, using empirical data, that synchronized flow can form in free flow spontaneously (a spontaneous F → S phase transition) or can be externally induced (an induced F → S phase transition).
A spontaneous F → S phase transition means that the breakdown occurs when there has previously been free flow at the bottleneck as well as both up- and downstream of the bottleneck. This implies that a spontaneous F → S phase transition occurs through the growth of an internal disturbance in free flow in a neighbourhood of a bottleneck.
In contrast, an induced F → S phase transition occurs through a region of congested traffic that initially emerged at a different road location downstream from the bottleneck location. Normally, this is in connection with the upstream propagation of a synchronized flow region or a wide moving jam. An empirical example of an induced breakdown at a bottleneck leading to synchronized flow can be seen in Figure 3: synchronized flow emerges through the upstream propagation of a wide moving jam.
The existence of empirical induced traffic breakdown (i.e., empirical induced F → S phase transition) means that an F → S phase transition occurs in a metastable state of free flow at a highway bottleneck. The term metastable free flow means that when small perturbations occur in free flow, the state of free flow is still stable, i.e., free flow persists at the bottleneck. However, when larger perturbations occur in free flow in a neighborhood of the bottleneck, the free flow is unstable and synchronized flow will emerge at the bottleneck.
Physical explanation of traffic breakdown in three-phase theory
Kerner explains the nature of the F → S phase transitions as a competition between "speed adaptation" and "over-acceleration". Speed adaptation is defined as the vehicle's deceleration to the speed of a slower moving preceding vehicle. Over-acceleration is defined as the vehicle acceleration occurring even if the preceding vehicle does not drive faster than the vehicle and the preceding vehicle additionally does not accelerate. In Kerner’s theory, the probability of over-acceleration is a discontinuous function of the vehicle speed: At the same vehicle density, the probability of over-acceleration in free flow is greater than in synchronized flow. When within a local speed disturbance speed adaptation is stronger than over-acceleration, an F → S phase transition occurs. Otherwise, when over-acceleration is stronger than speed adaptation the initial disturbance decays over time. Within a region of synchronized flow, a strong over-acceleration is responsible for a return transition from synchronized flow to free flow (S → F transition).
There can be several mechanisms of vehicle over-acceleration. It can be assumed that on a multi-lane road the most probable mechanism of over-acceleration is lane changing to a faster lane. In this case, the F → S phase transitions are explained by an interplay of acceleration while overtaking a slower vehicle (over-acceleration) and deceleration to the speed of a slower-moving vehicle ahead (speed adaptation). Overtaking supports the maintenance of free flow. "Speed adaptation" on the other hand leads to synchronized flow. Speed adaptation will occur if overtaking is not possible. Kerner states that the probability of overtaking is an interrupted function of the vehicle density (Figure 6): at a given vehicle density, the probability of overtaking in free flow is much higher than in synchronized flow.
Discussion of Kerner’s explanation of traffic breakdown
Kerner’s explanation of traffic breakdown at a highway bottleneck by the F → S phase transition in a metastable free flow is associated with the following fundamental empirical features of traffic breakdown at the bottleneck found in real measured data: (i) Spontaneous traffic breakdown in an initial free flow at the bottleneck leads to the emergence of congested traffic whose downstream front is fixed at the bottleneck (at least during some time interval), i.e., this congested traffic satisfies the definition [S] for the synchronized flow phase. In other words, spontaneous traffic breakdown is always an F → S phase transition. (ii) Probability of this spontaneous traffic breakdown is an increasing function of the flow rates at the bottleneck. (iii) At the same bottleneck, traffic breakdown can be either spontaneous or induced (see empirical examples for these fundamental features of traffic breakdown in Secs. 2.2.3 and 3.1 of the book); for this reason, the F → S phase transition occurs in a metastable free flow at a highway bottleneck.
As explained above, the sense of the term metastable free flow is as follows. Small enough disturbances in metastable free flow decay. However, when a large enough disturbance occurs at the bottleneck, an F → S phase transition does occur. Such a disturbance that initiates the F → S phase transition in metastable free flow at the bottleneck can be called a nucleus for traffic breakdown. In other words, real traffic breakdown (F → S phase transition) at a highway bottleneck exhibits the nucleation nature. Kerner considers the empirical nucleation nature of traffic breakdown (F → S phase transition) at a road bottleneck as the empirical fundamental of traffic and transportation science.
The reason for Kerner’s theory and his criticism of classical traffic flow theories
The empirical nucleation nature of traffic breakdown at highway bottlenecks cannot be explained by classical traffic theories and models. The search for an explanation of the empirical nucleation nature of traffic breakdown (F → S phase transition) at a highway bottleneck has been the motivation for the development of Kerner’s three-phase theory.
In particular, in two-phase traffic flow models in which traffic breakdown is associated with free flow instability, this model instability leads to the F → J phase transition, i.e. in these traffic flow models traffic breakdown is governed by spontaneous emergence of a wide moving jam(s) in an initial free flow (see Kerner’s criticism on such two-phase models as well as on other classical traffic flow models and theories in Chapter 10 of the book as well as in critical reviews,).
The main prediction of Kerner’s three-phase theory
Kerner developed the three-phase theory as an explanation of the empirical nature of traffic breakdown at highway bottlenecks: a random (probabilistic) F → S phase transition that occurs in the metastable state of free flow.
Kerner explained this main prediction as follows: the metastability of free flow with respect to the F → S phase transition is governed by the nucleation nature of an instability of synchronized flow, called an S → F instability. The S → F instability is a growing speed wave of a local increase in speed in synchronized flow at the bottleneck. The development of the S → F instability leads to a local phase transition from synchronized flow to free flow at the bottleneck (S → F transition). To explain this phenomenon Kerner developed a microscopic theory of the S → F instability.
None of the classical traffic flow theories and models incorporate the S → F instability of the three-phase theory.
The three-phase theory was initially developed for highway traffic; in 2011–2014 Kerner expanded it to the description of city traffic.
Range of highway capacities
In three-phase traffic theory, traffic breakdown is explained by the F → S transition occurring in a metastable free flow. Probably the most important consequence of that is the existence of a range of highway capacities between some maximum and minimum capacities.
Maximum and minimum highway capacities
Spontaneous traffic breakdown, i.e., a spontaneous F → S phase transition, may occur in a wide range of flow rates in free flow. Kerner states, based on empirical data, that because of the possibility of spontaneous or induced traffic breakdowns at the same freeway bottleneck at any time instant there is a range of highway capacities at a bottleneck. This range of freeway capacities is between a minimum capacity and a maximum capacity of free flow (Figure 7).
Highway capacities and metastability of free flow
There is a maximum highway capacity: if the flow rate is close to the maximum capacity, then even small disturbances in free flow at a bottleneck will lead to a spontaneous F → S phase transition. On the other hand, only very large disturbances in free flow at the bottleneck will lead to a spontaneous F → S phase transition if the flow rate is close to the minimum capacity (see, for example, Sec. 17.2.2 of the book). The probability of a smaller disturbance in free flow is much higher than that of a larger disturbance. Therefore, the higher the flow rate in free flow at a bottleneck, the higher the probability of a spontaneous F → S phase transition. If the flow rate in free flow is lower than the minimum capacity, there will be no traffic breakdown (no F → S phase transition) at the bottleneck.
The infinite number of highway capacities at a bottleneck can be illustrated by the metastability of free flow at flow rates between the minimum and maximum capacities.
Metastability of free flow means that for small disturbances free flow remains stable (free flow persists), but with larger disturbances the flow becomes unstable and an F → S phase transition to synchronized flow occurs.
Discussion of capacity definitions
Thus the basic theoretical result of three-phase theory about the understanding of the stochastic capacity of free flow at a bottleneck is as follows:
At any time instant, there is an infinite number of highway capacities of free flow at the bottleneck: there is an infinite number of flow rates at which traffic breakdown can be induced at the bottleneck, and hence an infinite number of highway capacities. These capacities lie within the flow rate range between a minimum capacity and a maximum capacity (Figure 7).
The range of highway capacities at a bottleneck in Kerner’s three-phase traffic theory fundamentally contradicts the classical understanding of stochastic highway capacity, as well as traffic theories and methods for traffic management and traffic control which at any time assume the existence of a particular highway capacity. In contrast, in Kerner’s three-phase traffic theory there is at any time a range of highway capacities between the minimum capacity and the maximum capacity. The values of the minimum and maximum capacities can depend considerably on traffic parameters (the percentage of long vehicles in traffic flow, weather, bottleneck characteristics, etc.).
The existence at any time instant of a range of highway capacities in Kerner’s theory changes crucially methodologies for traffic control, dynamic traffic assignment, and traffic management. In particular, to satisfy the nucleation nature of traffic breakdown, Kerner introduced breakdown minimization principle (BM principle) for the optimization and control of vehicular traffic networks.
Wide moving jams (J)
A moving jam will be called "wide" if its length (in direction of the flow) clearly exceeds the lengths of the jam fronts. The average vehicle speed within wide moving jams is much lower than the average speed in free flow. At the downstream front, the vehicles accelerate to the free flow speed. At the upstream jam front, the vehicles come from free flow or synchronized flow and must reduce their speed. According to the definition [J] the wide moving jam always has the same mean velocity of the downstream front, even if the jam propagates through other traffic phases or bottlenecks. The flow rate is sharply reduced within a wide moving jam.
Characteristic parameters of wide moving jams
Kerner’s empirical results show that some characteristic features of wide moving jams are independent of the traffic volume and bottleneck features (e.g. where and when the jam formed). However, these characteristic features are dependent on weather conditions, road conditions, vehicle technology, percentage of long vehicles, etc. The velocity of the downstream front of a wide moving jam (in the upstream direction) is a characteristic parameter, as is the flow rate just downstream of the jam (with free flow at this location, see Figure 8). This means that many wide moving jams have similar features under similar conditions. These parameters are relatively predictable. The movement of the downstream jam front can be illustrated in the flow-density plane by a line, which is called "Line J" (Line J in Figure 8). The slope of Line J is the velocity of the downstream jam front.
Minimum highway capacity and outflow from wide moving jam
Kerner emphasizes that the minimum capacity and the outflow of a wide moving jam describe two qualitatively different features of free flow: the minimum capacity characterizes an F → S phase transition at a bottleneck, i.e., a traffic breakdown. In contrast, the outflow of a wide moving jam determines a condition for the existence of the wide moving jam, i.e., of the traffic phase J, while the jam propagates in free flow: indeed, if the jam propagates through free flow (i.e., free flow occurs both upstream and downstream of the jam), then a wide moving jam can persist only when the jam inflow is equal to or larger than the jam outflow; otherwise, the jam dissolves over time. Depending on traffic parameters like weather, percentage of long vehicles, et cetera, and characteristics of the bottleneck where the F → S phase transition can occur, the minimum capacity might be smaller (as in Figure 8) or greater than the jam’s outflow.
Synchronized flow phase (S)
In contrast to wide moving jams, both the flow rate and vehicle speed may vary significantly in the synchronized flow phase. The downstream front of synchronized flow is often spatially fixed (see definition [S]), normally at a bottleneck at a certain road location. The flow rate in this phase could remain similar to the one in free flow, even if the vehicle speeds are sharply reduced.
Because the synchronized flow phase does not have the characteristic features of the wide moving jam phase J, Kerner’s three-phase traffic theory assumes that the hypothetical homogeneous states of synchronized flow cover a two-dimensional region in the flow-density plane (dashed regions in Figure 8).
S → J phase transition
Wide moving jams do not emerge spontaneously in free flow, but they can emerge in regions of synchronized flow. This phase transition is called an S → J phase transition.
"Jam without obvious reason" – F → S → J phase transitions
In 1998, Kerner found out that in real field traffic data the emergence of a wide moving jam in free flow is observed as a cascade of F → S → J phase transitions (Figure 9): first, a region of synchronized flow emerges in a region of free flow. As explained above, such an F → S phase transition occurs mostly at a bottleneck. Within the synchronized flow phase a further "self-compression" occurs and vehicle density increases while vehicle speed decreases. This self-compression is called "pinch effect". In "pinch" regions of synchronized flow, narrow moving jams emerge. If these narrow moving jams grow, wide moving jams will emerge (labeled by S → J in Figure 9). Thus, wide moving jams emerge later than traffic breakdown (F → S transition) has occurred and at another road location upstream of the bottleneck. Therefore, when Kerner’s F → S → J phase transitions occurring in real traffic (Figure 9 (a)) are presented in the speed-density plane (Figure 9 (b)) (or speed-flow, or else flow-density planes), one should remember that states of synchronized flow and the low-speed state within a wide moving jam are measured at different road locations. Kerner notes that the frequency of the emergence of wide moving jams increases if the density in synchronized flow increases. The wide moving jams propagate further upstream, even if they propagate through regions of synchronized flow or bottlenecks. Obviously, any combination of return phase transitions (S → F, J → S, and J → F transitions shown in Figure 9) is also possible.
The physics of S → J transition
To further illustrate S → J phase transitions: in Kerner’s three-phase traffic theory Line J divides the homogeneous states of synchronized flow in two (Figure 8). States of homogeneous synchronized flow above Line J are meta-stable. States of homogeneous synchronized flow below Line J are stable states in which no S → J phase transition can occur. Metastable homogeneous synchronized flow means that for small disturbances, the traffic state remains stable. However, when larger disturbances occur, synchronized flow becomes unstable, and an S → J phase transition occurs.
Traffic patterns of S and J
Very complex congested patterns can be observed, caused by F → S and S → J phase transitions.
Classification of synchronized flow traffic patterns (SP)
A congestion pattern of synchronized flow (Synchronized Flow Pattern (SP)) with a fixed downstream and a not continuously propagating upstream front is called Localised Synchronized Flow Pattern (LSP).
Frequently the upstream front of a SP propagates upstream. If only the upstream front propagates upstream, the related SP is called Widening Synchronised Flow Pattern (WSP). The downstream front remains at the bottleneck location and the width of the SP increases.
It is possible that both upstream and downstream front propagates upstream. The downstream front is no longer located at the bottleneck. This pattern has been called Moving Synchronised Flow Pattern (MSP).
Catch effect of synchronized flow at a highway bottleneck
The difference between an SP and a wide moving jam becomes visible when a WSP or MSP reaches an upstream bottleneck: the so-called "catch effect" can occur. The SP is caught at the bottleneck and, as a result, a new congested pattern emerges. A wide moving jam, in contrast, is not caught at a bottleneck and moves further upstream. Also in contrast to wide moving jams, synchronized flow, even when it moves as an MSP, has no characteristic parameters. As an example, the velocity of the downstream front of an MSP can vary significantly and can be different for different MSPs. These features of SPs and wide moving jams are consequences of the phase definitions [S] and [J].
General congested traffic pattern (GP)
A frequently occurring congested pattern is one that contains both congested phases, [S] and [J]. Such a pattern with [S] and [J] is called a General Pattern (GP). An empirical example of a GP is shown in Figure 9 (a).
In many freeway infrastructures, bottlenecks are located close to each other. A congested pattern whose synchronized flow covers two or more bottlenecks is called an Expanded Pattern (EP). An EP may contain synchronized flow only (it is then called an ESP: Expanded Synchronized Flow Pattern), but normally wide moving jams form within the synchronized flow. In those cases, the EP is called an EGP: Expanded General Pattern (see Figure 10).
Applications of three-phase traffic theory in transportation engineering
One application of Kerner’s three-phase traffic theory is the pair of methods called ASDA and FOTO (Automatische StauDynamikAnalyse (automatic tracking of wide moving jams) and Forecasting Of Traffic Objects). ASDA/FOTO is a software tool able to process large traffic data volumes quickly and efficiently on freeway networks (see examples from three countries in Figure 11). ASDA/FOTO works in an online traffic management system based on measured traffic data. Recognition, tracking, and prediction of [S] and [J] are performed using the features of Kerner’s three-phase traffic theory.
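The basic idea behind tracking and forecasting a wide moving jam between stationary detectors can be sketched as follows. This is a strong simplification of ASDA: the real method estimates the front velocities from measured flow and density, whereas the sketch simply assumes a constant characteristic velocity of the jam’s downstream front (a value of roughly -15 km/h is often quoted, but it is treated here purely as an assumption).

```python
def predict_front_arrival(x_detector, t_passed, x_upstream_detector,
                          v_front_kmh=-15.0):
    """Extrapolate when the downstream front of a wide moving jam, observed at
    position x_detector (km) at time t_passed (h), reaches an upstream detector.

    Positions increase in the driving direction, so the upstream detector has a
    smaller position. v_front_kmh is the assumed front velocity (negative =
    propagation upstream).
    """
    if x_upstream_detector >= x_detector:
        raise ValueError("the second detector must lie upstream (smaller position)")
    travel_time_h = (x_upstream_detector - x_detector) / v_front_kmh
    return t_passed + travel_time_h

# Front passes the detector at km 10.0 at t = 8.0 h; when does it reach km 7.0?
print(predict_front_arrival(10.0, 8.0, 7.0))  # -> 8.2 h (about 12 minutes later)
```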
Further applications of the theory are seen in the development of traffic simulation models, a ramp metering system (ANCONA), collective traffic control, traffic assistance, autonomous driving, and traffic state detection, as described in the books by Kerner.
Mathematical models of traffic flow in the framework of Kerner’s three-phase traffic theory
Kerner’s three-phase theory is not itself a mathematical model of traffic flow but a qualitative traffic flow theory that consists of several hypotheses. These hypotheses should qualitatively explain the spatiotemporal traffic phenomena in traffic networks found in real field traffic data, measured over many years on a variety of highways in different countries. Some of the hypotheses of Kerner’s theory have been discussed above. It can be expected that a diverse variety of mathematical traffic flow models can be developed in the framework of Kerner’s three-phase theory.
The first mathematical traffic flow model in the framework of Kerner’s three-phase theory whose simulations can show and explain traffic breakdown through an F → S phase transition in metastable free flow at a bottleneck was the Kerner-Klenov model, introduced in 2002. The Kerner-Klenov model is a microscopic stochastic model: vehicles move in accordance with stochastic rules of vehicle motion that can be chosen individually for each vehicle. Some months later, Kerner, Klenov, and Wolf developed a cellular automaton (CA) traffic flow model in the framework of Kerner’s three-phase theory.
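A key ingredient of these models is speed adaptation within a so-called synchronization distance: outside this distance a vehicle accelerates towards its maximum speed, while inside it the vehicle adjusts its speed to that of the vehicle ahead without caring about the precise gap, which yields a two-dimensional region of synchronized flow states. The cellular automaton below is only a minimal sketch of this idea, not the actual Kerner-Klenov or Kerner-Klenov-Wolf model, and all parameter values are illustrative assumptions.

```python
import random

def three_phase_ca_step(x, v, road_len, v_max=5, k=3, p=0.25):
    """One parallel update on a single-lane ring road (positions in cells,
    speeds in cells per time step). x must initially be sorted by position;
    the cyclic order of vehicles is preserved afterwards.

    k * v[i] is the synchronization distance: beyond it the vehicle accelerates
    freely, within it the vehicle adapts its speed towards the leader's speed.
    """
    n = len(x)
    new_v = []
    for i in range(n):
        lead = (i + 1) % n
        gap = (x[lead] - x[i] - 1) % road_len      # free cells ahead
        if gap > k * v[i]:
            v_des = min(v[i] + 1, v_max)           # free acceleration
        elif v[i] < v[lead]:
            v_des = v[i] + 1                       # speed adaptation towards leader
        elif v[i] > v[lead]:
            v_des = v[i] - 1
        else:
            v_des = v[i]
        v_des = max(0, min(v_des, gap))            # safety: never hit the leader
        if random.random() < p:                    # random deceleration (fluctuations)
            v_des = max(0, v_des - 1)
        new_v.append(v_des)
    new_x = [(x[i] + new_v[i]) % road_len for i in range(n)]
    return new_x, new_v

# Example: 25 vehicles on a 100-cell ring road, updated for 1000 time steps.
road_len = 100
x = list(range(0, road_len, 4))
v = [0] * len(x)
for _ in range(1000):
    x, v = three_phase_ca_step(x, v, road_len)
```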
The Kerner-Klenov stochastic three-phase traffic flow model has been further developed for a number of applications, in particular to simulate on-ramp metering, speed limit control, dynamic traffic assignment in traffic and transportation networks, traffic at heavy bottlenecks and at moving bottlenecks, features of heterogeneous traffic flow consisting of different vehicles and drivers, jam warning methods, vehicle-to-vehicle (V2V) communication for cooperative driving, the performance of self-driving vehicles in mixed traffic flow, traffic breakdown at traffic signals in city traffic, over-saturated city traffic, and vehicle fuel consumption in traffic networks (see references in Sec. 1.7 of a review).
Over time, several scientific groups have developed new mathematical models in the framework of Kerner’s three-phase theory. In particular, such models have been introduced in works by Jiang, Wu, Gao, et al., by Davis, and by Lee, Barlovic, Schreckenberg, and Kim (see further references to mathematical models in the framework of Kerner’s three-phase traffic theory and results of their investigations in Sec. 1.7 of a review).
Criticism of the theory
The theory has been criticized for two primary reasons. First, the theory is almost completely based on measurements on the Bundesautobahn 5 in Germany. It may be that this road exhibits this pattern, but roads in other countries may have other characteristics; future research must show the validity of the theory on other roads in other countries around the world. Second, it is not clear how the data was interpolated. Kerner uses fixed-point measurements (loop detectors) but draws his conclusions about vehicle trajectories, which span the whole length of the road under investigation. Such trajectories can only be measured directly if floating car data is used, yet only loop detector measurements were available. How the data in between was gathered or interpolated is not clear.
The above criticism has been responded to in a more recent study of data measured in the US and the United Kingdom, which confirms the conclusions drawn from measurements on the Bundesautobahn 5 in Germany. There is also a more recent validation of the theory based on floating car data. In that article one can also find methods for spatiotemporal interpolation of data measured at road detectors (see the article’s appendices).
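As an illustration of what a spatiotemporal interpolation of detector data can look like, the sketch below performs a generic bilinear interpolation of speeds between detector positions and measurement times. It is not necessarily the method described in the cited article, and all names and values are assumptions.

```python
import bisect

def interpolate_speed(det_pos, times, speeds, x, t):
    """Bilinear interpolation of a speed field measured by stationary detectors.

    det_pos: sorted detector positions (km)
    times:   sorted measurement times (h), common to all detectors
    speeds:  speeds[i][j] is the speed at detector i and time j (km/h)
    Returns the interpolated speed at road position x and time t.
    """
    # clamp the query point into the measured region
    x = min(max(x, det_pos[0]), det_pos[-1])
    t = min(max(t, times[0]), times[-1])
    i = max(0, bisect.bisect_right(det_pos, x) - 1)
    j = max(0, bisect.bisect_right(times, t) - 1)
    i2 = min(i + 1, len(det_pos) - 1)
    j2 = min(j + 1, len(times) - 1)
    wx = 0.0 if i2 == i else (x - det_pos[i]) / (det_pos[i2] - det_pos[i])
    wt = 0.0 if j2 == j else (t - times[j]) / (times[j2] - times[j])
    return ((1 - wx) * (1 - wt) * speeds[i][j]
            + wx * (1 - wt) * speeds[i2][j]
            + (1 - wx) * wt * speeds[i][j2]
            + wx * wt * speeds[i2][j2])

# Two detectors at km 0 and km 1, measurements at t = 0 h and t = 0.1 h:
speeds = [[100.0, 40.0], [100.0, 90.0]]
print(interpolate_speed([0.0, 1.0], [0.0, 0.1], speeds, 0.5, 0.05))  # -> 82.5
```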
Other criticisms have been made, such as that the notion of phases has not been well defined and that so-called two-phase models also succeed in simulating the essential features described by Kerner.
This criticism has been responded to in a review as follows: the most important feature of Kerner’s theory is the explanation of the empirical nucleation nature of traffic breakdown at a road bottleneck by the F → S transition. The empirical nucleation nature of traffic breakdown cannot be explained by earlier traffic flow theories, including the two-phase traffic flow models referred to in this criticism.
See also
Active traffic management
Fundamental diagram
Intelligent transportation system
Microscopic traffic flow model
Traffic bottleneck
Traffic flow
Traffic model
Traffic wave
Traffic congestion
Traffic congestion: Reconstruction with Kerner’s three-phase theory
Kerner’s breakdown minimization principle
Transportation forecasting
Two-fluid model
Notes
References
H. Rehborn, S. Klenov, "Traffic Prediction of Congested Patterns", In: R. Meyers (Ed.): Encyclopedia of Complexity and Systems Science, Springer New York, 2009.
H. Rehborn, J. Palmer, "Using ASDA and FOTO to generate RDS/TMC traffic messages", Traffic Engineering and Control, July 2008, pp. 261–266.
Road transport
Transportation engineering
Mathematical physics
Road traffic management | Three-phase traffic theory | [
"Physics",
"Mathematics",
"Engineering"
] | 7,375 | [
"Applied mathematics",
"Theoretical physics",
"Industrial engineering",
"Transportation engineering",
"Civil engineering",
"Mathematical physics"
] |