id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
516,386 | https://en.wikipedia.org/wiki/Fluorescence%20spectroscopy | Fluorescence spectroscopy (also known as fluorimetry or spectrofluorometry) is a type of electromagnetic spectroscopy that analyzes fluorescence from a sample. It involves using a beam of light, usually ultraviolet light, that excites the electrons in molecules of certain compounds and causes them to emit light; typically, but not necessarily, visible light. A complementary technique is absorption spectroscopy. In the special case of single molecule fluorescence spectroscopy, intensity fluctuations from the emitted light are measured from either single fluorophores, or pairs of fluorophores.
Devices that measure fluorescence are called fluorometers.
Theory
Molecules have various states referred to as energy levels. Fluorescence spectroscopy is primarily concerned with electronic and vibrational states. Generally, the species being examined has a ground electronic state (a low energy state) of interest, and an excited electronic state of higher energy. Within each of these electronic states there are various vibrational states.
In fluorescence, the species is first excited, by absorbing a photon, from its ground electronic state to one of the various vibrational states in the excited electronic state. Collisions with other molecules cause the excited molecule to lose vibrational energy until it reaches the lowest vibrational state from the excited electronic state. This process is often visualized with a Jablonski diagram.
The molecule then drops down to one of the various vibrational levels of the ground electronic state again, emitting a photon in the process. As molecules may drop down into any of several vibrational levels in the ground state, the emitted photons will have different energies, and thus frequencies. Therefore, by analysing the different frequencies of light emitted in fluorescent spectroscopy, along with their relative intensities, the structure of the different vibrational levels can be determined.
For atomic species, the process is similar; however, since atomic species do not have vibrational energy levels, the emitted photons are often at the same wavelength as the incident radiation. This process of re-emitting the absorbed photon is known as "resonance fluorescence"; while it is characteristic of atomic fluorescence, it is also seen in molecular fluorescence.
In a typical fluorescence (emission) measurement, the excitation wavelength is fixed and the detection wavelength varies, while in a fluorescence excitation measurement the detection wavelength is fixed and the excitation wavelength is varied across a region of interest. An emission map is measured by recording the emission spectra resulting from a range of excitation wavelengths and combining them all together. This is a three dimensional surface data set: emission intensity as a function of excitation and emission wavelengths, and is typically depicted as a contour map.
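The emission map described above is simply a two-dimensional array of intensities indexed by excitation and emission wavelength. The following is a minimal sketch of assembling and plotting such a map; the wavelength grids, the synthetic Gaussian "spectra", and the crude Stokes shift are placeholder assumptions standing in for real measured scans.

```python
# Minimal sketch: assembling an excitation-emission matrix (EEM) and plotting
# it as a contour map. The Gaussian "emission scans" are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt

ex_wavelengths = np.arange(250, 451, 5)      # excitation grid, nm (illustrative)
em_wavelengths = np.arange(300, 601, 2)      # emission grid, nm (illustrative)

def fake_emission_scan(ex_nm):
    """Synthetic emission spectrum for one excitation wavelength."""
    peak = ex_nm + 80                         # crude, assumed Stokes shift
    return np.exp(-((em_wavelengths - peak) / 30.0) ** 2)

# Rows = excitation wavelengths, columns = emission wavelengths.
eem = np.array([fake_emission_scan(ex) for ex in ex_wavelengths])

plt.contourf(em_wavelengths, ex_wavelengths, eem, levels=20)
plt.xlabel("Emission wavelength (nm)")
plt.ylabel("Excitation wavelength (nm)")
plt.colorbar(label="Intensity (arb. units)")
plt.show()
```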
Instrumentation
Two general types of instruments exist: filter fluorometers that use filters to isolate the incident light and fluorescent light and spectrofluorometers that use diffraction grating monochromators to isolate the incident light and fluorescent light.
Both types use the following scheme: the light from an excitation source passes through a filter or monochromator, and strikes the sample. A proportion of the incident light is absorbed by the sample, and some of the molecules in the sample fluoresce. The fluorescent light is emitted in all directions. Some of this fluorescent light passes through a second filter or monochromator and reaches a detector, which is usually placed at 90° to the incident light beam to minimize the risk of transmitted or reflected incident light reaching the detector.
Various light sources may be used as excitation sources, including lasers, LED, and lamps; xenon arcs and mercury-vapor lamps in particular. A laser only emits light of high irradiance at a very narrow wavelength interval, typically under 0.01 nm, which makes an excitation monochromator or filter unnecessary. The disadvantage of this method is that the wavelength of a laser cannot be changed by much. A mercury vapor lamp is a line lamp, meaning it emits light near peak wavelengths. By contrast, a xenon arc has a continuous emission spectrum with nearly constant intensity in the range from 300-800 nm and a sufficient irradiance for measurements down to just above 200 nm.
Filters and/or monochromators may be used in fluorimeters. A monochromator transmits light of an adjustable wavelength with an adjustable tolerance. The most common type of monochromator utilizes a diffraction grating, that is, collimated light illuminates a grating and exits with a different angle depending on the wavelength. The monochromator can then be adjusted to select which wavelengths to transmit. For allowing anisotropy measurements, the addition of two polarization filters is necessary: One after the excitation monochromator or filter, and one before the emission monochromator or filter.
As mentioned before, the fluorescence is most often measured at a 90° angle relative to the excitation light. This geometry is used instead of placing the sensor in line with the excitation light at a 180° angle in order to avoid interference from the transmitted excitation light. No monochromator is perfect, and it will transmit some stray light, that is, light with wavelengths other than the targeted one. An ideal monochromator would transmit light only in the specified range and have a high, wavelength-independent transmission. When measuring at a 90° angle, only the light scattered by the sample causes stray light. This results in a better signal-to-noise ratio and lowers the detection limit by approximately a factor of 10,000 compared with the 180° geometry. Furthermore, the fluorescence can also be measured from the front, which is often done for turbid or opaque samples.
The detector can either be single-channeled or multichanneled. The single-channeled detector can only detect the intensity of one wavelength at a time, while the multichanneled one detects the intensity of all wavelengths simultaneously, making the emission monochromator or filter unnecessary.
The most versatile fluorimeters with dual monochromators and a continuous excitation light source can record both an excitation spectrum and a fluorescence spectrum. When measuring fluorescence spectra, the wavelength of the excitation light is kept constant, preferably at a wavelength of high absorption, and the emission monochromator scans the spectrum. For measuring excitation spectra, the wavelength passing through the emission filter or monochromator is kept constant and the excitation monochromator is scanning. The excitation spectrum generally is identical to the absorption spectrum as the fluorescence intensity is proportional to the absorption.
Analysis of data
At low concentrations the fluorescence intensity will generally be proportional to the concentration of the fluorophore.
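Because the response is linear only in the dilute regime, quantitation is usually done against a calibration curve of standards. A minimal sketch is shown below; the concentration and intensity values are invented for illustration, not measured data.

```python
# Minimal sketch: linear calibration of fluorescence intensity vs. fluorophore
# concentration, valid only in the dilute regime where the response is linear.
import numpy as np

standards_uM = np.array([0.0, 0.5, 1.0, 2.0, 4.0])         # known concentrations (illustrative)
intensity    = np.array([2.0, 55.0, 108.0, 219.0, 430.0])  # measured signals (illustrative)

slope, intercept = np.polyfit(standards_uM, intensity, 1)   # least-squares straight line

unknown_signal = 160.0
unknown_conc = (unknown_signal - intercept) / slope
print(f"Estimated concentration: {unknown_conc:.2f} uM")
```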
Unlike in UV/visible spectroscopy, ‘standard’, device-independent spectra are not easily attained. Several factors influence and distort the spectra, and corrections are necessary to attain ‘true’, i.e. machine-independent, spectra. The different types of distortions will here be classified as being either instrument- or sample-related. Firstly, the distortion arising from the instrument is discussed. To begin with, the light source intensity and wavelength characteristics vary over time during each experiment and between experiments. Furthermore, no lamp has a constant intensity at all wavelengths. To correct for this, a beam splitter can be placed after the excitation monochromator or filter to direct a portion of the light to a reference detector.
Additionally, the transmission efficiency of monochromators and filters must be taken into account. These may also change over time. The transmission efficiency of the monochromator also varies depending on wavelength. This is the reason that an optional reference detector should be placed after the excitation monochromator or filter. The percentage of the fluorescence picked up by the detector is also dependent upon the system. Furthermore, the detector quantum efficiency, that is, the percentage of photons detected, varies between different detectors, with wavelength and with time, as the detector inevitably deteriorates.
Two other topics that must be considered include the optics used to direct the radiation and the means of holding or containing the sample material (called a cuvette or cell). For most UV, visible, and NIR measurements the use of precision quartz cuvettes is necessary. In both cases, it is important to select materials that have relatively little absorption in the wavelength range of interest. Quartz is ideal because it transmits from 200 nm-2500 nm; higher grade quartz can even transmit up to 3500 nm, whereas the absorption properties of other materials can mask the fluorescence from the sample.
Correction of all these instrumental factors for getting a ‘standard’ spectrum is a tedious process, which is only applied in practice when it is strictly necessary. This is the case when measuring the quantum yield or when finding the wavelength with the highest emission intensity for instance.
As mentioned earlier, distortions arise from the sample as well. Therefore, some aspects of the sample must be taken into account too. Firstly, photodecomposition may decrease the intensity of fluorescence over time. Scattering of light must also be taken into account. The most significant types of scattering in this context are Rayleigh and Raman scattering. Light scattered by Rayleigh scattering has the same wavelength as the incident light, whereas in Raman scattering the scattered light changes wavelength, usually to longer wavelengths. Raman scattering is the result of a virtual electronic state induced by the excitation light. From this virtual state, the molecules may relax back to a vibrational level other than the vibrational ground state. In fluorescence spectra, the Raman band is always seen at a constant wavenumber difference relative to the excitation wavenumber; in water, for example, the peak appears at a wavenumber 3,600 cm−1 lower than that of the excitation light.
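Because the Raman shift is constant in wavenumber rather than wavelength, its position in an emission spectrum moves with the excitation wavelength. The sketch below predicts where the water Raman band falls for a given excitation wavelength, using the ~3,600 cm−1 shift quoted above; the chosen excitation wavelengths are arbitrary examples.

```python
# Minimal sketch: predicting where the water Raman scatter band appears for a
# given excitation wavelength, using a constant ~3600 cm-1 wavenumber shift.
def raman_peak_nm(excitation_nm, shift_cm1=3600.0):
    excitation_wavenumber = 1e7 / excitation_nm       # nm -> cm-1
    scattered_wavenumber = excitation_wavenumber - shift_cm1
    return 1e7 / scattered_wavenumber                 # cm-1 -> nm

for ex in (280.0, 350.0, 450.0):
    print(f"Excitation {ex:.0f} nm -> water Raman band near {raman_peak_nm(ex):.0f} nm")
```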
Other aspects to consider are the inner filter effects. These include reabsorption, which happens when another molecule or part of a macromolecule absorbs at the wavelengths at which the fluorophore emits radiation. If this is the case, some or all of the photons emitted by the fluorophore may be absorbed again. Another inner filter effect occurs because of high concentrations of absorbing molecules, including the fluorophore itself. The result is that the intensity of the excitation light is not constant throughout the solution, so that only a fraction of the excitation light reaches the fluorophores that are visible to the detection system. The inner filter effects change the spectrum and intensity of the emitted light, and they must therefore be considered when analysing the emission spectrum of fluorescent light.
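For modest absorbances in a standard right-angle geometry, a commonly used approximate correction for the primary and secondary inner filter effects is F_corr = F_obs · 10^((A_ex + A_em)/2), where A_ex and A_em are the sample absorbances (1 cm path) at the excitation and emission wavelengths. This formula is not taken from the text above but is a widely used approximation; the sketch below applies it to invented numbers.

```python
# Minimal sketch: approximate inner filter correction for right-angle geometry,
#     F_corr = F_obs * 10**((A_ex + A_em) / 2)
# Valid only for modest absorbances and centred observation in a 1 cm cuvette.
def inner_filter_correct(f_obs, a_ex, a_em):
    return f_obs * 10 ** ((a_ex + a_em) / 2.0)

print(inner_filter_correct(f_obs=1000.0, a_ex=0.10, a_em=0.04))  # -> ~1175 (illustrative)
```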
Tryptophan fluorescence
The fluorescence of a folded protein is a mixture of the fluorescence from individual aromatic residues. Most of the intrinsic fluorescence emissions of a folded protein are due to excitation of tryptophan residues, with some emissions due to tyrosine and phenylalanine; but disulfide bonds also have appreciable absorption in this wavelength range. Typically, tryptophan has a wavelength of maximum absorption of 280 nm and an emission peak that is solvatochromic, ranging from ca. 300 to 350 nm depending on the polarity of the local environment. Hence, protein fluorescence may be used as a diagnostic of the conformational state of a protein. Furthermore, tryptophan fluorescence is strongly influenced by the proximity of other residues (i.e., nearby protonated groups such as Asp or Glu can cause quenching of Trp fluorescence). Also, energy transfer between tryptophan and the other fluorescent amino acids is possible, which would affect the analysis, especially in cases where a Förster-type (resonance energy transfer) analysis is applied. In addition, tryptophan is a relatively rare amino acid; many proteins contain only one or a few tryptophan residues. Therefore, tryptophan fluorescence can be a very sensitive measurement of the conformational state of individual tryptophan residues. The advantage compared to extrinsic probes is that the protein itself is not changed. The use of intrinsic fluorescence for the study of protein conformation is in practice limited to cases with few (or perhaps only one) tryptophan residues, since each experiences a different local environment, which gives rise to different emission spectra.
Tryptophan is an important intrinsically fluorescent amino acid, which can be used to estimate the nature of the microenvironment around the tryptophan. When performing experiments with denaturants, surfactants or other amphiphilic molecules, the microenvironment of the tryptophan might change. For example, if a protein containing a single tryptophan in its 'hydrophobic' core is denatured with increasing temperature, a red-shifted emission spectrum will appear. This is due to the exposure of the tryptophan to an aqueous environment as opposed to a hydrophobic protein interior. In contrast, the addition of a surfactant to a protein which contains a tryptophan which is exposed to the aqueous solvent will cause a blue-shifted emission spectrum if the tryptophan is embedded in the surfactant vesicle or micelle. Proteins that lack tryptophan may be coupled to a fluorophore.
With fluorescence excitation at 295 nm, the tryptophan emission spectrum is dominant over the weaker tyrosine and phenylalanine fluorescence.
Applications
Fluorescence spectroscopy is used in biochemical, medical, and chemical research fields, among others, for analyzing organic compounds. There has also been a report of its use in differentiating malignant skin tumors from benign ones.
Atomic fluorescence spectroscopy (AFS) techniques are useful for the analysis and measurement of compounds present in air, water, or other media; for example, cold-vapour atomic fluorescence spectroscopy (CVAFS) is used for the detection of heavy metals such as mercury.
Fluorescence can also be used to redirect photons, see fluorescent solar collector.
Additionally, fluorescence spectroscopy can be adapted to the microscopic level using microfluorimetry.
In analytical chemistry, fluorescence detectors are used with HPLC.
In the field of water research, fluorescence spectroscopy can be used to monitor water quality by detecting organic pollutants. Recent advances in computer science and machine learning have even enabled detection of bacterial contamination of water.
In biomedical research, fluorescence spectroscopy is used to evaluate the efficiency of drug distribution through the cross-linking of fluorescent agents to various drugs.
Fluorescence spectroscopy in biophysical research enables individuals to visualize and characterize lipid domains within cellular membranes.
See also
Lanthanide probes
Photoluminescence
Laser-induced fluorescence
References
External links
Fluorophores.org, the database of fluorescent dyes
OpenFluor, Community tools supporting chemometric analysis of organic matter fluorescence
Database of fluorescent minerals with pictures, activators and spectra (fluomin.org)
Fluorescence
Spectroscopy | Fluorescence spectroscopy | [
"Physics",
"Chemistry"
] | 3,085 | [
"Luminescence",
"Fluorescence",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Spectroscopy"
] |
516,414 | https://en.wikipedia.org/wiki/Monochromator | A monochromator is an optical device that transmits a mechanically selectable narrow band of wavelengths of light or other radiation chosen from a wider range of wavelengths available at the input. The name is from the Greek roots mono-, "single", and chroma, "colour", with the Latin agent suffix -ator.
Uses
A device that can produce monochromatic light has many uses in science and in optics because many optical characteristics of a material are dependent on wavelength. Although there are a number of useful ways to select a narrow band of wavelengths (which, in the visible range, is perceived as a pure color), there are not as many other ways to easily select any wavelength band from a wide range. See below for a discussion of some of the uses of monochromators.
In hard X-ray and neutron optics, crystal monochromators are used to define wave conditions on the instruments.
Techniques
A monochromator can use either the phenomenon of optical dispersion in a prism, or that of diffraction using a diffraction grating, to spatially separate the colors of light. It usually has a mechanism for directing the selected color to an exit slit. Usually the grating or the prism is used in a reflective mode. A reflective prism is made by making a right triangle prism (typically, half of an equilateral prism) with one side mirrored. The light enters through the hypotenuse face and is reflected back through it, being refracted twice at the same surface. The total refraction, and the total dispersion, is the same as would occur if an equilateral prism were used in transmission mode.
Collimation
The dispersion or diffraction is only controllable if the light is collimated, that is if all the rays of light are parallel, or practically so. A source, like the sun, which is very far away, provides collimated light. Newton used sunlight in his famous experiments. In a practical monochromator, however, the light source is close by, and an optical system in the monochromator converts the diverging light of the source to collimated light. Although some monochromator designs do use focusing gratings that do not need separate collimators, most use collimating mirrors. Reflective optics are preferred because they do not introduce dispersive effects of their own.
Geometrical design of a prism or grating monochromator
There are grating/prism configurations that offer different tradeoffs between simplicity and spectral accuracy.
Czerny–Turner (discussed below)
Paschen-Runge
Eagle
Wadsworth
Ebert–Fastie
Littrow
Pfund
In the common Czerny–Turner design, the broad-band illumination source (A) is aimed at an entrance slit (B). The amount of light energy available for use depends on the intensity of the source in the space defined by the slit (width × height) and the acceptance angle of the optical system. The slit is placed at the effective focus of a curved mirror (the collimator, C) so that the light from the slit reflected from the mirror is collimated (focused at infinity). The collimated light is diffracted from the grating (D) and then is collected by another mirror (E), which refocuses the light, now dispersed, on the exit slit (F). In a prism monochromator, a reflective Littrow prism takes the place of the diffraction grating, in which case the light is refracted by the prism.
At the exit slit, the colors of the light are spread out (in the visible this shows the colors of the rainbow). Because each color arrives at a separate point in the exit-slit plane, there are a series of images of the entrance slit focused on the plane. Because the entrance slit is finite in width, parts of nearby images overlap. The light leaving the exit slit (F) contains the entire image of the entrance slit of the selected color plus parts of the entrance slit images of nearby colors. A rotation of the dispersing element causes the band of colors to move relative to the exit slit, so that the desired entrance slit image is centered on the exit slit. The range of colors leaving the exit slit is a function of the width of the slits. The entrance and exit slit widths are adjusted together.
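The angle at which each wavelength leaves the grating in such a design follows from the plane-grating equation, m·λ = d·(sin α + sin β). The sketch below solves it for the diffraction angle; the groove density, incidence angle, and sign convention are illustrative assumptions rather than the parameters of any particular instrument.

```python
# Minimal sketch: the plane diffraction-grating equation
#     m * lambda = d * (sin(alpha) + sin(beta))
# solved for the diffraction angle beta. Sign conventions vary with geometry;
# groove density and incidence angle here are illustrative values only.
import math

def diffraction_angle_deg(wavelength_nm, grooves_per_mm, incidence_deg, order=1):
    d_nm = 1e6 / grooves_per_mm                      # groove spacing in nm
    alpha = math.radians(incidence_deg)
    sin_beta = order * wavelength_nm / d_nm - math.sin(alpha)
    return math.degrees(math.asin(sin_beta))

print(diffraction_angle_deg(500.0, grooves_per_mm=1200, incidence_deg=10.0))
```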
Stray light
The ideal transfer function of such a monochromator is a triangular shape. The peak of the triangle is at the nominal wavelength selected, so that the image of the selected wavelength completely fills the exit slit. The intensity of the nearby colors then decreases linearly on either side of this peak until some cutoff value is reached, where the intensity stops decreasing. This is called the stray light level. The cutoff level is typically about one thousandth of the peak value, or 0.1%.
Spectral bandwidth
Spectral bandwidth is defined as the width of the triangle at the points where the light has reached half the maximum value (full width at half maximum, abbreviated as FWHM). A typical spectral bandwidth might be one nanometer; however, different values can be chosen to meet the need of analysis. A narrower bandwidth does improve the resolution, but it also decreases the signal-to-noise ratio.
Dispersion
The dispersion of a monochromator is characterized as the width of the band of colors per unit of slit width, 1 nm of spectrum per mm of slit width for instance. This factor is constant for a grating, but varies with wavelength for a prism. If a scanning prism monochromator is used in a constant bandwidth mode, the slit width must change as the wavelength changes. Dispersion depends on the focal length, the grating order and grating resolving power.
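For a grating instrument, the reciprocal linear dispersion at the exit slit is often estimated as dλ/dx ≈ d·cos β/(m·f), and the spectral bandpass as the slit width multiplied by this dispersion. The sketch below illustrates the arithmetic; the groove density, focal length, diffraction angle, and slit width are assumed values for illustration only.

```python
# Minimal sketch: reciprocal linear dispersion of a grating monochromator,
#     dlambda/dx ~= d * cos(beta) / (m * f),
# and the resulting spectral bandpass for a given exit-slit width.
import math

def reciprocal_dispersion_nm_per_mm(grooves_per_mm, focal_length_mm, beta_deg, order=1):
    d_nm = 1e6 / grooves_per_mm                      # groove spacing in nm
    return d_nm * math.cos(math.radians(beta_deg)) / (order * focal_length_mm)

rld = reciprocal_dispersion_nm_per_mm(grooves_per_mm=1200, focal_length_mm=300, beta_deg=20)
slit_mm = 0.25
print(f"Reciprocal linear dispersion: {rld:.2f} nm/mm")
print(f"Approximate bandpass with a {slit_mm} mm slit: {rld * slit_mm:.2f} nm")
```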
Wavelength range
A monochromator's adjustment range might cover the visible spectrum and some part of both or either of the nearby ultraviolet (UV) and infrared (IR) spectra, although monochromators are built for a great variety of optical ranges, and to a great many designs.
Double monochromators
It is common for two monochromators to be connected in series, with their mechanical systems operating in tandem so that they both select the same color. This arrangement is not intended to improve the narrowness of the spectrum, but rather to lower the cutoff level. A double monochromator may have a cutoff about one millionth of the peak value, the product of the two cutoffs of the individual sections. The intensity of the light of other colors in the exit beam is referred to as the stray light level and is the most critical specification of a monochromator for many uses. Achieving low stray light is a large part of the art of making a practical monochromator.
Diffraction gratings and blazed gratings
Grating monochromators disperse ultraviolet, visible, and infrared radiation typically using replica gratings, which are manufactured from a master grating. A master grating consists of a hard, optically flat, surface that has a large number of parallel and closely spaced grooves. The construction of a master grating is a long, expensive process because the grooves must be of identical size, exactly parallel, and equally spaced over the length of the grating (3–10 cm). A grating for the ultraviolet and visible region typically has 300–2000 grooves/mm, however 1200–1400 grooves/mm is most common. For the infrared region, gratings usually have 10–200 grooves/mm. When a diffraction grating is used, care must be taken in the design of broadband monochromators because the diffraction pattern has overlapping orders. Sometimes broadband preselector filters are inserted in the optical path to limit the width of the diffraction orders so they do not overlap. Sometimes this is done by using a prism as one of the monochromators of a dual monochromator design.
The original high-resolution diffraction gratings were ruled. The construction of high-quality ruling engines was a large undertaking (as well as exceedingly difficult, in past decades), and good gratings were very expensive. The slope of the triangular groove in a ruled grating is typically adjusted to enhance the brightness of a particular diffraction order. This is called blazing a grating. Ruled gratings have imperfections that produce faint "ghost" diffraction orders that may raise the stray light level of a monochromator. A later photolithographic technique allows gratings to be created from a holographic interference pattern. Holographic gratings have sinusoidal grooves and so are not as bright, but have lower scattered light levels than blazed gratings. Almost all the gratings actually used in monochromators are carefully made replicas of ruled or holographic master gratings.
Prisms
Prisms have higher dispersion in the UV region. Prism monochromators are favored in some instruments that are principally designed to work in the far UV region. Most monochromators use gratings, however. Some monochromators have several gratings that can be selected for use in different spectral regions. A double monochromator made by placing a prism and a grating monochromator in series typically does not need additional bandpass filters to isolate a single grating order.
Focal length
The narrowness of the band of colors that a monochromator can generate is related to the focal length of the monochromator collimators. Using a longer focal length optical system also unfortunately decreases the amount of light that can be accepted from the source. Very high resolution monochromators might have a focal length of 2 meters. Building such monochromators requires exceptional attention to mechanical and thermal stability. For many applications a monochromator of about 0.4 meters' focal length is considered to have excellent resolution. Many monochromators have a focal length less than 0.1 meters.
Slit height
The most common optical system uses spherical collimators and thus contains optical aberrations that curve the field where the slit images come to focus, so that slits are sometimes curved instead of simply straight, to approximate the curvature of the image. This allows taller slits to be used, gathering more light, while still achieving high spectral resolution. Some designs take another approach and use toroidal collimating mirrors to correct the curvature instead, allowing higher straight slits without sacrificing resolution.
Wavelength vs. energy
Monochromators are often calibrated in units of wavelength. Uniform rotation of a grating produces a sinusoidal change in wavelength, which is approximately linear for small grating angles, so such an instrument is easy to build. Many of the underlying physical phenomena being studied are linear in energy though, and since wavelength and photon energy have a reciprocal relationship, spectral patterns that are simple and predictable when plotted as a function of energy are distorted when plotted as a function of wavelength. Some monochromators are calibrated in units of reciprocal centimeters or some other energy units, but the scale may not be linear.
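The distortion described here follows directly from the reciprocal relation between wavelength and photon energy: equal steps in wavelength are unequal steps in energy. A minimal sketch of the conversions is given below; the chosen wavelengths are arbitrary.

```python
# Minimal sketch: the reciprocal relation between wavelength and photon energy.
# Equal 50 nm steps in wavelength correspond to unequal steps in energy.
PLANCK_EV_S = 4.135667e-15      # Planck constant, eV*s
C_NM_PER_S = 2.99792458e17      # speed of light, nm/s

def photon_energy_ev(wavelength_nm):
    return PLANCK_EV_S * C_NM_PER_S / wavelength_nm

def wavenumber_cm1(wavelength_nm):
    return 1e7 / wavelength_nm

for wl in (300, 350, 400, 450, 500):
    print(f"{wl} nm -> {photon_energy_ev(wl):.3f} eV, {wavenumber_cm1(wl):.0f} cm-1")
```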
Dynamic range
A spectrophotometer built with a high quality double monochromator can produce light of sufficient purity and intensity that the instrument can measure a narrow band of optical attenuation of about one million fold (6 AU, Absorbance Units).
Applications
Monochromators are used in many optical measuring instruments and in other applications where tunable monochromatic light is wanted. Sometimes the monochromatic light is directed at a sample and the reflected or transmitted light is measured. Sometimes white light is directed at a sample and the monochromator is used to analyze the reflected or transmitted light. Two monochromators are used in many fluorometers; one monochromator is used to select the excitation wavelength and a second monochromator is used to analyze the emitted light.
An automatic scanning spectrometer includes a mechanism to change the wavelength selected by the monochromator and to record the resulting changes in the measured quantity as a function of the wavelength.
If an imaging device replaces the exit slit, the result is the basic configuration of a spectrograph. This configuration allows the simultaneous analysis of the intensities of a wide band of colors. Photographic film or an array of photodetectors can be used, for instance to collect the light. Such an instrument can record a spectral function without mechanical scanning, although there may be tradeoffs in terms of resolution or sensitivity for instance.
An absorption spectrophotometer measures the absorption of light by a sample as a function of wavelength. Sometimes the result is expressed as percent transmission and sometimes as the negative logarithm of the transmission. The Beer–Lambert law relates the absorption of light to the concentration of the light-absorbing material, the optical path length, and an intrinsic property of the material called molar absorptivity. According to this relation the decrease in intensity is exponential in concentration and path length. The decrease is linear in these quantities when the negative logarithm of the transmission is used. The old nomenclature for this value was optical density (OD); the current nomenclature is absorbance units (AU). One AU is a tenfold reduction in light intensity. Six AU is a millionfold reduction.
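The relations in this paragraph reduce to A = −log10(T) and, for the Beer–Lambert law, A = ε·c·l. The sketch below works through both; the molar absorptivity, concentration, and path length are arbitrary illustrative values.

```python
# Minimal sketch: absorbance/transmission conversions and the Beer-Lambert law
# A = epsilon * c * l. The molar absorptivity below is an arbitrary example value.
import math

def absorbance_from_transmission(t):
    """A = -log10(T); e.g. T = 1e-6 gives 6 AU."""
    return -math.log10(t)

def transmission_from_absorbance(a):
    return 10 ** (-a)

def beer_lambert_absorbance(epsilon_M1_cm1, conc_molar, path_cm):
    return epsilon_M1_cm1 * conc_molar * path_cm

print(absorbance_from_transmission(1e-6))            # 6.0 AU (millionfold reduction)
print(transmission_from_absorbance(1.0))             # 0.1 (tenfold reduction)
print(beer_lambert_absorbance(12000.0, 5e-5, 1.0))   # 0.6 AU
```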
Absorption spectrophotometers often contain a monochromator to supply light to the sample. Some absorption spectrophotometers have automatic spectral analysis capabilities.
Absorption spectrophotometers have many everyday uses in chemistry, biochemistry, and biology. For example, they are used to measure the concentration or change in concentration of many substances that absorb light. Critical characteristics of many biological materials, many enzymes for instance, are measured by starting a chemical reaction that produces a color change that depends on the presence or activity of the material being studied. Optical thermometers have been created by calibrating the change in absorbance of a material against temperature. There are many other examples.
Spectrophotometers are used to measure the specular reflectance of mirrors and the diffuse reflectance of colored objects. They are used to characterize the performance of sunglasses, laser protective glasses, and other optical filters. There are many other examples.
In the UV, visible and near IR, absorbance and reflectance spectrophotometers usually illuminate the sample with monochromatic light. In the corresponding IR instruments, the monochromator is usually used to analyze the light coming from the sample.
Monochromators are also used in optical instruments that measure other phenomena besides simple absorption or reflection, wherever the color of the light is a significant variable. Circular dichroism spectrometers contain a monochromator, for example.
Lasers produce light which is much more monochromatic than the optical monochromators discussed here, but only some lasers are easily tunable, and these lasers are not as simple to use.
Monochromatic light allows for the measurement of the quantum efficiency (QE) of an imaging device (e.g. CCD or CMOS imager). Light from the exit slit is passed either through diffusers or an integrating sphere on to the imaging device while a calibrated detector simultaneously measures the light. Coordination of the imager, calibrated detector, and monochromator allows one to calculate the number of carriers (electrons or holes) generated per photon of a given wavelength, i.e. the QE.
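In practice this amounts to dividing the carrier generation rate reported by the imager by the photon flux inferred from the calibrated detector reading. A minimal sketch of that arithmetic follows; the optical power, wavelength, and electron count are invented numbers, not a real measurement.

```python
# Minimal sketch: quantum efficiency from a monochromator-based measurement,
# QE = (carriers generated per second) / (photons incident per second).
# The photon flux is derived from the power on a calibrated detector; all
# numeric values are illustrative assumptions.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def photons_per_second(power_watt, wavelength_nm):
    photon_energy_j = H * C / (wavelength_nm * 1e-9)
    return power_watt / photon_energy_j

incident_photons = photons_per_second(2.0e-12, 550.0)   # 2 pW reaching one pixel (assumed)
measured_electrons_per_s = 3.4e6                         # carriers counted by the imager (assumed)
qe = measured_electrons_per_s / incident_photons
print(f"QE at 550 nm ~ {qe:.2f}")
```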
See also
Atomic absorption spectrometers use light from hollow cathode lamps that emit light generated by atoms of a specific element, for instance iron or lead or calcium. The available colors are fixed, but are very monochromatic and are excellent for measuring the concentration of specific elements in a sample. These instruments behave as if they contained a very high quality monochromator, but their use is limited to analyzing the elements they are equipped for.
A major IR measurement technique, Fourier transform IR, or FTIR, does not use a monochromator. Instead, the measurement is performed in the time domain, using the field autocorrelation technique.
Polychromator
Ultrafast monochromator – a monochromator that compensates for path length delays that would stretch ultrashort pulses
Wien filter – a technique for producing "monochromatic" electron beams, where all the electrons have nearly the same energy
References
External links
Discusses monochromator design in great detail
Double folded-z-configuration monochromator. - contains an extended discussion of the design rationale of this UV-VIS-NIR monochromator
Optical devices
Spectroscopy
ja:分光器#モノクロメーター | Monochromator | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,426 | [
"Glass engineering and science",
"Molecular physics",
"Spectrum (physical sciences)",
"Optical devices",
"Instrumental analysis",
"Spectroscopy"
] |
516,680 | https://en.wikipedia.org/wiki/Excited%20state | In quantum mechanics, an excited state of a system (such as an atom, molecule or nucleus) is any quantum state of the system that has a higher energy than the ground state (that is, more energy than the absolute minimum). Excitation refers to an increase in energy level above a chosen starting point, usually the ground state, but sometimes an already excited state. The temperature of a group of particles is indicative of the level of excitation (with the notable exception of systems that exhibit negative temperature).
The lifetime of a system in an excited state is usually short: spontaneous or induced emission of a quantum of energy (such as a photon or a phonon) usually occurs shortly after the system is promoted to the excited state, returning the system to a state with lower energy (a less excited state or the ground state). This return to a lower energy level is known as de-excitation and is the inverse of excitation.
Long-lived excited states are often called metastable. Long-lived nuclear isomers and singlet oxygen are two examples of this.
Atomic excitation
Atoms can be excited by heat, electricity, or light. The hydrogen atom provides a simple example of this concept.
The ground state of the hydrogen atom has the atom's single electron in the lowest possible orbital (that is, the spherically symmetric "1s" wave function, which has the lowest possible quantum numbers). By giving the atom additional energy (for example, by absorption of a photon of an appropriate energy), the electron moves into an excited state (one with one or more quantum numbers greater than the minimum possible). While the electron is passing between the two states, a transition which happens very quickly, it is in a superposition of both states. If the photon has too much energy, the electron will cease to be bound to the atom, and the atom will become ionized.
After excitation the atom may return to the ground state or a lower excited state, by emitting a photon with a characteristic energy. Emission of photons from atoms in various excited states leads to an electromagnetic spectrum showing a series of characteristic emission lines (including, in the case of the hydrogen atom, the Lyman, Balmer, Paschen and Brackett series).
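For hydrogen, the photon energies of these emission lines follow the standard Rydberg formula, 1/λ = R_H·(1/n_low² − 1/n_high²). The sketch below reproduces the first few Lyman and Balmer lines; it is a plain textbook relation, not something derived from the text above.

```python
# Minimal sketch: hydrogen emission wavelengths from the Rydberg formula,
#     1/lambda = R_H * (1/n_low**2 - 1/n_high**2),
# reproducing the first lines of the Lyman (n_low=1) and Balmer (n_low=2) series.
RYDBERG_M = 1.0973731568e7   # Rydberg constant, m^-1

def emission_wavelength_nm(n_low, n_high):
    inv_lambda = RYDBERG_M * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1e9 / inv_lambda

for name, n_low in (("Lyman", 1), ("Balmer", 2)):
    lines = [emission_wavelength_nm(n_low, n) for n in range(n_low + 1, n_low + 4)]
    print(name, [f"{wl:.1f} nm" for wl in lines])
```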
An atom in a high excited state is termed a Rydberg atom. A system of highly excited atoms can form a long-lived condensed excited state, Rydberg matter.
Perturbed gas excitation
A collection of molecules forming a gas can be considered in an excited state if one or more molecules are elevated to kinetic energy levels such that the resulting velocity distribution departs from the equilibrium Boltzmann distribution. This phenomenon has been studied in the case of a two-dimensional gas in some detail, analyzing the time taken to relax to equilibrium.
Calculation of excited states
Excited states are often calculated using coupled cluster, Møller–Plesset perturbation theory, multi-configurational self-consistent field, configuration interaction, and time-dependent density functional theory.
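The correlated methods listed above require dedicated quantum-chemistry software, but the underlying idea is the same in all of them: excited states appear as higher-energy solutions of the system's Hamiltonian. As a toy illustration only (not coupled cluster, CI, or TDDFT), the sketch below diagonalises a finite-difference Hamiltonian for a one-dimensional particle in a box and compares the lowest levels with the exact result.

```python
# Toy illustration: excited states as the higher eigenvalues of a Hamiltonian.
# A 1-D particle in a box is discretised by finite differences and diagonalised
# with NumPy; in units with hbar = m = 1 the exact levels are E_n = n^2 pi^2 / (2 L^2).
import numpy as np

L, N = 1.0, 2000
dx = L / (N + 1)
main = np.full(N, 1.0 / dx**2)          # diagonal of -1/2 d^2/dx^2 (finite differences)
off = np.full(N - 1, -0.5 / dx**2)      # off-diagonal terms
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)[:4]    # ground state plus first three excited states
exact = [n**2 * np.pi**2 / (2 * L**2) for n in range(1, 5)]
print("numerical:", np.round(energies, 3))
print("exact    :", np.round(exact, 3))
```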
Excited-state absorption
The excitation of a system (an atom or molecule) from one excited state to a higher-energy excited state with the absorption of a photon is called excited-state absorption (ESA). Excited-state absorption is possible only when an electron has been already excited from the ground state to a lower excited state. The excited-state absorption is usually an undesired effect, but it can be useful in upconversion pumping. Excited-state absorption measurements are done using pump–probe techniques such as flash photolysis. However, it is not easy to measure them compared to ground-state absorption, and in some cases complete bleaching of the ground state is required to measure excited-state absorption.
Reaction
A further consequence of excited-state formation may be reaction of the atom or molecule in its excited state, as in photochemistry.
See also
Rydberg formula
Stationary state
Repulsive state
References
External links
NASA background information on ground and excited states
Quantum mechanics | Excited state | [
"Physics"
] | 827 | [
"Quantum states",
"Quantum mechanics"
] |
516,756 | https://en.wikipedia.org/wiki/Internal%20conversion | Internal conversion is an atomic decay process where an excited nucleus interacts electromagnetically with one of the orbital electrons of an atom. This causes the electron to be emitted (ejected) from the atom. Thus, in internal conversion (often abbreviated IC), a high-energy electron is emitted from the excited atom, but not from the nucleus. For this reason, the high-speed electrons resulting from internal conversion are not called beta particles, since the latter come from beta decay, where they are newly created in the nuclear decay process.
IC is possible whenever gamma decay is possible, except if the atom is fully ionized. In IC, the atomic number does not change, and thus there is no transmutation of one element to another. Also, neutrinos and the weak force are not involved in IC.
Since an electron is lost from the atom, a hole appears in one of the electron shells, which is subsequently filled by other electrons that descend to the empty, lower energy level, emitting characteristic X-ray(s), Auger electron(s), or both in the process. The atom thus emits high-energy electrons and X-ray photons, none of which originate in the nucleus. The nucleus supplies the energy needed to eject the electron, which in turn causes the subsequent events and the other emissions.
Since primary electrons from IC carry a fixed (large) part of the characteristic decay energy, they have a discrete energy spectrum, rather than the spread (continuous) spectrum characteristic of beta particles. Whereas the energy spectrum of beta particles plots as a broad hump, the energy spectrum of internally converted electrons plots as a single sharp peak (see example below).
Mechanism
In the quantum model of the electron, there is non-zero probability of finding the electron within the nucleus. In internal conversion, the wavefunction of an inner shell electron (usually an s electron) penetrates the nucleus. When this happens, the electron may couple to an excited energy state of the nucleus and take the energy of the nuclear transition directly, without an intermediate gamma ray being first produced. The kinetic energy of the emitted electron is equal to the transition energy in the nucleus, minus the binding energy of the electron to the atom.
Most IC electrons come from the K shell (the 1s state), as these two electrons have the highest probability of being within the nucleus. However, the s states in the L, M, and N shells (i.e., the 2s, 3s, and 4s states) are also able to couple to the nuclear fields and cause IC electron ejections from those shells (called L or M or N internal conversion). Ratios of K-shell to other L, M, or N shell internal conversion probabilities for various nuclides have been prepared.
An amount of energy exceeding the atomic binding energy of the s electron must be supplied to that electron in order to eject it from the atom to result in IC; that is to say, internal conversion cannot happen if the decay energy of the nucleus is less than a certain threshold.
Though s electrons are more likely for IC due to their superior nuclear penetration compared to electrons with greater orbital angular momentum, spectral studies show that p electrons (from shells L and higher) are occasionally ejected in the IC process. There are also a few radionuclides in which the decay energy is not sufficient to convert (eject) a 1s (K shell) electron, and these nuclides, to decay by internal conversion, must decay by ejecting electrons from the L or M or N shells (i.e., by ejecting 2s, 3s, or 4s electrons) as these binding energies are lower.
After the IC electron is emitted, the atom is left with a vacancy in one of its electron shells, usually an inner one. This hole will be filled with an electron from one of the higher shells, which causes another outer electron to fill its place in turn, causing a cascade. Consequently, one or more characteristic X-rays or Auger electrons will be emitted as the remaining electrons in the atom cascade down to fill the vacancies.
Example: decay of Hg-203
The decay scheme on the left shows that Hg-203 produces a continuous beta spectrum with maximum energy 214 keV, which leads to an excited state of the daughter nucleus Tl-203. This state decays very quickly (within 2.8×10−10 s) to the ground state of Tl-203, emitting a gamma quantum of 279 keV.
The figure on the right shows the electron spectrum of Hg-203, measured by means of a magnetic spectrometer. It includes the continuous beta spectrum and K-, L-, and M-lines due to internal conversion. Since the binding energy of the K electrons in Tl-203 is 85 keV, the K line has an energy of 279 − 85 = 194 keV. Because of their smaller binding energies, the L- and M-lines have higher energies. Due to the finite energy resolution of the spectrometer, the "lines" have a Gaussian shape of finite width.
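The line energies are simply the transition energy minus the relevant shell binding energy. The sketch below applies this to the example above; the 279 keV transition and the 85 keV K-shell binding energy come from the text, while the L- and M-shell binding energies are rough illustrative assumptions for thallium rather than authoritative data.

```python
# Minimal sketch: conversion-electron line energies, E_line = E_transition - E_binding.
# Transition energy and K binding energy are from the text; L and M binding
# energies are approximate, assumed values for illustration only.
transition_kev = 279.0
binding_kev = {"K": 85.0, "L": 15.0, "M": 3.7}

for shell, eb in binding_kev.items():
    print(f"{shell}-line: {transition_kev - eb:.1f} keV")
```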
When the process is expected
Internal conversion is favored whenever the energy available for a gamma transition is small, and it is also the primary mode of de-excitation for 0→0 (i.e. E0) transitions. The 0→0 transitions occur where an excited nucleus has zero spin and positive parity, and decays to a ground state which also has zero spin and positive parity (such as all nuclides with even numbers of protons and neutrons). In such cases, de-excitation cannot take place by emission of a gamma ray, since this would violate conservation of angular momentum, hence other mechanisms like IC predominate. This also shows that internal conversion (contrary to its name) is not a two-step process where a gamma ray would be first emitted and then converted.
The competition between IC and gamma decay is quantified in the form of the internal conversion coefficient, defined as α = e/γ, where e is the rate of conversion-electron emission and γ is the rate of gamma-ray emission observed from a decaying nucleus. For example, in the decay of the excited state at 35 keV of Te-125 (which is produced by the decay of I-125), 7% of decays emit energy as a gamma ray, while 93% release energy as conversion electrons. Therefore, this excited state of Te-125 has an IC coefficient of α = 93/7 ≈ 13.3.
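A minimal sketch of this ratio, using the Te-125 branching fractions quoted above, is shown below.

```python
# Minimal sketch: internal conversion coefficient from branching fractions,
# alpha = (conversion-electron rate) / (gamma-ray rate).
def ic_coefficient(electron_fraction, gamma_fraction):
    return electron_fraction / gamma_fraction

print(f"alpha ~ {ic_coefficient(0.93, 0.07):.1f}")   # ~13.3 for the Te-125 example
```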
For increasing atomic number (Z) and decreasing gamma-ray energy, IC coefficients increase. For example, calculated IC coefficients for electric dipole (E1) transitions, for Z = 40, 60, and 80, are shown in the figure.
The energy of the emitted gamma ray is a precise measure of the difference in energy between the excited states of the decaying nucleus. In the case of conversion electrons, the binding energy must also be taken into account: the energy of a conversion electron is given as E = (Ei − Ef) − EB, where Ei and Ef are the energies of the nucleus in its initial and final states, respectively, while EB is the binding energy of the electron.
Similar processes
Nuclei with zero-spin and high excitation energies (more than about 1.022 MeV) also can't rid themselves of energy by (single) gamma emission due to the constraint imposed by conservation of momentum, but they do have enough decay energy to decay by pair production. In this type of decay, an electron and positron are both emitted from the atom at the same time, and conservation of angular momentum is solved by having these two product particles spin in opposite directions.
IC should not be confused with the similar photoelectric effect. When a gamma ray emitted by the nucleus of an atom hits another atom, it may be absorbed producing a photoelectron of well-defined energy (this used to be called "external conversion"). In IC, however, the process happens within one atom, and without a real intermediate gamma ray.
Just as an atom may produce an IC electron instead of a gamma ray if energy is available from within the nucleus, so an atom may produce an Auger electron instead of an X-ray if an electron is missing from one of the low-lying electron shells. (The first process can even precipitate the second one.) Like IC electrons, Auger electrons have a discrete energy, resulting in a sharp energy peak in the spectrum.
Electron capture also involves an inner shell electron, which in this case is retained in the nucleus (changing the atomic number) and leaving the atom (not nucleus) in an excited state. The atom missing an inner electron can relax by a cascade of X-ray emissions as higher energy electrons in the atom fall to fill the vacancy left in the electron cloud by the captured electron. Such atoms also typically exhibit Auger electron emission. Electron capture, like beta decay, also typically results in excited atomic nuclei, which may then relax to a state of lowest nuclear energy by any of the methods permitted by spin constraints, including gamma decay and internal conversion decay.
See also
Internal conversion coefficient
References
Further reading
R.W.Howell, Radiation spectra for Auger-electron emitting radionuclides: Report No. 2 of AAPM Nuclear Medicine Task Group No. 6, 1992, Medical Physics 19(6), 1371–1383
External links
HyperPhysics
Radioactivity
Nuclear physics | Internal conversion | [
"Physics",
"Chemistry"
] | 1,908 | [
"Radioactivity",
"Nuclear physics"
] |
516,762 | https://en.wikipedia.org/wiki/Intersystem%20crossing | Intersystem crossing (ISC) is an isoenergetic radiationless process involving a transition between the two electronic states with different spin multiplicity.
Excited singlet and triplet states
When an electron in a molecule with a singlet ground state is excited (via absorption of radiation) to a higher energy level, either an excited singlet state or an excited triplet state will form. A singlet state is a molecular electronic state in which all electron spins are paired. That is, the spin of the excited electron is still paired with the ground state electron (a pair of electrons in the same energy level must have opposite spins, per the Pauli exclusion principle). In a triplet state the excited electron is no longer paired with the ground state electron; that is, they are parallel (same spin). Since excitation to a triplet state involves an additional "forbidden" spin transition, it is less probable that a triplet state will form when the molecule absorbs radiation.
When a singlet state nonradiatively passes to a triplet state, or conversely a triplet transitions to a singlet, that process is known as intersystem crossing. In essence, the spin of the excited electron is reversed. The probability of this process occurring is more favorable when the vibrational levels of the two excited states overlap, since little or no energy must be gained or lost in the transition. As the spin/orbital interactions in such molecules are substantial and a change in spin is thus more favourable, intersystem crossing is most common in heavy-atom molecules (e.g. those containing iodine or bromine). This process is called "spin-orbit coupling". Simply-stated, it involves coupling of the electron spin with the orbital angular momentum of non-circular orbits. In addition, the presence of paramagnetic species in solution enhances intersystem crossing.
The radiative decay from an excited triplet state back to a singlet state is known as phosphorescence. Since a transition in spin multiplicity occurs, phosphorescence is a manifestation of intersystem crossing. The time scale of intersystem crossing is on the order of 10−8 to 10−3 s, one of the slowest forms of relaxation.
Metal complexes
Once a metal complex undergoes metal-to-ligand charge transfer, the system can undergo intersystem crossing, which, in conjunction with the tunability of MLCT excitation energies, produces a long-lived intermediate whose energy can be adjusted by altering the ligands used in the complex. Another species can then react with the long-lived excited state via oxidation or reduction, thereby initiating a redox pathway via tunable photoexcitation. Complexes containing high atomic number d6 metal centers, such as Ru(II) and Ir(III), are commonly used for such applications due to them favoring intersystem crossing as a result of their more intense spin-orbit coupling.
Complexes that have access to d orbitals are able to access spin multiplicities besides the singlet and triplet states, as some complexes have orbitals of similar or degenerate energies so that it is energetically favorable for electrons to be unpaired. It is possible then for a single complex to undergo multiple intersystem crossings, which is the case in light-induced excited spin-state trapping (LIESST), where, at low temperatures, a low-spin complex can be irradiated and undergo two instances of intersystem crossing. For Fe(II) complexes, the first intersystem crossing occurs from the singlet to the triplet state, which is then followed by intersystem crossing between the triplet and the quintet state. At low temperatures, the low-spin state is favored, but the quintet state is unable to relax back to the low-spin ground state due to their differences in zero-point energy and metal-ligand bond length. The reverse process is also possible for cases such as [Fe(ptz)6](BF4)2, but the singlet state is not fully regenerated, as the energy needed to excite the quintet ground state to the necessary excited state to undergo intersystem crossing to the triplet state overlaps with multiple bands corresponding to excitations of the singlet state that lead back to the quintet state.
Applications
Fluorophores
Fluorescence microscopy relies upon fluorescent compounds, or fluorophores, in order to image biological systems. Since fluorescence and phosphorescence are competitive methods of relaxation, a fluorophore that undergoes intersystem crossing to the triplet excited state no longer fluoresces and instead remains in the triplet excited state, which has a relatively long lifetime, before phosphorescing and relaxing back to the singlet ground state so that it may continue to undergo repeated excitation and fluorescence. This process in which fluorophores temporarily do not fluoresce is called blinking. While in the triplet excited state, the fluorophore may undergo photobleaching, a process in which the fluorophore reacts with another species in the system, which can lead to the loss of the fluorescent characteristic of the fluorophore.
In order to regulate these processes dependent upon the triplet state, the rate of intersystem crossing can be adjusted to either favor or disfavor formation of the triplet state. Fluorescent biomarkers, including both quantum dots and fluorescent proteins, are often optimized in order to maximize quantum yield and intensity of fluorescent signal, which in part is accomplished by decreasing the rate of intersystem crossing. Methods of adjusting the rate of intersystem crossing include the addition of Mn2+ to the system, which increases the rate of intersystem crossing for rhodamine and cyanine dyes. The changing of the metal that is a part of the photosensitizer groups bound to CdTe quantum dots can also affect rate of intersystem crossing, as the use of a heavier metal can cause intersystem crossing to be favored due to the heavy atom effect.
Solar cells
The viability of organometallic polymers in bulk heterojunction organic solar cells has been investigated due to their donor capability. The efficiency of charge separation at the donor-acceptor interface can be improved through the use of heavy metals, as their increased spin-orbit coupling promotes the formation of the triplet MLCT excited state, which could improve exciton diffusion length and reduce the probability of recombination due to the extended lifespan of the spin-forbidden excited state. By improving the efficiency of charge separation step of the bulk heterojunction solar cell mechanism, the power conversion efficiency also improves. Improved charge separation efficiency has been shown to be a result of the formation of the triplet excited state in some conjugated platinum-acetylide polymers. However, as the size of the conjugated system increases, the increased conjugation reduces the impact of the heavy atom effect and instead makes the polymer more efficient due to the increased conjugation reducing the bandgap.
History
In 1933, Aleksander Jabłoński published his conclusion that the extended lifetime of phosphorescence was due to a metastable excited state at an energy lower than the state first achieved upon excitation. Based upon this research, Gilbert Lewis and coworkers, during their investigation of organic molecule luminescence in the 1940s, concluded that this metastable energy state corresponded to the triplet electron configuration. The triplet state was confirmed by Lewis via application of a magnetic field to the excited phosphor, as only the metastable state would have a long enough lifetime to be analyzed and the phosphor would have only responded if it was paramagnetic due to it having at least one unpaired electron. Their proposed pathway of phosphorescence included the forbidden spin transition occurring when the potential energy curves of the singlet excited state and the triplet excited state crossed, from which the term intersystem crossing arose.
See also
Internal conversion (chemistry)
Jablonski diagram
Michael Kasha
Population inversion
Vibrational energy relaxation
References
Quantum mechanics
Rotational symmetry | Intersystem crossing | [
"Physics"
] | 1,702 | [
"Theoretical physics",
"Quantum mechanics",
"Symmetry",
"Rotational symmetry"
] |
517,241 | https://en.wikipedia.org/wiki/Heat%20pipe | A heat pipe is a heat-transfer device that employs phase transition to transfer heat between two solid interfaces.
At the hot interface of a heat pipe, a volatile liquid in contact with a thermally conductive solid surface turns into a vapor by absorbing heat from that surface. The vapor then travels along the heat pipe to the cold interface and condenses back into a liquid, releasing the latent heat. The liquid then returns to the hot interface through capillary action, centrifugal force, or gravity and the cycle repeats.
Due to the very high heat transfer coefficients for boiling and condensation, heat pipes are highly effective thermal conductors. The effective thermal conductivity varies with heat pipe length and can approach 100 kW/(m·K) for long heat pipes, in comparison with approximately 0.4 kW/(m·K) for copper.
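To see what such an effective conductivity means in practice, one can compare one-dimensional conduction, Q = k·A·ΔT/L, through a solid copper rod and through a heat pipe of the same size treated as a material with a much higher effective conductivity. The sketch below does this; the geometry, temperature difference, and the assumed effective conductivity are illustrative values only.

```python
# Minimal sketch: one-dimensional conduction, Q = k * A * dT / L, comparing a
# solid copper rod with an equally sized heat pipe modelled as a material with
# a high effective thermal conductivity. All numbers are illustrative assumptions.
import math

def conduction_watts(k_w_per_m_k, diameter_m, length_m, delta_t_k):
    area = math.pi * (diameter_m / 2) ** 2          # cross-sectional area
    return k_w_per_m_k * area * delta_t_k / length_m

d, length, dT = 0.006, 0.20, 40.0                   # 6 mm diameter, 20 cm long, 40 K difference
print("copper rod:", round(conduction_watts(400.0, d, length, dT), 2), "W")
print("heat pipe :", round(conduction_watts(50_000.0, d, length, dT), 1), "W  (assumed effective k)")
```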
Modern CPU heat pipes are typically made of copper and use water as the working fluid. They are common in many consumer electronics like desktops, laptops, tablets, and high-end smartphones.
History
The general principle of heat pipes using gravity, commonly classified as two phase thermosiphons, dates back to the steam age and Angier March Perkins and his son Loftus Perkins and the "Perkins Tube", which saw widespread use in locomotive boilers and working ovens. Capillary-based heat pipes were first suggested by R. S. Gaugler of General Motors in 1942, who patented the idea, but did not develop it further.
George Grover independently developed capillary-based heat pipes at Los Alamos National Laboratory in 1963, with his patent of that year being the first to use the term "heat pipe", and he is often referred to as "the inventor of the heat pipe". He described the concept in his laboratory notebook.
Grover's suggestion was taken up by NASA, which played a large role in heat pipe development in the 1960s, particularly regarding applications and reliability in space flight. This was understandable given the low weight, high heat flux, and zero power draw of heat pipes – and that they would not be adversely affected by operating in a zero gravity environment.
The first application of heat pipes in the space program was the thermal equilibration of satellite transponders. As satellites orbit, one side is exposed to the direct radiation of the sun while the opposite side is completely dark and exposed to the deep cold of outer space. This causes severe discrepancies in the temperature (and thus reliability and accuracy) of the transponders. The heat pipe cooling system designed for this purpose managed the high heat fluxes and demonstrated flawless operation with and without the influence of gravity. The cooling system developed was the first use of variable conductance heat pipes to actively regulate heat flow or evaporator temperature.
NASA has tested heat pipes designed for extreme conditions, with some using liquid sodium metal as the working fluid. Other forms of heat pipes are currently used to cool communication satellites. Publications in 1967 and 1968 by Feldman, Eastman, and Katzoff first discussed applications of heat pipes for wider uses such as in air conditioning, engine cooling, and electronics cooling. These papers were also the first to mention flexible, arterial, and flat plate heat pipes. Publications in 1969 introduced the concept of the rotational heat pipe with its applications to turbine blade cooling and contained the first discussions of heat pipe applications to cryogenic processes.
Starting in the 1980s Sony began incorporating heat pipes into the cooling schemes for some of its commercial electronic products in place of both forced convection and passive finned heat sinks. Initially they were used in receivers and amplifiers, soon spreading to other high heat flux electronics applications.
During the late 1990s increasingly high heat flux microcomputer CPUs spurred a threefold increase in the number of U.S. heat pipe patent applications. As heat pipes evolved from a specialized industrial heat transfer component to a consumer commodity most development and production moved from the U.S. to Asia.
Modern CPU heat pipes are typically made of copper and use water as the working fluid. They are common in many consumer electronics like desktops, laptops, tablets, and high-end smartphones.
Structure, design and construction
A typical heat pipe consists of a sealed pipe or tube made of a material that is compatible with the working fluid such as copper for water heat pipes, or aluminium for ammonia heat pipes. Typically, a vacuum pump is used to remove the air from the empty heat pipe. The heat pipe is partially filled with a working fluid and then sealed. The working fluid mass is chosen so that the heat pipe contains both vapor and liquid over the operating temperature range.
The stated/recommended operating temperature of a given heat pipe system is critically important. Below the operating temperature, the liquid is too cold and cannot vaporize into a gas. Above the operating temperature, all the liquid has turned to gas, and the environmental temperature is too high for any of the gas to condense. Thermal conduction is still possible through the walls of the heat pipe, but at a greatly reduced rate of thermal transfer. In addition, for a given heat input, it is necessary that a minimum temperature of the working fluid be attained; while at the other end, any additional increase (deviation) in the heat transfer coefficient from the initial design will tend to inhibit the heat pipe action. This can be counterintuitive, in the sense that if a heat pipe system is aided by a fan, then the heat pipe operation may break down, resulting in a reduced effectiveness of the thermal management system—potentially severely reduced. The operating temperature and the maximum heat transport capacity of a heat pipe—limited by its capillary or other structure used to return the fluid to the hot area (centrifugal force, gravity, etc.)—are therefore inescapably and closely related.
Working fluids are chosen according to the temperatures at which the heat pipe must operate, with examples ranging from liquid helium for extremely low temperature applications (2–4 K) to mercury (523–923 K), sodium (873–1473 K) and even indium (2000–3000 K) for extremely high temperatures. The vast majority of heat pipes for room temperature applications use ammonia (213–373 K), alcohol (methanol (283–403 K) or ethanol (273–403 K)), or water (298–573 K) as the working fluid. Copper/water heat pipes have a copper envelope, use water as the working fluid and typically operate in the temperature range of 20 to 150 °C. Water heat pipes are sometimes filled by partially filling with water, heating until the water boils and displaces the air, and then sealed while hot.
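As a rough illustration of how a working fluid is matched to an operating temperature, the sketch below encodes the approximate ranges quoted above (in kelvin) and returns the candidates for a target temperature; the selection logic itself is only an assumption for illustration.

```python
# Pick candidate heat pipe working fluids for a target operating temperature,
# using the approximate ranges quoted in the text (kelvin).
FLUID_RANGES_K = {
    "helium":   (2, 4),
    "ammonia":  (213, 373),
    "methanol": (283, 403),
    "ethanol":  (273, 403),
    "water":    (298, 573),
    "mercury":  (523, 923),
    "sodium":   (873, 1473),
    "indium":   (2000, 3000),
}

def candidate_fluids(t_kelvin):
    """Return fluids whose quoted range contains the target temperature."""
    return [name for name, (lo, hi) in FLUID_RANGES_K.items() if lo <= t_kelvin <= hi]

print(candidate_fluids(350))   # around 77 °C -> ['ammonia', 'methanol', 'ethanol', 'water']
```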
For the heat pipe to transfer heat, it must contain saturated liquid and its vapor (gas phase). The saturated liquid vaporizes and travels to the condenser, where it is cooled and turned back to a saturated liquid. In a standard heat pipe, the condensed liquid is returned to the evaporator using a wick structure exerting a capillary action on the liquid phase of the working fluid. Wick structures used in heat pipes include sintered metal powder, screen, and grooved wicks, which have a series of grooves parallel to the pipe axis. When the condenser is located above the evaporator in a gravitational field, gravity can return the liquid. In this case, the heat pipe is a thermosiphon. Finally, rotating heat pipes use centrifugal forces to return liquid from the condenser to the evaporator.
Heat pipes contain no mechanical moving parts and typically require no maintenance, though non-condensable gases that diffuse through the pipe's walls, that result from breakdown of the working fluid, or that exist as original impurities in the material, may eventually reduce the pipe's effectiveness at transferring heat.
The advantage of heat pipes over many other heat-dissipation mechanisms is their great efficiency in transferring heat. A pipe one inch in diameter and two feet long can transfer 3.7 kW (12,500 BTU per hour) with only a small temperature drop from end to end. Some heat pipes have demonstrated a heat flux of more than 23 kW/cm2, about four times the heat flux through the surface of the Sun.
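The comparison with the Sun can be checked with the Stefan–Boltzmann law, q = σT⁴, using the standard solar effective surface temperature of about 5772 K (a textbook value, not taken from this article):

```python
# Compare a demonstrated heat pipe flux with the radiative flux at the Sun's surface.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W/(m^2*K^4)
T_SUN = 5772.0          # effective solar surface temperature, K (standard value)

q_sun_W_per_m2 = SIGMA * T_SUN ** 4
q_sun_kW_per_cm2 = q_sun_W_per_m2 / 1e4 / 1e3     # W/m^2 -> kW/cm^2

q_heat_pipe = 23.0                                 # kW/cm^2, figure quoted in the text
print(f"solar surface flux ≈ {q_sun_kW_per_cm2:.1f} kW/cm^2")   # ≈ 6.3 kW/cm^2
print(f"ratio ≈ {q_heat_pipe / q_sun_kW_per_cm2:.1f}x")         # ≈ 3.7, 'about four times'
```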
Heat pipes have an envelope, a wick, and a working fluid. Heat pipes are designed for very long term operation with no maintenance, so the heat pipe wall and wick must be compatible with the working fluid. Some material/working fluid pairs that appear to be compatible are not. For example, water in an aluminum envelope will develop significant amounts of non-condensable gas within hours or days, hindering normal heat pipe operation. This issue is primarily due to the oxidation and corrosion of aluminum in the presence of water, which releases hydrogen gas that accumulates as a non-condensable gas.
Since heat pipes were rediscovered by George Grover in 1963, extensive life tests have been conducted to determine compatible envelope/fluid pairs, some going on for decades. In a heat pipe life test, heat pipes are operated for long periods of time, and monitored for problems such as non-condensable gas generation, material transport, and corrosion.
The most commonly used envelope (and wick)/fluid pairs include:
Copper envelope with water working fluid for electronics cooling. This is by far the most common type of heat pipe.
Copper or steel envelope with refrigerant R134a working fluid for energy recovery in HVAC systems.
Aluminium envelope with ammonia working fluid for spacecraft thermal control.
Superalloy envelope with alkali metal (cesium, potassium, sodium) working fluid for high temperature heat pipes, most commonly used for calibrating primary temperature measurement devices.
Other pairs include stainless steel envelopes with nitrogen, oxygen, neon, hydrogen, or helium working fluids at temperatures below 100 K, copper/methanol heat pipes for electronics cooling when the heat pipe must operate below the water range, aluminium/ethane heat pipes for spacecraft thermal control in environments where ammonia can freeze, and refractory metal envelope/lithium working fluid for very high temperature applications.
Heat pipes must be tuned to particular cooling conditions. The choice of pipe material, size, and coolant all have an effect on the optimal temperatures at which heat pipes work. When used outside of its design heat range, the heat pipe's thermal conductivity is effectively reduced to the heat conduction properties of its solid metal casing alone. In the case of a copper casing, that is around 1/80 of the original flux. This is because outside the intended temperature range the working fluid will not undergo phase change; below the range, the working fluid never vaporizes, and above the range it never condenses.
Most manufacturers cannot make a traditional heat pipe smaller than 3 mm in diameter due to material limitations. Heat pipes containing graphene have been demonstrated which can improve cooling performance in electronics.
Types
In addition to standard, constant conductance heat pipes (CCHPs), there are a number of other types of heat pipes, including:
Vapor chambers (planar heat pipes), which are used for heat flux transformation, and isothermalization of surfaces
Variable conductance heat pipes (VCHPs), which use a Non-Condensable Gas (NCG) to change the heat pipe effective thermal conductivity as power or the heat sink conditions change
Pressure controlled heat pipes (PCHPs), which are a VCHP where the volume of the reservoir, or the NCG mass can be changed, to give more precise temperature control
Diode heat pipes, which have a high thermal conductivity in the forward direction, and a low thermal conductivity in the reverse direction
Thermosyphons, which are heat pipes where the liquid is returned to the evaporator by gravitational/accelerational forces,
Rotating heat pipes, where the liquid is returned to the evaporator by centrifugal forces
Vapor chamber
Thin planar heat pipes (heat spreaders or flat heat pipes) have the same primary components as tubular heat pipes: a hermetically sealed hollow vessel, a working fluid, and a closed-loop capillary recirculation system. In addition, an internal support structure or a series of posts are generally used in a vapor chamber to accommodate clamping pressures sometimes up to 90 PSI. This helps prevent collapse of the flat top and bottom when the pressure is applied.
There are two main applications for vapor chambers. First, they are used when high powers and heat fluxes are applied to a relatively small evaporator. Heat input to the evaporator vaporizes liquid, which flows in two dimensions to the condenser surfaces. After the vapor condenses on the condenser surfaces, capillary forces in the wick return the condensate to the evaporator. Note that most vapor chambers are insensitive to gravity, and will still operate when inverted, with the evaporator above the condenser. In this application, the vapor chamber acts as a heat flux transformer, cooling a high heat flux from an electronic chip or laser diode, and transforming it to a lower heat flux that can be removed by natural or forced convection. With special evaporator wicks, vapor chambers can remove 2000 W over 4 cm2, or 700 W over 1 cm2.
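The heat-flux-transformer role follows from energy conservation: the same power spread over a larger condenser area gives a proportionally lower flux. A minimal sketch, using the 700 W over 1 cm² figure from the text and an assumed condenser area:

```python
# Heat flux transformation in a vapor chamber: same power, larger area, lower flux.
Q = 700.0                 # W, applied to the evaporator (figure from the text)
A_evaporator_cm2 = 1.0    # cm^2 (figure from the text)
A_condenser_cm2 = 100.0   # cm^2, assumed spreader/condenser area

flux_in = Q / A_evaporator_cm2      # 700 W/cm^2 at the chip
flux_out = Q / A_condenser_cm2      # 7 W/cm^2 at the heat sink interface
print(flux_in, "W/cm^2 in ->", flux_out, "W/cm^2 out")
```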
Another major usage of vapor chambers is for cooling purposes in gaming laptops. As vapor chambers are a flatter and more two-dimensional method of heat dissipation, sleeker gaming laptops benefit greatly from them compared to traditional heat pipes. For example, the vapor chamber cooling in Lenovo's Legion 7i was one of its key selling points (although it was misleadingly advertised as being present in all models, when in fact only a few configurations had it).
Second, compared to a one-dimensional tubular heat pipe, the width of a two-dimensional heat pipe allows an adequate cross section for heat flow even with a very thin device. These thin planar heat pipes are finding their way into "height sensitive" applications, such as notebook computers and surface mount circuit board cores. It is possible to produce flat heat pipes as thin as 1.0 mm (slightly thicker than a 0.76 mm credit card).
Variable conductance
Standard heat pipes are constant conductance devices, where the heat pipe operating temperature is set by the source and sink temperatures, the thermal resistances from the source to the heat pipe, and the thermal resistances from the heat pipe to the sink. In these heat pipes, the temperature drops linearly as the power or condenser temperature is reduced. For some applications, such as satellite or research balloon thermal control, the electronics will be overcooled at low powers, or at the low sink temperatures. Variable Conductance Heat Pipes (VCHPs) are used to passively maintain the temperature of the electronics being cooled as power and sink conditions change.
Variable conductance heat pipes have two additions compared to a standard heat pipe: 1. a reservoir, and 2. a non-condensable gas (NCG) added to the heat pipe, in addition to the working fluid. This non-condensable gas is typically argon for standard Variable conductance heat pipes, and helium for thermosyphons. When the heat pipe is not operating, the non-condensable gas and working fluid vapor are mixed throughout the heat pipe vapor space. When the variable conductance heat pipe is operating, the non-condensable gas is swept toward the condenser end of the heat pipe by the flow of the working fluid vapor. Most of the non-condensable gas is located in the reservoir, while the remainder blocks a portion of the heat pipe condenser. The variable conductance heat pipe works by varying the active length of the condenser. When the power or heat sink temperature is increased, the heat pipe vapor temperature and pressure increase. The increased vapor pressure forces more of the non-condensable gas into the reservoir, increasing the active condenser length and the heat pipe conductance. Conversely, when the power or heat sink temperature is decreased, the heat pipe vapor temperature and pressure decrease, and the non-condensable gas expands, reducing the active condenser length and heat pipe conductance. The addition of a small heater on the reservoir, with the power controlled by the evaporator temperature, will allow thermal control of roughly ±1-2 °C. In one example, the evaporator temperature was maintained in a ±1.65 °C control band, as power was varied from 72 to 150 W, and heat sink temperature varied from +15 °C to -65 °C.
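The variable conductance behaviour is often approximated with a "flat front" model: the non-condensable gas is assumed to form a sharp plug whose volume follows from the ideal gas law at the plug temperature and the working-fluid vapor pressure, and the active condenser length is whatever the plug does not block. The sketch below is only a schematic of that idea; all dimensions, the NCG charge, and the vapor pressure correlation are assumed for illustration.

```python
# 'Flat front' sketch of a variable conductance heat pipe (VCHP).
# The non-condensable gas (NCG) plug shrinks as vapor pressure rises,
# exposing more condenser length. All numbers are illustrative assumptions.
R = 8.314                    # J/(mol*K)
n_ncg = 1.0e-4               # mol of NCG charged (assumed)
T_gas = 280.0                # K, temperature of the gas-blocked zone/reservoir (assumed)
V_reservoir = 2.0e-5         # m^3 (assumed)
A_vapor = 5.0e-5             # m^2, vapor space cross-section (assumed)
L_condenser = 0.50           # m, total condenser length (assumed)

def saturation_pressure(T):
    """Rough water vapor pressure in Pa (Antoine-type fit, assumed here)."""
    return 133.322 * 10 ** (8.07131 - 1730.63 / (T - 273.15 + 233.426))

def active_condenser_length(T_vapor):
    P = saturation_pressure(T_vapor)
    V_ncg = n_ncg * R * T_gas / P                     # volume occupied by the NCG plug
    blocked = max(0.0, (V_ncg - V_reservoir) / A_vapor)
    return max(0.0, L_condenser - blocked)

for T in (300, 320, 340):
    print(T, "K ->", round(active_condenser_length(T), 3), "m active")
```

As the vapor temperature (and hence pressure) rises, the printed active length grows from zero toward the full condenser length, which is the qualitative behaviour described above.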
Pressure controlled heat pipes (PCHPs) can be used when tighter temperature control is required. In a pressure controlled heat pipe, the evaporator temperature is used to either vary the reservoir volume, or the amount of non-condensable gas in the heat pipe. Pressure controlled heat pipes have shown milli-Kelvin temperature control.
Diode
Conventional heat pipes transfer heat in either direction, from the hotter to the colder end of the heat pipe. Several different heat pipes act as a thermal diode, transferring heat in one direction, while acting as an insulator in the other:
Thermosyphons, which only transfer heat from the bottom to the top of the thermosyphon, where the condensate returns by gravity. When the thermosyphon is heated at the top, there is no liquid available to evaporate.
Rotating heat pipes, where the heat pipe is shaped so that liquid can only travel by centrifugal forces from the nominal evaporator to the nominal condenser. Again, no liquid is available when the nominal condenser is heated.
Vapor trap diode heat pipes.
Liquid trap diode heat pipes.
A vapor trap diode is fabricated in a similar fashion to a variable conductance heat pipe, with a gas reservoir at the end of the condenser. During fabrication, the heat pipe is charged with the working fluid and a controlled amount of a non-condensable gas (NCG). During normal operation, the flow of the working fluid vapor from the evaporator to the condenser sweeps the non-condensable gas into the reservoir, where it doesn't interfere with the normal heat pipe operation. When the nominal condenser is heated, the vapor flow is from the nominal condenser to the nominal evaporator. The non-condensable gas is dragged along with the flowing vapor, completely blocking the nominal evaporator, and greatly increasing the thermal resistivity of the heat pipe. In general, there is some heat transfer to the nominal adiabatic section. Heat is then conducted through the heat pipe walls to the evaporator. In one example, a vapor trap diode carried 95 W in the forward direction, and only 4.3 W in the reverse direction.
A liquid trap diode has a wicked reservoir at the evaporator end of the heat pipe, with a separate wick that is not in communication with the wick in the remainder of the heat pipe. During normal operation, the evaporator and reservoir are heated. The vapor flows to the condenser, and liquid returns to the evaporator by capillary forces in the wick. The reservoir eventually dries out, since there is no method for returning liquid. When the nominal condenser is heated, liquid condenses in the evaporator and the reservoir. While the liquid can return to the nominal condenser from the nominal evaporator, the liquid in the reservoir is trapped, since the reservoir wick is not connected. Eventually, all of the liquid is trapped in the reservoir, and the heat pipe ceases operation.
Thermosyphons
Most heat pipes use a wick to return the liquid from the condenser to the evaporator, allowing the heat pipe to operate in any orientation. The liquid is sucked up back to the evaporator by capillary action, similar to the way that a sponge sucks up water when an edge is placed in contact with a pool of water. However the maximum adverse elevation (evaporator over condenser) is relatively small, on the order of 25 cm long for a typical water heat pipe.
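The roughly 25 cm figure can be estimated from the capillary rise (Jurin's law), h = 2σ·cosθ/(ρ·g·r_eff); the sketch below uses round-number water properties and an assumed effective wick pore radius.

```python
# Capillary rise limit for a water heat pipe wick (Jurin's law):
# h = 2*sigma*cos(theta) / (rho * g * r_eff)
import math

sigma = 0.059     # N/m, water surface tension near 100 °C (approximate)
theta = 0.0       # perfect wetting assumed
rho = 958.0       # kg/m^3, water density near 100 °C
g = 9.81          # m/s^2
r_eff = 50e-6     # m, effective wick pore radius (assumed)

h = 2 * sigma * math.cos(theta) / (rho * g * r_eff)
print(f"maximum adverse elevation ≈ {h * 100:.0f} cm")   # ≈ 25 cm
```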
If however, the evaporator is located below the condenser, the liquid can drain back by gravity instead of requiring a wick, and the distance between the two can be much longer. Such a gravity aided heat pipe is known as a thermosyphon.
In a thermosyphon, liquid working fluid is vaporized by a heat supplied to the evaporator at the bottom of the heat pipe. The vapor travels to the condenser at the top of the heat pipe, where it condenses. The liquid then drains back to the bottom of the heat pipe by gravity, and the cycle repeats. Thermosyphons are diode heat pipes; when heat is applied to the condenser end, there is no condensate available, and hence no way to form vapor and transfer heat to the evaporator.
While a typical terrestrial water heat pipe is less than 30 cm long, thermosyphons are often several meters long. The thermosyphons used to cool the Alaska pipeline were roughly 11 to 12 m long. Even longer thermosyphons have been proposed for the extraction of geothermal energy. For example, Storch et al. fabricated a 53 mm I.D., 92 m long propane thermosyphon that carried roughly 6 kW of heat. Their scalability to large sizes also makes them relevant for solar thermal and HVAC applications.
Loop
A loop heat pipe (LHP) is a passive two-phase transfer device related to the heat pipe. It can carry higher power over longer distances by having co-current liquid and vapor flow, in contrast to the counter-current flow in a heat pipe. This allows the wick in a loop heat pipe to be required only in the evaporator and compensation chamber. Micro loop heat pipes have been developed and successfully employed in a wide sphere of applications both on the ground and in space.
Oscillating or pulsating
An oscillating heat pipe (OHP), also known as a pulsating heat pipe (PHP), is only partially filled with liquid working fluid. The pipe is arranged in a serpentine pattern in which freely moving liquid and vapor segments alternate. Oscillation takes place in the working fluid; the pipe remains motionless. These have been investigated for many applications, including cooling photovoltaic panels, cooling electronic devices, heat recovery systems, fuel cell systems, HVAC systems, and desalination. More and more, PHPs are synergistically combined with phase change materials.
Heat transfer
Heat pipes employ phase change to transfer thermal energy from one point to another by the vaporization and condensation of a working fluid or coolant. Heat pipes rely on a temperature difference between the ends of the pipe, and cannot lower temperatures at either end below the ambient temperature (hence they tend to equalize the temperature within the pipe).
When one end of the heat pipe is heated, the working fluid inside the pipe at that end vaporizes and increases the vapor pressure inside the cavity of the heat pipe. The latent heat of vaporization absorbed by the working fluid reduces the temperature at the hot end of the pipe.
The vapor pressure over the hot liquid working fluid at the hot end of the pipe is higher than the equilibrium vapor pressure over the condensing working fluid at the cooler end of the pipe, and this pressure difference drives a rapid mass transfer to the condensing end, where the excess vapor condenses, releases its latent heat, and warms the cool end of the pipe. Non-condensing gases (caused by contamination, for instance) in the vapor impede the gas flow and reduce the effectiveness of the heat pipe, particularly at low temperatures, where vapor pressures are low. The speed of molecules in a gas is approximately the speed of sound, and in the absence of non-condensing gases (i.e., if there is only a gas phase present) this is the upper limit to the velocity with which they could travel in the heat pipe. In practice, the speed of the vapor through the heat pipe is limited by the rate of condensation at the cold end and is far lower than the molecular speed. The condensation rate is close to the sticking coefficient times the molecular speed times the gas density if the condensing surface is very cold; however, if the surface is close to the temperature of the gas, evaporation caused by the finite temperature of the surface largely cancels this heat flux. If the temperature difference is more than some tens of degrees, the vaporization from the surface is typically negligible, as can be assessed from the vapor pressure curves. In most cases, with very efficient heat transport through the gas, it is very challenging to maintain such significant temperature differences between the gas and the condensing surface, and such a temperature difference itself corresponds to a large effective thermal resistance. The bottleneck is often less severe at the heat source, as the gas densities are higher there, corresponding to higher maximum heat fluxes.
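The condensation-rate bound mentioned above is essentially the kinetic-theory (Hertz–Knudsen) flux, of order the sticking coefficient times gas density times mean molecular speed; multiplying by the latent heat gives an upper estimate of the condensing heat flux. A minimal sketch for water vapor at an assumed temperature (this is a theoretical ceiling for a very cold surface, far above practical fluxes):

```python
# Kinetic-theory upper estimate of the condensing heat flux (Hertz-Knudsen):
# mass flux ≈ s * P * sqrt(M / (2*pi*R*T)); heat flux ≈ mass flux * latent heat.
import math

R = 8.314          # J/(mol*K)
M = 0.018          # kg/mol, water
T = 373.0          # K, assumed vapor temperature
P = 101325.0       # Pa, saturation pressure at 100 °C
s = 1.0            # sticking (accommodation) coefficient, upper bound
h_fg = 2.26e6      # J/kg, latent heat of vaporization of water

mass_flux = s * P * math.sqrt(M / (2 * math.pi * R * T))   # ≈ 97 kg/(m^2*s)
heat_flux = mass_flux * h_fg                                # W/m^2
print(f"kinetic-limit heat flux ≈ {heat_flux / 1e6:.0f} MW/m^2 ≈ {heat_flux / 1e7:.0f} kW/cm^2")
```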
The condensed working fluid then flows back to the hot end of the pipe. In the case of vertically oriented heat pipes the fluid may be moved by the force of gravity. In the case of heat pipes containing wicks, the fluid is returned by capillary action.
When making heat pipes, there is no need to create a vacuum in the pipe. One simply boils the working fluid in the heat pipe until the resulting vapor has purged the non-condensing gases from the pipe, and then seals the end.
An interesting property of heat pipes is the temperature range over which they are effective. Initially, it might be suspected that a water-charged heat pipe only works when the hot end reaches the boiling point (100 °C, 212 °F, at normal atmospheric pressure) and steam is transferred to the cold end. However, the boiling point of water depends on the absolute pressure inside the pipe. In an evacuated pipe, water vaporizes from its triple point (0.01 °C, 32 °F) to its critical point (374 °C; 705 °F), as long as the heat pipe contains both liquid and vapor. Thus a heat pipe can operate at hot-end temperatures as low as just slightly warmer than the melting point of the working fluid, although the maximum rate of heat transfer is low at temperatures below 25 °C (77 °F). Similarly, a heat pipe with water as a working fluid can work well above the atmospheric boiling point (100 °C, 212 °F). The maximum temperature for long term water heat pipes is 270 °C (518 °F), with heat pipes operating up to 300 °C (572 °F) for short term tests.
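The point that an evacuated water heat pipe works well below 100 °C follows from the saturation curve: the liquid boils at whatever pressure corresponds to the hot-end temperature. A sketch using an Antoine-equation fit (a standard correlation assumed here for illustration, valid roughly from 1 to 100 °C):

```python
# Saturation (vapor) pressure of water from an Antoine-equation fit,
# log10(P_mmHg) = A - B / (C + T_celsius), valid roughly 1-100 °C.
A, B, C = 8.07131, 1730.63, 233.426

def p_sat_kpa(t_celsius):
    p_mmhg = 10 ** (A - B / (C + t_celsius))
    return p_mmhg * 0.133322          # mmHg -> kPa

for t in (5, 25, 50, 100):
    print(f"{t:>3} °C : {p_sat_kpa(t):7.2f} kPa")
# In an evacuated pipe the liquid boils at these (sub-atmospheric) pressures, so
# evaporation/condensation heat transfer starts just above freezing, not at 100 °C.
```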
The main reason for the effectiveness of heat pipes is the vaporization and condensation of the working fluid. The heat of vaporization greatly exceeds the specific heat capacity. Using water as an example, the energy needed to evaporate one gram of water is 540 times the amount of energy needed to raise the temperature of that same one gram of water by 1 °C. Almost all of that energy is rapidly transferred to the "cold" end when the fluid condenses there, making a very effective heat transfer system with no moving parts.
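The "540 times" figure is simply the ratio of water's latent heat of vaporization to the sensible heat for a 1 °C rise:

```python
# Ratio of latent heat of vaporization to sensible heat for a 1 °C rise (water).
h_fg = 2257.0    # J/g, latent heat of vaporization at 100 °C
c_p = 4.18       # J/(g*°C), specific heat of liquid water
print(round(h_fg / c_p))   # ≈ 540
```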
Applications
Spacecraft
The spacecraft thermal control system has the function to keep all components on the spacecraft within their acceptable temperature range. This is complicated by the following:
Widely varying external conditions, such as eclipses
Micro-g environment
Heat removal from the spacecraft by thermal radiation only
Limited electrical power available, favoring passive solutions
Long lifetimes, with no possibility of maintenance
Some spacecraft are designed to last for 20 years, so heat transport without electrical power or moving parts is desirable. Rejecting the heat by thermal radiation means that large radiator panels (multiple square meters) are required. Heat pipes and loop heat pipes are used extensively in spacecraft, since they don't require any power to operate, operate nearly isothermally, and can transport heat over long distances.
Grooved wicks are used in spacecraft heat pipes, as shown in the first photograph in this section. The heat pipes are formed by extruding aluminium, and typically have an integral flange to increase the heat transfer area, which lowers the temperature drop. Grooved wicks are used in spacecraft, instead of the screen or sintered wicks used for terrestrial heat pipes, since the heat pipes don't have to operate against gravity in space. This allows spacecraft heat pipes to be several meters long, in contrast to the roughly 25 cm maximum length for a water heat pipe operating on Earth. Ammonia is the most common working fluid for spacecraft heat pipes. Ethane is used when the heat pipe must operate at temperatures below the ammonia freezing temperature.
The second figure shows a typical grooved aluminium/ammonia variable conductance heat pipe (VCHP) for spacecraft thermal control. The heat pipe is an aluminium extrusion, similar to that shown in the first figure. The bottom flanged area is the evaporator. Above the evaporator, the flange is machined off to allow the adiabatic section to be bent. The condenser is shown above the adiabatic section. The non-condensable gas (NCG) reservoir is located above the main heat pipe. The valve is removed after filling and sealing the heat pipe. When electric heaters are used on the reservoir, the evaporator temperature can be controlled within ±2 K of the setpoint.
Computer systems
Heat pipes began to be used in computer systems in the late 1990s, when increased power requirements and subsequent increases in heat emission resulted in greater demands on cooling systems. They are now extensively used in many modern computer systems, typically to move heat away from components such as CPUs and GPUs to heat sinks where thermal energy may be dissipated into the environment.
Solar thermal
Heat pipes are also widely used in solar thermal water heating applications in combination with evacuated tube solar collector arrays. In these applications, distilled water is commonly used as the heat transfer fluid inside a sealed length of copper tubing that is located within an evacuated glass tube and oriented towards the Sun. In the connecting pipes, heat transport occurs in the vapour (steam) phase, because the heat-transfer medium is converted into steam in a large section of the collecting pipeline.
In solar thermal water heating applications, an individual absorber tube of an evacuated tube collector is up to 40% more efficient compared to more traditional "flat plate" solar water collectors. This is largely due to the vacuum that exists within the tube, which slows down convective and conductive heat loss. Relative efficiencies of the evacuated tube system are reduced however, when compared to flat plate collectors because the latter have a larger aperture size and can absorb more solar energy per unit area. This means that while an individual evacuated tube has better insulation (lower conductive and convective losses) due to the vacuum created inside the tube, an array of tubes found in a completed solar assembly absorbs less energy per unit area due to there being less absorber surface area pointed toward the Sun because of the rounded design of an evacuated tube collector. Therefore, real world efficiencies of both designs are about the same.
Evacuated tube collectors reduce the need for anti-freeze additives since the vacuum helps slow heat loss. However, under prolonged exposure to freezing temperatures the heat transfer fluid can still freeze, and precautions must be taken to ensure that the freezing liquid does not damage the evacuated tube when designing systems for such environments. Properly designed solar thermal water heaters can be frost protected to below −3 °C with special additives and are being used in Antarctica to heat water.
Permafrost cooling
Building on permafrost is difficult because heat from the structure can thaw the permafrost. Heat pipes are used in some cases to avoid the risk of destabilization. For example, in the Trans-Alaska Pipeline System residual ground heat remaining in the oil as well as heat produced by friction and turbulence in the moving oil could conduct down the pipe's support legs and melt the permafrost on which the supports are anchored. This would cause the pipeline to sink and possibly be damaged. To prevent this, each vertical support member has been mounted with four vertical heat pipe thermosyphons.
The significant feature of a thermosyphon is that it is passive and does not require any external power to operate. During the winter, the air is colder than the ground around the supports: the liquid at the bottom of the thermosyphon is vaporized by heat absorbed from the ground, the vapor condenses in the cold air at the top, and the condensate flows back down, cooling the surrounding permafrost and lowering its temperature. During the summer, the thermosyphons stop operating, since the air at the top is no longer cold enough to condense the vapor. In the Trans-Alaska Pipeline System, ammonia was initially used as the working fluid, but it was later replaced with carbon dioxide due to blockages.
Heat pipes are also used to keep the permafrost frozen alongside parts of the Qinghai–Tibet Railway where the embankment and track absorb the sun's heat. Vertical heat pipes on either side of relevant formations prevent that heat from spreading any further into the surrounding permafrost.
Depending on application there are several thermosyphon designs: thermoprobe, thermopile, depth thermosyphon, sloped-thermosyphon foundation, flat loop thermosyphon foundation, and hybrid flat loop thermosyphon foundation.
Cooking
The first commercial heat pipe product was the "Thermal Magic Cooking Pin" developed by Energy Conversion Systems, Inc. and first sold in 1966. The cooking pins used water as the working fluid. The envelope was stainless steel, with an inner copper layer for compatibility. During operation, one end of the heat pipe is poked through the roast. The other end extends into the oven where it draws heat to the middle of the roast. The high effective conductivity of the heat pipe reduces the cooking time for large pieces of meat by one-half.
The principle has also been applied to camping stoves. The heat pipe transfers a large volume of heat at low temperature to allow goods to be baked and other dishes to be cooked in camping-type situations.
Ventilation heat recovery
In heating, ventilation and air-conditioning (HVAC) systems, heat pipes are positioned within the supply and exhaust air streams of an air-handling system or in the exhaust gases of an industrial process, in order to recover the heat energy.
The device consists of a battery of multi-row finned heat pipe tubes located within both the supply and exhaust air streams. The system recovers heat from the exhaust and transfers it to the intake.
Because of the characteristics of the device, better efficiencies are obtained when the unit is positioned upright with the supply-air side mounted over the exhaust air side, which allows the liquid refrigerant to flow quickly back to the evaporator aided by the force of gravity. Generally, gross heat transfer efficiencies of up to 75% are claimed by manufacturers.
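The recovered heat can be estimated with a simple effectiveness model: the supply air is warmed toward the exhaust-air temperature in proportion to the unit's effectiveness. The sketch below uses assumed airflow, temperatures, and effectiveness, not data from any particular unit.

```python
# Heat pipe heat exchanger recovery estimate using a simple effectiveness model.
# All operating numbers are assumed for illustration.
effectiveness = 0.60        # assumed; manufacturers claim up to roughly 0.75
t_exhaust_in = 22.0         # °C, warm exhaust air (assumed)
t_supply_in = -5.0          # °C, cold outdoor supply air (assumed)
mass_flow = 1.2             # kg/s of supply air (assumed)
c_p_air = 1005.0            # J/(kg*K)

t_supply_out = t_supply_in + effectiveness * (t_exhaust_in - t_supply_in)
q_recovered = mass_flow * c_p_air * (t_supply_out - t_supply_in)
print(f"supply air preheated to {t_supply_out:.1f} °C, recovering {q_recovered / 1000:.1f} kW")
```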
Nuclear power conversion
Grover and his colleagues were working on cooling systems for nuclear power cells for spacecraft, where extreme thermal conditions are encountered. These alkali metal heat pipes transferred heat from the heat source to a thermionic or thermoelectric converter to generate electricity.
Since the early 1990s, numerous nuclear reactor power systems have been proposed using heat pipes for transporting heat between the reactor core and the power conversion system. The first nuclear reactor to produce electricity using heat pipes was operated on September 13, 2012, in the Demonstration Using Flattop Fissions (DUFF) experiment.
Wankel rotary combustion engines
Ignition of the fuel mixture always takes place in the same part of Wankel engines, inducing thermal dilatation disparities that reduce power output, impair fuel economy, and accelerate wear. In the SAE paper "A Heat Pipe Assisted Air-Cooled Rotary Wankel Engine for Improved Durability, Power and Efficiency", Wei Wu et al. report a reduction in top engine temperature from 231 °C to 129 °C, and a reduction in the temperature difference from 159 °C to 18 °C, for a typical small-chamber-displacement air-cooled unmanned aerial vehicle engine.
Heat pipe heat exchangers
Heat exchangers transfer heat from a hot stream to a cold stream of air, water or oil. A heat pipe heat exchanger contains several heat pipes, each of which acts as an individual heat exchanger in itself. This increases efficiency, life span and safety. If one heat pipe breaks, only a small amount of liquid is released, which is important for certain industrial processes such as aluminium casting, and the heat exchanger as a whole remains operable.
Currently developed applications
Due to the great adaptability of heat pipes, research explores the implementation of heat pipes into various systems:
Improving the efficiency of geothermal heating to prevent slippery roads during winter in cold climate zones
Increased efficiency of photovoltaic cells by coupling the solar panel to a heat pipe system, which transports heat away from overheating panels to maintain the optimal temperature for maximum energy generation. Additionally, the tested setup uses the recovered heat to warm, for instance, water
Hybrid control rod heat pipes to shut down a nuclear reactor in case of an emergency while simultaneously transferring decay heat away to prevent the reactor from overheating
See also
References
External links
Frontiers in Heat Pipes (FHP) – An International Journal
Previous edition of the Joint International Heat Pipe Conference & International Heat Pipe Symposium (20IHPC & 14IHPS), 7-10 September 2021
Upcoming edition of the Joint International Heat Pipe Conference & International Heat Pipe Symposium (21IHPC & 15IHPS), 5-9 February 2023
House_N Research (mit.edu)
Heat pipe selection guide (pdf)
Heat Pipe Basics and Demonstration
Heating, ventilation, and air conditioning
Computer hardware cooling
Heat conduction
Spacecraft components
Heat transfer | Heat pipe | [
"Physics",
"Chemistry"
] | 7,831 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics",
"Heat conduction"
] |
518,513 | https://en.wikipedia.org/wiki/Thermogenesis | Thermogenesis is the process of heat production in organisms. It occurs in all warm-blooded animals, and also in a few species of thermogenic plants such as the Eastern skunk cabbage, the Voodoo lily (Sauromatum venosum), and the giant water lilies of the genus Victoria. The lodgepole pine dwarf mistletoe, Arceuthobium americanum, disperses its seeds explosively through thermogenesis.
Types
Depending on whether or not they are initiated through locomotion and intentional movement of the muscles, thermogenic processes can be classified as one of the following:
Exercise activity thermogenesis (EAT)
Non-exercise activity thermogenesis (NEAT), energy expended for everything that is not sleeping, eating or sports-like exercise.
Diet-induced thermogenesis (DIT)
Shivering
One method to raise temperature is through shivering. It produces heat because the conversion of the chemical energy of ATP into kinetic energy causes almost all of the energy to show up as heat. Shivering is the process by which the body temperature of hibernating mammals (such as some bats and ground squirrels) is raised as these animals emerge from hibernation.
Non-shivering
Non-shivering thermogenesis occurs in brown adipose tissue (brown fat) that is present in almost all eutherians (swine being the only exception currently known). Brown adipose tissue has a unique uncoupling protein (thermogenin, also known as uncoupling protein 1) that allows the proton gradient built up by the electron transport chain to be dissipated without driving ATP synthesis, thus enabling mitochondria to burn fatty acids and oxygen to generate heat. The atomic structure of human uncoupling protein 1 UCP1 has been solved by cryogenic-electron microscopy. The structure has the typical fold of a member of the SLC25 family. UCP1 is locked in a cytoplasmic-open state by guanosine triphosphate in a pH-dependent manner, preventing proton leak.
In this process, substances such as free fatty acids (derived from triacylglycerols) remove purine (ADP, GDP and others) inhibition of thermogenin, which causes an influx of H+ into the matrix of the mitochondrion and bypasses the ATP synthase channel. This uncouples oxidative phosphorylation, and the energy from the proton motive force is dissipated as heat rather than producing ATP from ADP, which would store chemical energy for the body's use. Thermogenesis can also be produced by leakage of the sodium-potassium pump and the Ca2+ pump. Thermogenesis is contributed to by futile cycles, such as the simultaneous occurrence of lipogenesis and lipolysis or glycolysis and gluconeogenesis. In a broader context, futile cycles can be influenced by activity/rest cycles such as the Summermatter cycle.
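The heat released when protons bypass ATP synthase can be roughly estimated from the proton motive force: the free energy per mole of protons is about F·Δψ (neglecting the smaller ΔpH term). The sketch below uses a typical membrane potential as an assumed value, not a figure from this article.

```python
# Rough energy released per mole of protons crossing the inner mitochondrial
# membrane, estimated from the proton motive force (electrical term only).
F = 96485.0          # C/mol, Faraday constant
delta_psi = 0.18     # V, typical mitochondrial membrane potential (assumed)

energy_per_mol_H = F * delta_psi          # J/mol of protons
print(f"≈ {energy_per_mol_H / 1000:.0f} kJ per mole of H+ dissipated as heat")
# When thermogenin lets protons re-enter the matrix without driving ATP synthase,
# this energy appears as heat instead of being stored in ATP (roughly 50 kJ/mol is
# needed per ATP under cellular conditions, with several protons per ATP).
```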
Acetylcholine stimulates muscle to raise metabolic rate.
The low demands of thermogenesis mean that free fatty acids draw, for the most part, on lipolysis as the method of energy production.
A comprehensive list of human and mouse genes regulating cold-induced thermogenesis (CIT) in living animals (in vivo) or tissue samples (ex vivo) has been assembled and is available in CITGeneDB.
Evolutionary history
In avians and eutherians
The biological processes which allow for thermogenesis in animals did not evolve from a singular, common ancestor. Rather, avian (birds) and eutherian (placental mammalian) lineages developed the ability to perform thermogenesis independently through separate evolutionary processes. The fact that the same evolutionary character evolved independently in two different lineages after their last known common ancestor means that thermogenic processes are classified as an example of convergent evolution. However, while both clades are capable of performing thermogenesis, the biological processes involved are different. The reason that avians and eutherians both developed the capacity to perform thermogenesis is a subject of ongoing study by evolutionary biologists, and two competing explanations have been proposed to explain why this character appears in both lineages.
One explanation for the convergence is the "aerobic capacity" model. This theory suggests that natural selection favored individuals with higher resting metabolic rates, and that as the metabolic capacity of birds and eutherians increased, they developed the capacity for endothermic thermogenesis. Researchers have linked high levels of oxygen consumption with high resting metabolic rates, suggesting that the two are directly correlated. Rather than animals developing the capacity to maintain high and stable body temperatures only to be able to thermoregulate without the aid of the environment, this theory suggests that thermogenesis is actually a by-product of natural selection for higher aerobic and metabolic capacities. These higher metabolic capacities may initially have evolved for the simple reason that animals capable of metabolizing more oxygen for longer periods of time would have been better suited to, for example, run from predators or gather food. This model explaining the development of thermogenesis is older and more widely accepted among evolutionary biologists who study thermogenesis.
The second explanation is the "parental care" model. This theory proposes that the convergent evolution of thermogenesis in birds and eutherians is based on shared behavioral traits. Specifically, birds and eutherians both provide high levels of parental care to young offspring. This high level of care is theorized to give new born or hatched animals the opportunity to mature more rapidly because they have to expend less energy to satisfy their food, shelter, and temperature needs. The "parental care" model thus proposes that higher aerobic capacity was selected for in parents as a means of meeting the needs of their offspring. While the "parental care" model does differ from the "aerobic capacity" model, it shares some similarities in that both explanations for the rise of thermogenesis rest on natural selection favoring individuals with higher aerobic capacities for one reason or another. The primary difference between the two theories is that the "parental care" model proposes that a specific biological function (childcare) resulted in selective pressure for higher metabolic rates.
Despite both relying on similar explanations for the process by which organisms gained the capacity to perform non-shivering thermogenesis, neither of these explanations has secured a large enough consensus to be considered completely authoritative on convergent evolution of NST in birds and mammals, and scientists continue to conduct studies which support both positions.
Non-shivering thermogenesis
Brown Adipose Tissue (BAT) thermogenesis is one of the two known forms of non-shivering thermogenesis (NST). This type of heat-generation occurs only in eutherians, not in birds or other thermogenic organisms. BAT NST occurs when Uncoupling Protein 1 (UCP1) uncouples oxidative phosphorylation in eutherian mitochondria, so that the energy of substrate oxidation is released as heat (Berg et al., 2006, p. 1178). This process generally only begins in eutherians after they have been subjected to low temperatures for an extended period of time, after which the process allows an organism's body to maintain a high and stable temperature without a reliance on environmental thermoregulation mechanisms (such as sunlight/shade). Because eutherians are the only clade which store brown adipose tissue, scientists previously thought that UCP1 evolved in conjunction with brown adipose tissue. However, recent studies have shown that UCP1 can also be found in non-eutherians like fish, birds, and reptiles. This discovery means that UCP1 probably existed in a common ancestor before the radiation of the eutherian lineage. Since this evolutionary split, though, UCP1 has evolved independently in eutherians, through a process which scientists believe was not driven by natural selection, but rather by neutral processes like genetic drift.
Evolution of Skeletal-Muscle Non-Shivering Thermogenesis
The second form of NST occurs in skeletal muscle. While eutherians use both BAT and skeletal muscle NST for thermogenesis, birds only use the latter form. This process has also been shown to occur in rare instances in fish. In skeletal muscle NST, Calcium ions slip across muscle cells to generate heat. Even though BAT NST was originally thought to be the only process by which animals could maintain endothermy, scientists now suspect that skeletal muscle NST was the original form of the process and that BAT NST developed later. Though scientists once also believed that only birds maintained their body temperatures using skeletal muscle NST, research in the late 2010s showed that mammals and other eutherians also use this process when they do not have adequate stores of brown adipose tissue in their bodies.
Skeletal muscle NST might also be used to maintain body temperature in heterothermic mammals during states of torpor or hibernation. Given that early eutherians and the reptiles which later evolved into avian lineages were either heterothermic or ectothermic, both forms of NST are thought not to have developed fully until after the K-pg extinction roughly 66 million years ago. However, some estimates place the evolution of these characters earlier, at roughly 100 mya. It is most likely that the process of evolving the capacity for thermogenesis as it currently exists was a process which began prior to the K-pg extinction and ended well after. The fact that skeletal muscle NST is common among eutherians during periods of torpor and hibernation further supports the theory that this form of thermogenesis is older than BAT NST. This is because early eutherians would not have had the capacity for non-shivering thermogenesis as it currently exists, so they more frequently used torpor and hibernation as means of thermal regulation, relying on systems which, in theory, predate BAT NST. However, there remains no consensus among evolutionary biologists on the order in which the two processes evolved, nor an exact timeframe for their evolution.
Regulation
Non-shivering thermogenesis is regulated mainly by thyroid hormone and the sympathetic nervous system. Some hormones, such as norepinephrine and leptin, may stimulate thermogenesis by activating the sympathetic nervous system. Rising insulin levels after eating may be responsible for diet-induced thermogenesis (thermic effect of food).
Progesterone also increases body temperature.
Thermogenesis from white adipose tissue
A method named the thermogenin-like system (TLS) has recently been proposed to produce thermogenesis from white adipose tissue or from other tissues (such as endothelial or muscle cells). Ultimately, this could lead to new therapeutic approaches for treating morbid obesity or severe diabetes. The proposed model is purely theoretical and relies on light-activated PoXeR pumps integrated into the inner mitochondrial membrane. These pumps allow the passage of protons in such a way that the proton motive force is reduced. This would enable greater consumption of blood glucose by white adipose, endothelial, or muscle cells, thereby potentially lowering blood glucose levels: glycolysis is accelerated when glucose enters the cells, and its products are oxidized through the Krebs cycle in the mitochondria. Since muscle cells have many mitochondria, expressing PoXeR pumps in this tissue is also of interest.
However, the method is invasive, relies on gene therapy, and would require several clinical trials as well as hospitalization to integrate the system into white adipose or muscle tissue in the abdominal fat. It is also a light-responsive system: because external light does not penetrate deeply through the skin, the system must include an implanted light source that alternates periods of green-light activation and deactivation, with the cycle repeating over several weeks, in part to recharge the light system. To ensure that ATP levels do not drop too low (otherwise the cell dies), the system self-regulates: since luciferase emits light only at the expense of ATP, the light stops when ATP falls too far, ATP levels recover, and the light is reactivated to induce thermogenesis.
See also
Thermoregulation
References
External links
Physiology
Heat transfer | Thermogenesis | [
"Physics",
"Chemistry",
"Biology"
] | 2,596 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Physiology",
"Thermodynamics"
] |
518,740 | https://en.wikipedia.org/wiki/Soot | Soot ( ) is a mass of impure carbon particles resulting from the incomplete combustion of hydrocarbons. Soot is considered a hazardous substance with carcinogenic properties. Most broadly, the term includes all the particulate matter produced by this process, including black carbon and residual pyrolysed fuel particles such as coal, cenospheres, charred wood, and petroleum coke classified as cokes or char. It can include polycyclic aromatic hydrocarbons and heavy metals like mercury.
Soot causes various types of cancer and lung disease.
Terminology
Definition
Among scientists, exact definitions for soot vary, depending partly on their field. For example, atmospheric scientists may use a different definition compared to toxicologists. Soot's definition can also vary across time, and from paper to paper even among scientists in the same field. A common feature of the definitions is that soot is composed largely of carbon based particles resulting from the incomplete burning of hydrocarbons or organic fuel such as wood. Some note that soot may be formed by other high temperature processes, not just by burning. Soot typically takes an aerosol form when first created. It tends to eventually settle onto surfaces, though some parts of it may be decomposed while still airborne. In some definitions, soot is defined purely as carbonaceous particles, but in others it is defined to include the whole ensemble of particles resulting from partial combustion of organic matter or fossil fuels - as such it can include non carbon elements like sulphur and even traces of metal. In many definitions, soot is assumed to be black, but in some definitions it can be composed partly or even mainly of brown carbon, and so can also be medium or even light gray in colour.
Related terms
Terms like "soot", "carbon black", and "black carbon" are often used to mean the same thing, even in the scientific literature, but other scientists have stated this is incorrect and that they refer to chemically and physically distinct things.
Carbon black is a term for the industrial production of powdery carbonaceous matter which has been underway since the 19th century. Carbon black is composed almost entirely of elemental carbon. Carbon black is not found in regular soot - only in the special soot that is intentionally produced for its manufacture, mostly from specialised oil furnaces.
Black carbon is a term that arose in the late twentieth century among atmospheric scientists, to describe strongly light absorbing carbonaceous particles which have a significant climate forcing effect - second only to carbon dioxide as a contributor to short-term global warming. The term is sometimes used synonymously with soot, but is now used preferentially in atmospheric science, though some prefer more precise terms like 'light-absorbing carbon'. Unlike carbon black, black carbon is produced unintentionally. The chemical composition of black carbon is much more varied, and typically has a much lower proportion of elemental carbon, compared with carbon black. In some definitions, black carbon also includes charcoal, a type of matter where the chunks tend to be too large to have an aerosol form as is the case with soot.
Sources
Soot as an airborne contaminant in the environment has many different sources, all of which are results of some form of pyrolysis. They include soot from coal burning, internal-combustion engines, power-plant boilers, hog-fuel boilers, ship boilers, central steam-heat boilers, waste incineration, local field burning, house fires, forest fires, fireplaces, and furnaces. These exterior sources also contribute to the indoor environment, alongside indoor sources such as smoking of plant matter, cooking, oil lamps, candles, quartz/halogen bulbs with settled dust, fireplaces, exhaust emissions from vehicles, and defective furnaces. Soot in very low concentrations is capable of darkening surfaces or making particle agglomerates, such as those from ventilation systems, appear black. Soot is the primary cause of "ghosting", the discoloration of walls and ceilings or walls and flooring where they meet. It is generally responsible for the discoloration of the walls above baseboard electric heating units.
The formation and properties of soot depend strongly on the fuel composition, but may also be influenced by flame temperature. Regarding fuel composition, the rank ordering of sooting tendency of fuel components is: naphthalenes → benzenes → aliphatics. However, the order of sooting tendencies of the aliphatics (alkanes, alkenes, and alkynes) varies dramatically depending on the flame type. The difference between the sooting tendencies of aliphatics and aromatics is thought to result mainly from the different routes of formation. Aliphatics appear to first form acetylene and polyacetylenes, which is a slow process; aromatics can form soot both by this route and also by a more direct pathway involving ring condensation or polymerization reactions building on the existing aromatic structure.
Description
The Intergovernmental Panel on Climate Change (IPCC) adopted the description of soot particles given in the glossary of Charlson and Heintzenberg (1995), "Particles formed during the quenching of gases at the outer edge of flames of organic vapours, consisting predominantly of carbon, with lesser amounts of oxygen and hydrogen present as carboxyl and phenolic groups and exhibiting an imperfect graphitic structure".
Formation of soot is a complex process, an evolution of matter in which a number of molecules undergo many chemical and physical reactions within a few milliseconds. Soot always contains nanoparticles of graphite and diamond, a phenomenon known as gemmy soot. Soot is a powder-like form of amorphous carbon. Gas-phase soot contains polycyclic aromatic hydrocarbons (PAHs). The PAHs in soot are known mutagens and are classified as a "known human carcinogen" by the International Agency for Research on Cancer (IARC). Soot forms during incomplete combustion from precursor molecules such as acetylene. It consists of agglomerated nanoparticles with diameters between 6 and 30 nm. The soot particles can be mixed with metal oxides and with minerals and can be coated with sulfuric acid.
Soot formation mechanism
Many details of soot formation chemistry remain unanswered and controversial, but there have been a few agreements:
Soot begins with some precursors or building blocks.
Nucleation of heavy molecules occurs to form particles.
Surface growth of a particle proceeds by adsorption of gas phase molecules.
Coagulation happens via reactive particle–particle collisions.
Oxidation of the molecules and soot particles reduces soot formation.
Hazards
Soot, particularly diesel exhaust pollution, accounts for over one-quarter of the total hazardous pollution in the air.
Among these diesel emission components, particulate matter has been a serious concern for human health due to its direct and broad impact on the respiratory organs. In earlier times, health professionals associated PM10 (diameter < 10 μm) with chronic lung disease, lung cancer, influenza, asthma, and increased mortality rate. However, recent scientific studies suggest that these correlations be more closely linked with fine particles (PM2.5) and ultra-fine particles (PM0.1).
Long-term exposure to urban air pollution containing soot increases the risk of coronary artery disease.
Diesel exhaust (DE) gas is a major contributor to combustion-derived particulate-matter air pollution. In human experimental studies using an exposure chamber setup, DE has been linked to acute vascular dysfunction and increased thrombus formation. This serves as a plausible mechanistic link between the previously described association between particulate matter air pollution and increased cardiovascular morbidity and mortality.
Soot also tends to form in chimneys in domestic houses possessing one or more fireplaces. If a large deposit collects in one, it can ignite and create a chimney fire. Regular cleaning by a chimney sweep should eliminate the problem.
Soot modeling
Soot mechanism is difficult to model mathematically because of the large number of primary components of diesel fuel, complex combustion mechanisms, and the heterogeneous interactions during soot formation. Soot models are broadly categorized into three subgroups: empirical (equations that are adjusted to match experimental soot profiles), semi-empirical (combined mathematical equations and some empirical models which used for particle number density and soot volume and mass fraction), and detailed theoretical mechanisms (covers detailed chemical kinetics and physical models in all phases).
First, empirical models use correlations of experimental data to predict trends in soot production. Empirical models are easy to implement and provide excellent correlations for a given set of operating conditions. However, empirical models cannot be used to investigate the underlying mechanisms of soot production. Therefore, these models are not flexible enough to handle changes in operating conditions; they are useful only for conditions close to those of the experiments from which they were derived.
Second, semi-empirical models solve rate equations that are calibrated using experimental data. Semi-empirical models reduce computational cost primarily by simplifying the chemistry of soot formation and oxidation: they reduce the size of the chemical mechanisms and use simpler molecules, such as acetylene, as precursors.
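As a rough illustration of the semi-empirical approach, the sketch below integrates a generic two-step model in which soot mass grows by an Arrhenius-type formation step from a precursor pool and shrinks by an Arrhenius-type oxidation step. The rate constants, activation energies, pressure exponents and the single-precursor assumption are all illustrative placeholders, not values taken from any particular published model.

```python
import numpy as np

def soot_mass_history(t_end, dt, m_precursor, T, p,
                      A_f=150.0, E_f=5.2e4,   # formation pre-factor and activation energy (placeholders)
                      A_o=1.4e3, E_o=1.2e5,   # oxidation pre-factor and activation energy (placeholders)
                      x_o2=0.05):             # oxygen mole fraction (placeholder)
    """Integrate d(m_soot)/dt = formation(precursor) - oxidation(soot)."""
    R = 8.314  # gas constant, J/(mol K)
    k_form = A_f * p**0.5 * np.exp(-E_f / (R * T))
    k_oxid = A_o * x_o2 * p**1.8 * np.exp(-E_o / (R * T))
    m_soot, history = 0.0, []
    for step in range(int(t_end / dt)):
        formed = k_form * m_precursor * dt    # soot created from the precursor pool
        oxidised = k_oxid * m_soot * dt       # soot consumed by oxidation
        m_precursor = max(m_precursor - formed, 0.0)
        m_soot = max(m_soot + formed - oxidised, 0.0)
        history.append((step * dt, m_soot))
    return history
```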
Detailed theoretical models use extensive chemical mechanisms containing hundreds of chemical reactions in order to predict concentrations of soot. Detailed theoretical soot models contain all the components present in the soot formation with a high level of detailed chemical and physical processes.
Finally, comprehensive models (detailed models) are usually expensive and slow to compute, as they are much more complex than empirical or semi-empirical models. Thanks to recent technological progress in computation, it has become more feasible to use detailed theoretical models and obtain more realistic results; however, further advancement of comprehensive theoretical models is limited by the accuracy of modeling of formation mechanisms.
Additionally, phenomenological models have recently found wide use. Phenomenological soot models, which may be categorized as semi-empirical models, correlate empirically observed phenomena in a way that is consistent with the fundamental theory, but is not directly derived from it. These models use sub-models developed to describe the different processes (or phenomena) observed during combustion. Examples of such sub-models include spray, lift-off, heat-release and ignition-delay models. These sub-models can be developed empirically from observation or by using basic physical and chemical relations. Phenomenological models are reasonably accurate given their relative simplicity. They are useful especially when the accuracy of the model parameters is low. Unlike empirical models, phenomenological models are flexible enough to produce reasonable results when multiple operating conditions change.
Applications
Historically, soot was used in manufacturing artists' paints and shoe polish, as well as a blackener for Russia leather boots. With the advent of the printing press it was used in printing ink well into the 20th century.
See also
Activated carbon
Atmospheric particulate matter
Bistre
Black carbon
Carbon black
Coal
Colorant
Creosote
Diesel particulate matter
Dust
Fullerene
Health effects of coal ash
Health effects of wood smoke
Indian ink
Joss paper
Open burning of waste
Rolling coal
Soot blower
Spheroidal carbonaceous particles
Sulfur dioxide
References
External links
Allotropes of carbon
IARC Group 1 carcinogens
Pollution
Air pollution | Soot | [
"Chemistry"
] | 2,315 | [
"Allotropes of carbon",
"Allotropes"
] |
8,521,549 | https://en.wikipedia.org/wiki/Soft-collinear%20effective%20theory | In quantum field theory, soft-collinear effective theory (or SCET) is a theoretical framework for doing calculations that involve interacting particles carrying widely different energies.
The motivation for developing SCET was to control the infrared divergences that occur in quantum chromodynamics (QCD) calculations that involve particles that are soft—carrying much lower energy or momentum than other particles in the process—or collinear—traveling in the same direction as another particle in the process. SCET is an effective theory for highly energetic quarks interacting with collinear and/or soft gluons. It has been used for calculations of the decays of B mesons (quark-antiquark bound states involving a bottom quark) and the properties of jets (sprays of hadrons that emerge from particle collisions when a quark or gluon is produced). SCET has also been used to calculate electroweak interactions in Higgs boson production.
The new feature of SCET is its ability to handle more than one soft energy scale. For example, processes involving quarks carrying a high energy Q interacting with gluons have two soft scales: the transverse momentum pT of the collinear particles, plus the even softer scale pT²/Q. SCET provides a power-counting formalism for doing perturbation theory in the small parameter ΛQCD/Q.
External links
The original papers are by Christian Bauer, Sean Fleming, Michael Luke, Dan Pirjol, and Iain Stewart:
References
Quantum chromodynamics | Soft-collinear effective theory | [
"Physics"
] | 318 | [
"Particle physics stubs",
"Particle physics"
] |
27,380,463 | https://en.wikipedia.org/wiki/Sensitivity%20%28control%20systems%29 | In control engineering, the sensitivity (or more precisely, the sensitivity function) of a control system measures how variations in the plant parameters affect the closed-loop transfer function. Since the controller parameters are typically matched to the process characteristics and the process may change, it is important that the controller parameters are chosen in such a way that the closed-loop system is not sensitive to variations in process dynamics. Moreover, the sensitivity function is also important for analysing how disturbances affect the system.
Sensitivity function
Let P(s) and C(s) denote the plant's and controller's transfer functions in a basic closed-loop control system written in the Laplace domain using unity negative feedback.
Sensitivity function as a measure of robustness to parameter variation
The closed-loop transfer function is given by T(s) = P(s)C(s) / (1 + P(s)C(s)).
Differentiating T with respect to P yields dT/dP = C / (1 + PC)² = S (T/P),
where S is defined as the function S(s) = 1 / (1 + P(s)C(s))
and is known as the sensitivity function. Lower values of |S| imply that relative errors in the plant parameters have less effect on the relative error of the closed-loop transfer function.
Sensitivity function as a measure of disturbance attenuation
The sensitivity function also describes the transfer function from an external disturbance to the process output. In fact, assuming an additive disturbance n after the output
of the plant, the output of the closed-loop system is given by Y(s) = T(s)R(s) + S(s)N(s) (where R(s) is the reference signal and N(s) the disturbance), so the transfer function from the disturbance to the process output is the sensitivity function S(s).
Hence, lower values of |S| imply further attenuation of the external disturbance. The sensitivity function tells us how the disturbances are influenced by feedback. Disturbances with frequencies such that |S(jω)| is less than one are reduced by a factor equal to the distance of the loop transfer function to the critical point −1, and disturbances with frequencies such that |S(jω)| is larger than one are amplified by the feedback.
Sensitivity peak and sensitivity circle
Sensitivity peak
It is important that the largest value of the sensitivity function be limited for a control system. The nominal sensitivity peak Ms is defined as
Ms = max_ω |S(jω)| = max_ω |1 / (1 + P(jω)C(jω))|,
and it is common to require that the maximum value of the sensitivity function, Ms, be in a range of 1.3 to 2.
Sensitivity circle
The quantity Ms is the inverse of the shortest distance from the Nyquist curve of the loop transfer function L(s) = P(s)C(s) to the critical point −1. A sensitivity peak Ms guarantees that the distance from the critical point to the Nyquist curve is always greater than 1/Ms, and that the Nyquist curve of the loop transfer function is always outside a circle around the critical point −1 with radius 1/Ms, known as the sensitivity circle. Ms defines the maximum value of the sensitivity function, and the inverse of Ms gives the shortest distance from the open-loop transfer function to the critical point −1.
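As a numerical illustration of these relations, the sketch below evaluates |S(jω)|, the sensitivity peak Ms and the shortest distance from the Nyquist curve to the critical point for the loop transfer function L(s) = 1/(s(s+1)); this particular loop is an arbitrary example chosen for the sketch, not one discussed in the text.

```python
import numpy as np

# Illustrative loop transfer function L(s) = P(s)C(s) = 1 / (s(s + 1)).
def L(s):
    return 1.0 / (s * (s + 1.0))

w = np.logspace(-2, 3, 5000)            # frequency grid [rad/s]
s = 1j * w
S = 1.0 / (1.0 + L(s))                  # sensitivity function S = 1 / (1 + PC)

Ms = np.max(np.abs(S))                  # nominal sensitivity peak
d_min = np.min(np.abs(1.0 + L(s)))      # shortest distance from the Nyquist curve to -1

print(f"Ms = {Ms:.2f}")                 # about 1.47, inside the commonly required 1.3-2 range
print(f"1/Ms = {1.0 / Ms:.2f}, shortest distance to -1 = {d_min:.2f}")  # the two coincide
```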
References
See also
Robust control
PID controller
Bode's sensitivity integral
Control theory | Sensitivity (control systems) | [
"Mathematics"
] | 489 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
27,381,708 | https://en.wikipedia.org/wiki/Benthic%20boundary%20layer | The benthic boundary layer (BBL) is the layer of water directly above the sediment at the bottom of a body of water (river, lake, or sea, etc.). Through specific sedimentation processes, certain organisms are able to live in this deep layer of water. The BBL is generated by the friction of the water moving over the surface of the substrate, which decreases the water current significantly in this layer. The thickness of this zone is determined by many factors, including the Coriolis force. The benthic organisms and processes in this boundary layer reflect the water column above them.
The BBL serves as a transitional zone between the water column and the sediment layer by regulating biogeochemical processes and the flux of nutrients and organic materials. This zone also serves as the main layer of resistance for the shift of mass, heat, and nutrients from the sediment to the water, or vice versa. It is this area of interaction between the two environments that is important in many species' reproductive strategies, particularly larvae dispersal. The benthic boundary layer also contains nutrients important in fisheries, a wide array of microscopic life, a variety of suspended materials, and sharp energy gradients. It is also the sink for many anthropogenic substances released into the environment as the substances commonly sink to the bottom of the water column.
Life in the Deep Sea Benthic Boundary Layer
The benthic boundary layer (BBL) represents a few tens of meters of the water column directly above the sea floor and constitutes an important zone of biological activity in the ocean. It plays a vital role in the cycling of matter, and has been called the “endpoint” for sedimenting material, which fuels high metabolic rates for microbial populations.
After passing through the BBL, this degraded material is either returned to the water column or mobilized into the sediment, where it may eventually become immobilized. While the supply of POM (particulate organic matter), or marine snow, is relatively limited and inhibits species abundance, it sustains a complex yet understudied microbial loop that can maintain both meiofaunal and macrofaunal populations. In the microbial loop, non-moving benthic organisms living in the benthic boundary layer supply nutrients to the loop by releasing unused particles for use by microbial communities. In a study by Will Ritzrau (1996), it was determined that microbial activities were up to a factor of 7.5 higher in the BBL than in adjacent waters. While this study was carried out at depths between 100 and 400 m, it could have implications for the deep-BBL.
Organisms that live in the benthic boundary layer are known as benthopelagic. All organisms living predominantly in the benthic boundary layer must acquire their food from particles falling through the water column. Bacterial growth on, and consumption of, falling organic detritus is hindered by the hydrostatic pressure of the water and by increasing depth. This allows degradable, consumable matter to reach the ocean floor and be consumed by benthic organisms. The quality and quantity of nutrients reaching the sea floor play a major role in the development of benthic communities. These organisms ultimately play a vital role in the remineralization of matter and aid in breaking down POM that may eventually become permanent sediment. Excluding hydrothermal vents, much of the deep sea benthos is allochthonous, and the importance of bacteria for substrate conversion is paramount.
Presently, it is known that deep-BBL bacterial populations are able to support protozoan bacterivores like foraminifera and some metazoan zooplankton, which in turn can support larger organisms. Meiofauna and macrofauna found in the deep-BBL include copepods, annelids, nematodes, bivalves, ostracods, isopods, amphipods, arthropods and gastropods, to name a few. The number of species living in the benthic boundary layer is largely unknown; however, it has been theorized that up to 10,000,000 species live in the BBL.
Sedimentation in the Benthic Boundary Layer
The benthic boundary layer (BBL) plays a vital role in the cycling of matter and is commonly referred to as the “endpoint” or "sink" for sediment material, which fuels high metabolic rates for microbial populations. The particles from the pelagic ecosystem sink to the BBL where they will be used by organisms. Studies have estimated that particles from the photic zone sink at a rate of approximately 100 meters per day. Up to 10% of sediment from the photic zone is able to sink all the way down to the benthic boundary layer. However, the total amount of mass that falls to the BBL is impacted by total pelagic production and seasonal variability. After passing through the BBL, this degraded material is either returned to the water column or mobilized into the sediment, where it may eventually become immobilized due to currents or sediment force. Re-suspension or upward fluxes of particles can occur due to environmental disturbances such as wind, currents, tide fluctuations, and benthic storms. With growing concern over the ultimate fate of matter in the ocean, knowledge of the complex biological processes in the deep sea BBL (deep-BBL) and how they affect future sedimentation and remineralization rates is valuable to the scientific community.
At sea depths of 1800m or greater, the BBL is noted as having a near homogeneous temperature and salinity with periodic fluxes of detritus or particulate organic matter (POM). POM is strongly linked to seasonal variations in surface productivity and hydrodynamic conditions. The amount of POM that sinks into the water is directly correlated with production in the photic zone of the water column.
Future Directions
This zone is of interest to biologist, geologists, sedimentologists, oceanographers, physicists, and engineers, as well as many other scientific disciplines. As the effects of anthropogenic activities begin taking an even greater toll on marine processes, long-term studies are essential in determining the health and stability of the deep-BBL. Current climate variation and warming could also play a major role in changes in the BBL by decimating living species present there and could prompt long-term studies in future scientific communities. Currently, several groups are employing cabled observatories (ALOHA Cabled Observatory, Monterey Accelerated Research System, NEPTUNE, VENUS, and Liquid Jungle Lab (LJL) Panama- PLUTO) to work towards developing these much needed time-series. Cabled underwater networks provide continuous power to cabled instruments to allow for long-term studies. The cables also provide a way for data to be reviewed in real-time from the shore. Time-lapse cameras, sediment traps, bottom-transecting vehicles, baited traps, acoustic arrays, slaved cameras, and autonomous underwater vehicles (AUVs) are also being used to gather more information about the organisms and processes in the benthic boundary layer. Using these research techniques, scientists may begin to find new ways to conserve BBL communities and gather new data about species.
References
Sources
The Benthic Boundary Layer, Transport Processes and Biogeochemistry. Edited by Bernard P. Boudreau and Bo Barker Jørgensen . February 2001, Oxford University Press.
Oceanography
Limnology | Benthic boundary layer | [
"Physics",
"Environmental_science"
] | 1,550 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
27,382,300 | https://en.wikipedia.org/wiki/Cell%20cycle%20analysis | Cell cycle analysis by DNA content measurement is a method that most frequently employs flow cytometry to distinguish cells in different phases of the cell cycle. Before analysis, the cells are usually permeabilised and treated with a fluorescent dye that stains DNA quantitatively, such as propidium iodide (PI) or 4,6-diamidino-2-phenylindole (DAPI). The fluorescence intensity of the stained cells correlates with the amount of DNA they contain. As the DNA content doubles during the S phase, the DNA content (and thereby intensity of fluorescence) of cells in the G0 phase and G1 phase (before S), in the S phase, and in the G2 phase and M phase (after S) identifies the cell cycle phase position in the major phases (G0/G1 versus S versus G2/M phase) of the cell cycle. The cellular DNA content of individual cells is often plotted as their frequency histogram to provide information about relative frequency (percentage) of cells in the major phases of the cell cycle.
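As an illustration of how a DNA content histogram is turned into phase fractions, the sketch below assigns each measured cell to sub-G0/G1, G0/G1, S or G2/M according to its stain intensity relative to the G0/G1 peak; the gate boundaries (±15% around the 2C and 4C positions) are arbitrary illustrative choices, since in practice dedicated cell-cycle fitting software is used.

```python
import numpy as np

def cell_cycle_fractions(intensity, g1_peak):
    """Crude phase assignment from per-cell DNA-stain fluorescence intensity.

    intensity : 1-D array of intensities, one value per cell
    g1_peak   : position of the G0/G1 (2C) peak, e.g. the histogram mode
    The gate positions below are illustrative, not a validated protocol.
    """
    x = np.asarray(intensity, dtype=float)
    sub_g1 = x < 0.85 * g1_peak                               # fragmented (apoptotic) DNA
    g0_g1 = (x >= 0.85 * g1_peak) & (x < 1.15 * g1_peak)      # ~2C DNA content
    s_phase = (x >= 1.15 * g1_peak) & (x < 1.70 * g1_peak)    # between 2C and 4C
    g2_m = (x >= 1.70 * g1_peak) & (x < 2.30 * g1_peak)       # ~4C DNA content
    n = x.size
    return {"sub-G0/G1": sub_g1.sum() / n, "G0/G1": g0_g1.sum() / n,
            "S": s_phase.sum() / n, "G2/M": g2_m.sum() / n}
```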
Cell cycle anomalies revealed on the DNA content frequency histogram are often observed after different types of cell damage, for example such DNA damage that interrupts the cell cycle progression at certain checkpoints. Such an arrest of the cell cycle progression can lead either to an effective DNA repair, which may prevent transformation of normal into a cancer cell (carcinogenesis), or to cell death, often by the mode of apoptosis. An arrest of cells in G0 or G1 is often seen as a result of lack of nutrients (growth factors), for example after serum deprivation.
Cell cycle analysis was first described in 1969 at Los Alamos Scientific Laboratory by a group from the University of California using the Feulgen staining technique. The first protocol for cell cycle analysis using propidium iodide staining was presented in 1975 by Awtar Krishan from Harvard Medical School and is still widely cited today.
Multiparameter analysis of the cell cycle includes, in addition to measurement of cellular DNA content, other cell cycle related constituents/features. The concurrent measurement of cellular DNA and RNA content, or of DNA susceptibility to denaturation at low pH using the metachromatic dye acridine orange, reveals the G1Q, G1A, and G1B cell cycle compartments and also makes it possible to discriminate between S, G2 and mitotic cells. The cells in G1Q are quiescent, temporarily withdrawn from the cell cycle (also identifiable as G0); the G1A cells are in the growth phase, while the G1B cells are just prior to entering S, with their growth (RNA and protein content, size) similar to that of the cells initiating DNA replication. Similar cell cycle compartments are also recognized by multiparameter analysis that includes measurement of expression of cyclin D1, cyclin E, cyclin A and cyclin B1, each in relation to DNA content. Concurrent measurement of DNA content and of incorporation of the DNA precursor 5-bromo-2'-deoxyuridine (BrdU) by flow cytometry is an especially useful assay that has been widely used in analysis of the cell cycle in vitro and in vivo. However, the incorporation of 5-ethynyl-2'-deoxyuridine (EdU), a precursor whose detection offers certain advantages over BrdU, has now become the preferred methodology to detect DNA-replicating (S-phase) cells.
Experimental procedure
Unless staining is performed using Hoechst 33342, the first step in preparing cells for cell cycle analysis is permeabilisation of the cells' plasma membranes. This is usually done by incubating them in a buffer solution containing a mild detergent such as Triton X-100 or NP-40, or by fixating them in ethanol. Most fluorescent DNA dyes (one of exceptions is Hoechst 33342) are not plasma membrane permeant, that is, unable to pass through an intact cell membrane. Permeabilisation is therefore crucial for the success of the next step, the staining of the cells.
Prior to (or during the staining step) the cells are often treated with RNase A to remove RNAs. This is important because
certain dyes that stain DNA will also stain RNA, thus creating artefacts that would distort the results. An exception is the metachromatic fluorochrome acridine orange, which under a specific staining protocol can differentially stain both RNA (generating red luminescence) and DNA (green fluorescence), or, in another protocol, after removal of RNA and partial DNA denaturation, can differentially stain double-stranded DNA (green fluorescence) versus single-stranded DNA (red luminescence)[3]. Aside from propidium iodide and acridine orange, quantifiable dyes that are frequently used include (but are not limited to) DRAQ5, 7-aminoactinomycin D, DAPI and Hoechst 33342.
Doublet discrimination
Since cells and especially fixed cells tend to stick together, cell aggregates have to be excluded from analysis through a process called doublet discrimination. This is important because a doublet of two G0/G1 cells has the same total content of DNA and thus the same fluorescence intensity as a single G2/M cell. Unless recognized as such the G0/G1 doublets would contribute to false positive identification and count of G2/M cells.
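On instruments that record both the area and the height of the fluorescence pulse, doublets can be rejected because an aggregate produces a disproportionately large pulse area relative to its height. The sketch below applies such an area-versus-height gate; the tolerance factor is an arbitrary illustrative choice.

```python
import numpy as np

def singlet_gate(pulse_area, pulse_height, tolerance=1.25):
    """Boolean mask selecting events that behave like single cells.

    For singlets, pulse area grows roughly in proportion to pulse height;
    a G0/G1 doublet has about twice the area expected for its height and
    is therefore excluded. The tolerance factor is an assumption.
    """
    area = np.asarray(pulse_area, dtype=float)
    height = np.asarray(pulse_height, dtype=float)
    ratio = area / height
    typical = np.median(ratio)        # ratio representative of single cells
    return ratio < tolerance * typical
```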
Related methods
Nicoletti assay
The Nicoletti assay, named after its inventor, the Italian physician Ildo Nicoletti, is a modified form of cell cycle analysis. It is used to detect and quantify apoptosis, a form of programmed cell death, by analysing cells with a DNA content less than 2n ("sub-G0/G1 cells"). Such cells are usually the result of apoptotic DNA fragmentation: during apoptosis, the DNA is degraded by cellular endonucleases. Therefore, nuclei of apoptotic cells contain less DNA than nuclei of healthy G0/G1 cells, resulting in a sub-G0/G1 peak in the fluorescence histogram that can be used to determine the relative amount of apoptotic cells in a sample. The method was developed and first described in 1991 by Nicoletti and co-workers at Perugia University School of Medicine. An optimised protocol developed by two of the authors of the original publication was published in 2006. The objects measured within the sub-G0/G1 peak with a DNA content of less than 5% of that of the G0/G1 peak are in all probability apoptotic bodies and thus do not represent individual apoptotic cells.
References
Further reading
Biological techniques and tools
Cell biology
Analysis
Flow cytometry | Cell cycle analysis | [
"Chemistry",
"Biology"
] | 1,437 | [
"Cell biology",
"Cellular processes",
"Flow cytometry",
"nan",
"Cell cycle"
] |
27,386,075 | https://en.wikipedia.org/wiki/McCutcheon%20index | The McCutcheon index or chemotactic ratio is a numerical metric that quantifies the efficiency of movement. It is calculated as the ratio of the net displacement of a moving entity to the total length of the path it has traveled.
The index acts as an evaluative measure of the directness of movement. A value close to 1 indicates that a moving entity performed its movement in a very direct manner, minimizing detours. On the other hand, a lower value indicates that the entity has achieved only a marginal net displacement, despite traveling a considerable distance. The index is used to evaluate movements of, for example, leukocytes, bacteria, or amoebae.
It is named after Morton McCutcheon who introduced it to describe chemotaxis in leukocytes.
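A minimal sketch of the calculation, assuming the track is supplied as an ordered sequence of (x, y) positions:

```python
import numpy as np

def mccutcheon_index(track):
    """McCutcheon index (chemotactic ratio) of a recorded 2-D trajectory.

    track : sequence of (x, y) positions in the order they were visited.
    Returns net displacement divided by total path length
    (1.0 means a perfectly straight path).
    """
    pts = np.asarray(track, dtype=float)
    net_displacement = np.linalg.norm(pts[-1] - pts[0])
    path_length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    return net_displacement / path_length

# A cell that moves 3 units east and then 4 units north:
print(mccutcheon_index([(0, 0), (3, 0), (3, 4)]))  # 5 / 7 = 0.714...
```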
References
Biophysics | McCutcheon index | [
"Physics",
"Biology"
] | 166 | [
"Applied and interdisciplinary physics",
"Biophysics"
] |
27,386,117 | https://en.wikipedia.org/wiki/Computational%20Materials%20Science%20%28journal%29 | Computational Materials Science is a monthly peer-reviewed scientific journal published by Elsevier. It was established in October 1992. The editor-in-chief is Susan Sinnott. The journal covers computational modeling and practical research for advanced materials and their applications.
Abstracting and indexing
The journal is abstracted and indexed by:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.3.
References
External links
Materials science journals
Academic journals established in 1994
Monthly journals
English-language journals
Elsevier academic journals | Computational Materials Science (journal) | [
"Materials_science",
"Technology",
"Engineering"
] | 105 | [
"Computational fields of study",
"Computing and society",
"Materials science journals",
"Materials science"
] |
27,386,754 | https://en.wikipedia.org/wiki/Paleosalinity | Paleosalinity (or palaeosalinity) is the salinity of the global ocean or of an ocean basin at a point in geological history.
Importance
From Bjerrum plots, it is found that a decrease in the salinity of an aqueous fluid will act to increase the value of the carbon dioxide-carbonate system equilibrium constants (pK*). This means that the relative proportion of carbonate with respect to carbon dioxide is higher in more saline fluids, e.g. seawater, than in fresher waters. Of crucial importance for paleoclimatology is the observation that an increase in salinity will thus reduce the solubility of carbon dioxide in the oceans. Since there is thought to have been a 120 m depression in sea level at the last glacial maximum due to the extensive formation of ice sheets (which are solely freshwater), this represents a significant fractionation towards saltier seas during glacial periods. Correspondingly, this will cause a net outgassing of carbon dioxide into the atmosphere because of its reduced solubility, acting to increase atmospheric carbon dioxide by about 6.5 ppm. This is thought to partly offset the net decrease of 80-100 ppm observed during glacial periods.
Stratification
In addition, it is thought that extensive salinity stratification can lead to a reduction in the meridional overturning circulation (MOC) through the slowing of thermohaline circulation. Increased stratification means that there is effectively a barrier to subduction of parcels of water; isopycnals effectively do not outcrop at the surface and are parallel to the surface. The ocean, in this case, can be described as "less ventilated", and this has been implicated in the slowing down of the MOC.
Measuring paleosalinity
There may exist proxies for salinity, but to date the main way that salinity has been measured has been by directly measuring chlorinity in pore fluids. Adkins et al. (2002) used pore fluid chlorinity in ODP cores, with the paleo-depth estimated from nearby coral horizons. Chlorinity was measured rather than pure salinity because the major ions in seawater are not constant with depth in the sediment column; for example, sulfate reduction and cation-clay interactions can change overall salinity, whereas chlorinity is not heavily affected.
Paleosalinity during the Last Glacial Maximum
Adkins' study found that global salinity increased with a global sea level drop of 120 m. Analyzing 18O data they also found that deep waters were within error of the freezing point, with oceanic waters exhibiting a greater degree of homogeneity in temperatures. In contrast, variations in salinity were much greater than they are today. Modern day salinities are all within 0.5 psu of the global average salinity of 34.7 psu, whereas salinities during the last glacial maximum (LGM) ranged from 35.8 psu in the North Atlantic to 37.1 in the Southern Ocean.
There are some notable differences in the hydrography at the LGM and present day. Today the North Atlantic Deep Water (NADW) is observed to be more saline than Antarctic Bottom Water (AABW), whereas at the last glacial maximum it was observed that the AABW was in fact more saline; a complete reversal. Today the NADW is more salty because of the Gulf Stream; this could thus indicate a reduction of flow through the Florida Straits due to lowered sea level.
Another observation is that the Southern Ocean was vastly more salty at the LGM than today. This is particularly intriguing given the assumed importance of the Southern Ocean in oceanic dynamical regulation of ice ages. The extreme value of 37.1 psu is assumed to be a consequence of an increased degree of sea ice formation and export. This would account for the increased salinity, but would also account for the lack of oxygen isotopic fractionation; brine rejection without oxygen isotopic fractionation is thought to be highly characteristic of sea ice formation.
The increased role of salinity
The presence of waters near the freezing point alters the balance of the relative effects of contrasts in salinity and temperature on sea water density. This is described in the equation
Δρ/ρ = −α ΔT + β ΔS,
where α is the thermal expansion coefficient and β is the haline contraction coefficient. In particular, the ratio β/α is crucial. Using the observed temperatures and salinities, β/α in the modern ocean is about 10, whilst at the LGM it is estimated to have been closer to 25. The modern thermohaline circulation is thus more controlled by density contrasts due to thermal differences, whereas during the LGM the oceans were more than twice as sensitive to differences in salinity rather than temperature. In this way, the thermohaline circulation can be considered to have been less "thermo" and more "haline".
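The sketch below evaluates this balance with representative magnitudes of the two coefficients; the values of alpha and beta are rough illustrative figures (with alpha shrinking strongly as deep water approaches the freezing point), not measured modern or LGM coefficients.

```python
# Linearised density relation: drho/rho = -alpha*dT + beta*dS
# alpha: thermal expansion coefficient [1/K], beta: haline contraction [1/psu].
# Both sets of coefficient values below are illustrative assumptions.
cases = {
    "modern deep water (~2 degC)":    {"alpha": 1.0e-4, "beta": 7.8e-4},
    "LGM deep water (near freezing)": {"alpha": 0.3e-4, "beta": 7.8e-4},
}

for label, c in cases.items():
    ratio = c["beta"] / c["alpha"]       # sensitivity to salinity relative to temperature
    drho_per_K = -c["alpha"] * 1.0       # relative density change for a +1 K warming
    drho_per_psu = c["beta"] * 1.0       # relative density change for a +1 psu salinity increase
    print(f"{label}: beta/alpha = {ratio:.0f}, "
          f"drho/rho per +1 K = {drho_per_K:.1e}, per +1 psu = {drho_per_psu:.1e}")
```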
See also
Fresh water
References
External links
History of salinity.
Chemical oceanography
Aquatic ecology
Oceanography
Coastal geography | Paleosalinity | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 1,043 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Chemical oceanography",
"Ecosystems",
"Aquatic ecology"
] |
27,387,883 | https://en.wikipedia.org/wiki/Nitrile%20anion | Nitrile anion is the jargon term for the organic product resulting from the deprotonation of alkyl nitriles. The proton(s) α to the nitrile group are sufficiently acidic that they undergo deprotonation by strong bases, usually lithium-derived. The products are not true anions but covalent organolithium complexes. Regardless, these organolithium compounds are reactive toward various electrophiles.
Although nitrile anions are functionally similar to enolates, the extra multiple bond in nitrile anions provides them with a ketene-like geometry. Additionally, deprotonated cyanohydrins can act as masked acyl anions, giving products impossible to access with enolates alone.
Generation of nitrile anions
The pKas of nitriles span a wide range—at least 20 pKa units. Unstabilized nitriles require either alkali metal amide bases (such as NaNH2) or metal alkyls (such as butyllithium) for effective deprotonation. In the latter case, competitive addition of the alkyl group to the nitrile takes place.
Arylacetonitriles (e.g. phenylacetonitrile) are sufficiently acidic to undergo deprotonation with aqueous base, e.g., under phase-transfer catalysis. Nitrile anions can also be involved in Michael-type additions to activated double bonds and vinylation reactions with a limited number of polarized, unhindered acetylene derivatives.
Nitrile anions also arise by conjugate additions to α,β-unsaturated nitriles, reduction, and transmetallation.
Alkylation of nitrile anions
Nitrile anions are alkylated by alkyl halides.
The primary difficulty for alkylation reactions employing nitrile anions is over-alkylation. In the alkylation of acetonitrile, for instance, yields of monoalkylated product are low in most cases. Two exceptions are alkylations with epoxides (the nearby negative charge of the opened epoxide wards off further alkylation) and alkylations with cyanomethylcopper(I) species. Side reactions may also present a problem; concentrations of the nitrile anion must be high in order to mitigate processes involving self-condensation, such as the Thorpe–Ziegler reaction. Other important side reactions include elimination of the alkyl cyanide product or alkyl halide starting material and amidine formation.
The cyclization of ω-epoxy-1-nitriles provides an interesting example of how stereoelectronic factors may override steric factors in intramolecular substitution reactions. In the cyclization of 1, for instance, only the cyclopropane isomer 2 is observed. This is attributed to better orbital overlap in the SN2 transition state for cyclization. 1,1-disubstituted and tetrasubstituted epoxides also follow this principle.
Conjugated nitriles containing γ hydrogens may be deprotonated at the γ position to give resonance-stabilized anions. These intermediates almost always react with α selectivity in alkylation reactions, the exception to the rule being anions of ortho-tolyl nitriles.
Formation of cyanohydrins from carbonyl compounds renders the former carbonyl carbon acidic. After protection of the hydroxyl group with an acyl or silyl group, cyanohydrins can function essentially as masked acyl anions. Because ester protecting groups are base labile, mild bases must be employed with ester-protected cyanohydrins. α-(Dialkylamino)nitriles can also be used in this context.
Examples of arylation and acylation reactions are shown below. Although intermolecular arylations using nitrile anions result in modest yields, the intramolecular procedure efficiently gives four-, five-, and six-membered benzo-fused rings.
Acylation can be accomplished using a wide variety of acyl electrophiles, including carbonates, chloroformates, esters, anhydrides, and acid chlorides. In these reactions, two equivalents of base are used to drive the reaction towards the acylated product, because the acylated product is more acidic than the starting material.
Polyalkylation
Polyalkylation is a significant problem for primary or secondary nitriles; however, a number of solutions to this problem exist. Alkylation of cyanoacetates followed by decarboxylation provides one solution.
Polyanions of nitriles can also be generated by multiple deprotonations, and these species produce polyalkylated products in the presence of alkyl electrophiles.
Synthetic applications
Alkylation of a nitrile anion followed by reductive decyanation was employed in the synthesis of (Z)-9-dodecen-1-yl acetate, the sex pheromone of Paralobesia viteana.
References
Nitriles
Anions
Reactive intermediates | Nitrile anion | [
"Physics",
"Chemistry"
] | 1,116 | [
"Matter",
"Anions",
"Functional groups",
"Organic compounds",
"Reactive intermediates",
"Physical organic chemistry",
"Nitriles",
"Ions"
] |
6,553,929 | https://en.wikipedia.org/wiki/Airlift%20pump | An airlift pump is a pump that has low suction and moderate discharge of liquid and entrained solids. The pump injects compressed air at the bottom of the discharge pipe which is immersed in the liquid. The compressed air mixes with the liquid causing the air-water mixture to be less dense than the rest of the liquid around it and therefore is displaced upwards through the discharge pipe by the surrounding liquid of higher density. Solids may be entrained in the flow and if small enough to fit through the pipe, will be discharged with the rest of the flow at a shallower depth or above the surface. Airlift pumps are widely used in aquaculture to pump, circulate and aerate water in closed, recirculating systems and ponds. Other applications include dredging, underwater archaeology, salvage operations and collection of scientific specimens.
Principle
The only energy required is provided by compressed air. This air is usually compressed by a compressor or a blower. The air is injected in the lower part of a pipe that transports a liquid. Because it has a lower density than the liquid, the air rises quickly by buoyancy. The liquid is entrained in the ascending air flow and moves in the same direction as the air. The volume flow of the liquid can be calculated using the physics of two-phase flow.
Use
Airlift pumps are often used in deep, dirty wells where sand would quickly abrade mechanical parts (the compressor is on the surface, and no mechanical parts are needed in the well). However, airlift wells must be much deeper than the water table to allow for submergence. Air is generally pumped at least as deep under the water as the water is to be lifted (if the water table is 50 ft below, the air should be pumped 100 feet deep). It is also sometimes used in part of the process at a wastewater treatment plant if only a small head is required (typically around 1 foot of head).
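A minimal sketch of the hydrostatic balance behind this rule of thumb, assuming a homogeneous air-water mixture in the riser and neglecting wall friction and slip between the phases (both strong simplifications that make real pumps need more air than this estimate):

```python
def minimum_void_fraction(lift_height, submergence):
    """Smallest gas volume fraction in the riser needed for the lift to start.

    Hydrostatic balance: the outside liquid column over the submerged depth
    must outweigh the aerated mixture column over (submergence + lift):
        rho * g * submergence >= (1 - eps) * rho * g * (submergence + lift)
    which rearranges to eps >= lift / (submergence + lift).
    """
    return lift_height / (submergence + lift_height)

# Rule-of-thumb example: lift water 50 ft with the air injected 100 ft deep.
print(f"minimum void fraction = {minimum_void_fraction(50.0, 100.0):.2f}")  # 0.33
```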
Airlifts are used to collect fauna samples from sediment. Airlifts can oversample zooplankton and meiofauna but undersample animals that exhibit an escape response.
In an aquarium an airlift pump is sometimes used to pump water to a filter.
In a coffee percolator the airlift principle is used to circulate the coffee.
Inventor
The first airlift pump is considered to have been invented by a German engineer in 1797.
Advantages and disadvantages
Advantages
The pump is very reliable. The very simple principle is a clear advantage. Only air with a higher pressure than the liquid is required.
The liquid is not in contact with any mechanical elements. Therefore, neither the pump can be abraded (which is important for sand-laden wells) nor the contents of the pipe (which is important for archaeological research in the sea).
Acts as a water aerator and can, in some configurations, lift stagnant bottom water to the surface (of water tanks).
Since there are no restrictive pump parts, solids up to 70% of the pipe diameter can be reliably pumped.
Disadvantages
Cost: while in some specific cases the operational cost can be manageable, most of the time the quantity of compressed air, and thus the energy required, is high compared to the liquid flow produced.
Conventional airlift pumps have a flow rate that is very limited. The pump is either on or off. It is very difficult to get a wide range of proportional flow control by varying the volume of compressed air. This is a dramatic disadvantage in some parts of a small wastewater treatment plant, such as the aerator.
The suction is limited.
This pumping system is suitable only if the head is relatively low. To obtain a high head, a conventional pumping system has to be chosen.
Because of the operating principle, air (oxygen) dissolves in the liquid. In certain cases this can be problematic, for example in a wastewater treatment plant before an anaerobic basin.
Design improvements
A recent (2007) variant called the "geyser pump" can pump with greater suction and less air. It also pumps proportionally to the air flow, permitting use in processes that require varying controlled flows. It arranges to store up the air, and release it in large bubbles that seal to the lift pipe, raising slugs of fluid.
See also
References
Sources
Recovered airlift_basic_calculation.xls via Waybackmachine. Mirrored at filedropper dot com /airliftbasiccalculation.
Pumps
Piping
Fluid dynamics | Airlift pump | [
"Physics",
"Chemistry",
"Engineering"
] | 920 | [
"Pumps",
"Turbomachinery",
"Building engineering",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Mechanical engineering",
"Piping",
"Fluid dynamics"
] |
3,676,886 | https://en.wikipedia.org/wiki/Linear-motion%20bearing | A linear-motion bearing or linear slide is a bearing designed to provide free motion in one direction. There are many different types of linear motion bearings.
Motorized linear slides such as machine slides, X-Y tables, roller tables and some dovetail slides are bearings moved by drive mechanisms. Not all linear slides are motorized, and non-motorized dovetail slides, ball bearing slides and roller slides provide low-friction linear movement for equipment powered by inertia or by hand. All linear slides provide linear motion based on bearings, whether they are ball bearings, dovetail bearings, linear roller bearings, magnetic or fluid bearings. X-Y tables, linear stages, machine slides and other advanced slides use linear motion bearings to provide movement along both the X and Y axes.
Rolling-element bearing
A rolling-element bearing is generally composed of a sleeve-like outer ring and several rows of balls retained by cages. The cages were originally machined from solid metal and were quickly replaced by stampings. Such bearings feature smooth motion, low friction, high rigidity and long life. They are economical, and easy to maintain and replace. Thomson Industries (currently owned by Altra Industrial Motion) is generally given credit for first producing what is now known as a linear ball bearing.
Rolling-element bearings are generally designed to work well on hardened steel or stainless steel shafting (raceways).
Rolling-element bearings are more rigid than plain bearings.
Rolling-element bearings do not handle contamination well and require seals.
Rolling-element bearings require lubrication.
Rolling-element bearings are manufactured in two forms: ball bearing slides and roller slides.
Ball bearing slides
Also called "ball slides," ball bearing slides are the most common type of linear slide. Ball bearing slides offer smooth precision motion along a single-axis linear design, aided by ball bearings housed in the linear base, with self-lubrication properties that increase reliability. Ball bearing slide applications include delicate instrumentation, robotic assembly, cabinetry, high-end appliances and clean room environments, which primarily serve the manufacturing industry but also the furniture, electronics and construction industries. For example, a widely used ball bearing slide in the furniture industry is a ball bearing drawer slide.
Commonly constructed from materials such as aluminum, hardened cold rolled steel and galvanized steel, ball bearing slides consist of two linear rows of ball bearings contained by four rods and located on differing sides of the base, which support the carriage for smooth linear movement along the ball bearings. This low-friction linear movement can be powered by either a drive mechanism, inertia or by hand. Ball bearing slides tend to have a lower load capacity for their size compared to other linear slides because the balls are less resistant to wear and abrasions. In addition, ball bearing slides are limited by the need to fit into housing or drive systems.
The travelling distance of linear recirculating ball bearings is only limited by the length of their rail, as the balls recirculate inside the bearing's housing. Linear non-recirculating ball bearings have balls installed on a bracket and only move in one axis without recirculation. Since the balls do not recirculate, this type of bearing can provide extremely smooth motion. However, the travelling distance of linear non-recirculating ball bearings is limited by the length of the bracket.
Roller slides
Also known as crossed roller slides, roller slides are non-motorized linear slides that provide low-friction linear movement for equipment powered by inertia or by hand. Roller slides are based on linear roller bearings, which are frequently criss-crossed to provide heavier load capabilities and better movement control. Serving industries such as manufacturing, photonics, medical and telecommunications, roller slides are versatile and can be adjusted to meet numerous applications which typically include clean rooms, vacuum environments, material handling and automation machinery.
Roller slides work similarly to ball bearing slides, except that the bearings housed within the carriage are cylinder-shaped instead of ball shaped. The rollers crisscross each other at a 90° angle and move between the four semi-flat and parallel rods that surround the rollers. The rollers are between "V" grooved bearing races, one being on the top carriage and the other on the base. Typically, bearing housings are constructed from aluminum while the rollers are constructed from steel.
Although roller slides are not self-cleaning, they are suitable for environments with low levels of airborne contaminants such as dirt and dust. As one of the more expensive types of linear slides, roller slides are capable of providing linear motion on more than one axis through stackable slides and double carriages. Roller slides offers line contact versus point contact as with ball bearings, creating a broader contact surface due to the consistency of contact between the carriage and the base and resulting in less erosion.
Plain bearing
Plain bearings are very similar in design to rolling-element bearings, except they slide without the use of ball bearings. If they are cylindrical in shape, they are often called bushings. Bushings can be metal or plastic, or even air.
Plain bearings can run on hardened steel or stainless steel shafting (raceways), or can be run on hard-anodized aluminum or soft steel or aluminum. For plastic bushings, the specific type of polymer/fluoro-polymer will determine what hardness is allowed.
Plain bearings are less rigid than rolling-element bearings.
Plain bearings handle contamination well and often do not need seals/scrapers.
Plain bearings generally handle a wider temperature range than rolling-element bearings
Plain bearings (plastic versions) do not require oil or lubrication (often it can be used to increase performance characteristics)
Dovetail slides
Dovetail slides, or dovetail way slides, are typically constructed from cast iron, but can also be constructed from hard-coat aluminum, acetal or stainless steel. Like any bearing, a dovetail slide is composed of a stationary linear base and a moving carriage. A dovetail carriage has a v-shaped, or dovetail-shaped, protruding channel which locks into the linear base's correspondingly shaped groove. Once the dovetail carriage is fitted into its base's channel, the carriage is locked into the channel's linear axis and allows free linear movement. When a platform is attached to the carriage of a dovetail slide, a dovetail table is created, offering extended load carrying capabilities.
Dovetail slides are advantageous when it comes to load capacity, affordability and durability. Capable of long travel, dovetail slides are more resistant to shock than other bearings, and they are mostly immune to chemical, dust and dirt contamination. Dovetail slides can be motorized, mechanical or electromechanical. Electric dovetail slides are driven by a number of different devices, such as ball screws, belts and cables, which are powered by functional motors such as stepper motors, linear motors and handwheels. Dovetail slides are direct contact systems, making them fitting for heavy load applications including CNC machines, shuttle devices, special machines and work holding devices. Mainly used in the manufacturing and laboratory science industries, dovetail slides are ideal for high-precision applications.
Compound slides
Slides can be constructed with two sections or multiple sections. A slide with two sections can only extend approximately 3/4 of the total compressed slide length. A compound slide typically has three sections: fixed, floating intermediate member, and the section attached to the equipment. A compound slide can extend at least as far as the compressed slide length and typically a bit more. In the case of rack slides, this allows the equipment to extend completely out of the rack allowing access for service or connection of cables and such to the back of the equipment.
Rack slides
Rack slides are specifically intended for mounting equipment into 19-inch racks or 23-inch racks. These can be friction bearing, ball bearing, or roller bearing. They are sized to fit into racks with mounting flanges on the ends to mate to the mounting holes in racks. In some cases, one mounting flange is formed into the rack slide with an adapter bracket attached to the other end to accommodate different depths of the rack. The outer fixed member is attached to the rack and the inner moving member is generally screwed to the side of the mounted equipment. Rack slides are typically compound or 3-part slides allowing full extension of the mounted equipment and generally include provision for sliding the inner member completely free to allow removal of the equipment from the rack. They can also include stops to prevent accidentally pulling the equipment out of the rack without releasing the stop mechanism.
There can be proprietary configurations which, for example, may clip to the equipment without the use of screws or can be clipped into an appropriately designed rack. But the basic geometry is the same regardless of how they are mounted.
Ball splines
Ball splines (ball spline bearings) are a special type of linear motion bearing that are used to provide nearly frictionless linear motion while allowing the member to transmit torque simultaneously. There are grooves ground along the length of the shaft (thus forming splines) for the ball bearings to run inside. The outer shell that houses the balls is called a nut rather than a bushing, but is not a nut in the traditional sense—it is not free to rotate about the shaft, but is free to travel up and down the shaft. For a shaft travel of any significant length the nut will have channels that recirculate the balls, operating in the same way as a ball screw.
By increasing the contact area of the ball bearings on the shaft to approximately 45 degrees, the side load and direct load carrying capabilities are greatly increased. Each nut can be individually preloaded at the factory to decrease the available radial play to ensure rigidity. This process not only increases the contact area, increasing direct loading capabilities, but it also restricts any radial movement, increasing the overhung moment capabilities. This creates a sturdier structure that can handle a very strenuous working environment.
See also
Ball screw
Sarrus linkage
Tool Ways
References
Bearings (mechanical)
Linear motion | Linear-motion bearing | [
"Physics"
] | 2,035 | [
"Physical phenomena",
"Motion (physics)",
"Linear motion"
] |
3,677,421 | https://en.wikipedia.org/wiki/Hypersonic%20wind%20tunnel | A hypersonic wind tunnel is designed to generate a hypersonic flow field in the working section, thus simulating the typical flow features of this flow regime - including compression shocks and pronounced boundary layer effects, entropy layer and viscous interaction zones and most importantly high total temperatures of the flow. The speed of these tunnels varies from Mach 5 to 15. The power requirement of a wind tunnel increases linearly with its cross section and flow density, but cubically with the test velocity required. Hence installation of a continuous, closed circuit wind tunnel remains a costly affair. The first continuous Mach 7-10 wind tunnel with a 1x1 m test section was planned at Kochel am See, Germany during WW II and finally put into operation as 'Tunnel A' in the late 1950s at AEDC Tullahoma, TN, USA, for an installed power of 57 MW. In view of these high facility demands, intermittently operated experimental facilities like blow-down wind tunnels are also designed and installed to simulate the hypersonic flow. A hypersonic wind tunnel comprises, in flow direction, the main components: heater/cooler arrangements, dryer, convergent/divergent nozzle, test section, second throat and diffuser. A blow-down wind tunnel has a low vacuum reservoir at the back end, while a continuously operated, closed circuit wind tunnel has a high power compressor installation instead. Since the temperature drops with the expanding flow, the air inside the test section has the chance of becoming liquefied. For that reason, preheating is particularly critical (the nozzle may require cooling).
Technological problems
There are several technological problems in designing and constructing a hyper-velocity wind tunnel:
supply of high temperatures and pressures for times long enough to perform a measurement
reproduction of equilibrium conditions
structural damage produced by overheating
fast instrumentation
power requirements to run the tunnel
Simulations of a flow at 5.5 km/s, 45 km altitude would require tunnel temperatures of as much as 9000 K, and a pressure of 3 GPa.
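The need for preheating, and for very high reservoir temperatures generally, follows from the isentropic total-to-static temperature relation for a perfect gas, T0/T = 1 + (γ − 1)/2 · M². The sketch below applies it with a target static temperature of 80 K, a rough illustrative threshold below which test-section air would begin to liquefy; the exact threshold depends on the test-section pressure.

```python
def required_reservoir_temperature(mach, static_T, gamma=1.4):
    """Stagnation temperature needed so that an isentropic expansion of a
    perfect gas to the given Mach number ends at the given static temperature."""
    return static_T * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

# Keep the test-section static temperature at ~80 K (illustrative threshold for air).
for M in (5, 7, 10):
    print(f"Mach {M:2d}: reservoir temperature ~ {required_reservoir_temperature(M, 80.0):.0f} K")
```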
Hot shot wind tunnel
One form of HWT is known as a Gun Tunnel or hot shot tunnel (up to M=27), which can be used for analysis of flows past ballistic missiles, space vehicles in atmospheric entry, and plasma physics or heat transfer at high temperatures. It runs intermittently, but has a very low running time (less than a second).
The method of operation is based on a high temperature and pressurized gas (air or nitrogen) produced in an arc-chamber, and a near-vacuum in the remaining part of the tunnel. The arc-chamber can reach several MPa, while pressures in the vacuum chamber can be as low as 0.1 Pa. This means that the pressure ratios of these tunnels are in the order of 10 million. Also, the temperatures of the hot gas are up to 5000 K. The arc chamber is mounted in the gun barrel. The high pressure gas is separated from the vacuum by a diaphragm.
Prior to a test run commencing, a membrane separates the compressed air from the gun barrel breech. A rifle (or similar) is used to rupture the membrane. Compressed air rushes into the breech of the gun barrel, forcing a small projectile to accelerate rapidly down the barrel. Although the projectile is prevented from leaving the barrel, the air in front of the projectile emerges at hypersonic velocity into the working section. Naturally the duration of the test is extremely brief, so high speed instrumentation is required to get any meaningful data.
Hypersonic Wind Tunnel Facility in India
The Indian Space Research Organization (ISRO) commissioned three major facilities, namely a Hypersonic Wind Tunnel, a Shock Tunnel and a Plasma Tunnel at Vikram Sarabhai Space Center as part of its continuous and concerted efforts to minimize cost of access into space. This integrated facility was named as Satish Dhawan Wind Tunnel Complex as a tribute to Prof. Satish Dhawan, who has made very significant contributions in the field of wind tunnels and aerodynamics. ISRO Chairman A. S. Kiran Kumar said commissioning of such facilities would provide adequate data for design and development of current and future space transportation systems in India.
Defence Research and Development Organisation (DRDO) commissioned an advanced Hypersonic Wind Tunnel (HWT) test facility at Dr APJ Abdul Kalam Missile Complex on 20 December 2020 as part of facility development programme for Hypersonic Technology Demonstrator Vehicle project.
MARHy, Hypersonic Wind Tunnel Facility in Orléans, France
The MARHy Hypersonic low density Wind Tunnel, located at the ICARE Laboratory in Orléans, France, is a research facility used extensively for fundamental and applied research on fluid dynamic phenomena in rarefied compressible flows, applied to space research. Its name is an acronym for Mach Adaptable Rarefied Hypersonic, and the wind tunnel is listed under this name in the European portal MERIL.
See also
Wind tunnel
Low speed wind tunnel
High speed wind tunnel
Supersonic wind tunnel
Ludwieg tube
Shock tube
Hypersonic
NASA
MARHy Wind Tunnel
External links
Hot Shot Wind Tunnel at the Von Karman Institute for Fluid Dynamics
Langley Hot Shot Wind Tunnel Description and Calibration at the Langley Research Center
MERIL, the European facilities platform
References
Fluid dynamics
Aerodynamics
Wind tunnels | Hypersonic wind tunnel | [
"Chemistry",
"Engineering"
] | 1,055 | [
"Chemical engineering",
"Aerodynamics",
"Aerospace engineering",
"Piping",
"Fluid dynamics"
] |
3,678,005 | https://en.wikipedia.org/wiki/Instant%20Messaging%20and%20Presence%20Protocol | Instant Messaging and Presence Protocol (IMPP) was an IETF working group created for the purpose of developing an architecture for simple instant messaging and presence awareness/notification. The working group has since concluded.
Documents
See also
Presence and Instant Messaging (PRIM)
SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE)
Extensible Messaging and Presence Protocol (XMPP) AKA Jabber
References
External links
– IETF Datatracker
Instant messaging protocols
Working groups | Instant Messaging and Presence Protocol | [
"Technology"
] | 95 | [
"Instant messaging",
"Instant messaging protocols"
] |
3,680,074 | https://en.wikipedia.org/wiki/Procrustes%20analysis | In statistics, Procrustes analysis is a form of statistical shape analysis used to analyse the distribution of a set of shapes. The name Procrustes () refers to a bandit from Greek mythology who made his victims fit his bed either by stretching their limbs or cutting them off.
In mathematics:
an orthogonal Procrustes problem is a method which can be used to find out the optimal rotation and/or reflection (i.e., the optimal orthogonal linear transformation) for the Procrustes Superimposition (PS) of an object with respect to another.
a constrained orthogonal Procrustes problem, subject to det(R) = 1 (where R is an orthogonal matrix), is a method which can be used to determine the optimal rotation for the PS of an object with respect to another (reflection is not allowed). In some contexts, this method is called the Kabsch algorithm.
When a shape is compared to another, or a set of shapes is compared to an arbitrarily selected reference shape, Procrustes analysis is sometimes further qualified as classical or ordinary, as opposed to generalized Procrustes analysis (GPA), which compares three or more shapes to an optimally determined "mean shape".
Introduction
To compare the shapes of two or more objects, the objects must be first optimally "superimposed". Procrustes superimposition (PS) is performed by optimally translating, rotating and uniformly scaling the objects. In other words, both the placement in space and the size of the objects are freely adjusted. The aim is to obtain a similar placement and size, by minimizing a measure of shape difference called the Procrustes distance between the objects. This is sometimes called full, as opposed to partial PS, in which scaling is not performed (i.e. the size of the objects is preserved). Notice that, after full PS, the objects will exactly coincide if their shape is identical. For instance, with full PS two spheres with different radii will always coincide, because they have exactly the same shape. Conversely, with partial PS they will never coincide. This implies that, by the strict definition of the term shape in geometry, shape analysis should be performed using full PS. A statistical analysis based on partial PS is not a pure shape analysis as it is not only sensitive to shape differences, but also to size differences. Both full and partial PS will never manage to perfectly match two objects with different shape, such as a cube and a sphere, or a right hand and a left hand.
In some cases, both full and partial PS may also include reflection. Reflection allows, for instance, a successful (possibly perfect) superimposition of a right hand to a left hand. Thus, partial PS with reflection enabled preserves size but allows translation, rotation and reflection, while full PS with reflection enabled allows translation, rotation, scaling and reflection.
Optimal translation and scaling are determined with much simpler operations (see below).
Ordinary Procrustes analysis
Here we just consider objects made up from a finite number k of points in n dimensions. Often, these points are selected on the continuous surface of complex objects, such as a human bone, and in this case they are called landmark points.
The shape of an object can be considered as a member of an equivalence class formed by removing the translational, rotational and uniform scaling components.
Translation
For example, translational components can be removed from an object by translating the object so that the mean of all the object's points (i.e. its centroid) lies at the origin.
Mathematically: take $k$ points in two dimensions, say
$$((x_1, y_1), (x_2, y_2), \ldots, (x_k, y_k)).$$
The mean of these points is $(\bar{x}, \bar{y})$, where
$$\bar{x} = \frac{1}{k}\sum_{i=1}^{k} x_i, \qquad \bar{y} = \frac{1}{k}\sum_{i=1}^{k} y_i.$$
Now translate these points so that their mean is translated to the origin, $(x, y) \to (x - \bar{x}, y - \bar{y})$, giving the points $(x_1 - \bar{x}, y_1 - \bar{y}), \ldots, (x_k - \bar{x}, y_k - \bar{y})$.
Uniform scaling
Likewise, the scale component can be removed by scaling the object so that the root mean square distance (RMSD) from the points to the translated origin is 1. This RMSD is a statistical measure of the object's scale or size:
$$s = \sqrt{\frac{\sum_{i=1}^{k}\left[(x_i-\bar{x})^2 + (y_i-\bar{y})^2\right]}{k}}.$$
The scale becomes 1 when the point coordinates are divided by the object's initial scale:
$$(x_i, y_i) \to \left(\frac{x_i-\bar{x}}{s},\; \frac{y_i-\bar{y}}{s}\right).$$
Notice that other methods for defining and removing the scale are sometimes used in the literature.
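A minimal NumPy sketch of these two normalization steps (centering, then RMSD scaling); the array layout and function name are illustrative, not taken from the original text:

```python
import numpy as np

def center_and_scale(points):
    """Remove translation and scale from a (k, n) array of landmark points.

    Returns the normalized points, the original centroid, and the original
    scale (the root mean square distance of the points from their centroid)."""
    centroid = points.mean(axis=0)                         # mean of all the points
    centered = points - centroid                           # translate the centroid to the origin
    scale = np.sqrt((centered ** 2).sum() / len(points))   # RMSD from the origin
    return centered / scale, centroid, scale
```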
Rotation
Removing the rotational component is more complex, as a standard reference orientation is not always available. Consider two objects composed of the same number of points with scale and translation removed. Let the points of these be $(x_1, y_1), \ldots, (x_k, y_k)$ and $(w_1, z_1), \ldots, (w_k, z_k)$. One of these objects can be used to provide a reference orientation. Fix the reference object and rotate the other around the origin, until you find an optimum angle of rotation $\theta$ such that the sum of the squared distances (SSD) between the corresponding points is minimised (an example of least squares technique).
A rotation of the second object by angle $\theta$ gives
$$(u_i, v_i) = (\cos\theta\, w_i - \sin\theta\, z_i,\; \sin\theta\, w_i + \cos\theta\, z_i),$$
where $(u, v)$ are the coordinates of a rotated point. Taking the derivative of the SSD with respect to $\theta$ and solving for $\theta$ when the derivative is zero gives
$$\theta = \tan^{-1}\!\left(\frac{\sum_{i=1}^{k}(w_i y_i - z_i x_i)}{\sum_{i=1}^{k}(w_i x_i + z_i y_i)}\right).$$
When the object is three-dimensional, the optimum rotation is represented by a 3-by-3 rotation matrix R, rather than a simple angle, and in this case singular value decomposition can be used to find the optimum value for R (see the solution for the constrained orthogonal Procrustes problem, subject to det(R) = 1).
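A sketch of this rotation step in NumPy for the general n-dimensional case, using the SVD solution of the constrained orthogonal Procrustes problem (det(R) = 1, so reflections are excluded); it assumes both shapes have already been centered and scaled as above:

```python
import numpy as np

def optimal_rotation(reference, target):
    """Rotate `target` (k, n) onto `reference` (k, n); both centered and scaled."""
    u, _, vt = np.linalg.svd(reference.T @ target)
    d = np.sign(np.linalg.det(u @ vt))                     # +1 or -1; forces a proper rotation
    D = np.diag([1.0] * (reference.shape[1] - 1) + [float(d)])
    R = u @ D @ vt                                         # optimal rotation matrix
    return target @ R.T
```

The Procrustes distance defined in the next subsection is then simply `np.sqrt(((reference - optimal_rotation(reference, target)) ** 2).sum())`.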
Shape comparison
The difference between the shape of two objects can be evaluated only after "superimposing" the two objects by translating, scaling and optimally rotating them as explained above. The square root of the above-mentioned SSD between corresponding points can be used as a statistical measure of this difference in shape:
$$d = \sqrt{\sum_{i=1}^{k}\left[(u_i - x_i)^2 + (v_i - y_i)^2\right]}.$$
This measure is often called Procrustes distance. Notice that other more complex definitions of Procrustes distance, and other measures of "shape difference" are sometimes used in the literature.
Superimposing a set of shapes
We showed how to superimpose two shapes. The same method can be applied to superimpose a set of three or more shapes, as long as the above-mentioned reference orientation is used for all of them. However, generalized Procrustes analysis provides a better method to achieve this goal.
Generalized Procrustes analysis (GPA)
GPA applies the Procrustes analysis method to optimally superimpose a set of objects, instead of superimposing them to an arbitrarily selected shape.
Generalized and ordinary Procrustes analysis differ only in their determination of a reference orientation for the objects, which in the former technique is optimally determined, and in the latter one is arbitrarily selected. Scaling and translation are performed the same way by both techniques. When only two shapes are compared, GPA is equivalent to ordinary Procrustes analysis.
The algorithm outline is the following (a minimal code sketch follows the list):
arbitrarily choose a reference shape (typically by selecting it among the available instances)
superimpose all instances to current reference shape
compute the mean shape of the current set of superimposed shapes
if the Procrustes distance between mean and reference shape is above a threshold, set reference to mean shape and continue to step 2.
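A compact Python sketch of this loop, assuming the center_and_scale and optimal_rotation helpers from the ordinary-Procrustes sketches above are in scope (the tolerance and iteration cap are illustrative):

```python
import numpy as np

def generalized_procrustes(shapes, tol=1e-7, max_iter=100):
    """Iteratively superimpose a list of (k, n) landmark arrays onto their mean shape."""
    aligned = [center_and_scale(s)[0] for s in shapes]   # remove translation and scale
    reference = aligned[0]                               # step 1: arbitrary initial reference
    for _ in range(max_iter):
        aligned = [optimal_rotation(reference, s) for s in aligned]   # step 2
        mean_shape = center_and_scale(np.mean(aligned, axis=0))[0]    # step 3
        if np.sqrt(((mean_shape - reference) ** 2).sum()) < tol:      # step 4
            break
        reference = mean_shape
    return aligned, reference
```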
Variations
There are many ways of representing the shape of an object.
The shape of an object can be considered as a member of an equivalence class formed by taking the set of all sets of k points in n dimensions, that is $\mathbb{R}^{kn}$, and factoring out the set of all translations, rotations and scalings. A particular representation of shape is found by choosing a particular representation of the equivalence class. This will give a manifold of dimension kn − 4. Procrustes is one method of doing this with particular statistical justification.
Bookstein obtains a representation of shape by fixing the position of two points called the baseline. One point is fixed at the origin and the other at (1,0); the remaining points form the Bookstein coordinates.
It is also common to consider shape and scale, that is, with only the translational and rotational components removed.
Examples
Shape analysis is used in biological data to identify the variations of anatomical features characterised by landmark data, for example in considering the shape of jaw bones.
One study by David George Kendall examined the triangles formed by standing stones to deduce if these were often arranged in straight lines. The shape of a triangle can be represented as a point on the sphere, and the distribution of all shapes can be thought of as a distribution over the sphere.
The sample distribution from the standing stones was compared with the theoretical distribution to show that the occurrence of straight lines was no more than average.
See also
Active shape model
Alignments of random points
Biometrics
Generalized Procrustes analysis
Image registration
Kent distribution
Morphometrics
Orthogonal Procrustes problem
Procrustes
References
F.L. Bookstein, Morphometric tools for landmark data, Cambridge University Press, (1991).
J.C. Gower, G.B. Dijksterhuis, Procrustes Problems, Oxford University Press (2004).
I.L.Dryden, K.V. Mardia, Statistical Shape Analysis, Wiley, Chichester, (1998).
External links
Extensions to continuum of points and distributions Procrustes Methods, Shape Recognition, Similarity and Docking, by Michel Petitjean.
Multivariate statistics
Euclidean symmetries
Biometrics
Greek mythology studies
Greek words and phrases | Procrustes analysis | [
"Physics",
"Mathematics"
] | 1,888 | [
"Functions and mappings",
"Euclidean symmetries",
"Mathematical objects",
"Mathematical relations",
"Symmetry"
] |
3,682,580 | https://en.wikipedia.org/wiki/Plasmid%20preparation | A plasmid preparation is a method of DNA extraction and purification for plasmid DNA. It is an important step in many molecular biology experiments and is essential for the successful use of plasmids in research and biotechnology. Many methods have been developed to purify plasmid DNA from bacteria. During the purification procedure, the plasmid DNA is often separated from contaminating proteins and genomic DNA.
These methods invariably involve three steps: growth of the bacterial culture, harvesting and lysis of the bacteria, and purification of the plasmid DNA. Purification of plasmids is central to molecular cloning. A purified plasmid can be used for many standard applications, such as sequencing and transfections into cells.
Growth of the bacterial culture
Plasmids are almost always purified from liquid bacteria cultures, usually E. coli, which have been transformed and isolated. Virtually all plasmid vectors in common use encode one or more antibiotic resistance genes as a selectable marker, for example a gene encoding ampicillin or kanamycin resistance, which allows bacteria that have been successfully transformed to multiply uninhibited. Bacteria that have not taken up the plasmid vector are assumed to lack the resistance gene, and thus only colonies representing successful transformations are expected to grow.
Bacteria are grown under favourable conditions.
Harvesting and lysis of the bacteria
There are several methods for cell lysis, including alkaline lysis, mechanical lysis, and enzymatic lysis.
Alkaline lysis
The most common method is alkaline lysis, which involves the use of a high concentration of a basic solution, such as sodium hydroxide, to lyse the bacterial cells. When bacteria are lysed under alkaline conditions (pH 12.0–12.5) both chromosomal DNA and protein are denatured; the plasmid DNA however, remains stable. Some scientists reduce the concentration of NaOH used to 0.1M in order to reduce the occurrence of ssDNA. After the addition of acetate-containing neutralization buffer to lower the pH to around 7, the large and less supercoiled chromosomal DNA and proteins form large complexes and precipitate; but the small bacterial DNA plasmids stay in solution.
Mechanical lysis
Mechanical lysis involves the use of physical force, such as grinding or sonication, to break down bacterial cells and release the plasmid DNA. There are several different mechanical lysis methods that can be used, including French press, bead-beating, and ultrasonication.
Enzymatic lysis
Enzymatic lysis, also called Lysozyme lysis, involves the use of enzymes to digest the cell wall and release the plasmid DNA. The most commonly used enzyme for this purpose is lysozyme, which breaks down the peptidoglycan in the cell wall of Gram-positive bacteria. Lysozyme is usually added to the bacterial culture, followed by heating and/or shaking the culture to release the plasmid DNA.
Preparations by size
Plasmid preparation can be divided into five main categories based on the scale of the preparation: minipreparation, midipreparation, maxipreparation, megapreparation, and gigapreparation. The choice of which method to use will depend on the amount of plasmid DNA required, as well as the specific application for which it will be used.
Kits are available from varying manufacturers to purify plasmid DNA, which are named by size of bacterial culture and corresponding plasmid yield. In increasing order they are: miniprep, midiprep, maxiprep, megaprep, and gigaprep. The plasmid DNA yield will vary depending on the plasmid copy number, type and size, the bacterial strain, the growth conditions, and the kit.
Minipreparation
Minipreparation of plasmid DNA is a rapid, small-scale isolation of plasmid DNA from bacteria, typically based on the alkaline lysis method; commonly used miniprep formats include manual alkaline-lysis protocols and spin-column based kits. The extracted plasmid DNA resulting from performing a miniprep is itself often called a "miniprep".
Minipreps are used in the process of molecular cloning to analyze bacterial clones. A typical plasmid DNA yield of a miniprep is 5 to 50 μg depending on the cell strain.
Miniprep of a large number of plasmids can also be done conveniently on filter paper by lysing the cell and eluting the plasmid on to filter paper.
Midipreparation
The starting E. coli culture volume is 15-25 mL of Lysogeny broth (LB) and the expected DNA yield is 100-350 μg.
Maxipreparation
The starting E. coli culture volume is 100-200 mL of LB and the expected DNA yield is 500-850 μg.
Megapreparation
The starting E. coli culture volume is 500 mL – 2.5 L of LB and the expected DNA yield is 1.5-2.5 mg.
Gigapreparation
The starting E. coli culture volume is 2.5-5 L of LB and the expected DNA yield is 7.5–10 mg.
Purification of plasmid DNA
It is important to consider the downstream applications of the plasmid DNA when choosing a purification method. For example, if the plasmid is to be used for transfection or electroporation, a purification method that results in high purity and low endotoxin levels is desirable. Similarly, if the plasmid is to be used for sequencing or PCR, a purification method that results in high yield and minimal contaminants is desirable. However, multiple methods of nucleic acid purification exist. All work on the principle of generating conditions where either only the nucleic acid precipitates, or only other biomolecules precipitate, allowing the nucleic acid to be separated.
In high-throughput DNA extraction workflows, laboratory consumables such as 96-well plates can be used to process multiple samples in parallel. These plate formats allow extraction protocols to be automated, significantly increasing the throughput of plasmid DNA isolation while maintaining consistency across large sample sets. When used in combination with automated liquid handling systems, 96-well plates help streamline the extraction of plasmid DNA from bacterial cultures, ensuring uniformity and reducing manual errors during the purification steps.
Ethanol precipitation
Ethanol precipitation is a widely used method for purifying and concentrating nucleic acids, including plasmid DNA. The basic principle of this method is that nucleic acids are insoluble in ethanol or isopropanol but soluble in water. Therefore, it works by using ethanol as an antisolvent of DNA, causing it to precipitate out of solution and then it can be collected by centrifugation. The soluble fraction is discarded to remove other biomolecules.
Spin column
Spin column-based nucleic acid purification is a method of purifying DNA, RNA or plasmid from a sample using a spin column filter. The method is based on the principle of selectively binding nucleic acids to a solid matrix in the spin column, while other contaminants, such as proteins and salts, are washed away. The conditions are then changed to elute the purified nucleic acid off the column using a suitable elution buffer.
Phenol–chloroform extraction
The basic principle of the phenol-chloroform extraction is that DNA and RNA are relatively insoluble in phenol and chloroform, while other cellular components are relatively soluble in these solvents. The addition of a phenol/chloroform mixture will dissolve protein and lipid contaminants, leaving the nucleic acids in the aqueous phase. It also denatures proteins, like DNase, which is especially important if the plasmids are to be used for enzyme digestion. Otherwise, smearing may occur in enzyme restricted form of plasmid DNA.
Beads-based extraction
In beads-based extraction, a mixture containing magnetic beads (commonly containing iron oxide) is added and binds the plasmid DNA, separating it from unwanted compounds with a magnetic rod or stand. The plasmid-bound beads are then released by removal of the magnetic field, and the DNA is eluted in an elution solution for downstream experiments such as transformation or restriction digestion. This form of miniprep can also be automated, which increases convenience while reducing mechanical error.
References
Further reading
External links
http://www.protocol-online.org/prot/Molecular_Biology/Plasmid/Miniprep/
A miniprep procedure using diatomaceous earth to bind DNA during purification and washing.
Biological techniques and tools
Genetics techniques
Molecular biology | Plasmid preparation | [
"Chemistry",
"Engineering",
"Biology"
] | 1,899 | [
"Genetics techniques",
"Genetic engineering",
"nan",
"Molecular biology",
"Biochemistry"
] |
3,683,672 | https://en.wikipedia.org/wiki/Helicon%20%28physics%29 | In electromagnetism, a helicon is a low-frequency electromagnetic wave that can exist in bounded plasmas in the presence of a magnetic field. The first helicons observed were atmospheric whistlers, but they also exist in solid conductors or any other electromagnetic plasma. The electric field in the waves is dominated by the Hall effect, and is nearly at right angles to the electric current (rather than parallel as it would be without the magnetic field); so that the propagating component of the waves is corkscrew-shaped (helical) – hence the term “helicon,” coined by Aigrain.
Helicons have the special ability to propagate through pure metals, given conditions of low temperature and high magnetic fields. Most electromagnetic waves in a normal conductor are not able to do this, since the high conductivity of metals (due to their free electrons) acts to screen out the electromagnetic field. Indeed, normally an electromagnetic wave would experience a very thin skin depth in a metal: the electric or magnetic fields are quickly reflected upon trying to enter the metal. (Hence the shine of metals.) However, skin depth depends on an inverse proportionality to the square root of angular frequency. Thus a low-frequency electromagnetic wave may be able to overcome the skin depth problem, and thereby propagate throughout the material.
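As a rough quantitative illustration of this frequency dependence (a standard textbook expression for a good conductor, not taken from the original text), the classical skin depth for a material of resistivity $\rho$ and permeability $\mu$ at angular frequency $\omega$ is
$$\delta = \sqrt{\frac{2\rho}{\mu\,\omega}} \;\propto\; \frac{1}{\sqrt{\omega}},$$
so lowering the frequency (or raising the resistivity) increases the depth to which the field penetrates, which is the regime in which helicon propagation becomes possible.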
One property of the helicon waves (readily demonstrated by a rudimentary calculation, using only the Hall effect terms and a resistivity term) is that at places where the sample surface runs parallel to the magnetic field, one of the modes contains electric currents that "go to infinity" in the limit of perfect conductivity; so that the Joule heating loss in such surface regions tends to a non-zero limit. The surface mode is especially prevalent in cylindrical samples parallel to the magnetic field, a configuration for which an exact solution has been found for the equations, and which figures importantly in subsequent experiments.
The practical significance of the surface mode, and its ultra-high current density, was not recognized in the original papers, but came to prominence a few years later when Boswell discovered the superior plasma generating ability of helicons – achieving plasma charge densities 10 times higher than had been achieved with earlier methods, without a magnetic field.
Since then, helicons have found use in a variety of scientific and industrial applications – wherever highly efficient plasma generation is required, as in nuclear fusion reactors and in space propulsion (where the helicon double-layer thruster and the Variable Specific Impulse Magnetoplasma Rocket both make use of helicons in their plasma heating phase). Helicons are also utilized in the procedure of plasma etching, used in the manufacture of computer microcircuits.
A helicon discharge is an excitation of plasma by helicon waves induced through radio frequency heating. The difference between a helicon plasma source and an inductively coupled plasma (ICP) is the presence of a magnetic field directed along the axis of the antenna. The presence of this magnetic field creates a helicon mode of operation with higher ionization efficiency and greater electron density than a typical ICP. The Australian National University, in Canberra, Australia, is currently researching applications for this technology. A commercially developed magnetoplasmadynamic engine called VASIMR also uses helicon discharge for generation of plasma in its engine. Potentially, helicon double-layer thruster plasma-based rockets are suitable for interplanetary travel.
See also
Helicon double-layer thruster
Variable Specific Impulse Magnetoplasma Rocket
References
Electromagnetic radiation | Helicon (physics) | [
"Physics"
] | 738 | [
"Electromagnetic radiation",
"Physical phenomena",
"Radiation"
] |
3,684,088 | https://en.wikipedia.org/wiki/Color%20superconductivity | Color superconductivity is a phenomenon where matter carries color charge without loss, analogous to the way conventional superconductors can carry electric charge without loss. Color superconductivity is predicted to occur in quark matter if the baryon density is sufficiently high (i.e., well above the density and energies of an atomic nucleus) and the temperature is not too high (well below 10¹² kelvins). Color superconducting phases are to be contrasted with the normal phase of quark matter, which is just a weakly interacting Fermi liquid of quarks.
In theoretical terms, a color superconducting phase is a state in which the quarks near the Fermi surface become correlated in Cooper pairs, which condense. In phenomenological terms, a color superconducting phase breaks some of the symmetries of the underlying theory, and has a very different spectrum of excitations and very different transport properties from the normal phase.
Description
Analogy with superconducting metals
It is well known that at low temperature many metals become superconductors. A metal can be viewed in part as a Fermi liquid of electrons, and below a critical temperature, an attractive phonon-mediated interaction between the electrons near the Fermi surface causes them to pair up and form a condensate of Cooper pairs, which via the Anderson–Higgs mechanism makes the photon massive, leading to characteristic behaviors of a superconductor: infinite conductivity and the exclusion of magnetic fields (Meissner effect). The crucial ingredients for this to occur are:
a liquid of charged fermions.
an attractive interaction between the fermions
low temperature (below the critical temperature)
These ingredients are also present in sufficiently dense quark matter, leading physicists to expect that something similar will happen in that context:
quarks carry both electric charge and color charge;
the strong interaction between two quarks is powerfully attractive;
the critical temperature is expected to be given by the QCD scale, which is of order 100 MeV, or 10¹² kelvins, the temperature of the universe a few minutes after the Big Bang, so quark matter that we may currently observe in compact stars or other natural settings will be below this temperature.
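The quoted figure follows from a standard unit conversion (shown here only as an illustration, using $k_B \approx 8.617 \times 10^{-5}$ eV/K):
$$T = \frac{E}{k_B} \approx \frac{100\ \text{MeV}}{8.617 \times 10^{-5}\ \text{eV/K}} \approx 1.2 \times 10^{12}\ \text{K}.$$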
The fact that a Cooper pair of quarks carries a net color charge, as well as a net electric charge, means that some of the gluons (which mediate the strong interaction just as photons mediate electromagnetism) become massive in a phase with a condensate of quark Cooper pairs, so such a phase is called a "color superconductor". Actually, in many color superconducting phases the photon itself does not become massive, but mixes with one of the gluons to yield a new massless "rotated photon". This is an MeV-scale echo of the mixing of the hypercharge and W3 bosons that originally yielded the photon at the TeV scale of electroweak symmetry breaking.
Diversity of color superconducting phases
Unlike an electrical superconductor, color-superconducting quark matter comes in many varieties, each of which is a separate phase of matter. This is because quarks, unlike electrons, come in many species. There are three different colors (red, green, blue) and in the core of a compact star we expect three different flavors (up, down, strange), making nine species in all. Thus in forming the Cooper pairs there is a 9×9 color-flavor matrix of possible pairing patterns. The differences between these patterns are very physically significant: different patterns break different symmetries of the underlying theory, leading to different excitation spectra and different transport properties.
It is very hard to predict which pairing patterns will be favored in nature. In principle this question could be decided by a QCD calculation, since QCD is the theory that fully describes the strong interaction. In the limit of infinite density, where the strong interaction becomes weak because of asymptotic freedom, controlled calculations can be performed, and it is known that the favored phase in three-flavor quark matter is the color-flavor-locked phase. But at the densities that exist in nature these calculations are unreliable, and the only known alternative is the brute-force computational approach of lattice QCD, which unfortunately has a technical difficulty (the "sign problem") that renders it useless for calculations at high quark density and low temperature.
Physicists are currently pursuing the following lines of research on color superconductivity:
Performing calculations in the infinite density limit, to get some idea of the behavior at one edge of the phase diagram.
Performing calculations of the phase structure down to medium density using a highly simplified model of QCD, the Nambu–Jona-Lasinio (NJL) model, which is not a controlled approximation, but is expected to yield semi-quantitative insights.
Writing down an effective theory for the excitations of a given phase, and using it to calculate the physical properties of that phase.
Performing astrophysical calculations, using NJL models or effective theories, to see if there are observable signatures by which one could confirm or rule out the presence of specific color superconducting phases in nature (i.e. in compact stars: see next section).
Possible occurrence in nature
The only known place in the universe where the baryon density might possibly be high enough to produce quark matter, and the temperature is low enough for color superconductivity to occur, is the core of a compact star (often called a "neutron star", a term which prejudges the question of its actual makeup). There are many open questions here:
We do not know the critical density at which there would be a phase transition from nuclear matter to some form of quark matter, so we do not know whether compact stars have quark matter cores or not.
On the other extreme, it is conceivable that nuclear matter in bulk is actually metastable, and decays into quark matter (the "stable strange matter hypothesis"). In this case, compact stars would consist completely of quark matter all the way to their surface.
Assuming that compact stars do contain quark matter, we do not know whether that quark matter is in a color superconducting phase or not. At infinite density one expects color superconductivity, and the attractive nature of the dominant strong quark-quark interaction leads one to expect that it will survive down to lower densities, but there may be a transition to some strongly coupled phase (e.g. a Bose–Einstein condensate of spatially bound di- or hexaquarks).
See also
Further reading
References
Phases of matter
Quantum chromodynamics
Quark matter | Color superconductivity | [
"Physics",
"Chemistry"
] | 1,406 | [
"Quark matter",
"Phases of matter",
"Astrophysics",
"Nuclear physics",
"Matter"
] |
867,671 | https://en.wikipedia.org/wiki/Importance%20sampling | Importance sampling is a Monte Carlo method for evaluating properties of a particular distribution, while only having samples generated from a different distribution than the distribution of interest. Its introduction in statistics is generally attributed to a paper by Teun Kloek and Herman K. van Dijk in 1978, but its precursors can be found in statistical physics as early as 1949. Importance sampling is also related to umbrella sampling in computational physics. Depending on the application, the term may refer to the process of sampling from this alternative distribution, the process of inference, or both.
Basic theory
Let $X\colon \Omega \to \mathbb{R}$ be a random variable in some probability space $(\Omega, \mathcal{F}, P)$. We wish to estimate the expected value of X under P, denoted E[X;P]. If we have statistically independent random samples $x_1, \ldots, x_n$, generated according to P, then an empirical estimate of E[X;P] is
$$\widehat{\mathbf{E}}_{n}[X;P] = \frac{1}{n} \sum_{i=1}^{n} x_i,$$
and the precision of this estimate depends on the variance of X:
$$\operatorname{var}\!\big[\widehat{\mathbf{E}}_{n}[X;P]\big] = \frac{\operatorname{var}[X;P]}{n}.$$
The basic idea of importance sampling is to sample the states from a different distribution to lower the variance of the estimation of E[X;P], or when sampling from P is difficult.
This is accomplished by first choosing a random variable $L \geq 0$ such that E[L;P] = 1 and that $L(\omega) \neq 0$ P-almost everywhere.
With the variable L we define a probability $P^{(L)}$ that satisfies
$$\mathbf{E}[X;P] = \mathbf{E}\!\left[\frac{X}{L};P^{(L)}\right].$$
The variable X/L will thus be sampled under $P^{(L)}$ to estimate E[X;P] as above, and this estimation is improved when
$$\operatorname{var}\!\left[\frac{X}{L};P^{(L)}\right] < \operatorname{var}[X;P].$$
When X is of constant sign over Ω, the best variable L would clearly be $L^* = \frac{X}{\mathbf{E}[X;P]} \geq 0$, so that X/L* is the searched constant E[X;P] and a single sample under $P^{(L^*)}$ suffices to give its value. Unfortunately we cannot take that choice, because E[X;P] is precisely the value we are looking for! However this theoretical best case L* gives us an insight into what importance sampling does:
to the right, is one of the infinitesimal elements that sum up to E[X;P]:
therefore, a good probability change P(L) in importance sampling will redistribute the law of X so that its samples' frequencies are sorted directly according to their weights in E[X;P]. Hence the name "importance sampling."
Importance sampling is often used as a Monte Carlo integrator.
When $P$ is the uniform distribution over a set $\Omega \subset \mathbb{R}$ and $X(\omega) = f(\omega)$ for a real function $f$, E[X;P] corresponds (up to the normalizing factor $1/|\Omega|$) to the integral of $f$ over $\Omega$.
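A minimal Python sketch of importance sampling as a Monte Carlo integrator; the integrand $x^4$ on $[0, 1]$ and the Beta(4, 1) proposal are illustrative choices, not taken from the original text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def integrand(x):
    # Target: E[X; P] = integral of x**4 over [0, 1] = 0.2, with P uniform on [0, 1].
    return x ** 4

# Plain Monte Carlo under the uniform distribution P.
u = rng.uniform(size=n)
plain_estimate = integrand(u).mean()

# Importance sampling: draw from a Beta(4, 1) proposal with density q(x) = 4 * x**3,
# which concentrates samples where the integrand is large.
x = rng.beta(4.0, 1.0, size=n)
weights = 1.0 / (4.0 * x ** 3)            # likelihood ratio p(x) / q(x), with p = 1 on [0, 1]
is_estimate = (integrand(x) * weights).mean()

print(plain_estimate, is_estimate)        # both are close to 0.2; the second has lower variance
```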
Application to probabilistic inference
Such methods are frequently used to estimate posterior densities or expectations in state and/or parameter estimation problems in probabilistic models that are too hard to treat analytically. Examples include Bayesian networks and importance weighted variational autoencoders.
Application to simulation
Importance sampling is a variance reduction technique that can be used in the Monte Carlo method. The idea behind importance sampling is that certain values of the input random variables in a simulation have more impact on the parameter being estimated than others. If these "important" values are emphasized by sampling more frequently, then the estimator variance can be reduced. Hence, the basic methodology in importance sampling is to choose a distribution which "encourages" the important values. This use of "biased" distributions will result in a biased estimator if it is applied directly in the simulation. However, the simulation outputs are weighted to correct for the use of the biased distribution, and this ensures that the new importance sampling estimator is unbiased. The weight is given by the likelihood ratio, that is, the Radon–Nikodym derivative of the true underlying distribution with respect to the biased simulation distribution.
The fundamental issue in implementing importance sampling simulation is the choice of the biased distribution which encourages the important regions of the input variables. Choosing or designing a good biased distribution is the "art" of importance sampling. The rewards for a good distribution can be huge run-time savings; the penalty for a bad distribution can be longer run times than for a general Monte Carlo simulation without importance sampling.
Consider $X$ to be the sample and $\frac{f(X)}{q(X)}$ to be the likelihood ratio, where $f$ is the probability density (mass) function of the desired distribution and $q$ is the probability density (mass) function of the biased/proposal/sample distribution. Then the problem can be characterized by choosing the sample distribution $q$ that minimizes the variance of the scaled sample:
$$q^* = \min_{q} \operatorname{var}_q\!\left(X\,\frac{f(X)}{q(X)}\right).$$
It can be shown that the following distribution minimizes the above variance:
$$q^*(X) = \frac{|X|\,f(X)}{\int |x|\,f(x)\,dx}.$$
Notice that when $X \geq 0$, this variance becomes 0.
Mathematical approach
Consider estimating by simulation the probability $p_t$ of an event $\{X \ge t\}$, where $X$ is a random variable with cumulative distribution function $F(x)$ and probability density function $f(x) = F'(x)$, where prime denotes derivative. A $K$-length independent and identically distributed (i.i.d.) sequence $X_i$ is generated from the distribution $F$, and the number $k_t$ of random variables that lie above the threshold $t$ is counted. The random variable $k_t$ is characterized by the binomial distribution
$$P(k_t = k) = \binom{K}{k} p_t^{k}(1 - p_t)^{K-k}, \qquad k = 0, 1, \dots, K.$$
One can show that $\mathbf{E}[k_t/K] = p_t$ and $\operatorname{var}[k_t/K] = p_t(1-p_t)/K$, so in the limit $K \to \infty$ we are able to obtain $p_t$. Note that the variance is low if $p_t \approx 1$. Importance sampling is concerned with the determination and use of an alternate density function $f_*$ (for $X$), usually referred to as a biasing density, for the simulation experiment. This density allows the event $\{X \ge t\}$ to occur more frequently, so the sequence length $K$ gets smaller for a given estimator variance. Alternatively, for a given $K$, use of the biasing density results in a variance smaller than that of the conventional Monte Carlo estimate. From the definition of $p_t$, we can introduce $f_*$ as below.
$$p_t = \mathbf{E}\big[1(X \ge t)\big] = \int 1(x \ge t)\,\frac{f(x)}{f_*(x)}\,f_*(x)\,dx = \mathbf{E}_*\big[1(X \ge t)\,W(X)\big],$$
where
$$W(\cdot) \equiv \frac{f(\cdot)}{f_*(\cdot)}$$
is a likelihood ratio and is referred to as the weighting function. The last equality in the above equation motivates the estimator
$$\hat{p}_t = \frac{1}{K}\sum_{i=1}^{K} 1(X_i \ge t)\,W(X_i), \qquad X_i \sim f_*.$$
This is the importance sampling estimator of $p_t$ and is unbiased. That is, the estimation procedure is to generate i.i.d. samples from $f_*$ and for each sample which exceeds $t$, the estimate is incremented by the weight $W$ evaluated at the sample value. The results are averaged over $K$ trials. The variance of the importance sampling estimator is easily shown to be
$$\operatorname{var}_*\hat{p}_t = \frac{1}{K}\operatorname{var}_*\big[1(X \ge t)\,W(X)\big] = \frac{1}{K}\left\{\mathbf{E}\big[1(X \ge t)\,W(X)\big] - p_t^{2}\right\}.$$
Now, the importance sampling problem then focuses on finding a biasing density such that the variance of the importance sampling estimator is less than the variance of the general Monte Carlo estimate. For some biasing density function, which minimizes the variance, and under certain conditions reduces it to zero, it is called an optimal biasing density function.
Conventional biasing methods
Although there are many kinds of biasing methods, the following two methods are most widely used in the applications of importance sampling.
Scaling
Shifting probability mass into the event region by positive scaling of the random variable with a number greater than unity has the effect of increasing the variance (mean also) of the density function. This results in a heavier tail of the density, leading to an increase in the event probability. Scaling is probably one of the earliest biasing methods known and has been extensively used in practice. It is simple to implement and usually provides conservative simulation gains as compared to other methods.
In importance sampling by scaling, the simulation density is chosen as the density function of the scaled random variable $aX$, where usually $a > 1$ for tail probability estimation. By transformation,
$$f_*(x) = \frac{1}{a}\,f\!\left(\frac{x}{a}\right)$$
and the weighting function is
$$W(x) = a\,\frac{f(x)}{f\!\left(\frac{x}{a}\right)}.$$
While scaling shifts probability mass into the desired event region, it also pushes mass into the complementary region which is undesirable. If is a sum of random variables, the spreading of mass takes place in an dimensional space. The consequence of this is a decreasing importance sampling gain for increasing , and is called the dimensionality effect.
A modern version of importance sampling by scaling is, for example, so-called sigma-scaled sampling (SSS), which runs multiple Monte Carlo (MC) analyses with different scaling factors. In contrast to many other high-yield estimation methods (like worst-case distances, WCD), SSS does not suffer much from the dimensionality problem. Also, addressing multiple MC outputs causes no degradation in efficiency. On the other hand, like WCD, SSS is only designed for Gaussian statistical variables, and in contrast to WCD, the SSS method is not designed to provide accurate statistical corners. Another SSS disadvantage is that the MC runs with large scale factors may become difficult, e.g. due to model and simulator convergence problems. In addition, in SSS we face a strong bias-variance trade-off: using large scale factors, we obtain quite stable yield results, but the larger the scale factors, the larger the bias error. If the advantages of SSS do not matter much in the application of interest, then other methods are often more efficient.
Translation
Another simple and effective biasing technique employs translation of the density function (and hence random variable) to place much of its probability mass in the rare event region. Translation does not suffer from a dimensionality effect and has been successfully used in several applications relating to simulation of digital communication systems. It often provides better simulation gains than scaling. In biasing by translation, the simulation density is given by
$$f_*(x) = f(x - c), \qquad c > 0,$$
where $c$ is the amount of shift and is to be chosen to minimize the variance of the importance sampling estimator.
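A small Python sketch of biasing by translation for a tail probability, using an illustrative standard normal model, threshold t = 4, and shift c = 4 (none of these values come from the original text); SciPy's normal-distribution helpers supply the densities:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
t, c, K = 4.0, 4.0, 100_000               # threshold, shift amount, number of samples

# Biased simulation density f_*(x) = f(x - c): a unit-variance normal centred at c.
x = rng.normal(loc=c, scale=1.0, size=K)
weights = norm.pdf(x) / norm.pdf(x, loc=c)        # likelihood ratio W(x) = f(x) / f_*(x)
p_hat = np.mean((x >= t) * weights)               # importance sampling estimate of P(X >= t)

print(p_hat, norm.sf(t))                  # compare with the exact tail probability, about 3.2e-5
```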
Effects of system complexity
The fundamental problem with importance sampling is that designing good biased distributions becomes more complicated as the system complexity increases. Complex systems are the systems with long memory since complex processing of a few inputs is much easier to handle. This dimensionality or memory can cause problems in three ways:
long memory (severe intersymbol interference (ISI))
unknown memory (Viterbi decoders)
possibly infinite memory (adaptive equalizers)
In principle, the importance sampling ideas remain the same in these situations, but the design becomes much harder. A successful approach to combat this problem is essentially breaking down a simulation into several smaller, more sharply defined subproblems. Then importance sampling strategies are used to target each of the simpler subproblems. Examples of techniques to break the simulation down are conditioning and error-event simulation (EES) and regenerative simulation.
Evaluation of importance sampling
In order to identify successful importance sampling techniques, it is useful to be able to quantify the run-time savings due to the use of the importance sampling approach. The performance measure commonly used is the variance ratio $\sigma^2_{MC}/\sigma^2_{IS}$, and this can be interpreted as the speed-up factor by which the importance sampling estimator achieves the same precision as the MC estimator. This has to be computed empirically since the estimator variances are not likely to be analytically possible when their mean is intractable. Other useful concepts in quantifying an importance sampling estimator are the variance bounds and the notion of asymptotic efficiency. One related measure is the so-called Effective Sample Size (ESS).
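For example, a common estimate of the effective sample size from a set of (unnormalized) importance weights is the following (a standard formula, shown here only as an illustration):

```python
import numpy as np

def effective_sample_size(weights):
    """ESS of unnormalized importance weights: (sum of w)**2 / (sum of w**2)."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()
```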
Variance cost function
Variance is not the only possible cost function for a simulation, and other cost functions, such as the mean absolute deviation, are used in various statistical applications. Nevertheless, the variance is the primary cost function addressed in the literature, probably due to the use of variances in confidence intervals and in the performance measure $\sigma^2_{MC}/\sigma^2_{IS}$.
An associated issue is the fact that the ratio $\sigma^2_{MC}/\sigma^2_{IS}$ overestimates the run-time savings due to importance sampling since it does not include the extra computing time required to compute the weight function. Hence, some people evaluate the net run-time improvement by various means. Perhaps a more serious overhead to importance sampling is the time taken to devise and program the technique and analytically derive the desired weight function.
Multiple and adaptive importance sampling
When different proposal distributions $q_n(x)$, $n = 1, \ldots, N$, are jointly used for drawing the samples, different proper weighting functions can be employed (e.g., see ). In an adaptive setting, the proposal distributions $q_{n,t}(x)$, $n = 1, \ldots, N$, $t = 1, \ldots, T$, are updated at each iteration $t$ of the adaptive importance sampling algorithm. Hence, since a population of proposal densities is used, several suitable combinations of sampling and weighting schemes can be employed.
See also
Monte Carlo method
Variance reduction
Stratified sampling
Recursive stratified sampling
VEGAS algorithm
Particle filter — a sequential Monte Carlo method, which uses importance sampling
Auxiliary field Monte Carlo
Rejection sampling
Variable bitrate — a common audio application of importance sampling
Notes
References
External links
Sequential Monte Carlo Methods (Particle Filtering) homepage on University of Cambridge
Introduction to importance sampling in rare-event simulations European journal of Physics. PDF document.
Adaptive Monte Carlo methods for rare event simulations Winter Simulation Conference
Monte Carlo methods
Variance reduction
Stochastic simulation | Importance sampling | [
"Physics"
] | 2,465 | [
"Monte Carlo methods",
"Computational physics"
] |
867,975 | https://en.wikipedia.org/wiki/Aeolipile | An aeolipile, aeolipyle, or eolipile, from the Greek "Αἰόλου πύλη," , also known as a Hero's (or Heron's) engine, is a simple, bladeless radial steam turbine which spins when the central water container is heated. Torque is produced by steam jets exiting the turbine. The Greek-Egyptian mathematician and engineer Hero of Alexandria described the device in the 1st century AD, and many sources give him the credit for its invention. However, Vitruvius was the first to describe this appliance in his De architectura ().
The aeolipile is considered to be the first recorded steam engine or reaction steam turbine, but it is neither a practical source of power nor a direct predecessor of the type of steam engine invented during the Industrial Revolution.
The name – derived from the Greek word Αἴολος and Latin word pila – translates to "the ball of Aeolus", Aeolus being the Greek god of the air and wind.
Because it applies steam to perform work, an Aeolipile (depicted in profile) is used as the symbol for the U.S. Navy's Boiler Technician Rate, as it was for the earlier Watertender, Boilermaker, and Boilerman ratings.
Physics
The aeolipile usually consists of a spherical or cylindrical vessel with oppositely bent or curved nozzles projecting outwards. It is designed to rotate on its axis. When the vessel is pressurised with steam, the gas is expelled out of the nozzles, which generates thrust due to the rocket principle as a consequence of the 2nd and 3rd of Newton's laws of motion. When the nozzles, pointing in different directions, produce forces along different lines of action perpendicular to the axis of the bearings, the thrusts combine to result in a rotational moment (mechanical couple), or torque, causing the vessel to spin about its axis. Aerodynamic drag and frictional forces in the bearings build up quickly with increasing rotational speed (rpm) and consume the accelerating torque, eventually cancelling it and achieving a steady state speed.
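As a hedged back-of-the-envelope illustration of this balance (the symbols are generic and not taken from the original text): if each of the two nozzles expels steam at mass flow rate $\dot m$ and exhaust speed $v_e$ at a lever arm $r$ from the axis, the accelerating torque is
$$\tau_{\text{thrust}} = 2\,\dot m\, v_e\, r,$$
and the vessel spins up until the speed-dependent drag and bearing-friction torques cancel it, $\tau_{\text{thrust}} = \tau_{\text{drag}}(\omega) + \tau_{\text{friction}}(\omega)$, which sets the steady-state rotation rate.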
Typically, and as Hero described the device, the water is heated in a simple boiler which forms part of a stand for the rotating vessel. Where this is the case, the boiler is connected to the rotating chamber by a pair of pipes that also serve as the pivots for the chamber. Alternatively the rotating chamber may itself serve as the boiler, and this arrangement greatly simplifies the pivot/bearing arrangements, as they then do not need to pass steam. This can be seen in the illustration of a classroom model shown here.
History
Both Hero and Vitruvius draw on the much earlier work by Ctesibius (285–222 BC), also known as Ktēsíbios or Tesibius, who was an inventor and mathematician in Alexandria, Ptolemaic Egypt. He wrote the first treatises on the science of compressed air and its uses in pumps.
Vitruvius's description
Vitruvius (c. 80 BC – c. 15 BC) mentions aeolipiles by name:
Hero's description
Hero (c. 10–70 AD) takes a more practical approach, in that he gives instructions how to make one:
Practical usage
It is not known whether the aeolipile was put to any practical use in ancient times, and if it was seen as a pragmatic device, a whimsical novelty, an object of reverence, or some other thing. A source described it as a mere curiosity for the ancient Greeks, or a "party trick". Hero's drawing shows a standalone device, and was presumably intended as a "temple wonder", like many of the other devices described in Pneumatica.
Vitruvius, on the other hand, mentions use of the aeolipile for demonstrating the physical properties of the weather. He describes them as:
After describing the device's construction (see above) he concludes:
In 1543, Blasco de Garay, a scientist and a captain in the Spanish navy, allegedly demonstrated before the Holy Roman Emperor Charles V and a committee of high officials an invention he claimed could propel large ships in the absence of wind, using an apparatus consisting of a copper boiler and moving wheels on either side of the ship. This account was preserved in the royal Spanish archives at Simancas. It has been proposed that de Garay used Hero's aeolipile and combined it with the technology used in Roman boats and late medieval galleys. Here, de Garay's invention introduced an innovation in which the aeolipile had a practical use, namely generating motion for the paddlewheels, demonstrating the feasibility of steam-driven boats. This claim was denied by Spanish authorities.
See also
Catherine wheel (firework)
Rocket engine
Segner wheel
Steam engine
Steam locomotive
Steam rocket
Tip jet
References
Further reading
History of thermodynamics
Steam engines
Rocket engines
Industrial design
Hellenistic engineering
Early rocketry
Ancient inventions
Ancient Egyptian technology
Egyptian inventions
History of technology | Aeolipile | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,037 | [
"Industrial design",
"Design engineering",
"Engines",
"Rocket engines",
"History of thermodynamics",
"Science and technology studies",
"Thermodynamics",
"History of technology",
"Design",
"History of science and technology"
] |
868,108 | https://en.wikipedia.org/wiki/Nanomaterials | Nanomaterials describe, in principle, chemical substances or materials of which a single unit is sized (in at least one dimension) between 1 and 100 nm (the usual definition of nanoscale).
Nanomaterials research takes a materials science-based approach to nanotechnology, leveraging advances in materials metrology and synthesis which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, thermo-physical or mechanical properties.
Nanomaterials are slowly becoming commercialized and beginning to emerge as commodities.
Definition
In ISO/TS 80004, nanomaterial is defined as the "material with any external dimension in the nanoscale or having internal structure or surface structure in the nanoscale", with nanoscale defined as the "length range approximately from 1 nm to 100 nm". This includes both nano-objects, which are discrete pieces of material, and nanostructured materials, which have internal or surface structure on the nanoscale; a nanomaterial may be a member of both these categories.
On 18 October 2011, the European Commission adopted the following definition of a nanomaterial:A natural, incidental or manufactured material containing particles, in an unbound state or as an aggregate or as an agglomerate and for 50% or more of the particles in the number size distribution, one or more external dimensions is in the size range 1 nm – 100 nm. In specific cases and where warranted by concerns for the environment, health, safety or competitiveness the number size distribution threshold of 50% may be replaced by a threshold between 1% to 50%.
Sources
Engineered
Engineered nanomaterials have been deliberately engineered and manufactured by humans to have certain required properties.
Legacy nanomaterials are those that were in commercial production prior to the development of nanotechnology as incremental advancements over other colloidal or particulate materials. They include carbon black and titanium dioxide nanoparticles.
Incidental
Nanomaterials may be unintentionally produced as a byproduct of mechanical or industrial processes through combustion and vaporization. Sources of incidental nanoparticles include vehicle engine exhausts, smelting, welding fumes, combustion processes from domestic solid fuel heating and cooking. For instance, the class of nanomaterials called fullerenes are generated by burning gas, biomass, and candle. It can also be a byproduct of wear and corrosion products. Incidental atmospheric nanoparticles are often referred to as ultrafine particles, which are unintentionally produced during an intentional operation, and could contribute to air pollution.
Natural
Biological systems often feature natural, functional nanomaterials. The structure of foraminifera (mainly chalk) and viruses (protein, capsid), the wax crystals covering a lotus or nasturtium leaf, spider and spider-mite silk, the blue hue of tarantulas, the "spatulae" on the bottom of gecko feet, some butterfly wing scales, natural colloids (milk, blood), horny materials (skin, claws, beaks, feathers, horns, hair), paper, cotton, nacre, corals, and even our own bone matrix are all natural organic nanomaterials.
Natural inorganic nanomaterials occur through crystal growth in the diverse chemical conditions of the Earth's crust. For example, clays display complex nanostructures due to anisotropy of their underlying crystal structure, and volcanic activity can give rise to opals, which are an instance of a naturally occurring photonic crystals due to their nanoscale structure. Fires represent particularly complex reactions and can produce pigments, cement, fumed silica etc.
Natural sources of nanoparticles include combustion products forest fires, volcanic ash, ocean spray, and the radioactive decay of radon gas. Natural nanomaterials can also be formed through weathering processes of metal- or anion-containing rocks, as well as at acid mine drainage sites.
Gallery of natural nanomaterials
Types
Nano-materials are often categorized as to how many of their dimensions fall in the nanoscale. A nanoparticle is defined a nano-object with all three external dimensions in the nanoscale, whose longest and the shortest axes do not differ significantly. A nanofiber has two external dimensions in the nanoscale, with nanotubes being hollow nanofibers and nanorods being solid nanofibers. A nanoplate/nanosheet has one external dimension in the nanoscale, and if the two larger dimensions are significantly different it is called a nanoribbon. For nanofibers and nanoplates, the other dimensions may or may not be in the nanoscale, but must be significantly larger. In all of these cases, a significant difference is noted to typically be at least a factor of 3.
Nanostructured materials are often categorized by what phases of matter they contain. A nanocomposite is a solid containing at least one physically or chemically distinct region or collection of regions, having at least one dimension in the nanoscale. A nanofoam has a liquid or solid matrix, filled with a gaseous phase, where one of the two phases has dimensions on the nanoscale. A nanoporous material is a solid material containing nanopores, voids in the form of open or closed pores of sub-micron lengthscales. A nanocrystalline material has a significant fraction of crystal grains in the nanoscale.
Nanoporous materials
The term nanoporous materials contain subsets of microporous and mesoporous materials. Microporous materials are porous materials with a mean pore size smaller than 2 nm, while mesoporous materials are those with pores sizes in the region 2–50 nm. Microporous materials exhibit pore sizes with comparable length-scale to small molecules. For this reason such materials may serve valuable applications including separation membranes. Mesoporous materials are interesting towards applications that require high specific surface areas, while enabling penetration for molecules that may be too large to enter the pores of a microporous material. In some sources, nanoporous materials and nanofoam are sometimes considered nanostructures but not nanomaterials because only the voids and not the materials themselves are nanoscale. Although the ISO definition only considers round nano-objects to be nanoparticles, other sources use the term nanoparticle for all shapes.
Nanoparticles
Nanoparticles have all three dimensions on the nanoscale. Nanoparticles can also be embedded in a bulk solid to form a nanocomposite.
Fullerenes
The fullerenes are a class of allotropes of carbon which conceptually are graphene sheets rolled into tubes or spheres. These include the carbon nanotubes (or silicon nanotubes) which are of interest both because of their mechanical strength and also because of their electrical properties.
The first fullerene molecule to be discovered, and the family's namesake, buckminsterfullerene (C60), was prepared in 1985 by Richard Smalley, Robert Curl, James Heath, Sean O'Brien, and Harold Kroto at Rice University. The name was a homage to Buckminster Fuller, whose geodesic domes it resembles. Fullerenes have since been found to occur in nature. More recently, fullerenes have been detected in outer space.
For the past decade, the chemical and physical properties of fullerenes have been a hot topic in the field of research and development, and are likely to continue to be for a long time. In April 2003, fullerenes were under study for potential medicinal use: binding specific antibiotics to the structure of resistant bacteria and even targeting certain types of cancer cells such as melanoma. The October 2005 issue of Chemistry and Biology contains an article describing the use of fullerenes as light-activated antimicrobial agents. In the field of nanotechnology, heat resistance and superconductivity are among the properties attracting intense research.
A common method used to produce fullerenes is to send a large current between two nearby graphite electrodes in an inert atmosphere. The resulting carbon plasma arc between the electrodes cools into sooty residue from which many fullerenes can be isolated.
There are many calculations that have been done using ab-initio Quantum Methods applied to fullerenes. By DFT and TDDFT methods one can obtain IR, Raman, and UV spectra. Results of such calculations can be compared with experimental results.
Metal-based nanoparticles
Inorganic nanomaterials (e.g. quantum dots, nanowires, and nanorods), because of their interesting optical and electrical properties, could be used in optoelectronics. Furthermore, the optical and electronic properties of nanomaterials, which depend on their size and shape, can be tuned via synthetic techniques. These materials could be used in organic-material-based optoelectronic devices such as organic solar cells, OLEDs, etc. The operating principles of such devices are governed by photoinduced processes like electron transfer and energy transfer. The performance of the devices depends on the efficiency of the photoinduced process responsible for their functioning. Therefore, a better understanding of those photoinduced processes in organic/inorganic nanomaterial composite systems is necessary in order to use them in optoelectronic devices.
Nanoparticles or nanocrystals made of metals, semiconductors, or oxides are of particular interest for their mechanical, electrical, magnetic, optical, chemical and other properties. Nanoparticles have been used as quantum dots and as chemical catalysts such as nanomaterial-based catalysts. Recently, a range of nanoparticles are extensively investigated for biomedical applications including tissue engineering, drug delivery, biosensor.
Nanoparticles are of great scientific interest as they are effectively a bridge between bulk materials and atomic or molecular structures. A bulk material should have constant physical properties regardless of its size, but at the nano-scale this is often not the case. Size-dependent properties are observed such as quantum confinement in semiconductor particles, surface plasmon resonance in some metal particles, and superparamagnetism in magnetic materials.
Nanoparticles exhibit a number of special properties relative to bulk material. For example, the bending of bulk copper (wire, ribbon, etc.) occurs with movement of copper atoms/clusters at about the 50 nm scale. Copper nanoparticles smaller than 50 nm are considered super hard materials that do not exhibit the same malleability and ductility as bulk copper. The change in properties is not always desirable. Ferroelectric materials smaller than 10 nm can switch their polarization direction using room temperature thermal energy, thus making them useless for memory storage. Suspensions of nanoparticles are possible because the interaction of the particle surface with the solvent is strong enough to overcome differences in density, which usually result in a material either sinking or floating in a liquid. Nanoparticles often have unexpected visual properties because they are small enough to confine their electrons and produce quantum effects. For example, gold nanoparticles appear deep red to black in solution.
The often very high surface-area-to-volume ratio of nanoparticles provides a tremendous driving force for diffusion, especially at elevated temperatures. Sintering is possible at lower temperatures and over shorter durations than for larger particles. This theoretically does not affect the density of the final product, though flow difficulties and the tendency of nanoparticles to agglomerate do complicate matters. The surface effects of nanoparticles also reduce the incipient melting temperature.
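To illustrate why the surface-area-to-volume ratio grows so quickly at small sizes, the following minimal sketch compares spherical particles of different diameters. It is a simple geometric estimate; the diameters chosen are arbitrary examples, not data from the text.

```python
import math

def surface_to_volume_ratio(diameter_nm: float) -> float:
    """Surface-area-to-volume ratio of a sphere, (4*pi*r^2)/((4/3)*pi*r^3) = 3/r, in 1/nm."""
    radius = diameter_nm / 2.0
    return 3.0 / radius

# Compare a 10 nm nanoparticle with a 10 micrometre (10,000 nm) powder grain.
for d in (10.0, 100.0, 10_000.0):
    ratio = surface_to_volume_ratio(d)
    print(f"diameter = {d:>8.0f} nm  ->  S/V = {ratio:.4f} nm^-1")
```

The 10 nm particle has a surface-to-volume ratio a thousand times larger than the 10 µm grain, which is the driving force for diffusion and low-temperature sintering described above.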
One-dimensional nanostructures
The smallest possible crystalline wires with cross-section as small as a single atom can be engineered in cylindrical confinement. Carbon nanotubes, a natural semi-1D nanostructure, can be used as a template for synthesis. Confinement provides mechanical stabilization and prevents linear atomic chains from disintegration; other structures of 1D nanowires are predicted to be mechanically stable even upon isolation from the templates.
Two-dimensional nanostructures
2D materials are crystalline materials consisting of a two-dimensional single layer of atoms. The most important representative, graphene, was discovered in 2004.
Thin films with nanoscale thicknesses are considered nanostructures, but are sometimes not considered nanomaterials because they do not exist separately from the substrate.
Bulk nanostructured materials
Some bulk materials contain features on the nanoscale, including nanocomposites, nanocrystalline materials, nanostructured films, and nanotextured surfaces.
Box-shaped graphene (BSG) nanostructure is an example of a 3D nanomaterial. The BSG nanostructure appeared after mechanical cleavage of pyrolytic graphite. This nanostructure is a multilayer system of parallel hollow nanochannels located along the surface and having a quadrangular cross-section. The thickness of the channel walls is approximately 1 nm, and the typical width of the channel facets is about 25 nm.
Applications
Nanomaterials are used in a variety of manufacturing processes, products, and healthcare applications, including paints, filters, insulation, and lubricant additives. In healthcare, nanozymes are nanomaterials with enzyme-like characteristics. They are an emerging type of artificial enzyme and have been used in a wide range of applications such as biosensing, bioimaging, tumor diagnosis, antibiofouling, and more. High-quality filters may be produced using nanostructures; these filters are capable of removing particulates as small as a virus, as seen in a water filter created by Seldon Technologies. The nanomaterials membrane bioreactor (NMs-MBR), the next generation of the conventional MBR, has recently been proposed for the advanced treatment of wastewater. In the air-purification field, nanotechnology was used to combat the spread of MERS in Saudi Arabian hospitals in 2012. Nanomaterials are being used in modern and human-safe insulation technologies; in the past they were found in asbestos-based insulation. As a lubricant additive, nanomaterials have the ability to reduce friction in moving parts. Worn and corroded parts can also be repaired with self-assembling anisotropic nanoparticles called TriboTEX. Nanomaterials have also been applied in a range of industries and consumer products. Mineral nanoparticles such as titanium oxide have been used to improve UV protection in sunscreen. Phosphorus-, carbon- and nitrogen-doped titanium-oxide nanoparticles are used as an additive to water-based paint for self-cleaning properties. In the sports industry, lighter bats have been produced with carbon nanotubes to improve performance. Another application is in the military, where mobile pigment nanoparticles have been used to create more effective camouflage. Nanomaterials can also be used in three-way-catalyst applications, which have the advantage of controlling the emission of nitrogen oxides (NOx), which are precursors to acid rain and smog. In the core-shell structure, nanomaterials form the shell as the catalyst support to protect noble metals such as palladium and rhodium. The primary function is that the supports can be used for carrying the catalysts' active components, making them highly dispersed, reducing the use of noble metals, enhancing catalyst activity, and potentially improving stability.
Synthesis
The goal of any synthetic method for nanomaterials is to yield a material that exhibits properties that are a result of its characteristic length scale being in the nanometer range (1–100 nm). Accordingly, the synthetic method should exhibit control of size in this range so that one property or another can be attained. Often the methods are divided into two main types: "bottom up" and "top down".
Bottom-up methods
Bottom-up methods involve the assembly of atoms or molecules into nanostructured arrays. In these methods the raw material sources can be in the form of gases, liquids, or solids. The latter require some sort of disassembly prior to their incorporation into a nanostructure. Bottom-up methods generally fall into two categories: chaotic and controlled.
Chaotic processes involve elevating the constituent atoms or molecules to a chaotic state and then suddenly changing the conditions so as to make that state unstable. Through the clever manipulation of any number of parameters, products form largely as a result of the ensuing kinetics. The collapse from the chaotic state can be difficult or impossible to control and so ensemble statistics often govern the resulting size distribution and average size. Accordingly, nanoparticle formation is controlled through manipulation of the end state of the products. Examples of chaotic processes are laser ablation, exploding wire, arc, flame pyrolysis, combustion, and precipitation synthesis techniques.
Controlled processes involve the controlled delivery of the constituent atoms or molecules to the site(s) of nanoparticle formation such that the nanoparticle can grow to a prescribed size in a controlled manner. Generally the state of the constituent atoms or molecules is never far from that needed for nanoparticle formation. Accordingly, nanoparticle formation is controlled through the control of the state of the reactants. Examples of controlled processes are self-limiting growth solution, self-limited chemical vapor deposition, shaped-pulse femtosecond laser techniques, plant and microbial approaches, and molecular beam epitaxy.
Top-down methods
Top-down methods adopt some 'force' (e.g. mechanical force, laser) to break bulk materials into nanoparticles. A popular method that mechanically breaks bulk materials apart into nanomaterials is 'ball milling'. Nanoparticles can also be made by laser ablation, which applies short-pulse lasers (e.g. femtosecond lasers) to ablate a solid target.
Characterization
Novel effects can occur in materials when structures are formed with sizes comparable to any one of many possible length scales, such as the de Broglie wavelength of electrons or the optical wavelengths of high-energy photons. In these cases quantum mechanical effects can dominate material properties. One example is quantum confinement, where the electronic properties of solids are altered with great reductions in particle size. The optical properties of nanoparticles, e.g. fluorescence, also become a function of the particle diameter. This effect does not come into play by going from macroscopic to micrometer dimensions, but becomes pronounced when the nanometer scale is reached.
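As a rough illustration of this size dependence, the sketch below estimates the confinement-induced increase of the band gap using only the simplest particle-in-a-sphere term, ΔE ≈ ħ²π²/(2m*R²). It ignores Coulomb and surface terms, and the bulk gap and effective mass used here are illustrative placeholder values for a generic semiconductor, not measured data.

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s
M_E = 9.109_383_7015e-31  # electron rest mass, kg
EV = 1.602_176_634e-19    # J per eV

def confinement_shift_eV(radius_nm: float, effective_mass_ratio: float) -> float:
    """Lowest particle-in-a-sphere confinement energy, hbar^2 * pi^2 / (2 m* R^2), in eV."""
    r = radius_nm * 1e-9
    m_eff = effective_mass_ratio * M_E
    return (HBAR**2 * math.pi**2) / (2.0 * m_eff * r**2) / EV

# Illustrative values only: a bulk gap of 1.5 eV and a reduced effective mass of 0.1 m_e.
bulk_gap_eV = 1.5
for radius in (1.0, 2.0, 5.0, 10.0):
    shift = confinement_shift_eV(radius, effective_mass_ratio=0.1)
    print(f"R = {radius:4.1f} nm : shift = {shift:5.2f} eV, apparent gap = {bulk_gap_eV + shift:5.2f} eV")
```

The 1/R² scaling is why smaller particles absorb and fluoresce at shorter wavelengths, while at micrometer sizes the shift becomes negligible, consistent with the statement above.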
In addition to optical and electronic properties, the novel mechanical properties of many nanomaterials are the subject of nanomechanics research. When added to a bulk material, nanoparticles can strongly influence the mechanical properties of the material, such as the stiffness or elasticity. For example, traditional polymers can be reinforced by nanoparticles (such as carbon nanotubes) resulting in novel materials which can be used as lightweight replacements for metals. Such composite materials may enable a weight reduction accompanied by an increase in stability and improved functionality.
Finally, nanostructured materials with small particle size, such as zeolites and asbestos, are used as catalysts in a wide range of critical industrial chemical reactions. The further development of such catalysts can form the basis of more efficient, environmentally friendly chemical processes.
The first observations and size measurements of nanoparticles were made during the first decade of the 20th century. Zsigmondy made detailed studies of gold sols and other nanomaterials with sizes down to 10 nm and less. He published a book in 1914. He used an ultramicroscope that employs a dark-field method for seeing particles with sizes much less than the wavelength of light.
There are traditional techniques developed during the 20th century in interface and colloid science for characterizing nanomaterials. These are widely used for first generation passive nanomaterials specified in the next section.
These methods include several different techniques for characterizing particle size distribution. This characterization is imperative because many materials that are expected to be nano-sized are actually aggregated in solutions. Some of these methods are based on light scattering. Others apply ultrasound, such as ultrasound attenuation spectroscopy for testing concentrated nano-dispersions and microemulsions.
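For example, dynamic light scattering infers particle size from Brownian motion: a measured diffusion coefficient is converted to a hydrodynamic radius with the Stokes-Einstein relation R_h = k_B·T/(6π·η·D). The sketch below applies that relation; the diffusion coefficient used is an invented illustrative value, not a measurement.

```python
import math

K_B = 1.380_649e-23   # Boltzmann constant, J/K

def hydrodynamic_radius_nm(diffusion_m2_per_s: float, temperature_K: float = 298.15,
                           viscosity_Pa_s: float = 0.89e-3) -> float:
    """Stokes-Einstein relation: R_h = k_B * T / (6 * pi * eta * D), returned in nanometres."""
    r_m = K_B * temperature_K / (6.0 * math.pi * viscosity_Pa_s * diffusion_m2_per_s)
    return r_m * 1e9

# Illustrative diffusion coefficient of ~4.9e-12 m^2/s in water at 25 C gives roughly a 50 nm radius.
print(f"R_h = {hydrodynamic_radius_nm(4.9e-12):.1f} nm")
```

Note that for aggregated samples the result is the size of the aggregate, not of the primary particle, which is exactly why the characterization described above is so important.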
There is also a group of traditional techniques for characterizing the surface charge or zeta potential of nanoparticles in solutions. This information is required for proper system stabilization, preventing its aggregation or flocculation. These methods include microelectrophoresis, electrophoretic light scattering, and electroacoustics. The last of these, for instance the colloid vibration current method, is suitable for characterizing concentrated systems.
Mechanical properties
Ongoing research has shown that mechanical properties can vary significantly in nanomaterials compared to bulk materials. Nanomaterials have distinctive mechanical properties due to the volume, surface, and quantum effects of nanoparticles. This is observed when nanoparticles are added to a common bulk material: the nanomaterial refines the grain and forms intergranular and intragranular structures which improve the grain boundaries and therefore the mechanical properties of the material. Grain boundary refinement provides strengthening by increasing the stress required to cause intergranular or transgranular fractures. A common example where this can be observed is the addition of nano-silica to cement, which improves the tensile strength, compressive strength, and bending strength by the mechanisms just mentioned. The understanding of these properties will enhance the use of nanoparticles in novel applications in various fields such as surface engineering, tribology, nanomanufacturing, and nanofabrication.
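Grain-boundary strengthening of this kind is commonly described by the Hall-Petch relation, σ_y = σ_0 + k·d^(-1/2). The sketch below applies it to illustrative grain sizes; the constants σ_0 and k are placeholder values chosen only to show the trend, not measured data for any particular material, and the relation is known to break down at very small (roughly sub-10 nm) grain sizes.

```python
import math

def hall_petch_yield_stress(sigma0_MPa: float, k_MPa_sqrt_m: float, grain_size_m: float) -> float:
    """Hall-Petch estimate of the yield stress: sigma_y = sigma_0 + k / sqrt(d)."""
    return sigma0_MPa + k_MPa_sqrt_m / math.sqrt(grain_size_m)

# Placeholder material constants, for illustration only.
sigma0 = 50.0   # MPa, friction stress
k = 0.15        # MPa*sqrt(m), Hall-Petch coefficient

for d_um in (100.0, 10.0, 1.0, 0.1):   # grain size in micrometres
    d_m = d_um * 1e-6
    print(f"grain size = {d_um:6.1f} um -> sigma_y = {hall_petch_yield_stress(sigma0, k, d_m):7.1f} MPa")
```

The estimated yield stress rises steeply as the grain size is refined toward the nanoscale, which is the strengthening trend described in the paragraph above.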
Techniques used:
Steinitz in 1943 used the micro-indentation technique to test the hardness of microparticles, and nanoindentation has since been employed to measure the elastic properties of particles at about the 5-micron level. These protocols are frequently used to calculate the mechanical characteristics of nanoparticles via atomic force microscopy (AFM) techniques. To measure the elastic modulus, indentation data are obtained by converting AFM force-displacement curves into force-indentation curves. Hooke's law is used to determine the cantilever deformation and the depth of the tip, and the resulting load equation can be written as:
P = k(δc - δc0)
δc : cantilever deformation
δc0 : deflection offset
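A minimal sketch of the conversion described above: Hooke's law gives the load P = k(δc - δc0) from the measured cantilever deflection, and the indentation depth is the piezo displacement minus the cantilever deflection. The array values and spring constant below are synthetic placeholders, not real measurement data.

```python
import numpy as np

def force_indentation(z_nm, deflection_nm, k_N_per_m, deflection_offset_nm=0.0):
    """Convert an AFM force-displacement curve into a force-indentation curve.

    z_nm                  : piezo (sample) displacement
    deflection_nm         : measured cantilever deflection (delta_c)
    k_N_per_m             : cantilever spring constant
    deflection_offset_nm  : zero-force deflection offset (delta_c0)
    """
    z = np.asarray(z_nm, dtype=float)
    d = np.asarray(deflection_nm, dtype=float)
    load_nN = k_N_per_m * (d - deflection_offset_nm)   # Hooke's law; (N/m) * nm = nN numerically
    indentation_nm = z - (d - deflection_offset_nm)    # depth of the tip into the sample
    return indentation_nm, load_nN

# Synthetic example data (placeholders only).
z = np.linspace(0.0, 50.0, 6)        # nm of piezo travel
deflection = 0.4 * z                  # nm of cantilever bending
depth, load = force_indentation(z, deflection, k_N_per_m=0.5)
for h, p in zip(depth, load):
    print(f"indentation = {h:5.1f} nm, load = {p:5.2f} nN")
```

Fitting the resulting force-indentation curve with a contact model then yields the elastic modulus of the particle.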
AFM allows us to obtain a high-resolution image of multiple types of surfaces, while the tip of the cantilever can be used to obtain information about mechanical properties. Computer simulations are also being progressively used to test theories and complement experimental studies. The most used computational method is molecular dynamics simulation, which applies Newton's equations of motion to the atoms or molecules in the system. Other techniques such as the direct probe method are used to determine the adhesive properties of nanomaterials. Both the techniques and the simulations are coupled with transmission electron microscopy (TEM) and AFM to provide results.
Mechanical properties of common nanomaterial classes:
Crystalline metal nanomaterials: Dislocations are one of the major contributors to the elastic properties of nanomaterials, as in bulk crystalline materials. Despite the traditional view that there are no dislocations in nanomaterials, experimental work by Ramos has shown that the hardness of gold nanoparticles is much higher than that of their bulk counterparts, as stacking faults and dislocations form that activate multiple strengthening mechanisms in the material. Through these experiments, further research using nanoindentation techniques has shown that material strength (compressive stress) increases under compression with decreasing particle size, because of nucleating dislocations. These dislocations have been observed using TEM techniques coupled with nanoindentation. The strength and hardness of silicon nanoparticles are about four times those of the bulk material. The resistance to applied pressure can be attributed to the line defects inside the particles, as well as to dislocations that strengthen the mechanical properties of the nanomaterial. Furthermore, the addition of nanoparticles strengthens a matrix because the pinning of particles inhibits grain growth. This refines the grain and hence improves the mechanical properties. However, not all additions of nanomaterials lead to an improvement in properties (for example, nano-Cu); this is attributed to the inherent properties of the added material being weaker than those of the matrix.
Nonmetallic nanoparticles and nanomaterials: Size-dependent behavior of mechanical properties is still not clear in the case of polymer nanomaterials; however, in one study Lahouij found that the compressive moduli of polystyrene nanoparticles were less than those of the bulk counterpart. This can be associated with the functional groups being hydrated. Furthermore, nonmetallic nanomaterials can lead to agglomerates forming inside the matrix they are added to and hence decrease the mechanical properties by causing fracture under even low mechanical loads, as with the addition of CNTs. The agglomerates will act as slip planes as well as planes in which cracks can easily propagate. However, most organic nanomaterials are flexible, and mechanical properties such as hardness are not dominant for them.
Nanowires and nanotubes: The elastic moduli of some nanowires, namely lead and silver, decrease with increasing diameter. This has been associated with surface stress, the oxidation layer, and surface roughness. However, the elastic behavior of ZnO nanowires is not affected by surface effects, though their fracture properties are. So, the size dependence generally depends on the material behavior and its bonding as well.
The reason why the mechanical properties of nanomaterials are still a hot topic for research is that measuring the mechanical properties of individual nanoparticles is a complicated process involving multiple control factors. Nonetheless, atomic force microscopy has been widely used to measure the mechanical properties of nanomaterials.
Adhesion and friction of nanoparticles
When considering the application of a material, adhesion and friction play a critical role in determining the outcome. Therefore, it is important to understand how these properties are also affected by the size of a material. Again, AFM, along with the colloidal probe technique, is the method most used to measure these properties, to determine the adhesive strength of nanoparticles to a solid surface, and to probe related chemical properties. Furthermore, the forces responsible for the adhesive properties of nanomaterials include electrostatic forces, van der Waals forces, capillary forces, solvation forces, structural forces, etc. It has been found that the addition of nanomaterials to bulk materials substantially increases their adhesive capabilities by increasing their strength through various bonding mechanisms. As a nanomaterial's dimensions approach zero, the fraction of the particle's atoms that lie at the surface increases.
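As an order-of-magnitude illustration of how strongly van der Waals adhesion scales with particle size, the sketch below uses the standard sphere-on-flat Hamaker approximation F ≈ A·R/(6·D²). The Hamaker constant and separation used are generic illustrative values, not measurements for any specific material pair.

```python
def vdw_adhesion_force_nN(radius_nm: float,
                          hamaker_J: float = 1.0e-19,
                          separation_nm: float = 0.4) -> float:
    """Sphere-on-flat van der Waals force, F = A*R / (6*D^2), returned in nanonewtons."""
    R = radius_nm * 1e-9
    D = separation_nm * 1e-9
    force_N = hamaker_J * R / (6.0 * D**2)
    return force_N * 1e9

for r in (5.0, 50.0, 500.0):   # particle radii in nm
    print(f"R = {r:6.1f} nm -> F_vdW = {vdw_adhesion_force_nN(r):8.2f} nN")
```

Because this force scales only linearly with R while particle weight scales with R³, adhesion dominates gravity at the nanoscale, which is one reason nanoparticles stick to surfaces and agglomerate so readily.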
Along with surface effects, the movement of nanoparticles also plays a role in dictating their mechanical properties, such as shearing capabilities. The movement of particles can be observed under TEM. For example, the dynamic contact behavior of MoS2 nanoparticles was directly observed in situ, which led to the conclusion that the fullerenes can shear via rolling or sliding. However, observing these properties is again a very complicated process due to multiple contributing factors.
Applications specific to Mechanical Properties:
Lubrication
Nano-manufacturing
Coatings
Uniformity
The chemical processing and synthesis of high-performance technological components for the private, industrial and military sectors requires the use of high-purity ceramics, polymers, glass-ceramics, and composite materials. In condensed bodies formed from fine powders, the irregular sizes and shapes of nanoparticles in a typical powder often lead to non-uniform packing morphologies that result in packing density variations in the powder compact.
Uncontrolled agglomeration of powders due to attractive van der Waals forces can also give rise to microstructural inhomogeneities. Differential stresses that develop as a result of non-uniform drying shrinkage are directly related to the rate at which the solvent can be removed, and thus highly dependent upon the distribution of porosity. Such stresses have been associated with a plastic-to-brittle transition in consolidated bodies, and can lead to crack propagation in the unfired body if not relieved.
In addition, any fluctuations in packing density in the compact as it is prepared for the kiln are often amplified during the sintering process, yielding inhomogeneous densification. Some pores and other structural defects associated with density variations have been shown to play a detrimental role in the sintering process by growing and thus limiting end-point densities. Differential stresses arising from inhomogeneous densification have also been shown to result in the propagation of internal cracks, thus becoming the strength-controlling flaws.
It would therefore appear desirable to process a material in such a way that it is physically uniform with regard to the distribution of components and porosity, rather than using particle size distributions which will maximize the green density. The containment of a uniformly dispersed assembly of strongly interacting particles in suspension requires total control over particle-particle interactions. A number of dispersants such as ammonium citrate (aqueous) and imidazoline or oleyl alcohol (nonaqueous) are promising solutions as possible additives for enhanced dispersion and deagglomeration. Monodisperse nanoparticles and colloids provide this potential.
Monodisperse powders of colloidal silica, for example, may therefore be stabilized sufficiently to ensure a high degree of order in the colloidal crystal or polycrystalline colloidal solid which results from aggregation. The degree of order appears to be limited by the time and space allowed for longer-range correlations to be established. Such defective polycrystalline colloidal structures would appear to be the basic elements of sub-micrometer colloidal materials science, and, therefore, provide the first step in developing a more rigorous understanding of the mechanisms involved in microstructural evolution in high performance materials and components.
Nanomaterials in articles, patents, and products
The quantitative analysis of nanomaterials showed that nanoparticles, nanotubes, nanocrystalline materials, nanocomposites, and graphene had been mentioned in 400,000, 181,000, 144,000, 140,000, and 119,000 ISI-indexed articles, respectively, by September 2018. As far as patents are concerned, nanoparticles, nanotubes, nanocomposites, graphene, and nanowires have played a role in 45,600, 32,100, 12,700, 12,500, and 11,800 patents, respectively. Monitoring approximately 7,000 commercial nano-based products available on global markets revealed that the properties of around 2,330 products have been enabled or enhanced by nanoparticles. Liposomes, nanofibers, nanocolloids, and aerogels were also among the most common nanomaterials in consumer products.
The European Union Observatory for Nanomaterials (EUON) has produced a database (NanoData) that provides information on specific patents, products, and research publications on nanomaterials.
Health and safety
World Health Organization guidelines
The World Health Organization (WHO) published a guideline on protecting workers from potential risk of manufactured nanomaterials at the end of 2017. WHO used a precautionary approach as one of its guiding principles. This means that exposure has to be reduced, despite uncertainty about the adverse health effects, when there are reasonable indications to do so. This is highlighted by recent scientific studies that demonstrate a capability of nanoparticles to cross cell barriers and interact with cellular structures. In addition, the hierarchy of controls was an important guiding principle. This means that when there is a choice between control measures, those measures that are closer to the root of the problem should always be preferred over measures that put a greater burden on workers, such as the use of personal protective equipment (PPE). WHO commissioned systematic reviews for all important issues to assess the current state of the science and to inform the recommendations according to the process set out in the WHO Handbook for guideline development. The recommendations were rated as "strong" or "conditional" depending on the quality of the scientific evidence, values and preferences, and costs related to the recommendation.
The WHO guidelines contain the following recommendations for safe handling of manufactured nanomaterials (MNMs)
A. Assess health hazards of MNMs
WHO recommends assigning hazard classes to all MNMs according to the Globally Harmonized System (GHS) of Classification and Labelling of Chemicals for use in safety data sheets. For a limited number of MNMs this information is made available in the guidelines (strong recommendation, moderate-quality evidence).
WHO recommends updating safety data sheets with MNM-specific hazard information or indicating which toxicological end-points did not have adequate testing available (strong recommendation, moderate-quality evidence).
For the respirable fibres and granular biopersistent particles' groups, the GDG suggests using the available classification of MNMs for provisional classification of nanomaterials of the same group (conditional recommendation, low-quality evidence).
B. Assess exposure to MNMs
WHO suggests assessing workers' exposure in workplaces with methods similar to those used for the proposed specific occupational exposure limit (OEL) value of the MNM (conditional recommendation, low-quality evidence).
Because there are no specific regulatory OEL values for MNMs in workplaces, WHO suggests assessing whether workplace exposure exceeds a proposed OEL value for the MNM. A list of proposed OEL values is provided in an annex of the guidelines. The chosen OEL should be at least as protective as a legally mandated OEL for the bulk form of the material (conditional recommendation, low-quality evidence).
If specific OELs for MNMs are not available in workplaces, WHO suggests a step-wise approach for inhalation exposure with, first an assessment of the potential for exposure; second, conducting basic exposure assessment and third, conducting a comprehensive exposure assessment such as those proposed by the Organisation for Economic Cooperation and Development (OECD) or Comité Européen de Normalisation (the European Committee for Standardization, CEN) (conditional recommendation, moderate quality evidence).
For dermal exposure assessment, WHO found that there was insufficient evidence to recommend one method of dermal exposure assessment over another.
C. Control exposure to MNMs
Based on a precautionary approach, WHO recommends focusing control of exposure on preventing inhalation exposure with the aim of reducing it as much as possible (strong recommendation, moderate-quality evidence).
WHO recommends reduction of exposures to a range of MNMs that have been consistently measured in workplaces especially during cleaning and maintenance, collecting material from reaction vessels and feeding MNMs into the production process. In the absence of toxicological information, WHO recommends implementing the highest level of controls to prevent workers from any exposure. When more information is available, WHO recommends taking a more tailored approach (strong recommendation, moderate-quality evidence).
WHO recommends taking control measures based on the principle of hierarchy of controls, meaning that the first control measure should be to eliminate the source of exposure before implementing control measures that are more dependent on worker involvement, with PPE being used only as a last resort. According to this principle, engineering controls should be used when there is a high level of inhalation exposure or when there is no, or very little, toxicological information available. In the absence of appropriate engineering controls PPE should be used, especially respiratory protection, as part of a respiratory protection programme that includes fit-testing (strong recommendation, moderate-quality evidence).
WHO suggests preventing dermal exposure by occupational hygiene measures such as surface cleaning, and the use of appropriate gloves (conditional recommendation, low quality evidence).
When assessment and measurement by a workplace safety expert is not available, WHO suggests using control banding for nanomaterials to select exposure control measures in the workplace. Owing to a lack of studies, WHO cannot recommend one method of control banding over another (conditional recommendation, very low-quality evidence).
For health surveillance WHO could not make a recommendation for targeted MNM-specific health surveillance programmes over existing health surveillance programmes that are already in use owing to the lack of evidence. WHO considers training of workers and worker involvement in health and safety issues to be best practice but could not recommend one form of training of workers over another, or one form of worker involvement over another, owing to the lack of studies available. It is expected that there will be considerable progress in validated measurement methods and risk assessment and WHO expects to update these guidelines in five years' time, in 2022.
Other guidance
Because nanotechnology is a recent development, the health and safety effects of exposures to nanomaterials, and what levels of exposure may be acceptable, are subjects of ongoing research. Of the possible hazards, inhalation exposure appears to present the most concern. Animal studies indicate that carbon nanotubes and carbon nanofibers can cause pulmonary effects including inflammation, granulomas, and pulmonary fibrosis, which were of similar or greater potency when compared with other known fibrogenic materials such as silica, asbestos, and ultrafine carbon black. Acute inhalation exposure of healthy animals to biodegradable inorganic nanomaterials has not demonstrated significant toxicity effects. Although the extent to which animal data may predict clinically significant lung effects in workers is not known, the toxicity seen in the short-term animal studies indicates a need for protective action for workers exposed to these nanomaterials, although no reports of actual adverse health effects in workers using or producing these nanomaterials were known as of 2013. Additional concerns include skin contact and ingestion exposure, and dust explosion hazards.
Elimination and substitution are the most desirable approaches to hazard control. While the nanomaterials themselves often cannot be eliminated or substituted with conventional materials, it may be possible to choose properties of the nanoparticle such as size, shape, functionalization, surface charge, solubility, agglomeration, and aggregation state to improve their toxicological properties while retaining the desired functionality. Handling procedures can also be improved, for example, using a nanomaterial slurry or suspension in a liquid solvent instead of a dry powder will reduce dust exposure. Engineering controls are physical changes to the workplace that isolate workers from hazards, mainly ventilation systems such as fume hoods, gloveboxes, biosafety cabinets, and vented balance enclosures. Administrative controls are changes to workers' behavior to mitigate a hazard, including training on best practices for safe handling, storage, and disposal of nanomaterials, proper awareness of hazards through labeling and warning signage, and encouraging a general safety culture. Personal protective equipment must be worn on the worker's body and is the least desirable option for controlling hazards. Personal protective equipment normally used for typical chemicals are also appropriate for nanomaterials, including long pants, long-sleeve shirts, and closed-toed shoes, and the use of safety gloves, goggles, and impervious laboratory coats. In some circumstances respirators may be used.
Exposure assessment is a set of methods used to monitor contaminant release and exposures to workers. These methods include personal sampling, where samplers are located in the personal breathing zone of the worker, often attached to a shirt collar to be as close to the nose and mouth as possible; and area/background sampling, where they are placed at static locations. The assessment should use both particle counters, which monitor the real-time quantity of nanomaterials and other background particles; and filter-based samples, which can be used to identify the nanomaterial, usually using electron microscopy and elemental analysis. As of 2016, quantitative occupational exposure limits have not been determined for most nanomaterials. The U.S. National Institute for Occupational Safety and Health has determined non-regulatory recommended exposure limits for carbon nanotubes, carbon nanofibers, and ultrafine titanium dioxide. Agencies and organizations from other countries, including the British Standards Institute and the Institute for Occupational Safety and Health in Germany, have established OELs for some nanomaterials, and some companies have supplied OELs for their products.
Nanoscale diagnostics
Nanotechnology has been making headlines in the medical field, being responsible for advances in biomedical imaging. The unique optical, magnetic and chemical properties of materials at the nanoscale have allowed the development of imaging probes with multi-functionality such as better contrast enhancement, better spatial information, controlled biodistribution, and multi-modal imaging across various scanning devices. These developments have brought advantages such as the ability to detect the location of tumors and inflammation, accurate assessment of disease progression, and personalized medicine.
Silica nanoparticles- Silica nanoparticles can be classified into solid, non-porous, and mesoporous. They have a large surface area, a hydrophilic surface, and chemical and physical stability. Silica nanoparticles are made using the Stöber process, which is the hydrolysis of silyl ethers such as tetraethyl silicate into silanols (Si-OH) using ammonia in a mixture of water and alcohol, followed by the condensation of silanols into 50–2000 nm silica particles. The size of the particles can be controlled by varying the concentration of silyl ether and alcohol, or by the microemulsion method. Mesoporous silica nanoparticles are synthesized by the sol-gel process. They have pores that range in diameter from 2 nm to 50 nm. They are synthesized in a water-based solution in the presence of a base catalyst and a pore-forming agent known as a surfactant. Surfactants are molecules that have a hydrophobic tail (alkyl chain) and a hydrophilic head (a charged group, such as a quaternary amine, for example). As these surfactants are added to a water-based solution, they coordinate to form micelles with increasing concentration in order to stabilize the hydrophobic tails. Varying the pH of the solution and the composition of the solvents, and the addition of certain swelling agents, can control the pore size. Their hydrophilic surface is what makes silica nanoparticles so important and allows them to carry out functions such as drug and gene delivery, bioimaging, and therapy. In order for this application to be successful, assorted surface functional groups are necessary and can be added either by the co-condensation process during preparation or by post-synthesis surface modification. The high surface area of silica nanoparticles allows them to carry much larger amounts of the desired drug than conventional methods like polymers and liposomes. It allows for site-specific targeting, especially in the treatment of cancer. Once the particles have reached their destination, they can act as a reporter, release a compound, or be remotely heated to damage biological structures in close proximity. Targeting is typically accomplished by modifying the surface of the nanoparticle with a chemical or biological compound. They accumulate at tumor sites through enhanced permeability and retention (EPR), where the tumor vessels accelerate the delivery of the nanoparticles directly into the tumor. The porous shell of the silica allows control over the rate at which the drug diffuses out of the nanoparticle. The shell can be modified to have an affinity for the drug, or even to be triggered by pH, heat, light, salts, or other signaling molecules. Silica nanoparticles are also used in bioimaging because they can accommodate fluorescent/MRI/PET/SPECT contrast agents and drug/DNA molecules on their adaptable surface and pores. This is made possible by using the silica nanoparticle as a vector for the expression of fluorescent proteins. Several different types of fluorescent probes, such as cyanine dyes, methyl viologen, or semiconductor quantum dots, can be conjugated to silica nanoparticles and delivered into specific cells or injected in vivo. The carrier molecule RGD peptide has been very useful for targeted in vivo imaging.
Topically applied surface-enhanced resonance Raman ratiometric spectroscopy (TAS3RS)- TAS3RS is another technique that is starting to make advances in the medical field. It is an imaging technique that uses folate receptors (FR) to detect tumor lesions as small as 370 micrometers. Folate receptors are membrane-bound surface proteins that bind folates and folate conjugates with high affinity. FR is frequently overexpressed in a number of human malignancies including cancer of the ovary, lung, kidney, breast, bladder, brain, and endometrium. Raman imaging is a type of spectroscopy that is used in chemistry to provide a structural fingerprint by which molecules can be identified. It relies upon inelastic scattering of photons, which results in ultra-high sensitivity. In one study, two different surface-enhanced resonance Raman scattering (SERRS) probes were synthesized. One of the SERRS probes was a "targeted nanoprobe functionalized with an anti-folate-receptor antibody (αFR-Ab) via a PEG-maleimide-succinimide and using the infrared dye IR780 as the Raman reporter, henceforth referred to as αFR-NP, and a nontargeted probe (nt-NP) coated with PEG5000-maleimide and featuring the IR140 infrared dye as the Raman reporter." These two different mixtures were injected into tumor-bearing mice and healthy control mice. The mice were imaged with a bioluminescence (BLI) signal, which produces light energy within an organism's body. They were also scanned with the Raman microscope in order to see the correlation between the TAS3RS and the BLI map. TAS3RS did not show anything in the healthy mice, but it was able to locate the tumor lesions in the tumor-bearing mice and to create a TAS3RS map that could be used as guidance during surgery. TAS3RS shows promise in combating ovarian and peritoneal cancer, as it allows early detection with high accuracy. This technique can be administered locally, which is an advantage as it does not have to enter the bloodstream and therefore bypasses the toxicity concerns of circulating nanoprobes. This technique is also more photostable than fluorochromes, because the SERRS signal cannot arise from biomolecules and therefore there would not be any false positives in TAS3RS as there are in fluorescence imaging.
See also
Artificial enzyme
Directional freezing
List of software for nanostructures modeling
Nanostructure
Nanotopography
Nanozymes
References
External links
European Union Observatory for Nanomaterials (EUON)
Acquisition, evaluation and public orientated presentation of societal relevant data and findings for nanomaterials (DaNa)
Safety of Manufactured Nanomaterials: OECD Environment Directorate
Assessing health risks of nanomaterials summary by GreenFacts of the European Commission SCENIHR assessment
Textiles Nanotechnology Laboratory at Cornell University
Nano Structured Material
Online course MSE 376-Nanomaterials by Mark C. Hersam (2006)
Nanomaterials: Quantum Dots, Nanowires and Nanotubes online presentation by Dr Sands
Lecture Videos for the Second International Symposium on the Risk Assessment of Manufactured Nanomaterials, NEDO 2012
Nader Engheta: Wave interaction with metamaterials, SPIE Newsroom 2016
Managing nanomaterials in the Workplace by the European Agency for Safety and Health at Work. | Nanomaterials | [
"Materials_science"
] | 9,874 | [
"Nanotechnology",
"Nanomaterials"
] |
868,153 | https://en.wikipedia.org/wiki/Quasiperiodic%20tiling | A quasiperiodic tiling is a tiling of the plane that exhibits local periodicity under some transformations: every finite subset of its tiles reappears infinitely often throughout the tiling, but there is no nontrivial way of superimposing the whole tiling onto itself so that all tiles overlap perfectly.
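A one-dimensional analogue can make this defining property concrete: the Fibonacci word, generated by the substitution rules L → LS, S → L, never becomes periodic, yet every finite block of it recurs infinitely often. The sketch below generates a prefix of the sequence and checks that a sample block reappears; it is an illustration of the recurrence property on a 1D analogue, not a statement about plane tilings themselves.

```python
def fibonacci_word(iterations: int) -> str:
    """Apply the substitution L -> LS, S -> L repeatedly, starting from 'L'."""
    word = "L"
    for _ in range(iterations):
        word = "".join("LS" if c == "L" else "L" for c in word)
    return word

w = fibonacci_word(12)
block = w[10:18]                       # an arbitrary finite block of the sequence
occurrences = [i for i in range(len(w) - len(block) + 1) if w.startswith(block, i)]
print(f"length of word: {len(w)}, block {block!r} occurs {len(occurrences)} times")
```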
See also
Aperiodic tiling and Penrose tiling for a mathematical viewpoint.
Quasicrystal for a physics viewpoint.
References
Tessellation | Quasiperiodic tiling | [
"Physics",
"Mathematics"
] | 97 | [
"Tessellation",
"Planes (geometry)",
"Euclidean plane geometry",
"Symmetry"
] |
868,790 | https://en.wikipedia.org/wiki/Chrysiogenaceae | Chrysiogenaceae is a family of bacteria.
Phylogeny
The phylogeny is based on 16S rRNA based LTP_08_2023 and 120 marker proteins based GTDB 08-RS214.
See also
List of bacterial orders
List of bacteria genera
References
Bacteria families
Chrysiogenota | Chrysiogenaceae | [
"Biology"
] | 68 | [
"Bacteria stubs",
"Bacteria"
] |
868,936 | https://en.wikipedia.org/wiki/Paris%20Gun | The Paris Gun () was a type of German long-range siege gun, several of which were used to bombard Paris during World War I. They were in service from March to August 1918. When the guns were first employed, Parisians believed they had been bombed by a high-altitude Zeppelin, as the sound of neither an airplane nor a gun could be heard. They were the largest pieces of artillery used during the war by barrel length, and qualify under the (later) formal definition of large-calibre artillery.
Also called the "Kaiser Wilhelm Geschütz" ("Kaiser Wilhelm Gun"), they were often confused with Big Bertha, the German howitzer used against Belgian forts in the Battle of Liège in 1914; indeed, the French called them by this name as well. They were also confused with the smaller "Langer Max" (Long Max) cannon, from which they were derived. Although the famous Krupp-family artillery makers produced all these guns, the resemblance ended there.
As military weapons, the Paris Guns were not a great success: the payload was small, the barrel required frequent replacement, and the guns' accuracy was good enough for only city-sized targets. The German objective was to build a psychological weapon to attack the morale of the Parisians, not to destroy the city itself.
Description
Due to the weapon's apparent total destruction by the Germans in the face of the final Entente offensives, its capabilities are not known with full certainty. Figures stated for the weapon's size, range, and performance varied widely depending on the source—not even the number of shells fired is certain. In the 1980s, a long note on the gun was discovered and published. This was written by Dr. Fritz Rausenberger (in German), the Krupp engineer in charge of the gun's development, shortly before his death in 1926. Thanks to this, the details of the gun's design and capabilities were considerably clarified.
The gun was capable of firing a shell to a range of and a maximum altitude of —the greatest height reached by a human-made projectile until the first successful V-2 flight test in October 1942. At the start of its 182-second flight, each shell from the Paris Gun reached a speed of .
The distance was so far that the Coriolis effect—the rotation of the Earth—was substantial enough to affect trajectory calculations. The gun was fired at an azimuth of 232 degrees (southwest) from Crépy-en-Laon, which was at a latitude of 49.5 degrees north.
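A back-of-the-envelope sketch of that correction: for a projectile moving roughly horizontally, the lateral Coriolis drift over a flight of duration t is approximately Ω·sin(latitude)·v·t², with Ω the Earth's rotation rate. The 182-second flight time and 49.5-degree latitude come from the text; the range used to estimate the average horizontal speed is an assumed round figure for illustration only, and the estimate ignores the vertical part of the trajectory.

```python
import math

OMEGA_EARTH = 7.292e-5   # rad/s, Earth's rotation rate

def coriolis_drift_m(avg_speed_m_s: float, flight_time_s: float, latitude_deg: float) -> float:
    """First-order lateral Coriolis deflection for horizontal motion: Omega * sin(lat) * v * t^2."""
    return OMEGA_EARTH * math.sin(math.radians(latitude_deg)) * avg_speed_m_s * flight_time_s**2

flight_time = 182.0            # s, from the text
latitude = 49.5                # degrees north, from the text
assumed_range_m = 120_000.0    # assumption: roughly 120 km, for illustration only
avg_speed = assumed_range_m / flight_time

drift = coriolis_drift_m(avg_speed, flight_time, latitude)
print(f"average speed = {avg_speed:.0f} m/s, lateral Coriolis drift = {drift:.0f} m")
```

Under these assumptions the drift comes out on the order of a kilometre, which is why the gunners had to fold the Earth's rotation into their firing calculations even for a city-sized target.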
Seven barrels were constructed. They used worn-out 38 cm SK L/45 "Max" long gun barrels that were fitted with an internal tube that reduced the caliber from to . The tube was long and projected out of the end of the gun, so an extension was bolted to the old gun-muzzle to cover and reinforce the lining tube. A further, long smooth-bore extension was attached to the end of this, giving a total barrel length of . This smooth section was intended to improve accuracy and reduce the dispersion of the shells, as it reduced the slight yaw a shell might have immediately after leaving the gun barrel produced by the gun's rifling. The barrel was braced to counteract barrel drop due to its length and weight, and vibrations while firing; it was mounted on a special rail-transportable carriage and fired from a prepared, concrete emplacement with a turntable. The original breech of the old gun did not require modification or reinforcement.
Since it was based on a naval weapon, the gun was manned by a crew of 80 Imperial Navy sailors under the command of Vice-Admiral Maximilian Rogge, chief of the Ordnance branch of the Admiralty. It was surrounded by several batteries of standard army artillery to create a "noise-screen" chorus around the big gun so that it could not be located by French and British spotters.
The projectile flew significantly higher than projectiles from previous guns. Writer and journalist Adam Hochschild put it this way: "It took about three minutes for each giant shell to cover the distance to the city, climbing to an altitude of at the top of its trajectory. This was by far the highest point ever reached by a man-made object, so high that gunners, in calculating where the shells would land, had to take into account the rotation of the Earth. For the first time in warfare, deadly projectiles rained down on civilians from the stratosphere". This reduced drag from air resistance, allowing the shell to achieve a range of over .
The unfinished V-3 cannon would have been able to fire larger projectiles to a longer range, and with a substantially higher rate of fire. The unfinished Iraqi super gun would also have been substantially bigger.
Projectiles
The Paris Gun shells weighed . The shells initially used had a diameter of and a length of . The main body of the shell was composed of thick steel, containing of TNT.
The small amount of explosive—around 6.6% of the weight of the shell—meant that the effect of its shellburst was small for the shell's size. The thickness of the shell casing, to withstand the forces of firing, meant that shells would explode into a comparatively small number of large fragments, limiting their destructive effect. A crater produced by a shell falling in the Tuileries Garden was described by an eyewitness as being across and deep.
The shells were propelled at such a high velocity that each successive shot wore away a considerable amount of steel from the rifled bore. Each shell was sequentially numbered according to its increasing diameter, and had to be fired in numeric order, lest the projectile lodge in the bore and the gun explode. Also, when the shell was rammed into the gun, the chamber was precisely measured to determine the difference in its length: a few inches off would cause a great variance in the velocity, and with it, the range. Then, with the variance determined, the additional quantity of propellant was calculated, and its measure taken from a special car and added to the regular charge. After 65 rounds had been fired, each of progressively larger caliber to allow for wear, the barrel was sent back to Krupp and rebored with a new set of shells.
The shell's explosive was contained in two compartments, separated by a wall. This strengthened the shell and supported the explosive charge under the acceleration of firing. One of the shell's two fuzes was mounted in the wall, with the other in the base of the shell. The fuzes proved very reliable as every single one of the 303 shells that landed in and around Paris successfully detonated.
The shell's nose was fitted with a streamlined, lightweight, ballistic cap and the side had grooves that engaged with the rifling of the gun barrel, spinning the shell as it was fired so its flight was stable. Two copper driving bands provided a gas-tight seal against the gun barrel during firing.
Use in World War I
The Paris gun was used to shell Paris at a range of . The gun was fired from a wooded hill (Le mont de Joie) near Crépy, and the first shell landed at 7:18 a.m. on 23 March 1918 on the Quai de la Seine, the explosion being heard across the city. Shells continued to land at 15-minute intervals, with 21 counted on the first day. On the first day, fifteen people were killed and thirty-six wounded. The effect on morale in Paris was immediate: by 27 March, queues of thousands had started at the Gare d'Orsay and, at the Gare Montparnasse, ticket sales out of the capital were suspended due to demand.
The initial assumption was these were bombs dropped from an airplane or Zeppelin flying too high to be seen or heard, or perhaps an "aerial torpedo". Within a few hours, sufficient casing fragments had been collected to show that the explosions were the result of shells, not bombs. By the end of the day, military authorities were aware the shells were being fired from behind German lines by a new long-range gun, although there was initial press speculation on the origin of the shells. This included the theory they were being fired by German agents close by Paris, or even within the city itself, so abandoned quarries close to the city were searched for a hidden gun. Another possibility was that German forces had penetrated the front line, but authorities realized that such heavy artillery could not be moved and emplaced so quickly.
The press reported the German gun's range as about , which amazed American ordnance officers, and the shells as , compared to the caliber of heavy German siege shells. The previous world distance record was German bombardment of Dunkirk from , while the best American gun had a range of . Experts thought that the German weapon might be a product of the Škoda Works. Three emplacements for the gun were located within days by the French reconnaissance pilot Didier Daurat, the path of the shells which landed in Paris having revealed the direction from which they were being fired. The closest emplacement was engaged by a 34 cm railway gun while the other two sites were bombed by aircraft, although this failed to interrupt the German bombardment.
Between 320 and 367 shells were fired, at a maximum rate of around 20 per day. The shells killed 250 people and wounded 620, and caused considerable damage to property. The worst incident was on 29 March 1918, when a shell hit the roof of the St-Gervais-et-St-Protais Church, collapsing the roof onto the congregation then hearing the Good Friday service. A total of 91 people were killed and 68 were wounded. There was no firing between 25 and 29 March, when the first barrel was being replaced; an unconfirmed intelligence report claimed that it had exploded. Barrels were probably changed again between 7–11 April and again between 21–24 April. The diameter of the later shells increased from , indicating that the used barrels had been re-bored.
A further emplacement, later identified as specifically designed for the Paris Gun, was found by advancing US troops at the beginning of August, on the north side of the wooded hill at Coucy-le-Château-Auffrique, some from Paris.
The gun was taken back to Germany in August 1918 as Allied advances threatened its security. No guns were ever captured by the Allies. It is believed that near the end of the war they were completely destroyed by the Germans. One spare mounting was captured by American troops in Bruyères-sur-Fère, near Château-Thierry, but the gun was never found; the construction plans seem to have been destroyed as well.
After World War I
Under the terms of the Treaty of Versailles, the Germans were required to turn over a complete Paris Gun to the Allies, but they never complied with this.
In the 1930s, the German Army became interested in rockets for long-range artillery as a replacement for the Paris Gun—which was specifically banned under the Versailles Treaty. This work eventually led to the V-2 rocket that was used in World War II.
Despite the ban, Krupp continued theoretical work on long-range guns. They started experimental work after the Nazi government began funding the project upon coming to power in 1933. This research led to the 21 cm K 12 (E), a refinement of the Paris Gun design concept. Although it was broadly similar in size and range to its predecessor, Krupp's engineers had significantly reduced the problem of barrel wear. They also improved mobility over the fixed Paris Gun by making the K 12 a railway gun.
The first K 12 was delivered to the German Army in 1939 and a second in 1940. During World War II, they were deployed in the Nord-Pas-de-Calais region of France; they were used to shell Kent in Southern England between late 1940 and early 1941. One gun was captured by Allied forces in the Netherlands in 1945.
In popular culture
A parody of the Paris Gun appears in the Charlie Chaplin movie The Great Dictator. Firing at the Cathedral of Notre Dame, the "Tomanians" (the fictional country that represented Germany) succeed in blowing up a small outhouse.
The destruction of the St-Gervais-et-St-Protais Church inspired Romain Rolland to write his novel Pierre et Luce.
See also
Krupp K5, a , World War II German gun with a range.
Notes
References
Bibliography
Henry W. Miller, Railway Artillery: A Report on the Characteristics, Scope of Utility, etc. of Railway Artillery, United States Government Printing Office, 1921
Henry W. Miller, The Paris Gun: The Bombardment of Paris by the German Long Range Guns and the Great German Offensive of 1918, Jonathan Cape, Harrison Smith, New York, 1930
Ian V. Hogg, The Guns 1914 -18, Ballantine Books, New York, 1971
External links
The Paris Gun in the First World War.com Encyclopedia
Paris Gun at S. Berliner, III's ORDNANCE Superguns
Une page sur le canon de Paris
210 mm artillery
Siege artillery
World War I railway artillery of Germany
Lost objects
Paris in World War I | Paris Gun | [
"Physics"
] | 2,694 | [
"Lost objects",
"Physical objects",
"Matter"
] |
869,238 | https://en.wikipedia.org/wiki/Metmyoglobin | Metmyoglobin is the oxidized form of the oxygen-carrying hemeprotein myoglobin.
Metmyoglobin is the cause of the characteristic brown colouration of meat that occurs as it ages.
In living muscle, the concentration of metmyoglobin is vanishingly small, due to the presence of the enzyme metmyoglobin reductase, which, in the presence of the cofactor NADH and the coenzyme cytochrome b5, converts the Fe3+ in the heme prosthetic group of metmyoglobin back to the Fe2+ of normal myoglobin.
In meat, which is dead muscle, the normal processes of removing metmyoglobin are prevented from effecting this repair, or alternatively the rate of metmyoglobin formation exceeds their capacity, so that there is a net accumulation of metmyoglobin as the meat ages.
Metmyoglobin reduction helps limit the oxidation of myoglobin, and the rate of myoglobin oxidation is specific to each species. In other words, metmyoglobin gains electrons and is converted back to myoglobin, limiting the net accumulation of the oxidized form. Metmyoglobin, formed when myoglobin is oxidized, shows the undesirable brown color which can be seen in many types of meat. Metmyoglobin is more susceptible to oxidation when compared to oxymyoglobin. The metmyoglobin-reducing activity varies across species and has been studied particularly in beef, pork, bison, deer, emu, horse, goat, and sheep.
Currently there is no standard technique for measuring metmyoglobin across all species, but many techniques are used, including reflectance spectrophotometry and absorbance spectrophotometry.
References
External links
Hemoproteins | Metmyoglobin | [
"Chemistry"
] | 388 | [
"Biochemistry stubs",
"Protein stubs"
] |
869,267 | https://en.wikipedia.org/wiki/Methemoglobin | Methemoglobin (British: methaemoglobin, shortened MetHb) (pronounced "met-hemoglobin") is a hemoglobin in the form of metalloprotein, in which the iron in the heme group is in the Fe3+ (ferric) state, not the Fe2+ (ferrous) of normal hemoglobin. Sometimes, it is also referred to as ferrihemoglobin. Methemoglobin cannot bind oxygen, which means it cannot carry oxygen to tissues. It is bluish chocolate-brown in color. In human blood a trace amount of methemoglobin is normally produced spontaneously, but when present in excess the blood becomes abnormally dark bluish brown. The NADH-dependent enzyme methemoglobin reductase (a type of diaphorase) is responsible for converting methemoglobin back to hemoglobin.
Normally one to two percent of a person's hemoglobin is methemoglobin; a higher percentage than this can be genetic or caused by exposure to various chemicals and depending on the level can cause health problems known as methemoglobinemia. A higher level of methemoglobin will tend to cause a pulse oximeter to read closer to 85% regardless of the true level of oxygen saturation.
Etymology
The word methemoglobin derives from the Ancient Greek prefix μετα- (meta-: behind, later, subsequent) and the word hemoglobin.
The name hemoglobin is itself derived from the words heme and globin, each subunit of hemoglobin being a globular protein with an embedded heme group.
Common causes of elevated methemoglobin
Reduced cellular defense mechanisms
Children younger than 4 months exposed to various environmental agents
Pregnant women are considered vulnerable to exposure of high levels of nitrates in drinking water
Cytochrome b5 reductase deficiency
G6PD deficiency
Hemoglobin M disease
Pyruvate kinase deficiency
Various pharmaceutical compounds
Local anesthetic agents, especially prilocaine and benzocaine.
Amyl nitrite, chloroquine, dapsone, nitrates, nitrites, nitroglycerin, nitroprusside, phenacetin, phenazopyridine, primaquine, quinones and sulfonamides
Environmental agents
Aromatic amines (e.g. p-nitroaniline)
Arsine
Chlorobenzene
Chromates
Nitrates/nitrites
Umbellulone
Inherited disorders
Some family members of the Fugate family in Kentucky, due to a recessive gene, had blue skin from an excess of methemoglobin.
In cats
Ingestion of paracetamol (i.e. acetaminophen, tylenol)
Therapeutic uses
Amyl nitrite is administered to treat cyanide poisoning. It works by converting hemoglobin to methemoglobin, which allows for the binding of cyanide (CN–) anions by ferric (Fe3+) cations and the formation of cyanomethemoglobin. The immediate goal of forming this cyanide adduct is to prevent the binding of free cyanide to the cytochrome a3 group in cytochrome c oxidase.
Methemoglobin saturation
Methemoglobin is expressed as a concentration or a percentage. Percentage of methemoglobin is calculated by dividing the concentration of methemoglobin by the concentration of total hemoglobin. Percentage of methemoglobin is likely a better indicator of illness severity than overall concentration, as underlying medical conditions play an important role. For example, a methemoglobin concentration of 1.5 g/dL may represent a percentage of 10% in an otherwise healthy patient with a baseline hemoglobin of 15 g/dL, whereas the presence of the same concentration of 1.5 g/dL of methemoglobin in an anemic patient with a baseline hemoglobin of 8 g/dL would represent a percentage of 18.75%. The former patient will be left with a functional hemoglobin concentration of 13.5 g/dL and potentially remain asymptomatic, while the latter patient, with a functional hemoglobin concentration of 6.5 g/dL, may be severely symptomatic with a methemoglobin of less than 20%.
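A minimal sketch of the calculation described above, reproducing the two example patients from the text (the values are taken from the paragraph, not new clinical data):

```python
def methemoglobin_percentage(methb_g_dl: float, total_hb_g_dl: float) -> float:
    """Percentage of methemoglobin = methemoglobin / total hemoglobin * 100."""
    return 100.0 * methb_g_dl / total_hb_g_dl

methb = 1.5  # g/dL of methemoglobin in both example patients
for label, total_hb in (("healthy baseline (15 g/dL)", 15.0), ("anemic baseline (8 g/dL)", 8.0)):
    pct = methemoglobin_percentage(methb, total_hb)
    functional = total_hb - methb
    print(f"{label}: MetHb = {pct:.2f} %, functional hemoglobin = {functional:.1f} g/dL")
```

The output (10% versus 18.75%, with 13.5 versus 6.5 g/dL of functional hemoglobin) matches the two cases described in the paragraph above.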
1–2% Normal
Less than 10% metHb - No symptoms
10–20% metHb - Skin discoloration only (most notably on mucous membranes)
20–30% metHb - Anxiety, headache, dyspnea on exertion
30–50% metHb - Fatigue, confusion, dizziness, tachypnea, palpitations
50–70% metHb - Coma, seizures, arrhythmias, acidosis
Greater than 70% metHb - High risk of death
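The percentage and functional-hemoglobin arithmetic described above the severity list can be illustrated with a short script; this is a sketch only, and the function names are illustrative rather than taken from any clinical source.

```python
def methemoglobin_fraction(methb_g_dl, total_hb_g_dl):
    """Percentage of total hemoglobin that is methemoglobin."""
    return 100.0 * methb_g_dl / total_hb_g_dl

def functional_hb(methb_g_dl, total_hb_g_dl):
    """Hemoglobin still able to carry oxygen (total minus methemoglobin)."""
    return total_hb_g_dl - methb_g_dl

# Worked examples from the text: the same 1.5 g/dL of methemoglobin is
# 10% of a 15 g/dL baseline but 18.75% of an anemic 8 g/dL baseline.
for total in (15.0, 8.0):
    pct = methemoglobin_fraction(1.5, total)
    print(f"total Hb {total:4.1f} g/dL -> MetHb {pct:5.2f} %, "
          f"functional Hb {functional_hb(1.5, total):4.1f} g/dL")
```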
This may be further compounded by the "functional hemoglobin's" decreased ability to release oxygen in the presence of methemoglobin. Anemia, congestive heart failure, chronic obstructive pulmonary disease, and essentially any pathology that impairs the ability to deliver oxygen may worsen the symptoms of methemoglobinemia.
Blood stains
Increased levels of methemoglobin are found in blood stains. Upon exiting the body, bloodstains transit from bright red to dark brown, which is attributed to oxidation of oxy-hemoglobin (HbO2) to methemoglobin (met-Hb) and hemichrome (HC).
See also
Blue baby syndrome
Carboxyhemoglobin
Methemoglobinemia
References
External links
MetHb Formation
The Blue people of Troublesome Creek
Cellular respiration
Hemoglobins
Hemoproteins
Respiratory physiology | Methemoglobin | [
"Chemistry",
"Biology"
] | 1,250 | [
"Biochemistry",
"Cellular respiration",
"Metabolism"
] |
869,444 | https://en.wikipedia.org/wiki/Laser-induced%20fluorescence | Laser-induced fluorescence (LIF) or laser-stimulated fluorescence (LSF) is a spectroscopic method in which an atom or molecule is excited to a higher energy level by the absorption of laser light followed by spontaneous emission of light. It was first reported by Zare and coworkers in 1968.
LIF is used for studying the structure of molecules, selective detection of species, and flow visualization and measurements. The wavelength is often selected to be the one at which the species has its largest cross section. The excited species will, after some time (usually on the order of a few nanoseconds to microseconds), de-excite and emit light at a wavelength longer than the excitation wavelength. This fluorescent light is typically recorded with a photomultiplier tube (PMT) or filtered photodiodes.
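As a rough numerical illustration of the nanosecond-to-microsecond de-excitation mentioned above, the sketch below simulates a single-exponential fluorescence decay and recovers its lifetime with a log-linear fit; the chosen lifetime, observation window, and noise model are arbitrary assumptions, not values from this article.

```python
import numpy as np

rng = np.random.default_rng(0)

tau_true = 5e-9                      # assumed fluorescence lifetime, 5 ns
t = np.linspace(0, 50e-9, 500)       # 50 ns observation window
signal = 1000.0 * np.exp(-t / tau_true)
counts = rng.poisson(signal)         # shot noise on the detected photons

# Log-linear fit: ln(counts) = ln(A) - t/tau, ignoring empty bins
mask = counts > 0
slope, intercept = np.polyfit(t[mask], np.log(counts[mask]), 1)
tau_fit = -1.0 / slope

print(f"true lifetime   {tau_true * 1e9:.2f} ns")
print(f"fitted lifetime {tau_fit * 1e9:.2f} ns")
```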
Types
Two different kinds of spectra exist, disperse spectra and excitation spectra.
The disperse spectra are recorded with a fixed lasing wavelength, as above, and the resulting fluorescence spectrum is analyzed. Excitation scans, on the other hand, collect fluorescent light at a fixed emission wavelength or range of wavelengths while the lasing wavelength is scanned.
An advantage over absorption spectroscopy is that it is possible to get two- and three-dimensional images, since fluorescence takes place in all directions (i.e. the fluorescence signal is usually isotropic). The signal-to-noise ratio of the fluorescence signal is very high, providing good sensitivity to the process. It is also possible to distinguish between several species, since the lasing wavelength can be tuned to a particular excitation of a given species which is not shared by other species.
LIF is useful in the study of the electronic structure of molecules and their interactions. It has also been successfully applied for quantitative measurement of concentrations in fields like combustion, plasma, spray and flow phenomena (such as molecular tagging velocimetry), in some cases visualizing concentrations down to nanomolar levels. LED-induced fluorescence has been used in situ to delineate aromatic hydrocarbon contamination as a cone penetrometer add on module and also as a percussive capable asset.
Applications
Detection of purity
Optical tumor diagnosis
Imaging of paleontological specimens
Detection and quantification of biomolecules and biological processes (e.g. DNA sequencing, trace protein analysis, polymerase chain reaction products, and single-cell analysis)
In plasma diagnostics, which measures plasma properties such as the ion distribution function and velocity space diffusion and convection in a plasma
See also
Fluorescence microscope
Planar laser-induced fluorescence
Ultrafast laser spectroscopy
References
Fluorescence
Plasma diagnostics | Laser-induced fluorescence | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 543 | [
"Luminescence",
"Fluorescence",
"Plasma physics",
"Measuring instruments",
"Plasma diagnostics"
] |
869,590 | https://en.wikipedia.org/wiki/Canonical%20quantization | In physics, canonical quantization is a procedure for quantizing a classical theory, while attempting to preserve the formal structure, such as symmetries, of the classical theory to the greatest extent possible.
Historically, this was not quite Werner Heisenberg's route to obtaining quantum mechanics, but Paul Dirac introduced it in his 1926 doctoral thesis, the "method of classical analogy" for quantization, and detailed it in his classic text Principles of Quantum Mechanics. The word canonical arises from the Hamiltonian approach to classical mechanics, in which a system's dynamics is generated via canonical Poisson brackets, a structure which is only partially preserved in canonical quantization.
This method was further used by Paul Dirac in the context of quantum field theory, in his construction of quantum electrodynamics. In the field theory context, it is also called the second quantization of fields, in contrast to the semi-classical first quantization of single particles.
History
When it was first developed, quantum physics dealt only with the quantization of the motion of particles, leaving the electromagnetic field classical, hence the name quantum mechanics.
Later the electromagnetic field was also quantized, and even the particles themselves became represented through quantized fields, resulting in the development of quantum electrodynamics (QED) and quantum field theory in general. Thus, by convention, the original form of particle quantum mechanics is denoted first quantization, while quantum field theory is formulated in the language of second quantization.
First quantization
Single particle systems
The following exposition is based on Dirac's treatise on quantum mechanics.
In the classical mechanics of a particle, there are dynamic variables which are called coordinates (q) and momenta (p). These specify the state of a classical system. The canonical structure (also known as the symplectic structure) of classical mechanics consists of Poisson brackets enclosing these variables, such as {q, p} = 1. All transformations of variables which preserve these brackets are allowed as canonical transformations in classical mechanics. Motion itself is such a canonical transformation.
By contrast, in quantum mechanics, all significant features of a particle are contained in a state |ψ⟩, called a quantum state. Observables are represented by operators acting on a Hilbert space of such quantum states.
The eigenvalue of an operator acting on one of its eigenstates represents the value of a measurement on the particle thus represented. For example, the energy is read off by the Hamiltonian operator H acting on a state |ψ_E⟩, yielding
H |ψ_E⟩ = E |ψ_E⟩,
where E is the characteristic energy associated to this eigenstate.
Any state could be represented as a linear combination of eigenstates of energy; for example,
|ψ⟩ = Σ_n c_n |ψ_n⟩,
where c_n are constant coefficients.
As in classical mechanics, all dynamical operators can be represented by functions of the position and momentum ones, X and P, respectively. The connection between this representation and the more usual wavefunction representation is given by the eigenstate of the position operator X representing a particle at position x, which is denoted by an element |x⟩ in the Hilbert space, and which satisfies X |x⟩ = x |x⟩. Then, ψ(x) = ⟨x|ψ⟩.
Likewise, the eigenstates of the momentum operator P specify the momentum representation: ψ(p) = ⟨p|ψ⟩.
The central relation between these operators is a quantum analog of the above Poisson bracket of classical mechanics, the canonical commutation relation,
[X, P] = XP − PX = iħ.
This relation encodes (and formally leads to) the uncertainty principle, in the form Δx Δp ≥ ħ/2. This algebraic structure may be thus considered as the quantum analog of the canonical structure of classical mechanics.
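A quick numerical check of the canonical commutation relation can be made with truncated harmonic-oscillator matrices for the position and momentum operators; this is a standard textbook construction, not taken from this article, and the value of ħ and the truncation size are arbitrary choices. The truncation spoils the relation only in the last diagonal entry.

```python
import numpy as np

N = 8                                      # truncated Hilbert space dimension
hbar = 1.0

# Annihilation operator a in the number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

# Position and momentum built from ladder operators (units with m = omega = 1)
X = np.sqrt(hbar / 2) * (a + adag)
P = 1j * np.sqrt(hbar / 2) * (adag - a)

comm = X @ P - P @ X
print(np.round(comm.diagonal(), 6))
# Every diagonal entry equals i*hbar except the last one, which is an
# artifact of truncating the infinite-dimensional space.
```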
Many-particle systems
When turning to N-particle systems, i.e., systems containing N identical particles (particles characterized by the same quantum numbers such as mass, charge and spin), it is necessary to extend the single-particle state function ψ(x) to the N-particle state function ψ(x₁, x₂, ..., x_N). A fundamental difference between classical and quantum mechanics concerns the concept of indistinguishability of identical particles. Only two species of particles are thus possible in quantum physics, the so-called bosons and fermions which obey the following rules for each kind of particle:
for bosons: ψ(..., x_i, ..., x_j, ...) = +ψ(..., x_j, ..., x_i, ...),
for fermions: ψ(..., x_i, ..., x_j, ...) = −ψ(..., x_j, ..., x_i, ...),
where we have interchanged two coordinates of the state function. The usual wave function is obtained using the Slater determinant and the identical particles theory. Using this basis, it is possible to solve various many-particle problems.
Issues and limitations
Classical and quantum brackets
Dirac's book details his popular rule of supplanting Poisson brackets by commutators:
{A, B}  ⟼  (1/(iħ)) [A, B].
One might interpret this proposal as saying that we should seek a "quantization map" Q, mapping a function f on the classical phase space to an operator Q(f) on the quantum Hilbert space, such that
[Q(f), Q(g)] = iħ Q({f, g}).
It is now known that there is no reasonable such quantization map satisfying the above identity exactly for all functions f and g.
Groenewold's theorem
One concrete version of the above impossibility claim is Groenewold's theorem (after Dutch theoretical physicist Hilbrand J. Groenewold), which we describe for a system with one degree of freedom for simplicity. Let us accept the following "ground rules" for the map Q. First, Q should send the constant function 1 to the identity operator. Second, Q should take x and p to the usual position and momentum operators X and P. Third, Q should take a polynomial in x and p to a "polynomial" in X and P, that is, a finite linear combination of products of X and P, which may be taken in any desired order. In its simplest form, Groenewold's theorem says that there is no map satisfying the above ground rules and also the bracket condition
Q({f, g}) = (1/(iħ)) [Q(f), Q(g)]
for all polynomials f and g.
Actually, the nonexistence of such a map occurs already by the time we reach polynomials of degree four. Note that the Poisson bracket of two polynomials of degree four has degree six, so it does not exactly make sense to require a map on polynomials of degree four to respect the bracket condition. We can, however, require that the bracket condition holds when f and g have degree three. Groenewold's theorem can be stated as follows: there is no quantization map satisfying the above ground rules on polynomials of degree four or less for which the bracket condition holds whenever f and g have degree three or less.
The proof can be outlined as follows. Suppose we first try to find a quantization map on polynomials of degree less than or equal to three satisfying the bracket condition whenever f has degree less than or equal to two and g has degree less than or equal to two. Then there is precisely one such map, and it is the Weyl quantization. The impossibility result now is obtained by writing the same polynomial of degree four as a Poisson bracket of polynomials of degree three in two different ways. Specifically, we have
x²p² = (1/9) {x³, p³} = (1/3) {x²p, xp²}.
On the other hand, we have already seen that if there is going to be a quantization map on polynomials of degree three, it must be the Weyl quantization; that is, we have already determined the only possible quantization of all the cubic polynomials above.
The argument is finished by computing by brute force that
(1/(9iħ)) [Q(x³), Q(p³)]
does not coincide with
(1/(3iħ)) [Q(x²p), Q(xp²)],
where Q here denotes the Weyl quantization of the cubic polynomials. Thus, we have two incompatible requirements for the value of Q(x²p²).
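The classical identity used above can be checked directly from the definition of the Poisson bracket {f, g} = ∂ₓf ∂ₚg − ∂ₚf ∂ₓg, which is the only input needed:

```latex
\{x^{3},\,p^{3}\} = (3x^{2})(3p^{2}) - 0 = 9x^{2}p^{2},
\qquad
\{x^{2}p,\; xp^{2}\} = (2xp)(2xp) - (x^{2})(p^{2}) = 3x^{2}p^{2},
```

so that x²p² = (1/9){x³, p³} = (1/3){x²p, xp²}: two classically equal expressions whose quantizations, via the bracket condition, turn out to disagree.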
Axioms for quantization
If Q represents the quantization map that acts on functions f in classical phase space, then the following properties are usually considered desirable:
Q(x) ψ = x ψ and Q(p) ψ = −iħ ∂ψ/∂x (elementary position/momentum operators)
f ⟼ Q(f) is a linear map
[Q(f), Q(g)] = iħ Q({f, g}) (Poisson bracket)
Q(f²) = Q(f)² (von Neumann rule).
However, not only are these four properties mutually inconsistent, any three of them are also inconsistent! As it turns out, the only pairs of these properties that lead to self-consistent, nontrivial solutions are 2 & 3, and possibly 1 & 3 or 1 & 4. Accepting properties 1 & 2, along with a weaker condition that 3 be true only asymptotically in the limit (see Moyal bracket), leads to deformation quantization, and some extraneous information must be provided, as in the standard theories utilized in most of physics. Accepting properties 1 & 2 & 3 but restricting the space of quantizable observables to exclude terms such as the cubic ones in the above example amounts to geometric quantization.
Second quantization: field theory
Quantum mechanics was successful at describing non-relativistic systems with fixed numbers of particles, but a new framework was needed to describe systems in which particles can be created or destroyed, for example, the electromagnetic field, considered as a collection of photons. It was eventually realized that special relativity was inconsistent with single-particle quantum mechanics, so that all particles are now described relativistically by quantum fields.
When the canonical quantization procedure is applied to a field, such as the electromagnetic field, the classical field variables become quantum operators. Thus, the normal modes comprising the amplitude of the field are simple oscillators, each of which is quantized in standard first quantization, above, without ambiguity. The resulting quanta are identified with individual particles or excitations. For example, the quanta of the electromagnetic field are identified with photons. Unlike first quantization, conventional second quantization is completely unambiguous, in effect a functor, since the constituent set of its oscillators are quantized unambiguously.
Historically, quantizing the classical theory of a single particle gave rise to a wavefunction. The classical equations of motion of a field are typically identical in form to the (quantum) equations for the wave-function of one of its quanta. For example, the Klein–Gordon equation is the classical equation of motion for a free scalar field, but also the quantum equation for a scalar particle wave-function. This meant that quantizing a field appeared to be similar to quantizing a theory that was already quantized, leading to the fanciful term second quantization in the early literature, which is still used to describe field quantization, even though the modern interpretation detailed is different.
One drawback to canonical quantization for a relativistic field is that by relying on the Hamiltonian to determine time dependence, relativistic invariance is no longer manifest. Thus it is necessary to check that relativistic invariance is not lost. Alternatively, the Feynman integral approach is available for quantizing relativistic fields, and is manifestly invariant. For non-relativistic field theories, such as those used in condensed matter physics, Lorentz invariance is not an issue.
Field operators
Quantum mechanically, the variables of a field (such as the field's amplitude at a given point) are represented by operators on a Hilbert space. In general, all observables are constructed as operators on the Hilbert space, and the time-evolution of the operators is governed by the Hamiltonian, which must be a positive operator. A state annihilated by the Hamiltonian must be identified as the vacuum state, which is the basis for building all other states. In a non-interacting (free) field theory, the vacuum is normally identified as a state containing zero particles. In a theory with interacting particles, identifying the vacuum is more subtle, due to vacuum polarization, which implies that the physical vacuum in quantum field theory is never really empty. For further elaboration, see the articles on the quantum mechanical vacuum and the vacuum of quantum chromodynamics. The details of the canonical quantization depend on the field being quantized, and whether it is free or interacting.
Real scalar field
A scalar field theory provides a good example of the canonical quantization procedure. Classically, a scalar field is a collection of an infinity of oscillator normal modes. It suffices to consider a 1+1-dimensional space-time in which the spatial direction is compactified to a circle of circumference 2π, rendering the momenta discrete.
The classical Lagrangian density describes an infinity of coupled harmonic oscillators, labelled by x, which is now a label (and not the displacement dynamical variable to be quantized), denoted by the classical field φ(x),
L(φ) = (1/2)(∂_t φ)² − (1/2)(∂_x φ)² − V(φ),
where V(φ) is a potential term, often taken to be a polynomial or monomial of degree 3 or higher. The action functional is
S = ∫ L(φ) dx dt.
The canonical momentum obtained via the Legendre transformation using the action S is π(x) = ∂_t φ(x), and the classical Hamiltonian is found to be
H = ∫ dx [ (1/2) π(x)² + (1/2) (∂_x φ(x))² + V(φ(x)) ].
Canonical quantization treats the variables φ(x) and π(x) as operators with canonical commutation relations at time t = 0, given by
[φ(x), φ(y)] = 0,  [π(x), π(y)] = 0,  [φ(x), π(y)] = iħ δ(x − y).
Operators constructed from φ and π can then formally be defined at other times via the time-evolution generated by the Hamiltonian,
O(t) = e^{iHt} O e^{−iHt}.
However, since φ and π no longer commute, this expression is ambiguous at the quantum level. The problem is to construct a representation of the relevant operators on a Hilbert space and to construct the Hamiltonian H as a positive operator on this Hilbert space in such a way that it gives this evolution for the operators as given by the preceding equation, and to show that the Hilbert space contains a vacuum state on which H has zero eigenvalue. In practice, this construction is a difficult problem for interacting field theories, and has been solved completely only in a few simple cases via the methods of constructive quantum field theory. Many of these issues can be sidestepped using the Feynman integral as described for a particular V(φ) in the article on scalar field theory.
In the case of a free field, with V(φ) = 0, the quantization procedure is relatively straightforward. It is convenient to Fourier transform the fields, so that
The reality of the fields implies that
The classical Hamiltonian may be expanded in Fourier modes as
where .
This Hamiltonian is thus recognizable as an infinite sum of classical normal mode oscillator excitations , each one of which is quantized in the standard manner, so the free quantum Hamiltonian looks identical. It is the s that have become operators obeying the standard commutation relations, , with all others vanishing. The collective Hilbert space of all these oscillators is thus constructed using creation and annihilation operators constructed from these modes,
for which [a_k, a_k′†] = δ_{kk′} for all k and k′, with all other commutators vanishing.
The vacuum |0⟩ is taken to be annihilated by all of the a_k, and the Hilbert space is constructed by applying any combination of the infinite collection of creation operators a_k† to |0⟩. This Hilbert space is called Fock space. For each k, this construction is identical to a quantum harmonic oscillator. The quantum field is an infinite array of quantum oscillators. The quantum Hamiltonian then amounts to
H = Σ_k ħω_k a_k† a_k = Σ_k ħω_k N_k,
where N_k = a_k† a_k may be interpreted as the number operator giving the number of particles in a state with momentum k.
This Hamiltonian differs from the previous expression by the subtraction of the zero-point energy of each harmonic oscillator. This satisfies the condition that must annihilate the vacuum, without affecting the time-evolution of operators via the above exponentiation operation. This subtraction of the zero-point energy may be considered to be a resolution of the quantum operator ordering ambiguity, since it is equivalent to requiring that all creation operators appear to the left of annihilation operators in the expansion of the Hamiltonian. This procedure is known as Wick ordering or normal ordering.
Other fields
All other fields can be quantized by a generalization of this procedure. Vector or tensor fields simply have more components, and independent creation and destruction operators must be introduced for each independent component. If a field has any internal symmetry, then creation and destruction operators must be introduced for each component of the field related to this symmetry as well. If there is a gauge symmetry, then the number of independent components of the field must be carefully analyzed to avoid over-counting equivalent configurations, and gauge-fixing may be applied if needed.
It turns out that commutation relations are useful only for quantizing bosons, for which the occupancy number of any state is unlimited. To quantize fermions, which satisfy the Pauli exclusion principle, anti-commutators are needed. These are defined by {A, B} = AB + BA.
When quantizing fermions, the fields are expanded in creation and annihilation operators, b_k†, b_k, which satisfy
{b_k, b_l†} = δ_{kl},  {b_k, b_l} = 0,  {b_k†, b_l†} = 0.
The states are constructed on a vacuum |0⟩ annihilated by the b_k, and the Fock space is built by applying all products of creation operators b_k† to |0⟩. Pauli's exclusion principle is satisfied, because (b_k†)² = 0, by virtue of the anti-commutation relations.
Condensates
The construction of the scalar field states above assumed that the potential was minimized at φ = 0, so that the vacuum minimizing the Hamiltonian satisfies ⟨φ⟩ = 0, indicating that the vacuum expectation value (VEV) of the field is zero. In cases involving spontaneous symmetry breaking, it is possible to have a non-zero VEV, because the potential is minimized for a value φ = v. This occurs, for example, for a quartic potential of the form V(φ) = λφ⁴ − m²φ² with λ > 0 and m² > 0, for which the minimum energy is found at v = ±m/√(2λ). The value of v in one of these vacua may be considered as a condensate of the field φ. Canonical quantization then can be carried out for the shifted field φ(x) − v, and particle states with respect to the shifted vacuum are defined by quantizing the shifted field. This construction is utilized in the Higgs mechanism in the standard model of particle physics.
Mathematical quantization
Deformation quantization
The classical theory is described using a spacelike foliation of spacetime with the state at each slice being described by an element of a symplectic manifold with the time evolution given by the symplectomorphism generated by a Hamiltonian function over the symplectic manifold. The quantum algebra of "operators" is an ħ-deformation of the algebra of smooth functions over the symplectic space such that the leading term in the Taylor expansion over ħ of the commutator [A, B], expressed in the phase space formulation, is iħ{A, B}. (Here, the curly braces denote the Poisson bracket. The subleading terms are all encoded in the Moyal bracket, the suitable quantum deformation of the Poisson bracket.) In general, for the quantities (observables) involved,
and providing the arguments of such brackets, ħ-deformations are highly nonunique—quantization is an "art", and is specified by the physical context.
(Two different quantum systems may represent two different, inequivalent, deformations of the same classical limit, ħ → 0.)
Now, one looks for unitary representations of this quantum algebra. With respect to such a unitary representation, a symplectomorphism in the classical theory would now deform to a (metaplectic) unitary transformation. In particular, the time evolution symplectomorphism generated by the classical Hamiltonian deforms to a unitary transformation generated by the corresponding quantum Hamiltonian.
A further generalization is to consider a Poisson manifold instead of a symplectic space for the classical theory and perform an ħ-deformation of the corresponding Poisson algebra or even Poisson supermanifolds.
Geometric quantization
In contrast to the theory of deformation quantization described above, geometric quantization seeks to construct an actual Hilbert space and operators on it. Starting with a symplectic manifold , one first constructs a prequantum Hilbert space consisting of the space of square-integrable sections of an appropriate line bundle over . On this space, one can map all classical observables to operators on the prequantum Hilbert space, with the commutator corresponding exactly to the Poisson bracket. The prequantum Hilbert space, however, is clearly too big to describe the quantization of .
One then proceeds by choosing a polarization, that is (roughly), a choice of variables on the -dimensional phase space. The quantum Hilbert space is then the space of sections that depend only on the chosen variables, in the sense that they are covariantly constant in the other directions. If the chosen variables are real, we get something like the traditional Schrödinger Hilbert space. If the chosen variables are complex, we get something like the Segal–Bargmann space.
See also
Correspondence principle
Creation and annihilation operators
Dirac bracket
Moyal bracket
Phase space formulation (of quantum mechanics)
Geometric quantization
References
Historical References
Silvan S. Schweber: QED and the men who made it, Princeton Univ. Press, 1994,
General Technical References
Alexander Altland, Ben Simons: Condensed matter field theory, Cambridge Univ. Press, 2009,
James D. Bjorken, Sidney D. Drell: Relativistic quantum mechanics, New York, McGraw-Hill, 1964
.
An introduction to quantum field theory, by M.E. Peskin and H.D. Schroeder,
Franz Schwabl: Advanced Quantum Mechanics, Berlin and elsewhere, Springer, 2009
External links
Pedagogic Aides to Quantum Field Theory Click on the links for Chaps. 1 and 2 at this site to find an extensive, simplified introduction to second quantization. See Sect. 1.5.2 in Chap. 1. See Sect. 2.7 and the chapter summary in Chap. 2.
Quantum field theory
Mathematical quantization | Canonical quantization | [
"Physics"
] | 4,266 | [
"Quantum field theory",
"Mathematical quantization",
"Quantum mechanics"
] |
869,825 | https://en.wikipedia.org/wiki/Optical%20flow | Optical flow or optic flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene. Optical flow can also be defined as the distribution of apparent velocities of movement of brightness pattern in an image.
The concept of optical flow was introduced by the American psychologist James J. Gibson in the 1940s to describe the visual stimulus provided to animals moving through the world. Gibson stressed the importance of optic flow for affordance perception, the ability to discern possibilities for action within the environment. Followers of Gibson and his ecological approach to psychology have further demonstrated the role of the optical flow stimulus for the perception of movement by the observer in the world; perception of the shape, distance and movement of objects in the world; and the control of locomotion.
The term optical flow is also used by roboticists, encompassing related techniques from image processing and control of navigation including motion detection, object segmentation, time-to-contact information, focus of expansion calculations, luminance, motion compensated encoding, and stereo disparity measurement.
Estimation
Optical flow can be estimated in a number of ways. Broadly, optical flow estimation approaches can be divided into machine learning based models (sometimes called data-driven models), classical models (sometimes called knowledge-driven models) which do not use machine learning and hybrid models which use aspects of both learning based models and classical models.
Classical Models
Many classical models use the intuitive assumption of brightness constancy; that even if a point moves between frames, its brightness stays constant.
To formalise this intuitive assumption, consider two consecutive frames from a video sequence, with intensity I(x, y, t), where x and y refer to pixel coordinates and t refers to time.
In this case, the brightness constancy constraint is
I(x, y, t) = I(x + Δx, y + Δy, t + Δt),
where (Δx, Δy) is the displacement vector between a point in the first frame and the corresponding point in the second frame.
By itself, the brightness constancy constraint cannot be solved for Δx and Δy at each pixel, since there is only one equation and two unknowns.
This is known as the aperture problem.
Therefore, additional constraints must be imposed to estimate the flow field.
Regularized Models
Perhaps the most natural approach to addressing the aperture problem is to apply a smoothness constraint or a regularization constraint to the flow field.
One can combine both of these constraints to formulate estimating optical flow as an optimization problem, where the goal is to minimize a cost function of the form
E(Δx, Δy) = ∫_Ω ρ( I(x + Δx, y + Δy, t + Δt) − I(x, y, t) ) + λ ( |∇Δx|² + |∇Δy|² ) dx dy,
where Ω is the extent of the images I, ∇ is the gradient operator, λ is a constant, and ρ is a loss function.
This optimisation problem is difficult to solve owing to its non-linearity.
To address this issue, one can use a variational approach and linearise the brightness constancy constraint using a first order Taylor series approximation. Specifically, the brightness constancy constraint is approximated as
I(x + Δx, y + Δy, t + Δt) ≈ I(x, y, t) + Δx ∂I/∂x + Δy ∂I/∂y + Δt ∂I/∂t.
For convenience, the derivatives of the image, ∂I/∂x, ∂I/∂y and ∂I/∂t, are often condensed to become I_x, I_y and I_t.
Doing so allows one to rewrite the linearised brightness constancy constraint as
I_x Δx + I_y Δy + I_t Δt = 0.
The optimization problem can now be rewritten as
For the choice of the quadratic loss ρ(s) = s², this method is the same as the Horn-Schunck method.
Of course, other choices of cost function have been used, such as ρ(s) = √(s² + ε²), which is a differentiable variant of the L1 norm.
To solve the aforementioned optimization problem, one can use the Euler-Lagrange equations to provide a system of partial differential equations for each point in the image domain Ω. In the simplest case of using ρ(s) = s², these equations are
I_x (I_x Δx + I_y Δy + I_t Δt) − λ ∇²(Δx) = 0,
I_y (I_x Δx + I_y Δy + I_t Δt) − λ ∇²(Δy) = 0,
where ∇² denotes the Laplace operator.
Since the image data is made up of discrete pixels, these equations are discretised.
Doing so yields a system of linear equations which can be solved for at each pixel, using an iterative scheme such as Gauss-Seidel.
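A compact sketch of this regularized estimation in the quadratic case is given below; it uses the classical Horn–Schunck update with Jacobi-style neighbourhood averaging rather than true Gauss–Seidel sweeps, and the derivative filters, averaging kernel, and weight alpha are typical choices assumed here, not prescribed by the text.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Estimate a dense optical flow field (u, v) between two grayscale frames."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)

    # Simple spatial and temporal derivative estimates
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = 0.25 * (convolve(im2, np.ones((2, 2))) - convolve(im1, np.ones((2, 2))))

    # Kernel used for the neighbourhood mean of the flow field
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0

    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # Update derived from the linearised constraint plus smoothness term
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v
```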
An alternate approach is to discretize the optimisation problem and then perform a search of the possible values without linearising it.
This search is often performed using Max-flow min-cut theorem algorithms, linear programming or belief propagation methods.
Parametric Models
Instead of applying the regularization constraint on a point by point basis as per a regularized model, one can group pixels into regions and estimate the motion of these regions.
This is known as a parametric model, since the motion of these regions is parameterized.
In formulating optical flow estimation in this way, one makes the assumption that the motion field in each region be fully characterised by a set of parameters.
Therefore, the goal of a parametric model is to estimate the motion parameters that minimise a loss function which can be written as,
where is the set of parameters determining the motion in the region , is data cost term, is a weighting function that determines the influence of pixel on the total cost, and and are frames 1 and 2 from a pair of consecutive frames.
The simplest parametric model is the Lucas-Kanade method. This uses rectangular regions and parameterises the motion as purely translational. The Lucas-Kanade method uses the original brightness constancy constraint as the data cost term and typically selects a uniform weighting over the region.
This yields the local loss function,
Other possible local loss functions include the negative normalized cross-correlation between the two frames.
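A minimal sketch of this windowed, purely translational least-squares estimate for a single rectangular region follows; the window size and the derivative approximations are arbitrary assumptions for illustration.

```python
import numpy as np

def lucas_kanade_patch(im1, im2, y, x, half=7):
    """Estimate one translational flow vector (u, v) for the window centred
    at (y, x), by least squares on I_x*u + I_y*v + I_t = 0 over the window."""
    win1 = im1[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    win2 = im2[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)

    Iy, Ix = np.gradient(win1)          # spatial derivatives of the first frame
    It = win2 - win1                    # temporal derivative

    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # N x 2 design matrix
    b = -It.ravel()
    # Least-squares solution of the over-determined system A [u, v]^T = b
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow                                       # (u, v)
```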
Learning Based Models
Instead of seeking to model optical flow directly, one can train a machine learning system to estimate optical flow. Since 2015, when FlowNet was proposed, learning-based models have been applied to optical flow and have gained prominence. Initially, these approaches were based on convolutional neural networks arranged in a U-Net architecture. However, with the advent of the transformer architecture in 2017, transformer-based models have gained prominence.
Uses
Motion estimation and video compression have developed as a major aspect of optical flow research. While the optical flow field is superficially similar to a dense motion field derived from the techniques of motion estimation, optical flow is the study of not only the determination of the optical flow field itself, but also of its use in estimating the three-dimensional nature and structure of the scene, as well as the 3D motion of objects and the observer relative to the scene, most of them using the image Jacobian.
Optical flow was used by robotics researchers in many areas such as: object detection and tracking, image dominant plane extraction, movement detection, robot navigation and visual odometry. Optical flow information has been recognized as being useful for controlling micro air vehicles.
The application of optical flow includes the problem of inferring not only the motion of the observer and objects in the scene, but also the structure of objects and the environment. Since awareness of motion and the generation of mental maps of the structure of our environment are critical components of animal (and human) vision, the conversion of this innate ability to a computer capability is similarly crucial in the field of machine vision.
Consider a five-frame clip of a ball moving from the bottom left of a field of vision to the top right. Motion estimation techniques can determine that on a two-dimensional plane the ball is moving up and to the right, and vectors describing this motion can be extracted from the sequence of frames. For the purposes of video compression (e.g., MPEG), the sequence is now described as well as it needs to be. However, in the field of machine vision, the question of whether the ball is moving to the right or the observer is moving to the left is unknowable yet critical information. Even a static, patterned background in the five frames would not let us confidently state that the ball was moving to the right, because the pattern might be infinitely distant from the observer.
Optical flow sensor
Various configurations of optical flow sensors exist. One configuration is an image sensor chip connected to a processor programmed to run an optical flow algorithm. Another configuration uses a vision chip, which is an integrated circuit having both the image sensor and the processor on the same die, allowing for a compact implementation. An example of this is a generic optical mouse sensor used in an optical mouse. In some cases the processing circuitry may be implemented using analog or mixed-signal circuits to enable fast optical flow computation using minimal current consumption.
One area of contemporary research is the use of neuromorphic engineering techniques to implement circuits that respond to optical flow, and thus may be appropriate for use in an optical flow sensor. Such circuits may draw inspiration from biological neural circuitry that similarly responds to optical flow.
Optical flow sensors are used extensively in computer optical mice, as the main sensing component for measuring the motion of the mouse across a surface.
Optical flow sensors are also being used in robotics applications, primarily where there is a need to measure visual motion or relative motion between the robot and other objects in the vicinity of the robot. The use of optical flow sensors in unmanned aerial vehicles (UAVs), for stability and obstacle avoidance, is also an area of current research.
See also
Ambient optic array
Optical mouse
Range imaging
Vision processing unit
Continuity Equation
Motion field
References
External links
Finding Optic Flow
Art of Optical Flow article on fxguide.com (using optical flow in visual effects)
Optical flow evaluation and ground truth sequences.
Middlebury Optical flow evaluation and ground truth sequences.
mrf-registration.net - Optical flow estimation through MRF
The French Aerospace Lab: GPU implementation of a Lucas-Kanade based optical flow
CUDA Implementation by CUVI (CUDA Vision & Imaging Library)
Horn and Schunck Optical Flow: Online demo and source code of the Horn and Schunck method
TV-L1 Optical Flow: Online demo and source code of the Zach et al. method
Robust Optical Flow: Online demo and source code of the Brox et al. method
Motion in computer vision | Optical flow | [
"Physics"
] | 1,947 | [
"Physical phenomena",
"Motion (physics)",
"Motion in computer vision"
] |
869,832 | https://en.wikipedia.org/wiki/Massive%20particle | The physics technical term massive particle refers to a particle that has real, non-zero rest mass (such as baryonic matter), the counterpart of the term massless particle. According to special relativity, the velocity of a massive particle is always less than the speed of light. When highlighting relativistic speeds, the synonyms bradyon (from Greek βραδύς, bradys, “slow”), tardyon or ittyon
are sometimes used to contrast with luxon (which moves at light speed) and hypothetical tachyon (which moves faster than light).
See also
Elementary particle
Massless particle
Particle
Tachyon
References
Particle physics | Massive particle | [
"Physics"
] | 135 | [
"Particle physics stubs",
"Particle physics"
] |
870,261 | https://en.wikipedia.org/wiki/Aviation%20light%20signals | In the case of a radio failure or aircraft not equipped with a radio, or in the case of a deaf pilot, air traffic control may use a signal lamp (called a "signal light gun" or "light gun" by the FAA) to direct the aircraft. ICAO regulations require air traffic control towers to possess such signal lamps. The signal lamp has a focused bright beam and is capable of emitting three different colors: red, white and green. These colors may be flashed or steady, and have different meanings to aircraft in flight or on the ground. Planes can acknowledge the instruction by rocking their wings, moving the ailerons if on the ground, or by flashing their landing or navigation lights during hours of darkness. Air traffic control signal light guns are typically specified with a (white) center beam brightness of > 180,000 - 200,000 candela, and are visible for roughly 4 miles in clear daylight conditions. The table below describes the meaning of the signals. The use of handheld combination red/green/white signal lamps for air traffic control dates back to at least the 1930s.
See also
Navigation light
Formation light
Landing light
References
Airport infrastructure
Aviation communications
Air traffic control
Optical communications | Aviation light signals | [
"Engineering"
] | 241 | [
"Optical communications",
"Airport infrastructure",
"Telecommunications engineering",
"Aerospace engineering"
] |
20,041,308 | https://en.wikipedia.org/wiki/Berezin%20integral | In mathematical physics, the Berezin integral, named after Felix Berezin, (also known as Grassmann integral, after Hermann Grassmann), is a way to define integration for functions of Grassmann variables (elements of the exterior algebra). It is not an integral in the Lebesgue sense; the word "integral" is used because the Berezin integral has properties analogous to the Lebesgue integral and because it extends the path integral in physics, where it is used as a sum over histories for fermions.
Definition
Let be the exterior algebra of polynomials in anticommuting elements over the field of complex numbers. (The ordering of the generators is fixed and defines the orientation of the exterior algebra.)
One variable
The Berezin integral over the sole Grassmann variable θ is defined to be a linear functional
∫ [a f(θ) + b g(θ)] dθ = a ∫ f(θ) dθ + b ∫ g(θ) dθ,
where we define
∫ θ dθ = 1 and ∫ 1 dθ = 0,
so that:
∫ (a + bθ) dθ = b.
These properties define the integral uniquely and imply
∫ f(θ) dθ = ∂f/∂θ,
so that Berezin integration acts like differentiation.
Take note that f(θ) = a + bθ is the most general function of θ, because Grassmann variables square to zero, so f cannot have non-zero terms beyond linear order.
Multiple variables
The Berezin integral on is defined to be the unique linear functional with the following properties:
for any where means the left or the right partial derivative. These properties define the integral uniquely.
Notice that different conventions exist in the literature: Some authors define instead
The formula
expresses the Fubini law. On the right-hand side, the interior integral of a monomial is set to be where ; the integral of vanishes. The integral with respect to is calculated in the similar way and so on.
Change of Grassmann variables
Let θ_i = θ_i(ξ) be odd polynomials in some antisymmetric variables ξ_1, ..., ξ_n. The Jacobian is the matrix
J_{ij} = ∂θ_i/∂ξ_j,
where ∂/∂ξ_j refers to the right derivative (for example, ∂(θ_1θ_2)/∂θ_2 = θ_1 and ∂(θ_1θ_2)/∂θ_1 = −θ_2). The formula for the coordinate change reads
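Concretely, and in contrast to ordinary (bosonic) integration, the Jacobian determinant enters with the inverse power. A sketch of the rule under the conventions defined above (ordering and sign conventions differ between references):

```latex
\int f\bigl(\theta(\xi)\bigr)\,\bigl(\det J\bigr)^{-1}\, d\xi_{1}\cdots d\xi_{n}
\;=\;
\int f(\theta)\, d\theta_{1}\cdots d\theta_{n},
\qquad
J_{ij} \;=\; \frac{\partial \theta_{i}}{\partial \xi_{j}} .
```

For a single variable with the linear substitution θ = aξ this reduces to ∫ θ dθ = a⁻¹ ∫ (aξ) dξ = 1, consistent with the normalization ∫ θ dθ = 1 above.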
Integrating even and odd variables
Definition
Consider now the algebra of functions of real commuting variables and of anticommuting variables (which is called the free superalgebra of dimension ). Intuitively, a function is a function of m even (bosonic, commuting) variables and of n odd (fermionic, anti-commuting) variables. More formally, an element is a function of the argument that varies in an open set with values in the algebra Suppose that this function is continuous and vanishes in the complement of a compact set The Berezin integral is the number
Change of even and odd variables
Let a coordinate transformation be given by where are even and are odd polynomials of depending on even variables The Jacobian matrix of this transformation has the block form:
where each even derivative commutes with all elements of the algebra ; the odd derivatives commute with even elements and anticommute with odd elements. The entries of the diagonal blocks and are even and the entries of the off-diagonal blocks are odd functions, where again mean right derivatives.
When the function is invertible in
So we have the Berezinian (or superdeterminant) of the matrix , which is the even function
Suppose that the real functions define a smooth invertible map of open sets in and the linear part of the map is invertible for each The general transformation law for the Berezin integral reads
where ) is the sign of the orientation of the map The superposition is defined in the obvious way, if the functions do not depend on In the general case, we write where are even nilpotent elements of and set
where the Taylor series is finite.
Useful formulas
The following formulas for Gaussian integrals are used often in the path integral formulation of quantum field theory:
with A being a complex matrix.
with M being a complex skew-symmetric matrix, and Pf(M) being the Pfaffian of M, which fulfills Pf(M)² = det M.
In the above formulas the notation is used. From these formulas, other useful formulas follow (See Appendix A in) :
with A being an invertible matrix. Note that these integrals are all in the form of a partition function.
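For reference, the two Gaussian statements referred to above can be written out explicitly; this is a sketch under one common convention for the ordering of the differentials, and signs and normalizations vary between texts:

```latex
\int \exp\!\bigl(-\bar\theta^{\,T} A\, \theta\bigr)\; \prod_{i=1}^{n} d\bar\theta_{i}\, d\theta_{i} \;=\; \det A,
\qquad
\int \exp\!\Bigl(-\tfrac{1}{2}\,\theta^{T} M\, \theta\Bigr)\; d\theta_{1}\cdots d\theta_{n} \;=\; \mathrm{Pf}(M),
```

with A a complex n×n matrix and M a complex skew-symmetric n×n matrix (n even). This is the fermionic counterpart of the ordinary bosonic Gaussian integral, in which the determinant instead appears as an inverse power.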
History
The Berezin integral was probably first presented by David John Candlin in 1956. Later it was independently discovered by Felix Berezin in 1966.
Unfortunately Candlin's article failed to attract notice, and has been buried in oblivion. Berezin's work came to be widely known, and has been cited almost universally, becoming an indispensable tool for the treatment of the quantum field theory of fermions by functional integrals.
Other authors contributed to these developments, including the physicists Khalatnikov (although his paper contains mistakes), Matthews and Salam, and Martin.
See also
Supermanifold
Berezinian
Footnote
References
Further reading
Theodore Voronov: Geometric integration theory on Supermanifolds, Harwood Academic Publisher,
Berezin, Felix Alexandrovich: Introduction to Superanalysis, Springer Netherlands,
Multilinear algebra
Differential forms
Integral calculus
Mathematical physics
Quantum field theory
Supersymmetry | Berezin integral | [
"Physics",
"Mathematics",
"Engineering"
] | 986 | [
"Quantum field theory",
"Tensors",
"Calculus",
"Applied mathematics",
"Theoretical physics",
"Unsolved problems in physics",
"Quantum mechanics",
"Differential forms",
"Physics beyond the Standard Model",
"Integral calculus",
"Mathematical physics",
"Supersymmetry",
"Symmetry"
] |
20,042,003 | https://en.wikipedia.org/wiki/Voltage-sensitive%20dye | Voltage-sensitive dyes, also known as potentiometric dyes, are dyes which change their spectral properties in response to voltage changes. They are able to provide linear measurements of firing activity of single neurons, large neuronal populations or activity of myocytes. Many physiological processes are accompanied by changes in cell membrane potential which can be detected with voltage sensitive dyes. Measurements may indicate the site of action potential origin, and measurements of action potential velocity and direction may be obtained.
Potentiometric dyes are used to monitor the electrical activity inside cell organelles where it is not possible to insert an electrode, such as the mitochondria and dendritic spine. This technology is especially powerful for the study of patterns of activity in complex multicellular preparations. It also makes possible the measurement of spatial and temporal variations in membrane potential along the surface of single cells.
Types of dyes
Fast-response probes: These are amphiphilic membrane staining dyes which usually have a pair of hydrocarbon chains acting as membrane anchors and a hydrophilic group which aligns the chromophore perpendicular to the membrane/aqueous interface. The chromophore is believed to undergo a large electronic charge shift as a result of excitation from the ground to the excited state and this underlies the putative electrochromic mechanism for the sensitivity of these dyes to membrane potential. This molecule (dye) intercalates among the lipophilic part of biological membranes. This orientation assures that the excitation induced charge redistribution will occur parallel to the electric field within the membrane. A change in the voltage across the membrane will therefore cause a spectral shift resulting from a direct interaction between the field and the ground and excited state dipole moments.
New voltage dyes can sense voltage with high speed and sensitivity using photoinduced electron transfer (PeT) through a conjugated molecular wire.
Slow-response probes: These exhibit potential-dependent changes in their transmembrane distribution which are accompanied by a fluorescence change. Typical slow-response probes include cationic carbocyanines and rhodamines, and ionic oxonols.
Examples
Commonly used voltage sensitive dyes are substituted aminonaphthylethenylpyridinium (ANEP) dyes, such as di-4-ANEPPS, di-8-ANEPPS, and RH237. Depending on their chemical modifications which change their physical properties they are used for different experimental procedures. They were first described in 1985 by the research group of Leslie Loew. ANNINE-6plus is a voltage sensitive dye with fast response (ns response time) and high sensitivity. It has been applied to measure the action potentials of a single t-tubule of cardiomyocytes by Guixue Bu et al. More recently, a series of fluorinated ANEP dyes was introduced that offer enhanced sensitivity and photostability; they are also available over a wide choice of excitation and emission wavelengths. A recent computational study confirmed that the ANEP dyes are affected only by the electrostatic environment and not by specific molecular interactions. Other structural scaffolds, such as xanthenes, are also successfully used.
Materials
The core material for imaging brain activity with voltage-sensitive dyes are the dyes themselves. These voltage-sensitive dyes are lipophilic and preferably localized in membranes with their hydrophobic tails. They are used in applications involving fluorescence or absorption; they are fast acting and are able to provide linear measurements of changes in membrane potential. Voltage sensitive dyes are supplied by many companies who offer fluorescent probes for biological applications. Potentiometric Probes, LLC specializes only in voltage sensitive dyes; they have an exclusive license to distribute the large set of fluorinated VSDs, marketed under the ElectroFluor brand.
A variety of specialized equipment may be used in conjunction with the dyes, and choices in equipment will vary according to the particularities of a preparation. Essentially, equipment will include specialized microscopes and imaging devices, and may include technical lamps or lasers.
Strengths and weaknesses
Strengths of imaging brain activity with voltage-sensitive dyes include the following abilities:
Measurement of population signals from many areas may be taken simultaneously, and hundreds of neurons may be recorded from. Such multisite recordings may provide precise information on action potential initiation and propagation (including direction and velocity), and on the entire branching structure of a neuron.
Measurements of spike activity in a ganglion that is producing behaviour can be taken and may provide information about how the behaviour is producing.
In certain preparations the pharmacological effects of the dyes may be completely reversed by removing the staining pipette and allowing the neuron 1–2 hours for recovery.
Dyes may be used to analyze signal integration in terminal dendritic branches. Voltage-sensitive dyes offer the only alternative to genetically encoded voltage sensitive proteins (such as Ci-VSP derived proteins) for doing this.
More soluble dyes such as ElectroFluor-530s or di-2-ANEPEQ may be perfused internally into a single cell through a patch pipette. This technique has permitted the study of electrical signals in individual dendrites and dendritic spines within brain slices.
Weaknesses of imaging brain activity with voltage-sensitive dyes include the following problems:
Voltage-sensitive dyes may respond very differently from one preparation to another; typically tens of dyes must be tested in order to obtain an optimal signal. Imaging parameters, such as excitation wavelength, emission wavelength, and exposure time, should also be optimized.
Voltage-sensitive dyes often fail to penetrate through connective tissue or move through intracellular spaces to the region of membrane desired for study. Staining is a serious issue in applications of these dyes. Water-soluble dyes, such as ANNINE-6plus, ElectroFluor-530s, or di-2-ANEPEQ, do not suffer this problem.
On the other hand, if the dyes are too water-soluble, staining may not persist. This can be addressed by utilizing dyes containing longer alkyl chains to increase lipophilicity.
Noise is a problem in all preparations with voltage-sensitive dyes and in certain preparations the signal may be significantly obscured. Signal to noise ratios can be improved with spatial filtering or temporal filtering algorithms. Many such algorithms exist; one signal processing algorithm can be found in recent work with the ANNINE-6plus dye.
Cells may be permanently affected by treatments. Lasting pharmacological effects are possible, and the photodynamics of the dyes can be damaging. Recently developed fluorinated voltage-sensitive dyes have been shown to mitigate these effects.
Uses
Voltage-sensitive dyes have been used to measure neural activity in several areas of the nervous system in a variety of organisms, including the squid giant axon, whisker barrels of the rat somatosensory cortex, olfactory bulb of the salamander, visual cortex of the cat, optic tectum of the frog, and the visual cortex of the rhesus monkey.
Many applications in cardiac electrophysiology have been published, including ex vivo mapping of electrical activity in whole hearts from various animal species, subcellular imaging from single cardiomyocytes, and even mapping of both sinus rhythm and arrhythmias in the open-chest pig heart in vivo, where motion artifacts could be eliminated by dual-wavelength ratio imaging of the voltage-sensitive dye fluorescence.
References
Further reading
Biochemistry detection methods
Cell culture reagents
Neuroscience
Electrophysiology
Cell biology | Voltage-sensitive dye | [
"Chemistry",
"Biology"
] | 1,571 | [
"Biochemistry methods",
"Neuroscience",
"Cell biology",
"Chemical tests",
"Cell culture reagents",
"Biochemistry detection methods",
"Reagents for biochemistry"
] |
20,046,529 | https://en.wikipedia.org/wiki/Vector%20meson%20dominance | In physics, vector meson dominance (VMD) was a model developed by J. J. Sakurai in the 1960s before the introduction of quantum chromodynamics to describe interactions between energetic photons and hadronic matter.
In particular, the hadronic components of the physical photon consist of the lightest vector mesons, ρ, ω and φ. Therefore, interactions between photons and hadronic matter occur by the exchange of a hadron between the dressed photon and the hadronic target.
Background
Measurements of the interaction between energetic photons and hadrons show that the interaction is much more intense than expected by the interaction of merely photons with the hadron's electric charge. Furthermore, the interaction of energetic photons with protons is similar to the interaction of photons with neutrons in spite of the fact that the electric charge structures of protons and neutrons are substantially different.
According to VMD, the photon is a superposition of the pure electromagnetic photon (which interacts only with electric charges) and vector meson.
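Schematically, this superposition is often written as below; this is a sketch of the usual VMD ansatz, in which the coupling constants f_V and the bare-photon normalization are model conventions rather than quantities defined in this article.

```latex
|\gamma\rangle \;=\; \sqrt{Z_{3}}\;\bigl|\gamma_{\text{bare}}\bigr\rangle
\;+\; \sum_{V=\rho,\,\omega,\,\phi} \frac{e}{f_{V}}\,|V\rangle ,
```

so that the photon–hadron interaction proceeds through the small hadronic admixtures e/f_V of the vector-meson states.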
Just after 1970, when more accurate data on the above processes became available, some discrepancies with the VMD predictions appeared and new extensions of the model were published. These theories are known as Generalized Vector Meson Dominance theories (GVMD).
VMD and Hidden Local Symmetry
Whilst the ultraviolet description of the standard model is based on QCD, work over many decades has involved writing a low energy effective description of QCD, and further, positing a possible "dual" description. One such popular description is that of the hidden local symmetry. The dual description is based on the idea of emergence of gauge symmetries in the infrared of strongly coupled theories. Gauge symmetries are not really physical symmetries (only the global elements of the local gauge group are physical). This emergent property of gauge symmetries was demonstrated in Seiberg duality and later in the development of the AdS/CFT correspondence. In its generalised form, Vector Meson Dominance appears in AdS/CFT, AdS/QCD, AdS/condensed matter and some Seiberg dual constructions. It is therefore a common place idea within the theoretical physics community.
Criticism
Measurements of the photon-hadron interactions in higher energy levels show that VMD cannot predict the interaction in such levels. In his Nobel lecture J.I. Friedman summarizes the situation of VMD as follows: "...this eliminated the model [VMD] as a possible description of deep inelastic scattering... calculations of the generalized vector-dominance failed in general to describe the data over the full kinematic range..."
The Vector Meson Dominance model still sometimes makes significantly more accurate predictions of hadronic decays of excited light mesons involving photons than subsequent models such as the relativistic quark model for the meson wave function and the covariant oscillator quark model. Similarly, the Vector Meson Dominance model has outperformed perturbative QCD in making predictions of transitional form factors of the neutral pion meson, the eta meson, and the eta prime meson, that are "hard to explain within QCD." And, the model accurately reproduces recent experimental data for rho meson decays. Generalizations of the Vector Meson Dominance model to higher energies, or to consider additional factors present in cases where VMD fails, have been proposed to address the shortcomings identified by Friedman and others.
See also
Matter creation
Photon structure function
Notes
Particle physics
Nuclear physics
Obsolete theories in physics | Vector meson dominance | [
"Physics"
] | 738 | [
"Nuclear physics",
"Theoretical physics",
"Particle physics",
"Obsolete theories in physics"
] |
20,047,065 | https://en.wikipedia.org/wiki/Kalman%20decomposition | In control theory, a Kalman decomposition provides a mathematical means to convert a representation of any linear time-invariant (LTI) control system to a form in which the system can be decomposed into a standard form which makes clear the observable and controllable components of the system. This decomposition results in the system being presented with a more illuminating structure, making it easier to draw conclusions on the system's reachable and observable subspaces.
Definition
Consider the continuous-time LTI control system
ẋ(t) = A x(t) + B u(t),
y(t) = C x(t) + D u(t),
or the discrete-time LTI control system
x(k+1) = A x(k) + B u(k),
y(k) = C x(k) + D u(k).
The Kalman decomposition is defined as the realization of this system obtained by transforming the original matrices as follows:
Â = T⁻¹ A T,
B̂ = T⁻¹ B,
Ĉ = C T,
D̂ = D,
where T is the coordinate transformation matrix defined as
T = [ T₁  T₂  T₃  T₄ ],
and whose submatrices are
T₁: a matrix whose columns span the subspace of states which are both reachable and unobservable.
T₂: chosen so that the columns of [T₁ T₂] are a basis for the reachable subspace.
T₃: chosen so that the columns of [T₁ T₃] are a basis for the unobservable subspace.
T₄: chosen so that T = [T₁ T₂ T₃ T₄] is invertible.
It can be observed that some of these matrices may have dimension zero. For example, if the system is both observable and controllable, then T = T₂, making the other submatrices zero-dimensional.
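A numerical sketch of how the building blocks of T can be obtained with standard linear algebra: the reachable subspace as the column space of the controllability matrix, the unobservable subspace as the null space of the observability matrix, and T₁ from their intersection. The helper names are illustrative only, and completing the remaining blocks T₂, T₃, T₄ to a full basis is omitted.

```python
import numpy as np
from scipy.linalg import null_space, orth

def reachable_subspace(A, B):
    """Orthonormal basis of range([B, AB, ..., A^{n-1} B])."""
    n = A.shape[0]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return orth(ctrb)

def unobservable_subspace(A, C):
    """Basis of the kernel of [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return null_space(obsv)

def reachable_and_unobservable(A, B, C):
    """Basis (possibly empty) of the intersection used for the block T1."""
    R = reachable_subspace(A, B)
    N = unobservable_subspace(A, C)
    if R.size == 0 or N.size == 0:
        return np.zeros((A.shape[0], 0))
    # x lies in both subspaces  <=>  x = R a = N b  for some a, b
    Z = null_space(np.hstack([R, -N]))
    T1 = R @ Z[:R.shape[1], :]
    return orth(T1) if T1.size else T1
```

For a system that is both controllable and observable, reachable_subspace returns a full basis of the state space and unobservable_subspace returns an empty matrix, consistent with the remark above that T then reduces to T₂.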
Consequences
By using results from controllability and observability, it can be shown that the transformed system has matrices in the following form:
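With the blocks ordered as T₁ (reachable, unobservable), T₂ (reachable, observable), T₃ (unreachable, unobservable), T₄ (unreachable, observable), the standard form is usually written as below; the zero pattern is the substance of the statement, while the labels of the nonzero blocks are only bookkeeping assumed here.

```latex
\hat{A} \;=\;
\begin{bmatrix}
A_{11} & A_{12} & A_{13} & A_{14}\\
0      & A_{22} & 0      & A_{24}\\
0      & 0      & A_{33} & A_{34}\\
0      & 0      & 0      & A_{44}
\end{bmatrix},
\qquad
\hat{B} \;=\;
\begin{bmatrix} B_{1}\\ B_{2}\\ 0\\ 0 \end{bmatrix},
\qquad
\hat{C} \;=\;
\begin{bmatrix} 0 & C_{2} & 0 & C_{4} \end{bmatrix},
\qquad
\hat{D} \;=\; D.
```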
This leads to the conclusion that
The subsystem associated with the states that are both reachable and observable is itself both reachable and observable.
The subsystem associated with the reachable states is reachable.
The subsystem associated with the observable states is observable.
Variants
A Kalman decomposition also exists for linear dynamical quantum systems. Unlike classical dynamical systems, the coordinate transformation used in this variant is required to belong to a specific class of transformations, due to the physical laws of quantum mechanics.
See also
Realization (systems)
Observability
Controllability
References
External links
Lectures on Dynamic Systems and Control, Lecture 25 - , Munther Dahleh, George Verghese — MIT OpenCourseWare
Control theory | Kalman decomposition | [
"Mathematics"
] | 425 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
1,335,495 | https://en.wikipedia.org/wiki/Lambda%20point | The lambda point is the temperature at which normal fluid helium (helium I) makes the transition to superfluid state (helium II). At pressure of 1 atmosphere, the transition occurs at approximately 2.17 K. The lowest pressure at which He-I and He-II can coexist is the vapor−He-I−He-II triple point at and , which is the "saturated vapor pressure" at that temperature (pure helium gas in thermal equilibrium over the liquid surface, in a hermetic container). The highest pressure at which He-I and He-II can coexist is the bcc−He-I−He-II triple point with a helium solid at , .
The point's name derives from the graph (pictured) that results from plotting the specific heat capacity as a function of temperature (for a given pressure in the above range, in the example shown, at 1 atmosphere), which resembles the Greek letter lambda . The specific heat capacity has a sharp peak as the temperature approaches the lambda point. The tip of the peak is so sharp that a critical exponent characterizing the divergence of the heat capacity can be measured precisely only in zero gravity, to provide a uniform density over a substantial volume of fluid. Hence, the heat capacity was measured within 2 nK below the transition in an experiment included in a Space Shuttle payload in 1992.
Although the heat capacity has a peak, it does not tend towards infinity (contrary to what the graph may suggest), but has finite limiting values when approaching the transition from above and below. The behavior of the heat capacity near the peak is described by the formula C ≈ (A±/α)|t|^(−α) + B±, where t = 1 − T/Tλ is the reduced temperature, Tλ is the lambda point temperature, A± and B± are constants (different above and below the transition temperature), and α is the critical exponent, α = −0.0127(3). Since this exponent is negative for the superfluid transition, the specific heat remains finite.
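As a small illustration of why the peak stays finite, the following Python sketch evaluates the formula above for temperatures just below the transition; the constants A and B are placeholder values chosen only for illustration, not measured quantities.

```python
import numpy as np

T_LAMBDA = 2.17   # K, approximate lambda temperature at 1 atm
ALPHA = -0.0127   # critical exponent quoted for the superfluid transition

def heat_capacity(T, A, B, T_lambda=T_LAMBDA, alpha=ALPHA):
    """Evaluate C ~ (A/alpha) * |t|**(-alpha) + B with t = 1 - T/T_lambda."""
    t = 1.0 - T / T_lambda
    return (A / alpha) * np.abs(t) ** (-alpha) + B

# Approaching the transition from below: because alpha < 0, the singular term
# vanishes as T -> T_lambda, so C rises toward the finite limit B.
for dT in (1e-2, 1e-4, 1e-6, 1e-8):
    C = heat_capacity(T_LAMBDA - dT, A=1.0, B=100.0)  # illustrative constants
    print(f"T_lambda - {dT:.0e} K -> C = {C:.2f} (arbitrary units)")
```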
The quoted experimental value of α is in significant disagreement with the most precise theoretical determinations coming from high-temperature expansion techniques, Monte Carlo methods and the conformal bootstrap.
See also
Lambda point refrigerator
References
External links
What is superfluidity?
Threshold temperatures
Superfluidity
Liquid helium | Lambda point | [
"Physics",
"Chemistry",
"Materials_science"
] | 443 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Threshold temperatures",
"Superfluidity",
"Condensed matter physics",
"Exotic matter",
"Matter",
"Fluid dynamics"
] |
1,335,974 | https://en.wikipedia.org/wiki/Vacuum%20arc | A vacuum arc can arise when the surfaces of metal electrodes in contact with a good vacuum begin to emit electrons either through heating (thermionic emission) or in an electric field that is sufficient to cause field electron emission. Once initiated, a vacuum arc can persist, since the freed particles gain kinetic energy from the electric field, heating the metal surfaces through high-speed particle collisions. This process can create an incandescent cathode spot, which frees more particles, thereby sustaining the arc. At sufficiently high currents an incandescent anode spot may also be formed.
Electric discharge in vacuum is important for certain types of vacuum tubes and for high-voltage vacuum switches.
The thermionic vacuum arc (TVA) is a new type of plasma source, which generates a plasma containing ions with a directed energy. TVA discharges can be ignited in high-vacuum conditions between a heated cathode (electron gun) and an anode (tungsten crucible) containing the material. The accelerated electron beam, incident on the anode, heats the crucible, together with its contents, to a high temperature. After establishing a steady-state density of the evaporating anode material atoms, and when the voltage applied is high enough, a bright discharge is ignited between the electrodes.
See also
Cathodic arc deposition
Electric arc
Electric discharge in gases
Glow discharge
Vacuum arc thruster
References
Electric arcs | Vacuum arc | [
"Physics"
] | 289 | [
"Electric arcs",
"Physical phenomena",
"Plasma physics",
"Plasma phenomena",
"Plasma physics stubs"
] |
1,335,984 | https://en.wikipedia.org/wiki/Dendrimer | Dendrimers are highly ordered, branched polymeric molecules. Synonymous terms for dendrimer include arborols and cascade molecules. Typically, dendrimers are symmetric about the core, and often adopt a spherical three-dimensional morphology. The word dendron is also encountered frequently. A dendron usually contains a single chemically addressable group called the focal point or core. The difference between dendrons and dendrimers is illustrated in the top figure, but the terms are typically encountered interchangeably.
The first dendrimers were made by divergent synthesis approaches by Fritz Vögtle in 1978, R.G. Denkewalter at Allied Corporation in 1981, Donald Tomalia at Dow Chemical in 1983 and in 1985, and by George R. Newkome in 1985. In 1990 a convergent synthetic approach was introduced by Craig Hawker and Jean Fréchet. Dendrimer popularity then greatly increased, resulting in more than 5,000 scientific papers and patents by the year 2005.
Properties
Dendritic molecules are characterized by structural perfection. Dendrimers and dendrons are monodisperse and usually highly symmetric, spherical compounds. The field of dendritic molecules can be roughly divided into low-molecular weight and high-molecular weight species. The first category includes dendrimers and dendrons, and the latter includes dendronized polymers, hyperbranched polymers, and the polymer brush.
The properties of dendrimers are dominated by the functional groups on the molecular surface, however, there are examples of dendrimers with internal functionality. Dendritic encapsulation of functional molecules allows for the isolation of the active site, a structure that mimics that of active sites in biomaterials. Also, it is possible to make dendrimers water-soluble, unlike most polymers, by functionalizing their outer shell with charged species or other hydrophilic groups. Other controllable properties of dendrimers include toxicity, crystallinity, tecto-dendrimer formation, and chirality.
Dendrimers are also classified by generation, which refers to the number of repeated branching cycles that are performed during its synthesis. For example, if a dendrimer is made by convergent synthesis (see below), and the branching reactions are performed onto the core molecule three times, the resulting dendrimer is considered a third generation dendrimer. Each successive generation results in a dendrimer roughly twice the molecular weight of the previous generation. Higher generation dendrimers also have more exposed functional groups on the surface, which can later be used to customize the dendrimer for a given application. Dendrimers may have a single surface functional group, or may be modified to allow for multiple functional groups on the surface.
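As a rough illustration of the generation arithmetic described above, the sketch below counts ideal surface groups and branch units for an idealized dendrimer with a tetrafunctional core and two-fold branching per generation (a common simplification for PAMAM-type dendrimers; the values 4 and 2 are assumptions of this sketch, and real syntheses contain defects).

```python
# Ideal counts for a dendrimer with core multiplicity 4 and branching factor 2.
def ideal_counts(generation, core_multiplicity=4, branching=2):
    # surface groups double with each generation for two-fold branching
    surface_groups = core_multiplicity * branching ** generation
    # total branch units added over generations 0..G (geometric series)
    branch_units = core_multiplicity * (branching ** (generation + 1) - 1) // (branching - 1)
    return surface_groups, branch_units

for g in range(6):
    groups, units = ideal_counts(g)
    print(f"G{g}: {groups} surface groups, {units} branch units")
```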
Synthesis
One of the first dendrimers, the Newkome dendrimer, was synthesized in 1985. This macromolecule is also commonly known by the name arborol. The figure outlines the mechanism of the first two generations of arborol through a divergent route (discussed below). The synthesis is started by nucleophilic substitution of 1-bromopentane by triethyl sodiomethanetricarboxylate in dimethylformamide and benzene. The ester groups were then reduced by lithium aluminium hydride to a triol in a deprotection step. Activation of the chain ends was achieved by converting the alcohol groups to tosylate groups with tosyl chloride and pyridine. The tosyl group then served as leaving groups in another reaction with the tricarboxylate, forming generation two. Further repetition of the two steps leads to higher generations of arborol.
Poly(amidoamine), or PAMAM, is perhaps the most well known dendrimer. The core of PAMAM is a diamine (commonly ethylenediamine), which is reacted with methyl acrylate, and then another ethylenediamine to make the generation-0 (G-0) PAMAM. Successive reactions create higher generations, which tend to have different properties. Lower generations can be thought of as flexible molecules with no appreciable inner regions, while medium-sized (G-3 or G-4) do have internal space that is essentially separated from the outer shell of the dendrimer. Very large (G-7 and greater) dendrimers can be thought of more like solid particles with very dense surfaces due to the structure of their outer shell. The functional group on the surface of PAMAM dendrimers is ideal for click chemistry, which gives rise to many potential applications.
Dendrimers can be considered to have three major portions: a core, an inner shell, and an outer shell. Ideally, a dendrimer can be synthesized to have different functionality in each of these portions to control properties such as solubility, thermal stability, and attachment of compounds for particular applications. Synthetic processes can also precisely control the size and number of branches on the dendrimer. There are two defined methods of dendrimer synthesis, divergent synthesis and convergent synthesis. However, because the actual reactions consist of many steps needed to protect the active site, it is difficult to synthesize dendrimers using either method. This makes dendrimers hard to make and very expensive to purchase. At this time, there are only a few companies that sell dendrimers; Polymer Factory Sweden AB commercializes biocompatible bis-MPA dendrimers and Dendritech is the only kilogram-scale producer of PAMAM dendrimers. NanoSynthons, LLC from Mount Pleasant, Michigan, USA produces PAMAM dendrimers and other proprietary dendrimers.
Divergent methods
The dendrimer is assembled from a multifunctional core, which is extended outward by a series of reactions, commonly a Michael reaction. Each step of the reaction must be driven to full completion to prevent mistakes in the dendrimer, which can cause trailing generations (some branches are shorter than the others). Such impurities can impact the functionality and symmetry of the dendrimer, but are extremely difficult to purify out because the relative size difference between perfect and imperfect dendrimers is very small.
Convergent methods
Dendrimers are built from small molecules that end up at the surface of the sphere; reactions proceed inward, and the growing branches are eventually attached to a core. This method makes it much easier to remove impurities and shorter branches along the way, so that the final dendrimer is more monodisperse. However, dendrimers made this way are not as large as those made by divergent methods because crowding due to steric effects along the core is limiting.
Click chemistry
Dendrimers have been prepared via click chemistry, employing Diels-Alder reactions, thiol-ene and thiol-yne reactions and azide-alkyne reactions.
There are ample avenues that can be opened by exploring this chemistry in dendrimer synthesis.
Applications
Applications of dendrimers typically involve conjugating other chemical species to the dendrimer surface that can function as detecting agents (such as a dye molecule), affinity ligands, targeting components, radioligands, imaging agents, or pharmaceutically active compounds. Dendrimers have very strong potential for these applications because their structure can lead to multivalent systems. In other words, one dendrimer molecule has hundreds of possible sites to couple to an active species. Researchers aimed to utilize the hydrophobic environments of the dendritic media to conduct photochemical reactions that generate products that are synthetically challenging to obtain otherwise. Carboxylic acid and phenol-terminated water-soluble dendrimers were synthesized to establish their utility in drug delivery as well as conducting chemical reactions in their interiors. This might allow researchers to attach both targeting molecules and drug molecules to the same dendrimer, which could reduce negative side effects of medications on healthy cells.
Dendrimers can also be used as a solubilizing agent. Since their introduction in the mid-1980s, this novel class of dendrimer architecture has been a prime candidate for host–guest chemistry. Dendrimers with hydrophobic core and hydrophilic periphery have shown to exhibit micelle-like behavior and have container properties in solution. The use of dendrimers as unimolecular micelles was proposed by Newkome in 1985. This analogy highlighted the utility of dendrimers as solubilizing agents. The majority of drugs available in pharmaceutical industry are hydrophobic in nature and this property in particular creates major formulation problems. This drawback of drugs can be ameliorated by dendrimeric scaffolding, which can be used to encapsulate as well as to solubilize the drugs because of the capability of such scaffolds to participate in extensive hydrogen bonding with water. Dendrimer labs are trying to manipulate dendrimer's solubilizing trait, to explore dendrimers for drug delivery and to target specific carriers.
For dendrimers to be able to be used in pharmaceutical applications, they must surmount the required regulatory hurdles to reach market. One dendrimer scaffold designed to achieve this is the polyethoxyethylglycinamide (PEE-G) dendrimer. This dendrimer scaffold has been designed and shown to have high HPLC purity, stability, aqueous solubility and low inherent toxicity.
Drug delivery
Approaches for delivering unaltered natural products using polymeric carriers is of widespread interest. Dendrimers have been explored for the encapsulation of hydrophobic compounds and for the delivery of anticancer drugs. The physical characteristics of dendrimers, including their monodispersity, water solubility, encapsulation ability, and large number of functionalizable peripheral groups make these macromolecules appropriate candidates for drug delivery vehicles.
Role of dendrimer chemical modifications in drug delivery
Dendrimers are particularly versatile drug delivery devices due to the wide range of chemical modifications that can be made to increase in vivo suitability and allow for site-specific targeted drug delivery.
Drug attachment to the dendrimer may be accomplished by (1) a covalent attachment or conjugation to the external surface of the dendrimer forming a dendrimer prodrug, (2) ionic coordination to charged outer functional groups, or (3) micelle-like encapsulation of a drug via a dendrimer-drug supramolecular assembly. In the case of a dendrimer prodrug structure, linking of a drug to a dendrimer may be direct or linker-mediated depending on desired release kinetics. Such a linker may be pH-sensitive, enzyme catalyzed, or a disulfide bridge. The wide range of terminal functional groups available for dendrimers allows for many different types of linker chemistries, providing yet another tunable component on the system. Key parameters to consider for linker chemistry are (1) release mechanism upon arrival to the target site, whether that be within the cell or in a certain organ system, (2) drug-dendrimer spacing so as to prevent lipophilic drugs from folding into the dendrimer, and (3) linker degradability and post-release trace modifications on drugs.
Polyethylene glycol (PEG) is a common modification for dendrimers to modify their surface charge and circulation time. Surface charge can influence the interactions of dendrimers with biological systems, such as amine-terminal modified dendrimers which have a propensity to interact with cell membranes with anionic charge. Certain in vivo studies have shown polycationic dendrimers to be cytotoxic through membrane permeabilization, a phenomenon that could be partially mitigated via addition of PEGylation caps on amine groups, resulting in lower cytotoxicity and lower red blood cell hemolysis. Additionally, studies have found that PEGylation of dendrimers results in higher drug loading, slower drug release, longer circulation times in vivo, and lower toxicity in comparison to counterparts without PEG modifications.
Numerous targeting moieties have been used to modify dendrimer biodistribution and allow for targeting to specific organs. For example, folate receptors are overexpressed in tumor cells and are therefore promising targets for localized drug delivery of chemotherapeutics. Folic acid conjugation to PAMAM dendrimers has been shown to increase targeting and decrease off-target toxicity while maintaining on-target cytotoxicity of chemotherapeutics such as methotrexate, in mouse models of cancer.
Antibody-mediated targeting of dendrimers to cell targets has also shown promise for targeted drug delivery. As epidermal growth factor receptors (EGFRs) are often overexpressed in brain tumors, EGFRs are a convenient target for site-specific drug delivery. The delivery of boron to cancerous cells is important for effective neutron capture therapy, a cancer treatment which requires a large concentration of boron in cancerous cells and a low concentration in healthy cells. A boronated dendrimer conjugated with a monoclonal antibody drug that targets EGFRs was used in rats to successfully deliver boron to cancerous cells.
Modifying nanoparticle dendrimers with peptides has also been successful for targeted destruction of colorectal (HCT-116) cancer cells in a co-culture scenario. Targeting peptides can be used to achieve site- or cell-specific delivery, and it has been shown that these peptides increase in targeting specificity when paired with dendrimers. Specifically, gemcitabine-loaded YIGSR-CMCht/PAMAM, a unique kind of dendrimer nanoparticle, induces a targeted mortality on these cancer cells. This is performed via selective interaction of the dendrimer with laminin receptors. Peptide dendrimers may be employed in the future to precisely target cancer cells and deliver chemotherapeutic agents.
The cellular uptake mechanism of dendrimers can also be tuned using chemical targeting modifications. Non-modified PAMAM-G4 dendrimer is taken up into activated microglia by fluid phase endocytosis. Conversely, mannose modification of hydroxyl PAMAM-G4 dendrimers was able to change the mechanism of internalization to mannose-receptor (CD206) mediated endocytosis. Additionally, mannose modification was able to change the biodistribution in the rest of the body in rabbits.
Pharmacokinetics and pharmacodynamics
Dendrimers have the potential to completely change the pharmacokinetic and pharmacodynamic (PK/PD) profiles of a drug. When dendrimers act as carriers, the PK/PD is no longer determined by the drug itself but by the dendrimer’s localization, drug release, and dendrimer excretion. ADME properties are highly tunable by varying dendrimer size, structure, and surface characteristics. While G9 dendrimers accumulate heavily in the liver and spleen, G6 dendrimers tend to accumulate more broadly. As molecular weight increases, urinary clearance and plasma clearance decrease while terminal half-life increases.
Routes of delivery
To increase patient compliance with prescribed treatment, delivery of drugs orally is often preferred to other routes of drug administration. However oral bioavailability of many drugs tends to be very low. Dendrimers can be used to increase the solubility and stability of orally-administered drugs and increase drug penetration through the intestinal membrane. The bioavailability of PAMAM dendrimers conjugated to a chemotherapeutic has been studied in mice; it was found that around 9% of dendrimer administered orally was found intact in circulation and that minimal dendrimer degradation occurred in the gut.
Intravenous dendrimer delivery shows promise as gene vectors to deliver genes to various organs in the body, and even tumors. One study found that through intravenous injection, a combination of PPI dendrimers and gene complexes resulted in gene expression in the liver, and another study showed that a similar injection regressed the growth of tumors in observed animals.
The primary obstacle to transdermal drug delivery is the epidermis. Hydrophobic drugs have a very difficult time penetrating the skin layer, as they partition heavily into skin oils. Recently, PAMAM dendrimers have been used as delivery vehicles for NSAIDS to increase hydrophilicity, allowing greater drug penetration. These modifications act as polymeric transdermal enhancers allowing drugs to more easily penetrate the skin barrier.
Dendrimers may also act as new ophthalmic vehicles for drug delivery, which are different from the polymers currently used for this purpose. A study by Vanndamme and Bobeck used PAMAM dendrimers as ophthalmic delivery vehicles in rabbits for two model drugs and measured the ocular residence time of this delivery to be comparable and in some cases greater than current bioadhesive polymers used in ocular delivery. This result indicates that administered drugs were more active and had increased bioavailability when delivered via dendrimers than their free-drug counterparts. Additionally, photo-curable, drug-eluting dendrimer-hyaluronic acid hydrogels have been used as corneal sutures applied directly to the eye. These hydrogel sutures have shown efficacy as a medical device in rabbit models that surpasses traditional sutures and minimizes corneal scarring.
Brain drug delivery
Dendrimer drug delivery has also shown major promise as a potential solution for many traditionally difficult drug delivery problems. In the case of drug delivery to the brain, dendrimers are able to take advantage of the EPR effect and blood-brain barrier (BBB) impairment to cross the BBB effectively in vivo. For example, hydroxyl-terminated PAMAM dendrimers possess an intrinsic targeting ability to inflamed macrophages in the brain, verified using fluorescently labeled neutral generation dendrimers in a rabbit model of cerebral palsy. This intrinsic targeting has enabled drug delivery in a variety of conditions, ranging from cerebral palsy and other neuroinflammatory disorders to traumatic brain injury and hypothermic circulatory arrest, across a variety of animal models ranging from mice and rabbits to canines. Dendrimer uptake into the brain correlates with severity of inflammation and BBB impairment and it is believed that the BBB impairment is the key driving factor allowing dendrimer penetration. Localization is heavily skewed towards activated microglia. Dendrimer-conjugated N-acetyl cysteine has shown efficacy in vivo as an anti-inflammatory at more than 1000-fold lower dose than free drug on a drug basis, reversing the phenotype of cerebral palsy, Rett syndrome, macular degeneration and other inflammatory diseases.
Clinical trials
Starpharma, an Australian pharmaceutical company, has multiple products that have either already been approved for use or are in the clinical trial phase. SPL7013, also known as astodrimer sodium, is a hyperbranched polymer used in Starpharma’s VivaGel line of pharmaceuticals that is currently approved to treat bacterial vaginosis and prevent the spread of HIV, HPV, and HSV in Europe, Southeast Asia, Japan, Canada, and Australia. Due to SPL7013’s broad antiviral action, it has recently been tested by the company as a potential drug to treat SARS-CoV-2. The company states preliminary in-vitro studies show high efficacy in preventing SARS-CoV-2 infection in cells.
Gene delivery and transfection
The ability to deliver pieces of DNA to the required parts of a cell includes many challenges. Current research is being performed to find ways to use dendrimers to traffic genes into cells without damaging or deactivating the DNA. To maintain the activity of DNA during dehydration, the dendrimer/DNA complexes were encapsulated in a water-soluble polymer, and then deposited on or sandwiched in functional polymer films with a fast degradation rate to mediate gene transfection. Based on this method, PAMAM dendrimer/DNA complexes were used to encapsulate functional biodegradable polymer films for substrate mediated gene delivery. Research has shown that the fast-degrading functional polymer has great potential for localized transfection.
Sensors
Dendrimers have potential applications in sensors. Studied systems include proton or pH sensors using poly(propylene imine), cadmium-sulfide/polypropylenimine tetrahexacontaamine dendrimer composites to detect fluorescence signal quenching, and poly(propylenamine) first and second generation dendrimers for metal cation photodetection amongst others. Research in this field is vast and ongoing due to the potential for multiple detection and binding sites in dendritic structures.
Nanoparticles
Dendrimers also are used in the synthesis of monodisperse metallic nanoparticles. Poly(amidoamine), or PAMAM, dendrimers are utilized for their tertiary amine groups at the branching points within the dendrimer. Metal ions are introduced to an aqueous dendrimer solution and the metal ions form a complex with the lone pair of electrons present at the tertiary amines. After complexation, the ions are reduced to their zerovalent states to form a nanoparticle that is encapsulated within the dendrimer. These nanoparticles range in width from 1.5 to 10 nanometers and are called dendrimer-encapsulated nanoparticles.
Other applications
Given the widespread use of pesticides, herbicides and insecticides in modern farming, dendrimers are also being used by companies to help improve the delivery of agrochemicals to enable healthier plant growth and to help fight plant diseases.
Dendrimers are also being investigated for use as blood substitutes. Their steric bulk surrounding a heme-mimetic centre significantly slows degradation compared to free heme, and prevents the cytotoxicity exhibited by free heme.
Dendritic functional polymer polyamidoamine (PAMAM) is used to prepare core shell structure i.e. microcapsules and utilized in formulation of self-healing coatings of conventional and renewable origins.
Different generations of polyamidoamine dendrimers have recently been implemented as selective contacts in photovoltaic devices.
Drug delivery
Dendrimers in drug-delivery systems are an example of various host–guest interactions. The interaction between host and guest, the dendrimer and the drug, respectively, can either be hydrophobic or covalent. Hydrophobic interaction between host and guest is considered "encapsulated," while covalent interactions are considered to be conjugated. The use of dendrimers in medicine has been shown to improve drug delivery by increasing the solubility and bioavailability of the drug. In conjunction, dendrimers can increase both cellular uptake and targeting ability, and decrease drug resistance.
The solubility of various nonsteroidal anti-inflammatory drugs (NSAID) increases when they are encapsulated in PAMAM dendrimers. This study shows the enhancement of NSAID solubility is due to the electrostatic interactions between the surface amine groups in PAMAM and the carboxyl groups found in NSAIDs. Contributing to the increase in solubility are the hydrophobic interactions between the aromatic groups in the drugs and the interior cavities of the dendrimer. When a drug is encapsulated within a dendrimer, its physical and physiological properties remains unaltered, including non-specificity and toxicity. However, when the dendrimer and the drug are covalently linked together, it can be used for specific tissue targeting and controlled release rates. Covalent conjugation of multiple drugs on dendrimer surfaces can pose a problem of insolubility.
This principle is also being studied for cancer treatment applications. Several groups have encapsulated anti-cancer medications such as camptothecin, methotrexate, and doxorubicin. Results from this research have shown that dendrimers can increase aqueous solubility, slow the release rate, and possibly help control the cytotoxicity of the drugs. Cisplatin has been conjugated to PAMAM dendrimers with the same pharmacological results as listed above, and the conjugation also helped cisplatin accumulate in solid tumors upon intravenous administration.
See also
Dendronized polymer
Ferrocene-containing dendrimers
Metallodendrimer
References
Supramolecular chemistry | Dendrimer | [
"Chemistry",
"Materials_science"
] | 5,291 | [
"Nanotechnology",
"Dendrimers",
"nan",
"Supramolecular chemistry"
] |
1,336,603 | https://en.wikipedia.org/wiki/Clinical%20pharmacology | Clinical pharmacology is "that discipline that teaches, does research, frames policy, gives information and advice about the actions and proper uses of medicines in humans and implements that knowledge in clinical practice". Clinical pharmacology is inherently a translational discipline underpinned by the basic science of pharmacology, engaged in the experimental and observational study of the disposition and effects of drugs in humans, and committed to the translation of science into evidence-based therapeutics. It has a broad scope, from the discovery of new target molecules to the effects of drug usage in whole populations. The main aim of clinical pharmacology is to generate data for optimum use of drugs and the practice of 'evidence-based medicine'.
Clinical pharmacologists have medical and scientific training that enables them to evaluate evidence and produce new data through well-designed studies. Clinical pharmacologists must have access to enough patients for clinical care, teaching and education, and research. Their responsibilities to patients include, but are not limited to, detecting and analysing adverse drug effects and reactions, therapeutics, and toxicology including reproductive toxicology, perioperative drug management, and psychopharmacology.
Modern clinical pharmacologists are also trained in data analysis skills. Their approaches to analyse data can include modelling and simulation techniques (e.g. population analysis, non-linear mixed-effects modelling).
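As a toy example of the kind of model underlying such analyses, the following Python sketch evaluates a one-compartment pharmacokinetic model with first-order absorption and elimination (the Bateman equation); all parameter names and values are illustrative assumptions, not data for any real drug.

```python
import numpy as np

def concentration(t, dose, F, ka, ke, V):
    """Plasma concentration at time t for a one-compartment model with
    first-order absorption (ka) and elimination (ke); requires ka != ke."""
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0.0, 24.0, 7)   # hours after an oral dose
c = concentration(t, dose=500.0, F=0.9, ka=1.0, ke=0.1, V=40.0)
print(np.round(c, 2))           # concentration profile in (illustrative) mg/L
```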
Branches
Clinical pharmacology consists of multiple branches listed below:
Pharmacodynamics – what drugs do to the body and how. This includes not just the cellular and molecular aspects, but also more relevant clinical measurements. For example, not just the pharmacological actions of salbutamol, a beta2-adrenergic receptor agonist, but the respiratory peak flow rate of both healthy volunteers and patients.
Pharmacokinetics – what happens to the drug while in the body. This involves the body systems for handling the drug, usually divided into the following classification:
Absorption – the processes by which the drug move into the bloodstream from the site of administration (e.g. the gut)
Distribution – the extent to which the drug enters and leaves different tissues of the body
Metabolism – the processes by which the drug is metabolized in the liver, i.e. transformed into molecules that are usually less pharmacologically active
Excretion – the processes by which the drug is eliminated from the body, which mostly happens in the liver and kidneys.
Rational Prescribing – using the right medication, in the right dose, using the right route and frequency of administration, and for the right duration of time.
Adverse drug effects – unwanted effects of a medicine that are typically not noticed by the individual (e.g. a reduction in the white cell count or a change in the serum uric acid concentration)
Adverse drug reactions – unwanted effects of the drug that the individual experiences (e.g. a sore throat because of a reduced white cell count or an attack of gout because of an increased serum uric acid concentration)
Toxicology – the discipline that deals with the adverse effects of chemicals
Drug interactions – the study of how drugs interact with each other. A drug may negatively or positively affect the effects of another drug; drugs can also interact with other agents, such as foods, alcohol, and devices.
Drug development – the processes of bringing a new medicine from its discovery to clinical use, usually culminating in some form of clinical trials and marketing authorization applications to country-specific drug regulators, such as the US FDA and the UK's MHRA.
Molecular pharmacology – the discipline of studying drug actions at the molecular level; it is a branch of pharmacology in general.
Pharmacogenomics – the study of the human genome in order to understand the ways in which genetic factors determine the actions of medicines.
History
Medicinal uses of plant and animal resources have been common since prehistoric times. Many countries, such as China, Egypt, and India, have written documentation of many traditional remedies. A few of these remedies are still regarded as helpful today, but most of them have been discarded because they were ineffective and potentially harmful.
For many years, therapeutic practices were based on Hippocratic humoral theory, popularized by the Greek physician Galen (129 – c. AD 216) and not on experimentation.
In around the 17th century, physicians started to use systematic methods to study traditional remedies, although they still lacked the means to test the hypotheses they had about how drugs worked.
By the late 18th century and early 19th century, methods of experimental physiology and pharmacology began to be developed by scientists such as François Magendie and his student Claude Bernard.
From the late 18th century to the early 20th century, advances were made in chemistry and physiology that laid the foundations needed to understand how drugs act at the tissue and organ levels. The advances that were made at this time gave manufacturers the ability to make and sell medicines that they claimed to be effective, but were in many cases worthless. There were no methods for evaluating such claims until rational therapeutic concepts were established in medicine, starting at about the end of the 19th century.
The development of receptor theory at the start of the 20th century and later developments led to better understanding of how medicines act and the development of many new medicines that are both safe and effective. Expansion of the scientific principles of pharmacology and clinical pharmacology continues today.
See also
Dormant therapy
References
External links
International Union of Basic and Clinical Pharmacology (IUPHAR)
European Association for Clinical Pharmacology and Therapeutics (EACPT)
Dutch Society on Clinical Pharmacology and Biopharmaceutics (NVKF&B)
American Society for Clinical Pharmacology and Therapeutics (ASCPT)
American College of Clinical Pharmacology (ACCP)
British Pharmacological Society (BPS)
Korean Society for Clinical Pharmacology and Therapeutics (KSCPT)
Japanese Society for Clinical Pharmacology and Therapeutics (JSCPT)
Pharmacology | Clinical pharmacology | [
"Chemistry"
] | 1,252 | [
"Pharmacology",
"Medicinal chemistry",
"Clinical pharmacology"
] |
1,336,900 | https://en.wikipedia.org/wiki/Vertical%20displacement | In tectonics, vertical displacement refers to the shifting of land in a vertical direction, resulting in uplift and subsidence. The displacement of rock layers can provide information on how and why Earth's lithosphere changes throughout geologic time. There are different mechanisms which lead to vertical displacement such as tectonic activity, and isostatic adjustments. Tectonic activity leads to vertical displacement when crust is rearranged during a seismic event. Isostatic adjustments result in vertical displacement through sinking due to an increased load or isostatic rebound due to load removal.
Tectonic causes of vertical displacement
Vertical displacement resulting from tectonic activity occurs at divergent and convergent plate boundaries. The movement of magma in the asthenosphere can create divergent plate boundaries as the magma begins to rise and push through weaker lithospheric crust. Subsidence at a divergent plate boundary is a form of vertical displacement which occurs when a plate begins to split apart. As intrusive magma widens the rift zone of a divergent plate boundary, the layers of crust on the surface above the rift will subside into the rift, creating a vertical displacement of those layers of surface crust.
Convergent plate boundaries create orogenies such as the Laramide orogeny that raised the Rocky Mountains. In this orogenic event, dense oceanic crust from the Pacific plate subducted beneath the less dense continental crust of the North American plate as they converged. This subduction induced compression of the western region of the North American plate, which created the uplift of different layers of rock. This vertical displacement created the various mountain formations which are cumulatively known as the Rocky Mountain range.
Earthquakes are one mechanism that leads to vertical displacement of crust. The fracturing of land during an earthquake creates a fault when land is displaced during the event. The throw of the fault is a term used to describe and quantify the magnitude of this displacement.
Glacial isostatic adjustment
Changes in glaciation can lead to the vertical displacement of crust. Glaciers and ice sheets residing on top of landmass result in an isostatic depression, or sinking, in a section of lithospheric crust due to the weight of the ice. Likewise, isostatic rebound, or uplift, occurs when glaciers and ice sheets recede.
Using asthenosphere viscosity data, researchers are able to determine the rate at which isostatic rebound occurs. The rate of isostatic rebound can be estimated by comparing local viscosities to the maximum viscosity of the asthenosphere. Areas with higher viscosity are subject to quick isostatic rebound, while in regions of low viscosity crustal uplift occurs at a slower rate. Uplift is still occurring through isostatic rebound from the Last Glacial Maximum.
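One commonly quoted half-space approximation relates the characteristic relaxation time of a surface load to mantle viscosity, τ = 4πη/(ρgλ). The sketch below evaluates it with assumed values for viscosity, mantle density, and load wavelength; it is offered only as an illustration and is not drawn from the sources cited by this article.

```python
import numpy as np

def relaxation_time_years(eta, lam, rho=3300.0, g=9.81):
    """Relaxation time tau = 4*pi*eta / (rho*g*lam) for a viscous half space,
    converted from seconds to years. All inputs here are assumed values."""
    tau_seconds = 4.0 * np.pi * eta / (rho * g * lam)
    return tau_seconds / (3600.0 * 24.0 * 365.25)

# e.g. mantle viscosity ~1e21 Pa.s and a ~3000 km load wavelength
print(round(relaxation_time_years(eta=1e21, lam=3.0e6)), "years")  # ~4100 years
```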
Glacial isostatic rebound leads to sea-level regression, which can be measured using ¹⁴C dating to determine the age of sublittoral sediment in different regions along the seafloor.
See also
Ground displacement
Notes
Plate tectonics
Vertical position | Vertical displacement | [
"Physics"
] | 612 | [
"Vertical position",
"Physical quantities",
"Distance"
] |
1,337,282 | https://en.wikipedia.org/wiki/Topological%20property | In topology and related areas of mathematics, a topological property or topological invariant is a property of a topological space that is invariant under homeomorphisms. Alternatively, a topological property is a proper class of topological spaces which is closed under homeomorphisms. That is, a property of spaces is a topological property if whenever a space X possesses that property every space homeomorphic to X possesses that property. Informally, a topological property is a property of the space that can be expressed using open sets.
A common problem in topology is to decide whether two topological spaces are homeomorphic or not. To prove that two spaces are not homeomorphic, it is sufficient to find a topological property which is not shared by them.
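A standard illustration of this strategy (added here as an example; it is not taken from this article's sources) uses compactness to distinguish the closed and open unit intervals:

```latex
% [0,1] is compact by the Heine–Borel theorem, while the open cover
% { (1/n, 1) : n >= 2 } of (0,1) has no finite subcover, so (0,1) is not
% compact. Since compactness is a topological property, the two spaces
% cannot be homeomorphic:
\[
  [0,1] \not\cong (0,1).
\]
```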
Properties of topological properties
A property is:
Hereditary, if for every topological space X and subset S ⊆ X, the subspace S has property P
Weakly hereditary, if for every topological space X and closed subset S ⊆ X, the subspace S has property P
Common topological properties
Cardinal functions
The cardinality |X| of the space X.
The cardinality of the topology (the set of open subsets) of the space X.
Weight w(X), the least cardinality of a basis of the topology of the space X.
Density d(X), the least cardinality of a subset of X whose closure is X.
Separation
Some of these terms are defined differently in older mathematical literature; see history of the separation axioms.
T0 or Kolmogorov. A space is Kolmogorov if for every pair of distinct points x and y in the space, there is at least either an open set containing x but not y, or an open set containing y but not x.
T1 or Fréchet. A space is Fréchet if for every pair of distinct points x and y in the space, there is an open set containing x but not y. (Compare with T0; here, we are allowed to specify which point will be contained in the open set.) Equivalently, a space is T1 if all its singletons are closed. T1 spaces are always T0.
Sober. A space is sober if every irreducible closed set C has a unique generic point p. In other words, if C is not the (possibly nondisjoint) union of two smaller closed non-empty subsets, then there is a p such that the closure of {p} equals C, and p is the only point with this property.
T2 or Hausdorff. A space is Hausdorff if every two distinct points have disjoint neighbourhoods. T2 spaces are always T1.
T2½ or Urysohn. A space is Urysohn if every two distinct points have disjoint closed neighbourhoods. T2½ spaces are always T2.
Completely T2 or completely Hausdorff. A space is completely T2 if every two distinct points are separated by a function. Every completely Hausdorff space is Urysohn.
Regular. A space is regular if whenever C is a closed set and p is a point not in C, then C and p have disjoint neighbourhoods.
T3 or Regular Hausdorff. A space is regular Hausdorff if it is a regular T0 space. (A regular space is Hausdorff if and only if it is T0, so the terminology is consistent.)
Completely regular. A space is completely regular if whenever C is a closed set and p is a point not in C, then C and {p} are separated by a function.
T3½, Tychonoff, Completely regular Hausdorff or Completely T3. A Tychonoff space is a completely regular T0 space. (A completely regular space is Hausdorff if and only if it is T0, so the terminology is consistent.) Tychonoff spaces are always regular Hausdorff.
Normal. A space is normal if any two disjoint closed sets have disjoint neighbourhoods. Normal spaces admit partitions of unity.
T4 or Normal Hausdorff. A normal space is Hausdorff if and only if it is T1. Normal Hausdorff spaces are always Tychonoff.
Completely normal. A space is completely normal if any two separated sets have disjoint neighbourhoods.
T5 or Completely normal Hausdorff. A completely normal space is Hausdorff if and only if it is T1. Completely normal Hausdorff spaces are always normal Hausdorff.
Perfectly normal. A space is perfectly normal if any two disjoint closed sets are precisely separated by a function. A perfectly normal space must also be completely normal.
T6 or Perfectly normal Hausdorff, or perfectly T4. A space is perfectly normal Hausdorff, if it is both perfectly normal and T1. A perfectly normal Hausdorff space must also be completely normal Hausdorff.
Discrete space. A space is discrete if all of its points are completely isolated, i.e. if any subset is open.
Number of isolated points. The number of isolated points of a topological space.
Countability conditions
Separable. A space is separable if it has a countable dense subset.
First-countable. A space is first-countable if every point has a countable local base.
Second-countable. A space is second-countable if it has a countable base for its topology. Second-countable spaces are always separable, first-countable and Lindelöf.
Lindelöf. A space is Lindelöf if every open cover has a countable subcover.
σ-compact. A space is σ-compact if it is the union of countably many compact subspaces.
Connectedness
Connected. A space is connected if it is not the union of a pair of disjoint non-empty open sets. Equivalently, a space is connected if the only clopen sets are the empty set and itself.
Locally connected. A space is locally connected if every point has a local base consisting of connected sets.
Totally disconnected. A space is totally disconnected if it has no connected subset with more than one point.
Path-connected. A space X is path-connected if for every two points x, y in X, there is a path p from x to y, i.e., a continuous map p: [0,1] → X with p(0) = x and p(1) = y. Path-connected spaces are always connected.
Locally path-connected. A space is locally path-connected if every point has a local base consisting of path-connected sets. A locally path-connected space is connected if and only if it is path-connected.
Arc-connected. A space X is arc-connected if for every two points x, y in X, there is an arc f from x to y, i.e., an injective continuous map f: [0,1] → X with f(0) = x and f(1) = y. Arc-connected spaces are path-connected.
Simply connected. A space X is simply connected if it is path-connected and every continuous map f: S¹ → X is homotopic to a constant map.
Locally simply connected. A space X is locally simply connected if every point x in X has a local base of neighborhoods U that is simply connected.
Semi-locally simply connected. A space X is semi-locally simply connected if every point has a local base of neighborhoods U such that every loop in U is contractible in X. Semi-local simple connectivity, a strictly weaker condition than local simple connectivity, is a necessary condition for the existence of a universal cover.
Contractible. A space X is contractible if the identity map on X is homotopic to a constant map. Contractible spaces are always simply connected.
Hyperconnected. A space is hyperconnected if no two non-empty open sets are disjoint. Every hyperconnected space is connected.
Ultraconnected. A space is ultraconnected if no two non-empty closed sets are disjoint. Every ultraconnected space is path-connected.
Indiscrete or trivial. A space is indiscrete if the only open sets are the empty set and itself. Such a space is said to have the trivial topology.
Compactness
Compact. A space is compact if every open cover has a finite subcover. Some authors call these spaces quasicompact and reserve compact for Hausdorff spaces where every open cover has finite subcover. Compact spaces are always Lindelöf and paracompact. Compact Hausdorff spaces are therefore normal.
Sequentially compact. A space is sequentially compact if every sequence has a convergent subsequence.
Countably compact. A space is countably compact if every countable open cover has a finite subcover.
Pseudocompact. A space is pseudocompact if every continuous real-valued function on the space is bounded.
σ-compact. A space is σ-compact if it is the union of countably many compact subsets.
Lindelöf. A space is Lindelöf if every open cover has a countable subcover.
Paracompact. A space is paracompact if every open cover has an open locally finite refinement. Paracompact Hausdorff spaces are normal.
Locally compact. A space is locally compact if every point has a local base consisting of compact neighbourhoods. Slightly different definitions are also used. Locally compact Hausdorff spaces are always Tychonoff.
Ultraconnected compact. In an ultra-connected compact space X every open cover must contain X itself. Non-empty ultra-connected compact spaces have a largest proper open subset called a monolith.
Metrizability
Metrizable. A space is metrizable if it is homeomorphic to a metric space. Metrizable spaces are always Hausdorff and paracompact (and hence normal and Tychonoff), and first-countable. Moreover, a topological space is said to be metrizable if there exists a metric d for it such that the topology induced by d is identical with the given topology.
Polish. A space is called Polish if it is metrizable with a separable and complete metric.
Locally metrizable. A space is locally metrizable if every point has a metrizable neighbourhood.
Miscellaneous
Baire space. A space X is a Baire space if it is not meagre in itself. Equivalently, X is a Baire space if the intersection of countably many dense open sets is dense.
Door space. A topological space is a door space if every subset is open or closed (or both).
Topological Homogeneity. A space X is (topologically) homogeneous if for every x and y in X there is a homeomorphism f: X → X such that f(x) = y. Intuitively speaking, this means that the space looks the same at every point. All topological groups are homogeneous.
Finitely generated or Alexandrov. A space X is Alexandrov if arbitrary intersections of open sets in X are open, or equivalently if arbitrary unions of closed sets are closed. These are precisely the finitely generated members of the category of topological spaces and continuous maps.
Zero-dimensional. A space is zero-dimensional if it has a base of clopen sets. These are precisely the spaces with a small inductive dimension of 0.
Almost discrete. A space is almost discrete if every open set is closed (hence clopen). The almost discrete spaces are precisely the finitely generated zero-dimensional spaces.
Boolean. A space is Boolean if it is zero-dimensional, compact and Hausdorff (equivalently, totally disconnected, compact and Hausdorff). These are precisely the spaces that are homeomorphic to the Stone spaces of Boolean algebras.
Reidemeister torsion
κ-resolvable. A space is said to be κ-resolvable (respectively: almost κ-resolvable) if it contains κ dense sets that are pairwise disjoint (respectively: almost disjoint over the ideal of nowhere dense subsets). If the space is not κ-resolvable then it is called κ-irresolvable.
Maximally resolvable. A space X is maximally resolvable if it is Δ(X)-resolvable, where Δ(X) is the least cardinality of a non-empty open subset of X. The number Δ(X) is called the dispersion character of X.
Strongly discrete. A set D is a strongly discrete subset of the space X if the points in D may be separated by pairwise disjoint neighborhoods. A space X is said to be strongly discrete if every non-isolated point of X is the accumulation point of some strongly discrete set.
Non-topological properties
There are many examples of properties of metric spaces, etc., which are not topological properties. To show a property P is not topological, it is sufficient to find two homeomorphic topological spaces X ≅ Y such that X has P, but Y does not have P.
For example, the metric space properties of boundedness and completeness are not topological properties. Let X = ℝ and Y = (−π/2, π/2) be metric spaces with the standard metric. Then X ≅ Y via the homeomorphism arctan: X → Y. However, X is complete but not bounded, while Y is bounded but not complete.
See also
Homology and cohomology
Homotopy group and Cohomotopy group
Citations
References
Homeomorphisms | Topological property | [
"Mathematics"
] | 2,785 | [
"Properties of topological spaces",
"Homeomorphisms",
"Space (mathematics)",
"Topological spaces",
"Topology"
] |
1,337,370 | https://en.wikipedia.org/wiki/Cross%20section%20%28geometry%29 | In geometry and science, a cross section is the non-empty intersection of a solid body in three-dimensional space with a plane, or the analog in higher-dimensional spaces. Cutting an object into slices creates many parallel cross-sections. The boundary of a cross-section in three-dimensional space that is parallel to two of the axes, that is, parallel to the plane determined by these axes, is sometimes referred to as a contour line; for example, if a plane cuts through mountains of a raised-relief map parallel to the ground, the result is a contour line in two-dimensional space showing points on the surface of the mountains of equal elevation.
In technical drawing a cross-section, being a projection of an object onto a plane that intersects it, is a common tool used to depict the internal arrangement of a 3-dimensional object in two dimensions. It is traditionally crosshatched with the style of crosshatching often indicating the types of materials being used.
With computed axial tomography, computers can construct cross-sections from x-ray data.
Definition
If a plane intersects a solid (a 3-dimensional object), then the region common to the plane and the solid is called a cross-section of the solid. A plane containing a cross-section of the solid may be referred to as a cutting plane.
The shape of the cross-section of a solid may depend upon the orientation of the cutting plane to the solid. For instance, while all the cross-sections of a ball are disks, the cross-sections of a cube depend on how the cutting plane is related to the cube. If the cutting plane is perpendicular to a line joining the centers of two opposite faces of the cube, the cross-section will be a square, however, if the cutting plane is perpendicular to a diagonal of the cube joining opposite vertices, the cross-section can be either a point, a triangle or a hexagon.
Plane sections
A related concept is that of a plane section, which is the curve of intersection of a plane with a surface. Thus, a plane section is the boundary of a cross-section of a solid in a cutting plane.
If a surface in a three-dimensional space is defined by a function of two variables, i.e., z = f(x, y), the plane sections by cutting planes that are parallel to a coordinate plane (a plane determined by two coordinate axes) are called level curves or isolines.
More specifically, cutting planes with equations of the form z = k (planes parallel to the xy-plane) produce plane sections that are often called contour lines in application areas.
Mathematical examples of cross sections and plane sections
A cross section of a polyhedron is a polygon.
The conic sections – circles, ellipses, parabolas, and hyperbolas – are plane sections of a cone with the cutting planes at various different angles, as seen in the diagram at left.
Any cross-section passing through the center of an ellipsoid forms an elliptic region, while the corresponding plane sections are ellipses on its surface. These degenerate to disks and circles, respectively, when the cutting planes are perpendicular to a symmetry axis. In more generality, the plane sections of a quadric are conic sections.
A cross-section of a solid right circular cylinder extending between two bases is a disk if the cross-section is parallel to the cylinder's base, or an elliptic region (see diagram at right) if it is neither parallel nor perpendicular to the base. If the cutting plane is perpendicular to the base it consists of a rectangle (not shown) unless it is just tangent to the cylinder, in which case it is a single line segment.
The term cylinder can also mean the lateral surface of a solid cylinder (see cylinder (geometry)). If a cylinder is used in this sense, the above paragraph would read as follows: A plane section of a right circular cylinder of finite length is a circle if the cutting plane is perpendicular to the cylinder's axis of symmetry, or an ellipse if it is neither parallel nor perpendicular to that axis. If the cutting plane is parallel to the axis the plane section consists of a pair of parallel line segments unless the cutting plane is tangent to the cylinder, in which case, the plane section is a single line segment.
A plane section can be used to visualize the partial derivative of a function with respect to one of its arguments, as shown. Suppose z = f(x, y). In taking the partial derivative of f with respect to x, one can take a plane section of the function f at a fixed value of y to plot the level curve of z solely against x; then the partial derivative with respect to x is the slope of the resulting two-dimensional graph.
In related subjects
A plane section of a probability density function of two random variables in which the cutting plane is at a fixed value of one of the variables is a conditional density function of the other variable (conditional on the fixed value defining the plane section). If instead the plane section is taken for a fixed value of the density, the result is an iso-density contour. For the normal distribution, these contours are ellipses.
In economics, a production function f(x, y) specifies the output that can be produced by various quantities x and y of inputs, typically labor and physical capital. The production function of a firm or a society can be plotted in three-dimensional space. If a plane section is taken parallel to the xy-plane, the result is an isoquant showing the various combinations of labor and capital usage that would result in the level of output given by the height of the plane section. Alternatively, if a plane section of the production function is taken at a fixed level of y—that is, parallel to the xz-plane—then the result is a two-dimensional graph showing how much output can be produced at each of various values of usage x of one input combined with the fixed value of the other input y.
Also in economics, a cardinal or ordinal utility function u(x, y) gives the degree of satisfaction of a consumer obtained by consuming quantities x and y of two goods. If a plane section of the utility function is taken at a given height (level of utility), the two-dimensional result is an indifference curve showing various alternative combinations of consumed amounts x and y of the two goods, all of which give the specified level of utility.
Area and volume
Cavalieri's principle states that solids with corresponding cross-sections of equal areas have equal volumes.
The cross-sectional area (A′) of an object when viewed from a particular angle is the total area of the orthographic projection of the object from that angle. For example, a cylinder of height h and radius r has A′ = πr² when viewed along its central axis, and A′ = 2rh when viewed from an orthogonal direction. A sphere of radius r has A′ = πr² when viewed from any angle. More generically, A′ can be calculated by evaluating the following surface integral:

A′ = ∮_top r̂ · dA,

where r̂ is the unit vector pointing along the viewing direction toward the viewer, dA is a surface element with an outward-pointing normal, and the integral is taken only over the top-most surface, that part of the surface that is "visible" from the perspective of the viewer. For a convex body, each ray through the object from the viewer's perspective crosses just two surfaces. For such objects, the integral may be taken over the entire surface (A) by taking the absolute value of the integrand (so that the "top" and "bottom" of the object do not subtract away, as would be required by the Divergence Theorem applied to the constant vector field r̂) and dividing by two:

A′ = ½ ∮_A |r̂ · dA|.
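The following minimal numerical check of the convex-body version of the formula (an illustration, not from the article) integrates |r̂ · dA| over the surface of a unit sphere and divides by two, recovering the expected projected area πr².

import math

def projected_area_sphere(r=1.0, n=100000):
    # View along the z-axis: r_hat . n_hat = cos(theta), and the surface element of a
    # sphere is r^2 sin(theta) dtheta dphi; the phi integral contributes a factor 2*pi.
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n
        total += abs(math.cos(theta)) * math.sin(theta)
    total *= (math.pi / n) * 2 * math.pi * r * r
    return total / 2.0              # divide by two, as in the formula above

print(projected_area_sphere(), math.pi)    # both approximately 3.14159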
In higher dimensions
In analogy with the cross-section of a solid, the cross-section of an -dimensional body in an -dimensional space is the non-empty intersection of the body with a hyperplane (an -dimensional subspace). This concept has sometimes been used to help visualize aspects of higher dimensional spaces. For instance, if a four-dimensional object passed through our three-dimensional space, we would see a three-dimensional cross-section of the four-dimensional object. In particular, a 4-ball (hypersphere) passing through 3-space would appear as a 3-ball that increased to a maximum and then decreased in size during the transition. This dynamic object (from the point of view of 3-space) is a sequence of cross-sections of the 4-ball.
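As a small illustration of the 4-ball example (the numbers are hypothetical), the 3-ball seen in our space when a 4-ball of radius R passes through it at signed distance w from its center has radius √(R² − w²), so it grows to a maximum at w = 0 and then shrinks again:

import math

R = 1.0
for w in (-1.0, -0.5, 0.0, 0.5, 1.0):
    radius = math.sqrt(max(R * R - w * w, 0.0))
    print(f"w = {w:+.1f}  visible 3-ball radius = {radius:.3f}")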
Examples in science
In geology, the structure of the interior of a planet is often illustrated using a diagram of a cross-section of the planet that passes through the planet's center, as in the cross-section of Earth at right.
Cross-sections are often used in anatomy to illustrate the inner structure of an organ, as shown at the left.
A cross-section of a tree trunk, as shown at left, reveals growth rings that can be used to find the age of the tree and the temporal properties of its environment.
See also
Descriptive geometry
Exploded-view drawing
Graphical projection
Plans (drawings)
Profile gauge
Section lining; representation of materials
Secant plane
Notes
References
Infographics
Elementary geometry
Technical drawing
Methods of representation
Planes (geometry)
Geometric intersection | Cross section (geometry) | [
"Mathematics",
"Engineering"
] | 1,842 | [
"Design engineering",
"Mathematical objects",
"Infinity",
"Elementary mathematics",
"Elementary geometry",
"Civil engineering",
"Planes (geometry)",
"Technical drawing"
] |
1,337,562 | https://en.wikipedia.org/wiki/PLATO%20%28computational%20chemistry%29 | PLATO (Package for Linear-combination of ATomic Orbitals) is a suite of programs for electronic structure calculations. It receives its name from the choice of basis set (numeric atomic orbitals) used to expand the electronic wavefunctions.
PLATO is a code, written in C, for the efficient modelling of materials. It is a tight binding code (both orthogonal and non-orthogonal), allowing for multipole charges and electron spin. It also contains Density Functional Theory programs: these were included to enable clear benchmarking against the tight binding simulations, but can be used in their own right. The Density Functional Tight Binding program can be applied to systems with periodic boundary conditions in three dimensions (crystals), as well as to clusters and molecules.
How PLATO works
How PLATO performs Density Functional Theory is summarized in several papers, and the way it performs tight binding is summarized in further papers.
Applications of PLATO
Some examples of its use are listed below.
Metals
Point defects in transition metals: Density functional theory calculations have been performed to study the systematic trends of point defect behaviours in bcc transition metals.
Surfaces
Interaction of C60 molecules on Si(100): The interactions between pairs of C60 molecules adsorbed upon the Si(100) surface have been studied via a series of DFT calculations.
Molecules
Efficient local-orbitals based method for ultrafast dynamics: The evolution of electrons in molecules under the influence of time-dependent electric fields is simulated.
See also
List of quantum chemistry and solid state physics software
References
External links
Computer Physics Communications Program Library, from which Plato can be downloaded
Computational chemistry software | PLATO (computational chemistry) | [
"Chemistry"
] | 321 | [
"Computational chemistry",
"Computational chemistry software",
"Chemistry software"
] |
1,338,601 | https://en.wikipedia.org/wiki/Water%20industry | The water industry provides drinking water and wastewater services (including sewage treatment) to residential, commercial, and industrial sectors of the economy. Typically public utilities operate water supply networks. The water industry does not include manufacturers and suppliers of bottled water, which is part of the beverage production and belongs to the food sector.
The water industry includes water engineering, operations, water and wastewater plant construction, equipment supply and specialist water treatment chemicals, among others.
The water industry is at the service of other industries, e.g. the food sector, which produces beverages such as bottled water.
Organizational structure
There are a variety of organizational structures for the water industry, with countries usually having one dominant traditional structure, which usually changes only gradually over time.
Ownership of water infrastructure and operations
local government - the most usual structure worldwide, public utility
national government - in many developing countries, especially smaller ones
private ownership - more common in the developed world, see for example Water privatisation in England and Wales
co-operative ownership and related NGO structures, public utility
Operations
local government operating the system through a municipal department, municipal company, or inter-municipal company
local government outsources operations to private sector, i.e. private water operators
national government operations
a private water operator owns the system
BOTs - private sector building parts of a water system (such as a wastewater treatment plant) and operating it for an agreed period before transferring to public sector ownership and operation.
co-operative and NGO operators
Functions
Integrated water system (water supply, sewerage (sanitation) system, and wastewater treatment)
Separation by function (e.g. Dutch system where sewerage run by city, water supply by municipal or provincial companies, and water treatment by water boards), though some Water Supply Companies have merged beyond municipal or provincial borders.
Other separation (e.g. Munich, separated into three companies for bulk water supply, water and wastewater network operations, and retail)
Standards
Water quality standards and environmental standards relating to wastewater are usually set by national bodies.
In England, the Drinking Water Inspectorate and the Environment Agency.
In the United States, drinking water standards for public water systems are set by the United States Environmental Protection Agency (EPA) pursuant to the Safe Drinking Water Act. EPA issues water pollution control standards in conjunction with state environmental agencies, pursuant to the Clean Water Act.
For countries within the European Union, water-related European Union directives are important for water resource management and environmental and water quality standards. Key directives include the Urban Waste Water Treatment Directive 1992 requiring most towns and cities to treat their wastewater to specified standards, and the Water Framework Directive 2000, which requires water resource plans based on river basins, including public participation based on Aarhus Convention principles.
International Standards (ISO) on water service management and assessment are under preparation within Technical Committee ISO/TC 224.
Global companies
Using available data only, and during 2009–2010, the ten largest water companies active globally were (largest first): Veolia Environnement (France), Suez Environnement (France), ITT Corporation (US), United Utilities (UK), Severn Trent (UK), Thames Water (UK), American Water Works Company (US), GE Water (US), Kurita Water Industries (Japan), Nalco Water (US).
See also
American Water Works Association - North American industry and standards association for drinking water
Imagine H2O - International accelerator and organization for water technology startups
Millennium Development Goals (one of the MDGs is "Reduce by half the proportion of people without sustainable access to safe drinking water")
National Rural Water Association - Industry association supporting small and rural water and wastewater utilities in the United States.
Water Environment Federation - Professional association for ambient water quality research & pollution control
References
External links
Truth from the Tap "Water Industry Facts" http://truthfromthetap.com/water-industry-facts/
Lowi, Alvin Jr. Avoiding the Grid: Technology and the Decentralization of Water
WaterWorld Magazine (see Water & Wastewater Industry Report e-newsletter)
Global Water Intelligence
Industrial WaterWorld
Water & Wastewater International
Water Procurement Portal
National Association of Clean Water Agencies
Industrial Doctorate Centre for the Water Sector
Sewerage
Hydrology
Industries (economics) | Water industry | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 852 | [
"Hydrology",
"Water pollution",
"Sewerage",
"Water industry",
"Environmental engineering"
] |
1,338,683 | https://en.wikipedia.org/wiki/Corecursion | In computer science, corecursion is a type of operation that is dual to recursion. Whereas recursion works analytically, starting on data further from a base case and breaking it down into smaller data and repeating until one reaches a base case, corecursion works synthetically, starting from a base case and building it up, iteratively producing data further removed from a base case. Put simply, corecursive algorithms use the data that they themselves produce, bit by bit, as they become available, and needed, to produce further bits of data. A similar but distinct concept is generative recursion, which may lack a definite "direction" inherent in corecursion and recursion.
Where recursion allows programs to operate on arbitrarily complex data, so long as they can be reduced to simple data (base cases), corecursion allows programs to produce arbitrarily complex and potentially infinite data structures, such as streams, so long as it can be produced from simple data (base cases) in a sequence of finite steps. Where recursion may not terminate, never reaching a base state, corecursion starts from a base state, and thus produces subsequent steps deterministically, though it may proceed indefinitely (and thus not terminate under strict evaluation), or it may consume more than it produces and thus become non-productive. Many functions that are traditionally analyzed as recursive can alternatively, and arguably more naturally, be interpreted as corecursive functions that are terminated at a given stage, for example recurrence relations such as the factorial.
Corecursion can produce both finite and infinite data structures as results, and may employ self-referential data structures. Corecursion is often used in conjunction with lazy evaluation, to produce only a finite subset of a potentially infinite structure (rather than trying to produce an entire infinite structure at once). Corecursion is a particularly important concept in functional programming, where corecursion and codata allow total languages to work with infinite data structures.
Examples
Corecursion can be understood by contrast with recursion, which is more familiar. While corecursion is primarily of interest in functional programming, it can be illustrated using imperative programming, which is done below using the generator facility in Python. In these examples local variables are used, and assigned values imperatively (destructively), though these are not necessary in corecursion in pure functional programming. In pure functional programming, rather than assigning to local variables, these computed values form an invariable sequence, and prior values are accessed by self-reference (later values in the sequence reference earlier values in the sequence to be computed). The assignments simply express this in the imperative paradigm and explicitly specify where the computations happen, which serves to clarify the exposition.
Factorial
A classic example of recursion is computing the factorial, which is defined recursively by 0! := 1 and n! := n × (n - 1)!.
To recursively compute its result on a given input, a recursive function calls (a copy of) itself with a different ("smaller" in some way) input and uses the result of this call to construct its result. The recursive call does the same, unless the base case has been reached. Thus a call stack develops in the process. For example, to compute fac(3), this recursively calls in turn fac(2), fac(1), fac(0) ("winding up" the stack), at which point recursion terminates with fac(0) = 1, and then the stack unwinds in reverse order and the results are calculated on the way back along the call stack to the initial call frame fac(3) that uses the result of fac(2) = 2 to calculate the final result as 3 × 2 = 3 × fac(2) =: fac(3) and finally return fac(3) = 6. In this example a function returns a single value.
This stack unwinding can be explicated, defining the factorial corecursively, as an iterator, where one starts with the case of 0! = 1, then from this starting value constructs factorial values for increasing numbers 1, 2, 3... as in the above recursive definition with "time arrow" reversed, as it were, by reading it backwards as (n+1)! := (n+1) × n!. The corecursive algorithm thus defined produces a stream of all factorials. This may be concretely implemented as a generator. Symbolically, noting that computing the next factorial value requires keeping track of both n and f (a previous factorial value), this can be represented as:

(n, f) → (n + 1, f × (n + 1)), starting from (0, 1),
or in Haskell,
(\(n,f) -> (n+1, f*(n+1))) `iterate` (0,1)
meaning, "starting from , on each step the next values are calculated as ". This is mathematically equivalent and almost identical to the recursive definition, but the emphasizes that the factorial values are being built up, going forwards from the starting case, rather than being computed after first going backwards, down to the base case, with a decrement. The direct output of the corecursive function does not simply contain the factorial values, but also includes for each value the auxiliary data of its index n in the sequence, so that any one specific result can be selected among them all, as and when needed.
There is a connection with denotational semantics, where the denotations of recursive programs are built up corecursively in this way.
In Python, a recursive factorial function can be defined as:
def factorial(n: int) -> int:
    """Recursive factorial function."""
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)
This could then be called for example as factorial(5) to compute 5!.
A corresponding corecursive generator can be defined as:
def factorials():
    """Corecursive generator."""
    n, f = 0, 1
    while True:
        yield f
        n, f = n + 1, f * (n + 1)
This generates an infinite stream of factorials in order; a finite portion of it can be produced by:
def n_factorials(n: int):
    k, f = 0, 1
    while k <= n:
        yield f
        k, f = k + 1, f * (k + 1)
This could then be called to produce the factorials up to 5! via:
for f in n_factorials(5):
    print(f)
If we're only interested in a certain factorial, just the last value can be taken, or we can fuse the production and the access into one function,
def nth_factorial(n: int):
    k, f = 0, 1
    while k < n:
        k, f = k + 1, f * (k + 1)
    return f
As can be readily seen here, this is practically equivalent (just by substituting return for the only yield there) to the accumulator argument technique for tail recursion, unwound into an explicit loop. Thus it can be said that the concept of corecursion is an explication of the embodiment of iterative computation processes by recursive definitions, where applicable.
Fibonacci sequence
In the same way, the Fibonacci sequence can be represented as:

(a, b) → (b, a + b), starting from (0, 1).

Because the Fibonacci sequence is a recurrence relation of order 2, the corecursive relation must track two successive terms, with the b corresponding to the shift forward by one step, and the a + b corresponding to computing the next term. This can then be implemented as follows (using parallel assignment):
def fibonacci_sequence():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b
In Haskell, map fst ( (\(a,b) -> (b,a+b)) `iterate` (0,1) )
Tree traversal
Tree traversal via a depth-first approach is a classic example of recursion. Dually, breadth-first traversal can very naturally be implemented via corecursion.
Iteratively, one may traverse a tree by placing its root node in a data structure, then iterating with that data structure while it is non-empty, on each step removing the first node from it and placing the removed node's child nodes back into that data structure. If the data structure is a stack (LIFO), this yields depth-first traversal, and if the data structure is a queue (FIFO), this yields breadth-first traversal:
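A minimal sketch of this duality (not from the article; Node is assumed to be a simple class with value, left and right attributes): the same loop produces breadth-first order when pending nodes are taken from the front (a queue) and depth-first order when they are taken from the end (a stack).

from collections import deque

def traverse(root, breadth_first=True):
    pending = deque([root])
    while pending:
        # FIFO (queue) gives breadth-first order; LIFO (stack) gives depth-first order,
        # expanding the most recently added child (here the right one) first.
        node = pending.popleft() if breadth_first else pending.pop()
        if node is not None:
            yield node.value
            pending.append(node.left)
            pending.append(node.right)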
Using recursion, a depth-first traversal of a tree is implemented simply as recursively traversing each of the root node's child nodes in turn. Thus the second child subtree is not processed until the first child subtree is finished. The root node's value is handled separately, whether before the first child is traversed (resulting in pre-order traversal), after the first is finished and before the second (in-order), or after the second child node is finished (post-order), assuming the tree is binary, for simplicity of exposition. The call stack (of the recursive traversal function invocations) corresponds to the stack that would be iterated over with the explicit LIFO structure manipulation mentioned above.
"Recursion" has two meanings here. First, the recursive invocations of the tree traversal functions . More pertinently, we need to contend with how the resulting list of values is built here. Recursive, bottom-up output creation will result in the right-to-left tree traversal. To have it actually performed in the intended left-to-right order the sequencing would need to be enforced by some extraneous means, or it would be automatically achieved if the output were to be built in the top-down fashion, i.e. corecursively.
A breadth-first traversal creating its output in the top-down order, corecursively, can be also implemented by starting at the root node, outputting its value, then breadth-first traversing the subtrees – i.e., passing on the whole list of subtrees to the next step (not a single subtree, as in the recursive approach) – at the next step outputting the values of all of their root nodes, then passing on their child subtrees, etc. In this case the generator function, indeed the output sequence itself, acts as the queue. As in the factorial example above, where the auxiliary information of the index (which step one was at, n) was pushed forward, in addition to the actual output of n!, in this case the auxiliary information of the remaining subtrees is pushed forward, in addition to the actual output. Symbolically, this can be represented as:

(v, ts) → (values of the root nodes of ts, their child subtrees), starting from ([], [full tree]),
meaning that at each step, one outputs the list of values in this level's nodes, then proceeds to the next level's nodes. Generating just the node values from this sequence simply requires discarding the auxiliary child tree data, then flattening the list of lists (values are initially grouped by level (depth); flattening (ungrouping) yields a flat linear list). This is extensionally equivalent to the specification above. In Haskell,
concatMap fst ( (\(v, ts) -> (rootValues ts, childTrees ts)) `iterate` ([], [fullTree]) )
Notably, given an infinite tree, the corecursive breadth-first traversal will traverse all nodes, just as for a finite tree, while the recursive depth-first traversal will go down one branch and not traverse all nodes, and indeed if traversing post-order, as in this example (or in-order), it will visit no nodes at all, because it never reaches a leaf. This shows the usefulness of corecursion rather than recursion for dealing with infinite data structures. One caveat still remains for trees with the infinite branching factor, which need a more attentive interlacing to explore the space better. See dovetailing.
In Python, this can be implemented as follows.
The usual post-order depth-first traversal can be defined as:
def df(node):
    """Post-order depth-first traversal."""
    if node is not None:
        df(node.left)
        df(node.right)
        print(node.value)
This can then be called by df(t) to print the values of the nodes of the tree in post-order depth-first order.
The breadth-first corecursive generator can be defined as:
def bf(tree):
    """Breadth-first corecursive generator."""
    tree_list = [tree]
    while tree_list:
        new_tree_list = []
        for tree in tree_list:
            if tree is not None:
                yield tree.value
                new_tree_list.append(tree.left)
                new_tree_list.append(tree.right)
        tree_list = new_tree_list
This can then be called to print the values of the nodes of the tree in breadth-first order:
for i in bf(t):
    print(i)
Definition
Initial data types can be defined as being the least fixpoint (up to isomorphism) of some type equation; the isomorphism is then given by an initial algebra. Dually, final (or terminal) data types can be defined as being the greatest fixpoint of a type equation; the isomorphism is then given by a final coalgebra.
If the domain of discourse is the category of sets and total functions, then final data types may contain infinite, non-wellfounded values, whereas initial types do not. On the other hand, if the domain of discourse is the category of complete partial orders and continuous functions, which corresponds roughly to the Haskell programming language, then final types coincide with initial types, and the corresponding final coalgebra and initial algebra form an isomorphism.
Corecursion is then a technique for recursively defining functions whose range (codomain) is a final data type, dual to the way that ordinary recursion recursively defines functions whose domain is an initial data type.
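A minimal Python sketch of this duality (illustrative, not from the article): where a fold consumes a finite list down to a value, an unfold corecursively produces a (possibly infinite) stream of values from a seed.

from itertools import islice

def unfold(step, seed):
    """Anamorphism sketch: step(seed) returns None to stop, or (value, next_seed)."""
    while True:
        result = step(seed)
        if result is None:
            return
        value, seed = result
        yield value

# the factorial stream from earlier, expressed as an unfold over the seed (n, f)
facts = unfold(lambda s: (s[1], (s[0] + 1, s[1] * (s[0] + 1))), (0, 1))
print(list(islice(facts, 6)))   # [1, 1, 2, 6, 24, 120]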
The discussion below provides several examples in Haskell that distinguish corecursion. Roughly speaking, if one were to port these definitions to the category of sets, they would still be corecursive. This informal usage is consistent with existing textbooks about Haskell. The examples used in this article predate the attempts to define corecursion and explain what it is.
Discussion
The rule for primitive corecursion on codata is the dual to that for primitive recursion on data. Instead of descending on the argument by pattern-matching on its constructors (that were called up before, somewhere, so we receive a ready-made datum and get at its constituent sub-parts, i.e. "fields"), we ascend on the result by filling-in its "destructors" (or "observers", that will be called afterwards, somewhere - so we're actually calling a constructor, creating another bit of the result to be observed later on). Thus corecursion creates (potentially infinite) codata, whereas ordinary recursion analyses (necessarily finite) data. Ordinary recursion might not be applicable to the codata because it might not terminate. Conversely, corecursion is not strictly necessary if the result type is data, because data must be finite.
In "Programming with streams in Coq: a case study: the Sieve of Eratosthenes" we find
hd (conc a s) = a
tl (conc a s) = s
(sieve p s) = if div p (hd s) then sieve p (tl s)
else conc (hd s) (sieve p (tl s))
hd (primes s) = (hd s)
tl (primes s) = primes (sieve (hd s) (tl s))
where primes "are obtained by applying the primes operation to the stream (Enu 2)". Following the above notation, the sequence of primes (with a throwaway 0 prefixed to it) and numbers streams being progressively sieved, can be represented as
or in Haskell,
(\(p, s@(h:t)) -> (h, sieve h t)) `iterate` (0, [2..])
The authors discuss how the definition of sieve is not guaranteed always to be productive, and could become stuck e.g. if called with [5,10..] as the initial stream.
Here is another example in Haskell. The following definition produces the list of Fibonacci numbers in linear time:
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
This infinite list depends on lazy evaluation; elements are computed on an as-needed basis, and only finite prefixes are ever explicitly represented in memory. This feature allows algorithms on parts of codata to terminate; such techniques are an important part of Haskell programming.
This can be done in Python as well:
>>> from itertools import tee, chain, islice
>>> def fibonacci():
...     def deferred_output():
...         yield from output
...
...     result, c1, c2 = tee(deferred_output(), 3)
...     paired = (x + y for x, y in zip(c1, islice(c2, 1, None)))
...     output = chain([0, 1], paired)
...     return result
>>> print(*islice(fibonacci(), 20), sep=', ')
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181
The definition of zipWith can be inlined, leading to this:
fibs = 0 : 1 : next fibs
  where
    next (a: t@(b:_)) = (a+b):next t
This example employs a self-referential data structure. Ordinary recursion makes use of self-referential functions, but does not accommodate self-referential data. However, this is not essential to the Fibonacci example. It can be rewritten as follows:
fibs = fibgen (0,1)
fibgen (x,y) = x : fibgen (y,x+y)
This employs only a self-referential function to construct the result. If it were used with a strict list constructor it would be an example of runaway recursion, but with a non-strict list constructor this guarded recursion gradually produces an indefinitely defined list.
Corecursion need not produce an infinite object; a corecursive queue is a particularly good example of this phenomenon. The following definition produces a breadth-first traversal of a binary tree in the top-down manner, in linear time (already incorporating the flattening mentioned above):
data Tree a b = Leaf a
              | Branch b (Tree a b) (Tree a b)

bftrav :: Tree a b -> [Tree a b]
bftrav tree = tree : ts
  where
    ts = gen 1 (tree : ts)
    gen 0   p                  = []
    gen len (Leaf _ : p)       =         gen (len-1) p
    gen len (Branch _ l r : p) = l : r : gen (len+1) p
    --       ----read----        ----write-ahead---

-- bfvalues tree = [v | (Branch v _ _) <- bftrav tree]
This definition takes a tree and produces a list of its sub-trees (nodes and leaves). This list serves dual purpose as both the input queue and the result (gen len p produces its output len notches ahead of its input back-pointer, p, along the list). It is finite if and only if the initial tree is finite. The length of the queue must be explicitly tracked in order to ensure termination; this can safely be elided if this definition is applied only to infinite trees.
This Haskell code uses a self-referential data structure, but does not essentially depend on lazy evaluation. It can be straightforwardly translated into e.g. Prolog, which is not a lazy language. What is essential is the ability to build a list (used as the queue) in the top-down manner. For that, Prolog has tail recursion modulo cons (i.e. open-ended lists), which is also emulatable in Scheme, C, etc. using linked lists with a mutable tail sentinel pointer:
bftrav( Tree, [Tree|TS]) :- bfgen( 1, [Tree|TS], TS).
bfgen( 0, _, []) :- !. % 0 entries in the queue -- stop and close the list
bfgen( N, [leaf(_) |P], TS ) :- N2 is N-1, bfgen( N2, P, TS).
bfgen( N, [branch(_,L,R)|P], [L,R|TS]) :- N2 is N+1, bfgen( N2, P, TS).
%% ----read----- --write-ahead--
Another particular example gives a solution to the problem of breadth-first labeling. The function label visits every node in a binary tree in the breadth first fashion, replacing each label with an integer, each subsequent integer bigger than the last by 1. This solution employs a self-referential data structure, and the binary tree can be finite or infinite.
label :: Tree a b -> Tree Int Int
label t = tn
  where
    (tn, ns) = go t (1:ns)

    go :: Tree a b -> [Int] -> (Tree Int Int, [Int])
    go (Leaf   _    ) (i:a) = (Leaf   i      , i+1:a)
    go (Branch _ l r) (i:a) = (Branch i ln rn, i+1:c)
      where
        (ln, b) = go l a
        (rn, c) = go r b
Or in Prolog, for comparison,
label( Tree, Tn) :- label( Tree, [1|Ns], Tn, Ns).
label( leaf(_), [I|A], leaf( I), [I+1|A]).
label( branch(_,L,R),[I|A], branch(I,Ln,Rn),[I+1|C]) :-
label( L, A, Ln, B),
label( R, B, Rn, C).
An apomorphism (such as an anamorphism, such as unfold) is a form of corecursion in the same way that a paramorphism (such as a catamorphism, such as fold) is a form of recursion.
The Coq proof assistant supports corecursion and coinduction using the CoFixpoint command.
History
Corecursion, referred to as circular programming, dates at least to the mid-1980s, in work that credits John Hughes and Philip Wadler; more general forms were developed later in that decade. The original motivations included producing more efficient algorithms (allowing a single pass over data in some cases, instead of requiring multiple passes) and implementing classical data structures, such as doubly linked lists and queues, in functional languages.
See also
Bisimulation
Coinduction
Recursion
Anamorphism
Notes
References
Theoretical computer science
Self-reference
Articles with example Haskell code
Articles with example Python (programming language) code
Functional programming
Category theory
Recursion | Corecursion | [
"Mathematics"
] | 5,136 | [
"Functions and mappings",
"Mathematical structures",
"Recursion",
"Theoretical computer science",
"Applied mathematics",
"Mathematical logic",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations"
] |
1,339,615 | https://en.wikipedia.org/wiki/Stopping%20power | Stopping power is the ability of a weapon – typically a ranged weapon such as a firearm – to cause a target (human or animal) to be incapacitated or immobilized. Stopping power contrasts with lethality in that it pertains only to a weapon's ability to make the target cease action, regardless of whether or not death ultimately occurs. Which ammunition cartridges have the greatest stopping power is a much-debated topic.
Stopping power is related to the physical properties and terminal behavior of the projectile (bullet, shot, or slug), the biology of the target, and the wound location, but the issue is complicated and not easily studied. Although higher-caliber ammunition usually has greater muzzle energy and momentum, and thus has traditionally been widely associated with higher stopping power, the physics involved are multifactorial, with caliber, muzzle velocity, bullet mass, bullet shape and bullet material all contributing to the ballistics.
Despite much disagreement, the most popular theory of stopping power is that it is usually caused not by the force of the bullet but by the wounding effects of the bullet, which are typically a rapid loss of blood causing a circulatory failure, which leads to impaired motor function and/or unconsciousness. The "Big Hole School" and the principles of penetration and permanent tissue damage are in line with this way of thinking. The other prevailing theories focus more on the energy of the bullet and its effects on the nervous system, including hydrostatic shock and energy transfer, which is similar to kinetic energy deposit.
History
The concept of stopping power appeared at the tail end of the 19th century, when colonial troops (including American troops in the Philippines during the Moro Rebellion, and British soldiers during the New Zealand Wars) found at close quarters that their pistols were not able to stop charging native tribesmen. This led to the introduction or reintroduction of larger-caliber weapons (such as the older .45 Colt and the newly developed .45 ACP) capable of stopping opponents with a single round.
During the Seymour Expedition in China, at one of the battles at Langfang, Chinese Boxers, armed with swords and spears, conducted a massed infantry charge against the forces of the Eight-Nation Alliance, who were equipped with rifles. At point-blank range, a British soldier had to fire four .303 Lee-Metford bullets into a Boxer before he stopped charging. U.S. Army officer Bowman McCalla reported that single rifle shots were not enough: multiple rifle shots were needed to halt a Boxer. Only machine guns were effective in immediately stopping the Boxers.
In the Moro Rebellion, Moro Muslim Juramentados in suicide attacks continued to charge against American soldiers even after being shot. Panglima Hassan in the Hassan uprising had to be shot dozens of times before he died. This forced the Americans to phase out .38 Long Colt revolvers and start using .45 Colt against the Moros.
British troops used expanding bullets during various conflicts in the Northwest Frontier in India, and the Mahdist War in Sudan. The British government voted against a prohibition on their use at the Hague Convention of 1899, although the prohibition only applied to international warfare.
In response to addressing stopping power issues, the Mozambique Drill was developed to maximize the likelihood of a target's quick incapacitation.
"Manstopper" is an informal term used to refer to any combination of firearm and ammunition that can reliably incapacitate, or "stop", a human target immediately. For example, the .45 ACP round and the .357 Magnum round both have firm reputations as "manstoppers". Historically, one type of ammunition has had the specific tradename "Manstopper". Officially known as the Mk III cartridge, these were made to suit the British Webley .455 service revolver in the early 20th century. The ammunition used a cylindrical bullet with hemispherical depressions at both ends. The front acted as a hollow point deforming on impact while the base opened to seal the round in the barrel. It was introduced in 1898 for use against "savage foes", but fell quickly from favor due to concerns of breaching the Hague Convention's international laws on military ammunition, and was replaced in 1900 by re-issued Mk II pointed-bullet ammunition.
Some sporting arms are also referred to as "stoppers" or "stopping rifles". These powerful arms are often used by game hunters (or their guides) for stopping a suddenly charging animal, like a buffalo or an elephant.
Dynamics of bullets
A bullet will destroy or damage any tissues which it penetrates, creating a wound channel. It will also cause nearby tissue to stretch and expand as it passes through tissue. These two effects are typically referred to as permanent cavity (the track left by the bullet as it penetrates flesh) and temporary cavity, which, as the name implies, is the temporary (instantaneous) displacement caused as the bullet travels through flesh, and is many times larger than the actual diameter of the bullet. These phenomena are unrelated to low-pressure cavitation in liquids.
The degree to which permanent and temporary cavitation occur is dependent on the mass, diameter, material, design and velocity of the bullet. This is because bullets crush tissue, and do not cut it. A bullet constructed with a half diameter ogive designed meplat and hard, solid copper alloy material may crush only the tissue directly in front of the bullet. This type of bullet (monolithic-solid rifle bullet) is conducive to causing more temporary cavitation as the tissue flows around the bullet, resulting in a deep and narrow wound channel. A bullet constructed with a two diameter, hollow point ogive designed meplat and low-antimony lead-alloy core with a thin gilding metal jacket material will crush tissue in front and to the sides as the bullet expands. Due to the energy expended in bullet expansion, velocity is lost more quickly. This type of bullet (hollow-point hand gun bullet) is conducive to causing more permanent cavitation as the tissue is crushed and accelerated into other tissues by the bullet, causing a shorter and wider wound channel. The exception to this general rule is non-expanding bullets which are long relative to their diameter. These tend to destabilize and yaw (tumble) soon after impact, increasing both temporary and permanent cavitation.
Bullets are constructed to behave in different ways, depending on the intended target. Different bullets are constructed variously to: not expand upon impact, expand upon impact at high velocity, expand upon impact, expand across a broad range of velocities, expand upon impact at low velocity, tumble upon impact, fragment upon impact, or disintegrate upon impact.
To control the expansion of a bullet, meplat design and materials are engineered. The meplat designs are: flat; round to pointed depending on the ogive; hollow pointed which can be large in diameter and shallow or narrow in diameter and deep and truncated which is a long narrow punched hole in the end of a monolithic-solid type bullet. The materials used to make bullets are: pure lead; alloyed lead for hardness; gilding metal jacket which is a copper alloy of nickel and zinc to promote higher velocities; pure copper; copper alloy of bronze with tungsten steel alloy inserts to promote weight.
Some bullets are constructed by bonding the lead core to the jacket to promote higher weight retention upon impact, causing a larger and deeper wound channel. Some bullets have a web in the center of the bullet to limit the expansion of the bullet while promoting penetration. Some bullets have dual cores to promote penetration.
Bullets that might be considered to have stopping power for dangerous large game animals are usually 11.63 mm (.458 caliber) and larger, including 12-gauge shotgun slugs. These bullets are monolithic-solids; full metal jacketed and tungsten steel insert. They are constructed to hold up during close range, high velocity impacts. These bullets are expected to impact and penetrate, and transfer energy to the surrounding tissues and vital organs through the entire length of a game animal's body if need be.
The stopping power of firearms when used against humans is a more complex subject, in part because many persons voluntarily cease hostile actions when shot; they either flee, surrender, or fall immediately. This is sometimes referred to as "psychological incapacitation".
Physical incapacitation is primarily a matter of shot location; most persons who are shot in the head are immediately incapacitated, and most who are shot in the extremities are not, regardless of the firearm or ammunition involved. Shotguns will usually incapacitate with one shot to the torso, but rifles and especially handguns are less reliable, particularly those which do not meet the FBI's penetration standard, such as the .25 ACP, .32 S&W, and rimfire models. More powerful handguns may or may not meet the standard, or may even overpenetrate, depending on what ammunition is used.
Fully jacketed bullets penetrate deeply without much expansion, while soft or hollow point bullets create a wider, shallower wound channel. Pre-fragmented bullets such as Glaser Safety Slugs and MagSafe ammunition are designed to fragment into birdshot on impact with the target. This fragmentation is intended to create more trauma to the target, and also to reduce collateral damage from ricochets or overpenetration of the target into the surrounding environment, such as walls. Fragmenting rounds have been shown to be unlikely to achieve the deep penetration necessary to disrupt vital organs located at the back of a hostile human.
Wounding effects
Physical
Permanent and temporary cavitation cause very different biological effects. A hole through the heart will cause loss of pumping efficiency, loss of blood, and eventual cardiac arrest. A hole through the liver or lung will be similar, with the lung shot having the added effect of reducing blood oxygenation; these effects however are generally slower to arise than damage to the heart. A hole through the brain can cause instant unconsciousness and will likely kill the recipient. A hole through the spinal cord will instantly interrupt the nerve signals to and from some or all extremities, disabling the target and in many cases also resulting in death (as the nerve signals to and from the heart and lungs are interrupted by a shot high in the chest or to the neck). By contrast, a hole through an arm or leg which hits only muscle will cause a great deal of pain but is unlikely to be fatal, unless one of the large blood vessels (femoral or brachial arteries, for example) is also severed in the process.
The effects of temporary cavitation are less well understood, due to a lack of a test material identical to living tissue. Studies on the effects of bullets typically are based on experiments using ballistic gelatin, in which temporary cavitation causes radial tears where the gelatin was stretched. Although such tears are visually engaging, some animal tissues (but not bone or liver) are more elastic than gelatin. In most cases, temporary cavitation is unlikely to cause anything more than a bruise. Some speculation states that nerve bundles can be damaged by temporary cavitation, creating a stun effect, but this has not been confirmed.
One exception to this is when a very powerful temporary cavity intersects with the spine. In this case, the resulting blunt trauma can slam the vertebrae together hard enough to either sever the spinal cord, or damage it enough to knock out, stun, or paralyze the target. For instance, in the shootout between eight FBI agents and two bank robbers in the 1986 FBI Miami shootout, Special Agent Gordon McNeill was struck in the neck by a high-velocity .223 bullet fired by Michael Platt. While the bullet did not directly contact the spine, and the wound incurred was not ultimately fatal, the temporary cavitation was sufficient to render SA McNeill paralyzed for several hours. Temporary cavitation may similarly fracture the femur if it is narrowly missed by a bullet.
Temporary cavitation can also cause the tearing of tissues if a very large amount of force is involved. The tensile strength of muscle ranges roughly from 1 to 4 MPa (145 to 580 lbf/in2), and minimal damage will result if the pressure exerted by the temporary cavitation is below this. Gelatin and other less elastic media have much lower tensile strengths, thus they exhibit more damage after being struck with the same amount of force. At typical handgun velocities, bullets will create temporary cavities with much less than 1 MPa of pressure, and thus are incapable of causing damage to elastic tissues that they do not directly contact.
Rifle bullets that strike a major bone (such as a femur) can expend their entire energy into the surrounding tissue. The struck bone is commonly shattered at the point of impact.
High velocity fragmentation can also increase the effect of temporary cavitation. The fragments sheared from the bullet cause many small permanent cavities around the main entry point. The main mass of the bullet can then cause a truly massive amount of tearing as the perforated tissue is stretched.
Whether a person or animal will be incapacitated (i.e. "stopped") when shot, depends on a large number of factors, including physical, physiological, and psychological effects.
Neurological
The only way to immediately incapacitate a person or animal is to damage or disrupt their central nervous system (CNS) to the point of paralysis, unconsciousness, or death. Bullets can achieve this directly or indirectly. If a bullet causes sufficient damage to the brain or spinal cord, immediate loss of consciousness or paralysis, respectively, can result. However, these targets are relatively small and mobile, making them extremely difficult to hit even under optimal circumstances.
Bullets can indirectly disrupt the CNS by damaging the cardiovascular system so that it can no longer provide enough oxygen to the brain to sustain consciousness. This can be the result of bleeding from a perforation of a large blood vessel or blood-bearing organ, or the result of damage to the lungs or airway. If blood flow is completely cut off from the brain, a human still has enough oxygenated blood in their brain for 10–15 seconds of wilful action, though with rapidly decreasing effectiveness as the victim begins to lose consciousness.
Unless a bullet directly damages or disrupts the central nervous system, a person or animal will not be instantly and completely incapacitated by physiological damage. However, bullets can cause other disabling injuries that prevent specific actions (a person shot in the femur cannot run) and the physiological pain response from severe injuries will temporarily disable most individuals.
Several scientific papers reveal ballistic pressure wave effects on wounding and incapacitation, including central nervous system injuries from hits to the thorax and extremities. These papers document remote wounding effects for both rifle and pistol levels of energy transfer.
Recent work by Courtney and Courtney provides compelling support for the role of a ballistic pressure wave in creating remote neural effects leading to incapacitation and injury. This work builds upon the earlier works of Suneson et al. where the researchers implanted high-speed pressure transducers into the brain of pigs and demonstrated that a significant pressure wave reaches the brain of pigs shot in the thigh. These scientists observed neural damage in the brain caused by the distant effects of the ballistic pressure wave originating in the thigh. The results of Suneson et al. were confirmed and expanded upon by a later experiment in dogs which "confirmed that distant effect exists in the central nervous system after a high-energy missile impact to an extremity. A high-frequency oscillating pressure wave with large amplitude and short duration was found in the brain after the extremity impact of a high-energy missile ..." Wang et al. observed significant damage in both the hypothalamus and hippocampus regions of the brain due to remote effects of the ballistic pressure wave.
Psychological
Emotional shock, terror, or surprise can cause a person to faint, surrender, or flee when shot or shot at. There are many documented instances where people have instantly dropped unconscious when the bullet only hit an extremity, or even completely missed. Additionally, the muzzle blast and flash from many firearms are substantial and can cause disorientation, dazzling, and stunning effects. Flashbangs (stun grenades) and other less-lethal "distraction devices" rely exclusively on these effects.
Pain is another psychological factor, and can be enough to dissuade a person from continuing their actions.
Temporary cavitation can emphasize the impact of a bullet, since the resulting tissue compression is identical to simple blunt force trauma. It is easier for someone to feel when they have been shot if there is considerable temporary cavitation, and this can contribute to either psychological factor of incapacitation.
However, if a person is sufficiently enraged, determined, or intoxicated, they can simply shrug off the psychological effects of being shot. During the colonial era, when native tribesmen came into contact with firearms for the first time, there was no psychological conditioning that being shot could be fatal, and most colonial powers eventually sought to create more effective manstoppers.
Therefore, such effects are not as reliable as physiological effects at stopping people. Animals will not faint or surrender if injured, though they may become frightened by the loud noise and pain of being shot, so psychological mechanisms are generally less effective against non-humans.
Penetration
According to Dr. Martin Fackler and the International Wound Ballistics Association (IWBA), a penetration depth within their recommended range in calibrated tissue simulant is optimal performance for a bullet which is meant to be used defensively against a human adversary. They also believe that penetration is one of the most important factors when choosing a bullet (and that the number one factor is shot placement). If the bullet penetrates less than their guidelines, it is inadequate, and if it penetrates more, it is still satisfactory though not optimal. The FBI's penetration requirement is very similar.
Such a penetration depth may seem excessive, but a bullet sheds velocity—and crushes a narrower hole—as it penetrates deeper, so the bullet might be crushing a very small amount of tissue (simulating an "ice pick" injury) during its last two or three inches of travel, leaving a shorter span of effective wide-area penetration. Also, skin is elastic and tough enough to cause a bullet to be retained in the body, even if the bullet had a relatively high velocity when it hit the skin. A substantial minimum velocity is required for an expanded hollow point bullet to puncture skin 50% of the time.
The IWBA's and FBI's penetration guidelines are to ensure that the bullet can reach a vital structure from most angles, while retaining enough velocity to generate a large diameter hole through tissue. An extreme example where penetration would be important is if the bullet first had to enter and then exit an outstretched arm before impacting the torso. A bullet with low penetration might embed itself in the arm whereas a higher penetrating bullet would penetrate the arm then enter the thorax where it would have a chance of hitting a vital organ.
Overpenetration
Excessive penetration or overpenetration occurs when a bullet passes through its intended target and out of the other side, with enough residual kinetic energy to continue flying as a stray projectile and risk causing unintended collateral damage to objects or persons beyond. This happens because the bullet has not released all its energy within the target, according to the energy transfer hypothesis.
Other hypotheses
These hypotheses are a matter of some debate among scientists in the field:
Energy transfer
The energy transfer hypothesis states that for small arms in general, the more energy transferred to the target, the greater the stopping power. It postulates that the pressure wave exerted on soft tissues by the bullet's temporary cavity hits the nervous system with a jolt of shock and pain and thereby forces incapacitation.
Proponents of this theory contend that the incapacitation effect is similar to that seen in non-concussive blunt-force trauma events, such as a knock-out punch to the body, a football player "shaken up" as a result of a hard tackle, or a hitter being struck by a fastball. Pain in general has an inhibitory and weakening effect on the body, causing a person under physical stress to take a seat or even collapse. The force put on the body by the temporary cavity is supersonic compression, like the lash of a whip. While the lash only affects a short line of tissue across the back of the victim, the temporary cavity affects a volume of tissue roughly the size and shape of a football. Further credence is given to this theory by the aforementioned effects of drugs on incapacitation: pain killers, alcohol, and PCP have all been known to decrease the effects of nociception and increase a person's resistance to incapacitation, all while having no effect on blood loss.
Kinetic energy is a function of the bullet's mass and the square of its velocity. Generally speaking, it is the intention of the shooter to deliver an adequate amount of energy to the target via the projectiles. All else held equal, bullets that are light and fast tend to have more energy than those that are heavy and slow.
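A rough numerical illustration of this trade-off (the masses and velocities below are hypothetical round numbers, not figures from this article): a lighter, faster bullet can carry more kinetic energy (½mv²) than a heavier, slower one even when its momentum (mv) is lower.

def kinetic_energy(mass_kg, velocity_ms):
    return 0.5 * mass_kg * velocity_ms ** 2    # joules

def momentum(mass_kg, velocity_ms):
    return mass_kg * velocity_ms               # kg*m/s

bullets = {"light and fast": (0.008, 400.0),   # 8 g at 400 m/s (illustrative)
           "heavy and slow": (0.015, 260.0)}   # 15 g at 260 m/s (illustrative)

for name, (m, v) in bullets.items():
    print(f"{name}: {kinetic_energy(m, v):.0f} J, {momentum(m, v):.1f} kg*m/s")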
Over-penetration is detrimental to stopping power in regards to energy. This is because a bullet that passes through the target does not transfer all of its energy to the target. Lighter bullets tend to have less penetration in soft tissue and therefore are less likely to over-penetrate. Expanding bullets and other tip variations can increase the friction of the bullet through soft tissue, and/or allow internal ricochets off bone, therefore helping prevent over-penetration.
Non-penetrating projectiles can also possess stopping power and give support to the energy transfer hypothesis. Notable examples of projectiles designed to deliver stopping power without target penetration are Flexible baton rounds (commonly known as "beanbag bullets") and the rubber bullet, types of reduced-lethality ammunition.
The force exerted by a projectile upon tissue is equal to the bullet's local rate of kinetic energy loss, with distance (the first derivative of the bullet's kinetic energy with respect to position). The ballistic pressure wave is proportional to this retarding force (Courtney and Courtney), and this retarding force is also the origin of both temporary cavitation and prompt damage (CE Peters).
Hydrostatic shock
Hydrostatic shock is a controversial theory of terminal ballistics that states a penetrating projectile (such as a bullet) can produce a sonic pressure wave that causes "remote neural damage", "subtle damage in neural tissues" and/or "rapid incapacitating effects" in living targets. Proponents of the theory contend that damage to the brain from hydrostatic shock from a shot to the chest occurs in humans with most rifle cartridges and some higher-velocity handgun cartridges. Hydrostatic shock is not the shock from the temporary cavity itself, but rather the sonic pressure wave that radiates away from its edges through static soft tissue.
Knockback
The idea of "knockback" implies that a bullet can have enough force to stop the forward motion of an attacker and physically knock them backwards or downwards. It follows from the law of conservation of momentum that no "knockback" could ever exceed the recoil felt by the shooter, and therefore has no use as a weapon. The myth of "knockback" has been spread through its confusion with the phrase "stopping power" as well as by many films, which show bodies flying backward after being shot.
The idea of knockback was first widely expounded in ballistics discussions during American involvement in Philippine insurrections and, simultaneously, in British conflicts in its colonial empire, when front-line reports stated that the .38 Long Colt caliber revolvers carried by U.S. and British soldiers were incapable of bringing down a charging warrior. Thus, in the early 1900s, the U.S. reverted to the .45 Colt in single action revolvers, and later adopted the .45 ACP cartridge in what was to become the M1911A1 pistol, and the British adopted the .455 Webley caliber cartridge in the Webley Revolver. The larger cartridges were chosen largely due to the Big Hole Theory (a larger hole does more damage), but the common interpretation was that these were changes from a light, deeply penetrating bullet to a larger, heavier "manstopper" bullet.
Though popularized in television and movies, and commonly referred to as "true stopping power" by uneducated proponents of large, powerful calibers such as the .44 Magnum, the effect of knockback from a handgun and indeed most personal weapons is largely a myth. The momentum of the so-called "manstopper" .45 ACP bullet is roughly comparable to that of a dropped small weight or a thrown baseball. Such a force is simply incapable of arresting a running target's forward momentum. In addition, bullets are designed to penetrate rather than strike a blunt-force blow, because, in penetrating, more severe tissue damage is done. A bullet with sufficient energy to knock down an assailant, such as a high-speed rifle bullet, would be more likely to instead pass straight through, while not transferring the full energy (in fact only a very small percentage of the full energy) of the bullet to the victim. Most energy from a fully stopped rifle round instead goes into formation of the temporary cavity and the destruction of the round, the wound channel, and some of the surrounding tissues. There is no physical principle preventing a hypervelocity round from causing a splash injury in which the ejecta create a rocket-like impulse on their way out to cause knockback, and indeed no principle preventing a similar effect for exit wounds causing "knockforward", but this is still generally not anywhere near the impulse required to stop the motion of a sprinting person or knock them over from pure momentum.
Sometimes "knockdown power" is a phrase used interchangeably with "knockback", while other times it's used interchangeably with "stopping power". The misuse and fluid meaning of these phrases have done their part in confusing the issue of stopping power. The ability of a bullet to "knock down" a metal or otherwise inanimate target falls under the category of momentum, as explained above, and has little correlation with stopping power.
One-shot stop
This hypothesis, promoted by Evan P. Marshall, is based on statistical analysis of actual shooting incidents from various reporting sources (typically police agencies). It is intended to be used as a unit of measurement and not as a tactical philosophy, as mistakenly believed by some. It considers the history of shooting incidents for a given factory ammunition load and compiles the percentage of "one-shot-stops" achieved with each specific ammunition load. That percentage is then intended to be used with other information to help predict the effectiveness of that load getting a "one-shot-stop". For example, if an ammunition load is used in 10 torso shootings, incapacitating all but two with one shot, the "one-shot-stop" percentage for the total sample would be 80%.
Some argue that this hypothesis ignores any inherent selection bias. For example, high-velocity 9×19mm Parabellum hollow point rounds appear to have the highest percentage of one-shot stops. Rather than identifying this as an inherent property of the firearm/bullet combination, the situations where these have occurred need to be considered. The 9mm has been the predominantly used caliber of many police departments, so many of these one-shot-stops were probably made by well-trained police officers, where accurate placement would be a contributory factor. However, Marshall's database of "one-shot-stops" does include shootings from law enforcement agencies, private citizens, and criminals alike.
Critics of this theory point out that bullet placement is a very significant factor, but is only generally used in such one-shot-stop calculations, covering shots to the torso. Others contend that the importance of "one-shot stop" statistics is overstated, pointing out that most gun encounters do not involve a "shoot once and see how the target reacts" situation. Proponents contend that studying one-shot situations is the best way to compare cartridges as comparing a person shot once to a person shot twice does not maintain a control and has no value.
Big hole school
This school of thought says that the bigger the hole in the target, the higher the rate of bleed-out and thus the higher the rate of the aforementioned "one-shot stop". According to this theory, as long as the bullet does not pass entirely through the body, this approach satisfies both the energy-transfer and the overpenetration ideals. Those who support this theory cite the .40 S&W round, arguing that it has a better ballistic profile than the .45 ACP, and more stopping power than a 9mm.
The theory centers on the "permanent cavitation" element of a handgun wound. A big hole damages more tissue. It is therefore valid to a point, but penetration is also important, as a large bullet that does not penetrate will be less likely to strike vital blood vessels and blood-carrying organs such as the heart and liver, while a smaller bullet that penetrates deep enough to strike these organs or vessels will cause faster bleed-out through a smaller hole. The ideal may therefore be a combination: a large bullet that penetrates deeply, which can be achieved with a larger, slower non-expanding bullet, or a smaller, faster expanding bullet such as a hollow point.
In the extreme, a heavier bullet (which preserves momentum greater than a lighter bullet of the same caliber) may "overpenetrate", passing completely through the target without expending all of its kinetic energy. So-called "overpenetration" is not an important consideration when it comes to wounding incapacitation or "stopping power" because: (a) while a lower proportion of the bullet's energy is transferred to the target, a higher absolute amount of energy is shed than in partial penetration, and (b) overpenetration creates an exit wound.
Other contributing factors
As mentioned earlier, there are many factors, such as drug and alcohol levels within the body, body mass index, mental illness, motivation levels, and gunshot location on the body which may determine which round will kill or at least catastrophically affect a target during any given situation.
See also
Table of handgun and rifle cartridges
Taylor knock-out factor
References
Notes
External links
What We Didn't Know Hurt Us (PDF)
One Shot Drops – Surviving the Myth
Ballistics | Stopping power | [
"Physics"
] | 6,265 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
22,629,003 | https://en.wikipedia.org/wiki/Frontier%20molecular%20orbital%20theory | In chemistry, frontier molecular orbital theory is an application of molecular orbital theory describing HOMO–LUMO interactions.
History
In 1952, Kenichi Fukui published a paper in the Journal of Chemical Physics titled "A molecular orbital theory of reactivity in aromatic hydrocarbons." Though widely criticized at the time, he later shared the Nobel Prize in Chemistry with Roald Hoffmann for his work on reaction mechanisms. Hoffmann's work, coauthored with Robert Burns Woodward and published as "The Conservation of Orbital Symmetry," set out orbital-symmetry-based rules for the principal classes of pericyclic reactions in organic chemistry.
Fukui's own work looked at the frontier orbitals, and in particular the effects of the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) on reaction mechanisms, which led to it being called frontier molecular orbital theory (FMO theory). He used these interactions to better understand the conclusions of the Woodward–Hoffmann rules.
Theory
Fukui realized that a good approximation for reactivity could be found by looking at the frontier orbitals (HOMO/LUMO). This was based on three main observations of molecular orbital theory as two molecules interact:
The occupied orbitals of different molecules repel each other.
Positive charges of one molecule attract the negative charges of the other.
The occupied orbitals of one molecule and the unoccupied orbitals of the other (especially the HOMO and LUMO) interact with each other causing attraction.
In general, the total energy change of the reactants on approach of the transition state is described by the Klopman–Salem equation, derived from perturbational MO theory. The first and second observations correspond to taking into consideration the filled–filled interaction and Coulombic interaction terms of the equation, respectively. With respect to the third observation, primary consideration of the HOMO–LUMO interaction is justified by the fact that the largest contribution in the filled–unfilled interaction term of the Klopman–Salem equation comes from the pair of occupied and unoccupied molecular orbitals that are closest in energy (i.e., the pair with the smallest energy difference appearing in the denominator of that term). From these observations, frontier molecular orbital (FMO) theory simplifies prediction of reactivity to analysis of the interaction between the more energetically matched HOMO–LUMO pairing of the two reactants. In addition to providing a unified explanation of diverse aspects of chemical reactivity and selectivity, it agrees with the predictions of the Woodward–Hoffmann orbital symmetry and Dewar–Zimmerman aromatic transition state treatments of thermal pericyclic reactions, which are summarized in the following selection rule:
"A ground-state pericyclic change is symmetry-allowed when the total number of (4q+2)s and (4r)a components is odd"
(4q+2)s refers to the number of aromatic, suprafacial electron systems; likewise, (4r)a refers to antiaromatic, antarafacial systems. It can be shown that if the total number of these systems is odd then the reaction is thermally allowed.
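This counting rule is simple enough to express as a one-line check. The sketch below is illustrative only; the function name and argument names are invented for this example:

```python
def thermally_allowed(num_4q_plus_2_suprafacial, num_4r_antarafacial):
    """Generalized Woodward-Hoffmann selection rule: a ground-state (thermal)
    pericyclic change is symmetry-allowed when the total number of (4q+2)s and
    (4r)a components is odd."""
    return (num_4q_plus_2_suprafacial + num_4r_antarafacial) % 2 == 1

# Diels-Alder [4+2] with all six electrons suprafacial: one (4q+2)s component,
# no (4r)a components, so the reaction is thermally allowed.
print(thermally_allowed(1, 0))  # True
```

The same count reproduces the other cases discussed below, such as the suprafacial [1,5] hydrogen shift and the conrotatory ring opening of cyclobutene, each of which contributes a single (4q+2)s component and no (4r)a component.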
Applications
Cycloadditions
A cycloaddition is a reaction that simultaneously forms at least two new bonds, and in doing so, converts two or more open-chain molecules into rings. The transition states for these reactions typically involve the electrons of the molecules moving in continuous rings, making it a pericyclic reaction. These reactions can be predicted by the Woodward–Hoffmann rules and thus are closely approximated by FMO theory.
The Diels–Alder reaction between maleic anhydride and cyclopentadiene is allowed by the Woodward–Hoffmann rules because there are six electrons moving suprafacially and no electrons moving antarafacially. Thus, there is one (4q + 2)s component and no (4r)a component, which means the reaction is allowed thermally.
FMO theory also finds that this reaction is allowed and goes even further by predicting its stereoselectivity, which is unknown under the Woodward-Hoffmann rules. Since this is a [4 + 2], the reaction can be simplified by considering the reaction between butadiene and ethene. The HOMO of butadiene and the LUMO of ethene are both antisymmetric (rotationally symmetric), meaning the reaction is allowed.*
In terms of the stereoselectivity of the reaction between maleic anhydride and cyclopentadiene, the endo-product is favored, a result best explained through FMO theory. The maleic anhydride is an electron-withdrawing species that makes the dienophile electron deficient, forcing the regular Diels–Alder reaction. Thus, only the reaction between the HOMO of cyclopentadiene and the LUMO of maleic anhydride is allowed. Furthermore, though the exo-product is the more thermodynamically stable isomer, there are secondary (non-bonding) orbital interactions in the endo- transition state, lowering its energy and making the reaction towards the endo- product faster, and therefore more kinetically favorable. Since the exo-product has primary (bonding) orbital interactions, it can still form; but since the endo-product forms faster, it is the major product.
*Note: The HOMO of ethene and the LUMO of butadiene are both symmetric, meaning the reaction between these species is allowed as well. This is referred to as the "inverse electron demand Diels–Alder."
Sigmatropic reactions
A sigmatropic rearrangement is a reaction in which a sigma bond moves across a conjugated pi system with a concomitant shift in the pi bonds. The shift in the sigma bond may be antarafacial or suprafacial. In the example of a [1,5] shift in pentadiene, a suprafacial shift has six electrons moving suprafacially and none moving antarafacially, implying the reaction is allowed by the Woodward–Hoffmann rules. For an antarafacial shift, the reaction is not allowed.
These results can be predicted with FMO theory by observing the interaction between the HOMO and LUMO of the species. To use FMO theory, the reaction should be considered as two separate ideas: (1) whether or not the reaction is allowed, and (2) which mechanism the reaction proceeds through. In the case of a [1,5] shift on pentadiene, one considers the HOMO of the sigma bond (i.e., a constructive bond) and the LUMO of the butadiene fragment on the remaining four carbons. Assuming the reaction happens suprafacially, the shift results in the HOMO of butadiene being located on the four carbons that are not involved in the sigma bond of the product. Since the pi system changed from the LUMO to the HOMO, this reaction is allowed (though it would not be allowed if the pi system went from LUMO to LUMO).
To explain why the reaction happens suprafacially, first notice that the terminal orbitals are in the same phase. For there to be a constructive sigma bond formed after the shift, the reaction would have to be suprafacial. If the species shifted antarafacially then it would form an antibonding orbital and there would not be a constructive sigma shift.
In propene the shift would have to be antarafacial, but since the molecule is very small, that twist is not possible and the reaction is not allowed.
Electrocyclic reactions
An electrocyclic reaction is a pericyclic reaction involving the net loss of a pi bond and creation of a sigma bond with formation of a ring. This reaction proceeds through either a conrotatory or disrotatory mechanism. In the conrotatory ring opening of cyclobutene, there are two electrons moving suprafacially (on the pi bond) and two moving antarafacially (on the sigma bond). This means there is one 4q + 2 suprafacial system and no 4r antarafacial system; thus, the conrotatory process is thermally allowed by the Woodward–Hoffmann rules.
The HOMO of the sigma bond (i.e., a constructive bond) and the LUMO of the pi bond are important in the FMO theory consideration. If the ring opening uses a conrotatory process, then the reaction results with the HOMO of butadiene. As in the previous examples, the pi system moves from a LUMO species to a HOMO species, meaning this reaction is allowed.
See also
Addition to pi ligands
Klopman–Salem equation
Oxy Cope elimination pericyclic reaction
References
Quantum chemistry | Frontier molecular orbital theory | [
"Physics",
"Chemistry"
] | 1,836 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
" and optical physics"
] |
22,630,866 | https://en.wikipedia.org/wiki/Kostka%20polynomial | In mathematics, Kostka polynomials, named after the mathematician Carl Kostka, are families of polynomials that generalize the Kostka numbers. They are studied primarily in algebraic combinatorics and representation theory.
The two-variable Kostka polynomials Kλμ(q, t) are known by several names including Kostka–Foulkes polynomials, Macdonald–Kostka polynomials or q,t-Kostka polynomials. Here the indices λ and μ are integer partitions and Kλμ(q, t) is a polynomial in the variables q and t. Sometimes one considers single-variable versions of these polynomials that arise by setting q = 0, i.e., by considering the polynomial Kλμ(t) = Kλμ(0, t).
There are two slightly different versions of them, one called transformed Kostka polynomials.
The one-variable specializations of the Kostka polynomials can be used to relate Hall–Littlewood polynomials Pμ to Schur polynomials sλ:
sλ(x) = Σμ Kλμ(t) Pμ(x; t).
These polynomials were conjectured to have non-negative integer coefficients by Foulkes, and this was later proved in 1978 by Alain Lascoux and Marcel-Paul Schützenberger.
In fact, they show that
Kλμ(t) = ΣT t^charge(T),
where the sum is taken over all semi-standard Young tableaux T with shape λ and weight μ. Here, charge is a certain combinatorial statistic on semi-standard Young tableaux.
The Macdonald–Kostka polynomials Kλμ(q, t) can similarly be used to relate Macdonald polynomials (also denoted by Pμ) to Schur polynomials sλ.
Kostka numbers are special values of the one- or two-variable Kostka polynomials:
Kλμ = Kλμ(1) = Kλμ(0, 1).
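Because Kλμ(1) simply counts semi-standard Young tableaux of shape λ and weight μ, the Kostka number itself can be computed by brute-force enumeration. The sketch below is illustrative only (the function name is made up, and the approach is practical only for very small shapes):

```python
from itertools import product
from collections import Counter

def kostka(lam, mu):
    """Kostka number K_{lam,mu}: the number of semi-standard Young tableaux of
    shape lam (a partition) and weight mu, by brute-force enumeration."""
    cells = [(i, j) for i, row_len in enumerate(lam) for j in range(row_len)]
    n_values = len(mu)
    count = 0
    for filling in product(range(1, n_values + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        # The tableau must contain exactly mu_i copies of the entry i.
        if [Counter(filling)[v] for v in range(1, n_values + 1)] != list(mu):
            continue
        # Rows must weakly increase from left to right.
        if any(T[(i, j)] > T[(i, j + 1)] for (i, j) in cells if (i, j + 1) in T):
            continue
        # Columns must strictly increase from top to bottom.
        if any(T[(i, j)] >= T[(i + 1, j)] for (i, j) in cells if (i + 1, j) in T):
            continue
        count += 1
    return count

# K_{(2,1),(1,1,1)} = 2: the two standard Young tableaux of shape (2,1).
print(kostka((2, 1), (1, 1, 1)))
```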
Examples
References
External links
Short tables of Kostka polynomials
Long tables of Kostka polynomials
Symmetric functions | Kostka polynomial | [
"Physics",
"Mathematics"
] | 361 | [
"Algebra",
"Symmetric functions",
"Symmetry"
] |
22,631,535 | https://en.wikipedia.org/wiki/List%20of%20software%20for%20nuclear%20engineering | With the decreased cost and increased capabilities of computers, Nuclear Engineering has implemented computer software (Computer code to Mathematical model) into all facets of this field. There are a wide variety of fields associated with nuclear engineering, but computers and associated software are used most often in design and analysis. Neutron kinetics, thermal-hydraulics, and structural mechanics are all important in this effort. Each software needs to be tested and verified before use. The codes can be separated by use and function. Most of the software are written in C and Fortran.
Monte Carlo Radiation Transport
Geant4 (CERN)
McCARD (KAIST)
MCNP (LANL)
OpenMC
PHITS (JAEA)
SCALE (KENO V and KENO VI) (ORNL)
Serpent (VTT)
TRIPOLI-4 (CEA)
Transmutation, fuel depletion
ACAB code Activation and transmutation calculations for nuclear applications
ORIP_XXI code Isotope transmutation simulations
ORILL Code 1D transmutation, fuel depletion (burn-up) and radiological protection code
FISPACT-II Multiphysics, inventory and source-term code
MURE Serpent-MCNP utility for Reactor Evolution
VESTA Monte Carlo depletion interface code
Reactor Systems Analysis
Particle Accelerators and High Voltage Machines
Magnetic Fusion Research
Toolkit
PyNE The Nuclear Engineering Toolkit
Deterministic Radiation Transport
CASMO5 (Studsvik)
HELIOS-2 (Studsvik)
SCALE (ORNL)
MPACT (ORNL)
THOR
nTRACER (Seoul National University)
Steady-state Reactor Analysis
SIMULATE5
Spatial Kinetics
PARCS
SIMULATE-3K
NESTLE
Thermal-Hydraulics
ATHLET (GRS, Gesellschaft für Anlagen- und Reaktorsicherheit)
TRACE (NRC)
SPACE (KEPCO)
RELAP5-3D (Idaho National Laboratory)
GOTHIC (Numerical Advisory Solutions)
CATHARE (CEA)
FLICA-4 (CEA)
RETRAN (RETRAN-02 and RETRAN-3D)
VIPRE-01
PROTO-FLO
PROTO-HX
PROTO-HVAC
PROTO-Sprinkler
Computational Fluid Dynamics
CFX (ANSYS)
FLUENT (ANSYS)
StarCD (Siemens)
STAR-CCM+ (Siemens)
LOGOS
COBRA-TF
TransAT
code_saturne (EDF)
neptune_cfd (EDF)
Trio_CFD (CEA)
Severe Accident
ATHLET-CD (GRS)
MELCOR (Sandia National Laboratories)
MAAP (EPRI)
ASTEC (IRSN and GRS)
Many codes are supported by the U.S. Nuclear Regulatory Commission (NRC). These include SCALE, PARCS, TRACE (Formerly RELAP5 and TRAC-B), MELCOR, and many others.
http://www.nrc.gov/about-nrc/regulatory/research/safetycodes.html
See also
Safety code (nuclear reactor)
Computational science
Computational physics
Computer simulation
List of software for nanostructures modeling
References
External links
http://www.min.uc.edu/nuclear/current_research/sinema-research/codes-of-interest
https://www.nrc.gov/about-nrc/regulatory/research/safetycodes.html
http://www.oecd-nea.org/tools/abstract/list
http://www.ne.anl.gov/codes/
http://www.irsn.fr/EN/Research/Scientific-tools/Computer-codes/Pages/Computer-codes-2624.aspx
https://www.oecd-nea.org/tools/abstract/list/category/*
Nuclear technology
Physics software | List of software for nuclear engineering | [
"Physics"
] | 786 | [
"Nuclear technology",
"Physics software",
"Nuclear physics",
"Computational physics"
] |
22,631,982 | https://en.wikipedia.org/wiki/Sodium%20Reactor%20Experiment | The Sodium Reactor Experiment was a pioneering nuclear power plant built by Atomics International at the Santa Susana Field Laboratory near Simi Valley, California. The reactor operated from 1957 to 1964. On July 12, 1957 the Sodium Reactor Experiment became the first nuclear reactor in California to produce electrical power for a commercial power grid by powering the nearby city of Moorpark. In July 1959, the reactor experienced a partial meltdown when 13 of the reactor's 43 fuel elements partially melted, and radioactive gas was released into the atmosphere. The reactor was repaired and restarted in September 1960. In February 1964, the Sodium Reactor Experiment was in operation for the last time. Removal of the deactivated reactor was completed in 1981. Technical analyses of the 1959 incident have produced contrasting conclusions regarding the types and quantities of radioactive materials released. Members of the neighboring communities have expressed concerns about the possible impacts on their health and environment from the incident. In August 2009 the Department of Energy hosted a community workshop to discuss the 1959 incident.
Location
The Sodium Reactor Experiment facility was situated in a northwestern administrative section (known as Area IV) of the Santa Susana Field Laboratory, on a mountaintop known as The Hill, northwest of downtown Los Angeles near Simi Valley. When the Sodium Reactor Experiment was active, the Santa Susana Field Laboratory was operated by two business divisions of the North American Aviation company. The Rocketdyne division conducted liquid-propellant rocket engine testing and development at the site, while the Atomics International division focused on the development of commercial nuclear reactors and compact nuclear reactors for outer-space applications.
History
In 1954, the United States Atomic Energy Commission announced plans to test the basic nuclear reactor designs then under study by building five experimental reactors in five years. The Sodium Reactor Experiment, designed by Atomics International, was one of the chosen reactors. Design of the Sodium Reactor Experiment began in June 1954, and construction was underway in April 1955. A local utility company, Southern California Edison, installed and operated a 6.5 MW electric-power generating system. Controlled nuclear fission began on April 25, 1957.
The Los Angeles Times published a front-page story when Moorpark was supplied with nuclear-generated electricity. Edward R. Murrow’s television program See It Now featured the event as a special news report, broadcast on November 24, 1957. In July 1958, Atomics International produced a film describing the construction of the Sodium Reactor Experiment facility. The Sodium Reactor Experiment used sodium as a coolant. Heat generated in the reactor was transported by liquid sodium through the reactor facility piping system. The pumps used to move the sodium were hot-oil centrifugal pumps modified for use in a sodium system. A support system used tetralin (an oil-like fluid) to cool the pump seals, which prevented leakage of hot sodium at the pump shaft. In July 1959, tetralin seeped into the primary coolant system through a pump seal and was decomposed by the high-temperature sodium. The decomposed tetralin clogged several narrow cooling channels used by the sodium system to remove heat from the reactor fuel elements. As the tetralin residues clogged the reactor's internal cooling channels, 13 of the reactor's 43 fuel elements overheated and were damaged. The exact date of the fuel damage is unknown, but is believed to have occurred between July 12 and 26.
At the time, operators were aware of unusual reactor behavior but were unaware of the damage. They continued operations for several days before shutting down the reactor for examination. When the operators attempted to remove the fuel elements from the reactor, most elements were removed normally but some were found to be jammed. Pieces of the damaged fuel elements fell to the bottom of the reactor. During the following months, Atomics International personnel removed all the jammed fuel elements, retrieved the pieces of dropped fuel elements, cleaned the sodium system and installed a new reactor core. The Sodium Reactor Experiment was restarted on September 7, 1960, nearly fourteen months after the accident. In 1961, Atomics International produced another movie explaining the accident and the recovery operation. The Sodium Reactor Experiment operated until February 15, 1964.
In 1964, several modifications were made to the reactor. These modifications were completed in May 1965, but the Atomic Energy Agency and Atomics International decided to close the reactor rather than restart it. The facility decommissioning began in 1976 with the removal of the reactor core, support systems and contaminated soil under the structure. The source of the contaminated soil underneath the building does not appear related to the 1959 reactor incident. Decommissioning was completed in 1981. In 1982, Atomics International produced a film on the decommissioning and decontamination of the Sodium Reactor Experiment.
A group of certified health physicists from the Argonne National Laboratory conducted independent sampling to determine if the then-current minimum cleanup standards for radioactive residue were met. In 1985, the United States Department of Energy completed its evaluation of the survey reports and certified “…that there is no evidence the facilities pose a radiological threat to either personnel or the environment”. In 1999, the remaining structures were demolished and removed from the site.
Principles
The purpose of the Sodium Reactor Experiment was to demonstrate the feasibility of a sodium-cooled reactor as the heat source for a commercial power reactor to produce electricity. A secondary objective was to obtain operational data on slightly enriched fuel and uranium-thorium fuel mixtures. The reactor was designed as a flexible development facility, and was considered a development tool emphasizing the investigation of fuel materials.
Compared to water, sodium has a relatively low vapor pressure at the operating temperatures of the reactor. The Sodium Reactor Experiment design utilized sodium as a coolant so high-pressure water systems would not be required. The reactor did not have a containment pressure vessel, because the maximum credible accident would not release enough gas volume to require pressure containment. It was designed to retain gases at about atmospheric pressure and reduce diffusion leakage from potentially contaminated gas.
The Sodium Reactor Experiment included a complex of buildings, workshops and support systems. The reactor was housed in the main reactor building, which consisted of a high bay area and a hot cell facility. Three cleaning cells were located in the main building. The cleaning cells were designed to wash sodium from the fuel elements with water in an inert atmosphere. The cleaning allowed examination of the fuel rods after they were removed from the reactor. Because sodium reacts violently with water, the wash cell was sealed off and flooded with inert gas to minimize the reaction during washing. Operators worked behind thick walls to limit their exposure to radiation emitted from the fuel elements, which were loaded into the cell through a ceiling entrance hole (normally covered with a heavy shield plug).
The reactor core sat in a lower portion of a vessel lined with stainless steel and filled with liquid sodium. The Sodium Reactor Experiment reactor core contained 43 fuel elements, each comprising seven fuel rods. A fuel rod was a stainless steel tube six feet long, filled with twelve uranium fuel slugs. Many of the fuel elements in the SRE were instrumented with thermocouples located in the center of the fuel materials at several places in the core. Two of the thermocouples were monitored in the reactor control room, while the remaining measurements were recorded on instrumentation outside the control room. The sodium temperature was also monitored at several points within the reactor system.
At full power, sodium passed through a plenum chamber beneath the reactor core and up through the heat channels, absorbing heat released from the fuel elements, and discharged into the upper pool above the core at an average temperature of 950 °F (510 °C). The space above the sodium pool was filled with helium gas, maintained at a pressure of approximately three pounds per square inch (gauge). Piping circulated 50,000 pounds (22,680 kg) of heated liquid sodium from the reactor vessel to one of two available heat exchangers. One heat exchanger transferred heat from the primary sodium loop to a secondary loop, which in turn dissipated the heat in a steam generator which boiled water to make steam for use in a turbine generating electricity.
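The heat carried by the sodium loop scales with the mass flow rate and the temperature rise across the core. The back-of-the-envelope sketch below uses assumed values throughout (the flow rate, inlet temperature, and sodium heat capacity are illustrative guesses, not figures from SRE documentation; only the 510 °C outlet temperature is taken from the text above):

```python
# Assumed, illustrative values only.
mass_flow = 70.0      # kg/s of sodium through the core (assumed)
cp_sodium = 1250.0    # J/(kg*K), approximate specific heat of liquid sodium
t_inlet_c = 280.0     # degC sodium entering the core (assumed)
t_outlet_c = 510.0    # degC average sodium outlet temperature (from the text)

# Thermal power carried away by the coolant: Q = m_dot * c_p * (T_out - T_in)
thermal_power = mass_flow * cp_sodium * (t_outlet_c - t_inlet_c)
print(f"{thermal_power / 1e6:.1f} MW carried by the sodium")  # ~20 MW with these inputs
```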
The gases used as a cover gas in the sodium systems (such as the reactor and fuel-assembly wash cells) are potentially radioactive. The design of the Sodium Reactor Experiment support facilities was to collect all such gases into a tank, compress them and put them into a gas holding tank until they had decayed sufficiently to allow discharge into the environment from an outdoor stack.
1959 incident
Limited experience
The Sodium Reactor Experiment was designed and constructed to gain experience in the use of uranium fuel in a reactor used to produce electricity. The fuel elements in the Sodium Reactor Experiment were operating under untried conditions. Fuel-design limits were based on theoretical limits, not operating experience. Cladding materials were untested, with little or no operating experience.
Before the incident
During the operation of the Sodium Reactor Experiment, its operators conducted several test cycles (known as “runs”) to correct and modify facility support systems, conduct reactor physics experiments and generate electricity. During run three, the Sodium Reactor Experiment became the first nuclear reactor in the US to produce power for a commercial power grid. During Run eight, a black residue (believed to be decomposed tetralin) was noticed on fuel elements removed from the reactor. The fuel elements were washed in the wash cell, and returned to the reactor. The reactor returned to operation for high-temperature testing. Several anomalous temperature readings were occasionally noticed during the next few runs, while the operators attempted to understand the behavior and its cause. At the end of run 13, it was obvious that something had occurred which impaired the heat-transfer characteristics of the system. It was decided that a tetralin leak had reoccurred, and was the cause of the trouble. The reactor sodium was purged with gaseous nitrogen, to remove volatile contamination.
Wash cell explosion
Following run 13, an attempt was made to wash a fuel element in the wash cell. During the operation, an explosion occurred of sufficient magnitude to lift the shield plug out of the wash cell. It is believed the tetralin-related decomposition products caused a substantial amount of sodium to be trapped in the fuel rod elements by blocking drain holes. There were no reported injuries or fatalities associated with the wash-cell explosion. As a result of the explosion, no further washing of the elements was done. Measurements from within the reactor building indicated extremely high radioactivity levels throughout the building. Within several days radioactivity in the high bay had been reduced to normal levels, except for the area immediately around the wash cells.
Run 14 (July 12–July 26, 1959)
Shortly after the reactor was restarted, radiation monitors inside the reactor building showed a sharp increase in airborne radioactivity within the reactor building. The reactor remained operating, while attempts were made to determine the source of the radioactivity. Airborne radioactivity then returned to normal.
On July 13, the reactor experienced a series of temperature and radiation fluctuations (known as "excursions", because they were an unexpected departure from expected conditions). The power level rose from about 4 MW to about 14 MW (70% of full power) over a period of about two minutes. The excursion required the operators to manually override a malfunctioning automatic-control switch, and the reactor was shut down. The switch was repaired, and the reactor was slowly restarted. The following day, monitors again indicated elevated airborne radioactivity levels within the reactor building. The source was traced to two locations at the reactor core loading face, which were sealed. Airborne radioactivity within the reactor building was reduced. The reactor was restarted, but the operators noted unusual behavior over the next few days. The reactor increased power faster than expected, and the temperature difference between the reactor bottom (where the sodium entered) and the reactor top (where the sodium exited) was unusually high. Radioactivity within the reactor also increased. The operators investigated, performing several exercises to understand and correct the reactor behavior.
On July 23, it was decided to shut the reactor down because of high fuel temperature and an unacceptable top-bottom reactor temperature differential. While moving the elements to dislodge foreign material (and lower the exit temperatures), it was noticed that four reactor elements were stuck. On July 26 the reactor was shut down, and the first damaged fuel element was observed.
On July 29, 1959, an ad hoc investigative committee was established to study the incident and make recommendations. On August 21, 1959, The Van Nuys News published a story with the headline “Parted Fuel Element seen at Atomics International”. The article stated, “…a parted fuel element was observed” and “The fuel element damage is not an indication of unsafe reactor conditions. No release of radioactive materials to the plant or its environs occurred”. The investigative committee released “SRE Fuel Element Damage, An Interim Report” on November 15, 1959; the final report was made in 1961. The introductory material in both documents includes the statement, ”This report has been distributed according to the category ‘Reactors-Power’ as given in Standard Distribution Lists for Unclassified Scientific and Technical Reports", also noting that a total of 700 copies were printed. The documents were not labeled “secret”.
Radioactivity release
The Sodium Reactor Experiment core, high bay, reactor gas and exhaust stack were routinely monitored with particle detectors. Monitoring was underway at the time of the incident. Two sets of documentation appear to exist concerning the release of radioactive gases in the 1959 incident. The first set of documents include incident reports, technical analysis and radiation monitoring reports prepared by Atomics International personnel shortly after the incident. The second set of documents was primarily prepared to support a defense against a lawsuit against the current property owner (Boeing) about 45 years after the incident.
Following the incident, Atomics International personnel documented an analysis of the distribution of radioactive materials released into the reactor by the damaged fuel elements. The analysis reviewed the radioactive materials released into the sodium and cover gas above the reactor. The researchers determined the amount of radioactive materials released into the sodium, and noted the materials were removed with cold traps; the sodium was reused when the reactor returned to service. The document states that only radioactive xenon-133 and krypton-85 were found in the cover gas. Attempts to detect radioactive iodine-131 were unsuccessful; this was not explained by Atomics International at the time. Internal Atomics International memoranda show the gases were removed from the reactor following the incident and stored in tanks, where they were allowed to decay and then slowly released into the atmosphere.
A summary of radioactive gases released from the Sodium Reactor Experiment over a two-month period was prepared by Boeing. The document notes that 28 curies of fission gases were released into the environment through a stack, in a controlled manner which met federal requirements. Other expert estimates indicate that the available documentation does not resolve the uncertainties about how fission gases were released following the accident, and that the total amount of radioactivity released could be higher.
Controversy
Following the original July 1959 incident, it was next referenced in a 1976 report on nuclear activity in Los Angeles in a little-noticed publication by Another Mother For Peace. The Three Mile Island accident sparked interest by students and faculty member Daniel Hirsch at University of California Los Angeles, who acquired the extensive collection of documentation and film footage of the damaged reactor. The documents and film were supplied to local media, triggering extensive coverage.
In December 2003, the United States Environmental Protection Agency (EPA) completed an evaluation of the portion of the Santa Susana Field Laboratory previously involved with nuclear reactor development (including the Sodium Reactor Experiment site). The evaluation was based on data which included any remaining radiological impacts on water and soils in the area from the Sodium Reactor Experiment. The EPA determined, “the site is not eligible for inclusion on Superfund’s National Priorities List and no further Superfund response is warranted at this time”.
In February 2004 a class action lawsuit was filed against the landowner, Boeing, alleging (in part) that the Sodium Reactor Experiment caused harm to nearby residents. The plaintiffs produced an analysis of the incident prepared by expert witness Arjun Makhijani, who is the head of an anti-nuclear organization. Makhijani's analysis of the Sodium Reactor Experiment estimated the incident at the Sodium Reactor Experiment may have released up to 260 times more radioactive iodine-131 than the official estimates for the Three Mile Island Nuclear Generating Station release. The "260 times worse than Three Mile Island" assertion has been widely quoted. The "Three Mile Island" conclusion presented in the legal filing did not agree with data and documents prepared at the time of the SRE incident.
In August 2004 ground water under the former Sodium Reactor Experiment was sampled to determine the presence of tritium, which was undetected. The results were presented at a DOE-sponsored community meeting in June 2005 and in handouts at the meeting.
In May 2005 a response to the Makhijani analysis was prepared for the defense by Jerry Christian, who provided a technical analysis disputing Makhijani’s claim of iodine release following the incident. Christian noted that Atomics International personnel attempted to monitor iodine-131 without success, and reactor temperature conditions did not permit a significant formation of iodine. A more-detailed analysis was prepared for the plaintiffs by John A. Daniel. Daniel focused on evaluating plant conditions, radiation monitoring and documentation to determine the amount of radioactivity released. His analysis concluded that a smaller amount of radioactive gases was released from the SRE. Christian and Daniel's technical analyses contrasted with that prepared by Makhijani. The case was settled, reportedly with a large payment by Boeing to the plaintiffs (residents near the Santa Susana Field Laboratory who had cancer and other injuries from past site activities, including the SRE incident).
In July 2006, the History Channel broadcast a video summary of the 1959 Sodium Reactor Experiment incident during episode 19 of the Engineering Disasters documentary series. The segment features quotes from Dan Hirsch, a nuclear policy analyst, and David Lochbaum. The segment alleged that the incident was kept secret for 20 years, and the radioactivity release from the incident may have been as high as 240 times the radioactivity released from the accident at the Three Mile Island Nuclear Generating Station. The Engineering Disasters segment did not mention the technical analyses prepared for Boeing.
In October 2006 California legislators, responding to community calls for independent health studies in the wake of revelations about the site, established the Santa Susana Field Laboratory Advisory Panel. The panel consisted of independent experts from around the country (and one from Britain) and community representatives. It was a project of the Tides Center, funded by the US Department of Energy and later by the California Environmental Protection Agency (as mandated by the California State Legislature). The panel released a set of documents analyzing events at the Santa Susana Field Laboratory. Five reports by consultants focused on the analysis of the radiological impacts of the July 1959 Sodium Reactor Experiment incident. One, by David Lochbaum, concluded that contrary to Rocketdyne's claim that no radioactivity was released into the environment, "as much as 30% of the most worrisome of the radionuclides, iodine-131 and caesium-137, may have been released, with a best estimate of 15% of each". Scant and disconnected data prevented a quantitative assessment of exactly what gases escaped and when. In another report, Jan Beya attempted to provide an exposure estimate to epidemiologists interested in evaluating the effectiveness of radiation-induced-disease studies around the Santa Susana Field Laboratory. Beya noted that some meteorological information was withheld by the plant owner (Boeing). The estimates in the report were limited to scoping calculations with a wide range of uncertainty, but represented the current state of knowledge about the accident and its consequences according to experts who have analyzed the event.
In September 2008, Daniel Hirsch presented testimony in the U.S. Senate to the Committee on Environment and Public Works, chaired by California senator Barbara Boxer. Hirsch called the July 1959 event “one of the worst nuclear accidents in nuclear history” and testified that the government “covered up the seriousness of the accident”. A contrasting viewpoint, based on the technical analysis prepared for Boeing, was not presented at the hearing.
In April 2009, the Department of Energy announced the transfer of $38.3 million to the EPA for a complete radiological survey of an area of the Santa Susana Field Laboratory. The source of the funds was the American Recovery and Reinvestment Act of 2009. The DOE had provided funds earlier to the EPA for a portion of the survey, so the total funding provided for the Area IV survey is $41.5 million. The survey was scheduled to be completed in September 2011. In December 2012, the EPA released the results of testing done at the site. The Agency noted that it took 3735 soil samples during the study and of those samples more than 10% contained radioactivity higher than background level.
In July 2009, local media recognized the 50th anniversary of the July 1959 SRE incident. Local media reported that a former employee, John Pace, "broke his 50-year vow of secrecy" to describe his role in the reactor incident and recovery. A local newspaper featured photographs of Pace conducting activities at the SRE (monitoring the reactor, turning the top of the reactor core, placing sealer on asbestos piping, and seated at a console operating the reactor). The claim of secrecy contrasts with a press release, a motion picture and reports to the public following the 1959 incident. Jan Beyea was interviewed by a local newspaper; he reaffirmed his assertion that iodine-131 was released during the SRE incident, but it would "probably" not have produced a widespread effect on health.
In August 2009, the Department of Energy (DOE) hosted a public workshop in Simi Valley to explore expert and community perspectives on what occurred before, during and immediately after the July 1959 SRE incident. The workshop featured presentations from three experts: Paul Pickard of DOE's Sandia National Laboratories, Thomas Cochran of the Natural Resources Defense Council and Richard Denning of Ohio State University. Over 185 community members and Atomics International retirees attended the workshop. Posters (depicting key operation and accident timelines) and an evaluation of the reactor's radioactive material inventory and release into the environment were discussed. An electronic library of over 80 technical documents describing the design, operation, the 1959 incident and the activities taken to repair and restart the SRE is maintained by the DOE. Videos of the introductions, presentations, community comments and the question-and-answer session are available for viewing.
Aftermath
As a result of the incident, changes were made to the Sodium Reactor Experiment. Tetralin was eliminated, the sodium system was modified, the wash-cell cleaning process used steam instead of water, instrumentation was improved and the fuel-element geometry was modified. In September 1960, following recovery and cleanup operations, the Sodium Reactor Experiment began operation with a new reactor core. At the time of the July 1959 incident, the Sodium Reactor Experiment had operated for 10,344 hours. After the repairs were made and a new core loaded, the Sodium Reactor Experiment operated for an additional 26,716 hours and generated a total of 37 GWh of electricity.
In 1966, the Energy Technology Engineering Center was established at the Santa Susana Field Laboratory by the U.S. Atomic Energy Commission for the development and non-nuclear testing of liquid metal reactor components. The testing and development notably improved the safety and reliability of sodium pump seals. The Energy Technology Engineering Center designed, developed and performed full-scale testing for a wide variety of sodium components (such as cold traps, flow meters and valves) from 1965 to 1998.
References
External links
Company websites
U.S. DOE ETEC Closure Project Website
Santa Susana Field Laboratory website Hosted by the Boeing Company
Sodium Reactor Experiment videos
Sodium Reactor Experiment Construction, July 1958
Description of the July 1959 incident and recovery, November, 1961
Removal of the Sodium Reactor Experiment, March 1982
Other links
Starr & Dickinson, Sodium graphite reactors (1958).
Construction of the Sodium Reactor Experiment — 1959 pamphlet
Cleanuprocketdyne.org Blog
The Aerospace Cancer Museum of Los Angeles Blog
SSFL Community Advisory Group 2013 Website
SSFL CAG (community advisory group) — SSFL Forum
The Rocketdyne Information Society — SSFL Forum
Madeline Felkins Website
Santa Susana Advisory Panel — closed forum
Miller-McCune: 50 years after nuclear meltdown
SSFL CAG petition
Atomics International
Nuclear power plants in California
Nuclear research reactors
Civilian nuclear power accidents
Energy infrastructure in California
Former nuclear power stations in the United States
Former nuclear research institutes
Simi Hills
Energy infrastructure completed in 1957
1957 establishments in California
Buildings and structures in Ventura County, California
Buildings and structures demolished in 1999
Demolished buildings and structures in California
1959 disasters in the United States
1959 in California
History of the San Fernando Valley
History of Simi Valley, California
History of Ventura County, California
Rocketdyne
Nuclear accidents and incidents in the United States
1964 disestablishments in California
Former power stations in California | Sodium Reactor Experiment | [
"Technology"
] | 5,106 | [
"Environmental impact of nuclear power",
"Civilian nuclear power accidents"
] |
18,865,642 | https://en.wikipedia.org/wiki/Supersymmetry%20algebra | In theoretical physics, a supersymmetry algebra (or SUSY algebra) is a mathematical formalism for describing the relation between bosons and fermions. The supersymmetry algebra contains not only the Poincaré algebra and a compact subalgebra of internal symmetries, but also contains some fermionic supercharges, transforming as a sum of N real spinor representations of the Poincaré group. Such symmetries are allowed by the Haag–Łopuszański–Sohnius theorem. When N>1 the algebra is said to have extended supersymmetry. The supersymmetry algebra is a semidirect sum of a central extension of the super-Poincaré algebra by a compact Lie algebra B of internal symmetries.
Bosonic fields commute while fermionic fields anticommute. In order to have a transformation that relates the two kinds of fields, the introduction of a Z2-grading under which the even elements are bosonic and the odd elements are fermionic is required. Such an algebra is called a Lie superalgebra.
Just as one can have representations of a Lie algebra, one can also have representations of a Lie superalgebra, called supermultiplets. For each Lie algebra, there exists an associated Lie group which is connected and simply connected, unique up to isomorphism, and the representations of the algebra can be extended to create group representations. In the same way, representations of a Lie superalgebra can sometimes be extended into representations of a Lie supergroup.
Structure of a supersymmetry algebra
The general supersymmetry algebra for spacetime dimension d, and with the fermionic piece consisting of a sum of N irreducible real spinor representations, has a structure of the form
(P×Z).Q.(L×B)
where
P is a bosonic abelian vector normal subalgebra of dimension d, normally identified with translations of spacetime. It is a vector representation of L.
Z is a scalar bosonic algebra in the center whose elements are called central charges.
Q is an abelian fermionic spinor subquotient algebra, and is a sum of N real spinor representations of L. (When the signature of spacetime is divisible by 4 there are two different spinor representations of L, so there is some ambiguity about the structure of Q as a representation of L.) The elements of Q, or rather their inverse images in the supersymmetry algebra, are called supercharges. The subalgebra (P×Z).Q is sometimes also called the supersymmetry algebra and is nilpotent of length at most 2, with the Lie bracket of two supercharges lying in P×Z.
L is a bosonic subalgebra, isomorphic to the Lorentz algebra in d dimensions, of dimension d(d–1)/2
B is a scalar bosonic subalgebra, given by the Lie algebra of some compact group, called the group of internal symmetries. It commutes with P,Z, and L, but may act non-trivially on the supercharges Q.
The terms "bosonic" and "fermionic" refer to even and odd subspaces of the superalgebra.
The terms "scalar", "spinor", "vector", refer to the behavior of subalgebras under the action of the Lorentz algebra L.
The number N is the number of irreducible real spin representations. When the signature of spacetime is divisible by 4 this is ambiguous as in this case there are two different irreducible real spinor representations, and the number N is sometimes replaced by a pair of integers (N1, N2).
The supersymmetry algebra is sometimes regarded as a real super algebra, and sometimes as a complex algebra with a hermitian conjugation. These two views are essentially equivalent, as the real algebra can be constructed from the complex algebra by taking the skew-Hermitian elements, and the complex algebra can be constructed from the real one by taking tensor product with the complex numbers.
The bosonic part of the superalgebra is isomorphic to the product of the Poincaré algebra P.L with the algebra Z×B of internal symmetries.
When N>1 the algebra is said to have extended supersymmetry.
When Z is trivial, the subalgebra P.Q.L is the super-Poincaré algebra.
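As a concrete illustration of the statement that the bracket of two supercharges lies in the translations, consider the simplest case of d = 4 and N = 1 with trivial central charges. In the common two-component Weyl-spinor conventions (a choice of convention assumed here, not fixed by this article), the supercharges satisfy

```latex
\{ Q_\alpha , \bar{Q}_{\dot{\beta}} \} = 2\, \sigma^{\mu}_{\alpha\dot{\beta}}\, P_\mu , \qquad
\{ Q_\alpha , Q_\beta \} = \{ \bar{Q}_{\dot{\alpha}} , \bar{Q}_{\dot{\beta}} \} = 0 , \qquad
[ P_\mu , Q_\alpha ] = 0 .
```

The bracket of two supercharges thus closes on the momentum generators P, so the subalgebra generated by P and Q is nilpotent of length two, consistent with the description above.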
See also
Adinkra symbols
Super-Poincaré algebra
Superconformal algebra
Supersymmetry algebras in 1 + 1 dimensions
N = 2 superconformal algebra
References
Supersymmetry
Lie algebras | Supersymmetry algebra | [
"Physics"
] | 998 | [
"Symmetry",
"Unsolved problems in physics",
"Physics beyond the Standard Model",
"Supersymmetry"
] |
18,869,034 | https://en.wikipedia.org/wiki/Virtual%20lab%20automation | Virtual Lab Automation refers to a category of software solutions to automate IT labs using virtualization technology.
History
Virtualization | Virtual lab automation | [
"Engineering"
] | 25 | [
"Computer networks engineering",
"Virtualization"
] |
18,869,317 | https://en.wikipedia.org/wiki/Kayles | Kayles is a simple impartial game in combinatorial game theory, invented by Henry Dudeney in 1908. Given a row of imagined bowling pins, players take turns to knock out either one pin, or two adjacent pins, until all the pins are gone. Using the notation of octal games, Kayles is denoted 0.77.
Rules
Kayles is played with a row of tokens, which represent bowling pins. The row may be of any length. The two players alternate; each player, on his or her turn, may remove either any one pin (a ball bowled directly at that pin), or two adjacent pins (a ball bowled to strike both). Under the normal play convention, a player loses when they have no legal move (that is, when all the pins are gone). The game can also be played using misère rules; in this case, the player who cannot move wins.
History
Kayles was invented by Henry Dudeney. Richard Guy and Cedric Smith were first to completely analyze the normal-play version, using Sprague-Grundy theory. The misère version was analyzed by William Sibert in 1973, but he did not publish his work until 1989.
The name "Kayles" is an Anglicization of the French quilles, meaning "bowling pins".
Analysis
Most players quickly discover that the first player has a guaranteed win in normal Kayles whenever the row length is greater than zero. This win can be achieved using a symmetry strategy. On their first move, the first player should move so that the row is broken into two sections of equal length. This restricts all future moves to one section or the other. Now, the first player merely imitates the second player's moves in the opposite row.
It is more interesting to ask what the nim-value is of a row of length n. This value is often denoted K(n); it is a nimber, not a number. By the Sprague–Grundy theorem, K(n) is the mex, over all possible moves, of the nim-sum of the nim-values of the two resulting sections. For example, from a row of length 5 one can remove a single pin to leave sections of lengths 4 and 0, 3 and 1, or 2 and 2, or remove two adjacent pins to leave sections of lengths 3 and 0 or 2 and 1; K(5) is the mex of the nim-sums of the values of these pairs.
Recursive calculation of the values, starting with K(0) = 0, shows that the nim-value sequence eventually becomes periodic with period 12, so for all sufficiently large n the value of K(n) depends only on n modulo 12.
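The recursion just described is easy to carry out by machine. The following sketch (the function names are made up for this example) computes the nim-values of single Kayles rows directly from the Sprague–Grundy recursion:

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: the smallest non-negative integer not in the set."""
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def kayles_value(n):
    """Nim-value K(n) of a single Kayles row of n pins."""
    if n == 0:
        return 0
    reachable = set()
    # Remove one pin, splitting the row into sections of lengths i and n-1-i.
    for i in range(n):
        reachable.add(kayles_value(i) ^ kayles_value(n - 1 - i))
    # Remove two adjacent pins, leaving sections of lengths i and n-2-i.
    for i in range(n - 1):
        reachable.add(kayles_value(i) ^ kayles_value(n - 2 - i))
    return mex(reachable)

print([kayles_value(n) for n in range(12)])  # starts 0, 1, 2, 3, 1, 4, ...
```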
Applications
Because certain positions in dots and boxes reduce to Kayles positions, it is helpful to understand Kayles in order to analyze a generic dots and boxes position.
Computational complexity
Under normal play, Kayles can be solved in polynomial time using Sprague-Grundy theory.
Generalizations
Node Kayles is a generalization of Kayles to graphs in which each bowl “knocks down” (removes) a desired vertex and all its neighboring vertices. Alternatively, this game can be viewed as two players finding an independent set together. Winner determination is solvable in polynomial time for any family of graphs with bounded asteroidal number (defined as the size of a largest subset of vertices such that the removal of the closed neighborhood of any vertex in the set leaves the remaining vertices of the set in the same connected component).
Similarly, in the clique-forming game, two players must find a clique in the graph. The last to play wins. Schaefer (1978) proved that deciding the outcome of these games is PSPACE-complete. The same result holds for a partisan version of node Kayles, in which, for every node, only one of the players is allowed to choose that particular node as the knock down target.
See also
Combinatorial game theory
Octal games
Dawson's Kayles
Nimber
References
Combinatorial game theory
Mathematical games | Kayles | [
"Mathematics"
] | 794 | [
"Mathematical games",
"Recreational mathematics",
"Combinatorics",
"Game theory",
"Combinatorial game theory"
] |
18,870,628 | https://en.wikipedia.org/wiki/Gianotti%E2%80%93Crosti%20syndrome | Gianotti–Crosti syndrome (), also known as infantile papular acrodermatitis, papular acrodermatitis of childhood, and papulovesicular acrolocated syndrome, is a reaction of the skin to a viral infection. Hepatitis B virus and Epstein–Barr virus are the most frequently reported pathogens. Other viruses implicated are hepatitis A virus, hepatitis C virus, cytomegalovirus, coxsackievirus, adenovirus, enterovirus, rotavirus, rubella virus, HIV, and parainfluenza virus.
It is named for Ferdinando Gianotti and Agostino Crosti.
Presentation
Gianotti–Crosti syndrome mainly affects infants and young children. Children as young as 1.5 months and up to 12 years of age are reported to be affected. It is generally recognized as a papular or papulovesicular skin rash occurring mainly on the face and distal aspects of the four limbs. Purpura is generally not seen but may develop upon tourniquet test. However, extensive purpura without any hemorrhagic disorder has been reported. The presence of less florid lesions on the trunk does not exclude the diagnosis. Lymphadenopathy and hepatomegaly are sometimes noted. Raised AST and ALT levels with no rise in conjugated and unconjugated bilirubin levels are sometimes detectable, although the absence of such does not exclude the diagnosis. Spontaneous disappearance of the rash usually occurs after 15 to 60 days.
Diagnosis
The diagnosis of Gianotti–Crosti syndrome is clinical. A validated diagnostic criterion is as follows:
A patient is diagnosed as having Gianotti–Crosti syndrome if:
On at least one occasion or clinical encounter, he/she exhibits all the positive clinical features,
On all occasions or clinical encounters related to the rash, he/she does not exhibit any of the negative clinical features,
None of the differential diagnoses is considered to be more likely than Gianotti–Crosti syndrome on clinical judgment, and
If lesional biopsy is performed, the histopathological findings are consistent with Gianotti–Crosti syndrome.
The positive clinical features are:
Monomorphous, flat-topped, pink-brown papules or papulovesicles 1-10mm in diameter.
At least three of the following four sites involved – (1) cheeks, (2) buttocks, (3) extensor surfaces of forearms, and (4) extensor surfaces of legs.
Being symmetrical, and
Lasting for at least ten days.
The negative clinical features are:
Extensive truncal lesions, and
Scaly lesions.
Differential diagnosis
The differential diagnoses are: acrodermatitis enteropathica, erythema infectiosum, erythema multiforme, hand-foot-and-mouth disease, Henoch–Schönlein purpura, Kawasaki disease, lichen planus, papular urticaria, papular purpuric gloves and socks syndrome, and scabies.
Treatment
Gianotti–Crosti syndrome is a harmless, self-limiting condition, so treatment may not be required. Management focuses mainly on controlling itching, providing symptomatic relief, and avoiding further complications. For symptomatic relief from itching, oral antihistamines or soothing lotions such as calamine lotion or zinc oxide may be used. If there is an associated condition such as a streptococcal infection, antibiotics may be required.
See also
List of cutaneous conditions
References
External links
Virus-related cutaneous conditions
Epstein–Barr virus–associated diseases
Syndromes affecting the skin
Syndromes caused by microbes | Gianotti–Crosti syndrome | [
"Biology"
] | 787 | [
"Microorganisms",
"Syndromes caused by microbes"
] |
27,024,156 | https://en.wikipedia.org/wiki/Lithium%20tetrakis%28pentafluorophenyl%29borate | Lithium tetrakis(pentafluorophenyl)borate is the lithium salt of the weakly coordinating anion (B(C6F5)4)−. Because of its weakly coordinating ability, lithium tetrakis(pentafluorophenyl)borate is commercially valuable as a salt component of catalyst compositions for olefin polymerization reactions and in electrochemistry. It is a water-soluble compound. Its anion is closely related to the non-coordinating anion known as BARF. The tetrakis(pentafluorophenyl)borates have the advantage of operating on a one-to-one stoichiometric basis with Group IV transition metal polyolefin catalysts, unlike methylaluminoxane (MAO) which may be used in large excess.
Structure and properties
The anion is tetrahedral with B-C bond lengths of approximately 1.65 Angstroms. The salt has only been obtained as the etherate, and the crystallography confirms that four ether (OEt2) molecules are bound to the lithium cation, with Li-O bond lengths of approximately 1.95 Å. The [Li(OEt2)4]+ complex is tetrahedral.
Preparation
The salt was first produced in studies on tris(pentafluorophenyl)boron, a well known Lewis acidic compound. Combining equimolar ether solutions of pentafluorophenyllithium and tris(pentafluorophenyl)boron gives the lithium salt of tetrakis(pentafluorophenyl)borate, which precipitates the etherate as a white solid:
(C6F5)3B + Li(C6F5) → [Li(OEt2)3][B(C6F5)4]
Since its discovery, many revised syntheses have been described.
Reactions
Lithium tetrakis(pentafluorophenyl)borate is primarily used to prepare cationic transition metal complexes:
LiB(C6F5)4 + MLnCl → LiCl + [MLn]B(C6F5)4
LiB(C6F5)4 is converted to the trityl reagent [Ph3C][B(C6F5)4], which is a useful activator of Lewis-acid catalysts.
Safety
Lithium tetrakis(pentafluorophenyl)borate will deflagrate on melting (ca. 265 °C) giving thick black smoke, even under nitrogen. The mechanism is unknown. Metal tetrakis(pentafluorophenyl)borates of K and Na decompose vigorously as well.
See also
Tetraphenylborate
References
Lithium salts
Organoboron compounds
Pentafluorophenyl compounds
Perfluorinated compounds
Catalysts | Lithium tetrakis(pentafluorophenyl)borate | [
"Chemistry"
] | 611 | [
"Catalysis",
"Catalysts",
"Lithium salts",
"Salts",
"Chemical kinetics"
] |
25,519,739 | https://en.wikipedia.org/wiki/MAGPIE | MAGPIE (Mega Ampere Generator for Plasma Implosion Experiments) is a pulsed power generator based at Imperial College London, United Kingdom. The generator was originally designed to produce a current pulse with a maximum of 1.8 million amperes in 240 nanoseconds (150 nanoseconds rise time). At present the machine is operated with a maximum current of approximately 1.4 million amperes and operates as a z-pinch facility.
The generator consists of four voltage multipliers (Marx generators), each one containing 24 capacitors. At the maximum charging voltage of 100 kilovolts, an output voltage of 2.4 million volts is produced and delivered into the load section. The pulses have a rise time of 150 ns and can be delivered into a high-impedance load through a 1.25 Ω final line impedance.
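As a worked check of these figures (a sketch that assumes an ideal Marx generator, whose erected output voltage is simply the number of capacitor stages multiplied by the charging voltage): 24 stages × 100 kV = 2.4 MV, in agreement with the quoted output voltage.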
Research at the MAGPIE generator has focused in the past on the field of inertial confinement fusion, but has recently seen significant adaptations for studies of laboratory astrophysics. In particular, the study of astrophysical jets in young stellar objects (see Herbig–Haro object) has been motivated by improved observational capabilities in recent years. The simulation of such large-scale events has been undertaken at MAGPIE both from a computational point of view, through the GORGON code, and from an experimental one by means of the generator.
MAGPIE is one of several similar pulsed power machines worldwide, of which the largest and most powerful is the Z-machine at Sandia National Laboratories, Albuquerque, New Mexico.
References
External links
MAGPIE pulsed-power generator website at Imperial College London
Electrical generators
Plasma physics facilities | MAGPIE | [
"Physics",
"Technology"
] | 340 | [
"Electrical generators",
"Machines",
"Plasma physics",
"Physical systems",
"Plasma physics facilities"
] |
25,520,575 | https://en.wikipedia.org/wiki/Xylomannan | Xylomannan is an antifreeze molecule, found in the freeze-tolerant Alaskan beetle Upis ceramboides. Unlike antifreeze proteins (AFPs), xylomannan is not a protein. Instead, it is a combination of a sugar (saccharide) and a fatty acid that is found in cell membranes. As such, it is expected to work in a different manner than AFPs. It is believed to work by incorporating itself directly into the cell membrane and preventing the freezing of water molecules within the cell.
Xylomannan is also found in the red seaweed Nothogenia fastigiata (Scinaiaceae family). Fraction F6 of a sulphated xylomannan from Nothogenia fastigiata was found to inhibit replication of a variety of viruses, including Herpes simplex virus types 1 and 2 (HSV-1, HSV-2), Human cytomegalovirus (HCMV, HHV-5), Respiratory syncytial virus (RSV), Influenzavirus A, Influenzavirus B, Junin and Tacaribe virus, Simian immunodeficiency virus, and (weakly) Human immunodeficiency virus types 1 and 2.
References
Cryobiology | Xylomannan | [
"Physics",
"Chemistry",
"Biology"
] | 259 | [
"Biochemistry",
"Physical phenomena",
"Phase transitions",
"Cryobiology"
] |
25,523,630 | https://en.wikipedia.org/wiki/Jargon%20File | The Jargon File is a glossary and usage dictionary of slang used by computer programmers. The original Jargon File was a collection of terms from technical cultures such as the MIT AI Lab, the Stanford AI Lab (SAIL) and others of the old ARPANET AI/LISP/PDP-10 communities, including Bolt, Beranek and Newman (BBN), Carnegie Mellon University, and Worcester Polytechnic Institute. It was published in paperback form in 1983 as The Hacker's Dictionary (edited by Guy Steele) and revised in 1991 as The New Hacker's Dictionary (ed. Eric S. Raymond; third edition published 1996).
The concept of the file began with the Tech Model Railroad Club (TMRC) that came out of early TX-0 and PDP-1 hackers in the 1950s, where the term hacker emerged and the ethic, philosophies and some of the nomenclature emerged.
1975 to 1983
The Jargon File (referred to here as "Jargon-1" or "the File") was made by Raphael Finkel at Stanford in 1975. From that time until the plug was finally pulled on the SAIL computer in 1991, the File was named "AIWORD.RF[UP,DOC]" ("[UP,DOC]" was a system directory for "User Program DOCumentation" on the WAITS operating system). Some terms, such as frob, foo and mung are believed to date back to the early 1950s from the Tech Model Railroad Club at MIT and documented in the 1959 Dictionary of the TMRC Language compiled by Peter Samson. The revisions of Jargon-1 were all unnumbered and may be collectively considered "version 1". Note that it was always called "AIWORD" or "the Jargon file", never "the File"; the last term was coined by Eric Raymond.
In 1976, Mark Crispin, having seen an announcement about the File on the SAIL computer, FTPed a copy of the File to the MIT AI Lab. He noticed that it was hardly restricted to "AI words", and so stored the file on his directory, named as "AI:MRC;SAIL JARGON" ("AI" lab computer, directory "MRC", file "SAIL JARGON").
Raphael Finkel dropped out of active participation shortly thereafter and Don Woods became the SAIL contact for the File (which was subsequently kept in duplicate at SAIL and MIT, with periodic resynchronizations).
The File expanded by fits and starts until 1983. Richard Stallman was prominent among the contributors, adding many MIT and ITS-related coinages. The Incompatible Timesharing System (ITS) was named to distinguish it from another early MIT computer operating system, Compatible Time-Sharing System (CTSS).
In 1981, a hacker named Charles Spurgeon got a large chunk of the File published in Stewart Brand's CoEvolution Quarterly (issue 29, pages 26–35) with illustrations by Phil Wadler and Guy Steele (including a couple of Steele's Crunchly cartoons). This appears to have been the File's first paper publication.
A late version of Jargon-1, expanded with commentary for the mass market, was edited by Guy Steele into a book published in 1983 as The Hacker's Dictionary (Harper & Row CN 1082, ). It included all of Steele's Crunchly cartoons. The other Jargon-1 editors (Raphael Finkel, Don Woods, and Mark Crispin) contributed to this revision, as did Stallman and Geoff Goodfellow. This book (now out of print) is hereafter referred to as "Steele-1983" and those six as the Steele-1983 coauthors.
1983 to 1990
Shortly after the publication of Steele-1983, the File effectively stopped growing and changing. Originally, this was due to a desire to freeze the file temporarily to ease the production of Steele-1983, but external conditions caused the "temporary" freeze to become permanent.
The AI Lab culture had been hit hard in the late 1970s, by funding cuts and the resulting administrative decision to use vendor-supported hardware and associated proprietary software instead of homebrew whenever possible. At MIT, most AI work had turned to dedicated Lisp machines. At the same time, the commercialization of AI technology lured some of the AI Lab's best and brightest away to startups along the Route 128 strip in Massachusetts and out west in Silicon Valley. The startups built Lisp machines for MIT; the central MIT-AI computer became a TWENEX system rather than a host for the AI hackers' ITS.
The Stanford AI Lab had effectively ceased to exist by 1980, although the SAIL computer continued as a computer science department resource until 1991. Stanford became a major TWENEX site, at one point operating more than a dozen TOPS-20 systems, but by the mid-1980s, most of the interesting software work was being done on the emerging BSD Unix standard.
In April 1983, the PDP-10-centered cultures that had nourished the File were dealt a death-blow by the cancellation of the Jupiter project at DEC. The File's compilers, already dispersed, moved on to other things. Steele-1983 was partly a monument to what its authors thought was a dying tradition; no one involved realized at the time just how wide its influence was to be.
As mentioned in some editions:
1990 and later
A new revision was begun in 1990, which contained nearly the entire text of a late version of Jargon-1 (a few obsolete PDP-10-related entries were dropped after consultation with the editors of Steele-1983). It merged in about 80% of the Steele-1983 text, omitting some framing material and a very few entries introduced in Steele-1983 that are now only of historical interest.
The new version cast a wider net than the old Jargon File; its aim was to cover not just AI or PDP-10 hacker culture but all of the technical computing cultures in which the true hacker-nature is manifested. More than half of the entries derived from Usenet and represented jargon then current in the C and Unix communities, but special efforts were made to collect jargon from other cultures including IBM PC programmers, Amiga fans, Mac enthusiasts, and even the IBM mainframe world.
Eric Raymond maintained the new File with assistance from Guy Steele, and is the credited editor of the print version of it, The New Hacker's Dictionary (published by MIT Press in 1991); hereafter Raymond-1991. Some of the changes made under his watch were controversial; early critics accused Raymond of unfairly changing the file's focus to the Unix hacker culture instead of the older hacker cultures where the Jargon File originated. Raymond has responded by saying that the nature of hacking had changed and the Jargon File should report on hacker culture, and not attempt to enshrine it. After the second edition of NHD (MIT Press, 1993; hereafter Raymond-1993), Raymond was accused of adding terms reflecting his own politics and vocabulary, even though he says that entries to be added are checked to make sure that they are in live use, not "just the private coinage of one or two people".
The Raymond version was revised again, to include terminology from the nascent subculture of the public Internet and the World Wide Web, and published by MIT Press as The New Hacker's Dictionary, Third Edition, in 1996.
No updates have been made to the official Jargon File since 2003. A volunteer editor produced two updates, reflecting later influences (mostly excoriated) from text messaging language, LOLspeak, and Internet slang in general; the last was produced in January 2012.
Impact and reception
Influence
Despite its tongue-in-cheek approach, multiple other style guides and similar works have cited The New Hacker's Dictionary as a reference, and even recommended following some of its "hackish" best practices. The Oxford English Dictionary has used the NHD as a source for computer-related neologisms. The Chicago Manual of Style, the leading American academic and book-publishing style guide, beginning with its 15th edition (2003) explicitly defers, for "computer writing", to the quotation punctuation style logical quotation recommended by the essay "Hacker Writing Style" in The New Hacker's Dictionary (and cites NHD for nothing else). The 16th edition (2010, the current issue as of this writing) does likewise. The National Geographic Style Manual lists NHD among only 8 specialized dictionaries, out of 22 total sources, on which it is based. That manual is the house style of NGS publications, and has been available online for public browsing since 1995. The NGSM does not specify what, in particular, it drew from the NHD or any other source.
Aside from these guides and the Encyclopedia of New Media, the Jargon file, especially in print form, is frequently cited for both its definitions and its essays, by books and other works on hacker history, cyberpunk subculture, computer jargon and online style, and the rise of the Internet as a public medium, in works as diverse as the 20th edition of A Bibliography of Literary Theory, Criticism and Philology edited by José Ángel García Landa (2015); Wired Style: Principles of English Usage in the Digital Age by Constance Hale and Jessie Scanlon of Wired magazine (1999); Transhumanism: The History of a Dangerous Idea by David Livingstone (2015); Mark Dery's Flame Wars: The Discourse of Cyberculture (1994) and Escape Velocity: Cyberculture at the End of the Century (2007); Beyond Cyberpunk! A Do-it-yourself Guide to the Future by Gareth Branwyn and Peter Sugarman (1991); and numerous others.
Time magazine used The New Hacker's Dictionary (Raymond-1993) as the basis for an article about online culture in the November 1995 inaugural edition of the "Time Digital" department. NHD was cited by name on the front page of The Wall Street Journal. Upon the release of the second edition, Newsweek used it as a primary source, and quoted entries in a sidebar, for a major article on the Internet and its history. The MTV show This Week in Rock used excerpts from the Jargon File in its "CyberStuff" segments. Computing Reviews used one of the Jargon File's definitions on its December 1991 cover.
On October 23, 2003, The New Hacker's Dictionary was used in a legal case. SCO Group cited the 1996 edition definition of "FUD" (fear, uncertainty and doubt), which dwelt on questionable IBM business practices, in a legal filing in the civil lawsuit SCO Group, Inc. v. International Business Machines Corp. (In response, Raymond added SCO to the entry in a revised copy of the Jargon File, feeling that SCO's own practices deserved similar criticism.)
Defense of the term hacker
The book is particularly noted for helping (or at least trying) to preserve the distinction between a hacker (a consummate programmer) and a cracker (a computer criminal); even though not reviewing the book in detail, both the London Review of Books and MIT Technology Review remarked on it in this regard. In a substantial entry on the work, the Encyclopedia of New Media by Steve Jones (2002) observed that this defense of the term hacker was a motivating factor for both Steele's and Raymond's print editions:
Reviews and reactions
PC Magazine in 1984, stated that The Hacker's Dictionary was superior to most other computer-humor books, and noted its authenticity to "hard-core programmers' conversations", especially slang from MIT and Stanford. Reviews quoted by the publisher include: William Safire of The New York Times referring to the Raymond-1991 NHD as a "sprightly lexicon" and recommending it as a nerdy gift that holiday season (this reappeared in his "On Language" column again in mid-October 1992); Hugh Kenner in Byte suggesting that it was so engaging that one's reading of it should be "severely timed if you hope to get any work done"; and Mondo 2000 describing it as "slippery, elastic fun with language", as well as "not only a useful guidebook to very much un-official technical terms and street tech slang, but also a de facto ethnography of the early years of the hacker culture". Positive reviews were also published in academic as well as computer-industry publications, including IEEE Spectrum, New Scientist, PC Magazine, PC World, Science, and (repeatedly) Wired.
US game designer Steve Jackson, writing for Boing Boing magazine in its pre-blog, print days, described NHD essay "A Portrait of J. Random Hacker" as "a wonderfully accurate pseudo-demographic description of the people who make up the hacker culture". He was nevertheless critical of Raymond's tendency to editorialize, even "flame", and of the Steele cartoons, which Jackson described as "sophomoric, and embarrassingly out of place beside the dry and sophisticated humor of the text". He wound down his review with some rhetorical questions:
The third print edition garnered additional coverage, in the usual places like Wired (August 1996), and even in mainstream venues, including People magazine (October 21, 1996).
References
Further reading
External links
Jargon File Text Archive (1981–2003)
1991 non-fiction books
Books about computer hacking
Books by Eric S. Raymond
Books by Guy L. Steele Jr.
Computer books
Computer humour
Computer jargon
Computer programming folklore
Computer-related introductions in 1975
Creative Commons-licensed books
English dictionaries
Free software culture and documents
Software engineering folklore
Works about computer hacking | Jargon File | [
"Technology",
"Engineering"
] | 2,837 | [
"Computing terminology",
"Software engineering folklore",
"Computer jargon",
"Software engineering",
"Works about computing",
"Computer books",
"Natural language and computing"
] |
25,525,656 | https://en.wikipedia.org/wiki/Quantum%20KZ%20equations | In mathematical physics, the quantum KZ equations or quantum Knizhnik–Zamolodchikov equations or qKZ equations are the analogue for quantum affine algebras of the Knizhnik–Zamolodchikov equations for affine Kac–Moody algebras. They are a consistent system of difference equations satisfied by the N-point functions, the vacuum expectations of products of primary fields. In the limit as the deformation parameter q approaches 1, the N-point functions of the quantum affine algebra tend to those of the affine Kac–Moody algebra and the difference equations become partial differential equations. The quantum KZ equations have been used to study exactly solved models in quantum statistical mechanics.
See also
Quantum affine algebras
Yang–Baxter equation
Quantum group
Affine Hecke algebra
Kac–Moody algebra
Two-dimensional conformal field theory
References
Mathematical physics
Conformal field theory
Quantum groups
Equations of physics | Quantum KZ equations | [
"Physics",
"Mathematics"
] | 191 | [
"Equations of physics",
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Quantum mechanics",
"Equations",
"Mathematical physics",
"Quantum physics stubs"
] |
2,719,708 | https://en.wikipedia.org/wiki/Pidgeon%20process | The Pidgeon process is a practical method for smelting magnesium. The most common method involves feeding the raw material, dolomite, into an externally heated reduction tank, where it is thermally reduced to metallic magnesium using 75% ferrosilicon as the reducing agent under vacuum. Overall, magnesium smelting via the Pidgeon process involves dolomite calcination, grinding and pelleting, and vacuum thermal reduction.
Besides the Pidgeon process, electrolysis of magnesium chloride for commercial production of magnesium is also used, especially for magnesite ores, which at one point in time accounted for 75% of the world's magnesium production.
With year 2000 technology, it took between 17 and 20 kilowatt-hours per kilogram of magnesium produced by the Pidgeon process. The Pidgeon processes in Canada in the year 2000 all used SF6 to cover the reaction so as not to introduce stray oxygen to it. Research to replace SF6 with boron trifluoride was underway in 2000. By 2011 magnesium production had departed from Canada under the Kyoto Protocol. Wu, Han and Liu bragged that "China is the world’s largest producer of primary magnesium and has a magnesium smelting industry that is mainly based on the Pidgeon process" in an era in which China had obtained an 80% market share of production of magnesium metal.
Chemistry
The general reaction that occurs in the Pidgeon process is the silicothermic reduction of magnesium oxide:
2 MgO(s) + Si(s) → 2 Mg(g) + SiO2(s)
For industrial use, ferrosilicon is used in place of pure silicon because it is cheaper and more readily available. The iron from the alloy is a spectator in the reaction. CaC2 may also be used as an even cheaper alternative to silicon and ferrosilicon, but is disadvantageous because it decreases the magnesium yield slightly.
The magnesium raw material of this type of reaction is magnesium oxide, which is obtained in many ways. In all cases, the raw materials must be calcined to remove both water and carbon dioxide. Magnesium oxide can also be obtained from sea or lake water magnesium chloride hydrolyzed to hydroxide. The Mg(OH)2 is thermally dehydrated. Another option is to use mined magnesite (MgCO3) calcined to magnesium oxide.
The most used raw material is mined dolomite, a mixed (Ca,Mg)CO3, where the calcium oxide present in the reaction zone scavenges the silica formed, releasing heat and consuming one of the products, ultimately helping push the equilibrium to the right.
(1) Dolomite calcination: MgCO3·CaCO3(s) → MgO·CaO(s) + 2 CO2(g)
(2) Reduction: 2 (MgO·CaO)(s) + Si(Fe)(s) → 2 Mg(g) + Ca2SiO4(s) + Fe(s)
The Pidgeon process is endothermic (ΔH° ≈ 183.0 kJ/mol Si). Thermodynamically, the temperatures required for the reduction of both MgO and calcined dolomite decrease when a vacuum is applied.
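As an illustrative back-of-the-envelope calculation (a sketch only: it uses the ideal 2 MgO + Si stoichiometry above and the 75% ferrosilicon grade mentioned earlier, and ignores excess reductant, incomplete conversion, and yield losses):

    # Approximate reductant demand per kilogram of magnesium, from 2 MgO + Si -> 2 Mg + SiO2.
    M_MG = 24.31  # g/mol, magnesium
    M_SI = 28.09  # g/mol, silicon

    mol_mg = 1000.0 / M_MG            # mol of Mg in 1 kg
    si_needed = (mol_mg / 2) * M_SI   # g of Si: one Si reduces two Mg
    fesi_needed = si_needed / 0.75    # g of 75% ferrosilicon

    print(f"Si per kg Mg:   {si_needed / 1000:.2f} kg")    # ~0.58 kg
    print(f"FeSi per kg Mg: {fesi_needed / 1000:.2f} kg")  # ~0.77 kg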
Summary of Pidgeon process using dolomite
Chinese variant
The Chinese Pidgeon process is described here following Wu, Han and Liu. Because the reaction is endothermic, heat must be applied to initiate and sustain it, and the heat requirement may be very high. To keep reaction temperatures low, the process is operated under reduced pressure. A rotary kiln is typically used for dolomite calcination. The calcined dolomite is mixed with the finely ground reducing agent, ferrosilicon, and the catalyst, fluorite; the mixture is pressed into sphere-shaped pellets, which are charged into cylindrical nickel-chromium steel retorts. The pellets are kept in sealed paper bags before charging to avoid moisture absorption, which would reduce the activity of the calcined dolomite and lower the magnesium yield; a number of retorts are placed together in a furnace. The charged retorts (reduction tanks) are heated to 1200 °C, and the inside of the furnace is evacuated to a vacuum of about 13.3 Pa or better to produce magnesium vapour. Magnesium crystals are removed from the condensers, slag is removed as a solid, and the retort is recharged. The crude magnesium is refined via a flux, and commercial magnesium ingot is produced. The authors nowhere identify the name or the characteristics of the flux.
Typical flux composition is 49 wt% anhydrous magnesium chloride, 27 wt% potassium chloride, 20 wt% barium chloride, and 4 wt% calcium fluoride.
Canadian variant
The Canadian variant is described here with reference to the Chinese variant. In the year 2000, Canada had three magnesium smelters. All three used SF6 as a cover gas to prevent oxidation and combustion of exposed surfaces of magnesium, which is highly combustible at STP. The SF6 cover gas had been in use at that point for over 20 years by all industries which dealt with raw magnesium. Canadian industry was tasked with discovering a suitable alternative cover gas in order not to be sacrificed to Action Plan 2000 on Climate Change. SF6 had been deemed to have a Global Warming Potential (GWP) factor of 23,900 times that of CO2. By 2011 magnesium production had departed from Canada because of the Kyoto Protocol.
Other routes for magnesium processing
Many technologies have been developed for producing magnesium metal. These approaches can be broadly classified as electrolytic and thermic. The main manifestation of the electrolytic route is the Dow process. The main application of the thermic routes is the Pidgeon process. The Bolzano process merits mention because it is very similar to the Pidgeon process, except that the heating is achieved through electric heating conductors and the retorts are placed vertically in large blocks. The Pidgeon method is less technologically complex, and because of the distillation/vapour-deposition conditions, a high-purity product is easily achievable.
Disadvantages of the Pidgeon process
Although the Pidgeon process has many advantages, there are some environmental disadvantages of the process as well. With the increased demand for magnesium in recent years, production through ore reduction has been emitting larger amounts of carbon dioxide and particulate matter. There are environmental impacts because producing lightweight materials in the first place requires more energy than producing the materials they replace, typically iron or steel. Very approximately 10.4 kg of coal is burned and 37 kg of carbon dioxide is released per 1 kg of magnesium obtained, compared with less than 2 kg of carbon dioxide to produce 1 kg of steel. In China, production of magnesium using the Pidgeon process has a 60% higher global warming impact than aluminum, a competing metal mass-produced in the country as well.
History
The silicothermic reduction of dolomite was first developed by Amati in 1938 at the University of Padua. Immediately afterward, an industrial production was established in Bolzano (Italy), using what is now better known as the Bolzano process.
Shortly afterward, in 1939, when Canada and its allies entered World War II, they were short on supplies that required magnesium, such as bombs, other military devices, and the aluminum alloys needed for aircraft. Dr. Lloyd Montgomery Pidgeon at the National Research Council was able to create a method for extracting magnesium from dolomite in a vacuum at high temperature with ferrosilicon as the reducing agent. At this time, the ferrosilicon method was known; however, it had yet to be commercialized. By early 1942, a successful pilot test took place.
Since then, the Pidgeon process has continually been widely used, especially in China, the world's largest magnesium producer.
References
Chemical processes
Magnesium processes
Metallurgical processes
Materials science
1942 introductions
20th-century inventions
Canadian inventions | Pidgeon process | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,571 | [
"Applied and interdisciplinary physics",
"Metallurgical processes",
"Magnesium processes",
"Metallurgy",
"Materials science",
"Chemical processes",
"nan",
"Chemical process engineering"
] |
2,721,004 | https://en.wikipedia.org/wiki/Silicothermic%20reaction | Silicothermic reactions are thermic chemical reactions using silicon as the reducing agent at high temperature (800-1400°C).
They were initially commercialized for the production of low-carbon ferromanganese before and during World War I (F. M. Becket played a significant role) and are still used today. They were also historically used for the production of low-carbon ferrochrome, but were displaced by electric methods.
The most prominent example is the Pidgeon process (developed commercially in Canada during the Second World War by Lloyd Montgomery Pidgeon) for reducing magnesium metal from ores. Other processes include the Bolzano process and the Magnetherm process. All three are commercially used for magnesium production.
See also
Aluminothermic reaction
Calciothermic reaction
References
Metallurgy
Metallurgical processes
Inorganic reactions
Silicon | Silicothermic reaction | [
"Chemistry",
"Materials_science",
"Engineering"
] | 179 | [
"Metallurgy",
"Metallurgical processes",
"Inorganic reactions",
"Materials science",
"nan",
"Chemical reaction stubs"
] |
2,721,041 | https://en.wikipedia.org/wiki/Aluminothermic%20reaction | Aluminothermic reactions are exothermic chemical reactions using aluminium as the reducing agent at high temperature. The process is industrially useful for production of alloys of iron. The most prominent example is the thermite reaction between iron oxides and aluminium to produce iron itself:
Fe2O3 + 2 Al → 2 Fe + Al2O3
This specific reaction is however not relevant to the most important application of aluminothermic reactions, the production of ferroalloys. For the production of iron, a cheaper reducing agent, coke, is used instead via the carbothermic reaction.
History
Aluminothermy started from the experiments of Russian scientist Nikolay Beketov at the University of Kharkiv in Ukraine, who proved that aluminium reduces metals from their oxides at high temperatures. The reaction was first used for the carbon-free reduction of metal oxides. The reaction is highly exothermic, but it has a high activation energy since strong interatomic bonds in the solids must be broken first. The oxide was heated with aluminium in a crucible in a furnace. The runaway reaction made it possible to produce only small quantities of material. Hans Goldschmidt improved the aluminothermic process between 1893 and 1898, by igniting the mixture of fine metal oxide and aluminium powder with a starter reaction without heating the mixture externally. The process was patented in 1898 and used extensively in later years for rail track welding.
Applications
The aluminothermic reaction is used for the production of several ferroalloys, for example ferroniobium from niobium pentoxide and ferrovanadium from iron, vanadium(V) oxide, and aluminium. The process begins with the reduction of the oxide by the aluminium:
3 V2O5 + 10 Al → 5 Al2O3 + 6 V
Other metals can be produced from their oxides in the same way.
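A back-of-the-envelope sketch for the ferrovanadium reaction above (illustrative only: ideal stoichiometry, ignoring the iron added to form the ferroalloy, excess reductant, and losses):

    # Aluminium demand per kilogram of vanadium, from 3 V2O5 + 10 Al -> 5 Al2O3 + 6 V.
    M_V = 50.94   # g/mol, vanadium
    M_AL = 26.98  # g/mol, aluminium

    mol_v = 1000.0 / M_V                  # mol of V in 1 kg
    al_needed = mol_v * (10 / 6) * M_AL   # g of Al: 10 Al reduce 6 V

    print(f"Al per kg V: {al_needed / 1000:.2f} kg")  # ~0.88 kg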
Aluminothermic reactions have been used for welding rail tracks on-site, useful for complex installations or local repairs that cannot be done using continuously welded rail. Another common use is the welding of copper cables (wire) for use in direct burial (grounding/earthing) applications. It is still the only type of electrical connection recognized by the IEEE (IEEE, Std 80–2001) as continuous un-spliced cable.
See also
Thermite
Calciothermic reaction
Silicothermic reaction
References
Inorganic reactions
Metallurgy
Russian inventions
Ukrainian inventions | Aluminothermic reaction | [
"Chemistry",
"Materials_science",
"Engineering"
] | 513 | [
"Metallurgy",
"Inorganic reactions",
"Materials science",
"nan"
] |
2,721,326 | https://en.wikipedia.org/wiki/Calciothermic%20reaction | Calciothermic reactions are metallothermic reduction reactions (more generally, thermic chemical reactions) which use calcium metal as the reducing agent at high temperature.
Calcium is one of the most potent reducing agents available, usually drawn as the strongest oxidic reductant in Ellingham diagrams, though the lanthanides best it in this respect in oxide processes. On the other hand, this trend does not carry over to non-oxide compounds: for instance, lanthanum is produced by the calciothermic reduction of its chloride, calcium being a more potent reducing agent than lanthanum where chlorides are involved.
Calciothermic processes are used in the extraction of metals such as uranium, zirconium, and thorium from oxide ores.
An interesting way of performing calciothermic reductions is by in-situ generated metallic calcium, dissolved in molten calcium chloride, as shown in the FFC Cambridge Process.
See also
Aluminothermic reaction
Silicothermic reaction
References
Inorganic reactions
Metallurgy | Calciothermic reaction | [
"Chemistry",
"Materials_science",
"Engineering"
] | 216 | [
"Metallurgy",
"Inorganic reactions",
"Materials science",
"nan",
"Chemical reaction stubs"
] |
2,722,105 | https://en.wikipedia.org/wiki/Polarizer | A polarizer or polariser is an optical filter that lets light waves of a specific polarization pass through while blocking light waves of other polarizations. It can filter a beam of light of undefined or mixed polarization into a beam of well-defined polarization, known as polarized light. Polarizers are used in many optical techniques and instruments. Polarizers find applications in photography and LCD technology. In photography, a polarizing filter can be used to filter out reflections.
The common types of polarizers are linear polarizers and circular polarizers. Polarizers can also be made for other types of electromagnetic waves besides visible light, such as radio waves, microwaves, and X-rays.
Linear polarizers
Linear polarizers can be divided into two general categories: absorptive polarizers, where the unwanted polarization states are absorbed by the device, and beam-splitting polarizers, where the unpolarized beam is split into two beams with opposite polarization states. Polarizers which maintain the same axes of polarization with varying angles of incidence are often called Cartesian polarizers, since the polarization vectors can be described with simple Cartesian coordinates (for example, horizontal vs. vertical) independent from the orientation of the polarizer surface. When the two polarization states are relative to the direction of a surface (usually found with Fresnel reflection), they are usually termed s and p. This distinction between Cartesian and s–p polarization can be negligible in many cases, but it becomes significant for achieving high contrast and with wide angular spreads of the incident light.
Absorptive polarizers
Certain crystals, due to the effects described by crystal optics, show dichroism, preferential absorption of light which is polarized in particular directions. They can therefore be used as linear polarizers. The best known crystal of this type is tourmaline. However, this crystal is seldom used as a polarizer, since the dichroic effect is strongly wavelength dependent and the crystal appears coloured. Herapathite is also dichroic, and is not strongly coloured, but is difficult to grow in large crystals.
A Polaroid polarizing filter functions similarly on an atomic scale to the wire-grid polarizer. It was originally made of microscopic herapathite crystals. Its current H-sheet form is made from polyvinyl alcohol (PVA) plastic with an iodine doping. Stretching of the sheet during manufacture causes the PVA chains to align in one particular direction. Valence electrons from the iodine dopant are able to move linearly along the polymer chains, but not transverse to them. So incident light polarized parallel to the chains is absorbed by the sheet; light polarized perpendicularly to the chains is transmitted. The durability and practicality of Polaroid makes it the most common type of polarizer in use, for example for sunglasses, photographic filters, and liquid crystal displays. It is also much cheaper than other types of polarizer.
A modern type of absorptive polarizer is made of elongated silver nano-particles embedded in thin (≤0.5 mm) glass plates. These polarizers are more durable, and can polarize light much better than plastic Polaroid film, achieving polarization ratios as high as 100,000:1 and absorption of correctly polarized light as low as 1.5%. Such glass polarizers perform best for long-wavelength infrared light, and are widely used in fiber-optic communication.
Beam-splitting polarizers
Beam-splitting polarizers split the incident beam into two beams of differing linear polarization. For an ideal polarizing beamsplitter these would be fully polarized, with orthogonal polarizations. For many common beam-splitting polarizers, however, only one of the two output beams is fully polarized. The other contains a mixture of polarization states.
Unlike absorptive polarizers, beam splitting polarizers do not need to absorb and dissipate the energy of the rejected polarization state, and so they are more suitable for use with high intensity beams such as laser light. True polarizing beamsplitters are also useful where the two polarization components are to be analyzed or used simultaneously.
Polarization by Fresnel reflection
When light reflects (by Fresnel reflection) at an angle from an interface between two transparent materials, the reflectivity is different for light polarized in the plane of incidence and light polarized perpendicular to it. Light polarized in the plane is said to be p-polarized, while that polarized perpendicular to it is s-polarized. At a special angle known as Brewster's angle, no p-polarized light is reflected from the surface, thus all reflected light must be s-polarized, with an electric field perpendicular to the plane of incidence.
A simple linear polarizer can be made by tilting a stack of glass plates at Brewster's angle to the beam. Some of the s-polarized light is reflected from each surface of each plate. For a stack of plates, each reflection depletes the incident beam of s-polarized light, leaving a greater fraction of p-polarized light in the transmitted beam at each stage. For visible light in air and typical glass, Brewster's angle is about 57°, and about 16% of the s-polarized light present in the beam is reflected for each air-to-glass or glass-to-air transition. It takes many plates to achieve even mediocre polarization of the transmitted beam with this approach. For a stack of 10 plates (20 reflections), about 3% (= (1 − 0.16)^20) of the s-polarized light is transmitted. The reflected beam, while fully polarized, is spread out and may not be very useful.
A more useful polarized beam can be obtained by tilting the pile of plates at a steeper angle to the incident beam. Counterintuitively, using incident angles greater than Brewster's angle yields a higher degree of polarization of the transmitted beam, at the expense of decreased overall transmission. For angles of incidence steeper than 80° the polarization of the transmitted beam can approach 100% with as few as four plates, although the transmitted intensity is very low in this case. Adding more plates and reducing the angle allows a better compromise between transmission and polarization to be achieved.
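A small numerical sketch of the pile-of-plates polarizer (it simply reuses the figure quoted above of roughly 16% of the s-polarized light reflected per air–glass surface at Brewster's angle, and idealizes the p-polarized component as fully transmitted):

    # Transmitted s-polarized fraction and degree of polarization for a Brewster-angle plate stack.
    R_S = 0.16  # reflectance per surface for s-polarized light (value quoted in the text)

    for plates in (1, 4, 10):
        surfaces = 2 * plates            # two air-glass transitions per plate
        t_s = (1 - R_S) ** surfaces      # surviving fraction of the s-polarized component
        t_p = 1.0                        # idealized: p-polarized light passes without loss
        dop = (t_p - t_s) / (t_p + t_s)  # degree of polarization of the transmitted beam
        print(f"{plates:2d} plates: s transmitted = {t_s:.3f}, degree of polarization = {dop:.2f}")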
Because their polarization vectors depend on incidence angle, polarizers based on Fresnel reflection inherently tend to produce s–p polarization rather than Cartesian polarization, which limits their use in some applications.
Birefringent polarizers
Other linear polarizers exploit the birefringent properties of crystals such as quartz and calcite. In these crystals, a beam of unpolarized light incident on their surface is split by refraction into two rays. Snell's law holds for both of these rays, the ordinary or o-ray, and the extraordinary or e-ray, with each ray experiencing a different index of refraction (this is called double refraction). In general the two rays will be in different polarization states, though not in linear polarization states except for certain propagation directions relative to the crystal axis.
A Nicol prism was an early type of birefringent polarizer, that consists of a crystal of calcite which has been split and rejoined with Canada balsam. The crystal is cut such that the o- and e-rays are in orthogonal linear polarization states. Total internal reflection of the o-ray occurs at the balsam interface, since it experiences a larger refractive index in calcite than in the balsam, and the ray is deflected to the side of the crystal. The e-ray, which sees a smaller refractive index in the calcite, is transmitted through the interface without deflection. Nicol prisms produce a very high purity of polarized light, and were extensively used in microscopy, though in modern use they have been mostly replaced with alternatives such as the Glan–Thompson prism, Glan–Foucault prism, and Glan–Taylor prism. These prisms are not true polarizing beamsplitters since only the transmitted beam is fully polarized.
A Wollaston prism is another birefringent polarizer consisting of two triangular calcite prisms with orthogonal crystal axes that are cemented together. At the internal interface, an unpolarized beam splits into two linearly polarized rays which leave the prism at a divergence angle of 15°–45°. The Rochon and Sénarmont prisms are similar, but use different optical axis orientations in the two prisms. The Sénarmont prism is air spaced, unlike the Wollaston and Rochon prisms. These prisms truly split the beam into two fully polarized beams with perpendicular polarizations. The Nomarski prism is a variant of the Wollaston prism, which is widely used in differential interference contrast microscopy.
Thin film polarizers
Thin-film linear polarizers (also known as TFPN) are glass substrates on which a special optical coating is applied. Either Brewster's angle reflections or interference effects in the film cause them to act as beam-splitting polarizers. The substrate for the film can either be a plate, which is inserted into the beam at a particular angle, or a wedge of glass that is cemented to a second wedge to form a cube with the film cutting diagonally across the center (one form of this is the very common MacNeille cube).
Thin-film polarizers generally do not perform as well as Glan-type polarizers, but they are inexpensive and provide two beams that are about equally well polarized. The cube-type polarizers generally perform better than the plate polarizers. The former are easily confused with Glan-type birefringent polarizers.
Wire-grid polarizers
One of the simplest linear polarizers is the wire-grid polarizer (WGP), which consists of many fine parallel metallic wires placed in a plane. WGPs mostly reflect the non-transmitted polarization and can thus be used as polarizing beam splitters. The parasitic absorption is relatively high compared to most of the dielectric polarizers though much lower than in absorptive polarizers.
Electromagnetic waves that have a component of their electric fields aligned parallel to the wires will induce the movement of electrons along the length of the wires. Since the electrons are free to move in this direction, the polarizer behaves in a similar manner to the surface of a metal when reflecting light, and the wave is reflected backwards along the incident beam (minus a small amount of energy lost to Joule heating of the wire).
For waves with electric fields perpendicular to the wires, the electrons cannot move very far across the width of each wire. Therefore, little energy is reflected and the incident wave is able to pass through the grid. In this case the grid behaves like a dielectric material.
Overall, this causes the transmitted wave to be linearly polarized with an electric field completely perpendicular to the wires. The hypothesis that the waves "slip through" the gaps between the wires is incorrect.
For practical purposes, the separation between wires must be less than the wavelength of the incident radiation. In addition, the width of each wire should be small compared to the spacing between wires. Therefore, it is relatively easy to construct wire-grid polarizers for microwaves, far-infrared, and mid-infrared radiation. For far-infrared optics, the polarizer can be even made as free standing mesh, entirely without transmissive optics. In addition, advanced lithographic techniques can also build very tight pitch metallic grids (typ. 50‒100 nm), allowing for the polarization of visible or infrared light to a useful degree. Since the degree of polarization depends little on wavelength and angle of incidence, they are used for broad-band applications such as projection.
Analytical solutions using rigorous coupled-wave analysis for wire grid polarizers have shown that for electric field components perpendicular to the wires, the medium behaves like a dielectric, and for electric field components parallel to the wires, the medium behaves like a metal (reflective).
Malus' law and other properties
Malus' law, which is named after Étienne-Louis Malus, says that when a perfect polarizer is placed in a polarized beam of light, the irradiance, I, of the light that passes through is given by
I = I0 cos^2 θi,
where I0 is the initial intensity and θi is the angle between the light's initial polarization direction and the axis of the polarizer.
A beam of unpolarized light can be thought of as containing a uniform mixture of linear polarizations at all possible angles. Since the average value of cos^2 θ is 1/2, the transmission coefficient becomes
I/I0 = 1/2.
In practice, some light is lost in the polarizer and the actual transmission will be somewhat lower than this, around 38% for Polaroid-type polarizers but considerably higher (>49.9%) for some birefringent prism types.
If two polarizers are placed one after another (the second polarizer is generally called an analyzer), the mutual angle between their polarizing axes gives the value of θ in Malus's law. If the two axes are orthogonal, the polarizers are crossed and in theory no light is transmitted, though again practically speaking no polarizer is perfect and the transmission is not exactly zero (for example, crossed Polaroid sheets appear slightly blue in colour because their extinction ratio is better in the red). If a transparent object is placed between the crossed polarizers, any polarization effects present in the sample (such as birefringence) will be shown as an increase in transmission. This effect is used in polarimetry to measure the optical activity of a sample.
Real polarizers are also not perfect blockers of the polarization orthogonal to their polarization axis; the ratio of the transmission of the unwanted component to the wanted component is called the extinction ratio, and varies from around 1:500 for Polaroid to about 1:10^6 for Glan–Taylor prism polarizers.
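A minimal numerical illustration of Malus' law for a polarizer–analyzer pair (a sketch: the factor of 1/2 for unpolarized input and the 1:500 extinction ratio come from the discussion above, and the leakage of the blocked component is modelled in the simplest possible way):

    import math

    def transmitted(i0, theta_deg, unpolarized_input=True, extinction_ratio=0.0):
        """Irradiance after a polarizer-analyzer pair whose axes differ by theta_deg."""
        i_first = i0 / 2 if unpolarized_input else i0   # unpolarized light averages to 1/2
        c2 = math.cos(math.radians(theta_deg)) ** 2     # Malus' law factor
        # A real analyzer leaks a small fraction of the orthogonal component.
        return i_first * (c2 + extinction_ratio * (1 - c2))

    print(transmitted(100, 0))                             # parallel axes: 50.0
    print(transmitted(100, 45))                            # 45 degrees: 25.0
    print(transmitted(100, 90))                            # crossed, ideal polarizers: 0.0
    print(transmitted(100, 90, extinction_ratio=1 / 500))  # crossed Polaroid-type: 0.1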
For X-rays, a relativistic form of Malus' law applies; it involves, in addition, the frequency of the polarized radiation falling on the polarizer, the frequency of the radiation that passes through the polarizer, the Compton wavelength of the electron, and the speed of light in vacuum.
Circular polarizers
Circular polarizers (CPL or circular polarizing filters) can be used to create circularly polarized light or alternatively to selectively absorb or pass clockwise and counter-clockwise circularly polarized light.
They are used as polarizing filters in photography to reduce oblique reflections from non-metallic surfaces, and are the lenses of the 3D glasses worn for viewing some stereoscopic movies (notably, the RealD 3D variety), where the polarization of light is used to differentiate which image should be seen by the left and right eye.
Creating circularly polarized light
There are several ways to create circularly polarized light, the cheapest and most common involves placing a quarter-wave plate after a linear polarizer and directing unpolarized light through the linear polarizer. The linearly polarized light leaving the linear polarizer is transformed into circularly polarized light by the quarter wave plate.
The transmission axis of the linear polarizer needs to be halfway (45°) between the fast and slow axes of the quarter-wave plate.
In the arrangement above, the transmission axis of the linear polarizer is at a positive 45° angle relative to the right horizontal and is represented with an orange line. The quarter-wave plate has a horizontal slow axis and a vertical fast axis and they are also represented using orange lines. In this instance the unpolarized light entering the linear polarizer is displayed as a single wave whose amplitude and angle of linear polarization are suddenly changing.
When one attempts to pass unpolarized light through the linear polarizer, only light that has its electric field at the positive 45° angle leaves the linear polarizer and enters the quarter-wave plate. In the illustration, the three wavelengths of unpolarized light represented would be transformed into the three wavelengths of linearly polarized light on the other side of the linear polarizer.
In the illustration toward the right is the electric field of the linearly polarized light just before it enters the quarter-wave plate. The red line and associated field vectors represent how the magnitude and direction of the electric field varies along the direction of travel. For this plane electromagnetic wave, each vector represents the magnitude and direction of the electric field for an entire plane that is perpendicular to the direction of travel. (Refer to these two images in the plane wave article to better appreciate this.)
Light and all other electromagnetic waves have a magnetic field which is in phase with, and perpendicular to, the electric field being displayed in these illustrations.
To understand the effect the quarter-wave plate has on the linearly polarized light it is useful to think of the light as being divided into two components which are at right angles (orthogonal) to each other. Towards this end, the blue and green lines are projections of the red line onto the vertical and horizontal planes respectively and represent how the electric field changes in the direction of those two planes. The two components have the same amplitude and are in phase.
Because the quarter-wave plate is made of a birefringent material, when in the wave plate, the light travels at different speeds depending on the direction of its electric field. This means that the horizontal component which is along the slow axis of the wave plate will travel at a slower speed than the component that is directed along the vertical fast axis. Initially the two components are in phase, but as the two components travel through the wave plate the horizontal component of the light drifts farther behind that of the vertical. By adjusting the thickness of the wave plate one can control how much the horizontal component is delayed relative to vertical component before the light leaves the wave plate and they begin again to travel at the same speed. When the light leaves the quarter-wave plate the rightward horizontal component will be exactly one quarter of a wavelength behind the vertical component making the light left-hand circularly polarized when viewed from the receiver.
At the top of the illustration toward the right is the circularly polarized light after it leaves the wave plate. Directly below it, for comparison purposes, is the linearly polarized light that entered the quarter-wave plate. In the upper image, because this is a plane wave, each vector leading from the axis to the helix represents the magnitude and direction of the electric field for an entire plane that is perpendicular to the direction of travel. All the electric field vectors have the same magnitude indicating that the strength of the electric field does not change. The direction of the electric field however steadily rotates.
The blue and green lines are projections of the helix onto the vertical and horizontal planes respectively and represent how the electric field changes in the direction of those two planes. Notice how the rightward horizontal component is now one quarter of a wavelength behind the vertical component. It is this quarter of a wavelength phase shift that results in the rotational nature of the electric field. When the magnitude of one component is at a maximum the magnitude of the other component is always zero. This is the reason that there are helix vectors which exactly correspond to the maxima of the two components.
In the instance just cited, using the handedness convention used in many optics textbooks, the light is considered left-handed/counter-clockwise circularly polarized. Referring to the accompanying animation, it is considered left-handed because if one points one's left thumb against the direction of travel, one's fingers curl in the direction the electric field rotates as the wave passes a given point in space. The helix also forms a left-handed helix in space. Similarly, this light is considered counter-clockwise circularly polarized because if a stationary observer faces against the direction of travel, the person will observe its electric field rotate in the counter-clockwise direction as the wave passes a given point in space.
To create right-handed, clockwise circularly polarized light one simply rotates the axis of the quarter-wave plate 90° relative to the linear polarizer. This reverses the fast and slow axes of the wave plate relative to the transmission axis of the linear polarizer reversing which component leads and which component lags.
In trying to appreciate how the quarter-wave plate transforms the linearly polarized light, it is important to realize that the two components discussed are not entities in and of themselves but are merely mental constructs one uses to help appreciate what is happening. In the case of linearly and circularly polarized light, at each point in space, there is always a single electric field with a distinct vector direction, the quarter-wave plate merely has the effect of transforming this single electric field.
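The construction described above can be checked numerically with Jones calculus (an illustrative sketch only; Jones calculus is not used elsewhere in this article, and the sign conventions, and therefore which handedness results, vary between textbooks):

    import numpy as np

    # Jones matrices: a linear polarizer with its transmission axis at 45 degrees,
    # and a quarter-wave plate with its axes along x and y (quarter-wave retardation between them).
    lp_45 = 0.5 * np.array([[1, 1],
                            [1, 1]], dtype=complex)
    qwp = np.array([[1, 0],
                    [0, 1j]], dtype=complex)

    e_in = np.array([1, 0], dtype=complex)   # one linear component of the incoming light
    e_out = qwp @ lp_45 @ e_in

    print(np.abs(e_out))                                        # equal amplitudes: [0.5 0.5]
    print(np.degrees(np.angle(e_out[1]) - np.angle(e_out[0])))  # 90-degree phase difference -> circular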
Absorbing and passing circularly polarized light
Circular polarizers can also be used to selectively absorb or pass right-handed or left-handed circularly polarized light. It is this feature which is utilized by the 3D glasses in stereoscopic cinemas such as RealD Cinema. A given polarizer which creates one of the two polarizations of light will pass that same polarization of light when that light is sent through it in the other direction. In contrast it will block light of the opposite polarization.
The illustration above is identical to the previous similar one with the exception that the left-handed circularly polarized light is now approaching the polarizer from the opposite direction and linearly polarized light is exiting the polarizer toward the right.
First note that a quarter-wave plate always transforms circularly polarized light into linearly polarized light. It is only the resulting angle of polarization of the linearly polarized light that is determined by the orientation of the fast and slow axes of the quarter-wave plate and the handedness of the circularly polarized light. In the illustration, the left-handed circularly polarized light entering the polarizer is transformed into linearly polarized light which has its direction of polarization along the transmission axis of the linear polarizer and it therefore passes. In contrast right-handed circularly polarized light would have been transformed into linearly polarized light that had its direction of polarization along the absorbing axis of the linear polarizer, which is at right angles to the transmission axis, and it would have therefore been blocked.
To understand this process, refer to the illustration on the right. It is absolutely identical to the earlier illustration even though the circularly polarized light at the top is now considered to be approaching the polarizer from the left. One can observe from the illustration that the leftward horizontal (as observed looking along the direction of travel) component is leading the vertical component and that when the horizontal component is retarded by one quarter of a wavelength it will be transformed into the linearly polarized light illustrated at the bottom and it will pass through the linear polarizer.
There is a relatively straightforward way to appreciate why a polarizer which creates a given handedness of circularly polarized light also passes that same handedness of polarized light. First, given the dual usefulness of this image, begin by imagining the circularly polarized light displayed at the top as still leaving the quarter-wave plate and traveling toward the left. Observe that had the horizontal component of the linearly polarized light been retarded by a quarter of a wavelength twice, which would amount to a full half wavelength, the result would have been linearly polarized light at a right angle to the light that entered. If such orthogonally polarized light were rotated on the horizontal plane and directed back through the linear polarizer section of the circular polarizer, it would clearly pass through given its orientation. Now imagine the circularly polarized light which has already passed through the quarter-wave plate once, turned around and directed back toward the circular polarizer again. Let the circularly polarized light illustrated at the top now represent that light. Such light is going to travel through the quarter-wave plate a second time before reaching the linear polarizer, and in the process its horizontal component is going to be retarded a second time by one quarter of a wavelength. Whether that horizontal component is retarded by one quarter of a wavelength in two distinct steps or retarded a full half wavelength all at once, the orientation of the resulting linearly polarized light will be such that it passes through the linear polarizer.
Had it been right-handed, clockwise circularly polarized light approaching the circular polarizer from the left, its horizontal component would have also been retarded, however the resulting linearly polarized light would have been polarized along the absorbing axis of the linear polarizer and it would not have passed.
To create a circular polarizer that instead passes right-handed polarized light and absorbs left-handed light, one again rotates the wave plate and linear polarizer 90° relative to each other. It is easy to appreciate that by reversing the positions of the transmitting and absorbing axes of the linear polarizer relative to the quarter-wave plate, one changes which handedness of polarized light gets transmitted and which gets absorbed.
Homogeneous circular polarizer
A homogeneous circular polarizer passes one handedness of circular polarization unaltered and blocks the other handedness. This is similar to the way that a linear polarizer would fully pass one angle of linearly polarized light unaltered, but would fully block any linearly polarized light that was orthogonal to it.
A homogeneous circular polarizer can be created by sandwiching a linear polarizer between two quarter-wave plates. Specifically we take the circular polarizer described previously, which transforms circularly polarized light into linear polarized light, and add to it a second quarter-wave plate rotated 90° relative to the first one.
Generally speaking, and not making direct reference to the above illustration, when either of the two polarizations of circularly polarized light enters the first quarter-wave plate, one of a pair of orthogonal components is retarded by one quarter of a wavelength relative to the other. This creates one of two linear polarizations depending on the handedness the circularly polarized light. The linear polarizer sandwiched between the quarter wave plates is oriented so that it will pass one linear polarization and block the other. The second quarter-wave plate then takes the linearly polarized light that passes and retards the orthogonal component that was not retarded by the previous quarter-wave plate. This brings the two components back into their initial phase relationship, reestablishing the selected circular polarization.
Note that it does not matter in which direction one passes the circularly polarized light.
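A minimal Jones-calculus sketch of the sandwich just described is given below. The specific axis orientations and the sign convention for handedness are assumptions made purely for illustration, and numpy is assumed for the matrix algebra: a quarter-wave plate, a linear polarizer at 45°, and a second quarter-wave plate rotated 90° relative to the first together pass one circular polarization with full intensity and extinguish the other.

```python
import numpy as np

def intensity(jones_vec):
    # Total optical intensity of a Jones vector (conjugate dot product).
    return float(np.vdot(jones_vec, jones_vec).real)

# Quarter-wave plates: fast axis horizontal for the first, vertical (rotated 90 degrees) for the second.
qwp_1 = np.diag([1, 1j])
qwp_2 = np.diag([1j, 1])

# Ideal linear polarizer with its transmission axis at 45 degrees.
pol_45 = 0.5 * np.array([[1, 1],
                         [1, 1]])

# Homogeneous circular polarizer: first wave plate, then linear polarizer, then the rotated wave plate.
homogeneous = qwp_2 @ pol_45 @ qwp_1

# The two circular polarizations (handedness labels depend on the sign convention chosen here).
circ_a = np.array([1, -1j]) / np.sqrt(2)
circ_b = np.array([1,  1j]) / np.sqrt(2)

print(intensity(homogeneous @ circ_a))  # ~1.0: this handedness passes with full intensity
print(intensity(homogeneous @ circ_b))  # ~0.0: the opposite handedness is blocked
```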
Circular and linear polarizing filters for photography
Linear polarizing filters were the first types to be used in photography and can still be used for non-reflex and older single-lens reflex cameras (SLRs). However, cameras with through-the-lens metering (TTL) and autofocusing systems – that is, all modern SLRs and DSLRs – rely on optical elements that pass linearly polarized light. If light entering the camera is already linearly polarized, it can upset the exposure or autofocus systems. Circular polarizing filters cut out linearly polarized light and so can be used to darken skies, improve saturation and remove reflections, but the circularly polarized light they pass does not impair through-the-lens systems.
See also
Photoelastic modulator – a wave plate that can rapidly switch its fast and slow axes, and thus produce rapidly alternating left and right circular polarization; such modulators commonly operate in the ultrasonic range
Fresnel rhomb – another way of producing circularly polarized light; it does not use a wave plate
Extinction cross
Poincaré sphere (optics)
Edwin Land
Polariscope
Polarized light microscope
Geometric Phase Lens
References
Further reading
Kliger, David S. Polarized Light in Optics and Spectroscopy, Academic Press (1990)
Mann, James. "Austine Wood Comarow: Paintings in Polarized Light", Wasabi Publishing (2005)
External links
Optical components
Polarization (waves) | Polarizer | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 5,834 | [
"Glass engineering and science",
"Optical components",
"Astrophysics",
"Polarization (waves)",
"Components"
] |
2,722,905 | https://en.wikipedia.org/wiki/Replication%20%28computing%29 | Replication in computing refers to maintaining multiple copies of data, processes, or resources to ensure consistency across redundant components. This fundamental technique spans databases, file systems, and distributed systems, serving to improve availability, fault-tolerance, accessibility, and performance. Through replication, systems can continue operating when components fail (failover), serve requests from geographically distributed locations, and balance load across multiple machines. The challenge lies in maintaining consistency between replicas while managing the fundamental tradeoffs between data consistency, system availability, and network partition tolerance – constraints known as the CAP theorem.
Terminology
Replication in computing can refer to:
Data replication, where the same data is stored on multiple storage devices
Computation replication, where the same computing task is executed many times. Computational tasks may be:
Replicated in space, where tasks are executed on separate devices
Replicated in time, where tasks are executed repeatedly on a single device
Replication in space or in time is often linked to scheduling algorithms.
Access to a replicated entity is typically uniform with access to a single non-replicated entity. The replication itself should be transparent to an external user. In a failure scenario, a failover of replicas should be hidden as much as possible with respect to quality of service.
Computer scientists further describe replication as being either:
Active replication, which is performed by processing the same request at every replica
Passive replication, which involves processing every request on a single replica and transferring the result to the other replicas
When one leader replica is designated via leader election to process all the requests, the system is using a primary-backup or primary-replica scheme, which is predominant in high-availability clusters. In comparison, if any replica can process a request and distribute a new state, the system is using a multi-primary or multi-master scheme. In the latter case, some form of distributed concurrency control must be used, such as a distributed lock manager.
Load balancing differs from task replication, since it distributes a load of different computations across machines, and allows a single computation to be dropped in case of failure. Load balancing, however, sometimes uses data replication (especially multi-master replication) internally, to distribute its data among machines.
Backup differs from replication in that the saved copy of data remains unchanged for a long period of time. Replicas, on the other hand, undergo frequent updates and quickly lose any historical state. Replication is one of the oldest and most important topics in the overall area of distributed systems.
Data replication and computation replication both require processes to handle incoming events. Processes for data replication are passive and operate only to maintain the stored data, reply to read requests and apply updates. Computation replication is usually performed to provide fault-tolerance, and take over an operation if one component fails. In both cases, the underlying needs are to ensure that the replicas see the same events in equivalent orders, so that they stay in consistent states and any replica can respond to queries.
Replication models in distributed systems
Three widely cited models exist for data replication, each having its own properties and performance:
Transactional replication: used for replicating transactional data, such as a database. The one-copy serializability model is employed, which defines valid outcomes of a transaction on replicated data in accordance with the overall ACID (atomicity, consistency, isolation, durability) properties that transactional systems seek to guarantee.
State machine replication: assumes that the replicated process is a deterministic finite automaton and that atomic broadcast of every event is possible. It is based on distributed consensus and has a great deal in common with the transactional replication model. This is sometimes mistakenly used as a synonym of active replication. State machine replication is usually implemented by a replicated log consisting of multiple subsequent rounds of the Paxos algorithm. This was popularized by Google's Chubby system, and is the core behind the open-source Keyspace data store.
Virtual synchrony: involves a group of processes which cooperate to replicate in-memory data or to coordinate actions. The model defines a distributed entity called a process group. A process can join a group and is provided with a checkpoint containing the current state of the data replicated by group members. Processes can then send multicasts to the group and will see incoming multicasts in the identical order. Membership changes are handled as a special multicast that delivers a new "membership view" to the processes in the group.
Database replication
Database replication involves maintaining copies of the same data on multiple machines, typically implemented through three main approaches: single-leader, multi-leader, and leaderless replication.
In single-leader (also called primary/replica) replication, one database instance is designated as the leader (primary), which handles all write operations. The leader logs these updates, which then propagate to replica nodes. Each replica acknowledges receipt of updates, enabling subsequent write operations. Replicas primarily serve read requests, though they may serve stale data due to replication lag – the delay in propagating changes from the leader.
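A toy model of the single-leader scheme is sketched below (hypothetical code, not the interface of any real database): the leader applies writes and appends them to a log, replicas apply the log when they catch up, and a read from a lagging replica returns stale data.

```python
class Replica:
    def __init__(self):
        self.data = {}
        self.applied_up_to = 0  # index into the leader's log

    def apply(self, log):
        # Catch up by applying any log entries not yet seen.
        for key, value in log[self.applied_up_to:]:
            self.data[key] = value
        self.applied_up_to = len(log)

    def read(self, key):
        return self.data.get(key)  # may be stale (replication lag)


class Leader:
    def __init__(self, replicas):
        self.data = {}
        self.log = []            # ordered record of all writes
        self.replicas = replicas

    def write(self, key, value):
        self.data[key] = value
        self.log.append((key, value))

    def replicate(self):
        # In a real system this propagation happens asynchronously in the background.
        for replica in self.replicas:
            replica.apply(self.log)


replica = Replica()
leader = Leader([replica])
leader.write("x", 1)
print(replica.read("x"))   # None -> stale read before the change has propagated
leader.replicate()
print(replica.read("x"))   # 1 -> the replica has caught up
```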
In multi-master replication (also called multi-leader), updates can be submitted to any database node, which then propagate to other servers. This approach is particularly beneficial in multi-data center deployments, where it enables local write processing while masking inter-data center network latency. However, it introduces substantially increased costs and complexity which may make it impractical in some situations. The most common challenge that exists in multi-master replication is transactional conflict prevention or resolution when concurrent modifications occur on different leader nodes.
Most synchronous (or eager) replication solutions perform conflict prevention, while asynchronous (or lazy) solutions have to perform conflict resolution. For instance, if the same record is changed on two nodes simultaneously, an eager replication system would detect the conflict before confirming the commit and abort one of the transactions. A lazy replication system would allow both transactions to commit and run a conflict resolution during re-synchronization. Conflict resolution methods can include techniques like last-write-wins, application-specific logic, or merging concurrent updates.
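As a sketch of the simplest of these conflict-resolution methods, last-write-wins, the hypothetical function below keeps whichever of two conflicting versions carries the newer timestamp. Real systems typically rely on logical or hybrid clocks rather than the wall-clock timestamps used here for brevity.

```python
def last_write_wins(version_a, version_b):
    """Resolve a conflict between two versions of the same record.

    Each version is a (timestamp, value) pair; the newer timestamp wins.
    Ties are broken deterministically on the value, so every node converges
    to the same answer regardless of the order in which it sees the versions.
    """
    return max(version_a, version_b)

# Concurrent updates to the same key made on two different leader nodes:
node_1_version = (1700000000.120, "alice@example.org")
node_2_version = (1700000000.480, "alice@example.com")

resolved = last_write_wins(node_1_version, node_2_version)
print(resolved)  # (1700000000.48, 'alice@example.com') -- the later write wins
```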
However, replication transparency cannot always be achieved. When data is replicated in a database, it will be constrained by the CAP theorem or the PACELC theorem. In the NoSQL movement, data consistency is usually sacrificed in exchange for other more desired properties, such as availability (A), partition tolerance (P), etc. Various data consistency models have also been developed to serve as Service Level Agreements (SLAs) between service providers and the users.
There are several techniques for replicating data changes between nodes:
Statement-based replication: Write requests (such as SQL statements) are logged and transmitted to replicas for execution. This can be problematic with non-deterministic functions or statements having side effects.
Write-ahead log (WAL) shipping: The storage engine's low-level write-ahead log is replicated, ensuring identical data structures across nodes.
Logical (row-based) replication: Changes are described at the row level using a dedicated log format, providing greater flexibility and independence from storage engine internals.
Disk storage replication
Active (real-time) storage replication is usually implemented by distributing updates of a block device to several physical hard disks. This way, any file system supported by the operating system can be replicated without modification, as the file system code works on a level above the block device driver layer. It is implemented either in hardware (in a disk array controller) or in software (in a device driver).
The most basic method is disk mirroring, which is typical for locally connected disks. The storage industry narrows the definitions, so mirroring is a local (short-distance) operation. Replication is extendable across a computer network, so that the disks can be located in physically distant locations, and the primary/replica database replication model is usually applied. The purpose of replication is to prevent damage from failures or disasters that may occur in one location – or in case such events do occur, to improve the ability to recover data. For replication, latency is the key factor because it determines either how far apart the sites can be or the type of replication that can be employed.
The main characteristic of such cross-site replication is how write operations are handled, through either asynchronous or synchronous replication; synchronous replication needs to wait for the destination server's response in any write operation whereas asynchronous replication does not.
Synchronous replication guarantees "zero data loss" by the means of atomic write operations, where the write operation is not considered complete until acknowledged by both the local and remote storage. Most applications wait for a write transaction to complete before proceeding with further work, hence overall performance decreases considerably. Inherently, performance drops proportionally to distance, as minimum latency is dictated by the speed of light. For 10 km distance, the fastest possible roundtrip takes 67 μs, whereas an entire local cached write completes in about 10–20 μs.
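The quoted figure can be reproduced with a one-line calculation; the sketch below uses the vacuum speed of light, so real round trips over optical fibre are roughly 50% longer.

```python
SPEED_OF_LIGHT_KM_PER_S = 300_000  # vacuum; signals in optical fibre travel at roughly 200,000 km/s

def min_roundtrip_microseconds(distance_km):
    # Two-way propagation delay, ignoring switching and processing time.
    return 2 * distance_km / SPEED_OF_LIGHT_KM_PER_S * 1_000_000

for distance in (10, 100, 1000):
    print(distance, "km ->", round(min_roundtrip_microseconds(distance)), "us")
# 10 km -> 67 us, 100 km -> 667 us, 1000 km -> 6667 us
```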
In asynchronous replication, the write operation is considered complete as soon as local storage acknowledges it. Remote storage is updated with a small lag. Performance is greatly increased, but in case of a local storage failure, the remote storage is not guaranteed to have the current copy of data (the most recent data may be lost).
Semi-synchronous replication typically considers a write operation complete when acknowledged by local storage and received or logged by the remote server. The actual remote write is performed asynchronously, resulting in better performance but remote storage will lag behind the local storage, so that there is no guarantee of durability (i.e., seamless transparency) in the case of local storage failure.
Point-in-time replication produces periodic snapshots which are replicated instead of primary storage. This is intended to replicate only the changed data instead of the entire volume. As less information is replicated using this method, replication can occur over less-expensive bandwidth links such as iSCSI or T1 instead of fiberoptic lines.
Implementations
Many distributed filesystems use replication to ensure fault tolerance and avoid a single point of failure.
Many commercial synchronous replication systems do not freeze when the remote replica fails or loses connection – behaviour which would guarantee zero data loss – but proceed to operate locally, losing the desired zero recovery point objective.
Techniques of wide-area network (WAN) optimization can be applied to address the limits imposed by latency.
File-based replication
File-based replication conducts data replication at the logical level (i.e., individual data files) rather than at the storage block level. There are many different ways of performing this, which almost exclusively rely on software.
Capture with a kernel driver
A kernel driver (specifically a filter driver) can be used to intercept calls to the filesystem functions, capturing any activity as it occurs. This uses the same type of technology that real-time active virus checkers employ. At this level, logical file operations are captured like file open, write, delete, etc. The kernel driver transmits these commands to another process, generally over a network to a different machine, which will mimic the operations of the source machine. Like block-level storage replication, the file-level replication allows both synchronous and asynchronous modes. In synchronous mode, write operations on the source machine are held and not allowed to occur until the destination machine has acknowledged the successful replication. Synchronous mode is less common with file replication products although a few solutions exist.
File-level replication solutions allow for informed decisions about replication based on the location and type of the file. For example, temporary files or parts of a filesystem that hold no business value could be excluded. The data transmitted can also be more granular; if an application writes 100 bytes, only the 100 bytes are transmitted instead of a complete disk block (generally 4,096 bytes). This substantially reduces the amount of data sent from the source machine and the storage burden on the destination machine.
Drawbacks of this software-only solution include the requirement for implementation and maintenance on the operating system level, and an increased burden on the machine's processing power.
File system journal replication
Similarly to database transaction logs, many file systems have the ability to journal their activity. The journal can be sent to another machine, either periodically or in real time by streaming. On the replica side, the journal can be used to play back file system modifications.
One of the notable implementations is Microsoft's System Center Data Protection Manager (DPM), released in 2005, which performs periodic updates but does not offer real-time replication.
Batch replication
This is the process of comparing the source and destination file systems and ensuring that the destination matches the source. The key benefit is that such solutions are generally free or inexpensive. The downside is that the process of synchronizing them is quite system-intensive, and consequently this process generally runs infrequently.
One of the notable implementations is rsync.
Replication within file
In a paging operating system, pages in a paging file are sometimes replicated within a track to reduce rotational latency.
In IBM's VSAM, index data are sometimes replicated within a track to reduce rotational latency.
Distributed shared memory replication
Another example of using replication appears in distributed shared memory systems, where many nodes of the system share the same page of memory. This usually means that each node has a separate copy (replica) of this page.
Primary-backup and multi-primary replication
Many classical approaches to replication are based on a primary-backup model where one device or process has unilateral control over one or more other processes or devices. For example, the primary might perform some computation, streaming a log of updates to a backup (standby) process, which can then take over if the primary fails. This approach is common for replicating databases, despite the risk that if a portion of the log is lost during a failure, the backup might not be in a state identical to the primary, and transactions could then be lost.
A weakness of primary-backup schemes is that only one is actually performing operations. Fault-tolerance is gained, but the identical backup system doubles the costs. For this reason, the distributed systems research community began to explore alternative methods of replicating data. An outgrowth of this work was the emergence of schemes in which a group of replicas could cooperate, with each process acting as a backup while also handling a share of the workload.
Computer scientist Jim Gray analyzed multi-primary replication schemes under the transactional model and published a widely cited paper skeptical of the approach "The Dangers of Replication and a Solution". He argued that unless the data splits in some natural way so that the database can be treated as n disjoint sub-databases, concurrency control conflicts will result in seriously degraded performance and the group of replicas will probably slow as a function of n. Gray suggested that the most common approaches are likely to result in degradation that scales as O(n³). His solution, which is to partition the data, is only viable in situations where data actually has a natural partitioning key.
In 1985–1987, the virtual synchrony model was proposed and emerged as a widely adopted standard (it was used in the Isis Toolkit, Horus, Transis, Ensemble, Totem, Spread, C-Ensemble, Phoenix and Quicksilver systems, and is the basis for the CORBA fault-tolerant computing standard). Virtual synchrony permits a multi-primary approach in which a group of processes cooperates to parallelize some aspects of request processing. The scheme can only be used for some forms of in-memory data, but can provide linear speedups in the size of the group.
A number of modern products support similar schemes. For example, the Spread Toolkit supports this same virtual synchrony model and can be used to implement a multi-primary replication scheme; it would also be possible to use C-Ensemble or Quicksilver in this manner. WANdisco permits active replication where every node on a network is an exact copy or replica and hence every node on the network is active at one time; this scheme is optimized for use in a wide area network (WAN).
Modern multi-primary replication protocols optimize for the common failure-free operation. Chain replication is a popular family of such protocols. State-of-the-art protocol variants of chain replication offer high throughput and strong consistency by arranging replicas in a chain for writes. This approach enables local reads on all replica nodes but has high latency for writes that must traverse multiple nodes sequentially.
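The write path of chain replication can be sketched in a few lines. The toy model below is purely illustrative: it models the classic protocol in which reads are served by the tail, and omits failure handling and acknowledgements (the state-of-the-art variants mentioned above additionally allow local reads at every node). Each write enters at the head and is forwarded through every node to the tail, so a read at the tail only ever sees fully replicated data.

```python
class ChainNode:
    def __init__(self, name):
        self.name = name
        self.store = {}
        self.successor = None

    def write(self, key, value):
        self.store[key] = value
        if self.successor is not None:
            self.successor.write(key, value)   # forward the write down the chain

    def read(self, key):
        return self.store.get(key)


# Build a chain: head -> middle -> tail
head, middle, tail = ChainNode("head"), ChainNode("middle"), ChainNode("tail")
head.successor, middle.successor = middle, tail

head.write("balance", 42)      # the write traverses every node sequentially
print(tail.read("balance"))    # 42 -- reads at the tail see only fully replicated data
```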
A more recent multi-primary protocol, Hermes, combines cache-coherent-inspired invalidations and logical timestamps to achieve strong consistency with local reads and high-performance writes from all replicas. During fault-free operation, its broadcast-based writes are non-conflicting and commit after just one multicast round-trip to replica nodes. This design results in high throughput and low latency for both reads and writes.
See also
Change data capture
Fault-tolerant computer system
Log shipping
Multi-master replication
Optimistic replication
Shard (data)
State machine replication
Virtual synchrony
References
Data synchronization
Fault-tolerant computer systems
Database management systems | Replication (computing) | [
"Technology",
"Engineering"
] | 3,542 | [
"Fault-tolerant computer systems",
"Reliability engineering",
"Computer systems"
] |
2,723,140 | https://en.wikipedia.org/wiki/Profilometer | A profilometer is a measuring instrument used to measure a surface's profile, in order to quantify its roughness. Critical dimensions such as step height, curvature, and flatness are computed from the surface topography.
While the historical notion of a profilometer was a device similar to a phonograph that measures a surface as the surface is moved relative to the contact profilometer's stylus, this notion is changing with the emergence of numerous non-contact profilometry techniques.
Non-scanning technologies measure the surface topography within a single camera acquisition, so XYZ scanning is no longer needed. As a consequence, dynamic changes of topography are measured in real time. Contemporary profilometers measure not only static topography but also dynamic topography – such systems are described as time-resolved profilometers.
Types
Optical methods include interferometry based methods such as digital holographic microscopy, vertical scanning interferometry/white light interferometry, phase shifting interferometry, and differential interference contrast microscopy (Nomarski microscopy); focus detection methods such as intensity detection, focus variation, differential detection, critical angle method, astigmatic method, Foucault method, and confocal microscopy; pattern projection methods such as fringe projection, Fourier profilometry, Moire, and pattern reflection methods.
Contact and pseudo-contact methods include stylus profilometer (mechanical profilometer), atomic force microscopy, and scanning tunneling microscopy
Contact profilometers
A diamond stylus is moved vertically in contact with a sample and then moved laterally across the sample for a specified distance and specified contact force. A profilometer can measure small surface variations in vertical stylus displacement as a function of position. A typical profilometer can measure small vertical features ranging in height from 10 nanometres to 1 millimetre. The height position of the diamond stylus generates an analog signal which is converted into a digital signal, stored, analyzed, and displayed. The radius of diamond stylus ranges from 20 nanometres to 50 μm, and the horizontal resolution is controlled by the scan speed and data signal sampling rate. The stylus tracking force can range from less than 1 to 50 milligrams.
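Once digitized, the stylus trace is commonly reduced to standard roughness parameters. The sketch below is illustrative only: standards such as ISO 4287 additionally prescribe filtering and evaluation lengths, which are omitted here, and numpy plus a synthetic profile are assumed. It computes the arithmetic mean roughness Ra and the root-mean-square roughness Rq from sampled height data.

```python
import numpy as np

def roughness_parameters(heights_nm):
    """Compute Ra and Rq from a sampled surface profile (heights in nanometres)."""
    z = np.asarray(heights_nm, dtype=float)
    deviations = z - z.mean()                    # deviations from the mean line
    ra = np.abs(deviations).mean()               # arithmetic mean roughness
    rq = np.sqrt((deviations ** 2).mean())       # root-mean-square roughness
    return ra, rq

# A toy profile: a gentle sinusoidal texture plus random noise.
x = np.linspace(0, 1, 2000)
profile = 50 * np.sin(2 * np.pi * 10 * x) + np.random.normal(0, 5, x.size)

ra, rq = roughness_parameters(profile)
print(f"Ra = {ra:.1f} nm, Rq = {rq:.1f} nm")
```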
Advantages of contact profilometers include acceptance, surface independence, and resolution; it is a direct technique that requires no modeling. Most of the world's surface finish standards are written for contact profilometers. To follow the prescribed methodology, this type of profilometer is often required. Contacting the surface is often an advantage in dirty environments, where non-contact methods can end up measuring surface contaminants instead of the surface itself. Because the stylus is in contact with the surface, this method is not sensitive to surface reflectance or color. The stylus tip radius can be as small as 20 nanometres, significantly better than white-light optical profiling. Vertical resolution is typically sub-nanometer as well.
Non-contact profilometers
An optical profilometer is a non-contact method for providing much of the same information as a stylus based profilometer. There are many different techniques which are currently being employed, such as laser triangulation (triangulation sensor), confocal microscopy (used for profiling very small objects), coherence scanning interferometry, and digital holography.
Advantages of optical profilometers are speed, reliability, and spot size. Because the non-contact profilometer does not touch the surface, scan speeds are dictated by the light reflected from the surface and the speed of the acquisition electronics, which makes them fast for small steps and for 3D scanning requirements. For doing large steps, however, a 3D scan on an optical profiler can be much slower than a 2D scan on a stylus profiler. Optical profilometers do not touch the surface and therefore cannot be damaged by surface wear or careless operators. Many non-contact profilometers are solid-state, which tends to reduce the required maintenance significantly. The spot size, or lateral resolution, of optical methods ranges from a few micrometres down to sub-micrometre.
Time-resolved profilometers
Non-scanning technologies such as digital holographic microscopy enable 3D topography measurement in real time. 3D topography is measured from a single camera acquisition; as a consequence, the acquisition rate is limited only by the camera acquisition rate, and some systems measure topography at a frame rate of 1000 fps. Time-resolved systems enable measurement of topography changes, such as the healing of smart materials, or measurement of moving specimens.
Time-resolved profilometers can be combined with a stroboscopic unit to measure MEMS vibrations in the MHz range. The stroboscopic unit provides the excitation signal to the MEMS and the trigger signal to the light source and camera.
The advantage of time-resolved profilometers is that they are robust against vibrations. Unlike scanning methods, time-resolved profilometer acquisition times are in the milliseconds range. There is no need for vertical calibration: vertical measurement does not depend on a scanning mechanism, and in digital holographic microscopy the vertical measurement has an intrinsic calibration based on the laser source wavelength. Samples need not be static, and the response of the specimen topography to an external stimulus can be followed. With on-the-fly measurement, the topography of a moving sample is acquired with a short exposure time. MEMS vibration measurement can be accomplished when the system is combined with a stroboscopic unit.
Fiber-based optical profilometers
Optical fiber-based optical profilometers scan surfaces with optical probes which send light interference signals back to the profilometer detector via an optical fiber. Fiber-based probes can be physically located hundreds of meters away from the detector enclosure, without signal degradation. The additional advantages of using fiber-based optical profilometers are flexibility, long profile acquisition, ruggedness, and ease of incorporating into industrial processes. With the small diameter of certain probes, surfaces can be scanned even inside hard-to-reach spaces, such as narrow crevices or small-diameter tubes.
Because these probes generally acquire one point at a time and at high sample speeds, acquisition of long (continuous) surface profiles is possible. Scanning can take place in hostile environments, including very hot or cryogenic temperatures, or in radioactive chambers, while the detector is located at a distance, in a human-safe environment.
Fiber-based probes are easily installed in-process, such as above moving webs or mounted onto a variety of positioning systems.
Applications
A furrow profilometer is used for the measurement of the cross-sectional geometry of furrows and corrugations, and is important in furrow assessments.
See also
Road profilometry
Surface metrology
References
External links
Dimensional instruments
Metalworking measuring instruments | Profilometer | [
"Physics",
"Mathematics"
] | 1,393 | [
"Quantity",
"Dimensional instruments",
"Physical quantities",
"Size"
] |
2,723,752 | https://en.wikipedia.org/wiki/Industrial%20fermentation | Industrial fermentation is the intentional use of fermentation in manufacturing processes. In addition to the mass production of fermented foods and drinks, industrial fermentation has widespread applications in chemical industry. Commodity chemicals, such as acetic acid, citric acid, and ethanol are made by fermentation. Moreover, nearly all commercially produced industrial enzymes, such as lipase, invertase and rennet, are made by fermentation with genetically modified microbes. In some cases, production of biomass itself is the objective, as is the case for single-cell proteins, baker's yeast, and starter cultures for lactic acid bacteria used in cheesemaking.
In general, fermentations can be divided into four types:
Production of biomass (viable cellular material)
Production of extracellular metabolites (chemical compounds)
Production of intracellular components (enzymes and other proteins)
Transformation of substrate (in which the transformed substrate is itself the product)
These types are not necessarily disjoint from each other, but provide a framework for understanding the differences in approach. The organisms used are typically microorganisms, particularly bacteria, algae, and fungi, such as yeasts and molds, but industrial fermentation may also involve cell cultures from plants and animals, such as CHO cells and insect cells. Special considerations are required for the specific organisms used in the fermentation, such as the dissolved oxygen level, nutrient levels, and temperature. The rate of fermentation depends on the concentration of microorganisms, cells, cellular components, and enzymes, as well as temperature, pH and, for aerobic fermentation, the level of oxygen. Product recovery frequently involves the concentration of the dilute solution.
General process overview
In most industrial fermentations, the organisms or eukaryotic cells are submerged in a liquid medium; in others, such as the fermentation of cocoa beans, coffee cherries, and miso, fermentation takes place on the moist surface of the medium.
There are also industrial considerations related to the fermentation process. For instance, to avoid biological process contamination, the fermentation medium, air, and equipment are sterilized. Foam control can be achieved by either mechanical foam destruction or chemical anti-foaming agents. Several other factors must be measured and controlled such as pressure, temperature, agitator shaft power, and viscosity. An important element for industrial fermentations is scale up. This is the conversion of a laboratory procedure to an industrial process. It is well established in the field of industrial microbiology that what works well at the laboratory scale may work poorly or not at all when first attempted at large scale. It is generally not possible to take fermentation conditions that have worked in the laboratory and blindly apply them to industrial scale equipment. Although many parameters have been tested for use as scale up criteria, there is no general formula because of the variation in fermentation processes. The most important methods are the maintenance of constant power consumption per unit of broth and the maintenance of constant volumetric transfer rate.
Phases of growth
Fermentation begins once the growth medium is inoculated with the organism of interest. Growth of the inoculum does not occur immediately. This is the period of adaptation, called the lag phase. Following the lag phase, the rate of growth of the organism steadily increases, for a certain period—this period is the log or exponential phase.
After a phase of exponential growth, the rate of growth slows down, due to the continuously falling concentrations of nutrients and/or a continuously increasing (accumulating) concentrations of toxic substances. This phase, where the increase of the rate of growth is checked, is the deceleration phase. After the deceleration phase, growth ceases and the culture enters a stationary phase or a steady state. The biomass remains constant, except when certain accumulated chemicals in the culture chemically break down the cells in a process called chemolysis. Unless other microorganisms contaminate the culture, the chemical constitution remains unchanged. If all of the nutrients in the medium are consumed, or if the concentration of toxins is too great, the cells may become senescent and begin to die off. The total amount of biomass may not decrease, but the number of viable organisms will decrease.
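These phases can be visualized with a simple kinetic model. The sketch below is a logistic batch-growth model with an explicit lag term; all parameter values are arbitrary and purely illustrative. Printing biomass over time produces the characteristic S-shaped curve of lag, exponential growth, deceleration and stationary phase.

```python
import math

def biomass(t, x0=0.05, x_max=10.0, mu=0.6, lag=4.0):
    """Logistic batch-growth model with a lag phase (arbitrary illustrative units).

    x0    -- inoculum biomass (g/L)
    x_max -- carrying capacity set by the limiting nutrient (g/L)
    mu    -- maximum specific growth rate (1/h)
    lag   -- lag-phase duration (h), during which no net growth occurs
    """
    if t < lag:
        return x0
    growth_time = t - lag
    return x_max / (1 + (x_max / x0 - 1) * math.exp(-mu * growth_time))

for hour in range(0, 31, 3):
    print(f"t = {hour:2d} h  biomass = {biomass(hour):6.2f} g/L")
```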
Fermentation medium
The microbes or eukaryotic cells used for fermentation grow in (or on) specially designed growth medium which supplies the nutrients required by the organisms or cells. A variety of media exist, but invariably contain a carbon source, a nitrogen source, water, salts, and micronutrients. In the production of wine, the medium is grape must. In the production of bio-ethanol, the medium may consist mostly of whatever inexpensive carbon source is available.
Carbon sources are typically sugars or other carbohydrates, although in the case of substrate transformations (such as the production of vinegar) the carbon source may be an alcohol or something else altogether. For large scale fermentations, such as those used for the production of ethanol, inexpensive sources of carbohydrates, such as molasses, corn steep liquor, sugar cane juice, or sugar beet juice are used to minimize costs. More sensitive fermentations may instead use purified glucose, sucrose, glycerol or other sugars, which reduces variation and helps ensure the purity of the final product. Organisms meant to produce enzymes such as beta galactosidase, invertase or other amylases may be fed starch to select for organisms that express the enzymes in large quantity.
Fixed nitrogen sources are required for most organisms to synthesize proteins, nucleic acids and other cellular components. Depending on the enzyme capabilities of the organism, nitrogen may be provided as bulk protein, such as soy meal; as pre-digested polypeptides, such as peptone or tryptone; or as ammonia or nitrate salts. Cost is also an important factor in the choice of a nitrogen source. Phosphorus is needed for production of phospholipids in cellular membranes and for the production of nucleic acids. The amount of phosphate which must be added depends upon the composition of the broth and the needs of the organism, as well as the objective of the fermentation. For instance, some cultures will not produce secondary metabolites in the presence of phosphate.
Growth factors and trace nutrients are included in the fermentation broth for organisms incapable of producing all of the vitamins they require. Yeast extract is a common source of micronutrients and vitamins for fermentation media. Inorganic nutrients, including trace elements such as iron, zinc, copper, manganese, molybdenum, and cobalt are typically present in unrefined carbon and nitrogen sources, but may have to be added when purified carbon and nitrogen sources are used. Fermentations which produce large amounts of gas (or which require the addition of gas) will tend to form a layer of foam, since fermentation broth typically contains a variety of foam-reinforcing proteins, peptides or starches. To prevent this foam from occurring or accumulating, antifoaming agents may be added. Mineral buffering salts, such as carbonates and phosphates, may be used to stabilize pH near optimum. When metal ions are present in high concentrations, use of a chelating agent may be necessary.
Developing an optimal medium for fermentation is a key concept in efficient optimization. One-factor-at-a-time (OFAT) is the preferred approach researchers use for designing a medium composition. This method involves changing only one factor at a time while keeping the other concentrations constant, and it can be separated into several sub-groups of experiments. In removal experiments, all the components of the medium are removed one at a time and their effects on the medium are observed. Supplementation experiments involve evaluating the effects of nitrogen and carbon supplements on production. Finally, replacement experiments involve replacing the nitrogen and carbon sources that showed an enhancement effect on the intended production. Overall, the major advantage of OFAT over other optimization methods is its simplicity.
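An OFAT screen is easy to express as a loop over components. The sketch below is entirely hypothetical – the candidate concentrations and the yield function merely stand in for real fermentation runs – but it shows the defining feature of the method: each component is varied over its candidate levels while every other component is held at its baseline value.

```python
def run_fermentation(medium):
    """Stand-in for an actual fermentation experiment; returns product yield (g/L)."""
    # Purely illustrative response surface: yield peaks at intermediate glucose,
    # benefits from more yeast extract, and is nearly insensitive to phosphate.
    glucose = medium["glucose"]
    yeast_extract = medium["yeast_extract"]
    phosphate = medium["phosphate"]
    return 5 + 2 * glucose - 0.1 * glucose ** 2 + 0.8 * yeast_extract + 0.01 * phosphate

baseline = {"glucose": 10.0, "yeast_extract": 5.0, "phosphate": 2.0}   # concentrations in g/L
candidate_levels = {
    "glucose": [5.0, 10.0, 15.0, 20.0],
    "yeast_extract": [2.0, 5.0, 8.0],
    "phosphate": [1.0, 2.0, 4.0],
}

best_levels = {}
for component, levels in candidate_levels.items():
    results = []
    for level in levels:
        medium = dict(baseline, **{component: level})   # change only one factor at a time
        results.append((run_fermentation(medium), level))
    best_yield, best_level = max(results)
    best_levels[component] = best_level
    print(f"{component}: best level {best_level} g/L (yield {best_yield:.1f} g/L)")
```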
Production of biomass
Microbial cells or biomass is sometimes the intended product of fermentation. Examples include single-cell protein, baker's yeast, lactobacillus, E. coli, and others. In the case of single-cell protein, algae are grown in large open ponds which allow photosynthesis to occur. If the biomass is to be used for inoculation of other fermentations, care must be taken to prevent mutations from occurring.
Production of extracellular metabolites
Metabolites can be divided into two groups: those produced during the growth phase of the organism, called primary metabolites and those produced during the stationary phase, called secondary metabolites. Some examples of primary metabolites are ethanol, citric acid, glutamic acid, lysine, vitamins and polysaccharides. Some examples of secondary metabolites are penicillin, cyclosporin A, gibberellin, and lovastatin.
Primary metabolites
Primary metabolites are compounds made during the ordinary metabolism of the organism during the growth phase. A common example is ethanol or lactic acid, produced during glycolysis. Citric acid is produced by some strains of Aspergillus niger as part of the citric acid cycle to acidify their environment and prevent competitors from taking over. Glutamate is produced by some Micrococcus species, and some Corynebacterium species produce lysine, threonine, tryptophan and other amino acids. All of these compounds are produced during the normal "business" of the cell and released into the environment. There is therefore no need to rupture the cells for product recovery.
Secondary metabolites
Secondary metabolites are compounds made in the stationary phase; penicillin, for instance, prevents the growth of bacteria which could compete with Penicillium molds for resources. Some bacteria, such as Lactobacillus species, are able to produce bacteriocins which prevent the growth of bacterial competitors as well. These compounds are of obvious value to humans wishing to prevent the growth of bacteria, either as antibiotics or as antiseptics (such as gramicidin S). Fungicides, such as griseofulvin are also produced as secondary metabolites. Typically secondary metabolites are not produced in the presence of glucose or other carbon sources which would encourage growth, and like primary metabolites are released into the surrounding medium without rupture of the cell membrane.
In the early days of the biotechnology industry, most biopharmaceutical products were made in E. coli; by 2004 more biopharmaceuticals were manufactured in eukaryotic cells, such as CHO cells, than in microbes, but used similar bioreactor systems. Insect cell culture systems came into use in the 2000s as well.
Production of intracellular components
Of primary interest among the intracellular components are microbial enzymes: catalase, amylase, protease, pectinase, cellulase, hemicellulase, lipase, lactase, streptokinase and many others. Recombinant proteins, such as insulin, hepatitis B vaccine, interferon, granulocyte colony-stimulating factor, streptokinase and others are also made this way. The largest difference between this process and the others is that the cells must be ruptured (lysed) at the end of fermentation, and the environment must be manipulated to maximize the amount of the product. Furthermore, the product (typically a protein) must be separated from all of the other cellular proteins in the lysate to be purified.
Transformation of substrate
Substrate transformation involves the transformation of a specific compound into another, such as in the case of phenylacetylcarbinol, and steroid biotransformation, or the transformation of a raw material into a finished product, in the case of food fermentations and sewage treatment.
Food fermentation
In the history of food, ancient fermented food processes, such as making bread, wine, cheese, curds, idli, dosa, among others can be dated to more than seven thousand years ago. They were developed long before humanity had any knowledge of the existence of the microorganisms involved. Some foods such as Marmite are the byproduct of the fermentation process, in this case in the production of beer.
Ethanol fuel
Fermentation is the main source of ethanol in the production of ethanol fuel. Common crops such as sugar cane, potato, cassava, and maize are fermented by yeast to produce ethanol which is further processed to become fuel.
Sewage treatment
In the process of sewage treatment, sewage is digested by enzymes secreted by bacteria. Solid organic matter is broken down into harmless, soluble substances and carbon dioxide. The resulting liquids are disinfected to remove pathogens before being discharged into rivers or the sea, or can be used as liquid fertilizers. The digested solids, known also as sludge, are dried and used as fertilizer. Gaseous byproducts such as methane can be utilized as biogas to fuel electrical generators. One advantage of bacterial digestion is that it reduces the bulk and odor of sewage, thus reducing the space needed for dumping. The main disadvantage of bacterial digestion in sewage disposal is that it is a very slow process.
Agricultural feed
A wide variety of agroindustrial waste products can be fermented to use as food for animals, especially ruminants. Fungi have been employed to break down cellulosic wastes to increase protein content and improve in vitro digestibility.
Precision fermentation
Precision fermentation is an approach to manufacturing specific functional products which intends to minimise the production of unwanted by-products through the application of synthetic biology, particularly by generating synthetic "cell factories" with engineered genomes and metabolic pathways optimised to produce the desired compounds as efficiently as possible with the available resources. Precision fermentation of genetically modified microorganisms may be used to manufacture proteins needed for cell culture media, providing for serum-free cell culture media in the manufacturing process of cultured meat. A 2021 publication showed that photovoltaic-driven microbial protein production could use 10 times less land for an equivalent amount of protein compared to soybean cultivation. Some food regulatory agencies, such as the FDA, do not require the labeling of precision-fermented foods as GMO, since they are produced by, but do not contain, the genetically engineered organisms. It is unclear how regulation will be handled in EU markets, with some startups such as Formo and Those Vegan Cowboys forming the Food Fermentation Europe (FFE) alliance together with other alt-protein startups to seek regulatory approval.
See also
References
Bibliography
Fermentation
Microbiology techniques
Biotechnology
Drug manufacturing
Waste treatment technology
Industrial processes
Synthetic biology | Industrial fermentation | [
"Chemistry",
"Engineering",
"Biology"
] | 3,113 | [
"Synthetic biology",
"Biological engineering",
"Cellular respiration",
"Water treatment",
"Biotechnology",
"Bioinformatics",
"Microbiology techniques",
"Molecular genetics",
"nan",
"Environmental engineering",
"Biochemistry",
"Waste treatment technology",
"Fermentation"
] |
2,724,082 | https://en.wikipedia.org/wiki/Resolution%20%28logic%29 | In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation-complete theorem-proving technique for sentences in propositional logic and first-order logic. For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the (complement of the) Boolean satisfiability problem. For first-order logic, resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic, providing a more practical method than one following from Gödel's completeness theorem.
The resolution rule can be traced back to Davis and Putnam (1960); however, their algorithm required trying all ground instances of the given formula. This source of combinatorial explosion was eliminated in 1965 by John Alan Robinson's syntactical unification algorithm, which allowed one to instantiate the formula during the proof "on demand" just as far as needed to keep refutation completeness.
The clause produced by a resolution rule is sometimes called a resolvent.
Resolution in propositional logic
Resolution rule
The resolution rule in propositional logic is a single valid inference rule that produces a new clause implied by two clauses containing complementary literals. A literal is a propositional variable or the negation of a propositional variable. Two literals are said to be complements if one is the negation of the other (in the following, ¬c is taken to be the complement to c). The resulting clause contains all the literals that do not have complements.
Formally:

a₁ ∨ a₂ ∨ ... ∨ c,   b₁ ∨ b₂ ∨ ... ∨ ¬c
――――――――――――――――――――――――――――――――――――――
a₁ ∨ a₂ ∨ ... ∨ b₁ ∨ b₂ ∨ ...

where
all aᵢ, bᵢ, and c are literals,
the dividing line stands for "entails".
The above may also be written as:

(a₁ ∨ a₂ ∨ ... ∨ c), (b₁ ∨ b₂ ∨ ... ∨ ¬c) ⊢ a₁ ∨ a₂ ∨ ... ∨ b₁ ∨ b₂ ∨ ...

Or schematically as:

Γ₁ ∪ {ℓ}   Γ₂ ∪ {¬ℓ}
―――――――――――――――――――― |ℓ|
Γ₁ ∪ Γ₂

We have the following terminology:
The clauses Γ₁ ∪ {ℓ} and Γ₂ ∪ {¬ℓ} are the inference's premises
Γ₁ ∪ Γ₂ (the resolvent of the premises) is its conclusion.
The literal ℓ is the left resolved literal,
The literal ¬ℓ is the right resolved literal,
|ℓ| is the resolved atom or pivot.
The clause produced by the resolution rule is called the resolvent of the two input clauses. It is the principle of consensus applied to clauses rather than terms.
When the two clauses contain more than one pair of complementary literals, the resolution rule can be applied (independently) for each such pair; however, the result is always a tautology.
Modus ponens can be seen as a special case of resolution (of a one-literal clause and a two-literal clause).

p → q, p ⊢ q

is equivalent to

¬p ∨ q, p ⊢ q
A resolution technique
When coupled with a complete search algorithm, the resolution rule yields a sound and complete algorithm for deciding the satisfiability of a propositional formula, and, by extension, the validity of a sentence under a set of axioms.
This resolution technique uses proof by contradiction and is based on the fact that any sentence in propositional logic can be transformed into an equivalent sentence in conjunctive normal form. The steps are as follows.
All sentences in the knowledge base and the negation of the sentence to be proved (the conjecture) are conjunctively connected.
The resulting sentence is transformed into a conjunctive normal form with the conjuncts viewed as elements in a set, S, of clauses.
For example, (A₁ ∨ A₂) ∧ (B₁ ∨ B₂ ∨ B₃) ∧ (C₁) gives rise to the set {A₁ ∨ A₂, B₁ ∨ B₂ ∨ B₃, C₁}.
The resolution rule is applied to all possible pairs of clauses that contain complementary literals. After each application of the resolution rule, the resulting sentence is simplified by removing repeated literals. If the clause contains complementary literals, it is discarded (as a tautology). If not, and if it is not yet present in the clause set S, it is added to S, and is considered for further resolution inferences.
If after applying a resolution rule the empty clause is derived, the original formula is unsatisfiable (or contradictory), and hence it can be concluded that the initial conjecture follows from the axioms.
If, on the other hand, the empty clause cannot be derived, and the resolution rule cannot be applied to derive any more new clauses, the conjecture is not a theorem of the original knowledge base.
One instance of this algorithm is the original Davis–Putnam algorithm that was later refined into the DPLL algorithm that removed the need for explicit representation of the resolvents.
This description of the resolution technique uses a set S as the underlying data-structure to represent resolution derivations. Lists, Trees and Directed Acyclic Graphs are other possible and common alternatives. Tree representations are more faithful to the fact that the resolution rule is binary. Together with a sequent notation for clauses, a tree representation also makes it clear to see how the resolution rule is related to a special case of the cut-rule, restricted to atomic cut-formulas. However, tree representations are not as compact as set or list representations, because they explicitly show redundant subderivations of clauses that are used more than once in the derivation of the empty clause. Graph representations can be as compact in the number of clauses as list representations and they also store structural information regarding which clauses were resolved to derive each resolvent.
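The technique described above can be sketched for propositional clauses represented as frozensets of literals. The following is an illustrative saturation loop, not an optimized prover; negated literals are written here with a leading "~". Pairs of clauses are resolved, tautologies are discarded, and unsatisfiability is reported as soon as the empty clause appears.

```python
from itertools import combinations

def complement(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def resolvents(clause_1, clause_2):
    """All clauses obtainable by resolving clause_1 and clause_2 on a single pivot."""
    for literal in clause_1:
        if complement(literal) in clause_2:
            yield frozenset(clause_1 - {literal}) | (clause_2 - {complement(literal)})

def is_tautology(clause):
    return any(complement(literal) in clause for literal in clause)

def resolution_refutation(clauses):
    """Return True if the clause set is unsatisfiable, False otherwise."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new_clauses = set()
        for clause_1, clause_2 in combinations(clauses, 2):
            for resolvent in resolvents(clause_1, clause_2):
                if not resolvent:            # empty clause derived: contradiction
                    return True
                if not is_tautology(resolvent):
                    new_clauses.add(resolvent)
        if new_clauses.issubset(clauses):    # no new clauses: saturation reached
            return False
        clauses |= new_clauses

# Knowledge base {a -> b, a} plus the negated conjecture ~b, all in clausal form:
print(resolution_refutation([{"~a", "b"}, {"a"}, {"~b"}]))   # True: b follows from the axioms
```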
A simple example
a ∨ b,   ¬a ∨ c
―――――――――――――――
b ∨ c

In plain language: Suppose a is false. In order for the premise a ∨ b to be true, b must be true.
Alternatively, suppose a is true. In order for the premise ¬a ∨ c to be true, c must be true. Therefore, regardless of the falsehood or veracity of a, if both premises hold, then the conclusion b ∨ c is true.
Resolution in first-order logic
The resolution rule can be generalized to first-order logic to:

Γ₁ ∪ {L₁}   Γ₂ ∪ {L₂}
――――――――――――――――――――― φ
(Γ₁ ∪ Γ₂)φ

where φ is a most general unifier of L₁ and the complement of L₂, and Γ₁ and Γ₂ have no common variables.
Example
The clauses P(x) ∨ Q(x) and ¬P(b) can apply this rule with [b/x] as unifier.
Here x is a variable and b is a constant.
Here we see that
The clauses P(x) ∨ Q(x) and ¬P(b) are the inference's premises
Q(b) (the resolvent of the premises) is its conclusion.
The literal P(x) is the left resolved literal,
The literal ¬P(b) is the right resolved literal,
P is the resolved atom or pivot.
[b/x] is the most general unifier of the resolved literals.
Informal explanation
In first-order logic, resolution condenses the traditional syllogisms of logical inference down to a single rule.
To understand how resolution works, consider the following example syllogism of term logic:
All Greeks are Europeans.
Homer is a Greek.
Therefore, Homer is a European.
Or, more generally:
∀X P(X) → Q(X)
P(a)
Therefore, Q(a)
To recast the reasoning using the resolution technique, first the clauses must be converted to conjunctive normal form (CNF). In this form, all quantification becomes implicit: universal quantifiers on variables (X, Y, ...) are simply omitted as understood, while existentially-quantified variables are replaced by Skolem functions.
¬P(X) ∨ Q(X)
P(a)
Therefore, Q(a)
So the question is, how does the resolution technique derive the last clause from the first two? The rule is simple:
Find two clauses containing the same predicate, where it is negated in one clause but not in the other.
Perform a unification on the two predicates. (If the unification fails, you made a bad choice of predicates. Go back to the previous step and try again.)
If any unbound variables which were bound in the unified predicates also occur in other predicates in the two clauses, replace them with their bound values (terms) there as well.
Discard the unified predicates, and combine the remaining ones from the two clauses into a new clause, also joined by the "∨" operator.
To apply this rule to the above example, we find the predicate P occurs in negated form
¬P(X)
in the first clause, and in non-negated form
P(a)
in the second clause. X is an unbound variable, while a is a bound value (term). Unifying the two produces the substitution
X ← a
Discarding the unified predicates, and applying this substitution to the remaining predicates (just Q(X), in this case), produces the conclusion:
Q(a)
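The unify-substitute-combine steps just applied can be written out for the simple case used in this example, where every term is either a variable or a constant. The sketch below is deliberately minimal: it omits function symbols and the occurs check required for full first-order unification, and follows the convention above that upper-case names are variables.

```python
def is_variable(term):
    return term.isupper()          # convention here: variables upper-case, constants lower-case

def unify_args(args_1, args_2):
    """Most general unifier of two argument lists of variables/constants, or None."""
    substitution = {}
    for t1, t2 in zip(args_1, args_2):
        t1 = substitution.get(t1, t1)
        t2 = substitution.get(t2, t2)
        if t1 == t2:
            continue
        if is_variable(t1):
            substitution[t1] = t2
        elif is_variable(t2):
            substitution[t2] = t1
        else:
            return None            # two distinct constants cannot be unified
    return substitution

def resolve(clause_1, clause_2, predicate):
    """Resolve on `predicate`, negated in clause_1 and positive in clause_2."""
    neg_literal = next(l for l in clause_1 if l[0] == "~" + predicate)
    pos_literal = next(l for l in clause_2 if l[0] == predicate)
    theta = unify_args(neg_literal[1], pos_literal[1])
    remaining = [l for l in clause_1 + clause_2 if l not in (neg_literal, pos_literal)]
    # Apply the substitution to every argument of the remaining literals.
    return [(name, tuple(theta.get(a, a) for a in args)) for name, args in remaining]

# Clauses for "All Greeks are Europeans" and "Homer is a Greek", with P = Greek, Q = European:
clause_1 = [("~P", ("X",)), ("Q", ("X",))]
clause_2 = [("P", ("a",))]

print(resolve(clause_1, clause_2, "P"))   # [('Q', ('a',))]  i.e. Q(a)
```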
For another example, consider the syllogistic form
All Cretans are islanders.
All islanders are liars.
Therefore all Cretans are liars.
Or more generally,
∀X P(X) → Q(X)
∀X Q(X) → R(X)
Therefore, ∀X P(X) → R(X)
In CNF, the antecedents become:
¬P(X) ∨ Q(X)
¬Q(Y) ∨ R(Y)
(The variable in the second clause was renamed to make it clear that variables in different clauses are distinct.)
Now, unifying Q(X) in the first clause with ¬Q(Y) in the second clause means that X and Y become the same variable anyway. Substituting this into the remaining clauses and combining them gives the conclusion:
¬P(X) ∨ R(X)
Factoring
The resolution rule, as defined by Robinson, also incorporated factoring, which unifies two literals in the same clause, before or during the application of resolution as defined above. The resulting inference rule is refutation-complete, in that a set of clauses is unsatisfiable if and only if there exists a derivation of the empty clause using only resolution, enhanced by factoring.
An example for an unsatisfiable clause set for which factoring is needed to derive the empty clause is:

p(x) ∨ p(y)
¬p(u) ∨ ¬p(v)
Since each clause consists of two literals, so does each possible resolvent. Therefore, by resolution without factoring, the empty clause can never be obtained.
Using factoring, it can be obtained e.g. as follows: factoring p(x) ∨ p(y) (unifying its two literals) yields the unit clause p(x), factoring ¬p(u) ∨ ¬p(v) likewise yields ¬p(u), and resolving these two unit clauses yields the empty clause.
Non-clausal resolution
Generalizations of the above resolution rule have been devised that do not require the originating formulas to be in clausal normal form.
These techniques are useful mainly in interactive theorem proving where it is important to preserve human readability of intermediate result formulas. Besides, they avoid combinatorial explosion during transformation to clause-form, and sometimes save resolution steps.
Non-clausal resolution in propositional logic
For propositional logic, Murray, and Manna and Waldinger, use the rule that from premises F[p] and G[p] one may infer the resolvent
F[⊤] ∨ G[⊥],
where p denotes an arbitrary formula, F[p] denotes a formula F containing p as a subformula, and F[⊤] is built by replacing in F every occurrence of p by ⊤; likewise G[⊥] is built by replacing every occurrence of p in G by ⊥.
The resolvent F[⊤] ∨ G[⊥] is intended to be simplified using rules like q ∧ ⊤ = q, q ∨ ⊥ = q, ¬⊥ = ⊤, etc.
In order to prevent generating useless trivial resolvents, the rule shall be applied only when p has at least one "negative" occurrence in F and at least one "positive" occurrence in G. Murray has shown that this rule is complete if augmented by appropriate logical transformation rules.
Traugott uses the rule that from premises F[p⁺, p⁻] and G[p] one may infer the resolvent
F[G[⊤], ¬G[⊥]],
where the exponents of p indicate the polarity of its occurrences. While G[⊤] and G[⊥] are built as before, the formula F[G[⊤], ¬G[⊥]] is obtained by replacing each positive and each negative occurrence of p in F with G[⊤] and ¬G[⊥], respectively. Similar to Murray's approach, appropriate simplifying transformations are to be applied to the resolvent. Traugott proved his rule to be complete, provided that ∧, ∨, →, and ¬ are the only connectives used in formulas.
Traugott's resolvent is stronger than Murray's. Moreover, it does not introduce new binary junctors, thus avoiding a tendency towards clausal form in repeated resolution. However, formulas may grow longer when a small p is replaced multiple times with a larger G[⊤] and/or ¬G[⊥].
Propositional non-clausal resolution example
As an example, starting from the user-given assumptions
the Murray rule can be used as follows to infer a contradiction:
For the same purpose, the Traugott rule can be used as follows :
From a comparison of both deductions, the following issues can be seen:
Traugott's rule may yield a sharper resolvent: compare (5) and (10), which both resolve (1) and (2) on the same subformula.
Murray's rule introduced 3 new disjunction symbols: in (5), (6), and (7), while Traugott's rule did not introduce any new symbol; in this sense, Traugott's intermediate formulas resemble the user's style more closely than Murray's.
Due to the latter issue, Traugott's rule can take advantage of the implication in assumption (4), using it as the non-atomic formula in step (12). Using Murray's rule, a semantically equivalent formula was obtained as (7); however, it could not be used in the same role due to its syntactic form.
Non-clausal resolution in first-order logic
For first-order predicate logic, Murray's rule is generalized to allow distinct, but unifiable, subformulas p1 and p2 of F and G, respectively. If φ is the most general unifier of p1 and p2, then the generalized resolvent is Fφ[⊤] ∨ Gφ[⊥]. While the rule remains sound if a more special substitution is used, no such rule applications are needed to achieve completeness.
Traugott's rule is generalized to allow several pairwise distinct subformulas of F and of G, as long as all of them have a common most general unifier, say φ. The generalized resolvent is obtained after applying φ to the parent formulas, thus making the propositional version applicable. Traugott's completeness proof relies on the assumption that this fully general rule is used; it is not clear whether his rule would remain complete if restricted to a single subformula of F and of G.
Paramodulation
Paramodulation is a related technique for reasoning on sets of clauses where the predicate symbol is equality. It generates all "equal" versions of clauses, except reflexive identities. The paramodulation operation takes a positive "from" clause, which must contain an equality literal. It then searches an "into" clause with a subterm that unifies with one side of the equality. The subterm is then replaced by the other side of the equality. The general aim of paramodulation is to reduce the system to atoms, reducing the size of the terms when substituting.
Implementations
CARINE
GKC
Otter
Prover9
SNARK
SPASS
Vampire
Logictools online prover
See also
Condensed detachment — an earlier version of resolution
Inductive logic programming
Inverse resolution
Logic programming
Method of analytic tableaux
SLD resolution
Resolution inference
Notes
References
External links
1965 introductions
Automated theorem proving
Propositional calculus
Proof theory
Rules of inference
Theorems in propositional logic | Resolution (logic) | [
"Mathematics"
] | 2,909 | [
"Automated theorem proving",
"Proof theory",
"Mathematical logic",
"Computational mathematics",
"Rules of inference",
"Theorems in propositional logic",
"Theorems in the foundations of mathematics"
] |
2,724,263 | https://en.wikipedia.org/wiki/Swain%20equation | The Swain equation relates the kinetic isotope effect for the protium/tritium combination with that of the protium/deuterium combination according to:
kH/kT = (kH/kD)^1.44
where kH,D,T are the reaction rate constants for the protonated, deuterated and tritiated reactants respectively.
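As a rough numerical illustration (not from the original article), the relation can be applied directly in a few lines of Python; the exponent 1.44 is the standard Swain–Schaad value, and the sample kH/kD ratio below is an arbitrary assumed number.

# Illustrative sketch: predict the H/T kinetic isotope effect from the H/D one.
SWAIN_SCHAAD_EXPONENT = 1.44   # standard semiclassical value

def kh_over_kt(kh_over_kd, exponent=SWAIN_SCHAAD_EXPONENT):
    return kh_over_kd ** exponent

print(kh_over_kt(7.0))   # an assumed k_H/k_D of 7 gives k_H/k_T of about 16.5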
External links
Applied Swain equation
References
Use of Hydrogen Isotope Effects to Identify the Attacking Nucleophile in the Enolization of Ketones Catalyzed by Acetic Acid C. Gardner Swain, Edward C. Stivers, Joseph F. Reuwer, Jr. Lawrence J. Schaad; J. Am. Chem. Soc.; 1958; 80(21); 5885-5893.
Chemical kinetics
Equations | Swain equation | [
"Chemistry",
"Mathematics"
] | 155 | [
"Chemical kinetics",
"Equations",
"Mathematical objects",
"Chemical reaction engineering"
] |
2,725,360 | https://en.wikipedia.org/wiki/Deconstruction%20%28building%29 | In the context of physical construction, deconstruction is the selective dismantlement of building components, specifically for reuse, repurposing, recycling, and waste management. It differs from demolition where a site is cleared of its building by the most expedient means. Deconstruction has also been defined as "construction in reverse". Deconstruction requires a substantially higher degree of hands-on labor than does traditional demolition, but as such provides a viable platform for unskilled or unemployed workers to receive job skills training. The process of dismantling structures is an ancient activity that has been revived by the growing fields of sustainable and green building.
When buildings reach the end of their useful life, they are typically demolished and hauled to landfills. Building implosions or ‘wrecking-ball’ style demolitions are relatively inexpensive and offer a quick method of clearing sites for new structures. On the other hand, these methods create substantial amounts of waste. Components within old buildings may still be valuable, sometimes more valuable than at the time the building was constructed. Deconstruction is a method of harvesting what is commonly considered “waste” and reclaiming it into useful building material. Most modern buildings are difficult to deconstruct due to the designs of such buildings.
Contribution to sustainability
Deconstruction has strong ties to environmental sustainability. In addition to giving materials a new life cycle, deconstructing buildings helps to lower the need for virgin resources. This in turn leads to energy and emissions reductions from the refining and manufacture of new materials, especially when considering that an estimated 40% of global material flows can be attributed to construction, maintenance, and renovation of structures. As deconstruction is often done on a local level, many times on-site, energy and emissions are also saved in the transportation of materials. Deconstruction can potentially support communities by providing local jobs and renovated structures. Deconstruction creates 6–8 jobs for every job created by traditional demolition. In addition, solid waste from conventional demolition is diverted from landfills. This is a major benefit because construction and demolition waste accounts for approximately 20%–40% of the solid waste stream, and 90% of this construction and demolition waste stream is generated during the process of demolition. In 2015, 548 million tons of construction and demolition waste were created in the United States alone.
Deconstruction allows for substantially higher levels of material reuse and recycling than do conventional processes of demolition. Up to 25% of material in a traditional residential structure can be readily reused, while up to 70% of material can be recycled.
In 2022, The Catherine Commons Deconstruction Project at Cornell University showcased the environmental benefits of deconstruction. By recycling and reusing about 90% of the building materials, such as fir, oak, and walnut boards, this project highlighted a significant reduction in waste and resource use compared to traditional demolition.
Benefits of avoiding wood waste
In Canada, the Neutral Alliance has created a website with resources for regulators and municipalities, developers and contractors, business owners and operators, and individuals and households. Benefits for municipalities include:
Reducing disposal costs where waste collection, hauling or disposal is supported by the tax base
Establishing additional revenue streams
Making existing landfills last longer
Reducing greenhouse gas emissions caused by the decomposition of wood waste into methane from landfills
Stimulating local economies with new industries and employment
Improving the local environment and overall sustainability of your community
For every three square feet of deconstruction, enough lumber can be salvaged to build one square foot of new construction. At this rate, if deconstruction replaced residential demolition, the United States could generate enough recovered wood to construct 120,000 new affordable homes each year. The deconstruction of a typical wood-frame home can yield 6,000 board feet of reusable lumber.
Every year the United States buries about 33 million tons of wood-related construction and demolition debris in landfills. As anaerobic microorganisms decompose this wood, it will release about five million tons of carbon equivalent in the form of methane gas.
Typical methods of deconstruction
Deconstruction is commonly separated into two categories: structural and non-structural. Non-structural deconstruction, also known as “soft-stripping”, consists of reclaiming non-structural components, appliances, doors, windows, and finish materials. The reuse of these types of materials is commonplace and considered to be a mature market in many locales.
Structural deconstruction involves dismantling the structural components of a building. Traditionally this had only been performed to reclaim expensive or rare materials such as used brick, dimension stone, and extinct wood. In antiquity, it was common to raze stone buildings and reuse the stone; it was also common to steal stones from a building that was not being totally demolished: this is the literal meaning of the word dilapidated. Used brick and dimension limestone, in particular, have a long tradition of reuse due to their durability and color changes over time. Recently, the rise of environmental awareness and sustainable building has made a much wider range of materials worthy of structural deconstruction. Low-end, commonplace materials such as dimensional lumber have become part of this newly emerging market.
The United States military has utilized structural deconstruction in many of its bases. The construction methods of barracks, among other base structures, are usually relatively simple. They typically contained large amounts of lumber and used minimal adhesives and finish-work. In addition, the buildings are often identical, making the process of deconstructing multiple buildings much easier. Many barracks were built in the era prior to WWII and have aged to the point where they now need to be torn down. Deconstruction was deemed very practical due to the abundance of labor the military has access to and the value of the materials themselves.
Natural disasters, such as hurricanes, floods, tsunamis, and earthquakes often leave a vast amount of usable building materials in their wake. Structures that remain standing are often deconstructed to provide materials for rebuilding the region.
Economic potential
Deconstruction's economic viability varies from project to project. The amount of time and cost of labor are the main drawbacks. Harvesting materials from a structure can take weeks, whereas demolition may be completed in roughly a day. However, some of the costs, if not all, can be recovered. Reusing the materials in a new on-site structure, selling reclaimed materials, donating materials for income tax write-offs, and avoiding landfill “tipping fees” are all ways in which the cost of deconstruction can be made comparable to demolition.
Reclaiming the materials for a new on-site structure is the most economically and environmentally efficient option. Tipping fees and the costs of new materials are avoided; in addition, the transportation of the materials is non-existent. Selling the used materials or donating them to non-profit organizations are another effective way of gaining capital. Donations to NPO's such as Habitat for Humanity’s ReStore are tax-deductible. Many times it is possible to claim the value to be half of what that particular material would cost new. When donating rare or antique components it is sometimes possible to claim a higher value than a comparable, brand-new material.
Value can also be added to new structures that are built by implementing reused materials. The United States Green Building Council's program entitled Leadership in Energy and Environmental Design (LEED) offers seven credits relating to reusing materials. (This accounts for seven out of a maximum sixty-nine credits) These include credits for building-shell reuse, material reuse, and diverting waste from landfills. Building shell-reuse is particularly appropriate for shells made of dimension stone.
Deconstruction is well suited to job training for the construction trades. Taking down a building is an excellent way for a worker to learn how to put a building up. This is vital for the economic recovery of inner-city communities. Unskilled and low-skilled workers can receive on-the-job training in use of basic carpentry tools and techniques, as well as learning teamwork, problem-solving, critical thinking and good work habits.
Process
When choosing to deconstruct a building there are some important aspects that need to be taken into consideration. Developing a list of local contacts that are able to take used materials is an essential first step. These might include commercial architectural salvage businesses, reclamation yards, not-for-profit and social enterprise salvage warehouses, and dismantling contractors. Materials that cannot be salvaged may be recycled on-site or off-site, or taken to landfills. The next step involves identifying which, if any, are hazardous materials. Lead paint and asbestos are two substances in particular that need to be handled extremely cautiously and disposed of properly.
Salvaged goods that are contaminated with hazardous materials such as Lead Paint will need additional processing in order to be reused again, which adds an additional cost barrier to the effective reuse of certain materials reclaimed in a deconstruction project. To address this challenge, some deconstruction contractors have begun utilizing specialized sealed processing trailers that utilize negative pressure to provide on-site lead remediation processing for salvaged timber.
The following set of questions can aid in developing a deconstruction plan:
What parts of the building support other parts?
What parts of the building are self-supporting?
Where do specialized service inputs and outputs (telecommunications, electricity, water, gas, wastewater, supply and exhaust air) occur and how are these flow mechanisms constructed?
What parts of the building are subject to the most stresses from climate?
What parts of the building are most subject to wear from human use and change from aesthetic preference?
What parts of the building are most subject to alteration based upon functional, economic, life-expectancy, or technological requirements?
What parts of the building are composed of components and sub-components based upon a complex set of functional requirements and what parts serve only one function and hence are composed of relatively homogeneous materials?
What parts of a building pose the greatest worker hazards in disassembly?
What are the functional sizes of the principal elements and components of a building?
What are the most expensive elements of a building, which have the highest reuse and recycling value and which impact the life-cycle efficiency of a building the most?
It is common practice, and common sense, to “soft-strip” the structure first; remove all appliances, windows, doors, and other finishing materials. These will account for a large percentage of the marketable components. After the non-structural deconstruction, structural is the next step. It is best to start at the roof and work down to the foundation.
Building components that are dismantled will need to be stored in a secure, dry location. This will protect them from water damage and theft. Once separated from the structure, materials can also be cleaned and/or refinished to increase value. Building an inventory list of the materials at hand will help determine where each item will be sent.
Deconstruction vs. Demolition
Compared with demolition, deconstruction is a much safer way to take down a building, both for the environment and for human health, particularly with respect to air pollution. Structures are usually taken down by implosion, in which explosives collapse the building on itself; this releases a variety of harmful substances into the atmosphere and degrades air quality. Although not a controlled demolition, the collapse of New York City's World Trade Center towers in the September 11, 2001 attacks serves as a good reference point for the harmful effects that accompany the sudden destruction of such large structures, primarily because of the similarities between a controlled demolition and the way the Twin Towers collapsed that day. The environmental effects that followed these attacks included the release of numerous harmful and toxic particles into the air, which had a huge impact on New York City's air quality and on the physical health of many people. In many instances, the substances released by demolition are directly linked to numerous diseases and illnesses in people who have been within a certain proximity of the event, and many studies have documented how such ailments arose in 9/11 survivors. As a healthier alternative, deconstruction avoids these impacts on air quality: because the building is carefully taken apart piece by piece, far fewer pollutants are released into the environment, and the salvaged materials can be recycled or otherwise kept out of the waste stream. It is for this reason that many consider deconstruction to be the safer and more environmentally friendly method of taking down structures.
Designing for deconstruction (DfD)
An upstream approach to deconstruction can be implemented into buildings during their design process, known as designing for deconstruction (DfD). This is a current trend in sustainable architecture. DfD structures typically use simple construction methods combined with high-grade, durable materials. Separating layers of a building's infrastructure and making them visible can significantly simplify its deconstruction. Making components within systems separable also assists in being able to dismantle materials quickly and efficiently. This can be achieved by using mechanical fasteners such as bolts to connect parts. Allowing physical access to these fasteners is another necessary aspect of this design, as well as the use of standardized materials assembled consistently throughout the project.
Consolidation of plumbing, HVAC, and other utility service points within a building reduces the need for long service lines, as well as points of entanglement and conflict with other building elements. Similarly, utilizing raised floor or dropped ceiling methods allows easier access to mechanical and electrical services, and can reduce the time needed to remove these components during the process of deconstruction.
Some conventional construction methods and materials are difficult or impossible to deconstruct; the use of nails and adhesives significantly slows down the deconstruction process and can render unusable materials that could otherwise be reused. The presence of hazardous materials is also an obstacle for deconstruction. Using mixed material grades makes the process of identifying pieces for resale difficult.
Some commercial buildings that have been designed according to DfD principles use built in anchor points and other features intended to provide additional fall protection options. Such design considerations can increase overall worker safety, and decrease amount of overall time spent on deconstruction.
DfD not only enables the end of a building's life-cycle, but can also make the building easier to maintain and adapt to new uses. Saving the shell of a building or adapting interior spaces to meet new needs can reduce the environmental impact of new structures.
Other approaches include modular building, like the Habitat 67 project in Montreal, Quebec, Canada. This was a residential structure consisting of separate, functional apartments that could be put together in a variety of ways. As people moved in or out, the units could be reconfigured as needed.
See also
Repurposed building stone
Concrete recycling
Dimension stone § Stone recycling and reuse
Green building
Modular construction systems
Recycling timber
Articulation
Denailer
Reverse engineering (a different but related concept)
Slighting
References
External links
U.S. Environmental Protection Agency – deconstruction case studies and links
Home Resource - A non-profit building materials re-use center
The Building Materials Reuse Association - national organization for deconstruction and reuse
Construction
Demolition
Recycled building materials
Building materials
Sustainable building
Sustainable architecture
Recycling | Deconstruction (building) | [
"Physics",
"Engineering",
"Environmental_science"
] | 3,248 | [
"Demolition",
"Sustainable building",
"Sustainable architecture",
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Environmental social science",
"Matter",
"Building materials"
] |
2,726,086 | https://en.wikipedia.org/wiki/Uncleftish%20Beholding | "Uncleftish Beholding" is a short text by Poul Anderson, first published in the Mid-December 1989 issue of the magazine Analog Science Fiction and Fact (with no indication of its fictional or factual status) and included in his anthology All One Universe (1996). It is designed to illustrate what English might look like without its large number of words derived from languages such as French, Greek, and Latin, especially with regard to the proportion of scientific words with origins in those languages.
Written as a demonstration of linguistic purism in English, the work explains atomic theory using Germanic words almost exclusively and coining new words when necessary; many of these new words have cognates in modern German, an important scientific language in its own right. The title phrase uncleftish beholding calques "atomic theory."
To illustrate, the text begins:
It goes on to define firststuffs (chemical elements), such as waterstuff (hydrogen), sourstuff (oxygen), and ymirstuff (uranium), as well as bulkbits (molecules), bindings (compounds), and several other terms important to uncleftish worldken (atomic science). Wasserstoff and Sauerstoff are the modern German words for hydrogen and oxygen, and in Dutch the modern equivalents are waterstof and zuurstof. Sunstuff refers to helium, which derives from hēlios, the Ancient Greek word for 'sun'. Ymirstuff references Ymir, a giant in Norse mythology similar to Uranus in Greek mythology.
Glossary
The vocabulary used in "Uncleftish Beholding" does not completely derive from Anglo-Saxon. Around, from Old French reond (Modern French rond), completely displaced Old English ymbe (modern English umbe, now obsolete, cognate to German um and Latin ambi-) and left no "native" English word for this concept. The text also contains the French-derived words rest, ordinary and sort.
The text gained increased exposure and popularity after being circulated around the Internet, and has served as inspiration for some inventors of Germanic English conlangs. Douglas Hofstadter, in discussing the piece in his book Le Ton beau de Marot, jocularly refers to the use of only Germanic roots for scientific pieces as "Ander-Saxon."
See also
Anglish
Thing Explainer
References
External links
English language
Atomic physics
1989 documents
Works by Poul Anderson
Linguistic purism
Books written in fictional dialects | Uncleftish Beholding | [
"Physics",
"Chemistry"
] | 486 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
27,388,322 | https://en.wikipedia.org/wiki/Icophone | The icophone is an instrument of speech synthesis conceived by Émile Leipp in 1964 and used for synthesizing the French language. The two first icophones were made in the laboratory of physical mechanics of Saint-Cyr-l'École.
The principle of the icophone is the representation of sound by a spectrogram. A spectrogram of a word, a phrase, or more generally any sound shows the distribution of the different frequencies present, with their relative intensities. The first machines to synthesize words worked by drawing the form of the spectrogram on a transparent tape, which controlled a series of oscillators according to the presence or absence of a black mark on the tape. Leipp succeeded in decomposing the segments of a spoken sound, and in synthesizing them from a very simplified display.
References
Acoustics
Signal processing
Time–frequency analysis | Icophone | [
"Physics",
"Technology",
"Engineering"
] | 185 | [
"Telecommunications engineering",
"Spectrum (physical sciences)",
"Time–frequency analysis",
"Computer engineering",
"Signal processing",
"Frequency-domain analysis",
"Classical mechanics",
"Acoustics"
] |
27,391,465 | https://en.wikipedia.org/wiki/Monocular%20deprivation | Monocular deprivation is an experimental technique used by neuroscientists to study central nervous system plasticity. Generally, one of an animal's eyes is sutured shut during a period of high cortical plasticity (4–5 weeks-old in mice (Gordon 1997)). This manipulation serves as an animal model for amblyopia, a permanent deficit in visual sensation not due to abnormalities in the eye (which occurs, for example, in children who grow up with cataracts - even after cataract removal, they do not see as well as others).
Background
David Hubel and Torsten Wiesel (who won the Nobel Prize in Physiology or Medicine for their elucidation of receptive field properties of cells in primary visual cortex) first performed the technique in felines. Kittens, although less closely related evolutionarily to humans than even rodents, have a remarkably similar visual system to humans. They found that ocular dominance columns (the orderly clustering of V1 neurons representing visual input from one or both eyes) were dramatically disrupted when one eye was sewn shut for 2 months. In the normal feline, about 85% of cells are responsive to input to both eyes; in the monocularly-deprived animals, no cells receive input from both eyes. The monocular deprivation often leads to amblyopia that is irreversible.
This physiological change was paralleled by dramatic anatomical changes. The layers representing the deprived eye in the lateral geniculate nucleus of the thalamus are atrophied. In V1, ocular dominance columns representing the open eye are dramatically enlarged, at the expense of cortical surface area representing the sutured eye (Fig. 1 - Effect of monocular deprivation on ocular dominance columns. Light areas represent V1 neurons receiving input from an eye which has been injected with radioactive amino acid. Dark areas represent neurons receiving input from the other, noninjected, eye. Image A represents normal ocular dominance columns; Image B represents ocular dominance columns after monocular deprivation). These results were confirmed in the monkey.
In felines, the critical period (the period during which deprivation can cause permanent deficits) can last up to one year, with the peak occurring around 4 weeks. In monkeys, the critical period peak is around 6 months. Depriving an eye for even a few days during this period is sufficient to cause major changes in ocular-dominance-column anatomy and physiology. However, the results of monocular deprivation in adult cats are not the same: the ocular dominance columns show no sign of disturbance even after the adult cat has had one of its eyes shut for over a year.
References
Visual system
Plasticity (physics) | Monocular deprivation | [
"Materials_science"
] | 563 | [
"Deformation (mechanics)",
"Plasticity (physics)"
] |
27,392,412 | https://en.wikipedia.org/wiki/Radially%20unbounded%20function | In mathematics, a radially unbounded function is a function f: ℝⁿ → ℝ for which
f(x) → +∞ as ‖x‖ → ∞.
Or equivalently,
for every c > 0 there exists r > 0 such that f(x) > c whenever ‖x‖ > r.
Such functions are applied in control theory and required in optimization for determination of compact spaces.
Notice that the norm used in the definition can be any norm defined on ℝⁿ, and that the behavior of the function along the axes does not necessarily reveal whether it is radially unbounded or not; i.e. to be radially unbounded the condition must be verified along any path that results in ‖x‖ → ∞.
For example, the functions
are not radially unbounded since along the line , the condition is not verified even though the second function is globally positive definite.
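For illustration, here is a small Python check using a pair of stand-in functions chosen for this sketch (they are assumptions, not necessarily the functions referred to above): f1 vanishes along the path x1 = x2, and f2 is globally positive definite yet bounded, so neither is radially unbounded.

import numpy as np

# f1 is not radially unbounded: it is identically zero along the line x1 = x2.
def f1(x1, x2):
    return (x1 - x2) ** 2

# f2 is globally positive definite (zero only at the origin) yet bounded above
# by 2, so it is not radially unbounded along any path.
def f2(x1, x2):
    return x1**2 / (1 + x1**2) + x2**2 / (1 + x2**2)

t = np.array([1.0, 10.0, 100.0, 1000.0])   # points going to infinity along x1 = x2
print(f1(t, t))   # stays at 0 although ||x|| -> infinity
print(f2(t, t))   # approaches 2, never diverges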
References
Real analysis
Types of functions | Radially unbounded function | [
"Mathematics"
] | 139 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical analysis stubs",
"Mathematical objects",
"Mathematical relations",
"Types of functions"
] |
27,393,000 | https://en.wikipedia.org/wiki/Output%20power%20of%20an%20analog%20TV%20transmitter | The output power of a TV transmitter is the electric power applied to antenna system.
There are two definitions: nominal (or peak) and thermal. Analogue television systems put about 70% to 90% of the transmitter's power into the sync pulses. The remainder of the transmitter's power goes into transmitting the video's higher frequencies and the FM audio carrier. Digital television modulation systems are about 30% more efficient than analogue modulation systems overall.
Analogue vs digital
Analogue
The large amount of energy that sync pulses use is largely independent of the measurement system and of the efficiency of the analogue TV transmitter (most analogue transmitters average about 75% efficiency).
The transmission of FM audio (including stereo subcarriers) is overall only the third-largest consumer of TV transmitter power.
Power consumption (most to least): sync pulses, high-frequency video, FM audio, vestigial AM
Digital
DVB-like transmission systems, with their groups of mathematically related carriers, are not quite as energy-efficient as 8VSB systems.
8VSB transmission systems only provide a limited "forced DC" signal (which consumes about 7% of the transmitter's energy) that can be lost under multipath conditions, causing a loss of signal lock.
Power defined in terms of voltage
The average power for a sinusoidal drive is
P = Vpeak² / (2·R) = Vrms² / R.
For a system where the voltage and the current are in phase, the output power can be given as
P = Vout² / R,
where R is the resistance and Vout is the rms output voltage.
Nominal power of a TV transmitter
Nominal power of a TV transmitter is given as the power during the sync interval. (For the sake of simplicity, aural power is omitted.) Since the voltage during the sync interval is a fixed value,
Pnominal = Vrms² / R,
where Vrms is the rms value of the output voltage during the sync pulse.
To measure the nominal output power, measuring devices with time constants much greater than the line time are used. So the measuring equipment registers only the highest level (the sync pulse) of a line waveform, which is 100%.
This power level is the commercial power of the transmitter.
Thermal power
In analogue TV broadcasting, the video signal modulates a carrier by a kind of amplitude modulation (VSB modulation or C3F). The modulation polarity is negative. That means that the higher the level of the video signal the lower the power of the RF signal.
The lowest possible modulating signal, during the sync interval, yields 100% of the carrier (the nominal power of the transmitter). The blanking level (300 mV) yields 73% in an ideally linear transmitter; usually the figure 75% is found to be acceptable. The highest modulating signal, at white (1000 mV), yields only 10% of the carrier (the so-called residual carrier); sometimes 12.5% is used as the residual carrier. So the output power applied to the antenna system is considerably lower than the nominal power.
The thermal power which can be measured by a microwave power meter depends on the program content as well as the residual carrier and sync depths.
Ratio of thermal power to nominal power
Since the program content is variable, the thermal power varies during the transmission. However, for testing purposes a standard line waveform can be applied to the transmitter.
Usually line waveforms corresponding to 350 mV or 300 mV black image (and without field sync) are applied to the input of the transmitter.
For System B, the duration of the black level (300 mV, together with the front and back porches) is 59.3 μs, and it corresponds to 73% of the maximum voltage level. The duration of the sync pulse is 4.7 μs. The total duration of the line is 64 μs.
So the maximum thermal power applied to the antenna system is 57% of the nominal power, even in the black scene. In normal program content this ratio may be around 25% or less.
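A quick arithmetic check of the 57% figure, assuming power scales with the square of the relative voltage level, can be written as follows:

# Sanity check of the ~57% figure for a System B all-black line.
LINE_US = 64.0                 # total line duration, microseconds
SYNC_US = 4.7                  # sync pulse (100% of carrier voltage)
BLACK_US = LINE_US - SYNC_US   # black level plus porches, about 59.3 us (73% voltage)

# Power is proportional to the square of the relative voltage level.
ratio = (SYNC_US / LINE_US) * 1.00**2 + (BLACK_US / LINE_US) * 0.73**2
print(round(ratio * 100, 1))   # about 57 (percent of nominal power)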
References
Television technology
Broadcast transmitters
Radio transmission power | Output power of an analog TV transmitter | [
"Physics",
"Technology"
] | 777 | [
"Information and communications technology",
"Physical quantities",
"Television technology",
"Radio transmission power",
"Power (physics)"
] |
27,395,553 | https://en.wikipedia.org/wiki/Bjerrum%20plot | A Bjerrum plot (named after Niels Bjerrum), sometimes also known as a Sillén diagram (after Lars Gunnar Sillén), or a Hägg diagram (after Gunnar Hägg) is a graph of the concentrations of the different species of a polyprotic acid in a solution, as a function of pH, when the solution is at equilibrium. Due to the many orders of magnitude spanned by the concentrations, they are commonly plotted on a logarithmic scale. Sometimes the ratios of the concentrations are plotted rather than the actual concentrations. Occasionally H+ and OH− are also plotted.
Most often, the carbonate system is plotted, where the polyprotic acid is carbonic acid (a diprotic acid), and the different species are dissolved carbon dioxide, carbonic acid, bicarbonate, and carbonate. In acidic conditions, the dominant form is dissolved CO2; in basic (alkaline) conditions, the dominant form is CO32−; and in between, the dominant form is HCO3−. At every pH, the concentration of carbonic acid is assumed to be negligible compared to the concentration of dissolved CO2, and so is often omitted from Bjerrum plots. These plots are very helpful in solution chemistry and natural water chemistry. In the example given here, it illustrates the response of seawater pH and carbonate speciation due to the input of man-made CO2 emissions from fossil fuel combustion.
The Bjerrum plots for other polyprotic acids, including silicic, boric, sulfuric and phosphoric acids, are other commonly used examples.
Bjerrum plot equations for carbonate system
If carbon dioxide, carbonic acid, hydrogen ions, bicarbonate and carbonate are all dissolved in water, and at chemical equilibrium, their equilibrium concentrations are often assumed to be given by:
[CO2]eq = DIC / (1 + K1/[H+]eq + K1K2/[H+]eq²)
[HCO3−]eq = DIC / (1 + [H+]eq/K1 + K2/[H+]eq)
[CO32−]eq = DIC / (1 + [H+]eq/K2 + [H+]eq²/(K1K2))
where the subscript 'eq' denotes that these are equilibrium concentrations, K1 is the equilibrium constant for the reaction CO2 + H2O ⇌ H+ + HCO3− (i.e. the first acid dissociation constant for carbonic acid), K2 is the equilibrium constant for the reaction HCO3− ⇌ H+ + CO32− (i.e. the second acid dissociation constant for carbonic acid), and DIC is the (unchanging) total concentration of dissolved inorganic carbon in the system, i.e. [CO2] + [HCO3−] + [CO32−]. K1, K2 and DIC each have units of a concentration, e.g. mol/L.
A Bjerrum plot is obtained by using these three equations to plot these three species against pH = −log10[H+]eq, for given K1, K2 and DIC. The fractions in these equations give the three species' relative proportions, and so if DIC is unknown, or the actual concentrations are unimportant, these proportions may be plotted instead.
These three equations show that the curve for [CO2]eq and the curve for [HCO3−]eq intersect at [H+]eq = K1, and the curves for [HCO3−]eq and [CO32−]eq intersect at [H+]eq = K2. Therefore, the values of K1 and K2 that were used to create a given Bjerrum plot can easily be found from that plot, by reading off the concentrations at these points of intersection. An example with linear Y axis is shown in the accompanying graph. The values of K1 and K2, and therefore the curves in the Bjerrum plot, vary substantially with temperature and salinity.
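The three equations are straightforward to evaluate numerically. The following Python sketch is illustrative only: the K1 and K2 values are assumed, roughly seawater-like dissociation constants rather than figures from the article. It computes the relative proportions across a pH range and confirms the crossover points.

import numpy as np

K1, K2 = 10.0**-5.86, 10.0**-8.92   # assumed, roughly seawater-like values (mol/L)

pH = np.linspace(2.0, 12.0, 101)
H = 10.0**(-pH)                     # [H+] in mol/L

co2  = 1.0 / (1.0 + K1/H + K1*K2/H**2)    # fraction of DIC as dissolved CO2
hco3 = 1.0 / (1.0 + H/K1 + K2/H)          # fraction as bicarbonate
co3  = 1.0 / (1.0 + H/K2 + H**2/(K1*K2))  # fraction as carbonate

assert np.allclose(co2 + hco3 + co3, 1.0)      # the three fractions sum to 1
print(pH[np.argmin(np.abs(co2 - hco3))])       # crossover near pH = pK1 (about 5.9)
print(pH[np.argmin(np.abs(hco3 - co3))])       # crossover near pH = pK2 (about 8.9)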
Chemical and mathematical derivation of Bjerrum plot equations for carbonate system
Suppose that the reactions between carbon dioxide, hydrogen ions, bicarbonate and carbonate ions, all dissolved in water, are as follows:
CO2 + H2O ⇌ H+ + HCO3−   (1)
HCO3− ⇌ H+ + CO32−   (2)
Note that reaction (1) is actually the combination of two elementary reactions:
CO2 + H2O ⇌ H2CO3
H2CO3 ⇌ H+ + HCO3−
Assuming the mass action law applies to these two reactions, that water is abundant, and that the different chemical species are always well-mixed, their rate equations are
d[CO2]/dt = −k1[CO2] + k−1[H+][HCO3−]
d[HCO3−]/dt = k1[CO2] − k−1[H+][HCO3−] − k2[HCO3−] + k−2[H+][CO32−]
d[H+]/dt = k1[CO2] − k−1[H+][HCO3−] + k2[HCO3−] − k−2[H+][CO32−]
d[CO32−]/dt = k2[HCO3−] − k−2[H+][CO32−]
where square brackets denote concentration, t is time, and k1 and k−1 are appropriate proportionality constants for reaction (1), called respectively the forwards and reverse rate constants for this reaction. (Similarly k2 and k−2 for reaction (2).)
At equilibrium, the concentrations are unchanging, hence the left hand sides of these equations are zero. Then, from the first of these four equations, the ratio of reaction (1)'s rate constants equals the ratio of its equilibrium concentrations, and this ratio, called K1, is the equilibrium constant for reaction (1), i.e.
K1 = k1 / k−1 = [H+]eq [HCO3−]eq / [CO2]eq
where the subscript 'eq' denotes that these are equilibrium concentrations.
Similarly, from the fourth equation, for the equilibrium constant K2 for reaction (2),
K2 = k2 / k−2 = [H+]eq [CO32−]eq / [HCO3−]eq
Rearranging the expression for K1 gives
[HCO3−]eq = K1 [CO2]eq / [H+]eq
and rearranging the expression for K2, then substituting in the expression just obtained for [HCO3−]eq, gives
[CO32−]eq = K2 [HCO3−]eq / [H+]eq = K1 K2 [CO2]eq / [H+]eq²
The total concentration of dissolved inorganic carbon in the system is given by substituting in these expressions:
DIC = [CO2]eq + [HCO3−]eq + [CO32−]eq = [CO2]eq (1 + K1/[H+]eq + K1K2/[H+]eq²)
Re-arranging this gives the equation for [CO2]eq:
[CO2]eq = DIC / (1 + K1/[H+]eq + K1K2/[H+]eq²)
The equations for [HCO3−]eq and [CO32−]eq are obtained by substituting this back into the two rearranged expressions above.
See also
Charlot equation
Gran plot (also known as Gran titration or the Gran method)
Henderson–Hasselbalch equation
Hill equation (biochemistry)
Ion speciation
Fresh water
Seawater
Thermohaline circulation
References
Acid–base chemistry
Aquatic ecology
Chemical oceanography
Geochemistry
Limnology
Oceanography
Soil chemistry
Thermodynamics
Water chemistry | Bjerrum plot | [
"Physics",
"Chemistry",
"Mathematics",
"Biology",
"Environmental_science"
] | 1,028 | [
"Acid–base chemistry",
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Equilibrium chemistry",
"Chemical oceanography",
"Soil chemistry",
"Thermodynamics",
"nan",
"Ecosystems",
"Aquatic ecology",
"Dynamical systems"
] |
4,992,829 | https://en.wikipedia.org/wiki/Papaveretum | Papaveretum (BAN) is a preparation containing a mixture of hydrochloride salts of opium alkaloids. Since 1993, papaveretum has been defined in the British Pharmacopoeia (BP) as a mixture of 253 parts morphine hydrochloride, 23 parts papaverine hydrochloride, and 20 parts codeine hydrochloride. It is commonly marketed to medical agencies under the trade name Omnopon.
Although the use of papaveretum is now relatively uncommon following the wide availability of single-component opiates and synthetic opioids (e.g. pethidine), it is still used to relieve moderate to severe pain and for pre-operative sedation. In clinical settings, papaveretum is usually administered to patients via subcutaneous, intramuscular, or intravenous routes. Additionally, the morphine syrettes found in combat medical kits issued to military personnel actually contain Omnopon.
Prior to 1993, papaveretum also contained noscapine, though this component was removed from the BP formulation due to the genotoxic potential of noscapine.
References
Opiates | Papaveretum | [
"Chemistry"
] | 238 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
4,998,099 | https://en.wikipedia.org/wiki/Embrittlement | Embrittlement is a significant decrease in the ductility of a material, which makes the material brittle. The term describes any phenomenon in which an environmental factor, such as temperature or the chemical composition of the surroundings, compromises a stressed material's mechanical performance. This is usually undesirable, as brittle fracture initiates more quickly and propagates much more easily than ductile fracture, leading to complete failure of the equipment. Different materials have different mechanisms of embrittlement, so it can manifest in a variety of ways, from slow crack growth to a reduction of tensile ductility and toughness.
Mechanisms
Embrittlement is a complex set of mechanisms that is not completely understood. The mechanisms can be driven by temperature, stresses, grain boundaries, or material composition. However, by studying the embrittlement process, preventative measures can be put in place to mitigate the effects. There are several ways to study the mechanisms. During metal embrittlement (ME), crack-growth rates can be measured. Computer simulations can also be used to illuminate the mechanisms behind embrittlement. This is helpful for understanding hydrogen embrittlement (HE), as the diffusion of hydrogen through materials can be modeled. The embrittler does not play a role in final fracture; it is mostly responsible for crack propagation. Cracks must first nucleate. Most embrittlement mechanisms can cause fracture transgranularly or intergranularly. For metal embrittlement, only certain combinations of metals, stresses, and temperatures are susceptible. This is contrasted to stress-corrosion cracking, where virtually any metal can be susceptible given the correct environment. Yet this mechanism is much slower than that of liquid metal embrittlement (LME), suggesting that it directs a flow of atoms both towards and away from the crack. For neutron embrittlement, the main mechanism is collisions within the material from the fission byproducts.
Embrittlement of metals
Hydrogen embrittlement
One of the most widely discussed, and most detrimental, forms of embrittlement is hydrogen embrittlement in metals. Hydrogen atoms can diffuse into metals in several ways, including from the environment or during processing (e.g. electroplating). The exact mechanism that causes hydrogen embrittlement is still not settled, but many theories have been proposed and are still undergoing verification. Hydrogen atoms are likely to diffuse to the grain boundaries of metals, where they become a barrier to dislocation motion and build up stress near the boundary. When the metal is stressed, the stress concentrates near the grain boundaries occupied by hydrogen atoms, allowing a crack to nucleate and propagate along the grain boundaries to relieve the built-up stress.
There are many ways to prevent or reduce the impact of hydrogen embrittlement in metals. One of the more conventional ways is to apply coatings to the metal, which act as diffusion barriers that prevent hydrogen from being introduced into the material from the environment. Another is to add traps or absorbers to the alloy, which take up the hydrogen atoms and bind them into another compound.
475 °C embrittlement
Duplex stainless steel is widely used in industry because it possesses excellent oxidation resistance, but it can have limited toughness due to its large ferritic grain size and its embrittlement tendencies at temperatures ranging from 280 to 500 °C, especially at 475 °C. At this temperature, spinodal decomposition of the supersaturated solid ferrite solution into an Fe-rich nanophase (α) and a Cr-rich nanophase (α′), accompanied by G-phase precipitation, occurs, which makes the ferrite phase a preferential initiation site for micro-cracks.
Radiation embrittlement
Radiation embrittlement, also known as neutron embrittlement, is most commonly observed in reactors and nuclear plants, where materials are constantly exposed to a steady flux of radiation. When neutrons irradiate the metal, voids are created in the material, a phenomenon known as void swelling. If the material is under creep (low strain rate and high temperature), the voids grow and coalesce, which compromises the mechanical strength of the workpiece.
Low temperature embrittlement
At low temperatures, some metals can undergo a ductile-brittle transition which makes the material brittle and could lead to catastrophic failure during operation. This temperature is commonly called a ductile-brittle transition temperature or embrittlement temperature. Research has shown that low temperature embrittlement and brittle fracture only occurs under these specific criteria:
There is enough stress to nucleate a crack.
The stress at the crack exceeds a critical value that will open up the crack (also known as Griffith's criterion for crack opening).
High resistance to dislocation movement.
There should be a small amount of viscous drag of dislocation to ensure opening of crack.
All metals can fulfill criteria 1, 2, and 4. However, only BCC and some HCP metals meet the third condition, as they have a high Peierls barrier and a strong elastic interaction energy between dislocations and defects. All FCC and most HCP metals have a low Peierls barrier and weak elastic interaction energy. Plastics and rubbers also exhibit the same transition at low temperatures.
Historically, there have been multiple instances in which equipment operated at cold temperatures failed unexpectedly and catastrophically. In Cleveland in 1944, a cylindrical steel tank containing liquefied natural gas ruptured because of its low ductility at the operating temperature. Another famous example was the unexpected fracture of 160 World War II Liberty ships during winter months: cracks formed amidships and propagated through the hull, quite literally breaking the ships in half.
Other types of embrittlement
Stress corrosion cracking (SCC) is the embrittlement caused by exposure to aqueous, corrosive materials. It relies on both a corrosive environment and the presence of tensile (not compressive) stress.
Sulfide stress cracking is the embrittlement caused by absorption of hydrogen sulfide.
Adsorption embrittlement is the embrittlement caused by wetting.
Liquid metal embrittlement (LME) is the embrittlement caused by liquid metals.
Metal-induced embrittlement (MIE) is the embrittlement caused by diffusion of atoms of metal, either solid or liquid, into the material. For example, cadmium coating on high-strength steel, which was originally done to prevent corrosion.
Grain boundary segregation can cause brittle intergranular fracture. During solidification the grain boundaries end up as the repository for the impurities in the alloy by segregation. This grain boundary segregation can create a network of low-toughness paths through the material.
The primary embrittlement mechanism of plastics is gradual loss of plasticizers, usually by overheating or aging.
The primary embrittlement mechanism of asphalt is by oxidation, which is most severe in warmer climates. Asphalt pavement embrittlement (aka crocodile cracking) can lead to various forms of cracking patterns, including longitudinal, transverse, and block (hexagonal). Asphalt oxidation is related to polymer degradation, as these materials bear similarities in their chemical composition.
Embrittlement of inorganic glasses and ceramics
The mechanisms of embrittlement are similar to those of metals. Inorganic glass embrittlement can manifest via static fatigue. Embrittlement in glasses, such as Pyrex, is a function of humidity: the growth rate of cracks varies linearly with humidity, suggesting a first-order kinetic relationship. The static fatigue of Pyrex by this mechanism requires dissolution to be concentrated at the tip of the crack. If the dissolution is uniform along the flat surfaces of the crack, the crack tip is instead blunted; this blunting can actually increase the fracture strength of the material by 100 times.
The embrittlement of SiC/alumina composites serves as an instructive example. The mechanism for this system is primarily the diffusion of oxygen into the material through cracks in the matrix. The oxygen reaches the SiC fibers and produces silicate. Stress concentrates around the newly formed silicate and the fibers' strength is degraded. This ultimately leads to fracture at stresses less than the material's typical fracture stress.
Embrittlement of polymers
Polymers come in a wide variety of compositions, and this diversity of chemistry results in wide-ranging embrittlement mechanisms. The most common sources of polymer embrittlement include oxygen in the air, water in liquid or vapor form, ultraviolet radiation from the sun, acids, and organic solvents.
One of the ways these sources alter the mechanical properties of polymers is through chain scission and chain cross-linking. Chain scission occurs when atomic bonds are broken in the main chain, so environments with elements such as solar radiation lead to this form of embrittlement. Chain scission reduces the length of the polymer chains in a material, resulting in a reduction of strength. Chain cross-linking has the opposite effect. An increase in the number of cross-links (due to an oxidative environment for example), results in stronger, less ductile material.
The thermal oxidation of polyethylene provides a quality example of chain scission embrittlement. The random chain scission induced a change from ductile to brittle behavior once the average molar mass of the chains dropped below a critical value. For the polyethylene system, embrittlement occurred when the weight average molar mass fell below 90 kg/mol. The reason for this change was hypothesized to be a reduction of entanglement and an increase in crystallinity. The ductility of polymers is typically a result of their amorphous structure, so an increase in crystallinity makes the polymer more brittle. In the case of polyethylene terephthalate, hydrolysis produces chain scission embrittlement. It has been demonstrated that the degradation of the mechanical properties correlates with the reduction of the mobile amorphous fraction (MAF), and that the ductile-to-brittle transition occurs when the minimum MAF is reached. This supports a micromechanical interpretation of the embrittlement mechanism rather than a molecular interpretation.
The embrittlement of silicone rubber is due to an increase in the amount of chain cross-linking. When silicone rubber is exposed to air at sufficiently high temperatures, oxidative cross-linking reactions occur at methyl side groups along the main chain. These cross-links make the rubber significantly less ductile.
Solvent stress cracking is a significant polymer embrittlement mechanism. It occurs when liquids or gases are absorbed into the polymer, ultimately swelling the system. The swelling reduces shear flow and increases susceptibility to crazing. Solvent stress cracking by organic solvents typically results in static fatigue because of the low mobility of the fluids, while solvent stress cracking by gases is more likely to result in greater crazing susceptibility.
Polycarbonate provides a good example of solvent stress cracking. Numerous solvents have been shown to embrittle polycarbonate (i.e. benzene, toluene, acetone) through a similar mechanism. The solvent diffuses into the bulk, swells the polymer, induces crystallization, and ultimately produces interfaces between ordered and disordered regions. These interfaces produce voids and stress fields that can be propagated throughout the material at stresses much lower than the typical tensile strength of the polymer.
References
Corrosion
Materials degradation | Embrittlement | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,388 | [
"Metallurgy",
"Materials science",
"Corrosion",
"Electrochemistry",
"Materials degradation"
] |
4,998,376 | https://en.wikipedia.org/wiki/Electrodipping%20force | The electrodipping force is a force proposed to explain the observed attraction that arises among small colloidal particles attached to an interface between immiscible liquids. The particles are held there by surface tension. Normally the surface tension does not in itself give rise to an attraction or repulsion among particles on a meniscus. A capillary interaction requires that the particles are pushed or pulled away from the meniscus, for instance because of their weight, if the particles are large and heavy enough.
It has been proposed by Nikolaides et al. that the observed attractions are the result of an electrostatic pressure on the liquid interface, due to electric charges on the particles. The electrostatic pressure arises because the dielectric constants of the liquids differ. Due to the pressure, the liquid interface deforms. This pressure is balanced by a simultaneous electrostatic force acting on the charges, and hence on the particle itself. The force has been coined the electrodipping force by Kralchevsky et al. - it dips the particle in one of the liquids.
According to Nikolaides, the electrostatic force engenders a long range capillary attraction. However, this explanation is controversial; other authors have argued that the capillary effect of the electrodipping force is in fact cancelled by the electrostatic pressure on the interface, so the resulting capillary effect would be insignificant.
References
M. G. Nikolaides, A. R. Bausch, M. F. Hsu, A. D. Dinsmore, M. P. Brenner, C. Gay, D. A. Weitz, Electric-field-induced capillary attraction between like-charged particles at liquid interfaces, Nature 420, 299 (2002)
K. D. Danov, P. A. Kralchevsky, M. P. Boneva, Langmuir 20, 6139 (2004)
M. Megens, J. Aizenberg, Nature 242, 1014 (2003)
L. Foret, A. Wuerger, Physical Review Letters 92 58302 (2004),
Fluid mechanics | Electrodipping force | [
"Engineering"
] | 444 | [
"Civil engineering",
"Fluid mechanics"
] |
28,648,000 | https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli%20theorem | Rouché–Capelli theorem is a theorem in linear algebra that determines the number of solutions for a system of linear equations, given the rank of its augmented matrix and coefficient matrix. The theorem is variously known as the:
Rouché–Capelli theorem in English speaking countries, Italy and Brazil;
Kronecker–Capelli theorem in Austria, Poland, Ukraine, Croatia, Romania, Serbia and Russia;
Rouché–Fontené theorem in France;
Rouché–Frobenius theorem in Spain and many countries in Latin America;
Frobenius theorem in the Czech Republic and in Slovakia.
Formal statement
A system of linear equations with n variables and coefficients in a field K has a solution if and only if its coefficient matrix A and its augmented matrix [A|b] have the same rank. If there are solutions, they form an affine subspace of Kⁿ of dimension n − rank(A). In particular:
if n = rank(A), the solution is unique,
if n > rank(A) and K is an infinite field, the system of linear equations admits infinitely many solutions,
if K is a finite field, the number of solutions is finite, namely |K|^(n − rank(A)).
Example
Consider the system of equations
x + y + 2z = 3,
x + y + z = 1,
2x + 2y + 2z = 2.
The coefficient matrix is
[ 1 1 2 ]
[ 1 1 1 ]
[ 2 2 2 ]
and the augmented matrix is
[ 1 1 2 | 3 ]
[ 1 1 1 | 1 ]
[ 2 2 2 | 2 ]
Since both of these have the same rank, namely 2, there exists at least one solution; and since their rank is less than the number of unknowns, the latter being 3, there are infinitely many solutions.
In contrast, consider the system
x + y + 2z = 3,
x + y + z = 1,
2x + 2y + 2z = 5.
The coefficient matrix is
[ 1 1 2 ]
[ 1 1 1 ]
[ 2 2 2 ]
and the augmented matrix is
[ 1 1 2 | 3 ]
[ 1 1 1 | 1 ]
[ 2 2 2 | 5 ]
In this example the coefficient matrix has rank 2, while the augmented matrix has rank 3; so this system of equations has no solution. Indeed, an increase in the number of linearly independent columns has made the system of equations inconsistent.
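The two examples can be checked numerically. The following Python sketch (illustrative, using NumPy's matrix_rank) compares the ranks of the coefficient and augmented matrices for both right-hand sides:

import numpy as np

# Check the two examples: same coefficient matrix, two right-hand sides.
A = np.array([[1, 1, 2],
              [1, 1, 1],
              [2, 2, 2]])
for rhs in ([3, 1, 2], [3, 1, 5]):
    b = np.array(rhs).reshape(-1, 1)
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    print(rank_A, rank_Ab, "solvable" if rank_A == rank_Ab else "no solution")
# Expected output: "2 2 solvable" then "2 3 no solution"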
Proof
There are several proofs of the theorem. One of them is the following.
Using Gaussian elimination to put the augmented matrix in reduced row echelon form changes neither the set of solutions nor the ranks of the matrices involved. The theorem can be read almost directly from the reduced row echelon form as follows.
The rank of a matrix is the number of nonzero rows in its reduced row echelon form. If the ranks of the coefficient matrix and the augmented matrix differ, then the last nonzero row of the reduced augmented matrix has the form (0, …, 0 | 1), corresponding to the equation 0 = 1. Otherwise, the i-th row of the reduced row echelon form allows expressing the i-th pivot variable as the sum of a constant and a linear combination of the non-pivot variables, showing that the dimension of the set of solutions is the number of non-pivot variables.
See also
Cramer's rule
References
External links
Kronecker-Capelli Theorem at Wikibooks
Kronecker-Capelli's Theorem - YouTube video with a proof
Kronecker-Capelli theorem in the Encyclopaedia of Mathematics
Theorems in linear algebra
Matrix theory | Rouché–Capelli theorem | [
"Mathematics"
] | 623 | [
"Theorems in algebra",
"Theorems in linear algebra"
] |
28,653,915 | https://en.wikipedia.org/wiki/Kanamori%E2%80%93McAloon%20theorem | In mathematical logic, the Kanamori–McAloon theorem, due to Akihiro Kanamori and Kenneth McAloon, gives an example of an incompleteness in Peano arithmetic, similar to that of the Paris–Harrington theorem. They showed that a certain finitistic theorem in Ramsey theory is not provable in Peano arithmetic (PA).
Statement
Given a set s ⊆ ℕ of non-negative integers, let min(s) denote the minimum element of s. Let [X]^n denote the set of all n-element subsets of X.
A function f: [X]^n → ℕ, where X ⊆ ℕ, is said to be regressive if f(s) < min(s) for all s not containing 0.
The Kanamori–McAloon theorem states that the following proposition is not provable in PA:
For every n, k ∈ ℕ, there exists an m ∈ ℕ such that for all regressive f: [m]^n → ℕ, there exists a set H ⊆ m of cardinality at least k such that for all s, t ∈ [H]^n with min(s) = min(t), we have f(s) = f(t).
See also
Paris–Harrington theorem
Goodstein's theorem
Kruskal's tree theorem
References
Independence results
Theorems in the foundations of mathematics | Kanamori–McAloon theorem | [
"Mathematics"
] | 196 | [
"Independence results",
"Mathematical theorems",
"Foundations of mathematics",
"Mathematical logic",
"Mathematical logic stubs",
"Mathematical problems",
"Theorems in the foundations of mathematics"
] |
1,962,679 | https://en.wikipedia.org/wiki/Three-body%20force | A three-body force is a force that does not exist in a system of two objects but appears in a three-body system. In general, if the behaviour of a system of more than two objects cannot be described by the two-body interactions between all possible pairs, as a first approximation, the deviation is mainly due to a three-body force.
The fundamental strong interaction does exhibit such behaviour, the most important example being the stability experimentally observed for the helium-3 isotope, which can be described as a three-body quantum cluster of two protons and one neutron [PNP] in stable superposition. Direct evidence of a three-body force in helium-3 has been reported. The existence of the stable [PNP] cluster calls into question models of the atomic nucleus that restrict nucleon interactions within shells to two-body phenomena. The three-nucleon interaction is fundamentally possible because gluons, the mediators of the strong interaction, can couple to themselves. In particle physics, the interactions between the three quarks that compose hadrons can be described in a diquark model, which might be equivalent to the hypothesis of a three-body force. There is growing evidence in the field of nuclear physics that three-body forces exist among the nucleons inside atomic nuclei for many different isotopes (the three-nucleon force).
See also
Faddeev equation
Few-body systems
N-body problem
Hydrogen molecular ion
Borromean nucleus
Efimov state
Chiral perturbation theory
References
Force
Nuclear physics | Three-body force | [
"Physics",
"Mathematics"
] | 319 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Nuclear physics",
"Wikipedia categories named after physical quantities",
"Matter"
] |
1,962,912 | https://en.wikipedia.org/wiki/Hiyama%20coupling | The Hiyama coupling is a palladium-catalyzed cross-coupling reaction of organosilanes with organic halides used in organic chemistry to form carbon–carbon bonds (C-C bonds). This reaction was discovered in 1988 by Tamejiro Hiyama and Yasuo Hatanaka as a method to form carbon-carbon bonds synthetically with chemo- and regioselectivity. The Hiyama coupling has been applied to the synthesis of various natural products.
R: aryl, alkenyl or alkynyl
R': aryl, alkenyl, alkynyl or alkyl
R'': Cl, F or alkyl
X: Cl, Br, I or OTf
Reaction history
The Hiyama coupling was developed to combat the issues associated with other organometallic reagents. The reactivity of organosilicon was not actually first reported by Hiyama: Kumada had earlier reported a coupling reaction using organofluorosilicates. Organosilanes were then discovered, by Hiyama, to be reactive when activated by a fluoride source. This reactivity, when combined with a palladium salt, creates a carbon–carbon bond with an electrophilic carbon, such as that of an organic halide. Compared with widely used organometallic reagents such as organomagnesium (Grignard) and organocopper reagents, which are very reactive and are known to have low chemoselectivity, enough to destroy functional groups on both coupling partners, organosilicon compounds are relatively unreactive. Other organometallic reagents based on metals such as zinc, tin, and boron reduce the reactivity issue, but each has other problems: organozinc reagents are moisture sensitive, organotin compounds are toxic, and organoboron reagents are not readily available, are expensive, and are often unstable. Organosilanes are readily available compounds that, upon activation (much like organotin or organoboron compounds) by fluoride or a base, can react with organohalides to form C–C bonds in a chemo- and regioselective manner. The reaction first reported was used to couple easily prepared (and activated) organosilicon nucleophiles with organohalides (electrophiles) in the presence of a palladium catalyst. Since this discovery, various groups have worked to expand the scope of the reaction and to "fix" the issues with this first coupling, such as the need for fluoride activation of the organosilane.
Mechanism
The organosilane is activated with fluoride (typically as a salt such as TBAF or TASF) or with a base to form a pentavalent silicon center, which is labile enough to allow the C–Si bond to break during the transmetalation step. The general scheme for forming this key intermediate is shown below. This step occurs in situ or concurrently with the catalytic cycle of the reaction.
The mechanism for the Hiyama coupling follows a catalytic cycle, including an A) oxidative addition step, in which the organic halide adds to the palladium oxidizing the metal from palladium(0) to palladium(II); a B) transmetalation step, in which the C-Si bond is broken and the second carbon fragment is bound to the palladium center; and finally C) a reductive elimination step, in which the C-C bond is formed and the palladium returns to its zero-valent state to start the cycle over again. The catalytic cycle is shown below.
Scope and limitations
Scope
The Hiyama coupling can be applied to the formation of Csp2–Csp2 (e.g. aryl–aryl) bonds as well as Csp2–Csp3 (e.g. aryl–alkyl) bonds. Good synthetic yields are obtained with couplings of aryl halides, vinyl halides, and allylic halides; organoiodides afford the best yields.
The scope of this reaction was expanded to include closure of medium-sized rings by Scott E. Denmark.
The coupling of alkyl halides with organohalosilanes, used as alternative organosilanes, has also been performed. Organochlorosilanes allow couplings with aryl chlorides, which are abundant and generally more economical than aryl iodides. A nickel catalyst gives access to new reactivity of organotrifluorosilanes, as reported by G. C. Fu et al.; secondary alkyl halides are coupled with aryl silanes in good yields using this reaction.
Limitations
The Hiyama coupling is limited by the need for fluoride in order to activate the organosilicon reagent. Addition of fluoride cleaves any silicon protecting groups (e.g. silyl ethers), which are frequently employed in organic synthesis. The fluoride ion is also basic, so base sensitive protecting groups, acidic protons, and functional groups may be affected by the addition of this activator. Most of the active research concerning this reaction involves circumventing this problem. To overcome this issue, many groups have looked to the use of other basic additives for activation, or use of a different organosilane reagent all together, leading to the multiple variations of the original Hiyama coupling.
Variations
One modification of the Hiyama coupling utilizes a silacyclobutane ring and a hydrated fluoride source, as shown below. This mimics the use of an alkoxysilane/organosilanol rather than an alkylsilane. The mechanism of this reaction, which still uses a fluoride source, allowed the design of later reactions that avoid fluoride altogether.
Fluoride-free Hiyama couplings
Many modifications to the Hiyama coupling have been developed that avoid the use of a fluoride activator. Using organochlorosilanes, Hiyama found a coupling scheme utilizing NaOH as the basic activator. Modifications using alkoxysilanes have been reported with milder bases such as NaOH and even water. Study of these mechanisms has led to the development of the Hiyama–Denmark coupling, which utilizes organosilanols as coupling partners.
Another class of fluoride-free Hiyama couplings includes the use of a Lewis acid additive, which allows bases such as K3PO4 to be utilized, or allows the reaction to proceed without a basic additive. The addition of a copper co-catalyst has also been reported to allow the use of a milder activating agent, and catalytic turnover of both the palladium(II) and the copper(I) species in the cycle has even been demonstrated, avoiding the addition of a stoichiometric Lewis acid (e.g. silver(I) or copper(I)).
Hiyama–Denmark coupling
The Hiyama–Denmark coupling is a modification of the Hiyama coupling that does not require a fluoride additive; it utilizes organosilanols and organic halides as coupling partners. The general reaction scheme is shown below, showcasing the use of a Brønsted base as the activating agent instead of fluoride; phosphine ligands are also used on the metal center.
A specific example of this reaction, with reagents, is shown below. If fluoride had been used, as in the original Hiyama protocol, the tert-butyldimethylsilyl (TBS) ether would likely have been destroyed.
Hiyama–Denmark coupling mechanism
Examination of this reaction's mechanism suggests that formation of the silanolate is all that is needed to activate addition of the organosilane to the palladium center. The presence of a pentavalent silicon is not required, and kinetic analysis has shown that the reaction has a first-order dependence on silanolate concentration. This is because the key bond being formed is the Pd–O bond during the transmetalation step, which then allows transfer of the carbon fragment onto the palladium center. Based on this observation, the rate-limiting step in the catalytic cycle appears to be Pd–O bond formation, so increased silanolate concentrations increase the rate of the reaction.
See also
Heck reaction
Kumada coupling
Negishi coupling
Sonogashira coupling
Stille reaction
Suzuki reaction
Palladium-catalyzed coupling reactions
External links
Information about Hiyama couplings
Information about Hiyama–Denmark couplings
References
Organometallic chemistry
Carbon-carbon bond forming reactions
Palladium
Name reactions | Hiyama coupling | [
"Chemistry"
] | 1,839 | [
"Carbon-carbon bond forming reactions",
"Coupling reactions",
"Organic reactions",
"Name reactions",
"Organometallic chemistry"
] |
1,962,960 | https://en.wikipedia.org/wiki/Two-photon%20physics | Two-photon physics, also called gamma–gamma physics, is a branch of particle physics that describes the interactions between two photons. Normally, beams of light pass through each other unperturbed. Inside an optical material, and if the intensity of the beams is high enough, the beams may affect each other through a variety of non-linear effects. In pure vacuum, some weak scattering of light by light exists as well. Also, above a certain threshold of the center-of-mass energy of the two-photon system, matter can be created.
Astronomy
Cosmological/intergalactic gamma rays
Photon–photon interactions limit the spectrum of observed gamma-ray photons at moderate cosmological distances to a photon energy below around 20 GeV, that is, to a wavelength greater than approximately 6 × 10⁻¹⁷ m. This limit reaches up to around 20 TeV at merely intergalactic distances.
An analogy would be light traveling through a fog: at near distances a light source is more clearly visible than at long distances due to the scattering of light by fog particles. Similarly, the further a gamma-ray travels through the universe, the more likely it is to be scattered by an interaction with a low energy photon from the extragalactic background light.
At those energies and distances, very-high-energy gamma-ray photons have a significant probability of a photon–photon interaction with a low-energy background photon from the extragalactic background light, resulting either in the creation of particle–antiparticle pairs via direct pair production or (less often) in photon–photon scattering events that lower the incident photon energies. This renders the universe effectively opaque to very-high-energy photons at intergalactic to cosmological distances.
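As a rough back-of-the-envelope check of these energy scales, the head-on pair-production condition E₁E₂ ≥ (mₑc²)² can be evaluated directly. The sketch below is only an illustration of that inequality (the function name is hypothetical); it reproduces the ~10 eV and ~10 meV partner-photon thresholds relevant for 20 GeV and 20 TeV gamma rays.

```python
# Head-on photon-photon pair production requires E1 * E2 >= (m_e c^2)^2.
M_E_C2_EV = 0.511e6  # electron rest energy in eV

def min_partner_photon_energy(gamma_energy_ev: float) -> float:
    """Minimum energy of a head-on background photon that can pair-produce
    with a gamma ray of the given energy (threshold of the inequality above)."""
    return M_E_C2_EV ** 2 / gamma_energy_ev

for e_gamma in (20e9, 20e12):  # 20 GeV and 20 TeV, in eV
    e_bg = min_partner_photon_energy(e_gamma)
    print(f"{e_gamma:.0e} eV gamma ray: partner photon threshold {e_bg:.2e} eV")
```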
Experiments
Two-photon physics can be studied with high-energy particle accelerators, where the accelerated particles are not the photons themselves but charged particles that will radiate photons. The most significant studies so far were performed at the Large Electron–Positron Collider (LEP) at CERN. If the transverse momentum transfer and thus the deflection is large, one or both electrons can be detected; this is called tagging. The other particles that are created in the interaction are tracked by large detectors to reconstruct the physics of the interaction.
Frequently, photon-photon interactions will be studied via ultraperipheral collisions (UPCs) of heavy ions, such as gold or lead. These are collisions in which the colliding nuclei do not touch each other; i.e., the impact parameter is larger than the sum of the radii of the nuclei. The strong interaction between the quarks composing the nuclei is thus greatly suppressed, making the weaker electromagnetic interaction much more visible. In UPCs, because the ions are heavily charged, it is possible to have two independent interactions between a single ion pair, such as production of two electron-positron pairs. UPCs are studied with the STARlight simulation code.
Light-by-light scattering, as predicted by quantum electrodynamics, can be studied using the strong electromagnetic fields of the hadrons collided at the LHC; it was first seen in 2016 by the ATLAS collaboration and was then confirmed by the CMS collaboration, including at high two-photon energies. The best previous constraint on the elastic photon–photon scattering cross section was set by PVLAS, which reported an upper limit far above the level predicted by the Standard Model. Observation of a cross section larger than that predicted by the Standard Model could signify new physics such as axions, the search for which is the primary goal of PVLAS and several similar experiments.
Processes
From quantum electrodynamics it can be found that photons cannot couple directly to each other, since they carry no charge and no two-fermion–two-boson vertex exists owing to the requirements of renormalizability (couplings of two photons to a single particle are further restricted by the Landau–Yang theorem). They can, however, interact through higher-order processes, or couple directly to each other in a vertex that additionally contains two W bosons:
a photon can, within the bounds of the uncertainty principle, fluctuate into a virtual charged fermion–antifermion pair, to either of which the other photon can couple. This fermion pair can be leptons or quarks. Thus, two-photon physics experiments can be used as ways to study the photon structure, or, somewhat metaphorically, what is "inside" the photon.
There are three interaction processes:
Direct or pointlike: The photon couples directly to a quark inside the target photon. If a lepton–antilepton pair is created, this process involves only quantum electrodynamics (QED), but if a quark–antiquark pair is created, it involves both QED and perturbative quantum chromodynamics (QCD).
The intrinsic quark content of the photon is described by the photon structure function, experimentally analyzed in deep-inelastic electron–photon scattering.
Single resolved: The quark pair of the target photon form a vector meson. The probing photon couples to a constituent of this meson.
Double resolved: Both target and probe photon have formed a vector meson. This results in an interaction between two hadrons.
For the latter two cases, the scale of the interaction is such that the strong coupling constant is large. This is called vector meson dominance (VMD) and has to be modelled in non-perturbative QCD.
See also
Channelling radiation has been considered as a method to generate polarized high energy photon beams for gamma–gamma colliders.
Matter creation
Pair production
Delbrück scattering
Breit–Wheeler process
References
External links
Lauber, J. A., 1997, A small tutorial in gamma–gamma physics (archive)
Two-photon physics at LEP
Two-photon physics at CESR Archive
Particle physics
Quantum electrodynamics
Experimental particle physics | Two-photon physics | [
"Physics"
] | 1,210 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
1,963,076 | https://en.wikipedia.org/wiki/Multiresolution%20analysis | A multiresolution analysis (MRA) or multiscale approximation (MSA) is the design method of most of the practically relevant discrete wavelet transforms (DWT) and the justification for the algorithm of the fast wavelet transform (FWT). It was introduced in this context in 1988/89 by Stephane Mallat and Yves Meyer and has predecessors in the microlocal analysis in the theory of differential equations (the ironing method) and the pyramid methods of image processing as introduced in 1981/83 by Peter J. Burt, Edward H. Adelson and James L. Crowley.
Definition
A multiresolution analysis of the Lebesgue space L²(ℝ) consists of a sequence of nested subspaces
\[ \{0\} \subset \cdots \subset V_1 \subset V_0 \subset V_{-1} \subset \cdots \subset V_{-n} \subset V_{-(n+1)} \subset \cdots \subset L^2(\mathbb{R}) \]
that satisfies certain self-similarity relations in time-space and scale-frequency, as well as completeness and regularity relations.
Self-similarity in time demands that each subspace V_k is invariant under shifts by integer multiples of 2^k. That is, for each f ∈ V_k and each m ∈ ℤ, the function g defined as g(x) = f(x − m·2^k) is also contained in V_k.
Self-similarity in scale demands that all subspaces are time-scaled versions of each other, with scaling (dilation) factor 2^(k−l). That is, for each f ∈ V_k there is a g ∈ V_l with g(x) = f(2^(k−l) x).
In the sequence of subspaces, for k > l the resolution 2^l of the l-th subspace is higher than the resolution 2^k of the k-th subspace.
Regularity demands that the model subspace V_0 be generated as the linear hull (algebraically or even topologically closed) of the integer shifts of one generating function φ or of a finite number of generating functions φ_1, …, φ_r. Those integer shifts should at least form a frame for the subspace V_0 ⊂ L²(ℝ), which imposes certain conditions on the decay at infinity. The generating functions are also known as scaling functions or father wavelets. In most cases one demands that those functions be piecewise continuous with compact support.
Completeness demands that those nested subspaces fill the whole space, i.e., their union should be dense in L²(ℝ), and that they are not too redundant, i.e., their intersection should only contain the zero element.
Algorithms
This section explores the core algorithms that form the foundation of multiresolution analysis, enabling its wide range of applications.
Subdivision Schemes
Subdivision schemes are iterative algorithms used to generate smooth curves and surfaces from an initial set of control points. These schemes progressively refine the control polygon or mesh to produce increasingly detailed representations.
Key characteristics of subdivision schemes include:
Masks: Define the rules for generating new points at each refinement step.
Flexibility: Enable local modifications at varying resolution levels, making them ideal for multiresolution editing.
A notable example is the Lane-Riesenfeld algorithm, which constructs smooth B-spline curves by iteratively averaging control points. Subdivision schemes are widely applied in geometric modeling, particularly for creating and editing shapes with varying levels of detail.
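A compact, self-contained illustration of such a scheme is Chaikin's corner-cutting rule, which is the quadratic B-spline case of the Lane–Riesenfeld construction. The sketch below assumes NumPy and is meant only to show the refinement masks (3/4, 1/4), not any particular library API.

```python
import numpy as np

def chaikin_step(points: np.ndarray) -> np.ndarray:
    """One corner-cutting refinement of an open control polygon.
    Each edge P_i P_{i+1} is replaced by 3/4 P_i + 1/4 P_{i+1} and 1/4 P_i + 3/4 P_{i+1}."""
    p, q = points[:-1], points[1:]
    left = 0.75 * p + 0.25 * q
    right = 0.25 * p + 0.75 * q
    return np.stack([left, right], axis=1).reshape(-1, points.shape[1])

curve = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])  # control polygon
for _ in range(4):               # each step roughly doubles the number of points
    curve = chaikin_step(curve)
print(curve.shape)               # refined polygon approximating a smooth quadratic B-spline curve
```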
Discrete Wavelet Transform (DWT)
The Discrete Wavelet Transform (DWT) is a pivotal algorithm in multiresolution analysis, offering a multiscale representation of signals through decomposition into different frequency sub-bands.
Key features of DWT:
Decomposition: The signal is passed through high-pass and low-pass filters, yielding detail coefficients (high frequencies) and approximation coefficients (low frequencies).
Reconstruction: The original signal is reconstructed using inverse filters.
Efficiency: With a computational complexity of O(N), the DWT is well-suited for large-scale data processing tasks like image compression and feature extraction (see the sketch after this list).
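A minimal sketch of one decomposition/reconstruction level, using the Haar filters and plain NumPy (real applications would typically rely on a dedicated wavelet library), looks as follows.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar DWT; the input length must be even."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation) coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail) coefficients
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: perfect reconstruction of the original signal."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
cA, cD = haar_dwt(x)
assert np.allclose(haar_idwt(cA, cD), x)        # lossless reconstruction
```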
Pyramidal Algorithms
Pyramidal algorithms leverage a hierarchical structure, akin to a pyramid, where each level represents the signal at a progressively coarser resolution.
Core steps include:
Decomposition: Downsampling and smoothing the signal at each level to create a hierarchy of representations.
Reconstruction: Upsampling and combining information from different levels to restore the original signal.
These algorithms are computationally efficient and extensively used in image processing, computer vision, and pattern recognition.
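As a minimal sketch of the decomposition step, the following builds a simple pyramid by 2×2 block averaging, a crude stand-in for the smoothing-plus-downsampling filters used in practice (NumPy assumed).

```python
import numpy as np

def build_pyramid(image: np.ndarray, levels: int):
    """Return a list of progressively coarser images; each level halves the
    resolution by averaging 2x2 blocks (combined smoothing and downsampling)."""
    pyramid = [image.astype(float)]
    for _ in range(levels):
        img = pyramid[-1]
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2  # crop to even size
        img = img[:h, :w]
        coarser = (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
        pyramid.append(coarser)
    return pyramid

pyramid = build_pyramid(np.random.rand(64, 64), levels=3)
print([level.shape for level in pyramid])   # [(64, 64), (32, 32), (16, 16), (8, 8)]
```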
Fast Decomposition and Reconstruction Algorithms
The Mallat algorithm is a fast, hierarchical method for wavelet decomposition and reconstruction. It processes data at multiple scales, enabling efficient computation of wavelet coefficients and their reconstruction.
Applications
Image Fusion in Remote Sensing
MRA is instrumental in merging images from sensors with varying resolutions and spectral bands. For instance, a high-resolution panchromatic image can be fused with a low-resolution multispectral image, producing a single output with enhanced spatial and spectral resolution. Techniques like the "à trous" wavelet algorithm and Laplacian pyramids preserve spatial connectivity and minimize artifacts.
Multiresolution Editing in Geometric Modeling
MRA enhances geometric modeling by enabling efficient representation and manipulation of complex shapes:
Hierarchical B-splines: Allow local and global modifications, simplifying both coarse adjustments and detailed refinements.
Flexible design: Provides a multiresolution framework for iterative editing, streamlining the creative process in computer-aided design (CAD).
Shape Compression Using Semi-Regular Remeshing
MRA contributes to efficient 3D model compression through semi-regular remeshing:
Simplification: Reduces unnecessary connectivity and parameterization data.
Parameterization: Maps the input mesh onto base triangular domains, resulting in a compact representation.
This approach facilitates the efficient storage, transmission, and rendering of 3D models in applications like gaming, virtual reality, and scientific visualization.
Emerging Fields
Machine Learning: MRA aids in multiscale feature extraction for tasks like image recognition and natural language processing.
Quantum Wavelet Transforms: Leveraging quantum computing principles, MRA is being explored for high-dimensional datasets.
Seismic Analysis: MRA enhances the interpretation of seismic data, identifying subsurface structures with high precision.
Practical Examples
Case Study: Image Compression
JPEG 2000, a widely used image compression standard, relies on MRA through the DWT. By retaining critical wavelet coefficients, it achieves high compression ratios with minimal loss of image quality.
Additional Case Studies
Climate Data Analysis: Detects patterns in multiscale climate datasets.
Financial Market Trends: Analyzes stock market data for trend detection and anomaly identification.
Medical Imaging: Enhances feature detection and clarity in MRI and CT scans.
Important conclusions
In the case of one continuous (or at least with bounded variation) compactly supported scaling function with orthogonal shifts, one may make a number of deductions. The proof of existence of this class of functions is due to Ingrid Daubechies.
Assuming the scaling function φ has compact support, then V_0 ⊂ V_{−1} implies that there is a finite sequence of coefficients a_k = 2⟨φ(x), φ(2x − k)⟩ for |k| ≤ N, and a_k = 0 for |k| > N, such that
\[ \varphi(x) = \sum_{k=-N}^{N} a_k\, \varphi(2x-k). \]
Defining another function, known as mother wavelet or just the wavelet,
\[ \psi(x) = \sum_{k=-N}^{N} (-1)^k a_{1-k}\, \varphi(2x-k), \]
one can show that the space W_0, which is defined as the (closed) linear hull of the mother wavelet's integer shifts, is the orthogonal complement to V_0 inside V_{−1}. Or put differently, V_{−1} is the orthogonal sum (denoted by ⊕) of W_0 and V_0. By self-similarity, there are scaled versions W_k of W_0, and by completeness one has
\[ L^2(\mathbb{R}) = \overline{\bigoplus_{k\in\mathbb{Z}} W_k}\,, \]
thus the set
\[ \{\psi_{k,n}(x) = \sqrt{2^{-k}}\,\psi(2^{-k}x - n) : k, n \in \mathbb{Z}\} \]
is a countable complete orthonormal wavelet basis in L²(ℝ).
See also
Multigrid method
Multiscale modeling
Scale space
Time–frequency analysis
Wavelet
References
Crowley, J. L. (1982). A Representation for Visual Information, Doctoral Thesis, Carnegie-Mellon University, 1982.
Time–frequency analysis
Wavelets | Multiresolution analysis | [
"Physics"
] | 1,485 | [
"Frequency-domain analysis",
"Spectrum (physical sciences)",
"Time–frequency analysis"
] |
1,963,405 | https://en.wikipedia.org/wiki/HTC | HTC Corporation (), or High Tech Computer Corporation (abbreviated and trading as HTC), is a Taiwanese consumer electronics corporation headquartered in Taoyuan District, Taoyuan, Taiwan. Founded in 1997, HTC began as an original design manufacturer and original equipment manufacturer that designed and manufactured laptop computers.
After initially making smartphones based mostly on Windows Mobile, HTC became one of 34 cofounding members of the Open Handset Alliance, a group of handset manufacturers and mobile network operators dedicated to the development of the Android operating system. The HTC Dream (marketed by T-Mobile in many countries as the T-Mobile G1) was the first phone on the market to run Android.
Although initially successful as a smartphone vendor, becoming the largest smartphone vendor in the U.S. in Q3 2011, competition from Samsung and Apple, among others, diluted its market share, which dropped to just 7.2% by April 2015, and the company has experienced consecutive net losses. In 2016, HTC began to diversify its business beyond smartphones and partnered with Valve to produce a virtual reality platform known as HTC Vive. After having collaborated with Google on its Google Pixel, HTC sold roughly half of its design and research talent, as well as non-exclusive rights to smartphone-related intellectual property, to Google in 2017 for US$1.1 billion.
History
Foundation
Cher Wang () and H. T. Cho () founded HTC in 1997. Initially a manufacturer of notebook computers, HTC began designing some of the world's first touch and wireless hand-held devices in 1998.
HTC began making Windows Mobile PDAs and smartphones under the Qtek brand in 2004. In 2006 the range was rebranded as HTC with the launch of the HTC TyTN.
In 2007, HTC acquired the mobile device company Dopod International.
In 2008, HTC unveiled the HTC Max 4G, the first GSM mobile phone to support WiMAX networks.
Android
HTC joined Google's Open Handset Alliance and then developed and released the first device powered by Android in 2008, the HTC Dream.
On October 15, 2009, HTC launched the brand tagline "quietly brilliant", and the "YOU" campaign, HTC's first global advertising campaign.
In November 2009 HTC released the HTC HD2, the first Windows Mobile device with a capacitive touchscreen. The same year, HTC Sense debuted as a user interface which continues to be used as of 2018.
In July 2010, HTC announced it would begin selling HTC-branded smartphones in China in a partnership with China Mobile. In October 2010, the HTC HD7 was released as one of the launch models of Microsoft's revitalised Windows Phone. In 2010, HTC sold over 24.6 million handsets, up 111% over 2009.
At the Mobile World Congress in February 2011, the GSMA named HTC the "Device Manufacturer of the Year" in its Global Mobile Awards. In April 2011, HTC surpassed Nokia as the third-largest smartphone manufacturer by market share, behind Apple and Samsung.
On 6 July 2011, it was announced that HTC would buy VIA Technologies' stake in S3 Graphics. On 6 August 2011, HTC acquired Dashwire for $18.5M. In August 2011, HTC confirmed a plan for a strategic partnership with Beats Electronics involving acquiring 51 percent of the company.
The 2011 Best Global Brands rankings released by Interbrand, listed HTC at #98 and valued it at $3.6 billion. Based on researcher Canalys, in Q3 2011 HTC Corporation became the largest smartphone vendor in the U.S. with 24 percent market share, ahead of Samsung's 21 percent, Apple's 20 percent and BlackBerry's 9 percent. HTC Corporation made different models for each operator.
During early 2012, HTC lost much of this U.S. market share due to increased competition from Apple and Samsung. According to analyst firm ComScore, HTC only accounted for 9.3% of the United States smartphone market as of February 2013. In light of the company's decrease in prominence, Chief Executive Peter Chou had informed executives that he would step down if the company's newest flagship phone, the 2013 HTC One (M7), had failed to generate impressive sales results. HTC's first quarter results for 2013 showed its year-over-year profit drop by 98.1%, making it the smallest-ever profit for the company—the delay of the launch of the HTC One was cited as one of the factors. In June 2012, HTC moved its headquarters from Taoyuan City (now Taoyuan District) to Xindian District, New Taipei City. On 14 January 2013, HTC launched its smartphones in Burma.
Litigation
In March 2010, Apple Inc. filed a complaint with the US International Trade Commission claiming infringement of 20 of its patents covering aspects of the iPhone user interface and hardware. HTC disagreed with Apple's actions and reiterated its commitment to creating innovative smartphones. HTC also filed a complaint against Apple for infringing on five of its patents and sought to ban the import of Apple products into the US from manufacturing facilities in Asia. Apple expanded its original complaint by adding two more patents.
On 10 November 2012, Apple and HTC reached a 10-year license-agreement covering current and future patents held by the two companies. The terms of the agreement remain confidential.
Previously, Apple had ignored HTC's long-held rights over the trade name Touch by calling its new iPod range the same.
In February 2013, HTC settled with the U.S. Federal Trade Commission concerning poor security on more than 18 million smartphones and tablets it had shipped to customers and agreed to security patches.
Post-settlement
The HTC One (M7) was released in mid-2013 and subsequently won various industry awards in the best smartphone and best design categories, but global sales of the HTC One were lower than those of Samsung's Galaxy S4 flagship handset, and HTC recorded its first ever quarterly loss in early October 2013: a deficit of just under NT$3 billion (about US$100m, £62m). Marketing problems were identified by HTC as the primary reason for its comparative performance, a factor that had previously been cited by the company.
During 2013, Microsoft was in negotiations to purchase HTC. This was revealed in 2018 by Risto Siilasmaa, chairman of Nokia, in an interview with the Helsingin Sanomat. Microsoft would eventually purchase Nokia's mobile phone business that year.
In August 2013, HTC debuted a new "Here's To Change" global marketing campaign featuring actor Robert Downey Jr., who signed a two-year contract to be HTC's new "Instigator of Change". On 27 September 2013, HTC announced that it would sell back its stake in Beats Electronics.
Following the release of the HTC One, two variants were released to form a trio for the 2013 HTC One lineup. A smaller variant named the HTC One Mini was released in August 2013, and a larger variant named the HTC One Max was released in October 2013. Similar in design and features to the HTC One, the upgraded aspects of the One Max include a larger display, a fingerprint sensor and a removable back cover for expandable memory. The product was released into the European and Asian retail environment in October 2013, followed by a US launch in early November 2013.
In March 2014, HTC released the HTC One (M8), the next version of the HTC One flagship, at press conferences in London and New York City. In a change from previous launches, the HTC One was made available for purchase on the company website and North American mobile carrier websites on the same day a few hours after the launch.
In April 2014, HTC reported sales climbing 12.7 percent to NT$22.1 billion, the company's fastest growth since October 2011. In September 2014, Google selected HTC to make its Nexus 9 tablet. In August 2014 HTC announced a Windows Phone-powered variant of the One (M8), their first using the operating system since 2012. HTC ended its long relationship with Microsoft afterwards due to Nokia's dominance in Windows Phone devices, and started focusing solely on Android.
Vive and Pixel
On 1 March 2015, HTC unveiled Vive, a virtual reality head-mounted display in collaboration with Valve. In June and October 2015, HTC reported net losses; the company has faced increased competition from other smartphone makers, including Apple, Samsung, and others, which had resulted in a decline in its smartphone sales, as well a major loss of market share. Its smartphone market share had risen back to 7.2 percent in April 2015 due to its strong sales of recent devices, but HTC's stock price had fallen by 90 percent since 2011.
In November 2016, HTC reported that it had sold at least 140,000 Vive units, and that each unit was being sold at a profit. In January 2017, HTC unveiled its new U series smartphone line, the U Play and U Ultra; the company described the U series as a "new direction" for its phones, emphasizing an integrated virtual assistant developed by the company. In February 2017, HTC reported that in the fourth quarter of 2016, its operating losses had decreased by 13% year-over-year, citing "robust sales performance" and sequential revenue increases throughout the year.
On 21 September 2017, Google announced that it would acquire roughly half of the 4,000 employees who worked in HTC's design and research staff, and non-exclusive licences to smartphone-related intellectual property held by HTC, for US$1.1 billion. The employees included the team involved with Google's Google Pixel, which was manufactured by HTC. Google stated that the purchase was part of its efforts to bolster its first-party hardware business. The transaction was completed on 30 January 2018; while HTC will continue to produce its own smartphones, the company has stated that it planned to increase its focus on Internet of Things and virtual reality going forward.
After 2017
On 26 March 2018, HTC reported a quarterly net loss of US$337 million in the fourth quarter of 2017, citing "market competition, product mix, pricing, and recognized inventory write-downs". HTC stated that it would use the revenue to further its investments in "emerging technologies". The company had also cited its increasing VR investments, including its upcoming Vive Pro model, and Vive Focus—a standalone "all-in-one" VR headset unveiled in November 2017.
In July 2018, HTC entered into a partnership with games and apps developer and publisher Animoca Brands. This includes product development and joint collaboration in areas such as games, blockchain, artificial intelligence, machine learning, augmented reality and virtual reality. Animoca's games will be pre-installed on HTC devices in the future.
On 5 February 2019, HTC released its first "Cryptophone", focused on providing universal finance through Bitcoin and creating a portal towards realizing a truly decentralized web.
On 11 May 2019, HTC announced that its Cryptophone will be the first smartphone to support a bitcoin full node.
On 17 September 2019, HTC appointed Yves Maitre, former executive vice president of consumer equipment and partnerships of Orange, as CEO where Cher Wang will continue her role as chairwoman.
On 3 September 2020, HTC CEO Yves Maitre stepped down from the position citing personal reasons. Co-founder Cher Wang then stepped in and is now the current CEO of HTC.
Corporate affairs
HTC's chairwoman and acting CEO is Cher Wang who is the daughter of the late Wang Yung-ching, founder of the plastics and petrochemicals conglomerate Formosa Plastics Group. Peter Chou serves as head of the HTC Future Development Lab, and HT Cho as Director of the Board and Chairman of HTC Foundation. HTC's CFO is Hui-Ming Cheng. In addition to being chair of HTC, Cher Wang is also acting chair of VIA Technologies. HTC's main divisions, including the IA (Information Appliance) engineering division and the WM (Wireless Mobile) engineering division, are ISO 9001/ISO 14001-qualified facilities.
HTC's sales revenue totalled $2.2 billion for 2005, a 102% increase from the prior year. In 2005 it was listed as the fastest-growing tech company in BusinessWeek's Info Tech 100.
In 2010 HTC worked with Google to build mobile phones running Google's Android mobile OS such as the Nexus One.
In April 2010, HTC grew rapidly after it was chosen by Microsoft as a hardware platform development partner for the now-defunct Windows Mobile operating system (based on Windows CE).
HTC invested strongly in research and development, which accounts for a quarter of its employees. The company's North American headquarters are located in Bellevue, Washington. HTC runs a software design office in Seattle (near its North American headquarters) where it designs its own interface for its phones. In 2011, HTC also opened a research and development office in Durham, North Carolina, a location the company chose over Seattle and Atlanta, to focus on multiple areas of wireless technology.
On 17 February 2010, Fast Company ranked HTC as the 31st most innovative company in the world. On 27 May 2011, in response to customer feedback, HTC announced that they will no longer lock the bootloaders on their Android-based phones.
Sponsorships
HTC sponsored the HTC-Highroad professional cycling team from 2009 to 2011.
In 2012, HTC became the official smartphone sponsor of the UEFA Champions League and UEFA Europa League. HTC also became the shirt sponsors for the Indian Super League franchise NorthEast United for the 2014, 2015 and 2016 season.
Ahead of the 2015 season, Indian Premier League franchise Kings XI Punjab signed a sponsorship deal with HTC. According to the agreement, HTC would be the team's official principal sponsor, and the company's logo would occupy the right chest position on the Kings XI Punjab playing jersey.
HTC sponsors professional eSports teams FaZe Clan, Team SoloMid, Cloud9, Team Liquid, and J Team, (formerly known as Taipei Assassins). HTC sponsored a Super Smash Bros. Melee tournament, HTC Throwdown, which was held on 19 September 2015, in San Francisco. At the end of 2015, the company also sponsored the creation of that year's SSBMRank, the annual rankings of the best Melee players in the world.
See also
List of companies of Taiwan
Comparison of HTC devices
HTC Sense
HTC Vive
IHTC
TouchFLO
TouchFLO 3D
References
External links
Computer companies of Taiwan
Computer hardware companies
Taiwanese companies established in 1997
2002 initial public offerings
Companies listed on the Taiwan Stock Exchange
Electronics companies of Taiwan
Electronics companies established in 1997
Manufacturing companies of Taiwan
Mobile phone manufacturers
Multinational companies headquartered in Taiwan
Virtual reality companies | HTC | [
"Technology"
] | 3,121 | [
"Computer hardware companies",
"Computers"
] |
1,963,514 | https://en.wikipedia.org/wiki/ISO%2010993 | The ISO 10993 set comprises a series of standards for evaluating the biocompatibility of medical devices in order to manage biological risk. These documents were preceded by the Tripartite agreement and are part of the international harmonisation of the safe-use evaluation of medical devices.
For the purpose of the ISO 10993 family of standards, biocompatibility is defined as the "ability of a medical device or material to perform with an appropriate host response in a specific application".
ISO 10993-1:2009 & FDA endpoints for consideration
The following table provides a framework for developing a biocompatibility evaluation. Particular medical devices may require evaluation of different biological endpoints, including either more or fewer endpoints than indicated. If it is unclear which category a device falls into, device-specific guidance documents can be consulted, or the appropriate US Food and Drug Administration (FDA) review division can be contacted for more information. The table "Endpoints to be addressed in a biological risk assessment" was revised in the 2018 edition of ISO 10993-1. The selection of endpoints for the biocompatibility evaluation is determined by the nature of body contact (e.g. an implanted device) and the contact duration (e.g. long-term contact of more than 30 days).
List of the standards in the 10993 series
Part 1: Evaluation and testing within a risk management process
Part 2: Animal welfare requirements
Part 3: Tests for genotoxicity, carcinogenicity and reproductive toxicity
Part 4: Selection of tests for interactions with blood
Part 5: Tests for in vitro cytotoxicity.
Part 6: Tests for local effects after implantation
Part 7: Ethylene oxide sterilization residuals
Part 8: Selection of reference materials (withdrawn)
Part 9: Framework for identification and quantification of potential degradation products
Part 10: Tests for skin sensitization
Part 11: Tests for systemic toxicity
Part 12: Sample preparation and reference materials (available in English only)
Part 13: Identification and quantification of degradation products from polymeric medical devices
Part 14: Identification and quantification of degradation products from ceramics
Part 15: Identification and quantification of degradation products from metals and alloys
Part 16: Toxicokinetic study design for degradation products and leachables
Part 17: Establishment of allowable limits for leachable substances
Part 18: Chemical characterization of medical device materials within a risk management process
Part 19: Physico-chemical, morphological and topographical characterization of materials
Part 20: Principles and methods for immunotoxicology testing of medical devices
Part 22: Guidance on nanomaterials
Part 23: Tests for irritation
Part 33: Guidance on tests to evaluate genotoxicity
See also
List of ISO standards
ISO Standards catalogue: 11.100.20 - Biological evaluation of medical devices
References
10993
Regulation of medical devices
Medical devices | ISO 10993 | [
"Biology"
] | 583 | [
"Medical devices",
"Medical technology"
] |
1,964,288 | https://en.wikipedia.org/wiki/Jahn%E2%80%93Teller%20effect | The Jahn–Teller effect (JT effect or JTE) is an important mechanism of spontaneous symmetry breaking in molecular and solid-state systems which has far-reaching consequences in different fields, and is responsible for a variety of phenomena in spectroscopy, stereochemistry, crystal chemistry, molecular and solid-state physics, and materials science. The effect is named for Hermann Arthur Jahn and Edward Teller, who first reported studies about it in 1937.
Simplified overview
The Jahn–Teller effect, sometimes also referred to as Jahn–Teller distortion, describes the geometrical distortion of molecules and ions that results from certain electron configurations. The Jahn–Teller theorem essentially states that any non-linear molecule with a spatially degenerate electronic ground state will undergo a geometrical distortion that removes that degeneracy, because the distortion lowers the overall energy of the species. For a description of another type of geometrical distortion that occurs in crystals with substitutional impurities see article off-center ions.
Transition metal chemistry
The Jahn–Teller effect is most often encountered in octahedral complexes of the transition metals. The phenomenon is very common in six-coordinate copper(II) complexes. The d9 electronic configuration of this ion gives three electrons in the two degenerate eg orbitals, leading to a doubly degenerate electronic ground state. Such complexes distort along one of the molecular fourfold axes (always labelled the z axis), which has the effect of removing the orbital and electronic degeneracies and lowering the overall energy. The distortion normally takes the form of elongating the bonds to the ligands lying along the z axis, but occasionally occurs as a shortening of these bonds instead (the Jahn–Teller theorem does not predict the direction of the distortion, only the presence of an unstable geometry). When such an elongation occurs, the effect is to lower the electrostatic repulsion between the electron-pair on the Lewis basic ligand and any electrons in orbitals with a z component, thus lowering the energy of the complex. The inversion centre is preserved after the distortion.
In octahedral complexes, the Jahn–Teller effect is most pronounced when an odd number of electrons occupy the eg orbitals. This situation arises in complexes with d9, low-spin d7 or high-spin d4 configurations, all of which have doubly degenerate ground states. In such compounds the eg orbitals involved in the degeneracy point directly at the ligands, so distortion can result in a large energetic stabilisation. Strictly speaking, the effect also occurs when there is a degeneracy due to the electrons in the t2g orbitals (i.e. configurations such as d1 or d2, both of which are triply degenerate). In such cases, however, the effect is much less noticeable, because there is a much smaller lowering of repulsion on taking ligands further away from the t2g orbitals, which do not point directly at the ligands (see the table below). The same is true in tetrahedral complexes (e.g. manganate): distortion is very subtle because there is less stabilisation to be gained, since the ligands are not pointing directly at the orbitals.
The expected effects for octahedral coordination are given in the following table:
w: weak Jahn–Teller effect (t2g orbitals unevenly occupied)
s: strong Jahn–Teller effect expected (eg orbitals unevenly occupied)
blank: no Jahn–Teller effect expected.
The Jahn–Teller effect is manifested in the UV-VIS absorbance spectra of some compounds, where it often causes splitting of bands. It is readily apparent in the structures of many copper(II) complexes. Additional, detailed information about the anisotropy of such complexes and the nature of the ligand binding can be however obtained from the fine structure of the low-temperature electron spin resonance spectra.
Related effects
The underlying cause of the Jahn–Teller effect is the presence of molecular orbitals that are both degenerate and open shell (i.e., incompletely occupied). This situation is not unique to coordination complexes and can be encountered in other areas of chemistry. In organic chemistry the phenomenon of antiaromaticity has the same cause and also often sees molecules distorting; as in the case of cyclobutadiene and cyclooctatetraene (COT).
Advanced treatment
The Jahn–Teller theorem
The JT theorem can be stated in different forms, two of which are given here:
"A nonlinear polyatomic system in a spatially degenerate electronic state distorts spontaneously in such a way that the degeneracy is lifted and a new equilibrium structure of lower symmetry is attained."
Alternatively and considerably shorter:
"... stability and degeneracy are not possible simultaneously unless the molecule is a linear one ...".
Spin-degeneracy was an exception in the original treatment and was later treated separately.
The formal mathematical proof of the Jahn–Teller theorem rests heavily on symmetry arguments, more specifically the theory of molecular point groups. The argument of Jahn and Teller assumes no details about the electronic structure of the system. Jahn and Teller made no statement about the strength of the effect, which may be so small that it is immeasurable. Indeed, for electrons in non-bonding or weakly bonding molecular orbitals, the effect is expected to be weak. However, in many situations the JT effect is important.
Historic developments
Interest in the JTE increased after its first experimental verification. Various model systems were developed probing the degree of degeneracy and the type of symmetry. These were solved partly analytically and partly numerically to obtain the shape of the pertinent potential energy surfaces (PES) and the energy levels for the nuclear motion on the JT-split PES. These energy levels are not vibrational energy levels in the traditional sense because of the intricate coupling to the electronic motion that occurs, and are better termed vibronic energy levels. The new field of ‘vibronic coupling’ or ‘vibronic coupling theory’ was born.
A further breakthrough occurred upon the advent of modern ("ab initio") electronic structure calculations whereby the relevant parameters characterising JT systems can be reliably determined from first principles. Thus one could go beyond studies of model systems that explore the effect of parameter variations on the PES and vibronic energy levels; one could also go on beyond fitting these parameters to experimental data without clear knowledge about the significance of the fit. Instead, well-founded theoretical investigations became possible which greatly improved the insight into the phenomena at hand and into the details of the underlying mechanisms.
While recognizing the JTE distortion as a concrete example of the general spontaneous symmetry breaking mechanism, the exact degeneracy of the involved electronic state was identified as a non-essential ingredient for this symmetry breaking in polyatomic systems. Even systems that in the undistorted symmetric configuration present electronic states which are near in energy but not precisely degenerate, can show a similar tendency to distort. The distortions of these systems can be treated within the related theory of the pseudo Jahn–Teller effect (in the literature often referred to as "second-order JTE"). This mechanism is associated to the vibronic couplings between adiabatic PES separated by nonzero energy gaps across the configuration space: its inclusion extends the applicability of JT-related models to symmetry breaking in a far broader range of molecular and solid-state systems.
Chronology:
1934: Lev Landau, in discussion with Edward Teller, suggested that electronic states of certain degenerate nuclear configurations are unstable with respect to nuclear displacements that lower the symmetry (see 'An historical note' by Englman).
1937: Hermann Arthur Jahn and Edward Teller formulated what is now known as the Jahn–Teller theorem.
1939: John Hasbrouck Van Vleck extended the Jahn–Teller theorem to ions in crystals. As attempts to observe the Jahn–Teller effect experimentally had been unconvincing, he noted that 'it is of great merit of the Jahn–Teller effect that it disappears when not needed'.
1950–2: Brebis Bleaney and co-workers first obtained unambiguous experimental evidence of the Jahn–Teller effect by carrying out electron paramagnetic resonance studies on paramagnetic ions in crystals.
1957–8: Öpik and Pryce showed that spin–orbit coupling can stabilise symmetric configurations against distortions from a weak JTE. Moffitt et al. and Longuet-Higgins et al. argued that the states of JT systems have inextricably mixed electronic and vibrational components, which they called vibronic states, with energies very different to the electronic states.
1962–4: Isaac Bersuker and Mary O’Brien investigated tunnelling in the lowest-energy vibronic states, the so-called tunnelling splitting, and the dynamic nature of the JT effect. The article by O'Brien shows the influence of the geometric phase factor (later called Berry phase) on the ordering of the vibronic states.
1965: Frank Ham realised the effect of coherent dynamics on the measurement of observables. This influence can be described in terms of reduction factors multiplying orbital operators and specific formulae were proposed for the magnetism of JT ions.
1984: Generalization of the concept of geometric phase by Berry (or Berry phase as it is also known) provided a general background to aid understanding of the rotation-dependent phase associated with the electronic and vibrational wavefunction of JT systems, as discovered by Longuet-Higgins, and further discussed by Herzberg and Longuet-Higgins, Longuet-Higgins, O'Brien, and Mead and Truhlar.
1990s: Advances in computing power meant that ab initio methods including those based on the Density Functional Theory started to be used to solve JT problems.
Relation to important discoveries
In 1985, Harry Kroto and co-workers discovered a class of closed-cage carbon molecules known as fullerenes. Buckminsterfullerene (C60), which has icosahedral symmetry, becomes JT-active upon addition or removal of one electron. The ordering of energy levels may not be the same as that predicted by Hund's rule.
Discovery in 1986 by Bednorz and Müller of superconductivity in cuprates with a transition temperature of 35 K, which was higher than the upper limit allowed according to standard BCS theory, was motivated by earlier work by Müller on JT ions in crystals.
Colossal magnetoresistance, a property of manganese-based perovskites and other materials, has been explained in terms of competition between dynamic Jahn–Teller and double-exchange effects.
Peierls theorem, which states that a one-dimensional equally spaced chain of ions with one electron per ion is unstable, has common roots with the JT effect.
Theory
Symmetry of JT systems and categorisation using group theory
A given JT problem will have a particular point group symmetry, such as Td symmetry for magnetic impurity ions in semiconductors or Ih symmetry for the fullerene C60. JT problems are conventionally classified using labels for the irreducible representations (irreps) that apply to the symmetry of the electronic and vibrational states. For example, E ⊗ e would refer to an electronic doublet state transforming as E coupled to a vibrational doublet state transforming as e.
In general, a vibrational mode transforming as Λ will couple to an electronic state transforming as Γ if the symmetric part of the Kronecker product [Γ ⊗ Γ]S contains Λ, unless Γ is a double group representation when the antisymmetric part {Γ ⊗ Γ}A is considered instead. Modes which do couple are said to be JT-active.
As an example, consider a doublet electronic state E in cubic symmetry. The symmetric part of E ⊗ E is A1 + E. Therefore, the state E will couple to vibrational modes transforming as a1 and e. However, the a1 modes will result in the same energy shift to all states and therefore do not contribute to any JT splitting. They can therefore be neglected. The result is an E ⊗ e JT effect. This JT effect is experienced by triangular molecules X3, tetrahedral molecules ML4, and octahedral molecules ML6 when their electronic state has E symmetry.
Components of a given vibrational mode are also labelled according to their transformation properties. For example, the two components of an e mode are usually labelled Q_θ and Q_ε, which in octahedral symmetry transform as 3z²−r² and x²−y², respectively.
The JT Hamiltonian
Eigenvalues of the Hamiltonian of a polyatomic system define PESs as functions of normal modes of the system (i.e. linear combinations of the nuclear displacements with specific symmetry properties). At the reference point of high symmetry, where the symmetry-induced degeneracy occurs, several of the eigenvalues coincide. By a detailed and laborious analysis, Jahn and Teller showed that – excepting linear molecules – there are always first-order terms in an expansion of the matrix elements of the Hamiltonian in terms of symmetry-lowering (in the language of group theory: non-totally symmetric) normal modes. These linear terms represent forces that distort the system along these coordinates and lift the degeneracy. The point of degeneracy can thus not be stationary, and the system distorts toward a stationary point of lower symmetry where stability can be attained.
Proof of the JT theorem follows from the theory of molecular symmetry (point group theory). A less rigorous but more intuitive explanation is given below.
To arrive at a quantitative description of the JT effect, the forces appearing between the component wave functions are described by expanding the Hamiltonian in a power series in the normal-mode displacements Qk. Owing to the very nature of the degeneracy, the Hamiltonian takes the form of a matrix referring to the degenerate wave-function components. A matrix element between states Ψn and Ψm generally reads as:
Hnm(Q) = Hnm(0) + Σk (∂Hnm/∂Qk) Qk + ½ Σk,l (∂²Hnm/∂Qk∂Ql) Qk Ql + … ,
with all derivatives evaluated at the high-symmetry reference point.
The expansion can be truncated after terms linear in the Qk, or extended to include terms quadratic (or higher) in the Qk.
The adiabatic potential energy surfaces (APES) are then obtained as the eigenvalues of this matrix. In the original paper it is proven that there are always linear terms in the expansion. It follows that the degeneracy of the wave function cannot correspond to a stable structure.
Potential energy surfaces
Mexican-hat potential
In mathematical terms, the APESs characterising the JT distortion arise as the eigenvalues of the potential energy matrix. Generally, the APESs take the characteristic appearance of a double cone, circular or elliptic, where the point of contact, i.e. degeneracy, denotes the high-symmetry configuration for which the JT theorem applies. For the above case of the linear E ⊗ e JT effect the situation is illustrated by the APES
V±(ρ) = ½ M ω² ρ² ± F ρ,   with ρ = √(Qθ² + Qε²),
displayed in the figure, with part cut away to reveal its shape, which is known as a Mexican Hat potential. Here, ω is the frequency of the vibrational e mode, M is its mass and F is a measure of the strength of the JT coupling.
The conical shape near the degeneracy at the origin makes it immediately clear that this point cannot be stationary, that is, the system is unstable against asymmetric distortions, which leads to a symmetry lowering. In this particular case there are infinitely many isoenergetic JT distortions. The (Qθ, Qε) values giving these distortions are arranged in a circle, as shown by the red curve in the figure. Quadratic coupling or cubic elastic terms lead to a warping along this "minimum energy path", replacing this infinite manifold by three equivalent potential minima and three equivalent saddle points. In other JT systems, linear coupling results in discrete minima.
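A minimal numerical illustration of this surface, using the V±(ρ) form quoted above, is sketched below; the parameter values M, ω and F are arbitrary placeholders chosen only to make the numbers concrete, not values for any real system.

```python
import numpy as np

# Linear E x e Jahn-Teller model: the two APES sheets as functions of the
# radial distortion coordinate rho = sqrt(Q_theta^2 + Q_eps^2).
M, omega, F = 1.0, 1.0, 0.5          # arbitrary illustrative units

rho = np.linspace(0.0, 2.0, 2001)
V_lower = 0.5 * M * omega**2 * rho**2 - F * rho    # lower sheet (brim of the Mexican hat)
V_upper = 0.5 * M * omega**2 * rho**2 + F * rho    # upper sheet (cone touching at rho = 0)

rho0 = F / (M * omega**2)            # radius of the circle of minima
E_JT = F**2 / (2 * M * omega**2)     # JT stabilisation energy (depth of the trough)

print(f"numeric minimum at rho = {rho[np.argmin(V_lower)]:.3f}, analytic rho0 = {rho0:.3f}")
print(f"numeric depth = {-V_lower.min():.3f}, analytic E_JT = {E_JT:.3f}")
```

With these placeholder values the trough lies at ρ0 = 0.5 and the stabilisation energy is E_JT = 0.125, in the same arbitrary units.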
Conical intersections
The high symmetry of the double-cone topology of the linear E ⊗ e JT system directly reflects the high underlying symmetry. It is one of the earliest (if not the earliest) examples in the literature of a conical intersection of potential energy surfaces. Conical intersections have received wide attention in the literature starting in the 1990s and are now considered paradigms of nonadiabatic excited-state dynamics, with far-reaching consequences in molecular spectroscopy, photochemistry and photophysics. Some of these will be commented upon further below. In general, conical intersections are far less symmetric than depicted in the figure. They can be tilted and elliptical in shape etc., and also peaked and sloped intersections have been distinguished in the literature. Furthermore, for more than two degrees of freedom, they are not point-like structures but instead they are seams and complicated, curved hypersurfaces, also known as intersection space. The coordinate sub-space displayed in the figure is also known as a branching plane.
Implications for dynamics
The characteristic shape of the JT-split APES has specific consequences for the nuclear dynamics, here considered in the fully quantum sense. For sufficiently strong JT coupling, the minimum points are sufficiently far (at least by a few vibrational energy quanta) below the JT intersection. Two different energy regimes are then to be distinguished, those of low and high energy.
In the low-energy regime the nuclear motion is confined to regions near the "minimum energy points". The distorted configurations sampled there impart their geometrical parameters on, for example, the rotational fine structure in a spectrum. Owing to the existence of barriers between the various minima in the APES, like those appearing due to the warping of the minimum-energy path described above, motion in the low-energy regime is usually classified as a static JTE, a dynamic JTE or incoherent hopping. Each regime shows particular fingerprints in experimental measurements.
Static JTE: In this case, the system is trapped in one of the lowest-energy minima of the APES (usually determined by small perturbations created by the environment of the JT system) and does not have enough energy to cross the barrier towards another minimum during the typical time associated to the measurement. Quantum dynamical effects like tunnelling are negligible, and effectively the molecule or solid displays the low symmetry associated with a single minimum.
Dynamic JTE: In this case, the barriers are sufficiently small compared to, for example, the zero-point energy associated with the minima, so that vibronic wavefunctions (and all observables) display the symmetry of the reference (undistorted) system. In the linear E ⊗ e problem, the motion associated with this regime would be around the circular path in the figure. When the barrier is sufficiently small, this is called (free) pseudorotation (not to be confused with the rotation of a rigid body in space; see the difference between real and pseudo rotations illustrated for the fullerene molecule C60 in the external links). When the barrier between the minima and the saddle points on the warped path exceeds a vibrational quantum, pseudorotational motion is slowed down and occurs through tunnelling. This is called hindered pseudorotation. In both free and hindered pseudorotation, the important phenomenon of the geometric (Berry) phase alters the ordering of the levels.
Incoherent hopping: Another way in which the system can overcome the barrier is through thermal energy. In this case, while the system moves among the minima, the state is not a quantum coherent one but a statistical mixture. This difference can be observed experimentally.
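For free pseudorotation in the linear E ⊗ e case, the effect of the geometric phase mentioned above can be summarised compactly. In the strong-coupling limit (a standard textbook result, quoted here only for orientation; ρ0 denotes the radius of the circle of minima and M the mass of the e mode introduced above), the pseudorotational levels are approximately

\[ E_j \approx \frac{\hbar^2 j^2}{2 M \rho_0^2}, \qquad j = \pm\tfrac{1}{2}, \pm\tfrac{3}{2}, \pm\tfrac{5}{2}, \ldots \]

The half-odd-integer values of the pseudorotation quantum number j are enforced by the Berry phase of π acquired on encircling the conical intersection; if this phase were ignored, j would take integer values and the ordering and degeneracy pattern of the levels would come out differently.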
The dynamics is quite different for high energies, such as occur following an optical transition from a non-degenerate initial state with a high-symmetry (JT undistorted) equilibrium geometry into a JT distorted state. This leads the system to the region near the conical intersection of the JT-split APES in the centre of the figure. Here the nonadiabatic couplings become very large and the behaviour of the system cannot be described within the familiar Born–Oppenheimer (BO) separation between the electronic and nuclear motions. The nuclear motion ceases to be confined to a single, well-defined APES, and transitions between the adiabatic surfaces occur, yielding effects like Slonczewski resonances. In molecules this usually happens on a femtosecond timescale, which amounts to ultrafast (femtosecond) internal conversion processes, accompanied by broad spectral bands even under isolated-molecule conditions and by highly complex spectral features. Examples of these phenomena are covered below.
As already stated above, the distinction of low and high energy regimes is valid only for sufficiently strong JT couplings, that is, when several or many vibrational energy quanta fit into the energy window between the conical intersection and the minimum of the lower JT-split APES. For the many cases of small to intermediate JT couplings this energy window and the corresponding adiabatic low-energy regime does not exist. Rather, the levels on both JT-split APES are intricately mixed for all energies and the nuclear motion always proceeds on both JT split APES simultaneously.
Ham factors
In 1965, Frank Ham proposed that the dynamic JTE could reduce the expectation values of observables associated with the orbital wavefunctions, owing to the superposition of several electronic states in the total vibronic wavefunction. This effect leads, for example, to a partial quenching of the spin–orbit interaction and allowed the results of previous electron paramagnetic resonance (EPR) experiments to be explained.
In general, the result of an orbital operator acting on vibronic states can be replaced by an effective orbital operator acting on purely electronic states. To first order, the effective orbital operator equals the actual orbital operator multiplied by a constant, whose value is less than one, known as a first-order (Ham) reduction factor. For example, within a triplet T1 electronic state, the spin–orbit coupling operator λL·S can be replaced by γλL·S, where the factor γ is a function of the strength of the JT coupling which varies from 1 at zero coupling to 0 at very strong coupling. Furthermore, when second-order perturbation corrections are included, additional terms are introduced involving additional numerical factors, known as second-order (Ham) reduction factors. These factors are zero when there is no JT coupling but can dominate over first-order terms in strong coupling, when the first-order effects have been significantly reduced.
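The way a first-order reduction factor quenches an observable can be made concrete with a toy calculation. In the sketch below the exponential dependence of γ on the coupling strength is an assumed, purely illustrative interpolation between the two limits described above (γ = 1 at zero coupling, γ → 0 at very strong coupling); it is not Ham's exact expression, and the bare coupling constant is a made-up number.

```python
import math

def gamma(S, alpha=1.5):
    """Assumed illustrative form of a first-order (Ham) reduction factor as a
    function of a dimensionless JT coupling strength S; NOT an exact result."""
    return math.exp(-alpha * S)

lam = 100.0   # bare spin-orbit coupling constant, arbitrary units
for S in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(f"coupling strength {S:>3}: effective constant gamma*lambda = {gamma(S) * lam:7.2f}")
# output runs from 100 (no quenching) down towards 0 (strong quenching)
```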
Reduction factors are particularly useful for describing experimental results, such as EPR and optical spectra, of paramagnetic impurities in semiconducting, dielectric, diamagnetic and ferrimagnetic hosts.
Modern developments
For a long time, applications of JT theory consisted mainly in parameter studies (model studies) where the APES and dynamical properties of JT systems were investigated as functions of the system parameters, such as coupling constants. Fits of these parameters to experimental data were often doubtful and inconclusive. The situation changed in the 1980s when efficient ab initio methods were developed and computational resources became powerful enough to allow for a reliable determination of these parameters from first principles. Apart from wave-function-based
techniques (which are sometimes considered genuinely ab initio in the literature), the advent of density functional theory (DFT) opened up new avenues to treat larger systems, including solids. This allowed details of JT systems to be characterised and experimental findings to be reliably interpreted. It lies at the heart of most developments addressed below.
Two different strategies are conceivable and have been used in the literature. One can
(1) take the applicability of a certain coupling scheme for granted and limit oneself to determining the parameters of the model, for example from the energy gain achieved through the JT distortion, also termed the JT stabilisation energy;
(2) map parts of the APES in whole or reduced dimensionality and thus get an insight into the applicability of the model, possibly also deriving ideas of how to extend it.
Naturally, the more accurate approach (2) may be limited to smaller systems, while the simpler approach (1) lends itself to studies of larger systems.
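For strategy (1) above, the arithmetic is simple once a few total energies are available from the electronic-structure calculation. The sketch below uses made-up energy values purely for illustration; E_high_sym, E_minimum and E_saddle stand for the computed total energies at the high-symmetry reference point, at a JT-distorted minimum and at a saddle point on the warped minimum-energy path.

```python
# Hypothetical ab initio total energies (eV) at three characteristic geometries.
E_high_sym = -1000.000   # high-symmetry (degenerate) reference configuration
E_minimum  = -1000.250   # JT-distorted minimum
E_saddle   = -1000.230   # saddle point between equivalent minima

E_JT    = E_high_sym - E_minimum   # JT stabilisation energy
barrier = E_saddle - E_minimum     # barrier to pseudorotation

print(f"JT stabilisation energy: {E_JT:.3f} eV")
print(f"pseudorotation barrier:  {barrier:.3f} eV")
```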
Applications
Effects on structure
Small molecules and ions
The JT distortion of small molecules (or molecular ions) is directly deduced from electronic structure calculations of their APES (through DFT and/or ab initio computations). These molecules/ions are often radicals, such as trimers of alkali atoms (Li3 and Na3), that have unpaired spins and occur in particular in (but not restricted to) doublet states. Besides the JTE in 2E′ and 2E″ states, the pseudo JTE between an E state and a nearby A state may also play a role. The JT distortion reduces the symmetry from D3h to C2v (see figure), and it depends on the details of the interactions whether the isosceles triangle has an acute or an obtuse-angled (such as Na3) minimum-energy structure. Natural extensions are systems like NO3 and NH3+, where a JT distortion has been documented in the literature for ground or excited electronic states.
A somewhat special role is played by tetrahedral systems like CH4+ and P4+. Here threefold degenerate electronic states and vibrational modes come into play. Nevertheless, twofold degeneracies also continue to be important. The dynamics of the Jahn–Teller distortion in CH4+ has been characterized by transient X-ray absorption spectroscopy, revealing that symmetry breaking occurs within ten femtoseconds in this prototypical system.
Among larger systems, a focus in the literature has been on benzene and its radical cation, as well as on their halo (especially fluoro) derivatives. Already in the early 1980s, a wealth of information emerged from the detailed analysis of experimental emission spectra of 1,3,5-trifluoro- and hexafluoro (and chloro) benzene radical cations. For the parent benzene cation one has to rely on photoelectron spectra with comparatively lower resolution, because this species does not fluoresce (see also the section on spectroscopy and reactivity below). Rather detailed ab initio calculations have been carried out which
document the JT stabilization energies for the various (four) JT active modes and also quantify the moderate barriers for the JT pseudorotation.
Finally, a somewhat special role is played by systems with a fivefold symmetry axis like the cyclopentadienyl radical. Careful laser spectroscopic investigations have shed useful light on the JT interactions. In particular they reveal that the barrier to pseudorotation almost vanishes (the system is highly "fluxional") which can be attributed to the fact that the 2nd-order coupling terms vanish by symmetry and the leading higher-order terms are of 4th order.
Coordination chemistry
The JTE is usually stronger where the electron density associated with the degenerate orbitals is more concentrated. This effect therefore plays a large role in determining the structure of transition metal complexes with active internal 3d orbitals.
The most iconic and prominent of the JT systems in coordination chemistry is probably the case of Cu(II) octahedral complexes. While perfect octahedral (Oh) symmetry would be expected for a perfectly equivalent coordination, such as a CuF6 complex associated with a Cu(II) impurity in a cubic crystal like KMgF3, a lower tetragonal symmetry is in fact usually found experimentally. The origin of this JTE distortion is revealed by examining the electronic configuration of the undistorted complex. For an octahedral geometry, the five 3d orbitals partition into t2g and eg orbitals (see diagram). These orbitals are occupied by nine electrons, corresponding to the electronic configuration of Cu(II). Thus, the t2g shell is filled, and the eg shell contains 3 electrons. Overall the unpaired electron produces a 2Eg state, which is Jahn–Teller active. The third electron can occupy either of the orbitals comprising the eg shell: the mainly 3z²−r² orbital or the mainly x²−y² orbital. If the electron occupies the mainly 3z²−r² level, which is an antibonding orbital, the final geometry of the complex would be elongated, as the axial ligands will be pushed away to reduce the global energy of the system. On the other hand, if the electron went into the mainly x²−y² antibonding orbital, the complex would distort into a compressed geometry. Experimentally, elongated geometries are overwhelmingly observed, and this fact has been attributed both to metal–ligand anharmonic interactions and to 3d–4s hybridisations. Given that all the directions containing a fourfold axis are equivalent, the distortion is equally likely to happen in any of these orientations. From the electronic point of view this means that the 3z²−r² and x²−y² orbitals, which are degenerate and free to hybridise in the octahedral geometry, will mix to produce appropriate equivalent orbitals in each direction, such as 3x²−r² or 3y²−r².
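The electron-counting argument above can be written down mechanically. The following sketch is a simplification that assumes high-spin filling of an octahedral d shell and ignores low-spin cases and spin–orbit coupling; it flags a configuration as orbitally degenerate, and hence JT-active, whenever a degenerate orbital set is neither empty, evenly singly occupied, nor full.

```python
def octahedral_high_spin(d_electrons):
    """(t2g, eg) occupations for a high-spin octahedral d^n ion."""
    # singly occupy t2g (3) then eg (2), then pair up in the same order
    order = ["t2g", "t2g", "t2g", "eg", "eg"] * 2
    occ = {"t2g": 0, "eg": 0}
    for i in range(d_electrons):
        occ[order[i]] += 1
    return occ["t2g"], occ["eg"]

def degenerate(n, orbitals):
    """A set of `orbitals` degenerate orbitals holding n electrons is orbitally
    non-degenerate only when empty, evenly half-filled or completely full."""
    return n not in (0, orbitals, 2 * orbitals)

for d in range(1, 10):
    t2g, eg = octahedral_high_spin(d)
    active = degenerate(t2g, 3) or degenerate(eg, 2)
    print(f"d{d}: t2g^{t2g} eg^{eg}  JT-active: {active}")
# d9 (Cu(II)) comes out as t2g^6 eg^3: a degenerate eg occupation, hence the
# strong JT distortion discussed above; d3, high-spin d5 and d8 come out inactive.
```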
The JTE is not just restricted to Cu(II) octahedral complexes. There are many other configurations, involving changes both in the initial structure and electronic configuration of the metal that yield degenerate states and, thus, JTE. However, the amount of distortion and stabilisation energy of the effect is strongly dependent on the particular case. In octahedral Cu(II), the JTE is particularly strong because
the degenerate orbitals display a strongly antibonding σ character
Cu is a transition metal with a relatively strong electronegativity, yielding more covalent bonds than other metals, which increases the JT linear coupling constant.
In other configurations involving π or δ bonding, like for example when the degenerate state is associated to the t2g orbitals of an octahedral configuration, the distortion and stabilisation energies are usually much smaller and the possibility of not observing the distortion due to dynamic JT effects is much higher. Similarly for rare-earth ions where covalency is very small, the distortions associated to the JTE are usually very weak.
Importantly, the JTE is associated with strict degeneracy in the electronic subsystem and so it cannot appear in systems without this property. For example, the JTE is often associated with cases like quasi-octahedral CuX2Y4 complexes where the distances to the X and Y ligands are clearly different. However, the intrinsic symmetry of these complexes is already tetragonal and no degenerate eg orbital exists, as it has split into a1g (mainly 3z²−r²) and b1g (mainly x²−y²) orbitals due to the different electronic interactions with the axial X ligands and the equatorial Y ligands. In this and other similar cases some remaining vibronic effects related to the JTE are still present but are quenched with respect to the case with degeneracy due to the splitting of the orbitals.
Spectroscopy and reactivity
From spectra with rotational resolution, moments of inertia and hence bond lengths and angles can be determined "directly" (at least in principle). From less well-resolved spectra one can still determine important quantities like JT stabilization energies and energy barriers (e.g. to pseudorotation). However, more information is encoded in the whole spectral intensity distribution of an electronic transition. It has been used to decide on the presence (or absence) of the geometric phase which is accumulated during the pseudorotational motion around the JT (or other type of) conical intersection. Prominent examples of either type are the ground (X) or an excited (B) state of Na3. The Fourier transform of the spectral intensity distribution, the so-called autocorrelation function C(t), reflects the motion of the wavepacket after an optical (= vertical) transition to the APES of the final electronic state. Typically it will move on the timescale of a vibrational period, which is (for small molecules) of the order of 5–50 fs, i.e. ultrafast. Besides a nearly periodic motion, mode–mode interactions with very irregular (also chaotic) behaviour and spreading of the wavepacket may also occur. Near a conical intersection this will be accompanied/complemented by nonradiative transitions (termed internal conversion) to other APESs occurring on the same ultrafast time scale.
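The link between the autocorrelation function C(t) and the band shape can be made concrete in a few lines. The sketch below is an illustrative model only: the vibronic energies and Franck–Condon weights are invented numbers standing in for a JT-split band, ħ is set to one, and the exponential damping is a phenomenological stand-in for finite resolution and dephasing.

```python
import numpy as np

E_n = np.array([0.00, 0.10, 0.21, 0.33, 0.46])   # fictitious vibronic energies
c2  = np.array([0.10, 0.25, 0.30, 0.25, 0.10])   # fictitious Franck-Condon weights

t  = np.linspace(0.0, 2000.0, 4001)              # time grid (hbar = 1)
dt = t[1] - t[0]
C  = (c2[:, None] * np.exp(-1j * np.outer(E_n, t))).sum(axis=0)   # C(t) = <psi(0)|psi(t)>
C *= np.exp(-t / 300.0)                          # damping -> finite linewidths

omega = np.linspace(0.0, 0.6, 601)
# band shape ~ real part of the half Fourier transform of C(t)
band = np.real(np.exp(1j * np.outer(omega, t)) @ C) * dt

print("strongest peak near omega =", omega[np.argmax(band)])     # expect ~0.21
```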
For the JT case the situation is somewhat special, as compared to a general conical intersection, because the different JT potential sheets are symmetry-related to each other and have (exactly or nearly) the same energy minimum. The "transition" between them is thus more oscillatory than one would normally expect, and their time-averaged populations are close to 1/2. For a more typical scenario a more general conical intersection is "required".
The JT effect still comes into play, namely in combination with a different nearby, in general non-degenerate, electronic state. The result is a pseudo Jahn–Teller effect, for example of an E state interacting with an A state. This situation is common in JT systems, just as interactions between two nondegenerate electronic states are common for non-JT systems. Examples are excited electronic states of NH3+ and the benzene radical cation. Here, crossings between the E and A state APESs amount to triple intersections, which are associated with very complex spectral features (dense line structures and diffuse spectral envelopes under low resolution). The population transfer between the states is also ultrafast, so fast that fluorescence (proceeding on a nanosecond time scale) cannot compete. This helps to understand why the benzene cation, like many other organic radical cations, does not fluoresce.
To be sure, photochemical reactivity emerges when the internal conversion makes the system explore the nuclear configuration space such that new chemical species are formed. There is a plethora of femtosecond pump-probe spectroscopic techniques to reveal details of these processes occurring, for example, in the process of vision.
Solid-state problems
As proposed originally by Landau, free electrons in a solid, introduced for example by doping or irradiation, can interact with the vibrations of the lattice to form a localized quasi-particle known as a polaron. Strongly localized polarons (also called Holstein polarons) can condense around high-symmetry sites of the lattice, with electrons or holes occupying local degenerate orbitals that experience the JTE. These Jahn–Teller polarons break both translational and point-group symmetries of the lattice where they are found and have been attributed important roles in effects like colossal magnetoresistance and superconductivity.
Paramagnetic impurities in semiconducting, dielectric, diamagnetic and ferrimagnetic hosts can all be described using a JT model. For example, these models were used extensively in the 1980s and 1990s to describe ions of Cr, V and Ti substituting for Ga in GaAs and GaP.
The fullerene C60 can form solid compounds with alkali metals known as fullerides. Cs3C60 can be superconducting at temperatures up to 38 K under applied pressure, whereas compounds of the form A4C60 are insulating (as reviewed by Gunnarsson). JT effects both within the C60 molecules (intramolecular) and between C60 molecules (intermolecular) play a part in the mechanisms behind various observed properties in these systems. For example, they could mean that the Migdal–Eliashberg treatment of superconductivity breaks down. Also, the fullerides can form a so-called new state of matter known as a Jahn–Teller metal, where localised electrons coexist with metallicity and JT distortions on the C60 molecules persist.
Cooperative JT effect in crystals
The JTE is usually associated with degeneracies that are well localised in space, like those occurring in a small molecule or associated to an isolated transition metal complex. However, in many periodic high-symmetry solid-state systems, like perovskites, some crystalline sites allow for electronic degeneracy giving rise under adequate compositions to lattices of JT-active centers. This can produce a cooperative JTE, where global distortions of the crystal occur due to local degeneracies.
In order to determine the final electronic and geometric structure of a cooperative JT system, it is necessary to take into account both the local distortions and the interaction between the different sites, which will take whatever form is necessary to minimise the global energy of the crystal.
While work on the cooperative JTE started in the late fifties, it was in 1960 that Kanamori published the first study of the cooperative JTE in which many important elements present in the modern theory of this effect were introduced. This included the use of pseudospin notation to discuss orbital ordering, discussions of the importance of the JTE for magnetism and of the competition of this effect with the spin–orbit coupling, and the coupling of the distortions with the strain of the lattice. This last point was later stressed in the review by Gehring and Gehring as being the key element in establishing long-range order between the distortions in the lattice. An important result of the modern theory of the cooperative JTE is that it can lead to structural phase transitions.
Many cooperative JT systems would be expected to be metals from band theory since, to produce them, a degenerate orbital has to be partially filled and the associated band would be metallic. However, under the perturbation of the symmetry-breaking distortion associated with the cooperative JTE, the degeneracies in the electronic structure are destroyed and the ground state of these systems is often found to be insulating. In many important cases, like the parent compound of the colossal magnetoresistance perovskites, LaMnO3, an increase of temperature leads to disorder in the distortions, which lowers the band splitting due to the cooperative JTE, thus triggering a metal–insulator transition.
JT-related effects: Orbital ordering
In modern solid-state physics, it is common to classify systems according to the kind of degrees of freedom they have available, like electron (metals) or spin (magnetism). In crystals that can display the JTE, and before this effect is realised by symmetry-breaking distortions, it is found that there exists an orbital degree of freedom consisting of how electrons occupy the local degenerate orbitals. As initially discussed by Kugel and Khomskii, not all configurations are equivalent. The key is the relative orientation of these occupied orbitals, in the same way that spin orientation is important in magnetic systems, and the ground state can only be realised for some particular orbital pattern. Both this pattern and the effect giving rise to this phenomenon are usually termed orbital ordering.
In order to predict the orbital-ordering pattern, Kugel and Khomskii used a particularisation of the Hubbard model. In particular they established how superexchange interactions, usually described by the Anderson–Kanamori–Goodenough rules, change in the presence of degenerate orbitals. Their model, using a pseudospin representation for the local orbitals, leads to a Heisenberg-like model in which the ground state is a combination of orbital and spin patterns. Using this model it can be shown, for example, that the origin of the unusual ferromagnetic insulating ground state of a solid like K2CuF4 can be traced to its orbital ordering.
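Schematically, and leaving all coefficients generic because their values and the precise form of the orbital operators depend on the orbitals and hopping geometry involved, a Kugel–Khomskii-type Hamiltonian couples spins S_i and orbital pseudospins τ_i on neighbouring sites,

\[ H = \sum_{\langle ij \rangle} \Big[ J_s\, \mathbf{S}_i \cdot \mathbf{S}_j \;+\; J_\tau\, \boldsymbol{\tau}_i \cdot \boldsymbol{\tau}_j \;+\; K\, (\mathbf{S}_i \cdot \mathbf{S}_j)(\boldsymbol{\tau}_i \cdot \boldsymbol{\tau}_j) \Big], \]

so the effective exchange felt by the spins depends on the orbital pattern and vice versa; minimising the total energy therefore selects a joint spin–orbital order, as in the K2CuF4 example.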
Even when starting from a relatively high-symmetry structure, the combined effect of exchange interactions, spin–orbit coupling, orbital ordering and crystal deformations activated by the JTE can lead to very low symmetry magnetic patterns with specific properties. For example, in CsCuCl3 an incommensurate helicoidal pattern appears both for the orbitals and for the distortions along the c-axis. Moreover, many of these compounds show complex phase diagrams when varying temperature or pressure.
References
External links
A series of (mostly biannual) international symposia deal with current problems and modern developments in the field, the most recent of which are
JT2012, Tsukuba, Japan
JT2014, Graz, Austria
JT2016, Tartu, Estonia
The conferences are overseen and guided by the international JT steering committee.
The difference between real rotation and pseudorotation for a fullerene molecule is illustrated here.
Inorganic chemistry
Condensed matter physics
Chemical bonding
Coordination chemistry
Solid-state chemistry
Edward Teller | Jahn–Teller effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 8,346 | [
"Coordination chemistry",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"nan",
"Chemical bonding",
"Matter",
"Solid-state chemistry"
] |
1,965,347 | https://en.wikipedia.org/wiki/Quadrupole%20mass%20analyzer | In mass spectrometry, the quadrupole mass analyzer (or quadrupole mass filter) is a type of mass analyzer originally conceived by Nobel laureate Wolfgang Paul and his student Helmut Steinwedel. As the name implies, it consists of four cylindrical rods, set parallel to each other. In a quadrupole mass spectrometer (QMS) the quadrupole is the mass analyzer – the component of the instrument responsible for selecting sample ions based on their mass-to-charge ratio (m/z). Ions are separated in a quadrupole based on the stability of their trajectories in the oscillating electric fields that are applied to the rods.
Principle of operation
The quadrupole consists of four parallel metal rods. Each opposing rod pair is connected together electrically, and a radio frequency (RF) voltage with a DC offset voltage is applied between one pair of rods and the other. Ions travel down the quadrupole between the rods. Only ions of a certain mass-to-charge ratio will reach the detector for a given ratio of voltages: other ions have unstable trajectories and will collide with the rods. This permits selection of an ion with a particular m/z or allows the operator to scan for a range of m/z-values by continuously varying the applied voltage. Mathematically this can be modeled with the help of the Mathieu differential equation.
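The Mathieu-equation treatment can be sketched numerically. In a common dimensionless form the transverse ion motion obeys u'' + (a − 2q cos 2ξ)u = 0, with a and q proportional to the DC and RF voltages respectively and inversely proportional to m/z; an ion is transmitted only if both its x and y trajectories stay bounded. The toy integrator below uses illustrative (a, q) values along an arbitrary operating line, not the calibration of any real instrument.

```python
import numpy as np
from scipy.integrate import solve_ivp

def stable(a, q, n_cycles=200, bound=1000.0):
    """Integrate u'' + (a - 2 q cos(2 xi)) u = 0 from u = 1, u' = 0 and report
    whether the trajectory stays below `bound` for n_cycles drive periods."""
    def rhs(xi, y):
        u, v = y
        return [v, -(a - 2.0 * q * np.cos(2.0 * xi)) * u]
    def escaped(xi, y):
        return abs(y[0]) - bound
    escaped.terminal = True
    sol = solve_ivp(rhs, [0.0, np.pi * n_cycles], [1.0, 0.0],
                    events=escaped, max_step=0.1, rtol=1e-9, atol=1e-12)
    return sol.t_events[0].size == 0

def transmitted(a, q):
    # the x and y motions see (a, q) and (-a, -q); both must remain bounded
    return stable(a, q) and stable(-a, -q)

# Scan q along a fixed operating line a = 0.32 q (fixed DC/RF ratio); in a real
# instrument different m/z values map onto different points of this line, and
# only a narrow window of q is transmitted, which is the mass-filtering action.
for q in (0.20, 0.50, 0.69, 0.70, 0.75, 0.90):
    print(f"q = {q:.2f}: transmitted = {transmitted(0.32 * q, q)}")
```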
Ideally, the rods are hyperbolic; however, cylindrical rods with a specific ratio of rod diameter to spacing provide an adequate approximation to hyperbolas that is easier to manufacture. Small variations in the ratio have large effects on resolution and peak shape. Different manufacturers choose slightly different ratios to fine-tune operating characteristics in the context of anticipated application requirements. Since the 1980s, the MAT company and subsequently Finnigan Instrument Corporation used hyperbolic rods produced with a mechanical tolerance of 0.001 mm, whose exact production process was a well-kept secret within the company.
Multiple quadrupoles, hybrids and variations
A linear series of three quadrupoles is known as a triple quadrupole mass spectrometer. The first (Q1) and third (Q3) quadrupoles act as mass filters, and the middle (q2) quadrupole is employed as a collision cell. This collision cell is an RF-only quadrupole (non-mass filtering) using Ar, He, or N2 gas (~10⁻³ Torr, ~30 eV) for collision-induced dissociation of selected parent ion(s) from Q1. Subsequent fragments are passed through to Q3 where they may be filtered or fully scanned.
This process allows for the study of fragments that are useful in structural elucidation by tandem mass spectrometry. For example, the Q1 may be set to 'filter' for a drug ion of known mass, which is fragmented in q2. The third quadrupole (Q3) can then be set to scan the entire m/z range, giving information on the intensities of the fragments. Thus, the structure of the original ion can be deduced.
The arrangement of three quadrupoles was first developed by Jim Morrison of La Trobe University in Australia for the purpose of studying the photodissociation of gas-phase ions. The first triple-quadrupole mass spectrometer was developed at Michigan State University by Christie Enke and graduate student Richard Yost in the late 1970s.
Quadrupoles can be used in hybrid mass spectrometers. For example, a sector instrument can be combined with a collision quadrupole and quadrupole mass analyzer to form a hybrid instrument.
A mass-selecting quadrupole and collision quadrupole with a time-of-flight device as the second mass selection stage is a hybrid known as a quadrupole time-of-flight mass spectrometer (QTOF MS). Quadrupole-quadrupole-time-of-flight (QqTOF) configurations are also possible and are used especially for the mass spectrometry of peptides and other large biological polymers.
A variant of the quadrupole mass analyzer called the monopole was invented by von Zahn; it operates with two electrodes and generates one quarter of the quadrupole field. It has one circular electrode and one V-shaped electrode. The performance is, however, lower than that of a quadrupole mass analyzer.
An enhancement to the performance of the quadrupole mass analyzer has been demonstrated to occur when a magnetic field is applied to the instrument. Manifold improvements in resolution and sensitivity have been reported for a magnetic field applied in various orientations to a QMS.
Applications
These mass spectrometers excel at applications where particular ions of interest are being studied because they can stay tuned on a single ion for extended periods of time. One place where this is useful is in liquid chromatography-mass spectrometry or gas chromatography-mass spectrometry where they serve as exceptionally high specificity detectors. Quadrupole instruments are often reasonably priced and make good multi-purpose instruments. A single quadrupole mass spectrometer with an electron impact ionizer is used as a standalone analyzer in residual gas analyzers, real-time gas analyzers, plasma diagnostics and SIMS surface analysis systems.
See also
Fourier transform ion cyclotron resonance
Quadrupole magnet
Radio-frequency quadrupole beam cooler
References
External links
Mass spectrometry | Quadrupole mass analyzer | [
"Physics",
"Chemistry"
] | 1,124 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
1,965,869 | https://en.wikipedia.org/wiki/Banks%E2%80%93Zaks%20fixed%20point | In quantum chromodynamics (and also N = 1 super quantum chromodynamics) with massless flavors, if the number of flavors, Nf, is sufficiently small (i.e. small enough to guarantee asymptotic freedom, depending on the number of colors), the theory can flow to an interacting conformal fixed point of the renormalization group. If the value of the coupling at that point is less than one (i.e. one can perform perturbation theory in weak coupling), then the fixed point is called a Banks–Zaks fixed point. The existence of the fixed point was first reported in 1974 by Belavin and Migdal and by Caswell, and later used by Banks and Zaks in their analysis of the phase structure of vector-like gauge theories with massless fermions. The name Caswell–Banks–Zaks fixed point is also used.
More specifically, suppose that we find that the beta function of a theory up to two loops has the form
β(g) = −b0 g³ + b1 g⁵ + O(g⁷),
where b0 and b1 are positive constants. Then there exists a value g* such that β(g*) = 0:
g*² = b0/b1.
If we can arrange b0 to be smaller than b1, then we have g*² < 1. It follows that when the theory flows to the IR it is a conformal, weakly coupled theory with coupling g*.
For the case of a non-Abelian gauge theory with gauge group SU(Nc) and Dirac fermions in the fundamental representation of the gauge group for the flavored particles we have
b0 = (11Nc/3 − 2Nf/3)/(16π²),   b1 = [(13Nc/3 − 1/Nc)Nf − 34Nc²/3]/(16π²)²,
where Nc is the number of colors and Nf the number of flavors. Then Nf should lie just below 11Nc/2 in order for the Banks–Zaks fixed point to appear. Note that this fixed point only occurs if, in addition to the previous requirement on Nf (which guarantees asymptotic freedom),
34Nc³/(13Nc² − 3) < Nf,
where the lower bound comes from requiring b1 > 0. This way the g⁵ term of the beta function is positive while the g³ term is still negative (see the first equation above), and one can solve β(g) = 0 with a real solution for g. The two-loop coefficient b1 was first correctly computed by Caswell, while the earlier paper by Belavin and Migdal has a wrong answer.
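A small numerical check of the window and of the fixed-point coupling, using the coefficients quoted above (the choice Nc = 3 and the sample values of Nf are just examples):

```python
import math

def banks_zaks_window(Nc):
    """Range of Nf allowing both asymptotic freedom (b0 > 0) and a real
    perturbative zero of the two-loop beta function (b1 > 0)."""
    lower = 34 * Nc**3 / (13 * Nc**2 - 3)
    upper = 11 * Nc / 2
    return lower, upper

def g_star_squared(Nc, Nf):
    b0 = (11 * Nc / 3 - 2 * Nf / 3) / (16 * math.pi**2)
    b1 = ((13 * Nc / 3 - 1 / Nc) * Nf - 34 * Nc**2 / 3) / (16 * math.pi**2) ** 2
    return b0 / b1

Nc = 3
lo, hi = banks_zaks_window(Nc)
print(f"SU(3): fixed-point window  {lo:.2f} < Nf < {hi:.1f}")
for Nf in (15, 16):
    g2 = g_star_squared(Nc, Nf)
    print(f"Nf = {Nf}: g*^2 = {g2:.2f}, alpha* = g*^2/(4 pi) = {g2 / (4 * math.pi):.3f}")
# Nf = 16 sits close to the upper edge and gives a weakly coupled fixed point
# (g*^2 well below one); Nf = 15 already gives g*^2 above one.
```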
See also
Beta function
References
T. J. Hollowood, "Renormalization Group and Fixed Points in Quantum Field Theory", Springer, 2013, .
Gauge theories
Quantum chromodynamics
Fixed points (mathematics)
Renormalization group
Conformal field theory
Supersymmetric quantum field theory | Banks–Zaks fixed point | [
"Physics",
"Mathematics"
] | 468 | [
"Symmetry",
"Physical phenomena",
"Supersymmetric quantum field theory",
"Mathematical analysis",
"Fixed points (mathematics)",
"Critical phenomena",
"Quantum mechanics",
"Renormalization group",
"Topology",
"Statistical mechanics",
"Supersymmetry",
"Quantum physics stubs",
"Dynamical system... |
1,966,606 | https://en.wikipedia.org/wiki/Hydroxyapatite | Hydroxyapatite (IMA name: hydroxylapatite) (Hap, HAp, or HA) is a naturally occurring mineral form of calcium apatite with the formula , often written to denote that the crystal unit cell comprises two entities. It is the hydroxyl endmember of the complex apatite group. The ion can be replaced by fluoride or chloride, producing fluorapatite or chlorapatite. It crystallizes in the hexagonal crystal system. Pure hydroxyapatite powder is white. Naturally occurring apatites can, however, also have brown, yellow, or green colorations, comparable to the discolorations of dental fluorosis.
Up to 50% by volume and 70% by weight of human bone is a modified form of hydroxyapatite, known as bone mineral. Carbonated calcium-deficient hydroxyapatite is the main mineral of which dental enamel and dentin are composed. Hydroxyapatite crystals are also found in pathological calcifications such as those found in breast tumors, as well as calcifications within the pineal gland (and other structures of the brain) known as corpora arenacea or "brain sand".
Chemical synthesis
Hydroxyapatite can be synthesized via several methods, such as wet chemical deposition, biomimetic deposition, sol-gel route (wet-chemical precipitation) or electrodeposition. The hydroxyapatite nanocrystal suspension can be prepared by a wet chemical precipitation reaction following the reaction equation below:
10 Ca(OH)2 + 6 H3PO4 → Ca10(PO4)6(OH)2 + 18 H2O
The ability to synthetically replicate hydroxyapatite has invaluable clinical implications, especially in dentistry. Each technique yields hydroxyapatite crystals of varied characteristics, such as size and shape. These variations have a marked effect on the biological and mechanical properties of the compound, and therefore these hydroxyapatite products have different clinical uses.
Calcium-deficient hydroxyapatite
Calcium-deficient (non-stoichiometric) hydroxyapatite, Ca10−x(PO4)6−x(HPO4)x(OH)2−x (where x is between 0 and 1), has a Ca/P ratio between 1.67 and 1.5. The Ca/P ratio is often used in the discussion of calcium phosphate phases. Stoichiometric apatite has a Ca/P ratio of 10:6, normally expressed as 1.67. The non-stoichiometric phases have the hydroxyapatite structure with cation (Ca2+) vacancies and hydroxide (OH⁻) vacancies. The sites occupied solely by phosphate anions in stoichiometric hydroxyapatite are occupied by phosphate (PO4³⁻) or hydrogen phosphate (HPO4²⁻) anions.
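The quoted Ca/P range follows directly from the formula above: the ratio is (10 − x)/6, which runs from 10/6 ≈ 1.67 at x = 0 down to 9/6 = 1.5 at x = 1. A one-line check:

```python
for x in (0.0, 0.5, 1.0):
    print(f"x = {x}:  Ca/P = {(10 - x) / 6:.3f}")
# x = 0.0 -> 1.667 (stoichiometric), x = 1.0 -> 1.500 (fully calcium-deficient)
```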
These calcium-deficient phases can be prepared by precipitation from a mixture of calcium nitrate and diammonium phosphate with the desired Ca/P ratio, for example to make a sample with a Ca/P ratio of 1.6.
Sintering these non-stoichiometric phases forms a solid phase which is an intimate mixture of tricalcium phosphate and hydroxyapatite, termed biphasic calcium phosphate:
Ca10−x(PO4)6−x(HPO4)x(OH)2−x → (1−x) Ca10(PO4)6(OH)2 + 3x Ca3(PO4)2 + x H2O
Biological function
Mammals (including humans)
Hydroxyapatite is present in bones and teeth; bone is made primarily of HA crystals interspersed in a collagen matrix, and 65 to 70% of the mass of bone is HA. Similarly, HA makes up 70 to 80% of the mass of dentin and enamel in teeth. In enamel, the matrix for HA is formed by amelogenins and enamelins instead of collagen. Importantly, hydroxyapatite-coated orthopedic implants perform better in certain patients; for instance, for patients with steatotic liver disease, hydroxyapatite-coated titanium has superior properties. Hence, the potential of hydroxyapatite for engineering biomaterials is considered substantial.
Hydroxyapatite deposits in tendons around joints results in the medical condition calcific tendinitis.
Hydroxyapatite is a constituent of calcium phosphate kidney stones.
Remineralisation of tooth enamel
Remineralisation of tooth enamel involves the reintroduction of mineral ions into demineralised enamel. Hydroxyapatite is the main mineral component of enamel in teeth. During demineralisation, calcium and phosphorus ions are drawn out from the hydroxyapatite. The mineral ions introduced during remineralisation restore the structure of the hydroxyapatite crystals. If fluoride ions are present during the remineralisation, through water fluoridation or the use of fluoride-containing toothpaste, the stronger and more acid-resistant fluorapatite crystals are formed instead of the hydroxyapatite crystals.
Mantis shrimp
The clubbing appendages of Odontodactylus scyllarus (the peacock mantis shrimp) are made of an extremely dense form of the mineral with a high specific strength; this has led to its investigation for potential synthesis and engineering use. Their dactyl appendages have excellent impact resistance because the impact region is composed mainly of crystalline hydroxyapatite, which offers significant hardness. A periodic layer underneath the impact layer, composed of hydroxyapatite with lower calcium and phosphorus content (and thus a much lower modulus), inhibits crack growth by forcing new cracks to change directions. This periodic layer also reduces the energy transferred across both layers due to the large difference in modulus, even reflecting some of the incident energy.
Use in dentistry
The use of hydroxyapatite, or its synthetically manufactured form, nano-hydroxyapatite, is not yet common practice in dentistry. Some studies suggest it is useful in counteracting dentine hypersensitivity, preventing sensitivity after teeth-bleaching procedures and preventing cavities. Avian eggshell hydroxyapatite can be a viable filler material in bone regeneration procedures in oral surgery.
Dentine sensitivity
Nano-hydroxyapatite possesses bioactive components which can prompt the mineralisation process of teeth, remedying hypersensitivity. Hypersensitivity of teeth is thought to be regulated by fluid within dentinal tubules. The movement of this fluid as a result of different stimuli is said to excite receptor cells in the pulp and trigger sensations of pain. The physical properties of the nano-hydroxyapatite can penetrate and seal the tubules, stopping the circulation of the fluid and therefore the sensations of pain from stimuli. Nano-hydroxyapatite would be preferred as it parallels the natural process of surface remineralisation.
In comparison to alternative treatments for dentine hypersensitivity relief, nano-hydroxyapatite containing treatment has been shown to perform better clinically. Nano-hydroxyapatite was proven to be better than other treatments at reducing sensitivity against evaporative stimuli, such as an air blast, and tactile stimuli, such as tapping the tooth with a dental instrument. However, no difference was seen between nano-hydroxyapatite and other treatments for cold stimuli. Hydroxylapatite has shown significant medium and long-term desensitizing effects on dentine hypersensitivity using evaporative stimuli and the visual analogue scale (alongside potassium nitrate, arginine, glutaraldehyde with hydroxyethyl methacrylate, hydroxyapatite, adhesive systems, glass ionomer cements and laser).
Co-agent for bleaching
Teeth bleaching agents release reactive oxygen species which can degrade enamel. To prevent this, nano-hydroxyapatite can be added to the bleaching solution to reduce the impact of the bleaching agent by blocking pores within the enamel. This reduces sensitivity after the bleaching process.
Cavity prevention
Nano-hydroxyapatite possesses a remineralising effect on teeth and can be used to prevent damage from carious attacks. In the event of an acid attack by cariogenic bacteria, nano-hydroxyapatite particles can infiltrate pores on the tooth surface to form a protective layer. Furthermore, nano-hydroxyapatite may have the capacity to reverse damage from carious assaults by either directly replacing deteriorated surface minerals or acting as a binding agent for lost ions.
In some toothpaste hydroxyapatite can be found in the form of nanocrystals (as these are easily dissolved). In recent years, hydroxyapatite nanocrystals (nHA) have been used in toothpaste to combat dental hypersensitivity. They aid in the repair and remineralisation of the enamel, thus helping to prevent tooth sensitivity. Tooth enamel can become demineralised due to various factors, including acidic erosion and dental caries. If left untreated this can lead to the exposure of dentin and subsequent exposure of the dental pulp. In various studies the use of nano hydroxyapatite in toothpaste showed positive results in aiding the remineralisation of dental enamel. In addition to remineralisation, in vitro studies have shown that toothpastes containing nano-hydroxyapatite have the potential to reduce biofilm formation on both tooth enamel and resin-based composite surfaces.
As a dental material
Hydroxyapatite is widely used within dentistry and oral and maxillofacial surgery, due to its chemical similarity to hard tissue.
In the future, there are possibilities for using nano-hydroxyapatite for tissue engineering and repair. The main and most advantageous feature of nano-hydroxyapatite is its biocompatibility. It is chemically similar to naturally occurring hydroxyapatite and can mimic the structure and biological function of the structures found in the resident extracellular matrix. Therefore, it can be used as a scaffold for engineering tissues such as bone and cementum. It may be used to restore cleft lips and palates and refine existing practices such as preservation of alveolar bone after extraction for better implant placement.
Safety concerns
The European Commission's Scientific Committee on Consumer Safety (SCCS) issued an official opinion in 2021, where it considered whether the nanomaterial hydroxyapatite was safe when used in leave-on and rinse-off dermal and oral cosmetic products, taking into account reasonably foreseeable exposure conditions. It stated:
The European Commission's Scientific Committee on Consumer Safety (SCCS) reissued an updated opinion in 2023, where it cleared rod-shaped nano hydroxyapatite of concerns regarding genotoxicity, allowing consumer products to contain concentrations of nano hydroxyapatite as high as 10% for toothpastes and 0.465% for mouthwashes. However, it warns of needle-shaped nano hydroxyapatite and of inhalation in spray products. It stated:
Chromatography
Along with its medical applications, hydroxyapatite is also used in downstream processing as a mixed-mode chromatography medium in the polishing step. The ions present on the surface of hydroxyapatite give it a unique selectivity for the separation and purification of biomolecule mixtures. In mixed-mode chromatography, hydroxyapatite is used as the stationary phase in chromatography columns.
The combined presence of calcium ions (C-sites) and phosphate sites (P-sites) provides metal-affinity and ion-exchange properties respectively. The C-sites on the surface of the resin undergo metal-affinity interactions with phosphate or carboxyl groups present on the biomolecules. Concurrently, these positively charged C-sites tend to repel positively charged functional groups (e.g., amino groups) on biomolecules. P-sites undergo cation exchange with positively charged functional groups on biomolecules and exhibit electrostatic repulsion with negatively charged functional groups. For elution, a buffer with a high concentration of phosphate and sodium chloride is used. The nature of the different charged sites on the surface of hydroxyapatite provides the framework for unique selectivity and binding of biomolecules, facilitating robust separations.
Hydroxyapatite is available in different forms and sizes for the purpose of protein purification. The advantages of hydroxyapatite media are their high stability and lot-to-lot uniformity during production. Generally, hydroxyapatite is used in the polishing step of monoclonal antibody purification, the isolation of endotoxin-free plasmids, and the purification of enzymes and viral particles.
Use in archaeology
In archaeology, hydroxyapatite from human and animal remains can be analysed to reconstruct ancient diets, migrations and paleoclimate. The mineral fractions of bone and teeth act as a reservoir of trace elements, including carbon, oxygen and strontium. Stable isotope analysis of human and faunal hydroxyapatite can be used to indicate whether a diet was predominantly terrestrial or marine in nature (carbon, strontium); the geographical origin and migratory habits of an animal or human (oxygen, strontium) and to reconstruct past temperatures and climate shifts (oxygen). Post-depositional alteration of bone can contribute to the degradation of bone collagen, the protein required for stable isotope analysis.
Research
Due to its high biocompatibility, bioactivity, osteoconductive and/or osteoinductive capacity, nontoxicity, nonimmunogenic properties, and noninflammatory behavior, hydroxyapatite is available and used as a bone filler and as coatings on prostheses. Designing bone scaffolds with a higher capability of promoting bone regeneration is a topical research subject. Composite 3D scaffolds for bone tissue engineering based on nano-hydroxyapatite and poly-ε-caprolactone were designed. The 3D composite scaffolds showed good cytocompatibility and osteogenic potential, which is specifically recommended in applications when faster mineralization is needed, such as osteoporosis treatment.
Defluoridation
Hydroxylapatite is a potential adsorbent for the defluoridation of drinking water, as it forms fluorapatite in a three-step process: it removes fluoride (F⁻) from the water, which replaces hydroxide (OH⁻) to form fluorapatite. However, during the defluoridation process the hydroxyapatite dissolves and increases the pH and phosphate-ion concentration, which makes the defluoridated water unfit for drinking. Recently, a ″calcium amended-hydroxyapatite″ defluoridation technique was suggested to overcome the phosphate leaching from hydroxyapatite. This technique can also help reverse fluorosis by providing calcium-enriched alkaline drinking water to fluorosis-affected areas.
See also
Calcium hydroxyphosphate
Calcific tendinitis
Mechanical properties of biomaterials
References
External links
Calcium minerals
Phosphate minerals
Ceramic materials
Biomaterials
Hexagonal minerals
Minerals in space group 176 | Hydroxyapatite | [
"Physics",
"Engineering",
"Biology"
] | 3,054 | [
"Biomaterials",
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Matter",
"Medical technology"
] |
24,128,922 | https://en.wikipedia.org/wiki/Algae%20bioreactor | An algae bioreactor is used for cultivating micro or macroalgae. Algae may be cultivated for the purposes of biomass production (as in a seaweed cultivator), wastewater treatment, CO2 fixation, or aquarium/pond filtration in the form of an algae scrubber. Algae bioreactors vary widely in design, falling broadly into two categories: open reactors and enclosed reactors. Open reactors are exposed to the atmosphere while enclosed reactors, also commonly called photobioreactors, are isolated to varying extents from the atmosphere. Specifically, algae bioreactors can be used to produce fuels such as biodiesel and bioethanol, to generate animal feed, or to reduce pollutants such as NOx and CO2 in flue
gases of power plants. Fundamentally, this kind of bioreactor is based on the photosynthetic reaction, which is performed by the chlorophyll-containing algae itself using dissolved carbon dioxide and sunlight. The carbon dioxide is dispersed into the reactor fluid to make it accessible to the algae. The bioreactor has to be made out of transparent material.
Historical background
The first microalgae cultivation was of the unicellular Chlorella vulgaris by Dutch microbiologist Martinus Beijerinck in 1890. Later, during World War II, Germany used open ponds to increase algal cultivation for use as a protein supplement. Some of the first experiments with the aim of cultivating algae were conducted in 1957 by the Carnegie Institution for Science in Washington. In these experiments, monocellular Chlorella were cultivated by adding CO2 and some minerals. The goal of this research was the cultivation of algae to produce a cheap animal feed.
Metabolism of microalgae
Algae are primarily eukaryotic photoautotrophic organisms which perform oxygenic photosynthesis. These types of algae are classified by their light-harvesting pigments, which give them their color. The green algae species, also known as Chlorophyta, are often used in bioreactors due to their high growth rate and ability to withstand a variety of environments. Blue-green algae, also known as cyanobacteria, are classified as prokaryotic photoautotrophs due to their lack of a nucleus. Light provides the essential energy the cell needs to metabolize CO2, nitrogen, phosphorus and other essential nutrients. The wavelengths and intensities of light are very important factors. Available CO2 is also an important factor for growth and, due to its low concentration in the atmosphere, supplementary CO2 can be added, as seen with the bubble column PBR below. Microalgae also possess the ability to take up excess nitrogen and phosphorus under starvation conditions, which are essential for lipid and amino acid synthesis. Higher temperatures and a pH above 7 and below 9 are also common growth factors. Each of these factors may vary from species to species, so it is important to provide the correct environmental conditions when designing bioreactors of any sort.
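The dependence of growth on light, CO2, temperature and pH described above is often summarised in simple multiplicative limitation models. The sketch below is a toy model only: the Monod-type light and CO2 terms, the Gaussian temperature optimum and every parameter value are assumptions chosen for illustration, not a calibrated description of any particular species.

```python
import math

def specific_growth_rate(I, co2, T, pH,
                         mu_max=1.5,     # 1/day, assumed maximum growth rate
                         K_I=60.0,       # assumed light half-saturation constant
                         K_C=0.2,        # assumed dissolved-CO2 half-saturation constant
                         T_opt=28.0, T_width=8.0,   # assumed temperature optimum / tolerance
                         pH_range=(7.0, 9.0)):
    """Toy multiplicative model: mu = mu_max * f(light) * f(CO2) * f(T) * f(pH)."""
    f_light = I / (K_I + I)                       # Monod-type light limitation
    f_co2   = co2 / (K_C + co2)                   # Monod-type carbon limitation
    f_temp  = math.exp(-((T - T_opt) / T_width) ** 2)
    f_pH    = 1.0 if pH_range[0] <= pH <= pH_range[1] else 0.3
    return mu_max * f_light * f_co2 * f_temp * f_pH

# example: moderate light, CO2-sparged culture at 25 degrees C and pH 8
print(f"{specific_growth_rate(I=150, co2=0.5, T=25, pH=8.0):.2f} per day")
```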
Types of bioreactors
Bioreactors can be divided into two broad categories, open systems and photobioreactors (PBR). The difference between these two reactors are their exposure to the surrounding environment. Open systems are fully exposed to the atmosphere, while PBRs have very limited exposure to the atmosphere.
Commonly used open systems
Simple ponds
The simplest system yields a low production and operation cost. Ponds need a rotating mixer to avoid settling of algal biomass. However, these systems are prone to contamination due to the lack of environmental control.
Raceway ponds
A modified version of a simple pond, the raceway pond uses paddle wheels to drive the flow in a certain direction. The pond is continuously collecting biomass while providing carbon dioxide and other nutrients back into the pond. Typically, raceway ponds are very large due to their low water depth.
Other systems
Less common systems include an incline cascade system where flow is gravity-driven to a retention tank, from where it gets pumped back up to start again. This system can yield high biomass densities, but also entails higher operating costs.
Commonly used photobioreactors (PBRs)
Nowadays, three basic types of algae photobioreactor are distinguished; the determining, unifying parameter is the available intensity of sunlight energy.
Flat plate PBR
A plate reactor simply consists of inclined or vertically arranged translucent rectangular boxes, which are often divided in two parts to effect an agitation of the reactor fluid. Generally, these boxes are linked together into a system. Those connections are also used for filling and emptying the reactor and for the introduction of gas and nutritive substances. The introduction of the flue gas mostly occurs at the bottom of the box to ensure that the carbon dioxide has enough time to interact with the algae in the reactor fluid. Typically, these plates are illuminated from both sides and have a high light penetration. Disadvantages of the flat plate design are the limited pressure tolerance and high space requirements.
Tubular PBR
A tubular reactor consists of vertically or horizontally arranged tubes, connected together, in which the algae-suspended fluid circulates. The tubes are generally made out of transparent plastics or borosilicate glass, and the constant circulation is kept up by a pump at the end of the system. The introduction of gas takes place at the end/beginning of the tube system. This way of introducing gas causes the problem of carbon dioxide deficiency and high concentration of oxygen at the end of the unit during the circulation, ultimately making the process inefficient. The growth of microalgae on the walls of the tubes can inhibit the penetration of the light as well.
Bubble column PBR
A bubble column photo reactor consists of vertically arranged cylindrical columns made out of transparent material. The introduction of gas takes place at the bottom of the column and causes a turbulent stream to enable an optimum gas exchange. The bubbling also acts as a natural agitator. Light is typically sourced from outside the column, however recent designs introduce lights inside the column to increase light distribution and penetration.
Industrial usage
The cultivation of algae in a photobioreactor creates a narrow range of industrial application possibilities. There are three common pathways for the cultivated biomass: algae may be used for environmental improvements, biofuel production, and food or feed. Some power companies have already established research facilities with algae photobioreactors to find out how efficient they could be in reducing CO2 emissions, which are contained in flue gas, and how much biomass will be produced. Algae biomass has many uses and can be sold to generate additional income. The saved emission volume can bring in income too, by selling emission credits to other power companies. Recent studies around the world have looked at the use of algae for treating wastewater as a way to become more sustainable.
The utilization of algae as food is very common in East Asian regions and is making an appearance around the world for uses in feedstock and even pharmaceuticals due to their high value products. Most of the species contain only a fraction of usable proteins and carbohydrates, and a lot of minerals and trace elements. Generally, the consumption of algae should be minimal because of the high iodine content, particularly problematic for those with hyperthyroidism. Likewise, many species of diatomaceous algae produce compounds unsafe for humans. The algae, especially some species which contain over 50 percent oil and a lot of carbohydrates, can be used for producing biodiesel and bioethanol by extracting and refining the fractions. The algae biomass is generated 30 times faster than some agricultural biomass, which is commonly used for producing biodiesel.
Microgeneration
An experimental bionic showcase house built in Germany in 2013 uses glass facade panels for the cultivation of microalgae.
Once the panels heat up, thermal energy can also be extracted through a heat exchanger to supply the building with warm water. The technology is still at an early stage and not yet ready for wider use.
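For a sense of scale, the heat recoverable from such a panel loop follows the usual sensible-heat relation; the flow rate and temperature rise used below are assumed values, not measurements from this building.

```latex
\dot{Q} = \dot{m}\, c_p\, \Delta T
        = 0.1\,\mathrm{kg/s} \times 4.18\,\mathrm{kJ/(kg\,K)} \times 10\,\mathrm{K}
        \approx 4.2\,\mathrm{kW}
```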
The Green Power House in Montana, United States, used newly developed Algae Aquaculture Technology (AACT) in a system that uses sunlight and woody waste from a lumber mill to provide nutrients to the eight AACT algae ponds that cover its floor. Identified challenges of algae façades include the durability of the microalgae panels, the need for maintenance, and construction and maintenance costs.
In 2022, news outlets reported on a company's development of algae biopanels for sustainable energy generation, though their viability remains unclear.
See also
Moss bioreactor
References
Further reading
How an entrepreneur killed his investor. August 18, 2016
Biotechnology
Biological engineering
Bioreactors
Algaculture
Renewable energy | Algae bioreactor | [
"Chemistry",
"Engineering",
"Biology"
] | 1,739 | [
"Bioreactors",
"Biological engineering",
"Algae",
"Chemical reactors",
"Biochemical engineering",
"Microbiology equipment",
"Biotechnology",
"Algaculture",
"nan"
] |
27,032,782 | https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28entropy%29 | The following list shows different orders of magnitude of entropy.
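Two worked reference points help calibrate the joule-per-kelvin scale; they use standard constants and a textbook latent heat of fusion and are added for orientation, not taken from the list itself.

```latex
% Entropy of erasing one bit of information (Landauer limit), k_B = Boltzmann constant:
S_{\mathrm{bit}} = k_B \ln 2 \approx (1.38 \times 10^{-23}\,\mathrm{J/K})(0.693)
                 \approx 9.6 \times 10^{-24}\,\mathrm{J/K}

% Entropy change when 1 kg of ice melts at 273 K (latent heat L_f \approx 3.34 \times 10^{5}\,\mathrm{J/kg}):
\Delta S = \frac{m L_f}{T} = \frac{(1\,\mathrm{kg})(3.34 \times 10^{5}\,\mathrm{J/kg})}{273\,\mathrm{K}}
         \approx 1.2 \times 10^{3}\,\mathrm{J/K}
```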
See also
Heat capacity
Joule per kelvin
Orders of magnitude (data), relates to information entropy
Order of magnitude (terminology)
References
Entropy
Entropy | Orders of magnitude (entropy) | [
"Physics",
"Chemistry",
"Mathematics"
] | 41 | [
"Thermodynamic properties",
"Units of measurement",
"Physical quantities",
"Quantity",
"Entropy",
"Asymmetry",
"Wikipedia categories named after physical quantities",
"Orders of magnitude",
"Symmetry",
"Dynamical systems"
] |
27,034,390 | https://en.wikipedia.org/wiki/Global%20Water%20Security%20%26%20Sanitation%20Partnership | The Global Water Security & Sanitation Partnership (GWSP), formerly the Water and Sanitation Program, is a trust fund administered by the World Bank aimed at improving the accessibility and infrastructure of water and sanitation in underdeveloped countries. GWSP works in more than 25 countries through regional offices in Africa, East and South Asia, Latin America, the Caribbean, and an office in Washington, D.C. Heath P. Tarbert is the Acting Executive Director for the United States. The GWSP is best known for providing technical assistance, building partnerships, and building capacity. It focuses both on regulatory and structural change and on behavior-change projects, such as its scaling-up handwashing and scaling-up sanitation projects. Another key aspect of the GWSP's work is sharing knowledge and best practices through multiple channels. The GWSP has identified five main focus areas: sustainability, inclusion, institutions, financing, and resilience.
Activities
In addition to other field projects, the program published 108 field notes and technical briefs in 2016. In that year, just under US$40 billion was distributed worldwide, mostly in Africa. The program divides its efforts between developing sanitation infrastructure and supplies and researching issues that affect the well-being of communities lacking such facilities.
Countries affected
Africa
East Asia and the Pacific
Bangladesh
Cambodia
India
Indonesia
Laos
Pakistan
Philippines
Vietnam
Latin America
Bolivia
Ecuador
Haiti
Honduras
Nicaragua
Peru
Other Focus Areas
Ending open defecation
The program has devoted much of its influence to ending open defecation (OD), which affects 1 billion people worldwide and leads to an estimated 842,000 deaths annually. As part of the RWSP, the WSP began extensive data collection in several countries to explore the factors contributing to open defecation in rural areas. The main methodology it has developed, dubbed the SaniFOAM framework, focuses on identifying the specific practices or attitudes that need to change within a community and then finding ways to influence them, with the ultimate goal of ending open defecation.
Rural Water and Sanitation Project (RWSP)
The Water and Sanitation Program focused mostly on metropolitan areas. The Rural Water and Sanitation Project focuses mainly on rural areas that lack the access to water and sanitation that metropolitan areas have. The RWSP expands water and sewage infrastructure in countries where it exists only in a small part of the territory. The project uses behavior-change techniques and sanitation marketing to create demand for products and services that improve water quality. Beginning in 2006 it was implemented in India, Indonesia, and Tanzania, and it has since spread to over a dozen countries.
Water Partnership Program (WPP)
The Water Partnership Program focuses on the agricultural use of water. The WPP recognizes that about 70% of freshwater withdrawals are used for agriculture; it conducts research on agricultural water use and takes steps to keep fresh water from being over-exploited for growing crops.
Methodology
Sustainability
The GWSP has stated that its goal is to promote and help fund private-sector initiatives in countries with limited access to water. Its reasoning for promoting private-sector development is that private water suppliers can provide better access at lower cost, whereas the public sector lacks the resources to improve access to water. However, the practice of water privatization in developing nations has been criticized. Critics argue that, in pursuit of profit, private water suppliers do not adequately develop infrastructure, and that once development-aid programs end, many lower-income households are left without access to inexpensive water.
The GWSP takes steps to ensure the sustainability of water and sanitation services, for example by:
Planning for future population growth, urbanization, and climate change
Building infrastructure that lasts and can be maintained
Inclusion
The GWSP promotes inclusive access, working to ensure that no one is excluded from water and sanitation services.
Institutions
Institutions set the rules under which water services operate. The GWSP works to understand and strengthen these rules so that its services can be expanded.
Financing
Meeting global water and sanitation targets has been estimated to require US$114 billion per year until 2030. To help reach that goal, the GWSP is working to balance sources of funding, keep water affordable, and maintain the financial viability of water services.
Resilience
Extreme weather and climate change will affect how the GWSP operates. To absorb these shocks, it supports the construction of facilities that are more resilient to climate extremes while still providing water.
History
In an effort to improve upon water and sanitation technology for impoverished nations, the World Bank and United Nations Development Programme (UNDP) founded the program in 1978.
In the 1980s, the program and its co-founder, the UNDP, devoted most of their efforts to testing cost-effective technologies such as hand pumps and latrines for future implementation. However, as other governments and organizations began developing systemic solutions and strategies for safe water and sanitation, the program followed suit and widened its scope of impact.
Beginning in the early 1990s, the World Bank Water and Sanitation Program worked on sustainable solutions that enable communities to provide water for themselves. Its main objectives were to create systems that could stay in operation and help communities become independent. By the end of the decade, the program had divided its efforts between field projects and research and evaluation of the world's water systems and practices.
Donors
The program is funded by several countries including Australia, Austria, Canada, Denmark, Finland, France, Ireland, Luxembourg, Netherlands, Norway, Sweden, Switzerland, United Kingdom, and United States, as well as by the Bill & Melinda Gates Foundation.
See also
WASH
Human right to water and sanitation
Water issues in developing countries
References
Sanitation
Water supply
World Bank
Water
Right to health | Global Water Security & Sanitation Partnership | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,149 | [
"Water",
"Hydrology",
"Water supply",
"Environmental engineering"
] |