id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
3,566,883 | https://en.wikipedia.org/wiki/Particle%20physics%20and%20representation%20theory | There is a natural connection between particle physics and representation theory, as first noted in the 1930s by Eugene Wigner. It links the properties of elementary particles to the structure of Lie groups and Lie algebras. According to this connection, the different quantum states of an elementary particle give rise to an irreducible representation of the Poincaré group. Moreover, the properties of the various particles, including their spectra, can be related to representations of Lie algebras, corresponding to "approximate symmetries" of the universe.
General picture
Symmetries of a quantum system
In quantum mechanics, any particular one-particle state is represented as a vector in a Hilbert space H. To help understand what types of particles can exist, it is important to classify the possibilities for H allowed by symmetries, and their properties. Let H be a Hilbert space describing a particular quantum system and let G be a group of symmetries of the quantum system. In a relativistic quantum system, for example, G might be the Poincaré group, while for the hydrogen atom, G might be the rotation group SO(3). The particle state is more precisely characterized by the associated projective Hilbert space P(H), also called ray space, since two vectors that differ by a nonzero scalar factor correspond to the same physical quantum state, represented by a ray in Hilbert space, which is an equivalence class in H and, under the natural projection map H → P(H), an element of P(H).
By definition of a symmetry of a quantum system, there is a group action of G on P(H). For each g ∈ G, there is a corresponding transformation V(g) of P(H). More specifically, if g is some symmetry of the system (say, rotation about the x-axis by 12°), then the corresponding transformation V(g) of P(H) is a map on ray space. For example, when rotating a stationary (zero momentum) spin-5 particle about its center, g is a rotation in 3D space (an element of SO(3)), while V(g) is an operator whose domain and range are each the space of possible quantum states of this particle, in this example the projective space associated with an 11-dimensional complex Hilbert space.
Each map V(g) preserves, by definition of symmetry, the ray product on P(H) induced by the inner product on H; according to Wigner's theorem, this transformation of P(H) comes from a unitary or anti-unitary transformation U(g) of H. Note, however, that the U(g) associated to a given V(g) is not unique, but only unique up to a phase factor. The composition of the operators U(g) should, therefore, reflect the composition law in G, but only up to a phase factor:

U(g)U(h) = exp(iφ(g, h)) U(gh),

where φ(g, h) will depend on g and h. Thus, the map sending g to U(g) is a projective unitary representation of G, or possibly a mixture of unitary and anti-unitary, if G is disconnected. In practice, anti-unitary operators are always associated with time-reversal symmetry.
Ordinary versus projective representations
It is important physically that in general the map g ↦ U(g) does not have to be an ordinary representation of G; it may not be possible to choose the phase factors in the definition of the U(g) to eliminate the phase factors in their composition law. An electron, for example, is a spin-one-half particle; its Hilbert space consists of wave functions on R^3 with values in a two-dimensional spinor space. The action of SO(3) on the spinor space is only projective: it does not come from an ordinary representation of SO(3). There is, however, an associated ordinary representation of the universal cover SU(2) of SO(3) on spinor space.
For many interesting classes of groups G, Bargmann's theorem tells us that every projective unitary representation of G comes from an ordinary representation of the universal cover of G. Actually, if H is finite dimensional, then regardless of the group G, every projective unitary representation of G comes from an ordinary unitary representation of the universal cover of G. If H is infinite dimensional, then to obtain the desired conclusion, some algebraic assumptions must be made on G (see below). In this setting the result is a theorem of Bargmann. Fortunately, in the crucial case of the Poincaré group, Bargmann's theorem applies. (See Wigner's classification of the representations of the universal cover of the Poincaré group.)
The requirement referred to above is that the Lie algebra of G does not admit a nontrivial one-dimensional central extension. This is the case if and only if the second cohomology group of the Lie algebra of G is trivial. In this case, it may still be true that the group admits a central extension by a discrete group. But extensions of G by discrete groups are covers of G. For instance, the universal cover of G is related to G through a quotient by a central subgroup, that subgroup being the center of the universal cover itself, isomorphic to the fundamental group of the covered group.
Thus, in favorable cases, the quantum system will carry a unitary representation of the universal cover of the symmetry group G. This is desirable because H is much easier to work with than the non-vector space P(H). If the representations of the universal cover can be classified, much more information about the possibilities and properties of H is available.
The Heisenberg case
An example in which Bargmann's theorem does not apply comes from a quantum particle moving in R^n. The group of translational symmetries of the associated phase space, R^2n, is the commutative group R^2n. In the usual quantum mechanical picture, the symmetry is not implemented by a unitary representation of R^2n. After all, in the quantum setting, translations in position space and translations in momentum space do not commute. This failure to commute reflects the failure of the position and momentum operators—which are the infinitesimal generators of translations in momentum space and position space, respectively—to commute. Nevertheless, translations in position space and translations in momentum space do commute up to a phase factor. Thus, we have a well-defined projective representation of R^2n, but it does not come from an ordinary representation of R^2n, even though R^2n is simply connected.
In this case, to obtain an ordinary representation, one has to pass to the Heisenberg group, which is a nontrivial one-dimensional central extension of R^2n.
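A finite-dimensional analogue can make this concrete. The following sketch (an illustration not drawn from the text; it assumes Python with NumPy) uses the standard "clock" and "shift" matrices, which play the roles of momentum-space and position-space translations on a discrete n-point space and commute only up to a phase, so that the map to operators is projective rather than an ordinary representation:

import numpy as np

n = 5
omega = np.exp(2j * np.pi / n)

# "Clock" matrix: finite analogue of a translation in momentum space
Z = np.diag(omega ** np.arange(n))
# "Shift" matrix: finite analogue of a translation in position space, X |k> = |k+1 mod n>
X = np.roll(np.eye(n), 1, axis=0)

# The two translations commute only up to a phase (Weyl commutation relation): Z X = omega * X Z
print(np.allclose(Z @ X, omega * (X @ Z)))   # True
print(np.allclose(Z @ X, X @ Z))             # False: the ordinary group law fails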
Poincaré group
The group of translations and Lorentz transformations form the Poincaré group, and this group should be a symmetry of a relativistic quantum system (neglecting general relativity effects, or in other words, in flat spacetime). Representations of the Poincaré group are in many cases characterized by a nonnegative mass and a half-integer spin (see Wigner's classification); this can be thought of as the reason that particles have quantized spin. (Note that there are in fact other possible representations, such as tachyons, infraparticles, etc., which in some cases do not have quantized spin or fixed mass.)
Other symmetries
While the spacetime symmetries in the Poincaré group are particularly easy to visualize and believe, there are also other types of symmetries, called internal symmetries. One example is color SU(3), an exact symmetry corresponding to the continuous interchange of the three quark colors.
Lie algebras versus Lie groups
Many (but not all) symmetries or approximate symmetries form Lie groups. Rather than study the representation theory of these Lie groups, it is often preferable to study the closely related representation theory of the corresponding Lie algebras, which are usually simpler to compute.
Now, representations of the Lie algebra correspond to representations of the universal cover of the original group. In the finite-dimensional case—and the infinite-dimensional case, provided that Bargmann's theorem applies—irreducible projective representations of the original group correspond to ordinary unitary representations of the universal cover. In those cases, computing at the Lie algebra level is appropriate. This is the case, notably, for studying the irreducible projective representations of the rotation group SO(3). These are in one-to-one correspondence with the ordinary representations of the universal cover SU(2) of SO(3). The representations of SU(2) are then in one-to-one correspondence with the representations of its Lie algebra su(2), which is isomorphic to the Lie algebra so(3) of SO(3).
Thus, to summarize, the irreducible projective representations of SO(3) are in one-to-one correspondence with the irreducible ordinary representations of its Lie algebra so(3). The two-dimensional "spin 1/2" representation of the Lie algebra so(3), for example, does not correspond to an ordinary (single-valued) representation of the group SO(3). (This fact is the origin of statements to the effect that "if you rotate the wave function of an electron by 360 degrees, you get the negative of the original wave function.") Nevertheless, the spin 1/2 representation does give rise to a well-defined projective representation of SO(3), which is all that is required physically.
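A quick numerical check of this statement can be made by exponentiating the spin-1/2 generator directly (a minimal sketch assuming Python with NumPy and SciPy; not part of the original text):

import numpy as np
from scipy.linalg import expm

# Pauli matrix sigma_z; the spin-1/2 generator is S_z = sigma_z / 2
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(theta):
    """Spin-1/2 representative of a rotation by angle theta about the z-axis."""
    return expm(-1j * theta * sigma_z / 2)

print(np.allclose(rotation(2 * np.pi), -np.eye(2)))  # True: a 360-degree rotation gives minus the identity
print(np.allclose(rotation(4 * np.pi), np.eye(2)))   # True: 720 degrees returns to the identity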
Approximate symmetries
Although the above symmetries are believed to be exact, other symmetries are only approximate.
Hypothetical example
As an example of what an approximate symmetry means, suppose an experimentalist lived inside an infinite ferromagnet, with magnetization in some particular direction. The experimentalist in this situation would find not one but two distinct types of electrons: one with spin along the direction of the magnetization, with a slightly lower energy (and consequently, a lower mass), and one with spin anti-aligned, with a higher mass. Our usual SO(3) rotational symmetry, which ordinarily connects the spin-up electron with the spin-down electron, has in this hypothetical case become only an approximate symmetry, relating different types of particles to each other.
General definition
In general, an approximate symmetry arises when there are very strong interactions that obey that symmetry, along with weaker interactions that do not. In the electron example above, the two "types" of electrons behave identically under the strong and weak forces, but differently under the electromagnetic force.
Example: isospin symmetry
An example from the real world is isospin symmetry, an SU(2) group corresponding to the similarity between up quarks and down quarks. This is an approximate symmetry: while up and down quarks are identical in how they interact under the strong force, they have different masses and different electroweak interactions. Mathematically, there is an abstract two-dimensional vector space in which the up quark and the down quark correspond to the two basis states,
and the laws of physics are approximately invariant under applying a determinant-1 unitary transformation (an element of SU(2)) to this space.
For example, the transformation that exchanges the two basis states would turn all up quarks in the universe into down quarks and vice versa. Some examples help clarify the possible effects of these transformations:
When these unitary transformations are applied to a proton, it can be transformed into a neutron, or into a superposition of a proton and neutron, but not into any other particles. Therefore, the transformations move the proton around a two-dimensional space of quantum states. The proton and neutron are called an "isospin doublet", mathematically analogous to how a spin-½ particle behaves under ordinary rotation.
When these unitary transformations are applied to any of the three pions (π+, π0, and π−), it can change any of the pions into any other, but not into any non-pion particle. Therefore, the transformations move the pions around a three-dimensional space of quantum states. The pions are called an "isospin triplet", mathematically analogous to how a spin-1 particle behaves under ordinary rotation.
These transformations have no effect at all on an electron, because it contains neither up nor down quarks. The electron is called an isospin singlet, mathematically analogous to how a spin-0 particle behaves under ordinary rotation.
In general, particles form isospin multiplets, which correspond to irreducible representations of the Lie algebra su(2). Particles in an isospin multiplet have very similar but not identical masses, because the up and down quarks are very similar but not identical.
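As an illustration of these multiplets (a hedged sketch assuming Python with NumPy; the helper name su2_irrep is hypothetical), the doublet and triplet correspond to the 2- and 3-dimensional irreducible representations of su(2), which can be built from the standard raising and lowering operators and checked against the commutation relation [Jx, Jy] = iJz:

import numpy as np

def su2_irrep(j):
    """Matrices of J_z, J_+, J_- in the (2j+1)-dimensional irrep, basis m = j, j-1, ..., -j."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)                    # eigenvalues of J_z
    Jz = np.diag(m).astype(complex)
    Jp = np.zeros((dim, dim), dtype=complex)  # raising operator J_+
    for k in range(1, dim):                   # J_+ |j, m> = sqrt(j(j+1) - m(m+1)) |j, m+1>
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return Jz, Jp, Jp.conj().T

# Isospin doublet (proton/neutron) <-> j = 1/2, isospin triplet (pions) <-> j = 1
for j in (0.5, 1.0):
    Jz, Jp, Jm = su2_irrep(j)
    Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j)
    # verify the su(2) commutation relation [Jx, Jy] = i Jz
    print(j, np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz))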
Example: flavour symmetry
Isospin symmetry can be generalized to flavour symmetry, an SU(3) group corresponding to the similarity between up quarks, down quarks, and strange quarks. This is, again, an approximate symmetry, violated by quark mass differences and electroweak interactions—in fact, it is a poorer approximation than isospin, because of the strange quark's noticeably higher mass.
Nevertheless, particles can indeed be neatly divided into groups that form irreducible representations of the Lie algebra su(3), as first noted by Murray Gell-Mann and independently by Yuval Ne'eman.
See also
Charge (physics)
Representation theory:
Of Lie algebras
Of Lie groups
Projective representation
Special unitary group
Notes
References
Coleman, Sidney (1985) Aspects of Symmetry: Selected Erice Lectures of Sidney Coleman. Cambridge Univ. Press. .
Georgi, Howard (1999) Lie Algebras in Particle Physics. Reading, Massachusetts: Perseus Books. .
Sternberg, Shlomo (1994) Group Theory and Physics. Cambridge Univ. Press. . Especially pp. 148–150.
Especially appendices A and B to Chapter 2.
External links
Lie algebras
Representation theory of Lie groups
Conservation laws
Quantum field theory | Particle physics and representation theory | [
"Physics"
] | 2,684 | [
"Quantum field theory",
"Equations of physics",
"Conservation laws",
"Quantum mechanics",
"Symmetry",
"Physics theorems"
] |
3,567,455 | https://en.wikipedia.org/wiki/Nuclear%20data | Nuclear data represents measured (or evaluated) probabilities of various physical interactions involving the nuclei of atoms. It is used to understand the nature of such interactions by providing the fundamental input to many models and simulations, such as fission and fusion reactor calculations, shielding and radiation protection calculations, criticality safety, nuclear weapons, nuclear physics research, medical radiotherapy, radioisotope therapy and diagnostics, particle accelerator design and operations, geological and environmental work, radioactive waste disposal calculations, and space travel calculations.
It groups all experimental data relevant for nuclear physics and nuclear engineering. It includes a large number of physical quantities, like scattering and reaction cross sections (which are generally functions of energy and angle), nuclear structure and nuclear decay parameters, etc. It can involve neutrons, protons, deuterons, alpha particles, and virtually all nuclear isotopes which can be handled in a laboratory.
There are two major reasons to need high-quality nuclear data: theoretical model development of nuclear physics, and applications involving radiation and nuclear power. There is often an interplay between these two aspects, since applications often motivate research in particular theoretical fields, and theory can be used to predict quantities or phenomena which can lead to new or improved technological concepts.
Nuclear Data Evaluations
To ensure a level of quality required to protect the public, experimental nuclear data results are occasionally evaluated by a Nuclear Data Organization to form a nuclear data library. These organizations review multiple measurements and agree upon the highest-quality measurements before publishing the libraries. For unmeasured or very complex data regimes, the parameters of nuclear models are adjusted until the resulting data matches well with critical experiments. The result of an evaluation is almost universally stored as a set of data files in Evaluated Nuclear Data File (ENDF) format. To keep the size of these files reasonable, they contain a combination of actual data tables and resonance parameters that can be reconstructed into pointwise data with specialized tools (such as NJOY).
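In application codes, the reconstructed pointwise data amount to tables of energy versus cross section that are interpolated according to a prescribed law. The following is a simplified sketch (hypothetical numbers; not the actual ENDF-6 format or the NJOY tool) of such a lookup using log-log interpolation:

import numpy as np

# Hypothetical pointwise cross-section table: energies in eV, cross sections in barns.
# Real evaluated files (ENDF-6 format) also carry interpolation laws, resonance
# parameters and uncertainties; this sketch only shows the basic lookup step.
energy_eV = np.array([1.0e-5, 0.0253, 1.0, 1.0e3, 1.0e6, 2.0e7])
sigma_barn = np.array([120.0, 38.0, 9.5, 6.1, 4.2, 2.9])

def cross_section(E):
    """Log-log interpolation of the cross section at energy E (a common choice for
    smooth regions; the evaluation itself prescribes which law to use)."""
    return np.exp(np.interp(np.log(E), np.log(energy_eV), np.log(sigma_barn)))

print(cross_section(0.0253))   # thermal point, returns the tabulated 38.0
print(cross_section(2.0e3))    # interpolated between the 1 keV and 1 MeV points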
Nuclear Data Organizations
The International Network of Nuclear Reaction Data Centres (NRDC) constitutes a worldwide cooperation of nuclear data centres under the auspices of the International Atomic Energy Agency. The Network was established to coordinate the worldwide collection, compilation and dissemination of nuclear reaction data.
The Cross Section Evaluation Working Group (CSEWG) is the National Nuclear Data Organization of the United States and Canada. This is a cooperative effort of the national laboratories, industry, and universities that produces the ENDF/B file.
The Joint Evaluated Fission and Fusion (JEFF) organization consists of members of the Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD). They produce the JEFF file, which is also in the universal ENDF format.
The Japanese Nuclear Data Committee (JNDC) handles the Japanese Evaluated Nuclear Data Library (JENDL). This effort is coordinated through the Nuclear Data Center at the Japan Atomic Energy Agency (JAEA).
Releases of ENDF/B Files
The historical releases of ENDF/B files are summarized below.
The historical releases of JEFF files are summarized below.
See also
Nuclear reaction
Nuclear force
External links
XSPlot an online nuclear cross section plotter
International Network of Nuclear Reaction Data Centres (NRDC)
Introduction to the ENDF Formats
IAEA Nuclear Data Section: Nuclear Data Services
National Nuclear Data Center: Brookhaven National Laboratory
JAEA Nuclear Data Center
Joint Evaluated Fission and Fusion File (JEFF)
Cross Section Evaluation Working Group (CSEWG)
Data Formats for ENDF-6
T-2 Nuclear Information Service
JANIS NEA Nuclear Data Information System
References
Data, nuclear | Nuclear data | [
"Physics"
] | 723 | [
"Nuclear physics"
] |
3,567,920 | https://en.wikipedia.org/wiki/Ducrete | DUCRETE (Depleted Uranium Concrete) is a high density concrete alternative investigated for use in construction of casks for storage of radioactive waste. It is a composite material containing depleted uranium dioxide aggregate instead of conventional gravel, with a Portland cement binder.
Background and development
In 1993, the United States Department of Energy Office of Environmental Management initiated investigation into the potential use of depleted uranium in heavy concretes. The aim of this investigation was to simultaneously find an application for depleted uranium and to create a new and more efficient method for the storage and transportation of spent nuclear fuels. The material was first conceived at the Idaho National Engineering and Environmental Laboratory (INEEL) by W. Quapp and P. Lessing, who jointly developed the processes behind the material and were awarded both U.S. and foreign patents in 1998 and 2000, respectively.
Description
DUCRETE is a kind of concrete that replaces the standard coarse aggregate with a depleted uranium ceramic material. All of the other materials present in DUCRETE (Portland cement, sand and water) are used in the same volumetric ratio used for ordinary concrete. This ceramic material is a very efficient shielding material since it presents both high atomic number (uranium) for gamma shielding, and low atomic number (water bonded in the concrete) for neutron shielding. There exists an optimum uranium-to-binder ratio for a combined attenuation of gamma and neutron radiation at a given wall thickness. A balance needs to be established between the attenuation of the gamma flux in the Depleted Uranium Oxide (DUO2) and the cement phase with water to attenuate the neutron flux.
The key to effective shielding with depleted uranium ceramic concrete is maximum uranium oxide density. Unfortunately, the densest depleted uranium oxide is also the most chemically unstable. DUO2 has a maximum theoretical density of 10.5 g/cm3 at 95% purity. However, under oxidation conditions, this material readily transforms into the more stable depleted uranium trioxide (DUO3) or depleted triuranium octoxide (DU3O8). Thus, if bare UO2 aggregate is used, these transitions can result in an expansion that may generate stresses that could crack the material, lowering its compressive strength. Another limitation on the direct use of depleted uranium dioxide fine powder is that concretes depend on their coarse aggregates to carry compressive stresses. In order to overcome these issues, DUAGG was developed.
DUAGG (depleted uranium aggregate) is the term applied to the stabilized DUO2 ceramic. This consists of sintered DUO2 particles with a silicate-based coating that covers the surfaces and fills the spaces between the grains, acting as an oxygen barrier, as well as corrosion and leach resistance. DUAGG has a density up to 8.8 g/cm3 and replaces the conventional aggregate in concrete, producing concrete with a density of 5.6 to 6.4 g/cm3, compared to 2.3 g/cm3 for conventional concrete.
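As a rough, back-of-the-envelope sketch of how the densities quoted above combine (the volume fractions below are illustrative assumptions, not sourced values):

# Rule-of-mixtures estimate of concrete density from component densities (g/cm^3)
# and volume fractions. The fractions are illustrative assumptions only.
def mix_density(components):
    return sum(frac * rho for frac, rho in components)

conventional = [(0.50, 2.7),   # gravel/crushed-stone aggregate
                (0.22, 2.6),   # sand
                (0.11, 3.15),  # Portland cement
                (0.14, 1.0),   # water
                (0.03, 0.0)]   # entrained air

ducrete = [(0.50, 8.8),        # DUAGG aggregate replaces the gravel at the same volume fraction
           (0.22, 2.6),
           (0.11, 3.15),
           (0.14, 1.0),
           (0.03, 0.0)]

print(round(mix_density(conventional), 2))  # ~2.41 g/cm^3, near ordinary concrete (~2.3 quoted above)
print(round(mix_density(ducrete), 2))       # ~5.46 g/cm^3, approaching the 5.6-6.4 range quoted for DUCRETE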
DUCRETE also presents environmentally friendly properties. The table below shows the effectiveness of converting depleted uranium into concrete, since potential leaching is greatly decreased. The leach test used was the EPA Toxicity Characteristic Leaching Procedure (TCLP), which is used to assess heavy metal risks to the environment.
Production
U.S. method
DUCRETE is produced by mixing a DUO2 aggregate with Portland cement. DU is a by-product of the enrichment of uranium for use in nuclear power generation and other fields. DU usually comes bonded with fluorine, as uranium hexafluoride. This compound is highly reactive and cannot be used in DUCRETE. Uranium hexafluoride must therefore be oxidized into triuranium octoxide and uranium trioxide. These compounds are then converted to UO2 (uranium dioxide) through the addition of hydrogen gas. The UO2 is then dried, crushed, and milled into a uniform sediment. This is then converted into small inch-long briquettes through the use of high pressure. The low-atomic-number binder is then added and undergoes pyrolysis. The compound then undergoes liquid-phase sintering at 1300 °C until the desired density is achieved, usually around 8.9 g/cm3. The briquettes are then crushed and gap sorted and are now ready to be mixed into DUCRETE.
VNIINM (Russian) method
The VNIINM method is very similar to the U.S. method except it does not gap sort the binder and UO2 after it is crushed.
Applications
After processing, DUCRETE composite may be used in container vessels, shielding structures, and containment storage areas, all of which can be used to store radioactive waste. The primary implementation of this material is within a dry cask storage system for high level waste (HLW) and spent nuclear fuel (SNF). In such a system, the composite would be the primary component used to shield radiation from workers and the public. Cask systems made from DUCRETE are smaller and lighter in weight than casks made from conventional materials, such as traditional concrete. DUCRETE containers need only be about 1/3 as thick to provide the same degree of radiation shielding as concrete systems.
Analysis has shown that DUCRETE is more cost effective than conventional materials. The cost for the production of casks made with DUCRETE is low when compared with other shielding materials such as steel, lead and DU metal, since less material is required as a consequence of a higher density. In a study by Duke Engineering at a nuclear waste facility at Savannah River, the DUCRETE cask system was evaluated at a lower cost than an alternative Glass Waste storage building. However, disposal of the DUCRETE was not considered. Since DUCRETE is a low level radioactive composite, its relatively expensive disposal could decrease the cost effectiveness of such systems. An alternative to such disposal is the use of empty DUCRETE casks as a container for high activity low-level waste.
While DUCRETE shows potential for future nuclear waste programs, such concepts are far from utilization. So far, no DUCRETE cask systems have been licensed in the U.S.
References
External links
http://web.ead.anl.gov/uranium/pdf/IHLWM_Dole_paper.pdf
http://web.ead.anl.gov/uranium/pdf/DUCRETEIntroductionJune2003.pdf
http://web.ead.anl.gov/uranium/pdf/ducretecosteffec.pdf
http://web.ead.anl.gov/uranium/pdf/Global99Paper2.pdf
Concrete
Radioactive waste | Ducrete | [
"Chemistry",
"Technology",
"Engineering"
] | 1,382 | [
"Structural engineering",
"Environmental impact of nuclear power",
"Hazardous waste",
"Radioactivity",
"Concrete",
"Radioactive waste"
] |
12,918,089 | https://en.wikipedia.org/wiki/Supernova%20Legacy%20Survey | The Supernova Legacy Survey Program is a project designed to investigate dark energy, by detecting and monitoring approximately 2000 high-redshift supernovae between 2003 and 2008, using MegaPrime, a large CCD mosaic at the Canada-France-Hawaii Telescope. It also carries out detailed spectroscopy of a subsample of distant supernovae.
References
External links
SuperNova Legacy Survey experiment record on INSPIRE-HEP
Astronomical surveys
Observational astronomy | Supernova Legacy Survey | [
"Astronomy"
] | 91 | [
"Astronomical surveys",
"Observational astronomy",
"Astronomy stubs",
"Works about astronomy",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
12,918,103 | https://en.wikipedia.org/wiki/Cenarchaeum | Cenarchaeum is a monotypic genus of archaeans in the family Cenarchaeaceae. The marine archaean Cenarchaeum symbiosum is psychrophilic and is found inhabiting marine sponges. Cenarchaeum symbiosum was initially detected as a major symbiotic microorganism living within (it is an endosymbiont of) the sponge Axinella mexicana. It has been ubiquitously detected in the world oceans at lower abundances, while in some genera of marine sponges it is one of the most abundant microbiome members. Its genome sequence and diversity has been investigated in detail finding unique metabolic products and its role in ammonia-oxidizing activities.
Genome
The genome of C. symbiosum is estimated to be 2.02 million bp in length, with a predicted 2,011 genes.
Ecology
Cenarchaeum symbiosum is a psychrophilic organism capable of surviving and proliferating at low temperatures, usually ranging from 7 to 19 °C. C. symbiosum has a symbiotic relationship with certain sponge species, usually living at depths of 10–20 meters, typically near California.
References
Further reading
Archaea genera
Thermoproteota
Enigmatic archaea taxa | Cenarchaeum | [
"Biology"
] | 272 | [
"Archaea",
"Archaea stubs"
] |
12,918,117 | https://en.wikipedia.org/wiki/Smart%20ligand | Smart ligands are affinity ligands selected with pre-defined equilibrium (), kinetic (, ) and thermodynamic (ΔH, ΔS) parameters of biomolecular interaction.
Ligands with desired parameters can be selected from large combinatorial libraries of biopolymers using instrumental separation techniques with well-described kinetic behaviour, such as kinetic capillary electrophoresis (KCE), surface plasmon resonance (SPR), microscale thermophoresis (MST), etc. Known examples of smart ligands include DNA smart aptamers; however, RNA and peptide smart aptamers can also be developed.
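A toy sketch of what selection against pre-defined parameters can look like (hypothetical rate constants and thresholds; not an actual selection protocol): the equilibrium dissociation constant follows from the kinetic constants as Kd = koff/kon, and candidates are kept only if they fall within the pre-defined window.

# Hypothetical aptamer candidates with measured on/off rate constants.
# k_on in 1/(M*s), k_off in 1/s; Kd (equilibrium dissociation constant) = k_off / k_on.
candidates = {
    "apt-01": {"k_on": 1.0e6, "k_off": 1.0e-3},
    "apt-02": {"k_on": 5.0e5, "k_off": 5.0e-2},
    "apt-03": {"k_on": 2.0e6, "k_off": 4.0e-4},
}

# Pre-defined selection window: Kd at most 2 nM and residence time (1/k_off) of at least 500 s.
def is_smart(params, kd_max=2e-9, k_off_max=2e-3):
    kd = params["k_off"] / params["k_on"]
    return kd <= kd_max and params["k_off"] <= k_off_max

selected = [name for name, p in candidates.items() if is_smart(p)]
print(selected)   # ['apt-01', 'apt-03'] under these illustrative thresholds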
Smart ligands can find a set of unique applications in biomedical research, drug discovery and proteomic studies. For example, a panel of DNA smart aptamers has been recently used to develop affinity analysis of proteins with ultra-wide dynamic range of measured concentrations.
References
External links
Extending Protein Detection
Biotechnology
Ligands (biochemistry)
Nucleic acids
Proteomics | Smart ligand | [
"Chemistry",
"Biology"
] | 211 | [
"Biomolecules by chemical classification",
"Biotechnology",
"Signal transduction",
"Ligands (biochemistry)",
"nan",
"Nucleic acids"
] |
12,918,181 | https://en.wikipedia.org/wiki/Nitrosopumilus | Nitrosopumilus is a genus of archaea. The type species, Nitrosopumilus maritimus, is an extremely common archaeon living in seawater. It is the first member of the Group 1a Nitrososphaerota (formerly Thaumarchaeota) to be isolated in pure culture. Gene sequences suggest that the Group 1a Nitrososphaerota are ubiquitous with the oligotrophic surface ocean and can be found in most non-coastal marine waters around the planet. It is one of the smallest living organisms at 0.2 micrometers in diameter. Cells in the species N. maritimus are shaped like peanuts and can be found both as individuals and in loose aggregates. They oxidize ammonia to nitrite and members of N. maritimus can oxidize ammonia at levels as low as 10 nanomolar, near the limit to sustain its life. Archaea in the species N. maritimus live in oxygen-depleted habitats. Oxygen needed for ammonia oxidation might be produced by novel pathway which generates oxygen and dinitrogen. N. maritimus is thus among organisms which are able to produce oxygen in dark.
This organism was isolated from sediment in a tropical tank at the Seattle Aquarium by a group led by David Stahl (University of Washington).
Biology
Lipid membranes
Populations of N. maritimus are probably the main source of glycerol dialkyl glycerol tetraethers (GDGTs) in the ocean; these compounds, together with crenarchaeol, constitute their monolayer lipidic cell membranes as intact polar lipids (IPLs). This membrane structure is thought to maximise proton motive force. The compounds found in the membrane of these organisms, such as GDGTs, IPLs, and crenarchaeol, can be useful as biomarkers for the presence of organisms belonging to the Nitrososphaerota group in the water column. These archaea have also been found to change their membrane's composition in relation to temperature (by GDGT cyclization), growth, metabolic status and, though less dramatically, pH.
Cell division
All known Archaea duplicate by cell division. Euryarchaeota and Bacteria use the FtsZ mechanism in cell division, while Thermoproteota divide using the Cdv machinery. However, Nitrososphaerota such as N. maritimus adopt both mechanisms, FtsZ and Cdv. Nevertheless, further research found that N. maritimus relies mainly on Cdv proteins rather than FtsZ during cell division; in this case, Cdv is the primary system for cell division. N. maritimus takes 15 to 18 hours to replicate its 1.645 Mb genome.
Physiology
Genome
Ammonia-oxidizing bacteria (AOB) are known to grow chemolithoautotrophically by using inorganic carbon; N. maritimus, an ammonia-oxidizing archaeon (AOA), uses a similar process of growth. While AOB use the Calvin–Bassham–Benson cycle with the CO2-fixing enzyme ribulose bisphosphate carboxylase/oxygenase (RubisCO) as the key enzyme, N. maritimus appears to grow by an alternative pathway, since it lacks the corresponding genes and enzymes. Therefore, a variant of the 3-hydroxypropionate/4-hydroxybutyrate cycle is used by N. maritimus to grow autotrophically, which gives it the capacity to assimilate inorganic carbon. Using the 3-hydroxypropionate/4-hydroxybutyrate pathway instead of the Calvin cycle could provide a growth advantage, as the process is more energy-efficient. Because of this distinctive metabolism, N. maritimus plays an essential role in the carbon and nitrogen cycles.
Ammonia oxidation
The isolation and sequencing of the N. maritimus genome have extended insight into the physiology of the organisms belonging to the Nitrososphaerota group. N. maritimus was the first archaeon with an ammonia-oxidizing metabolism to be studied. This organism is common in the marine environment, especially at the bottom of the photic zone, where the amounts of ammonium and iron are sufficient to support its growth. The physiology of N. maritimus remains unclear under certain aspects. It conserves energy for its vital functions from the oxidation of ammonia (NH3) and the reduction of oxygen (O2), with the formation of nitrite. CO2 is the carbon source; it is fixed and assimilated by the microorganism through the 3-hydroxypropionate/4-hydroxybutyrate carbon cycle.
N. maritimus carries out the first step of nitrification, playing a key role in the nitrogen cycle along the water column. Since this oxidizing reaction releases only a small amount of energy, the growth of this microorganism is slow. N. maritimus's genome includes the amoA gene, encoding the ammonia monooxygenase (AMO) enzyme. The latter allows the oxidation of ammonia to hydroxylamine (NH2OH). Instead, the genome lacks the gene encoding hydroxylamine oxidoreductase (HAO), responsible for oxidizing the intermediate (NH2OH) to nitrite. The hydroxylamine is produced as a metabolite, and it is immediately consumed during the metabolic reaction. Other intermediates produced during this metabolic pathway are nitric oxide (NO), nitrous oxide (N2O) and nitroxyl (HNO). These are toxic at high concentration. The enzyme responsible for oxidizing the hydroxylamine to nitrite is not yet well known.
Two hypotheses are suggested for the metabolic pathway of N. maritimus; they involve two types of enzymes: the copper-based enzyme (Cu-ME) and the nitrite reductase enzyme (nirK), which can also operate in reverse:
In the first, ammonia is oxidized through AMO, forming hydroxylamine; the latter, plus a molecule of nitric oxide, are in turn oxidized by a copper-based enzyme (Cu-ME), producing two molecules of nitrite. One of these is reduced to NO by the nitrite reductase (nirK) and goes back to the Cu-ME enzyme. An electron translocation occurs, producing a proton motive force (PMF) and allowing ATP synthesis.
In the second, ammonia is oxidized through AMO to form hydroxylamine, and then the two enzymes, nirK and Cu-ME, oxidize the hydroxylamine to nitric oxide and this to nitrite. The precise roles of these enzymes, and the order in which they act, remain to be clarified.
The S-layer of N. maritimus is found to form multiple layers of channels that allow ammonium (NH4+) cations to flow through.
Additionally, nitrous oxide is released by this type of metabolism. It is an important greenhouse gas that is likely produced as a result of abiotic denitrification of metabolites.
Taxonomy
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).
Incertae sedis:
"Ca. Nitrosopumilus brisbanensis" Prabhu et al. 2024
"Nitrosopumilus cymbastelae" corrig. Zhang et al. 2019
"Nitrosopumilus detritiferus" Zhang et al. 2019
"Nitrosopumilus hexadellae" corrig. Zhang et al. 2019
Ecology
Habitats
Characteristic of the Nitrososphaerota phylum, N. maritimus is mainly found in the oligotrophic (nutrient-poor) open ocean, within the pelagic zone. Initially discovered in an aquarium in Seattle, N. maritimus is today known to populate numerous environments such as the subtropical North Pacific and South Atlantic Oceans or the mesopelagic zone of the Pacific Ocean. N. maritimus is an aerobic archaeon able to grow even with an extremely low concentration of nutrients, such as in the dark deep open ocean, where it has an important impact.
Contributions
Nitrification of the ocean
Members of the species N. maritimus can oxidize ammonia to form nitrite, which is the first step of the nitrogen cycle. Ammonia and nitrate are the two nutrients which form the inorganic pool of nitrogen. In poor environments (lacking organic energy sources and sunlight), the oxidation of ammonia could contribute to primary productivity. In fact, nitrate fuels half of the primary production of phytoplankton, and phytoplankton are not the only organisms that need nitrate. Its high affinity for ammonia allows N. maritimus to compete effectively with other marine phototrophs and chemotrophs. Its ammonium turnover per unit biomass is estimated to be around 5 times higher than that of oligotrophic heterotrophs, and around 30 times higher than that of most known oligotrophic diatoms. Taken together, these observations indicate that nitrification by N. maritimus plays a key role in the marine nitrogen cycle.
Carbon and phosphorus implications
Its ability to fix inorganic carbon via an alternative pathway (the 3-hydroxypropionate/4-hydroxybutyrate pathway) allows N. maritimus to contribute efficiently to the global carbon budget. Coupled with the ammonia-oxidizing pathway, N. maritimus and the other marine thaumarchaea recycle approximately 4.5% of the organic carbon mineralized in the oceans and transform about 4.3% of detrital phosphorus into new phosphorus substances.
See also
List of Archaea genera
References
Further reading
Archaea genera
Candidatus taxa
Marine microorganisms | Nitrosopumilus | [
"Biology"
] | 2,099 | [
"Marine microorganisms",
"Microorganisms"
] |
1,263,271 | https://en.wikipedia.org/wiki/Europium%28III%29%20chloride | Europium(III) chloride is an inorganic compound with the formula EuCl3. The anhydrous compound is a yellow solid. Being hygroscopic it rapidly absorbs water to form a white crystalline hexahydrate, EuCl3·6H2O, which is colourless. The compound is used in research.
Preparation
Treating Eu2O3 with aqueous HCl produces hydrated europium chloride (EuCl3·6H2O). This salt cannot be rendered anhydrous by heating. Instead one obtains an oxychloride.
Anhydrous EuCl3 is often prepared by the "ammonium chloride route," starting from either Eu2O3 or hydrated europium chloride (EuCl3·6H2O) by heating carefully to 230 °C. These methods produce (NH4)2[EuCl5]:
10 NH4Cl + Eu2O3 → 2 (NH4)2[EuCl5] + 6 NH3 + 3 H2O
EuCl3·6H2O + 2 NH4Cl → (NH4)2[EuCl5] + 6 H2O
The pentachloride decomposes thermally according to the following equation:
(NH4)2[EuCl5] → 2 NH4Cl + EuCl3
The thermolysis reaction proceeds via the intermediary of (NH4)[Eu2Cl7].
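A small worked mass balance for the first equation above, using standard molar masses (an illustrative calculation, not part of the original text):

# Stoichiometry of 10 NH4Cl + Eu2O3 -> 2 (NH4)2[EuCl5] + 6 NH3 + 3 H2O
M = {"Eu": 151.96, "O": 16.00, "N": 14.01, "H": 1.008, "Cl": 35.45}  # molar masses in g/mol

M_Eu2O3 = 2 * M["Eu"] + 3 * M["O"]
M_NH4Cl = M["N"] + 4 * M["H"] + M["Cl"]

grams_Eu2O3 = 10.0
mol_Eu2O3 = grams_Eu2O3 / M_Eu2O3
# 10 mol of NH4Cl are consumed per mol of Eu2O3, per the balanced equation
grams_NH4Cl = 10 * mol_Eu2O3 * M_NH4Cl
print(round(M_Eu2O3, 2), round(M_NH4Cl, 2))   # ~351.92 and ~53.49 g/mol
print(round(grams_NH4Cl, 1))                  # ~15.2 g of NH4Cl per 10 g of Eu2O3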
Reactions
Europium(III) chloride is a precursor to other europium compounds. It can be converted to the corresponding metal bis(trimethylsilyl)amide via salt metathesis with lithium bis(trimethylsilyl)amide. The reaction is performed in THF and requires a period at reflux.
EuCl3 + 3 LiN(SiMe3)2 → Eu(N(SiMe3)2)3 + 3 LiCl
Eu(N(SiMe3)2)3 is a starting material for the more complicated coordination complexes.
Reduction with hydrogen gas with heating gives EuCl2. The latter has been used to prepare organometallic compounds of europium(II), such as bis(pentamethylcyclopentadienyl)europium(II) complexes. Europium(III) chloride can be used as a starting point for the preparation of other europium salts.
Structure
In the solid state, it crystallises in the UCl3 motif. The Eu centres are nine-coordinate.
Bibliography
References
Chlorides
Europium(III) compounds
Lanthanide halides | Europium(III) chloride | [
"Chemistry"
] | 546 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
1,263,365 | https://en.wikipedia.org/wiki/Acetylide | In chemistry, an acetylide is a compound that can be viewed as the result of replacing one or both hydrogen atoms of acetylene (ethyne) by metallic or other cations. Calcium carbide is an important industrial compound, which has long been used to produce acetylene for welding and illumination. It is also a major precursor to vinyl chloride. Other acetylides are reagents in organic synthesis.
Nomenclature
The term acetylide is used loosely. It applies to derivatives of an acetylene RC≡CH, where R = H or a side chain that is usually organic. The nomenclature can be ambiguous with regards to the distinction between compounds of the type MC2R and M2C2. When both hydrogens of acetylene are replaced by metals, the compound can also be called a carbide, e.g. calcium carbide CaC2. When only one hydrogen atom is replaced, the anion may be called hydrogen acetylide or the prefix mono- may be attached to the metal, as in monosodium acetylide NaC≡CH. An acetylide may be a salt (ionic compound) containing the anion HC≡C−, RC≡C−, or C2^2−, as in sodium acetylide or cobalt acetylide. Other acetylides have the metal bound to the carbon atom(s) by covalent bonds, being therefore coordination or organometallic compounds.
Ionic acetylides
Alkali metal and alkaline earth metal acetylides of the general formula MC≡CM are salt-like Zintl phase compounds, containing C2^2− ions. Evidence for this ionic character can be seen in the ready hydrolysis of these compounds to form acetylene and metal hydroxides, and by solubility in liquid ammonia with solvated C2^2− ions.
The C2^2− ion has a closed-shell ground state of 1Σ, making it isoelectronic to the neutral molecule N2, which may afford it some gas-phase stability.
Organometallic acetylides
Some acetylides, particularly those of transition metals, show evidence of covalent character, e.g. they are neither dissolved nor decomposed by water and they undergo radically different chemical reactions. That seems to be the case for silver acetylide and copper acetylide, for example.
In the absence of additional ligands, metal acetylides adopt polymeric structures wherein the acetylide groups are bridging ligands.
Preparation
Of the type MC2R
Acetylene and terminal alkynes are weak acids:
RC≡CH + R″M ⇌ R″H + RC≡CM
Monopotassium and monosodium acetylide can be prepared by reacting acetylene with bases like sodium amide or with the elemental metals, often at room temperature and atmospheric pressure.
Copper(I) acetylide can be prepared by passing acetylene through an aqueous solution of copper(I) chloride because of a low solubility equilibrium. Similarly, silver acetylides can be obtained from silver nitrate.
In organic synthesis, acetylides are usually prepared by treating acetylene and alkynes with organometallic or inorganic bases. Classically, liquid ammonia was used for deprotonations, but ethers are now more commonly used.
Lithium amide, LiHMDS, or organolithium reagents, such as butyllithium (BuLi), are frequently used to form lithium acetylides:

RC≡CH + BuLi → RC≡CLi + BuH
Of the type M2C2 and CaC2
Calcium carbide is prepared industrially by heating carbon with lime (calcium oxide) at approximately 2,000 °C. A similar process can be used to produce lithium carbide.
Dilithium acetylide, Li2C2, competes with the preparation of the monolithium derivative LiC2H.
Reactions
Ionic acetylides are typically decomposed by water with evolution of acetylene:
M2C2 + 2 H2O → 2 MOH + C2H2
MC≡CH + H2O → MOH + C2H2
Acetylides of the type RC2M are widely used in alkynylations in organic chemistry. They are nucleophiles that add to a variety of electrophilic and unsaturated substrates.
A classic application is the Favorskii reaction, such as in the sequence shown below. Here ethyl propiolate is deprotonated by n-butyllithium to give the corresponding lithium acetylide. This acetylide adds to the carbonyl center of cyclopentanone. Hydrolysis liberates the alkynyl alcohol.
The dimerization of acetylene to vinylacetylene proceeds by insertion of acetylene into a copper(I) acetylide complex.
Coupling reactions
Acetylides are sometimes used as intermediates in coupling reactions. Examples include Sonogashira coupling, Cadiot-Chodkiewicz coupling, Glaser coupling and Eglinton coupling.
Hazards
Some acetylides are notoriously explosive. Formation of acetylides poses a risk in the handling of gaseous acetylene in the presence of metals such as mercury, silver or copper, or of alloys with a high content of these metals (brass, bronze, silver solder).
See also
Ethynyl
Ethynyl radical
Diatomic carbon (neutral C2)
Acetylenediol
References
Anions
Functional groups | Acetylide | [
"Physics",
"Chemistry"
] | 1,078 | [
"Ions",
"Functional groups",
"Matter",
"Anions"
] |
1,263,624 | https://en.wikipedia.org/wiki/Interaction%20point | In particle physics, an interaction point (IP) is the place where particles collide in an accelerator experiment. The nominal interaction point is the design position, which may differ from the real or physics interaction point, where the particles actually collide. A related, but distinct, concept is the primary vertex: the reconstructed location of an individual particle collision.
For fixed target experiments, the interaction point is the point where beam and target interact. For colliders, it is the place where the beams interact.
Experiments (detectors) at particle accelerators are built around the nominal interaction points of the accelerators. The whole region around the interaction point (the experimental hall) is called an interaction region.
Particle colliders such as LEP, HERA, RHIC, Tevatron and LHC can host several interaction regions and therefore several experiments taking advantage of the same beam.
References
Accelerator physics
Experimental particle physics | Interaction point | [
"Physics"
] | 184 | [
"Applied and interdisciplinary physics",
"Experimental physics",
"Particle physics",
"Experimental particle physics",
"Accelerator physics"
] |
1,264,239 | https://en.wikipedia.org/wiki/Reaction%20coordinate | In chemistry, a reaction coordinate is an abstract one-dimensional coordinate chosen to represent progress along a reaction pathway. Where possible it is usually a geometric parameter that changes during the conversion of one or more molecular entities, such as bond length or bond angle. For example, in the homolytic dissociation of molecular hydrogen, an apt choice would be the coordinate corresponding to the bond length. Non-geometric parameters such as bond order are also used, but such direct representation of the reaction process can be difficult, especially for more complex reactions.
In computer simulations, collective variables are employed for a target-oriented sampling approach. Plain simulations fail to capture so-called rare events, because these do not occur within realistic computation times. This often stems from energy barriers separating the reactants from the products, or any two states of interest, that are too high. A collective variable is, as the name states, a collection of individual variables (x_i) contracted into one:

s = A x,

with A a transformation matrix. The collective variables reduce many variables to a lower-dimensional set that still describes the crucial characteristics of the system. Many collective variables then span the reaction coordinate through a continuous function f:

q = f(s_1, s_2, ..., s_n).

An example is the complexation of two molecules. The distance between them is the collective variable, the atomic positions are the individual variables, and the reaction coordinate would be the full path of association and dissociation. By applying a bias to the collective variables, the simulation can be 'steered' towards the desired destination. These kinds of simulations are called enhanced-sampling simulations.
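A minimal sketch of such a distance collective variable (assuming Python with NumPy; not tied to any particular simulation package), reducing the full set of atomic coordinates to a single number that an enhanced-sampling method could bias:

import numpy as np

def com_distance(positions_a, masses_a, positions_b, masses_b):
    """Collective variable for a complexation event: the distance between the
    centres of mass of two molecules, computed from the individual atomic
    positions (the underlying high-dimensional variables)."""
    com_a = np.average(positions_a, axis=0, weights=masses_a)
    com_b = np.average(positions_b, axis=0, weights=masses_b)
    return np.linalg.norm(com_a - com_b)

# Toy example: a diatomic "molecule" A and a single-atom "molecule" B (arbitrary units).
pos_A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
m_A = np.array([12.0, 16.0])
pos_B = np.array([[5.0, 0.0, 0.0]])
m_B = np.array([40.0])

print(com_distance(pos_A, m_A, pos_B, m_B))   # one number summarising nine coordinates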
Special collective variables that help to distinguish reactants from products are also known as order parameters, terminology that originates in work on phase transitions. Reaction coordinates are special order parameters that describe the entire pathway from reactants through transition states and on to products. Depending on the application, reaction coordinates may be defined by using chemically intuitive variables like bond lengths, or splitting probabilities (also called committors), or using the eigenfunction corresponding to the reactant-to-product transition as a progress coordinate.
A reaction coordinate parameterizes reaction process at the level of the molecular entities involved. It differs from extent of reaction, which measures reaction progress in terms of the composition of the reaction system.
(Free) energy is often plotted against reaction coordinate(s) to demonstrate in schematic form the potential energy profile (an intersection of a potential energy surface) associated with the reaction.
In the formalism of transition-state theory the reaction coordinate for each reaction step is one of a set of curvilinear coordinates obtained from the conventional coordinates for the reactants, and leads smoothly among configurations, from reactants to products via the transition state. It is typically chosen to follow the path defined by potential energy gradient – shallowest ascent/steepest descent – from reactants to products.
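The steepest-descent construction can be illustrated on a made-up two-dimensional potential energy surface (the potential, parameters and step sizes below are purely illustrative assumptions):

import numpy as np

def V(p):
    """Illustrative 2-D model potential: a double well in x, weakly coupled to y."""
    x, y = p
    return (x**2 - 1.0)**2 + y**2 + 0.3 * x * y

def grad_V(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0) + 0.3 * y, 2.0 * y + 0.3 * x])

def steepest_descent(p0, step=1e-3, tol=1e-8, max_iter=200_000):
    """Follow -grad(V) until a minimum is reached; the visited points trace one
    branch of the reaction path on this model surface."""
    p = np.array(p0, dtype=float)
    for _ in range(max_iter):
        g = grad_V(p)
        if np.linalg.norm(g) < tol:
            break
        p -= step * g
    return p

saddle = np.array([0.0, 0.0])                        # transition state of this model surface
products = steepest_descent(saddle + [1e-3, 0.0])    # nudge along the unstable mode
reactants = steepest_descent(saddle - [1e-3, 0.0])
print(reactants, products)                           # the two wells connected through the saddle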
Notes and references
Physical chemistry
Quantum chemistry
Theoretical chemistry
Computational chemistry
Molecular physics
Chemical kinetics | Reaction coordinate | [
"Physics",
"Chemistry"
] | 588 | [
"Quantum chemistry stubs",
"Chemical reaction engineering",
"Quantum chemistry",
"Molecular physics",
"Applied and interdisciplinary physics",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
"Computational chemistry",
" molecular",
"nan",
"Atomic",
"Chemical kine... |
1,266,110 | https://en.wikipedia.org/wiki/Smoothed-particle%20hydrodynamics | Smoothed-particle hydrodynamics (SPH) is a computational method used for simulating the mechanics of continuum media, such as solid mechanics and fluid flows. It was developed by Gingold and Monaghan and Lucy in 1977, initially for astrophysical problems. It has been used in many fields of research, including astrophysics, ballistics, volcanology, and oceanography. It is a meshfree Lagrangian method (where the co-ordinates move with the fluid), and the resolution of the method can easily be adjusted with respect to variables such as density.
Method
Advantages
By construction, SPH is a meshfree method, which makes it ideally suited to simulate problems dominated by complex boundary dynamics, like free surface flows, or large boundary displacement.
The lack of a mesh significantly simplifies the model implementation and its parallelization, even for many-core architectures.
SPH can be easily extended to a wide variety of fields, and hybridized with some other models, as discussed in Modelling Physics.
As discussed in section on weakly compressible SPH, the method has great conservation features.
The computational cost of SPH simulations per number of particles is significantly less than the cost of grid-based simulations per number of cells when the metric of interest is related to fluid density (e.g., the probability density function of density fluctuations). This is the case because in SPH the resolution is put where the matter is.
Limitations
Setting boundary conditions in SPH, such as inlets, outlets and walls, is more difficult than with grid-based methods. In fact, it has been stated that "the treatment of boundary conditions is certainly one of the most difficult technical points of the SPH method". This challenge is partly because in SPH the particles near the boundary change with time. Nonetheless, wall boundary conditions for SPH are available.
The computational cost of SPH simulations per number of particles is significantly larger than the cost of grid-based simulations per number of cells when the metric of interest is not (directly) related to density (e.g., the kinetic-energy spectrum). Therefore, overlooking issues of parallel speedup, the simulation of constant-density flows (e.g., external aerodynamics) is more efficient with grid-based methods than with SPH.
Examples
Fluid dynamics
Smoothed-particle hydrodynamics is being increasingly used to model fluid motion as well. This is due to several benefits over traditional grid-based techniques. First, SPH guarantees conservation of mass without extra computation since the particles themselves represent mass. Second, SPH computes pressure from weighted contributions of neighboring particles rather than by solving linear systems of equations. Finally, unlike grid-based techniques, which must track fluid boundaries, SPH creates a free surface for two-phase interacting fluids directly since the particles represent the denser fluid (usually water) and empty space represents the lighter fluid (usually air). For these reasons, it is possible to simulate fluid motion using SPH in real time. However, both grid-based and SPH techniques still require the generation of renderable free surface geometry using a polygonization technique such as metaballs and marching cubes, point splatting, or 'carpet' visualization. For gas dynamics it is more appropriate to use the kernel function itself to produce a rendering of gas column density (e.g., as done in the SPLASH visualisation package).
One drawback over grid-based techniques is the need for large numbers of particles to produce simulations of equivalent resolution. In the typical implementation of both uniform grids and SPH particle techniques, many voxels or particles will be used to fill water volumes that are never rendered. However, accuracy can be significantly higher with sophisticated grid-based techniques, especially those coupled with particle methods (such as particle level sets), since it is easier to enforce the incompressibility condition in these systems. SPH for fluid simulation is being used increasingly in real-time animation and games where accuracy is not as critical as interactivity.
Recent work in SPH for fluid simulation has increased performance, accuracy, and areas of application:
B. Solenthaler, 2009, develops Predictive-Corrective SPH (PCISPH) to allow for better incompressibility constraints
M. Ihmsen et al., 2010, introduce boundary handling and adaptive time-stepping for PCISPH for accurate rigid body interactions
K. Bodin et al., 2011, replace the standard equation of state pressure with a density constraint and apply a variational time integrator
R. Hoetzlein, 2012, develops efficient GPU-based SPH for large scenes in Fluids v.3
N. Akinci et al., 2012, introduce a versatile boundary handling and two-way SPH-rigid coupling technique that is completely based on hydrodynamic forces; the approach is applicable to different types of SPH solvers
M. Macklin et al., 2013 simulates incompressible flows inside the Position Based Dynamics framework, for bigger timesteps
N. Akinci et al., 2013, introduce a versatile surface tension and two-way fluid-solid adhesion technique that allows simulating a variety of interesting physical effects that are observed in reality
J. Kyle and E. Terrell, 2013, apply SPH to Full-Film Lubrication
A. Mahdavi and N. Talebbeydokhti, 2015, propose a hybrid algorithm for implementation of solid boundary condition and simulate flow over a sharp crested weir
S. Tavakkol et al., 2016, develop curvSPH, which makes the horizontal and vertical size of particles independent and generates uniform mass distribution along curved boundaries
W. Kostorz and A. Esmail-Yakas, 2020, propose a general, efficient and simple method for evaluating normalization factors near piecewise-planar boundaries
Colagrossi et al., 2019, study flow around a cylinder close to a free-surface and compare with other techniques
Astrophysics
Smoothed-particle hydrodynamics's adaptive resolution, numerical conservation of physically conserved quantities, and ability to simulate phenomena covering many orders of magnitude make it ideal for computations in theoretical astrophysics.
Simulations of galaxy formation, star formation, stellar collisions, supernovae and meteor impacts are some of the wide variety of astrophysical and cosmological uses of this method.
SPH is used to model hydrodynamic flows, including possible effects of gravity. Incorporating other astrophysical processes which may be important, such as radiative transfer and magnetic fields is an active area of research in the astronomical community, and has had some limited success.
Solid mechanics
Libersky and Petschek extended SPH to Solid Mechanics. The main advantage of SPH in this application is the possibility of dealing with larger local distortion than grid-based methods. This feature has been exploited in many applications in Solid Mechanics: metal forming, impact, crack growth, fracture, fragmentation, etc.

Another important advantage of meshfree methods in general, and of SPH in particular, is that mesh dependence problems are naturally avoided given the meshfree nature of the method. In particular, mesh alignment is related to problems involving cracks and it is avoided in SPH due to the isotropic support of the kernel functions. However, classical SPH formulations suffer from tensile instabilities and lack of consistency. Over the past years, different corrections have been introduced to improve the accuracy of the SPH solution, leading to the RKPM by Liu et al. Randles and Libersky and Johnson and Beissel tried to solve the consistency problem in their study of impact phenomena. Dyka et al. and Randles and Libersky introduced the stress-point integration into SPH, and Ted Belytschko et al. showed that the stress-point technique removes the instability due to spurious singular modes, while tensile instabilities can be avoided by using a Lagrangian kernel. Many other recent studies can be found in the literature devoted to improving the convergence of the SPH method.
Recent improvements in understanding the convergence and stability of SPH have allowed for more widespread applications in Solid Mechanics. Other examples of applications and developments of the method include:
Metal forming simulations.
SPH-based method SPAM (Smoothed Particle Applied Mechanics) for impact fracture in solids by William G. Hoover.
Modified SPH (SPH/MLSPH) for fracture and fragmentation.
Taylor-SPH (TSPH) for shock wave propagation in solids.
Generalized coordinate SPH (GSPH) allocates particles inhomogeneously in the Cartesian coordinate system and arranges them via mapping in a generalized coordinate system in which the particles are aligned at a uniform spacing.
Numerical tools
Interpolations
The Smoothed-Particle Hydrodynamics (SPH) method works by dividing the fluid into a set of discrete moving elements , referred to as particles. Their Lagrangian nature allows setting their position by integration of their velocity as:
These particles interact through a kernel function with characteristic radius known as the "smoothing length", typically represented in equations by . This means that the physical quantity of any particle can be obtained by summing the relevant properties of all the particles that lie within the range of the kernel, the latter being used as a weighting function . This can be understood in two steps. First an arbitrary field is written as a convolution with :
The error in making the above approximation is order . Secondly, the integral is approximated using a Riemann summation over the particles:
where the summation over includes all particles in the simulation. is the volume of particle , is the value of the quantity for particle and denotes position. For example, the density of particle can be expressed as:
where denotes the particle mass and the particle density, while is a short notation for . The error made in approximating the integral by a discrete sum depends on , on the particle size (i.e. , being the space dimension), and on the particle arrangement in space. The latter effect is still poorly understood.
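As an illustration of the summation interpolant described above, the following sketch estimates the density of a set of particles. It is a minimal example assuming a Gaussian kernel and arbitrary water-like values; the function names and the brute-force double loop are illustrative choices, not a reproduction of any particular SPH code.

```python
import numpy as np

def gaussian_kernel(r, h):
    """3D Gaussian smoothing kernel W(r, h) with normalization 1/(pi^(3/2) h^3)."""
    sigma = 1.0 / (np.pi ** 1.5 * h ** 3)
    return sigma * np.exp(-(r / h) ** 2)

def sph_density(positions, masses, h):
    """Summation density estimate: rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    rho = np.zeros(len(positions))
    for i in range(len(positions)):
        r_ij = np.linalg.norm(positions - positions[i], axis=1)
        rho[i] = np.sum(masses * gaussian_kernel(r_ij, h))
    return rho

# Particles on a small uniform grid with spacing dx and a water-like target density
dx, h = 0.1, 0.13
grid = np.arange(0.0, 1.0, dx)
positions = np.array([[x, y, z] for x in grid for y in grid for z in grid])
masses = np.full(len(positions), 1000.0 * dx ** 3)
print(sph_density(positions, masses, h).max())  # close to 1000 in the interior
```

Interior particles recover the target density closely, while particles near the edges are underestimated because of kernel truncation, which is the boundary issue discussed later in this section.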
Kernel functions commonly used include the Gaussian function, the quintic spline and the Wendland kernel. The latter two kernels are compactly supported (unlike the Gaussian, where there is a small contribution at any finite distance away), with support proportional to . This has the advantage of saving computational effort by not including the relatively minor contributions from distant particles.
Although the size of the smoothing length can be fixed in both space and time, this does not take advantage of the full power of SPH. By assigning each particle its own smoothing length and allowing it to vary with time, the resolution of a simulation can be made to automatically adapt itself depending on local conditions. For example, in a very dense region where many particles are close together, the smoothing length can be made relatively short, yielding high spatial resolution. Conversely, in low-density regions where individual particles are far apart and the resolution is low, the smoothing length can be increased, optimising the computation for the regions of interest.
Discretization of governing equations
For particles of constant mass, differentiating the interpolated density with respect to time yields
where is the gradient of with respect to . Comparing this equation with the continuity equation in the Lagrangian description (using material derivatives),
it is apparent that its right-hand side is an approximation of ; hence one defines a discrete divergence operator as follows:
This operator gives an SPH approximation of at the particle for a given set of particles with given masses , positions and velocities .
The other important equation for a compressible inviscid fluid is the Euler equation for momentum balance:
Similarly to continuity, the task is to define a discrete gradient operator in order to write
One choice is
which has the property of being skew-adjoint with the divergence operator above, in the sense that
this being a discrete version of the continuum identity
This property leads to nice conservation properties.
Notice also that this choice leads to a symmetric divergence operator and an antisymmetric gradient. Although there are several ways of discretizing the pressure gradient in the Euler equations, the above antisymmetric form is the most widely adopted. It supports strict conservation of linear and angular momentum: the force exerted on particle by particle equals, with the sign of the effective direction reversed, the force exerted on particle by particle , thanks to the antisymmetry property .
Nevertheless, other operators have been proposed, which may perform better numerically or physically.
For instance, one drawback of these operators is that while the divergence is zero-order consistent (i.e. yields zero when applied to a constant vector field), it can be seen that the gradient is not. Several techniques have been proposed to circumvent this issue, leading to renormalized operators (see e.g.).
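A minimal sketch of the two discrete operators discussed above is given below, assuming the same Gaussian kernel as in the interpolation example; the function names and the explicit neighbour loops are illustrative assumptions rather than the article's own formulation.

```python
import numpy as np

def grad_w(r_vec, h):
    """Gradient, with respect to r_i, of the 3D Gaussian kernel W(|r_i - r_j|, h)."""
    sigma = 1.0 / (np.pi ** 1.5 * h ** 3)
    return -2.0 * sigma * np.exp(-np.dot(r_vec, r_vec) / h ** 2) * r_vec / h ** 2

def velocity_divergence(i, pos, vel, m, rho, h):
    """SPH divergence estimate: div(v)_i ~ (1/rho_i) sum_j m_j (v_j - v_i) . grad_i W_ij."""
    out = 0.0
    for j in range(len(pos)):
        if j != i:
            out += m[j] * np.dot(vel[j] - vel[i], grad_w(pos[i] - pos[j], h))
    return out / rho[i]

def pressure_gradient_acceleration(i, pos, m, rho, p, h):
    """Antisymmetric (momentum-conserving) form of the pressure-gradient term:
    dv_i/dt = -sum_j m_j (p_i/rho_i**2 + p_j/rho_j**2) grad_i W_ij."""
    acc = np.zeros(3)
    for j in range(len(pos)):
        if j != i:
            acc -= m[j] * (p[i] / rho[i] ** 2 + p[j] / rho[j] ** 2) * grad_w(pos[i] - pos[j], h)
    return acc
```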
Variational principle
The above SPH governing equations can be derived from a least action principle, starting from the Lagrangian of a particle system:
,
where is the particle specific internal energy. The Euler–Lagrange equation of variational mechanics reads, for each particle:
When applied to the above Lagrangian, it gives the following momentum equation:
where the chain rule has been used, since depends on , and the latter, on the position of the particles.
Using the thermodynamic property we may write
Plugging the SPH density interpolation and differentiating explicitly leads to
which is the SPH momentum equation already mentioned, where we recognize the operator. This explains why linear momentum is conserved, and allows angular momentum and energy to be conserved as well.
Time integration
From the work done in the 1980s and 1990s on numerical integration of point-like particles in large accelerators, appropriate time integrators have been developed with accurate long-term conservation properties; they are called symplectic integrators. The most popular in the SPH literature is the leapfrog scheme, which reads for each particle :
where is the time step, superscripts stand for time iterations while is the particle acceleration, given by the right-hand side of the momentum equation.
Other symplectic integrators exist (see the reference textbook). It is recommended to use a symplectic (even low-order) scheme instead of a high order non-symplectic scheme, to avoid error accumulation after many iterations.
Integration of density has not been studied extensively (see below for more details).
Symplectic schemes are conservative but explicit, thus their numerical stability requires stability conditions, analogous to the Courant-Friedrichs-Lewy condition (see below).
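A hedged sketch of one leapfrog (kick-drift-kick) step is shown below; the `accel` callback standing in for the right-hand side of the SPH momentum equation is a placeholder, not a specific scheme from the literature.

```python
def leapfrog_step(x, v, accel, dt):
    """One kick-drift-kick leapfrog step for particle positions x and velocities v.
    `accel(x)` stands in for the right-hand side of the momentum equation."""
    v_half = v + 0.5 * dt * accel(x)            # kick
    x_new = x + dt * v_half                     # drift
    v_new = v_half + 0.5 * dt * accel(x_new)    # kick
    return x_new, v_new
```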
Boundary techniques
When the SPH convolution is evaluated close to a boundary, i.e. closer than , the integral support is truncated. In this case the convolution must be split into two integrals,
where is the compact support ball centered at , with radius , and denotes the part of the compact support inside the computational domain, . Hence, imposing boundary conditions in SPH is entirely based on approximating the second integral on the right-hand side. The same of course applies to the computation of the differential operators,
Several techniques have been introduced in the past to model boundaries in SPH.
Integral neglect
The most straightforward boundary model is neglecting the integral,
such that just the bulk interactions are taken into account,
This is a popular approach when a free surface is considered in single-phase simulations.
The main benefit of this boundary condition is its obvious simplicity. However, several consistency issues must be considered when this technique is applied, which is in fact a heavy limitation on its potential applications.
Fluid Extension
Probably the most popular methodology, or at least the most traditional one, to impose boundary conditions in SPH is the fluid extension technique. This technique is based on populating the compact support across the boundary with so-called ghost particles, conveniently imposing their field values.
Along this line, the integral neglect methodology can be considered a particular case of fluid extension, where the field, , vanishes outside the computational domain.
The main benefit of this methodology is its simplicity, provided that the boundary contribution is computed as part of the bulk interactions. This methodology has also been analyzed in depth in the literature.
On the other hand, deploying ghost particles in the truncated domain is not a trivial task, so modelling complex boundary shapes becomes cumbersome. The two most popular approaches to populate the empty domain with ghost particles are mirrored particles and fixed particles.
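As an illustration of the mirrored-particle variant, the sketch below generates ghost particles across a planar wall; the planar geometry, the choice of the wall-normal axis, and the simple velocity reflection are simplifying assumptions, not a general implementation.

```python
import numpy as np

def mirrored_ghosts(positions, velocities, h, wall_x=0.0):
    """Generate mirrored ghost particles across a planar wall at x = wall_x,
    assuming the fluid occupies x > wall_x. Positions are reflected and the
    wall-normal velocity component is reversed to mimic a no-penetration wall."""
    near = positions[:, 0] - wall_x < 2.0 * h            # only particles within kernel reach
    ghost_pos = positions[near].copy()
    ghost_pos[:, 0] = 2.0 * wall_x - ghost_pos[:, 0]     # reflect across the wall
    ghost_vel = velocities[near].copy()
    ghost_vel[:, 0] *= -1.0                              # reverse normal velocity
    return ghost_pos, ghost_vel
```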
Boundary Integral
The newest boundary technique is the boundary integral methodology. In this methodology, the empty volume integral is replaced by a surface integral and a renormalization:
with the normal of the generic j-th boundary element. The surface term can also be evaluated using a semi-analytic expression.
Modelling physics
Hydrodynamics
Weakly compressible approach
Another way to determine the density is based on the SPH smoothing operator itself. Therefore, the density is estimated from the particle distribution utilizing the SPH interpolation. To overcome undesired errors at the free surface through kernel truncation, the density formulation can again be integrated in time.
The weakly compressible SPH in fluid dynamics is based on the discretization of the Navier–Stokes equations or Euler equations for compressible fluids. To close the system, an appropriate equation of state is utilized to link pressure and density . Generally, the so-called Cole equation
(sometimes mistakenly referred to as the "Tait equation") is used in SPH. It reads
where is the reference density and the speed of sound. For water, is commonly used. The background pressure is added to avoid negative pressure values.
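A small sketch of the Cole equation of state is given below. The exponent value of 7 for water and the choice of the numerical speed of sound (about ten times the maximum flow velocity, anticipating the weak-compressibility discussion that follows) are common conventions assumed here rather than values taken from this text.

```python
def cole_pressure(rho, rho0=1000.0, gamma=7.0, v_max=10.0, p_background=0.0):
    """Cole equation of state for weakly compressible SPH.
    The numerical speed of sound is taken as ~10 * v_max so that density
    variations stay below about 1% (weak-compressibility assumption)."""
    c0 = 10.0 * v_max
    b = rho0 * c0 ** 2 / gamma
    return b * ((rho / rho0) ** gamma - 1.0) + p_background

# A 1% density increase for water-like parameters gives a pressure of order 1e5 Pa
print(cole_pressure(1010.0))
```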
Real nearly incompressible fluids such as water are characterized by very high speeds of sound of the order . Hence, pressure information travels fast compared to the actual bulk flow, which leads to very small Mach numbers . The momentum equation leads to the following relation:
where is the density change and the velocity vector.
In practice, a value of c smaller than the real one is adopted to avoid excessively small time steps in the time integration scheme. Generally, a numerical speed of sound is adopted such that density variations smaller than 1% are allowed. This is the so-called weak-compressibility assumption.
This corresponds to a Mach number smaller than 0.1, which implies:
where the maximum velocity needs to be estimated, e.g. by Torricelli's law or an educated guess. Since only small density variations occur, a linear equation of state can be adopted:
Usually the weakly-compressible schemes are affected by a high-frequency spurious noise on the pressure and density fields.
This phenomenon is caused by the nonlinear interaction of acoustic waves and by the fact that the scheme is explicit in time and centered in space.
Through the years, several techniques have been proposed to get rid of this problem. They can be classified in three different groups:
the schemes that adopt density filters,
the models that add a diffusive term in the continuity equation,
the schemes that employ Riemann solvers to model the particle interaction.
Density filter technique
The schemes of the first group apply a filter directly on the density field to remove the spurious numerical noise. The most commonly used filters are the MLS (moving least squares) filter and the Shepard filter,
which can be applied at each time step or every n time steps. The more frequently the filtering procedure is applied, the more regular the density and pressure fields obtained. On the other hand, this leads to an increase in computational cost. In long-time simulations, the use of the filtering procedure may lead to the disruption of the hydrostatic pressure component and to an inconsistency between the global volume of fluid and the density field. Further, it does not ensure the enforcement of the dynamic free-surface boundary condition.
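A possible implementation of the zeroth-order Shepard density filter is sketched below; the kernel is passed in as an argument (for example the Gaussian kernel used earlier), and the brute-force neighbour loop is for clarity only.

```python
import numpy as np

def shepard_filter(positions, masses, rho, h, kernel):
    """Zeroth-order (Shepard) density filter:
    rho_i <- sum_j m_j W_ij / sum_j (m_j / rho_j) W_ij."""
    rho_new = np.empty(len(positions))
    for i in range(len(positions)):
        r = np.linalg.norm(positions - positions[i], axis=1)
        w = kernel(r, h)
        rho_new[i] = np.sum(masses * w) / np.sum(masses / rho * w)
    return rho_new
```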
Diffusive term technique
A different way to smooth out the density and pressure field is to add a diffusive term inside the continuity equation (group 2) :
The first schemes that adopted such an approach were described in Ferrari and in Molteni, where the diffusive term was modeled as a Laplacian of the density field. A similar approach was also used in Fatehi and Manzari.
In Antuono et al.
a correction to the diffusive term of Molteni was proposed to remove some inconsistencies close to the free-surface. In this case the adopted diffusive term is equivalent to a high-order differential operator on the density field.
The scheme is called δ-SPH and preserves all the conservation properties of SPH without diffusion (e.g., linear and angular momenta, total energy), along with a smooth and regular representation of the density and pressure fields.
In the third group there are those SPH schemes which employ numerical fluxes obtained through Riemann solvers to model the particle interactions.
Riemann solver technique
For an SPH method based on Riemann solvers, an inter-particle Riemann problem is constructed along a unit vector
pointing from particle to particle . In this Riemann problem, the initial left and right states are on particles
and , respectively. The and states are
The solution of the Riemann problem results in three waves emanating from the discontinuity. Two waves, which can be shock or rarefaction waves, travel with the smallest and largest wave speeds. The middle wave is always a contact discontinuity and separates two intermediate states, denoted by and . By assuming that the intermediate state satisfies and , a linearized Riemann solver for smooth flows or flows with only moderately strong shocks can be written as
where and are inter-particle averages. With the solution of the Riemann problem, i.e. and , the discretization of the SPH method is
where . This indicates that the inter-particle average velocity and pressure are simply replaced by the solution of the Riemann problem. By comparing the two, it can be seen that the intermediate velocity and pressure from the inter-particle averages amount to implicit dissipation, i.e. density regularization and numerical viscosity, respectively.
Since the above discretization is very dissipative, a straightforward modification is to apply a limiter that decreases the implicit numerical dissipation by limiting the intermediate pressure:
where the limiter is defined as
Note that ensures that there is no dissipation when the fluid is under the action of an expansion wave, i.e. , and that the parameter is used to modulate dissipation when the fluid is under the action of a compression wave, i.e. . Numerical experiments have found that is generally effective. Also note that the dissipation introduced by the intermediate velocity is not limited.
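The sketch below illustrates a linearized inter-particle Riemann solution with a dissipation limiter of the kind described above. The specific limiter form and the parameter value of 3 are common choices in the literature and should be treated as assumptions, since the corresponding symbols are not reproduced in this extract.

```python
def linearized_riemann(p_l, p_r, u_l, u_r, rho_bar, c_bar, eta=3.0):
    """Linearized inter-particle Riemann solution with a dissipation limiter.
    u_l and u_r are the left/right velocities projected on the inter-particle
    unit vector; rho_bar and c_bar are inter-particle averages. The limiter
    form and eta = 3 are common choices in the literature, assumed here."""
    u_star = 0.5 * (u_l + u_r) + 0.5 * (p_l - p_r) / (rho_bar * c_bar)
    # No dissipation for an expansion (u_l < u_r); limited dissipation for a compression
    beta = min(eta * max(u_l - u_r, 0.0), c_bar)
    p_star = 0.5 * (p_l + p_r) + 0.5 * beta * rho_bar * (u_l - u_r)
    return u_star, p_star
```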
Incompressible approach
Viscosity modelling
In general, the description of hydrodynamic flows requires a convenient treatment of diffusive processes to model the viscosity in the Navier–Stokes equations. It needs special consideration because it involves the Laplacian differential operator. Since the direct computation does not provide satisfactory results, several approaches to model the diffusion have been proposed.
Artificial viscosity
Introduced by Monaghan and Gingold,
the artificial viscosity was used to deal with high Mach number fluid flows. It reads
Here, controls a volume viscosity while acts similarly to the von Neumann–Richtmyer artificial viscosity. The term is defined by
where ηh is a small fraction of h (e.g. 0.01h) to prevent possible numerical infinities at close distances.
The artificial viscosity has also been shown to improve the overall stability of general flow simulations. Therefore, it is applied to inviscid problems in the following form
It is possible not only to stabilize inviscid simulations but also to model the physical viscosity with this approach. To do so,
is substituted in the equation above, where is the number of spatial dimensions of the model. This approach introduces the bulk viscosity .
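A sketch of the pairwise artificial viscosity term is given below. The parameter values alpha = 1 and beta = 2 and the 0.01·h² regularisation in the denominator are typical choices, not values prescribed by this text.

```python
import numpy as np

def artificial_viscosity(v_ij, r_ij, h, c_bar, rho_bar, alpha=1.0, beta=2.0):
    """Monaghan-Gingold pairwise artificial viscosity Pi_ij.
    alpha controls a volume (bulk-like) viscosity and beta acts like the
    von Neumann-Richtmyer viscosity; the 0.01*h**2 term prevents numerical
    infinities at small separations. alpha=1, beta=2 are typical values."""
    vr = np.dot(v_ij, r_ij)
    if vr >= 0.0:          # particles receding from each other: no dissipation
        return 0.0
    mu = h * vr / (np.dot(r_ij, r_ij) + 0.01 * h ** 2)
    return (-alpha * c_bar * mu + beta * mu ** 2) / rho_bar
```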
Morris
For low Reynolds numbers the viscosity model by Morris
was proposed.
LoShao
Additional physics
Surface tension
Heat transfer
Turbulence
Multiphase extensions
Astrophysics
Often in astrophysics, one wishes to model self-gravity in addition to pure hydrodynamics. The particle-based nature of SPH makes it ideal to combine with a particle-based gravity solver, for instance a tree gravity code, a particle mesh, or a particle-particle particle-mesh solver.
Solid mechanics and fluid-structure interaction (FSI)
Total Lagrangian formulation for solid mechanics
To discretize the governing equations of solid dynamics, a correction matrix
is first introduced to reproduce rigid-body rotation as
where
stands for the gradient of the kernel function evaluated at the initial reference configuration.
Note that subscripts and are used to denote solid particles, and the smoothing length is identical to that used in the discretization of the fluid equations.
Using the initial configuration as the reference, the solid density is directly evaluated as
where is the Jacobian determinant of deformation tensor .
We can now discretize the momentum equation in the following form
where the inter-particle averaged first Piola–Kirchhoff stress
is defined as
Also and correspond to the fluid pressure and viscous forces acting on the solid particle , respectively.
Fluid-structure coupling
In fluid-structure coupling, the surrounding solid structure behaves as a moving boundary for the fluid, and the no-slip boundary condition is imposed at the fluid-structure interface. The interaction forces and acting on a fluid particle , due to the presence of the neighboring solid particle , can be obtained as
and
Here, the imaginary pressure and velocity are defined by
where denotes the surface normal direction of the solid structure,
and the imaginary particle density is calculated through the equation of state.
Accordingly, the interaction forces and acting on a solid particle are given by
and
The anti-symmetric property of the derivative of the kernel function will ensure the momentum conservation for each pair of interacting particles and .
Others
The discrete element method, used for simulating granular materials, is related to SPH.
Variants of the method
References
Further reading
Hoover, W. G. (2006); Smooth Particle Applied Mechanics: The State of the Art, World Scientific.
Stellingwerf, R. F.; Wingate, C. A.; "Impact Modelling with SPH", Memorie della Societa Astronomia Italiana, Vol. 65, p. 1117 (1994).
Amada, T.; Imura, M.; Yasumuro, Y.; Manabe, Y.; and Chihara, K. (2004); "Particle-based fluid simulation on GPU", in Proceedings of ACM Workshop on General-purpose Computing on Graphics Processors (August, 2004, Los Angeles, California).
Desbrun, M.; and Cani, M.-P. (1996). "Smoothed Particles: a new paradigm for animating highly deformable bodies" in Proceedings of Eurographics Workshop on Computer Animation and Simulation (August 1996, Poitiers, France).
Hegeman, K.; Carr, N. A.; and Miller, G. S. P.; "Particle-based fluid simulation on the GPU", in Proceedings of International Conference on Computational Science (Reading, UK, May 2006), Lecture Notes in Computer Science v. 3994/2006 (Springer-Verlag).
Kelager, M. (2006) Lagrangian Fluid Dynamics Using Smoothed Particle Hydrodynamics, MSc Thesis, Univ. Copenhagen.
Kolb, A.; and Cuntz, N. (2005); "Dynamic particle coupling for GPU-based fluid simulation", in Proceedings of the 18th Symposium on Simulation Techniques (2005) pp. 722–727.
Liu, G. R.; and Liu, M. B.; Smoothed Particle Hydrodynamics: a meshfree particle method, Singapore: World Scientific (2003).
Monaghan, Joseph J. (1992). "Smoothed Particle Hydrodynamics", Annual Review of Astronomy and Astrophysics (1992). 30 : 543–74.
Muller, M.; Charypar, D.; and Gross, M.; "Particle-based Fluid Simulation for Interactive Applications", in Breen, D; and Lin, M. (eds.), Proceedings of Eurographics/SIGGRAPH Symposium on Computer Animation (2003).
Vesterlund, M.; Simulation and Rendering of a Viscous Fluid Using Smoothed Particle Hydrodynamics, MSc Thesis, Umea University, Sweden.
Violeau, D.; Fluid Mechanics and the SPH method, Oxford University Press (2012).
External links
First large simulation of star formation using SPH
SPHERIC (SPH rEsearch and engineeRing International Community)
ITVO is the web-site of The Italian Theoretical Virtual Observatory created to query a database of numerical simulation archive.
SPHC Image Gallery depicts a wide variety of test cases, experimental validations, and commercial applications of the SPH code SPHC.
A derivation of the SPH model starting from Navier-Stokes equations
Software
Algodoo is a 2D simulation framework for education using SPH
AQUAgpusph is the free (GPLv3) SPH of the researchers, by the researchers, for the researchers
dive solutions is a commercial web-based SPH engineering software for CFD purposes
DualSPHysics is a mostly open source SPH code based on SPHysics and using GPU computing. The open source components are available under the LGPL.
FLUIDS v.1 is a simple, open source (Zlib), real-time 3D SPH implementation in C++ for liquids for CPU and GPU.
Fluidix is a GPU-based particle simulation API available from OneZero Software
GADGET is a freely available (GPL) code for cosmological N-body/SPH simulations
GPUSPH SPH simulator with viscosity (GPLv3)
Pasimodo is a program package for particle-based simulation methods, e.g. SPH
LAMMPS is a massively parallel, open-source classical molecular dynamics code that can perform SPH simulations
Physics Abstraction Layer is an open source abstraction system that supports real time physics engines with SPH support
PreonLab is a commercial engineering software developed by FIFTY2 Technology implementing an implicit SPH method
Punto is a freely available visualisation tool for particle simulations
pysph Open Source Framework for Smoothed Particle Hydrodynamics in Python (New BSD License)
Py-SPHViewer Open Source python visualisation tool for Smoothed Particle Hydrodynamics simulations.
RealFlow Commercial SPH solver for the cinema industry.
RheoCube is a commercial SaaS product by Lorenz Research for the study and prediction of complex-fluid rheology and stability
SimPARTIX is a commercial simulation package for SPH and Discrete element method (DEM) simulations from Fraunhofer IWM
SPH-flow
SPHERA
SPHinXsys is an open source multi-physics, multi-resolution SPH library. It provides C++ APIs for physical accurate simulation and aims to model coupled industrial dynamic systems including fluid, solid, multi-body dynamics and beyond.
SPHysics is an open source SPH implementation in Fortran
SPLASH is an open source (GPL) visualisation tool for SPH simulations
SYMPLER: A freeware SYMbolic ParticLE simulatoR from the University of Freiburg.
Nauticle is a general-purpose computational tool for particle-based numerical methods.
NDYNAMICS is a commercial fluid simulation software based on implicit SPH developed by CENTROID LAB currently used for internal/external flooding/nuclear/chemical engineering applications.
Numerical differential equations
Computational fluid dynamics | Smoothed-particle hydrodynamics | [
"Physics",
"Chemistry"
] | 6,544 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
1,266,589 | https://en.wikipedia.org/wiki/Point%20particle | A point particle, ideal particle or point-like particle (often spelled pointlike particle) is an idealization of particles heavily used in physics. Its defining feature is that it lacks spatial extension; being dimensionless, it does not take up space. A point particle is an appropriate representation of any object whenever its size, shape, and structure are irrelevant in a given context. For example, from far enough away, any finite-size object will look and behave as a point-like object. Point masses and point charges, discussed below, are two common cases. When a point particle has an additive property, such as mass or charge, it is often represented mathematically by a Dirac delta function. In classical mechanics there is usually no concept of rotation of point particles about their "center".
In quantum mechanics, the concept of a point particle is complicated by the Heisenberg uncertainty principle, because even an elementary particle, with no internal structure, occupies a nonzero volume. For example, the atomic orbit of an electron in the hydrogen atom occupies a volume of ~. There is nevertheless a distinction between elementary particles such as electrons or quarks, which have no known internal structure, and composite particles such as protons and neutrons, whose internal structures are made up of quarks.
Elementary particles are sometimes called "point particles" in reference to their lack of internal structure, but this is in a different sense than that discussed herein.
Point mass
Point mass (pointlike mass) is the concept, for example in classical physics, of a physical object (typically matter) that has nonzero mass and yet is explicitly and specifically thought of or modeled as infinitesimal (infinitely small) in its volume or linear dimensions.
In the theory of gravity, extended objects can behave as point-like even in their immediate vicinity. For example, spherical objects interacting in 3-dimensional space whose interactions are described by the Newtonian gravitation behave, as long as they do not touch each other, in such a way as if all their matter were concentrated in their centers of mass. In fact, this is true for all fields described by an inverse square law.
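This equivalence can be checked numerically: the sketch below sums the inverse-square attraction of a thin spherical shell on an external test mass and compares it with the point-mass value; the Monte Carlo sampling and the specific numbers are purely illustrative devices.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def shell_vs_point(R=1.0, M=1.0, d=5.0, n=200_000, seed=0):
    """Compare the attraction of a thin spherical shell (radius R, mass M) on a
    unit test mass at distance d > R with the point-mass value G*M/d**2,
    using Monte Carlo sampling of the shell."""
    rng = np.random.default_rng(seed)
    pts = rng.normal(size=(n, 3))
    pts *= R / np.linalg.norm(pts, axis=1)[:, None]   # points uniform on the shell
    r = pts - np.array([d, 0.0, 0.0])                 # vectors from test mass to shell
    dist = np.linalg.norm(r, axis=1)
    f_vec = np.sum(G * (M / n) * r / dist[:, None] ** 3, axis=0)
    return np.linalg.norm(f_vec), G * M / d ** 2

print(shell_vs_point())   # the two values agree closely
```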
Point charge
Similar to point masses, in electromagnetism physicists discuss a point charge, a point particle with a nonzero electric charge. The fundamental equation of electrostatics is Coulomb's law, which describes the electric force between two point charges. Another result, Earnshaw's theorem, states that a collection of point charges cannot be maintained in a static equilibrium configuration solely by the electrostatic interaction of the charges. The electric field associated with a classical point charge increases to infinity as the distance from the point charge decreases towards zero, which suggests that the model is no longer accurate in this limit.
In quantum mechanics
In quantum mechanics, there is a distinction between an elementary particle (also called "point particle") and a composite particle. An elementary particle, such as an electron, quark, or photon, is a particle with no known internal structure. Whereas a composite particle, such as a proton or neutron, has an internal structure.
However, neither elementary nor composite particles are spatially localized, because of the Heisenberg uncertainty principle. The particle wavepacket always occupies a nonzero volume. For example, see atomic orbital: The electron is an elementary particle, but its quantum states form three-dimensional patterns.
Nevertheless, there is good reason that an elementary particle is often called a point particle. Even if an elementary particle has a delocalized wavepacket, the wavepacket can be represented as a quantum superposition of quantum states wherein the particle is exactly localized. Moreover, the interactions of the particle can be represented as a superposition of interactions of individual states which are localized. This is not true for a composite particle, which can never be represented as a superposition of exactly-localized quantum states. It is in this sense that physicists can discuss the intrinsic "size" of a particle: The size of its internal structure, not the size of its wavepacket. The "size" of an elementary particle, in this sense, is exactly zero.
For example, for the electron, experimental evidence shows that the size of an electron is less than . This is consistent with the expected value of exactly zero. (This should not be confused with the classical electron radius, which, despite the name, is unrelated to the actual size of an electron.)
See also
Test particle
Brane
Charge (physics) (general concept, not limited to electric charge)
Standard Model of particle physics
Wave–particle duality
Notes and references
Notes
Bibliography
Further reading
External links
Concepts in physics
Classical mechanics | Point particle | [
"Physics"
] | 953 | [
"Mechanics",
"Classical mechanics",
"nan"
] |
1,266,658 | https://en.wikipedia.org/wiki/Crystallization | Crystallization is the process by which solids form, where the atoms or molecules are highly organized into a structure known as a crystal. Some ways by which crystals form are precipitating from a solution, freezing, or more rarely deposition directly from a gas. Attributes of the resulting crystal depend largely on factors such as temperature, air pressure, cooling rate, and in the case of liquid crystals, time of fluid evaporation.
Crystallization occurs in two major steps. The first is nucleation, the appearance of a crystalline phase from either a supercooled liquid or a supersaturated solvent. The second step is known as crystal growth, which is the increase in the size of particles and leads to a crystal state. An important feature of this step is that loose particles form layers at the crystal's surface and lodge themselves into open inconsistencies such as pores, cracks, etc.
The majority of minerals and organic molecules crystallize easily, and the resulting crystals are generally of good quality, i.e. without visible defects. However, larger biochemical particles, like proteins, are often difficult to crystallize. The ease with which molecules will crystallize strongly depends on the intensity of either atomic forces (in the case of mineral substances), intermolecular forces (organic and biochemical substances) or intramolecular forces (biochemical substances).
Crystallization is also a chemical solid–liquid separation technique, in which mass transfer of a solute from the liquid solution to a pure solid crystalline phase occurs. In chemical engineering, crystallization occurs in a crystallizer. Crystallization is therefore related to precipitation, although the result is not amorphous or disordered, but a crystal.
Process
The crystallization process consists of two major events, nucleation and crystal growth which are driven by thermodynamic properties as well as chemical properties.
Nucleation is the step where the solute molecules or atoms dispersed in the solvent start to gather into clusters, on the microscopic scale (elevating solute concentration in a small region), that become stable under the current operating conditions. These stable clusters constitute the nuclei. Therefore, the clusters need to reach a critical size in order to become stable nuclei. Such critical size is dictated by many different factors (temperature, supersaturation, etc.). It is at the stage of nucleation that the atoms or molecules arrange in a defined and periodic manner that defines the crystal structure – note that "crystal structure" is a special term that refers to the relative arrangement of the atoms or molecules, not the macroscopic properties of the crystal (size and shape), although those are a result of the internal crystal structure.
The crystal growth is the subsequent size increase of the nuclei that succeed in achieving the critical cluster size. Crystal growth is a dynamic process occurring in equilibrium where solute molecules or atoms precipitate out of solution, and dissolve back into solution. Supersaturation is one of the driving forces of crystallization, as the solubility of a species is an equilibrium process quantified by Ksp. Depending upon the conditions, either nucleation or growth may be predominant over the other, dictating crystal size.
Many compounds have the ability to crystallize with some having different crystal structures, a phenomenon called polymorphism. Certain polymorphs may be metastable, meaning that although it is not in thermodynamic equilibrium, it is kinetically stable and requires some input of energy to initiate a transformation to the equilibrium phase. Each polymorph is in fact a different thermodynamic solid state and crystal polymorphs of the same compound exhibit different physical properties, such as dissolution rate, shape (angles between facets and facet growth rates), melting point, etc. For this reason, polymorphism is of major importance in industrial manufacture of crystalline products. Additionally, crystal phases can sometimes be interconverted by varying factors such as temperature, such as in the transformation of anatase to rutile phases of titanium dioxide.
In nature
There are many examples of natural process that involve crystallization.
Geological time scale process examples include:
Natural (mineral) crystal formation (see also gemstone);
Stalactite/stalagmite, rings formation;
Human time scale process examples include:
Snow flakes formation;
Honey crystallization (nearly all types of honey crystallize).
Methods
Crystal formation can be divided into two types, where the first type of crystals are composed of a cation and anion, also known as a salt, such as sodium acetate. The second type of crystals are composed of uncharged species, for example menthol.
Crystals can be formed by various methods, such as: cooling, evaporation, addition of a second solvent to reduce the solubility of the solute (technique known as antisolvent or drown-out), solvent layering, sublimation, changing the cation or anion, as well as other methods.
The formation of a supersaturated solution does not guarantee crystal formation, and often a seed crystal or scratching the glass is required to form nucleation sites.
A typical laboratory technique for crystal formation is to dissolve the solid in a solution in which it is partially soluble, usually at high temperatures to obtain supersaturation. The hot mixture is then filtered to remove any insoluble impurities. The filtrate is allowed to slowly cool. Crystals that form are then filtered and washed with a solvent in which they are not soluble, but is miscible with the mother liquor. The process is then repeated to increase the purity in a technique known as recrystallization.
For biological molecules in which the solvent channels continue to be present to retain the three dimensional structure intact, microbatch crystallization under oil and vapor diffusion have been the common methods.
Typical equipment
Equipment for the main industrial processes for crystallization.
Tank crystallizers. Tank crystallization is an old method still used in some specialized cases. Saturated solutions, in tank crystallization, are allowed to cool in open tanks. After a period of time the mother liquor is drained and the crystals removed. Nucleation and size of crystals are difficult to control. Typically, labor costs are very high.
Mixed-Suspension, Mixed-Product-Removal (MSMPR): MSMPR is used for much larger scale inorganic crystallization. MSMPR can crystallize solutions in a continuous manner.
Thermodynamic view
The crystallization process appears to violate the second principle of thermodynamics. Whereas most processes that yield more orderly results are achieved by applying heat, crystals usually form at lower temperatures, especially by supercooling. However, the release of the heat of fusion during crystallization causes the entropy of the universe to increase, thus this principle remains unaltered.
The molecules within a pure, perfect crystal, when heated by an external source, will become liquid. This occurs at a sharply defined temperature (different for each type of crystal). As it liquifies, the complicated architecture of the crystal collapses. Melting occurs because the entropy (S) gain in the system by spatial randomization of the molecules has overcome the enthalpy (H) loss due to breaking the crystal packing forces:
Regarding crystals, there are no exceptions to this rule. Similarly, when the molten crystal is cooled, the molecules will return to their crystalline form once the temperature falls beyond the turning point. This is because the thermal randomization of the surroundings compensates for the loss of entropy that results from the reordering of molecules within the system. Such liquids that crystallize on cooling are the exception rather than the rule.
The nature of the crystallization process is governed by both thermodynamic and kinetic factors, which can make it highly variable and difficult to control. Factors such as impurity level, mixing regime, vessel design, and cooling profile can have a major impact on the size, number, and shape of crystals produced.
Dynamics
As mentioned above, a crystal is formed following a well-defined pattern, or structure, dictated by forces acting at the molecular level. As a consequence, during its formation process the crystal is in an environment where the solute concentration reaches a certain critical value, before changing status. Solid formation, impossible below the solubility threshold at the given temperature and pressure conditions, may then take place at a concentration higher than the theoretical solubility level. The difference between the actual value of the solute concentration at the crystallization limit and the theoretical (static) solubility threshold is called supersaturation and is a fundamental factor in crystallization.
Nucleation
Nucleation is the initiation of a phase change in a small region, such as the formation of a solid crystal from a liquid solution. It is a consequence of rapid local fluctuations on a molecular scale in a homogeneous phase that is in a state of metastable equilibrium. Total nucleation is the sum effect of two categories of nucleation – primary and secondary.
Primary nucleation
Primary nucleation is the initial formation of a crystal where there are no other crystals present or where, if there are crystals present in the system, they do not have any influence on the process. This can occur in two conditions. The first is homogeneous nucleation, which is nucleation that is not influenced in any way by solids. These solids include the walls of the crystallizer vessel and particles of any foreign substance. The second category, then, is heterogeneous nucleation. This occurs when solid particles of foreign substances cause an increase in the rate of nucleation that would otherwise not be seen without the existence of these foreign particles. Homogeneous nucleation rarely occurs in practice due to the high energy necessary to begin nucleation without a solid surface to catalyze the nucleation.
Primary nucleation (both homogeneous and heterogeneous) has been modeled as follows:
where
B is the number of nuclei formed per unit volume per unit time,
N is the number of nuclei per unit volume,
kn is a rate constant,
c is the instantaneous solute concentration,
c* is the solute concentration at saturation,
(c − c*) is also known as supersaturation,
n is an empirical exponent that can be as large as 10, but generally ranges between 3 and 4.
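A direct transcription of this empirical rate law is sketched below; the exponent value used as a default is an arbitrary choice within the quoted 3-4 range, and the rate constant must be supplied by the user.

```python
def primary_nucleation_rate(c, c_star, k_n, n=3.5):
    """Empirical primary nucleation rate B = k_n * (c - c*)^n,
    in nuclei per unit volume per unit time; n is typically 3-4."""
    return k_n * max(c - c_star, 0.0) ** n
```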
Secondary nucleation
Secondary nucleation is the formation of nuclei attributable to the influence of the existing microscopic crystals in the magma. More simply put, secondary nucleation is when crystal growth is initiated with contact of other existing crystals or "seeds". The first type of known secondary crystallization is attributable to fluid shear, the other due to collisions between already existing crystals with either a solid surface of the crystallizer or with other crystals themselves. Fluid-shear nucleation occurs when liquid travels across a crystal at a high speed, sweeping away nuclei that would otherwise be incorporated into a crystal, causing the swept-away nuclei to become new crystals. Contact nucleation has been found to be the most effective and common method for nucleation. The benefits include the following:
Low kinetic order and rate-proportional to supersaturation, allowing easy control without unstable operation.
Occurs at low supersaturation, where growth rate is optimal for good quality.
Low necessary energy at which crystals strike avoids the breaking of existing crystals into new crystals.
The quantitative fundamentals have already been isolated and are being incorporated into practice.
The following model, although somewhat simplified, is often used to model secondary nucleation:
where
k1 is a rate constant,
MT is the suspension density,
j is an empirical exponent that can range up to 1.5, but is generally 1,
b is an empirical exponent that can range up to 5, but is generally 2.
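The corresponding sketch for the secondary nucleation rate law is given below, using the typical exponents quoted above; the rate constant and suspension density are again inputs.

```python
def secondary_nucleation_rate(c, c_star, m_t, k1, j=1.0, b=2.0):
    """Empirical secondary nucleation rate B = k1 * MT^j * (c - c*)^b,
    where MT is the suspension density; j ~ 1 and b ~ 2 are typical."""
    return k1 * m_t ** j * max(c - c_star, 0.0) ** b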
Growth
Once the first small crystal, the nucleus, forms, it acts as a convergence point (if unstable due to supersaturation) for molecules of solute touching – or adjacent to – the crystal so that it increases its own dimension in successive layers. The pattern of growth resembles the rings of an onion, as shown in the picture, where each colour indicates the same mass of solute; this mass creates increasingly thin layers due to the increasing surface area of the growing crystal. The supersaturated solute mass the original nucleus may capture in a time unit is called the growth rate, expressed in kg/(m²·h), and is a constant specific to the process. Growth rate is influenced by several physical factors, such as surface tension of solution, pressure, temperature, relative crystal velocity in the solution, Reynolds number, and so forth.
The main values to control are therefore:
Supersaturation value, as an index of the quantity of solute available for the growth of the crystal;
Total crystal surface in unit fluid mass, as an index of the capability of the solute to fix onto the crystal;
Retention time, as an index of the probability of a molecule of solute to come into contact with an existing crystal;
Flow pattern, again as an index of the probability of a molecule of solute to come into contact with an existing crystal (higher in laminar flow, lower in turbulent flow, but the reverse applies to the probability of contact).
The first value is a consequence of the physical characteristics of the solution, while the others define a difference between a well- and poorly designed crystallizer.
Size distribution
The appearance and size range of a crystalline product is extremely important in crystallization. If further processing of the crystals is desired, large crystals with uniform size are important for washing, filtering, transportation, and storage, because large crystals are easier to filter out of a solution than small crystals. Also, larger crystals have a smaller surface area to volume ratio, leading to a higher purity. This higher purity is due to less retention of mother liquor which contains impurities, and a smaller loss of yield when the crystals are washed to remove the mother liquor. In special cases, for example during drug manufacturing in the pharmaceutical industry, small crystal sizes are often desired to improve drug dissolution rate and bio-availability. The theoretical crystal size distribution can be estimated as a function of operating conditions with a fairly complicated mathematical process called population balance theory (using population balance equations).
Main crystallization processes
Some of the important factors influencing solubility are:
Concentration
Temperature
Solvent mixture composition
Polarity
Ionic strength
So one may identify two main families of crystallization processes:
Cooling crystallization
Evaporative crystallization
This division is not really clear-cut, since hybrid systems exist, where cooling is performed through evaporation, thus obtaining at the same time a concentration of the solution.
A crystallization process often referred to in chemical engineering is the fractional crystallization. This is not a different process, rather a special application of one (or both) of the above.
Cooling crystallization
Application
Most chemical compounds, dissolved in most solvents, show the so-called direct solubility that is, the solubility threshold increases with temperature.
So, whenever the conditions are favorable, crystal formation results from simply cooling the solution. Here cooling is a relative term: austenite crystals in a steel form well above 1000 °C. An example of this crystallization process is the production of Glauber's salt, a crystalline form of sodium sulfate. In the diagram, where equilibrium temperature is on the x-axis and equilibrium concentration (as mass percent of solute in saturated solution) is on the y-axis, it is clear that sulfate solubility quickly decreases below 32.5 °C. Assuming a saturated solution at 30 °C, by cooling it to 0 °C (note that this is possible thanks to the freezing-point depression), the precipitation of a mass of sulfate occurs corresponding to the change in solubility from 29% (equilibrium value at 30 °C) to approximately 4.5% (at 0 °C) – actually a larger crystal mass is precipitated, since sulfate entrains hydration water, and this has the side effect of increasing the final concentration.
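The yield of this cooling step can be estimated with a simple mass balance on the anhydrous salt, as sketched below; the calculation deliberately ignores the hydration water mentioned above, which in practice increases the precipitated mass.

```python
def anhydrous_yield(feed_kg=100.0, x_hot=0.29, x_cold=0.045):
    """Mass of anhydrous solute precipitated when a saturated solution is cooled,
    assuming the solvent mass is unchanged and ignoring hydration water."""
    solute = feed_kg * x_hot
    water = feed_kg - solute
    solute_left = x_cold * water / (1.0 - x_cold)
    return solute - solute_left

print(round(anhydrous_yield(), 1))   # about 25.7 kg per 100 kg of feed solution
```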
There are limitations in the use of cooling crystallization:
Many solutes precipitate in hydrate form at low temperatures: in the previous example this is acceptable, and even useful, but it may be detrimental when, for example, the mass of water of hydration to reach a stable hydrate crystallization form is more than the available water: a single block of hydrate solute will be formed – this occurs in the case of calcium chloride);
Maximum supersaturation will take place in the coldest points. These may be the heat exchanger tubes which are sensitive to scaling, and heat exchange may be greatly reduced or discontinued;
A decrease in temperature usually implies an increase of the viscosity of a solution. Too high a viscosity may give hydraulic problems, and the laminar flow thus created may affect the crystallization dynamics.
It is not applicable to compounds having reverse solubility, a term to indicate that solubility increases with temperature decrease (an example occurs with sodium sulfate where solubility is reversed above 32.5 °C).
Cooling crystallizers
The simplest cooling crystallizers are tanks provided with a mixer for internal circulation, where temperature decrease is obtained by heat exchange with an intermediate fluid circulating in a jacket. These simple machines are used in batch processes, as in processing of pharmaceuticals and are prone to scaling. Batch processes normally provide a relatively variable quality of the product along with the batch.
The Swenson-Walker crystallizer is a model, specifically conceived by Swenson Co. around 1920, having a semicylindric horizontal hollow trough in which a hollow screw conveyor or some hollow discs, in which a refrigerating fluid is circulated, plunge during rotation on a longitudinal axis. The refrigerating fluid is sometimes also circulated in a jacket around the trough. Crystals precipitate on the cold surfaces of the screw/discs, from which they are removed by scrapers and settle on the bottom of the trough. The screw, if provided, pushes the slurry towards a discharge port.
A common practice is to cool the solutions by flash evaporation: when a liquid at a given T0 temperature is transferred in a chamber at a pressure P1 such that the liquid saturation temperature T1 at P1 is lower than T0, the liquid will release heat according to the temperature difference and a quantity of solvent, whose total latent heat of vaporization equals the difference in enthalpy. In simple words, the liquid is cooled by evaporating a part of it.
In the sugar industry, vertical cooling crystallizers are used to exhaust the molasses in the last crystallization stage downstream of vacuum pans, prior to centrifugation. The massecuite enters the crystallizers at the top, and cooling water is pumped through pipes in counterflow.
Evaporative crystallization
Another option is to obtain, at an approximately constant temperature, the precipitation of the crystals by increasing the solute concentration above the solubility threshold. To obtain this, the solute/solvent mass ratio is increased using the technique of evaporation. This process is insensitive to change in temperature (as long as hydration state remains unchanged).
All considerations on control of crystallization parameters are the same as for the cooling models.
Evaporative crystallizers
Most industrial crystallizers are of the evaporative type, such as the very large sodium chloride and sucrose units, whose production accounts for more than 50% of the total world production of crystals. The most common type is the forced circulation (FC) model (see evaporator). A pumping device (a pump or an axial flow mixer) keeps the crystal slurry in homogeneous suspension throughout the tank, including the exchange surfaces; by controlling pump flow, control of the contact time of the crystal mass with the supersaturated solution is achieved, together with reasonable velocities at the exchange surfaces. The Oslo, mentioned above, is a refinement of the evaporative forced circulation crystallizer, now equipped with a large-crystal settling zone to increase the retention time (usually low in the FC) and to roughly separate heavy slurry zones from clear liquid. Evaporative crystallizers tend to yield a larger average crystal size and narrow the crystal size distribution curve.
DTB crystallizer
Whatever the form of the crystallizer, to achieve effective process control it is important to control the retention time and the crystal mass, to obtain the optimum conditions in terms of crystal specific surface and the fastest possible growth. This can be achieved by a separation – to put it simply – of the crystals from the liquid mass, in order to manage the two flows in a different way. The practical way is to perform a gravity settling to be able to extract (and possibly recycle separately) the (almost) clear liquid, while managing the mass flow around the crystallizer to obtain a precise slurry density elsewhere. A typical example is the DTB (Draft Tube and Baffle) crystallizer, an idea of Richard Chisum Bennett (a Swenson engineer and later President of Swenson) at the end of the 1950s. The DTB crystallizer (see images) has an internal circulator, typically an axial flow mixer – yellow – pushing upwards in a draft tube while outside the crystallizer there is a settling area in an annulus; in it the exhaust solution moves upwards at a very low velocity, so that large crystals settle – and return to the main circulation – while only the fines, below a given grain size, are extracted and eventually destroyed by increasing or decreasing temperature, thus creating additional supersaturation. A quasi-perfect control of all parameters is achieved as DTB crystallizers offer superior control over crystal size and characteristics. This crystallizer, and the derivative models (Krystal, CSC, etc.), could be the ultimate solution if not for a major limitation in the evaporative capacity, due to the limited diameter of the vapor head and the relatively low external circulation not allowing large amounts of energy to be supplied to the system.
See also
Abnormal grain growth
Chiral resolution by crystallization
Crystal habit
Crystal structure
Crystallite
Fractional crystallization (chemistry)
Igneous differentiation
Laser heated pedestal growth
Micro-pulling-down
Protein crystallization
Pumpable ice technology
Quasicrystal
Recrystallization (chemistry)
Recrystallization (metallurgy)
Seed crystal
Single crystal
Symplectite
Vitrification
X-ray crystallography
References
Further reading
"Small Molecule Crystallization" (PDF) at Illinois Institute of Technology website
Arkenbout-de Vroome, Tine (1995). Melt Crystallization Technology CRC
Geankoplis, C.J. (2003) "Transport Processes and Separation Process Principles". 4th Ed. Prentice-Hall Inc.
Glynn P.D. and Reardon E.J. (1990) "Solid-solution aqueous-solution equilibria: thermodynamic theory and representation". Amer. J. Sci. 290, 164–201.
Jancic, S. J.; Grootscholten, P.A.M.: “Industrial Crystallization”, Textbook, Delft University Press and Reidel Publishing Company, Delft, The Netherlands, 1984.
Mersmann, A. (2001) Crystallization Technology Handbook CRC; 2nd ed.
External links
Batch Crystallization
Industrial Crystallization
Liquid-solid separation
Crystallography
Laboratory techniques
Phase transitions
Articles containing video clips | Crystallization | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,817 | [
"Physical phenomena",
"Phase transitions",
"Separation processes by phases",
"Critical phenomena",
"Phases of matter",
"Materials science",
"Crystallography",
"Condensed matter physics",
"nan",
"Statistical mechanics",
"Matter",
"Liquid-solid separation"
] |
26,717,031 | https://en.wikipedia.org/wiki/Osgood%20curve | In mathematical analysis, an Osgood curve is a non-self-intersecting curve that has positive area. Despite its area, it is not possible for such a curve to cover any two-dimensional region, distinguishing them from space-filling curves. Osgood curves are named after William Fogg Osgood.
Definition and properties
A curve in the Euclidean plane is defined to be an Osgood curve when it is non-self-intersecting (that is, it is either a Jordan curve or a Jordan arc) and it has positive area. More formally, it must have positive two-dimensional Lebesgue measure.
Osgood curves have Hausdorff dimension two, like space-filling curves. However, they cannot be space-filling curves: by Netto's theorem, covering all of the points of the plane, or of any two-dimensional region of the plane, would lead to self-intersections.
History
The first examples of Osgood curves were found by and . Both examples have positive area in parts of the curve, but zero area in other parts; this flaw was corrected by , who found a curve that has positive area in every neighborhood of each of its points, based on an earlier construction of Wacław Sierpiński. Knopp's example has the additional advantage that its area can be made arbitrarily close to the area of its convex hull.
Construction
It is possible to modify the recursive construction of certain fractals and space-filling curves to obtain an Osgood curve. For instance, Knopp's construction involves recursively splitting triangles into pairs of smaller triangles, meeting at a shared vertex, by removing triangular wedges. When each level of this construction removes the same fraction of the area of its triangles, the result is a Cesàro fractal such as the Koch snowflake.
Instead, reducing the fraction of area removed per level, rapidly enough to leave a constant fraction of the area unremoved, produces an Osgood curve.
Another way to construct an Osgood curve is to form a two-dimensional version of the Smith–Volterra–Cantor set, a totally disconnected point set with non-zero area, and then apply the Denjoy–Riesz theorem according to which every bounded and totally disconnected subset of the plane is a subset of a Jordan curve.
Notes
References
.
.
.
.
.
.
.
External links
Plane curves
Area | Osgood curve | [
"Physics",
"Mathematics"
] | 491 | [
"Scalar physical quantities",
"Planes (geometry)",
"Physical quantities",
"Plane curves",
"Quantity",
"Euclidean plane geometry",
"Size",
"Wikipedia categories named after physical quantities",
"Area"
] |
26,717,534 | https://en.wikipedia.org/wiki/Gauss%20iterated%20map | In mathematics, the Gauss map (also known as Gaussian map or mouse map), is a nonlinear iterated map of the reals into a real interval given by the Gaussian function:
where α and β are real parameters.
Named after Johann Carl Friedrich Gauss, the map is based on the bell-shaped Gaussian function, in a manner similar to the logistic map.
Properties
For certain regions of the real parameter space, the map can be chaotic. The map is also called the mouse map because its bifurcation diagram resembles a mouse (see Figures).
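The defining formula is not reproduced in this extract; the sketch below uses the standard form x_{n+1} = exp(-α x_n²) + β, which should be treated as an assumption here, with parameter values picked from a commonly plotted region of the bifurcation diagram.

```python
import math

def gauss_map(x, alpha, beta):
    """Gauss (mouse) map in its standard form: x_{n+1} = exp(-alpha * x_n^2) + beta."""
    return math.exp(-alpha * x * x) + beta

# Iterate from an arbitrary initial condition; alpha = 4.9 with beta swept over
# [-1, 1] is a commonly plotted region of the bifurcation diagram.
x = 0.1
for _ in range(1000):
    x = gauss_map(x, 4.9, -0.58)
print(x)
```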
References
Chaotic maps | Gauss iterated map | [
"Mathematics"
] | 112 | [
"Functions and mappings",
"Mathematical objects",
"Mathematical relations",
"Chaotic maps",
"Dynamical systems"
] |
26,720,235 | https://en.wikipedia.org/wiki/Nanofluid | A nanofluid is a fluid containing nanometer-sized particles, called nanoparticles. These fluids are engineered colloidal suspensions of nanoparticles in a base fluid. The nanoparticles used in nanofluids are typically made of metals, oxides, carbides, or carbon nanotubes. Common base fluids include water, ethylene glycol, and oil.
Nanofluids have many potential heat transfer applications, including microelectronics, fuel cells, pharmaceutical processes, hybrid-powered engines, engine cooling/vehicle thermal management, domestic refrigerators, chillers, heat exchangers, grinding, machining, and boiler flue gas temperature reduction. They exhibit enhanced thermal conductivity and convective heat transfer coefficients compared to the base fluid. Knowledge of the rheological behaviour of nanofluids is critical in deciding their suitability for convective heat transfer applications. Nanofluids also have special acoustical properties and, in ultrasonic fields, display shear-wave reconversion of an incident compressional wave; the effect becomes more pronounced as concentration increases.
In computational fluid dynamics (CFD), nanofluids can be assumed to be single-phase fluids, although almost all academic papers use a two-phase assumption. In the single-phase approach the classical theory of fluids is applied, with the physical properties of the nanofluid taken as functions of the properties of both constituents and their concentrations. An alternative approach simulates nanofluids using a two-component model.
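As an illustration of the single-phase treatment, the sketch below evaluates commonly used effective-property correlations (a volume-weighted mixing rule for density and heat capacity, and the classical Maxwell model for thermal conductivity). These correlations are standard in the nanofluid literature but are not prescribed by this article, and the property values are rough textbook numbers used only for illustration.

```python
def nanofluid_properties(phi, rho_f, rho_p, cp_f, cp_p, k_f, k_p):
    """Effective single-phase properties of a nanofluid with particle volume fraction phi.

    Mixing rules assumed here (not prescribed by the article):
      density:        volume-weighted average
      specific heat:  volume-weighted average of rho * cp
      conductivity:   Maxwell model for dilute suspensions of spheres
    """
    rho_nf = (1 - phi) * rho_f + phi * rho_p
    cp_nf = ((1 - phi) * rho_f * cp_f + phi * rho_p * cp_p) / rho_nf
    k_nf = k_f * (k_p + 2 * k_f + 2 * phi * (k_p - k_f)) / (k_p + 2 * k_f - phi * (k_p - k_f))
    return rho_nf, cp_nf, k_nf

# Example: 2% alumina nanoparticles in water (rough textbook values, illustration only)
print(nanofluid_properties(phi=0.02, rho_f=998, rho_p=3970,
                           cp_f=4182, cp_p=765, k_f=0.6, k_p=40))
```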
The spreading of a nanofluid droplet is enhanced by the solid-like ordering structure of nanoparticles assembled near the contact line by diffusion, which gives rise to a structural disjoining pressure in the vicinity of the contact line. However, such enhancement is not observed for small droplets with diameter of nanometer scale, because the wetting time scale is much smaller than the diffusion time scale.
Properties
Thermal conductivity, viscosity, density, specific heat, and surface tension are significant thermophysical properties of nanofluids. Parameters such as nanoparticle type, size, shape, volume concentration, fluid temperature, and nanofluid preparation method affect thermophysical properties.
Viscosity
Density
Thermal conductivity
Synthesis
Nanofluids are produced by several techniques:
Direct Evaporation (1 step)
Gas condensation/dispersion (2 step)
Chemical vapour condensation (1 step)
Chemical precipitation (1 step)
Bio-based (2 step)
Base liquids used include water, ethylene glycol, and oils. Although stabilization can be a challenge, ongoing research indicates that it is possible. Nanomaterials used so far in nanofluid synthesis include metallic particles, oxide particles, carbon nanotubes, graphene nano-flakes and ceramic particles.
Bio-based
A biologically-based, environmentally friendly approach for the covalent functionalization of multi-walled carbon nanotubes (MWCNTs) using clove buds was developed. No toxic/hazardous acids are typically used in common carbon nanomaterial functionalization procedures, as employed in this synthesis. The MWCNTs are functionalized in one pot using a free radical grafting reaction. The clove-functionalized MWCNTs are then dispersed in distilled water (DI water), producing a highly stable MWCNT aqueous suspension (MWCNTs Nanofluid).
Applications
Nanofluids are primarily used for their enhanced thermal properties as coolants in heat transfer equipment such as heat exchangers, electronic cooling system(such as flat plate) and radiators. Heat transfer over flat plate has been analyzed by many researchers. However, they are also useful for their controlled optical properties. Graphene based nanofluid has been found to enhance Polymerase chain reaction efficiency. Nanofluids in solar collectors is another application where nanofluids are employed for their tunable optical properties. Nanofluids have also been explored to enhance thermal desalination technologies, by altering thermal conductivity and absorbing sunlight, but surface fouling of the nanofluids poses a major risk to those approaches. Researchers proposed nanofluids for electronics cooling. Nanofluids also can be used in machining.
Smart cooling
One project demonstrated a class of magnetically polarizable nanofluids with thermal conductivity enhanced up to 300%. Fatty-acid-capped magnetite nanoparticles of different sizes (3–10 nm) were synthesized. The project showed that the thermal and rheological properties of such magnetic nanofluids are tunable by varying the magnetic field strength and its orientation with respect to the direction of heat flow. Such stimulus-responsive fluids are reversible and have applications in miniature devices such as micro- and nano-electromechanical systems.
A 2013 study considered the effect of an external magnetic field on the convective heat transfer coefficient of water-based magnetite nanofluid experimentally under laminar flow regime. It obtained up to 300% enhancement at Re=745 and magnetic field gradient of 32.5 mT/mm. The effect of the magnetic field on pressure was not as significant.
Sensing
A nanofluid-based ultrasensitive optical sensor changes its colour on exposure to low concentrations of toxic cations. The sensor is useful in detecting minute traces of cations in industrial and environmental samples. Existing techniques for monitoring cations levels in industrial and environmental samples are expensive, complex and time-consuming. The sensor uses a magnetic nanofluid that consists of nano-droplets with magnetic grains suspended in water. In a fixed magnetic field, a light source illuminates the nanofluid, changing its colour depending on the cation concentration. This color change occurs within a second after exposure to cations, much faster than other existing cation sensing methods.
Such responsive nanofluids can detect and image defects in ferromagnetic components. The so-called photonic eye is based on a magnetically polarizable nano-emulsion that changes colour when it comes into contact with a defective region in a sample. The device could monitor structures such as rail tracks and pipelines.
Nanolubricants
Nanolubricants modify oils used for engine and machine lubrication. Materials including metals, oxides and allotropes of carbon have supplied nanoparticles for such applications. The nanofluid enhances thermal conductivity and anti-wear properties. Although MoS2, graphene, and Cu-based fluids have been studied extensively, fundamental understanding of underlying mechanisms is absent.
MoS2 and graphene work as third-body lubricants, essentially acting as ball bearings that reduce the friction between surfaces. This mechanism requires sufficient particles to be present at the contact interface. The beneficial effects diminish because sustained contact pushes away the third-body lubricants.
Other nanolubricant approaches, such as magnesium silicate hydroxides (MSH) rely on nanoparticle coatings by synthesizing nanomaterials with adhesive and lubricating functionalities. Research into nanolubricant coatings has been conducted in both the academic and industrial spaces. Nanoborate additives as well as mechanical model descriptions of diamond-like carbon (DLC) coating formations have been developed. Companies such as TriboTEX provide commercial formulations of synthesized MSH nanomaterial coatings for vehicle engine and industrial applications.
Petroleum refining
Many researches claim that nanoparticles can be used to enhance crude oil recovery.
Photonic crystals
Magnetic nanoparticle clusters or magnetic nanobeads of size 80–150 nanometers form ordered structures along the direction of an external magnetic field with a regular interparticle spacing on the order of hundreds of nanometers resulting in strong diffraction of visible light.
Flow battery
Nanoelectrofuel-based flow batteries (NFB) have been claimed to store 15 to 25 times as much energy as traditional flow batteries. The Strategic Technology Office of the U.S. Defense Advanced Research Projects Agency (DARPA) is exploring the military's deployment of NFBs in place of conventional lithium-ion batteries.
The nanofluid particles undergo redox reactions at the electrode. Particles are engineered to remain suspended indefinitely, comprising up to 80 percent of the liquid’s weight with the viscosity of motor oil. The particles can be made from inexpensive minerals, such as ferric oxide (anode) and gamma manganese dioxide (cathode). The nanofluids use a nonflammable aqueous suspension.
As of 2024, the DARPA-funded company Influit claimed to be developing a battery with an energy density of 550–850 Wh/kg, higher than that of conventional lithium-ion batteries. A demonstration battery operated successfully between −40 °C and 80 °C.
Discharged nanofluids could be recharged while in a vehicle or after removal at a service station. Costs are claimed to be comparable to lithium ion. An EV-battery sized fuel reservoir (80 gallons) was expected to provide range comparable to a conventional gasoline vehicle. Fluids that escape, e.g., following a crash, turn into a pastelike substance, which can be removed and reused safely. Flow batteries also produce less heat, reducing their thermal signature for military vehicles.
Nanoparticle migration
A 30-lab study reported that "no anomalous enhancement of thermal conductivity was observed in the limited set of nanofluids tested in this exercise". The COST-funded research programme Nanouptake (COST Action CA15119) was conducted with the intention to "develop and foster the use of nanofluids as advanced heat transfer/thermal storage materials to increase the efficiency of heat exchange and storage systems". One 5-lab study reported that "there are no anomalous or unexplainable effects".
Despite these apparently conclusive experimental investigations, theoretical papers continue to claim anomalous enhancement, particularly via Brownian and thermophoretic mechanisms. Brownian diffusion is due to the random drifting of suspended nanoparticles in the base fluid, which originates from collisions between nanoparticles and liquid molecules. Thermophoresis induces nanoparticle migration from warmer to colder regions, again due to such collisions. A 2017 study considered the mismatch between experimental and theoretical results. It reported that Brownian motion and thermophoresis have no significant effect: their role is often amplified in theoretical studies due to the use of incorrect parameter values. Experimental validation of these assertions came in 2018. Brownian diffusion as a cause of enhanced heat transfer is also dismissed in the discussion of the use of nanofluids in solar collectors.
See also
Argonne National Laboratory
Flow battery
Fluid dynamics
Heat transfer
Nanophase material
Surface-area-to-volume ratio
Surfactant
Therminol
References
External links
Magnetically responsive photonic crystals nanofluid (video) produced by Nanos scientificae
European projects:
NanoHex is a European project developing industrial-class nanofluid coolants
Nanoparticles
Fluid mechanics
Heat transfer
Nanomaterials | Nanofluid | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,290 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Civil engineering",
"Thermodynamics",
"Nanotechnology",
"Nanomaterials",
"Fluid mechanics"
] |
26,720,545 | https://en.wikipedia.org/wiki/Expanded%20polystyrene%20concrete | Expanded polystyrene (EPS) concrete (also known as EPScrete, EPS concrete or lightweight concrete) is a form of concrete known for its light weight made from cement and EPS (Expanded Polystyrene). It is a popular material for use in environmentally "green" homes. It has been used as road bedding, in soil or geo-stabilization projects and as sub-grading for railroad trackage.
The use of EPS as a concrete aggregate had already been studied in detail before 1980. EPS concrete is created by using small lightweight EPS balls (sometimes called Styrofoam) as an aggregate instead of the crushed stone that is used in regular concrete. It is not as strong as stone-based concrete mixes, but has other advantages such as increased thermal and sound insulation properties, easy shaping and the ability to be formed by hand with sculpting and construction tools.
After many years of exploration and experimentation, EPS lightweight concrete can be used in many building structures, such as EPS insulation coatings, EPS mortar, EPS sealing putty, EPS lightweight mortar, and EPS concrete inner and outer wall panels. In addition, EPS lightweight aggregate concrete is also used in pavement backfill, antifreeze subgrades, thermal insulation roofs, floor sound insulation and marine floating structures. In particular, it has a strong energy absorption capacity, so it can also be used as a structural impact protection layer. EPS concrete combines the construction ease of concrete with the thermal and hydro insulation properties of EPS and can be used for a very wide range of applications where lighter loads or thermal insulation or both are desired.
References
Concrete | Expanded polystyrene concrete | [
"Engineering"
] | 328 | [
"Structural engineering",
"Concrete"
] |
22,382,490 | https://en.wikipedia.org/wiki/CONQUEST | CONQUEST is a linear scaling, or O(N), density functional theory (DFT) electronic structure open-source code. The code is designed to perform DFT calculations on very large systems containing many thousands of atoms. It can be run at different levels of precision ranging from ab initio tight binding up to full DFT with plane wave accuracy. It has been applied to the study of three-dimensional reconstructions formed by Ge on Si(001), containing over 20,000 atoms. Tests on the UK's national supercomputer HECToR in 2009 demonstrated the capability of the code to perform ground-state calculations on systems of over 1,000,000 atoms.
Methodology
Instead of solving for the Kohn-Sham eigenstates as normal DFT codes do, CONQUEST solves for the one-particle density matrix. To make the problem computationally tractable, the density matrix is written in separable form:
ρ(r, r′) = Σ_{iα,jβ} φ_{iα}(r) K_{iα,jβ} φ_{jβ}(r′),
where φ_{iα} is a support function centred on atom i (with support functions on the same atom indexed by α) and K is the density matrix in the basis of the support functions. The ground state is found as a series of nested loops:
• Minimise the energy with respect to the density matrix for fixed charge density and support functions
• Find self-consistency between charge density and potential
• Minimise the energy with respect to the support functions
The support functions are confined within spheres of a given cutoff radius, and the density matrix is forced to zero beyond a given range. These approximations give linear-scaling behaviour, and as the radii are increased the results tend to the exact answer.
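The linear-scaling claim can be made concrete with a toy calculation: once the density matrix is truncated beyond a fixed range it has only O(N) non-zero elements, so quantities such as Tr[KH] cost O(N) work instead of the O(N^3) of diagonalisation. The sketch below is not CONQUEST code; it is a minimal illustration of that point using a one-dimensional tight-binding Hamiltonian as a stand-in.

```python
from scipy import sparse

def tridiagonal_matrix(n, diag, off):
    """Sparse tridiagonal matrix; a stand-in for a short-ranged H or truncated K."""
    return sparse.diags([off, diag, off], offsets=[-1, 0, 1], shape=(n, n), format="csr")

def trace_product(K, H):
    """Tr[K H] evaluated element-wise; O(N) work when both matrices are sparse."""
    return (K.multiply(H.T)).sum()

n = 100_000                               # number of basis functions (illustrative)
H = tridiagonal_matrix(n, 0.0, -1.0)      # toy Hamiltonian
K = tridiagonal_matrix(n, 1.0, 0.25)      # toy truncated density matrix (not self-consistent)
print(trace_product(K, H))                # cost grows linearly with n
```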
Developers
CONQUEST is jointly developed at the Department of Physics and Astronomy and London Centre for Nanotechnology, University College London in the UK and at the Computational Materials Science Centre, National Institute for Materials Science, Tsukuba, Japan. In the UK, the development team includes Dr. David Bowler, Dr. Veronika Brazdova, Prof. Mike Gillan, Dr. Andrew Horsfield, Mr. Alex Sena, Mr. Lianheng Tong, Mr. Jack Baker and Mr. Shereif Mujahed who are all members of the Thomas Young Centre; in Japan, the development team includes Dr. Tsuyoshi Miyazaki, Dr. Takahisa Ohno, Dr. Takao Ohtsuka, Dr. Milica Todorovic and Dr. Antonio Torralba.
Previous developers include Ian Bush, Rathin Choudhury, Chris Goringe and Eduardo Hernandez.
See also
Density functional theory
Quantum chemistry computer programs
External links
CONQUEST official website
National Institute for Materials Science website
London Centre for Nanotechnology website
References
Computational chemistry software
Density functional theory software
Physics software | CONQUEST | [
"Physics",
"Chemistry"
] | 548 | [
"Computational chemistry software",
"Chemistry software",
"Computational physics",
"Computational chemistry",
"Density functional theory software",
"Physics software"
] |
15,774,146 | https://en.wikipedia.org/wiki/Faraday-efficiency%20effect | The Faraday-efficiency effect refers to the potential for misinterpretation of data from experiments in electrochemistry through failure to take into account a Faraday efficiency of less than 100 percent.
Assumption about efficiency
Until recent decades it was common to assume that the release of hydrogen and oxygen gas during electrolysis of water always has a Faraday efficiency of 100%. Pons and Fleischmann, and other investigators who reported the finding of anomalous excess heat in electrolytic cells, all relied on this popular assumption. No one bothered to measure the Faraday efficiency in their cells during the experiments. Many publications reporting the finding of excess heat included an explicit statement like: "The Faraday efficiency is assumed to be unity." Even if not explicitly stated so, these publications included this implicit assumption in the formulas used to calculate the cells' energy balance.
Relevance to cold fusion
Lacking any other plausible explanation, the anomalous excess heat produced during such electrolysis was attributed by Pons and Fleischmann to cold fusion. Later, it was discovered that such excess heat can easily be the product of conventional chemistry, i.e. internal recombination of hydrogen and oxygen. Such recombination leads to a reduction in the Faraday efficiency of the electrolysis. The Faraday-efficiency effect is the observation of anomalous excess heat due to a reduction in the Faraday efficiency.
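A minimal numeric sketch of this effect is shown below. The 1.48 V thermoneutral voltage of water electrolysis is a standard constant; the cell voltage, current and Faraday efficiency used are illustrative assumptions, not data from the experiments described in this article.

```python
E_TN = 1.48  # V, thermoneutral voltage of water electrolysis (standard value)

def heat_dissipated(cell_voltage, current, faraday_efficiency):
    """Heat released inside the cell: electrical input power minus the chemical
    enthalpy carried away by the hydrogen and oxygen gas that actually escapes."""
    return current * (cell_voltage - faraday_efficiency * E_TN)

V, I = 3.0, 1.0        # volts, amperes (illustrative values)
gamma = 0.78           # actual Faraday efficiency, e.g. 78%

measured = heat_dissipated(V, I, gamma)   # what a calorimeter would register
expected = heat_dissipated(V, I, 1.0)     # what is expected if 100% efficiency is assumed
print(f"apparent excess heat: {measured - expected:.3f} W "
      f"({100 * (measured - expected) / (V * I):.1f}% of input power)")
```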
Measurement
From 1991 to 1993, a group of investigators headed by Zvi Shkedi, in the state of Massachusetts, USA, built well-insulated cells and calorimeters that included the capability to measure the actual Faraday efficiency in real time during the experiments. The cells were of the light-water type, with a fine-wire nickel cathode, a platinum anode, and K2CO3 electrolyte.
The calorimeters were calibrated to an accuracy of 0.02% of input power. The long-term stability of the calorimeters was verified over a period of 9 months of continuous operation. In their publication, the investigators show details of their calorimeters' design and teach the technology of achieving high calorimetric accuracy.
Experiments
A total of 64 experiments were performed in which the actual Faraday efficiency was measured. The results were analyzed twice; once with the popular assumption that the Faraday efficiency is 100%, and, again, taking into account the measured Faraday efficiency in each experiment. The average Faraday efficiency measured in these experiments was 78%.
First analysis
The first analysis, assuming a Faraday efficiency of 100%, yielded an average apparent excess heat of 21% of input power. The term "apparent excess heat" was coined by the investigators to indicate that the actual Faraday efficiency was ignored in the analysis.
Second analysis
The second analysis, taking into account the measured Faraday efficiency, yielded an actual excess heat of 0.13% +/- 0.48%. In other words, when the actual Faraday efficiency was measured and taken into account, the energy balance of the cells was zero, with no excess heat.
Conclusion
This investigation has shown how conventional chemistry, i.e. internal recombination of hydrogen and oxygen, accounted for the entire amount of apparent excess heat. The investigators concluded their publication with the following word of advice: "All reports claiming the observation of excess heat should be accompanied by simultaneous measurements of the actual Faraday efficiency."
Jones et al. have confirmed the Shkedi et al. findings with the same conclusion:
"Faradaic efficiencies less than 100% during electrolysis of water can account for reports of excess heat in 'cold fusion' cells."
References
Electrochemistry | Faraday-efficiency effect | [
"Chemistry"
] | 758 | [
"Electrochemistry"
] |
15,778,704 | https://en.wikipedia.org/wiki/8-Anilinonaphthalene-1-sulfonic%20acid | 8-Anilinonaphthalene-1-sulfonic acid (ANS), also called 1-anilino-8-naphthalenesulfonate, is an organic compound containing both a sulfonic acid and an amine group. This compound is used as a fluorescent molecular probe. For example, ANS can be used to study conformational changes induced by ligand binding in proteins, as ANS's fluorescent properties will change as it binds to hydrophobic regions on the protein surface. Comparison of the fluorescence in the presence and absence of a particular ligand can thus give information about how the binding of the ligand changes the surface of the protein. Its permeability to mitochondrial membranes makes it particularly useful.
References
Naphthalenesulfonic acids
Fluorescent dyes
Anilines | 8-Anilinonaphthalene-1-sulfonic acid | [
"Chemistry"
] | 167 | [
"Molecular and cellular biology stubs",
"Biochemistry stubs",
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
15,778,762 | https://en.wikipedia.org/wiki/Aplaviroc | Aplaviroc (INN, codenamed AK602 and GSK-873140) is a CCR5 entry inhibitor that belongs to a class of 2,5-diketopiperazines developed for the treatment of HIV infection. It was developed by GlaxoSmithKline.
In October 2005, all studies of aplaviroc were discontinued due to liver toxicity concerns. Some authors have claimed that evidence of poor efficacy may have contributed to termination of the drug's development; the ASCENT study, one of the discontinued trials, showed aplaviroc to be under-effective in many patients even at high concentrations.
See also
CCR5 receptor antagonist
References
Further reading
Abandoned drugs
Benzoic acids
Diketopiperazines
Entry inhibitors
Hepatotoxins
Spiro compounds
Diphenyl ethers
Butyl compounds | Aplaviroc | [
"Chemistry"
] | 174 | [
"Organic compounds",
"Drug safety",
"Abandoned drugs",
"Spiro compounds"
] |
15,778,818 | https://en.wikipedia.org/wiki/Alternariol | Alternariol is a toxic metabolite of Alternaria fungi. It is an important contaminant in cereals and fruits.
Alternariol exhibits antifungal and phytotoxic activity. It is reported to inhibit cholinesterase enzymes. It is also a mycoestrogen.
A 2017 in vitro assay study reported alternariol to be a full androgen agonist.
References
Mycotoxins
Natural phenols
Benzochromenes
Lactones
Resorcinols
Mycoestrogens | Alternariol | [
"Chemistry"
] | 111 | [
"Biomolecules by chemical classification",
"Natural phenols"
] |
15,781,841 | https://en.wikipedia.org/wiki/Skydrol | Skydrol is a brand name of fire-resistant hydraulic fluid used in aviation and aerospace applications. It is a phosphate ester-based fluid that is known for its excellent fire resistance and ability to withstand extreme temperature and pressure conditions. It is manufactured by Solutia (now part of Eastman Chemical Company), and formerly manufactured by Monsanto. There are various lines of Skydrol including Skydrol 500B-4, Skydrol LD-4, and Skydrol 5.
Skydrol is made of a fire-resistant phosphate ester base stock, with a number of oil additives dissolved into it to inhibit corrosion and prevent erosion damage to servo valves. It also includes a purple or green dye to ease identification. It has been approved by most airframe manufacturers including Airbus, Boeing and BAE Systems and has been used in their products for over 50 years.
Characteristics
Acid number (the proportional content of acid, not pH) and particulate contamination must be monitored while using Skydrol, and generally hydraulic systems should be sampled every C check.
Generally recommended contamination levels should be better than AS4059 Class 7 as new, and should not be allowed to degrade beyond Class 9. Skydrol has a 5-year shelf life from the date of manufacture.
Skydrol fluids are extremely irritating to human tissue. Gloves and goggles are recommended safety equipment when servicing Skydrol systems. If the fluid gets on the skin it creates an itchy, red rash with a persistent burning sensation. The effects subside within a few hours; egg white can be applied to the affected area to neutralize the burning. Animal studies have shown that repeated exposure to tributylphosphate, one of the phosphate esters used in Skydrol fluids, may cause urinary bladder damage. If Skydrol gets in the eyes, it creates an intense stinging sensation. The recommended treatment for this is to use an eye-wash station; sometimes mineral oil, castor oil or milk is used.
Skydrol fluids are incompatible with many plastics, paints and adhesives, which can be softened and eventually destroyed by exposure to Skydrol. Some materials (for example rayon, acetate) and rubber-soled shoes may also be damaged by Skydrol.
Production
The Skydrol series of phosphate ester hydraulic fluids were originally jointly developed by the Douglas Aircraft Company and Monsanto in the late 1940s to reduce the fire risk from leaking high pressure mineral oil-based hydraulic fluids impinging on potential ignition sources.
In 1949 Douglas first licensed Monsanto to produce a range of Skydrol materials under their patents. In the 1990s Monsanto became primarily a biotechnology company, and an independent chemical producer, Solutia, was created in 1997 to handle its chemical interests, including Skydrol.
Solutia Inc. built a new facility to produce Skydrol and SkyKleen aviation cleaning solutions in Anniston, Alabama in 2005. In 2012, Solutia was acquired by Eastman Chemical.
Uses
The first type of Skydrol used in aviation was Skydrol 7000 (now obsolete), which was dyed green in colour, as a fire-resistant lubricant in Douglas-designed cabin pressure superchargers (as piston-engined airliners do not have 'bleed air' pressurisation) used in the DC-6 and -7 series piston-engined aircraft, and first flight tested by United Airlines in 1949, who also used Skydrol 7000 in the hydraulic systems of these aircraft, as did quite a number of other airlines including Pan-Am, and KLM and BOAC in Europe.
With the introduction of jet aircraft operating at higher altitudes and lower external temperatures, there was a need for improved phosphate ester fluids. The story of the introduction of Skydrol-type fluids in civil aviation is covered in a Kindle book entitled "The Skydrol Story", which describes how the Vickers Vanguard was the first non-US-built aircraft to introduce Skydrol as a hydraulic fluid, when Trans-Canada Air Lines adopted it for their Vanguard fleet.
In the years following, during the flight testing of the Boeing 707 a test aircraft suffered a gear collapse which led to a fire fueled by leaking hydraulic fluid. As a result of this incident, Boeing implemented the use of Skydrol on the 707 and then later on the 720 and subsequent aircraft. Skydrol 500B (dyed purple in colour) then proliferated through the aerospace industry due to its flame retardant capability, but predominantly only in the civilian world on transport category aircraft.
Notable exceptions include the BAC Concorde, which used silicate ester fluid (Chevron M2V Oronite) due to the high temperature requirements.
Skydrol was never adopted into widespread military use, ostensibly because if an aircraft was hit by enemy fire on a mission it was believed that it is merely academic whether the fluid is flame retardant or not, as the aircraft would have been expected to be destroyed.
The predominant competing mineral oil fluid, MIL-PRF-5606 had higher flammability due to its lower flash point, however modern derivatives such as MIL-PRF-87257 have a flash point much closer to that of Skydrol.
Some smaller business jets still use MIL-H-5606, such as the Dassault Falcon series jets, most of the Cessna Citations and all models of Learjet. Business jets using Skydrol include the Cessna Citation X, Gulfstreams and Bombardier Challenger & Global Express Series.
Special seals were developed for use with Skydrol, as the elastomers available at the time were incompatible - the first seals used were made from butyl rubber, which were resistant to the phosphate ester fluid but suffered some early leakages. Modern Skydrol-compatible seals are usually made from EPDM (ethylene propylene diene monomer) or PTFE (polytetrafluoroethylene).
References
External links
http://www.solutia.com/en/Default.aspx
http://www.hazard.com/msds/files/clh/clhvx.html - MSDS sheet
Hydraulic fluids
Aerospace
1949 introductions | Skydrol | [
"Physics"
] | 1,278 | [
"Aerospace",
"Physical systems",
"Hydraulics",
"Space",
"Hydraulic fluids",
"Spacetime"
] |
15,782,871 | https://en.wikipedia.org/wiki/Lattice%20%28discrete%20subgroup%29 | In Lie theory and related areas of mathematics, a lattice in a locally compact group is a discrete subgroup with the property that the quotient space has finite invariant measure. In the special case of subgroups of Rn, this amounts to the usual geometric notion of a lattice as a periodic subset of points, and both the algebraic structure of lattices and the geometry of the space of all lattices are relatively well understood.
The theory is particularly rich for lattices in semisimple Lie groups or more generally in semisimple algebraic groups over local fields. In particular there is a wealth of rigidity results in this setting, and a celebrated theorem of Grigory Margulis states that in most cases all lattices are obtained as arithmetic groups.
Lattices are also well-studied in some other classes of groups, in particular groups associated to Kac–Moody algebras and automorphisms groups of regular trees (the latter are known as tree lattices).
Lattices are of interest in many areas of mathematics: geometric group theory (as particularly nice examples of discrete groups), in differential geometry (through the construction of locally homogeneous manifolds), in number theory (through arithmetic groups), in ergodic theory (through the study of homogeneous flows on the quotient spaces) and in combinatorics (through the construction of expanding Cayley graphs and other combinatorial objects).
Generalities on lattices
Informal discussion
Lattices are best thought of as discrete approximations of continuous groups (such as Lie groups). For example, it is intuitively clear that the subgroup of integer vectors "looks like" the real vector space in some sense, while both groups are essentially different: one is finitely generated and countable, while the other is not finitely generated and has the cardinality of the continuum.
Rigorously defining the meaning of "approximation of a continuous group by a discrete subgroup" in the previous paragraph in order to get a notion generalising the example is a matter of what it is designed to achieve. Perhaps the most obvious idea is to say that a subgroup "approximates" a larger group when the larger group can be covered by the translates of a "small" subset by all elements in the subgroup. In a locally compact topological group there are two immediately available notions of "small": topological (a compact, or relatively compact subset) or measure-theoretical (a subset of finite Haar measure). Since the Haar measure is a Radon measure, it gives finite mass to compact subsets, so the second definition is more general. The definition of a lattice used in mathematics relies upon the second meaning (in particular to include such examples as ) but the first also has its own interest (such lattices are called uniform).
Other notions are coarse equivalence and the stronger quasi-isometry. Uniform lattices are quasi-isometric to their ambient groups, but non-uniform ones are not even coarsely equivalent to it.
Definition
Let be a locally compact group and a discrete subgroup (this means that there exists a neighbourhood of the identity element of such that ). Then is called a lattice in if in addition there exists a Borel measure on the quotient space which is finite (i.e. ) and -invariant (meaning that for any and any open subset the equality is satisfied).
A slightly more sophisticated formulation is as follows: suppose in addition that is unimodular, then since is discrete it is also unimodular and by general theorems there exists a unique -invariant Borel measure on up to scaling. Then is a lattice if and only if this measure is finite.
In the case of discrete subgroups this invariant measure coincides locally with the Haar measure and hence a discrete subgroup in a locally compact group being a lattice is equivalent to it having a fundamental domain (for the action on by left-translations) of finite volume for the Haar measure.
A lattice is called uniform (or cocompact) when the quotient space is compact (and non-uniform otherwise). Equivalently a discrete subgroup is a uniform lattice if and only if there exists a compact subset with . Note that if is any discrete subgroup in such that is compact then is automatically a lattice in .
First examples
The fundamental, and simplest, example is the subgroup which is a lattice in the Lie group . A slightly more complicated example is given by the discrete Heisenberg group inside the continuous Heisenberg group.
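The inline formulas in this extract were lost; for concreteness, the two standard examples referred to here can be written as follows (this is the usual presentation of these groups, not notation taken from the article).

```latex
\mathbb{Z}^n \subset \mathbb{R}^n \quad\text{(a uniform lattice)}, \qquad
H_3(\mathbb{Z}) = \left\{ \begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix} : a,b,c \in \mathbb{Z} \right\}
\subset
H_3(\mathbb{R}) = \left\{ \begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{pmatrix} : x,y,z \in \mathbb{R} \right\}.
```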
If is a discrete group then a lattice in is exactly a subgroup of finite index (i.e. the quotient set is finite).
All of these examples are uniform. A non-uniform example is given by the modular group inside , and also by the higher-dimensional analogues .
Any finite-index subgroup of a lattice is also a lattice in the same group. More generally, a subgroup commensurable to a lattice is a lattice.
Which groups have lattices?
Not every locally compact group contains a lattice, and there is no general group-theoretical sufficient condition for this. On the other hand, there are plenty of more specific settings where such criteria exist. For example, the existence or non-existence of lattices in Lie groups is a well-understood topic.
As we mentioned, a necessary condition for a group to contain a lattice is that the group must be unimodular. This allows for the easy construction of groups without lattices, for example the group of invertible upper triangular matrices or the affine groups. It is also not very hard to find unimodular groups without lattices, for example certain nilpotent Lie groups as explained below.
A stronger condition than unimodularity is simplicity. This is sufficient to imply the existence of a lattice in a Lie group, but in the more general setting of locally compact groups there exist simple groups without lattices, for example the "Neretin groups".
Lattices in solvable Lie groups
Nilpotent Lie groups
For nilpotent groups the theory simplifies much from the general case, and stays similar to the case of Abelian groups. All lattices in a nilpotent Lie group are uniform, and if is a connected simply connected nilpotent Lie group (equivalently it does not contain a nontrivial compact subgroup) then a discrete subgroup is a lattice if and only if it is not contained in a proper connected subgroup (this generalises the fact that a discrete subgroup in a vector space is a lattice if and only if it spans the vector space).
A nilpotent Lie group contains a lattice if and only if the Lie algebra of can be defined over the rationals. That is, if and only if the structure constants of are rational numbers. More precisely: if is a nilpotent simply connected Lie group whose Lie algebra has only rational structure constants, and is a lattice in (in the more elementary sense of Lattice (group)) then generates a lattice in ; conversely, if is a lattice in then generates a lattice in .
A lattice in a nilpotent Lie group is always finitely generated (and hence finitely presented since it is itself nilpotent); in fact it is generated by at most elements.
Finally, a nilpotent group is isomorphic to a lattice in a nilpotent Lie group if and only if it contains a subgroup of finite index which is torsion-free and finitely generated.
The general case
The criterion for nilpotent Lie groups to have a lattice given above does not apply to more general solvable Lie groups. It remains true that any lattice in a solvable Lie group is uniform and that lattices in solvable groups are finitely presented.
Not all finitely generated solvable groups are lattices in a Lie group. An algebraic criterion is that the group be polycyclic.
Lattices in semisimple Lie groups
Arithmetic groups and existence of lattices
If is a semisimple linear algebraic group in which is defined over the field of rational numbers (i.e. the polynomial equations defining have their coefficients in ) then it has a subgroup . A fundamental theorem of Armand Borel and Harish-Chandra states that is always a lattice in ; the simplest example of this is the subgroup .
Generalising the construction above one gets the notion of an arithmetic lattice in a semisimple Lie group. Since all semisimple Lie groups can be defined over a consequence of the arithmetic construction is that any semisimple Lie group contains a lattice.
Irreducibility
When the Lie group splits as a product there is an obvious construction of lattices in from the smaller groups: if are lattices then is a lattice as well. Roughly, a lattice is then said to be irreducible if it does not come from this construction.
More formally, if is the decomposition of into simple factors, a lattice is said to be irreducible if either of the following equivalent conditions hold:
The projection of to any factor is dense;
The intersection of with any factor is not a lattice.
An example of an irreducible lattice is given by the subgroup which we view as a subgroup via the map where is the Galois map sending a matrix with coefficients to .
Rank 1 versus higher rank
The real rank of a Lie group is the maximal dimension of a -split torus of (an abelian subgroup containing only semisimple elements with at least one real eigenvalue distinct from ). The semisimple Lie groups of real rank 1 without compact factors are (up to isogeny) those in the following list (see List of simple Lie groups):
The orthogonal groups of real quadratic forms of signature for ;
The unitary groups of Hermitian forms of signature for ;
The groups (groups of matrices with quaternion coefficients which preserve a "quaternionic quadratic form" of signature ) for ;
The exceptional Lie group (the real form of rank 1 corresponding to the exceptional Lie algebra ).
The real rank of a Lie group has a significant influence on the behaviour of the lattices it contains. In particular the behaviour of lattices in the first two families of groups (and to a lesser extent that of lattices in the latter two) differs much from that of irreducible lattices in groups of higher rank. For example:
There exist non-arithmetic lattices in all groups , in , and possibly in (the last is an open question), but all irreducible lattices in the others are arithmetic;
Lattices in rank 1 Lie groups have infinite, infinite index normal subgroups while all normal subgroups of irreducible lattices in higher rank are either of finite index or contained in their center;
Conjecturally, arithmetic lattices in higher-rank groups have the congruence subgroup property but there are many lattices in which have non-congruence finite-index subgroups.
Kazhdan's property (T)
The property known as (T) was introduced by Kazhdan to study the algebraic structure of lattices in certain Lie groups when the classical, more geometric methods failed or at least were not as efficient. The fundamental result when studying lattices is the following:
A lattice in a locally compact group has property (T) if and only if the group itself has property (T).
Using harmonic analysis it is possible to classify semisimple Lie groups according to whether or not they have the property. As a consequence we get the following result, further illustrating the dichotomy of the previous section:
Lattices in do not have Kazhdan's property (T) while irreducible lattices in all other simple Lie groups do;
Finiteness properties
Lattices in semisimple Lie groups are always finitely presented, and actually satisfy stronger finiteness conditions. For uniform lattices this is a direct consequence of cocompactness. In the non-uniform case this can be proved using reduction theory. It is easier to prove finite presentability for groups with Property (T); however, there is a geometric proof which works for all semisimple Lie groups.
Riemannian manifolds associated to lattices in Lie groups
Left-invariant metrics
If is a Lie group then from an inner product on the tangent space (the Lie algebra of ) one can construct a Riemannian metric on as follows: if belong to the tangent space at a point put where indicates the tangent map (at ) of the diffeomorphism of .
The maps for are by definition isometries for this metric . In particular, if is any discrete subgroup in (so that it acts freely and properly discontinuously by left-translations on ) the quotient is a Riemannian manifold locally isometric to with the metric .
The Riemannian volume form associated to defines a Haar measure on and we see that the quotient manifold is of finite Riemannian volume if and only if is a lattice.
Interesting examples in this class of Riemannian spaces include compact flat manifolds and nilmanifolds.
Locally symmetric spaces
A natural bilinear form on is given by the Killing form. If is not compact it is not definite and hence not an inner product: however when is semisimple and is a maximal compact subgroup it can be used to define a -invariant metric on the homogeneous space : such Riemannian manifolds are called symmetric spaces of non-compact type without Euclidean factors.
A subgroup acts freely, properly discontinuously on if and only if it is discrete and torsion-free. The quotients are called locally symmetric spaces. There is thus a bijective correspondence between complete locally symmetric spaces locally isomorphic to and of finite Riemannian volume, and torsion-free lattices in . This correspondence can be extended to all lattices by adding orbifolds on the geometric side.
Lattices in p-adic Lie groups
A class of groups with similar properties (with respect to lattices) to real semisimple Lie groups are semisimple algebraic groups over local fields of characteristic 0, for example the p-adic fields . There is an arithmetic construction similar to the real case, and the dichotomy between higher rank and rank one also holds in this case, in a more marked form. Let be an algebraic group over of split rank r. Then:
If r is at least 2 all irreducible lattices in are arithmetic;
if r=1 then there are uncountably many commensurability classes of non-arithmetic lattices.
In the latter case all lattices are in fact free groups (up to finite index).
S-arithmetic groups
More generally one can look at lattices in groups of the form
where is a semisimple algebraic group over . Usually is allowed, in which case is a real Lie group. An example of such a lattice is given by
.
This arithmetic construction can be generalised to obtain the notion of an S-arithmetic group. The Margulis arithmeticity theorem applies to this setting as well. In particular, if at least two of the factors are noncompact then any irreducible lattice in is S-arithmetic.
Lattices in adelic groups
If is a semisimple algebraic group over a number field and its adèle ring, then the group of adélic points is well-defined (modulo some technicalities) and it is a locally compact group which naturally contains the group of -rational points as a discrete subgroup. The Borel–Harish-Chandra theorem extends to this setting, and is a lattice.
The strong approximation theorem relates the quotient to more classical S-arithmetic quotients. This fact makes the adèle groups very effective as tools in the theory of automorphic forms. In particular modern forms of the trace formula are usually stated and proven for adélic groups rather than for Lie groups.
Rigidity
Rigidity results
Another group of phenomena concerning lattices in semisimple algebraic groups is collectively known as rigidity. Here are three classical examples of results in this category.
Local rigidity results state that in most situations every subgroup which is sufficiently "close" to a lattice (in the intuitive sense, formalised by Chabauty topology or by the topology on a character variety) is actually conjugated to the original lattice by an element of the ambient Lie group. A consequence of local rigidity and the Kazhdan-Margulis theorem is Wang's theorem: in a given group (with a fixed Haar measure), for any v>0 there are only finitely many (up to conjugation) lattices with covolume bounded by v.
The Mostow rigidity theorem states that for lattices in simple Lie groups not locally isomorphic to (the group of 2 by 2 matrices with determinant 1) any isomorphism of lattices is essentially induced by an isomorphism between the groups themselves. In particular, a lattice in a Lie group "remembers" the ambient Lie group through its group structure. The first statement is sometimes called strong rigidity and is due to George Mostow and Gopal Prasad (Mostow proved it for cocompact lattices and Prasad extended it to the general case).
Superrigidity provides (for Lie groups and algebraic groups over local fields of higher rank) a strengthening of both local and strong rigidity, dealing with arbitrary homomorphisms from a lattice in an algebraic group G into another algebraic group H. It was proven by Grigori Margulis and is an essential ingredient in the proof of his arithmeticity theorem.
Nonrigidity in low dimensions
The only semisimple Lie groups for which Mostow rigidity does not hold are all groups locally isomorphic to . In this case there are in fact continuously many lattices and they give rise to Teichmüller spaces.
Nonuniform lattices in the group are not locally rigid. In fact they are accumulation points (in the Chabauty topology) of lattices of smaller covolume, as demonstrated by hyperbolic Dehn surgery.
As lattices in rank-one p-adic groups are virtually free groups they are very non-rigid.
Tree lattices
Definition
Let be a tree with a cocompact group of automorphisms; for example, can be a regular or biregular tree. The group of automorphisms of is a locally compact group (when endowed with the compact-open topology, in which a basis of neighbourhoods of the identity is given by the stabilisers of finite subtrees, which are compact). Any group which is a lattice in some is then called a tree lattice.
The discreteness in this case is easy to see from the group action on the tree: a subgroup of is discrete if and only if all vertex stabilisers are finite groups.
It is easily seen from the basic theory of group actions on trees that uniform tree lattices are virtually free groups. Thus the more interesting tree lattices are the non-uniform ones, equivalently those for which the quotient graph is infinite. The existence of such lattices is not easy to see.
Tree lattices from algebraic groups
If is a local field of positive characteristic (i.e. a completion of a function field of a curve over a finite field, for example the field of formal Laurent power series ) and an algebraic group defined over of -split rank one, then any lattice in is a tree lattice through its action on the Bruhat–Tits building which in this case is a tree. In contrast to the characteristic 0 case such lattices can be nonuniform, and in this case they are never finitely generated.
Tree lattices from Bass–Serre theory
If is the fundamental group of an infinite graph of groups, all of whose vertex groups are finite, and under additional necessary assumptions on the index of the edge groups and the size of the vertex groups, then the action of on the Bass-Serre tree associated to the graph of groups realises it as a tree lattice.
Existence criterion
More generally one can ask the following question: if is a closed subgroup of , under which conditions does contain a lattice? The existence of a uniform lattice is equivalent to being unimodular and the quotient being finite. The general existence theorem is more subtle: it is necessary and sufficient that be unimodular, and that the quotient be of "finite volume" in a suitable sense (which can be expressed combinatorially in terms of the action of ), more general than the stronger condition that the quotient be finite (as proven by the very existence of nonuniform tree lattices).
Notes
References
Algebraic groups
Differential geometry
Ergodic theory
Geometric group theory
Lie groups | Lattice (discrete subgroup) | [
"Physics",
"Mathematics"
] | 4,236 | [
"Lie groups",
"Geometric group theory",
"Mathematical structures",
"Group actions",
"Ergodic theory",
"Algebraic structures",
"Symmetry",
"Dynamical systems"
] |
19,742,095 | https://en.wikipedia.org/wiki/Biopanning | Biopanning is an affinity selection technique which selects for peptides that bind to a given target. All peptide sequences obtained from biopanning using combinatorial peptide libraries have been stored in a special freely available database named BDB. This technique is often used for the selection of antibodies too.
Biopanning involves 4 major steps for peptide selection. The first step is to have phage display libraries prepared. This involves inserting foreign desired gene segments into a region of the bacteriophage genome, so that the peptide product will be displayed on the surface of the bacteriophage virion. The most often used are genes pIII or pVIII of bacteriophage M13.
The next step is the capturing step. It involves conjugating the phage library to the desired target. This procedure is termed panning. It utilizes the binding interactions so that only specific peptides presented by bacteriophage are bound to the target. For example, selecting antibody presented by bacteriophage with coated antigen in microtiter plates.
The washing step comes after the capturing step, to wash away the unbound phages from the solid surface. Only the bound phages with strong affinity are kept. The final step is the elution step, in which the bound phages are eluted by changing the pH or other environmental conditions.
The end result is that the peptides displayed by the bacteriophage are specific for the target. The resulting filamentous phages can infect gram-negative bacteria once again to produce new phage libraries. The cycle can be repeated many times, yielding peptides that bind the target with strong affinity.
References
Biochemistry methods | Biopanning | [
"Chemistry",
"Biology"
] | 335 | [
"Biochemistry methods",
"Biochemistry"
] |
19,742,557 | https://en.wikipedia.org/wiki/Electrophoretic%20color%20marker | An electrophoretic color marker is a chemical used to monitor the progress of agarose gel electrophoresis and polyacrylamide gel electrophoresis (PAGE) since DNA, RNA, and most proteins are colourless. The color markers are made up of a mixture of dyes that migrate through the gel matrix alongside the sample of interest. They are typically designed to have different mobilities from the sample components and to generate colored bands that can be used to assess the migration and separation of sample components.
Color markers are often used as molecular weight standards, loading dyes, tracking dyes, or staining solutions. Molecular weight ladders are used to estimate the size of DNA and protein fragments by comparing their migration distance to that of the colored bands. DNA and protein standards are available commercially in a wide range of sizes, and are often provided with pre-stained or color-coded bands for easy identification. Loading dyes are usually added to the sample buffer before loading the sample onto the gel, and they migrate through the gel along with the sample to help track its progress during electrophoresis. Tracking dyes are instead added to the electrophoresis buffer, to provide a visual marker of the buffer front. Staining solutions are applied after electrophoresis to visualize the sample bands, and are available in a range of colors.
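As an illustration of how a molecular weight ladder is used, a common approach (not specific to any product mentioned here) is to fit log(size) against migration distance for the ladder bands and interpolate; all numbers below are hypothetical.

```python
import numpy as np

# Hypothetical ladder calibration: migration distances (mm) of bands of known size (bp).
ladder_bp = np.array([3000, 2000, 1500, 1000, 700, 500, 300])
ladder_mm = np.array([12.0, 16.5, 20.0, 25.5, 31.0, 36.5, 44.0])

# DNA mobility is roughly linear in log(size) over a limited range, so fit log10(bp) vs distance.
slope, intercept = np.polyfit(ladder_mm, np.log10(ladder_bp), 1)

def estimate_size(distance_mm):
    """Estimate fragment size (bp) from its migration distance using the ladder fit."""
    return 10 ** (slope * distance_mm + intercept)

print(round(estimate_size(28.0)))  # size of an unknown band that migrated 28 mm
```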
Different types of electrophoretic color markers are available commercially, with varying numbers and types of dyes or pigments used in the mixture. Some markers generate a series of colored bands with known mobilities, while others produce a single band of a specific color that can be used as a reference point. They are widely used in research, clinical diagnostics, and forensic science.
Progress markers
Loading buffers often contain anionic dyes that are visible in the visible light spectrum, and are added to the gel before the nucleic acid. Tracking dyes should not be reactive so as not to alter the sample, and move down the gel with the DNA or RNA sample. Commonly used color markers include Bromophenol blue, Cresol Red, Orange G and Xylene cyanol. Xylene cyanol and bromophenol blue are the most commonly used dyes. Generally speaking, Orange G migrates faster than bromophenol blue, which migrates faster than xylene cyanol, but the apparent "sizes" of these dyes (compared to DNA molecules) vary with the concentration of agarose and the buffer system used. For instance, in a 1% agarose gel made in TAE buffer (Tris-acetate-EDTA), xylene cyanol migrates at the speed of a 3000 base pair (bp) molecule of DNA and bromophenol blue migrates at 400 bp. However, in a 1% gel made in TBE buffer (Tris-borate-EDTA), they migrate at 2000 bp and 250 bp respectively.
DNA and RNA staining
Agarose gel electrophoresis is a technique widely used to estimate the size of nucleic acid fragments and identify them based on their differential mobility in the gel. Nucleic acids are commonly stained and detected using either ethidium bromide or SYBR Green dyes. The most common electrophoretic stain in agarose gel is ethidium bromide, however, SYBR green presents greater resolution and yield for single-stranded nucleic acid detection. The dyes grant fluorescence to DNA and RNA under 300 nm UV light. This occurs due to their intercalating nature. In double helical nucleic acids, the dyes bind between two strands, and in single-stranded nucleic acids, the dyes bind short, duplex segments formed within a strand.
The most commonly used dye in agarose gel electrophoresis of DNA and RNA, dating as far back as the 1970s, is ethidium bromide (2,7-diamino-10-ethyl-9-phenylphenanthridinium bromide). Ethidium bromide (EtBr) is an orange-colored fluorescent intercalating dye. The dye inserts itself between the strands of the double helical structure of nucleic acids, allowing for visualization of the molecules under UV light. EtBr has absorbance maxima at 300-360 nm and fluorescent emission maxima at 500-590 nm, with a detection limit of 0.5-5.0 ng/band. The dye, however, has reduced sensitivity in the detection of single-stranded nucleic acid samples. EtBr should be handled with care, as it is a potent mutagen.
A more sensitive alternative for nucleic acid staining in gel electrophoresis is SYBR™ Green I. The dye is 25 times more sensitive than EtBr in the staining of dsDNA, and is especially useful in staining assays containing single-stranded nucleic acids. SYBR Green is, however, more expensive when compared to EtBr.
Protein staining
Coomassie Blue is the most commonly used non-covalent stain in SDS polyacrylamide gel electrophoresis for protein quantification. The staining dye binds to the protein bands and creates a blue color that can be detected visually. Coomassie Brilliant Blue R-250 (red) is typically used for electrophoresis, while Coomassie Brilliant Blue G-250 (green) is used for the Bradford assay. The limitation of this dye is that it is non-specific, binding to almost any protein in solution, and it is less sensitive. Another common method of visualization of proteins in the gel is silver staining, in which soluble silver ions permanently mark proteins and are reduced by formaldehyde to form a brown precipitate. Silver staining is a more sensitive staining method than Coomassie Blue; however, the results are more vulnerable to contamination.
Applications
Color markers are sometimes added to loading dyes for gel electrophoresis in the separation of DNA fragments. Loading dyes keep DNA samples below the surface of the agarose gel, and the color markers within help keep track of the migration front of the DNA as it moves along the gel.
For PAGE, some commercially available molecular weight markers (also called "ladders" because they look like the rungs of a ladder after separation) contain pre-stained proteins of different colours, so it is possible to determine more accurately where the proteins of interest in the samples might be.
References
Biological techniques and tools
Electrophoresis
Genetics techniques | Electrophoretic color marker | [
"Chemistry",
"Engineering",
"Biology"
] | 1,359 | [
"Genetics techniques",
"Instrumental analysis",
"Genetic engineering",
"Biochemical separation processes",
"Molecular biology techniques",
"nan",
"Electrophoresis"
] |
19,742,971 | https://en.wikipedia.org/wiki/Purmer | Purmer is a polder and reclaimed lake in the Netherlands province of North Holland, located between the towns of Purmerend and Edam-Volendam. It is also a village located in the municipalities of Waterland and Edam-Volendam.
Purmer polder
Windmill reclamation activity began in 1618. Hydraulic engineer Jan Adriaanszoon Leeghwater also had stakes in the reclamation, although he was not directly involved in the project itself. In 1622 all 26.8 km2 (10.3 sq mi) were clear of water.
The original lake of Purmer formed part of a small number of landlocked minor seas located in North Holland. Other examples of such minor seas are the lakes of Beemster and Schermer. All these lakes were directly connected to open sea, so salt water could flow in and tidal movements occurred. Purmer lake was connected to both the Zuyderzee inlet and to Beemster lake.
The Purmer's being directly connected to open sea resulted in large-scale shoreline erosion due to wave dynamics and water currents. The high rate of erosion and the need for arable land gave rise to plans for reclamation.
Once reclaimed, Purmer was given to farming, but the polder is now highly urbanised. Most of this urban sprawl is due to the town of Purmerend, which has derived its name from Purmer (the "end of Purmer"). During the 1980s and '90s Purmerend had two residential areas built in Purmer, Purmer-Noord and Purmer-Zuid. In contrast to Beemster and Schermer, Purmer is not a municipality in its own right, being divided among the municipalities of Purmerend, Edam-Volendam, and Waterland.
Gallery
References
This article is based on the Dutch language article.
Hydraulic engineering
Lakes of the Netherlands
Polders of North Holland
Geography of Edam-Volendam
Geography of Purmerend
Waterland | Purmer | [
"Physics",
"Engineering",
"Environmental_science"
] | 425 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
19,744,097 | https://en.wikipedia.org/wiki/S%C3%A9rsic%20profile | The Sérsic profile (or Sérsic model or Sérsic's law) is a mathematical function that describes how the intensity of a galaxy varies with distance from its center. It is a generalization of de Vaucouleurs' law. José Luis Sérsic first published his law in 1963.
Definition
The Sérsic profile has the form
ln I(R) = ln I_0 − k R^(1/n),
or
I(R) = I_0 exp( −k R^(1/n) ),
where I_0 is the intensity at R = 0.
The parameter n, called the "Sérsic index," controls the degree of curvature of the profile (see figure). The smaller the value of n, the less centrally concentrated the profile is and the shallower (steeper) the logarithmic slope at small (large) radii is. The equation describing this is:
d ln I / d ln R = −(k/n) R^(1/n).
Today, it is more common to write this function in terms of the half-light radius, Re, and the intensity Ie at that radius, such that
I(R) = Ie exp{ −b_n [ (R/Re)^(1/n) − 1 ] },
where b_n is approximately 2n − 1/3 for large n; b_n can also be approximated as 1.9992n − 0.3271, for 0.5 < n < 10.
It can be shown that b_n satisfies Γ(2n) = 2 γ(2n, b_n), where Γ and γ are respectively the Gamma function and the lower incomplete Gamma function.
Many related expressions, in terms of the surface brightness, also exist.
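For concreteness, the profile can be evaluated numerically as in the following sketch (Python; the helper names and parameter values are illustrative, and the b_n approximation is the 1.9992n − 0.3271 form quoted above, valid only for roughly 0.5 < n < 10):

```python
import numpy as np

def b_n(n):
    # Approximate solution of Gamma(2n) = 2*gamma_lower(2n, b_n),
    # adequate for roughly 0.5 < n < 10.
    return 1.9992 * n - 0.3271

def sersic(R, Ie, Re, n):
    """Surface brightness I(R) for half-light radius Re, intensity Ie at Re, index n."""
    return Ie * np.exp(-b_n(n) * ((R / Re) ** (1.0 / n) - 1.0))

R = np.linspace(0.1, 10.0, 5)              # radii, same units as Re
print(sersic(R, Ie=1.0, Re=3.0, n=4.0))    # n = 4: de Vaucouleurs-like profile
print(sersic(R, Ie=1.0, Re=3.0, n=1.0))    # n = 1: exponential disk
```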
Applications
Most galaxies are fit by Sérsic profiles with indices in the range 1/2 < n < 10.
The best-fit value of n correlates with galaxy size and luminosity, such that bigger and brighter galaxies tend to be fit with larger n.
Setting n = 4 gives the de Vaucouleurs profile:
I(R) ∝ exp( −k R^(1/4) ),
which is a rough approximation of ordinary elliptical galaxies.
Setting n = 1 gives the exponential profile:
I(R) ∝ exp( −k R ),
which is a good approximation of spiral galaxy disks and a rough approximation of dwarf elliptical galaxies. The correlation of Sérsic index (i.e. galaxy concentration) with galaxy morphology is sometimes used in automated schemes to determine the Hubble type of distant galaxies. Sérsic indices have also been shown to correlate with the mass of the supermassive black hole at the centers of the galaxies.
Sérsic profiles can also be used to describe dark matter halos, where the Sérsic index correlates with halo mass.
Generalizations of the Sérsic profile
The brightest elliptical galaxies often have low-density cores that are not well described by Sérsic's law. The core-Sérsic family of models was introduced to describe such galaxies. Core-Sérsic models have an additional set of parameters that describe the core.
Dwarf elliptical galaxies and bulges often have point-like nuclei that are also not well described by Sérsic's law. These galaxies are often fit by a Sérsic model with an added central component representing the nucleus.
The Einasto profile is mathematically identical to the Sérsic profile, except that is replaced by , the volume density, and is replaced by , the internal (not projected on the sky) distance from the center.
See also
Elliptical galaxy
Galactic bulge
References
External links
Stellar systems following the R exp 1/m luminosity law A comprehensive paper that derives many properties of Sérsic models.
A Concise Reference to (Projected) Sérsic R1/n Quantities, Including Concentration, Profile Slopes, Petrosian Indices, and Kron Magnitudes.
Astrophysics
Equations of astronomy | Sérsic profile | [
"Physics",
"Astronomy"
] | 636 | [
"Concepts in astronomy",
"Astronomical sub-disciplines",
"Astrophysics",
"Equations of astronomy"
] |
2,618,270 | https://en.wikipedia.org/wiki/Gravitational%20instanton | In mathematical physics and differential geometry, a gravitational instanton is a four-dimensional complete Riemannian manifold satisfying the vacuum Einstein equations. They are so named because they are analogues in quantum theories of gravity of instantons in Yang–Mills theory. In accordance with this analogy with self-dual Yang–Mills instantons, gravitational instantons are usually assumed to look like four dimensional Euclidean space at large distances, and to have a self-dual Riemann tensor. Mathematically, this means that they are asymptotically locally Euclidean (or perhaps asymptotically locally flat) hyperkähler 4-manifolds, and in this sense, they are special examples of Einstein manifolds. From a physical point of view, a gravitational instanton is a non-singular solution of the vacuum Einstein equations with positive-definite, as opposed to Lorentzian, metric.
There are many possible generalizations of the original conception of a gravitational instanton: for example one can allow gravitational instantons to have a nonzero cosmological constant or a Riemann tensor which is not self-dual. One can also relax the boundary condition that the metric is asymptotically Euclidean.
There are many methods for constructing gravitational instantons, including the Gibbons–Hawking Ansatz, twistor theory, and the hyperkähler quotient construction.
Introduction
Gravitational instantons are interesting, as they offer insights into the quantization of gravity. For example, positive definite asymptotically locally Euclidean metrics are needed as they obey the positive-action conjecture; actions that are unbounded below create divergence in the quantum path integral.
A four-dimensional Ricci-flat Kähler manifold has anti-self-dual Riemann tensor with respect to the complex orientation.
Consequently, a simply-connected anti-self-dual gravitational instanton is a four-dimensional complete hyperkähler manifold.
Gravitational instantons are analogous to self-dual Yang–Mills instantons.
Several distinctions can be made with respect to the structure of the Riemann curvature tensor, pertaining to flatness and self-duality. These include:
Einstein (non-zero cosmological constant)
Ricci flatness (vanishing Ricci tensor)
Conformal flatness (vanishing Weyl tensor)
Self-duality
Anti-self-duality
Conformally self-dual
Conformally anti-self-dual
Taxonomy
By specifying the 'boundary conditions', i.e. the asymptotics of the metric 'at infinity' on a noncompact Riemannian manifold, gravitational instantons are divided into a few classes, such as asymptotically locally Euclidean spaces (ALE spaces), asymptotically locally flat spaces (ALF spaces).
They can be further characterized by whether the Riemann tensor is self-dual, whether the Weyl tensor is self-dual, or neither; whether or not they are Kähler manifolds; and various characteristic classes, such as Euler characteristic, the Hirzebruch signature (Pontryagin class), the Rarita–Schwinger index (spin-3/2 index), or generally the Chern class. The ability to support a spin structure (i.e. to allow consistent Dirac spinors) is another appealing feature.
List of examples
Eguchi et al. list a number of examples of gravitational instantons. These include, among others:
Flat space R4, the torus T4 and the Euclidean de Sitter space S4, i.e. the standard metric on the 4-sphere.
The product of spheres .
The Schwarzschild metric and the Kerr metric .
The Eguchi–Hanson instanton , given below.
The Taub–NUT solution, given below.
The Fubini–Study metric on the complex projective plane Note that the complex projective plane does not support well-defined Dirac spinors. That is, it is not a spin structure. It can be given a spinc structure, however.
The Page space, which exhibits an explicit Einstein metric on the connected sum of two oppositely oriented complex projective planes .
The Gibbons–Hawking multi-center metrics, given below.
The Taub-bolt metric and the rotating Taub-bolt metric. The "bolt" metrics have a cylindrical-type coordinate singularity at the origin, as compared to the "nut" metrics, which have a sphere coordinate singularity. In both cases, the coordinate singularity can be removed by switching to Euclidean coordinates at the origin.
The K3 surfaces.
The ALE (asymptotically locally Euclidean) anti-self-dual manifolds. Among these, the simply connected ones are all hyper-Kähler, and each one is asymptotic to a flat cone over modulo a finite subgroup. Each finite sub-group of actually occurs. The complete list of possibilities consists of the cyclic groups together with the inverse images of the dihedral groups, the tetrahedral group, the octahedral group, and the icosahedral group under the double cover . Note that corresponds to the Eguchi–Hanson instanton, while for higher k, the cyclic group corresponds to the Gibbons–Hawking multi-center metrics, each of which is diffeomorphic to the space obtained from the disjoint union of k copies of by using the Dynkin diagram as a plumbing diagram.
This is a very incomplete list; there are many other possibilities, not all of which have been classified.
Examples
It will be convenient to write the gravitational instanton solutions below using left-invariant 1-forms on the three-sphere S3 (viewed as the group Sp(1) or SU(2)). These can be defined in terms of Euler angles by
Note that for cyclic.
Taub–NUT metric
Eguchi–Hanson metric
The Eguchi–Hanson space is defined by a metric on the cotangent bundle of the 2-sphere. This metric is
ds² = ( 1 − a⁴/r⁴ )^(−1) dr² + (r²/4)( 1 − a⁴/r⁴ ) σ₃² + (r²/4)( σ₁² + σ₂² )
where . This metric is smooth everywhere if it has no conical singularity at , . For this happens if has a period of , which gives a flat metric on R4; However, for this happens if has a period of .
Asymptotically (i.e., in the limit ) the metric looks like
which naively looks like the flat metric on R4. However, for , has only half the usual periodicity, as we have seen. Thus the metric is asymptotically R4 with the identification , which is a Z2 subgroup of SO(4), the rotation group of R4. Therefore, the metric is said to be asymptotically
R4/Z2.
There is a transformation to another coordinate system, in which the metric looks like
where
(For a = 0, , and the new coordinates are defined as follows: one first defines and then parametrizes , and by the R3 coordinates , i.e. ).
In the new coordinates, has the usual periodicity
One may replace V by
For some n points , i = 1, 2..., n.
This gives a multi-center Eguchi–Hanson gravitational instanton, which is again smooth everywhere if the angular coordinates have the usual periodicities (to avoid conical singularities). The asymptotic limit () is equivalent to taking all to zero, and by changing coordinates back to r, and , and redefining , we get the asymptotic metric
This is R4/Zn = C2/Zn, because it is R4 with the angular coordinate replaced by , which has the wrong periodicity ( instead of ). In other words, it is R4 identified under , or, equivalently, C2 identified under zi ~ zi for i = 1, 2.
To conclude, the multi-center Eguchi–Hanson geometry is a Kähler Ricci flat geometry which is asymptotically C2/Zn. According to Yau's theorem this is the only geometry satisfying these properties. Therefore, this is also the geometry of a C2/Zn orbifold in string theory after its conical singularity has been smoothed away by its "blow up" (i.e., deformation).
Gibbons–Hawking multi-centre metrics
The Gibbons–Hawking multi-center metrics are given by
ds² = V(x)^(−1) ( dτ + ω · dx )² + V(x) dx · dx,
where
∇V = ± ∇ × ω,    V = ε + Σ_{i=1}^{N} 2M_i / |x − x_i|.
Here, ε = 1 corresponds to multi-Taub–NUT, ε = 0 and N = 1 is flat space, and ε = 0 and N = 2 is the Eguchi–Hanson solution (in different coordinates).
FLRW-metrics as gravitational instantons
In 2021 it was found that if one views the curvature parameter of a foliated maximally symmetric space as a continuous function, the gravitational action, as a sum of the Einstein–Hilbert action and the Gibbons–Hawking–York boundary term, becomes that of a point particle. Then the trajectory is the scale factor and the curvature parameter is viewed as the potential. For the solutions restricted like this, general relativity takes the form of a topological Yang–Mills theory.
See also
Gravitational anomaly
Hyperkähler manifold
References
Riemannian manifolds
Quantum gravity
Mathematical physics
4-manifolds | Gravitational instanton | [
"Physics",
"Mathematics"
] | 1,860 | [
"Applied mathematics",
"Theoretical physics",
"Unsolved problems in physics",
"Space (mathematics)",
"Metric spaces",
"Riemannian manifolds",
"Quantum gravity",
"Mathematical physics",
"Physics beyond the Standard Model"
] |
2,618,778 | https://en.wikipedia.org/wiki/Polyolefin | A polyolefin is a type of polymer with the general formula (CH2CHR)n where R is an alkyl group. They are usually derived from a small set of simple olefins (alkenes). Dominant in a commercial sense are polyethylene and polypropylene. More specialized polyolefins include polyisobutylene and polymethylpentene. They are all colorless or white oils or solids. Many copolymers are known, such as polybutene, which derives from a mixture of different butene isomers. The name of each polyolefin indicates the olefin from which it is prepared; for example, polyethylene is derived from ethylene, and polymethylpentene is derived from 4-methyl-1-pentene. Polyolefins are not olefins themselves because the double bond of each olefin monomer is opened in order to form the polymer. Monomers having more than one double bond such as butadiene and isoprene yield polymers that contain double bonds (polybutadiene and polyisoprene) and are usually not considered polyolefins. Polyolefins are the foundations of many chemical industries.
Industrial polyolefins
Most polyolefins are made by treating the monomer with metal-containing catalysts. The reaction is highly exothermic.
Traditionally, Ziegler-Natta catalysts are used. Named after the Nobelists Karl Ziegler and Giulio Natta, these catalysts are prepared by treating titanium chlorides with organoaluminium compounds, such as triethylaluminium. In some cases, the catalyst is insoluble and is used as a slurry. In the case of polyethylene, chromium-containing Phillips catalysts are used often. Kaminsky catalysts are yet another family of catalysts that are amenable to systematic changes to modify the tacticity of the polymer, especially applicable to polypropylene.
Thermoplastic polyolefins
low-density polyethylene (LDPE),
linear low-density polyethylene (LLDPE),
very-low-density polyethylene (VLDPE),
ultra-low-density polyethylene (ULDPE),
medium-density polyethylene (MDPE),
polypropylene (PP),
polymethylpentene (PMP),
polybutene-1 (PB-1);
ethylene-octene copolymers,
stereo-block PP,
olefin block copolymers,
propylene–butane copolymers;
Polyolefin elastomers (POE)
polyisobutylene (PIB),
poly(a-olefin)s,
ethylene propylene rubber (EPR),
ethylene propylene diene monomer (M-class) rubber (EPDM rubber).
Properties
Polyolefin properties range from liquidlike to rigid solids, and are primarily determined by their molecular weight and degree of crystallinity. Polyolefin degrees of crystallinity range from 0% (liquidlike) to 60% or higher (rigid plastics). Crystallinity is primarily governed by the lengths of the polymer's crystallizable sequences established during polymerization. Examples include adding a small percentage of comonomer like 1-hexene or 1-octene during the polymerization of ethylene, or occasional irregular insertions ("stereo" or "regio" defects) during the polymerization of isotactic propylene. The polymer's ability to crystallize to high degrees decreases with increasing content of defects.
Low degrees of crystallinity (0–20%) are associated with liquidlike-to-elastomeric properties. Intermediate degrees of crystallinity (20–50%) are associated with ductile thermoplastics, and degrees of crystallinity over 50% are associated with rigid and sometimes brittle plastics.
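As one hedged illustration of how a degree of crystallinity is assigned to a sample in practice, a common two-phase estimate uses the measured density together with the densities of the fully amorphous and fully crystalline phases; the polyethylene-like values below are assumed for illustration only and are not taken from this article:

```python
def crystallinity_from_density(rho, rho_amorphous, rho_crystalline):
    """Mass-fraction crystallinity from a simple two-phase density model."""
    return (rho_crystalline * (rho - rho_amorphous)) / (rho * (rho_crystalline - rho_amorphous))

# Illustrative densities in g/cm^3 (assumed): amorphous ~0.855, crystalline ~1.000
xc = crystallinity_from_density(rho=0.945, rho_amorphous=0.855, rho_crystalline=1.000)
print(f"Estimated crystallinity: {xc:.0%}")
```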
Polyolefin surfaces are not effectively joined together by solvent welding because they have excellent chemical resistance and are unaffected by common solvents. They inherently have very low surface energies and don't wet-out well (the process of being covered and filled with resin). They can be adhesively bonded after surface treatment, and by some superglues (cyanoacrylates) and reactive (meth)acrylate glues. They are extremely inert chemically but exhibit decreased strength at lower and higher temperatures. As a result of this, thermal welding is a common bonding technique.
Practically all polyolefins that are of any practical or commercial importance are poly-alpha-olefin (or poly-α-olefin or polyalphaolefin, sometimes abbreviated as PAO), a polymer made by polymerizing an alpha-olefin. An alpha-olefin (or α-olefin) is an alkene where the carbon-carbon double bond starts at the α-carbon atom, i.e. the double bond is between the #1 and #2 carbons in the molecule. Alpha-olefins such as 1-hexene may be used as co-monomers to give an alkyl branched polymer (see chemical structure below), although 1-decene is most commonly used for lubricant base stocks.
Many poly-alpha-olefins have flexible alkyl branching groups on every other carbon of their polymer backbone chain. These alkyl groups, which can shape themselves in numerous conformations, make it very difficult for the polymer molecules to align themselves up side-by-side in an orderly way. This results in lower contact surface area between the molecules and decreases the intermolecular interactions between molecules. Therefore, many poly-alpha-olefins do not crystallize or solidify easily and are able to remain oily, viscous liquids even at lower temperatures. Low molecular weight poly-alpha-olefins are useful as synthetic lubricants such as synthetic motor oils for vehicles and can be used over a wide temperature range.
Even polyethylenes copolymerized with a small amount of alpha-olefins (such as 1-hexene, 1-octene, or longer) are more flexible than simple straight-chain high-density polyethylene, which has no branching. The methyl branch groups on a polypropylene polymer are not long enough to make typical commercial polypropylene more flexible than polyethylene.
Uses
Polyethylene:
HDPE: used for film (wrapping of goods), blow molding (e.g. bottles), injection molding (e.g., toys, screw caps), extrusion coating (e.g., coating on milk cartons), piping for distributing water and gas, insulation for telephone cables. Wire and cable insulation.
LDPE: mainly (70%) used for film.
Polypropylene: injection molding, fibers, and film. Compared to polyethylene, polypropylene is stiffer but less prone to breaking. It is less dense but shows more chemical resistance.
Synthetic base oil (by far the most used one): industrial and automotive lubricants.
Polyolefins are used for blow moulded or rotationally moulded components, e.g. toys, for heat-shrink tubing used to mechanically and electrically protect connections in electronics, and for rash guards or undergarments for wetsuits.
Polyolefin sheets or foams are used in a wide variety of packaging applications, sometimes in direct contact with food.
Polyolefin elastomer POE is used as a main ingredient in the molded flexible foam technology such as in the fabrication of self skinned footwear (for example, Crocs shoes), seat cushions, arm rests, spa pillows, etc. Hydrogenated polyalphaolefin (PAO) is used as a radar coolant. Head makes polyolefin tennis racket strings. Polyolefin is also used in pharmaceutical and medical industry for HEPA filter certification—a PAO aerosol is passed through the filters and the air that exits is measured with an aerosol detector.
Elastolefin is a fiber used in fabrics. IKEA's Better Shelter uses structural panels made out of polyolefin foam, stating, "They are tough and durable." Piping systems for the conveyance of water, chemicals or gases are commonly produced in polypropylene, and to a much greater extent polyethylene. Piping systems in high-density polyethylene (HDPE, PE100, PE80) are fast becoming the most commonly used drinking water, waste water and natural gas distribution piping systems in the world.
Polyalphaolefin, commonly referred to as a synthetic hydrocarbon, is used in various types of air compressors and turbines including reciprocating, centrifugal, and rotary screw compressors where high pressures and temperatures can be an issue. These base fluids are the most widely used variety of synthetic oil blends mainly for their ability to maintain performance in spite of temperature extremes and their similarity to—but improved performance over—mineral oil base fluids.
Polypropylene is commonly used in car bumpers, interior trims, and other components where TiO₂ is added to improve the UV stability of the plastic, ensuring that parts do not degrade or lose color when exposed to sunlight over time. Polyethylene films are widely used in agriculture for greenhouses, mulching, and silage wraps.
Recycling
Although the rhetoric has often been rosier than the practice, real recycling of polyolefins has been insufficient in the decades since they became ubiquitous, often not due to technical limitations but because of economic realities. Polyolefin waste can potentially be converted into many different products, including pure polymers, naphtha, clean fuels, or monomers, but in practice only to the extent that the processes involved are not money-losing. In the 2020s, improved catalysts have been developed that may bring commercial recycling of polyolefins closer to a circular economy based on recovery of the monomers, more comparable to the existing situation with PET polyester bottles.
References
External links
MSDS (Material Safety Data Sheet)
Plastics | Polyolefin | [
"Physics"
] | 2,152 | [
"Amorphous solids",
"Unsolved problems in physics",
"Plastics"
] |
2,619,011 | https://en.wikipedia.org/wiki/Coordinate-measuring%20machine | A coordinate-measuring machine (CMM) is a device that measures the geometry of physical objects by sensing discrete points on the surface of the object with a probe. Various types of probes are used in CMMs, the most common being mechanical and laser sensors, though optical and white light sensors do exist. Depending on the machine, the probe position may be manually controlled by an operator, or it may be computer controlled. CMMs (coordinate-measuring machine) specify a probe's position in terms of its displacement from a reference position in a three-dimensional Cartesian coordinate system (i.e., with XYZ axes). In addition to moving the probe along the X, Y, and Z axes, many machines also allow the probe angle to be controlled to allow measurement of surfaces that would otherwise be unreachable.
Description
The typical 3D "bridge" CMM allows probe movement along three axes, X, Y, and Z, which are orthogonal to each other in a three-dimensional Cartesian coordinate system. Each axis has a sensor that monitors the position of the probe on that axis, with typical accuracy in the order of microns. When the probe contacts (or otherwise detects) a particular location on the object, the machine samples the axis position sensors, thus measuring the location of one point on the object's surface, as well as the 3-dimensional vector of the measurement taken. This process is repeated as necessary, moving the probe each time, to produce a "point cloud" which describes the surface areas of interest. The points can be measured either manually by an operator, automatically via Direct Computer Control (DCC), or automatically using scripted programs; thus, an automated CMM is a specialized form of industrial robot.
A common use of CMMs is in manufacturing and assembly processes to test a part or assembly against the design intent. The measured points can be used to verify the distance between features. They can also be used to construct geometric features such as cylinders and planes for GD&T so that aspects like roundness, flatness, and perpendicularity can be assessed.
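As a hedged sketch of how probed points are turned into a GD&T-style evaluation, the following Python example fits a least-squares plane to a small point cloud and reports flatness as the spread of the points along the plane normal (all names and point values are illustrative, not from any particular machine or standard):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) array of probed XYZ points.
    Returns the centroid and the unit normal of the best-fit plane."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def flatness(points):
    """Flatness: total spread of the probed points along the best-fit plane normal."""
    centroid, normal = fit_plane(points)
    d = (points - centroid) @ normal
    return d.max() - d.min()

# Hypothetical probed points on a nominally flat surface (mm)
pts = np.array([[0, 0, 0.000], [10, 0, 0.002], [0, 10, -0.001],
                [10, 10, 0.003], [5, 5, 0.001]], dtype=float)
print(f"flatness: {flatness(pts):.4f} mm")
```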
Technical facts
Parts
Coordinate-measuring machines include three main components:
The main structure includes three axes of motion. The material used to construct the moving frame has varied over the years. Granite and steel were used in the early CMMs. Today all the major CMM manufacturers build frames from materials like granite, aluminum alloy or some derivative, and ceramic to increase the stiffness of the Z axis for scanning applications. Few CMM builders today still manufacture granite-frame CMMs due to market requirements for improved metrology dynamics and increasing trends to install CMMs outside of the quality lab. The increasing trend towards scanning also requires the CMM's Z axis to be stiffer, and new materials have been introduced such as black granite, ceramic, and silicon carbide.
A probing system.
A data collection and reduction system — this typically includes a machine controller, desktop computer, and application software.
Availability
These machines are available as stationary or portable.
Accuracy
The accuracy of coordinate measurement machines is typically given as an uncertainty factor as a function over distance. For a CMM using a touch probe, this relates to the repeatability of the probe and the accuracy of the linear scales. Typical probe repeatability can result in measurements within one micron or 0.00005 inch (half a ten thousandth) over the entire measurement volume. For 3, 3+2, and 5 axis machines, probes are routinely calibrated using traceable standards and the machine movement is verified using gauges to ensure accuracy.
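Manufacturer accuracy specifications are commonly quoted as a length-dependent maximum permissible error of the form E = A + L/K micrometres; the constants used below are illustrative assumptions, not the specification of any particular machine:

```python
def max_permissible_error_um(length_mm, A=1.5, K=333.0):
    """Length-dependent error spec of the common form E = A + L/K (micrometres),
    with the measured length L in millimetres; A and K are illustrative constants."""
    return A + length_mm / K

for L in (10, 100, 500, 1000):
    print(f"L = {L:4d} mm -> MPE = {max_permissible_error_um(L):.2f} um")
```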
Specific parts
Machine body
The first CMM was developed by the Ferranti Company of Scotland in the 1950s as the result of a direct need to measure precision components in their military products, although this machine only had 2 axes. The first 3-axis models began appearing in the 1960s (made by DEA of Italy and LK of the UK), and computer control debuted in the early 1970s, but the first working CMM was developed and put on sale by Brown & Sharpe in Melbourne, England. Leitz Germany subsequently produced a fixed machine structure with moving table.
In modern machines, the gantry-type superstructure has two legs and is often called a bridge. This moves freely along the granite table with one leg (often referred to as the inside leg) following a guide rail attached to one side of the granite table. The opposite leg (often outside leg) simply rests on the granite table following the vertical surface contour. Air bearings are the chosen method for ensuring friction-free travel. In these, compressed air is forced through a series of very small holes in a flat bearing surface to provide a smooth-but-controlled air cushion on which the CMM can move in a nearly frictionless manner which can be compensated for through software. The movement of the bridge or gantry along the granite table forms one axis of the XY plane. The bridge of the gantry contains a carriage which traverses between the inside and outside legs and forms the other horizontal axis. The third axis of movement (Z axis) is provided by the addition of a vertical quill or spindle which moves up and down through the center of the carriage. The touch probe forms the sensing device on the end of the quill. The movement of the X, Y, and Z axes fully describes the measuring envelope. Optional rotary tables can be used to enhance the approachability of the measuring probe to complicated workpieces. The rotary table as a fourth drive axis does not enhance the measuring dimensions, which remain 3D, but it does provide a degree of flexibility. Some touch probes are themselves powered rotary devices with the probe tip able to swivel vertically through more than 180° and through a full 360° rotation.
CMMs are now also available in a variety of other forms. These include CMM arms that use angular measurements taken at the joints of the arm to calculate the position of the stylus tip, and can be outfitted with probes for laser scanning and optical imaging. Such arm CMMs are often used where their portability is an advantage over traditional fixed-bed CMMs: by storing measured locations, programming software also allows moving the measuring arm itself, and its measurement volume, around the part to be measured during a measurement routine. Because CMM arms imitate the flexibility of a human arm, they are also often able to reach the insides of complex parts that could not be probed using a standard three axis machine.
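The idea of computing a stylus-tip position from joint angles can be sketched with a simple planar forward-kinematics example (real measuring arms have six or seven rotary axes in 3D; the two-dimensional version below, with made-up link lengths and angles, is only an illustration of the principle):

```python
import numpy as np

def stylus_tip_position(joint_angles_deg, link_lengths_mm):
    """Planar forward kinematics for an articulated measuring arm:
    each link rotates by its joint angle relative to the previous link."""
    x = y = 0.0
    heading = 0.0
    for angle, length in zip(joint_angles_deg, link_lengths_mm):
        heading += np.radians(angle)
        x += length * np.cos(heading)
        y += length * np.sin(heading)
    return x, y

# Illustrative 3-joint arm (angles in degrees, link lengths in mm)
print(stylus_tip_position([30.0, -45.0, 10.0], [400.0, 350.0, 120.0]))
```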
Mechanical probe
In the early days of coordinate measurement, mechanical probes were fitted into a special holder on the end of the quill. A very common probe was made by soldering a hard ball to the end of a shaft. This was ideal for measuring a whole range of flat-face, cylindrical, or spherical surfaces. Other probes were ground to specific shapes, for example a quadrant, to enable measurement of special features. These probes were physically held against the workpiece with the position in space being read from a 3-axis digital readout (DRO) or, in more advanced systems, being logged into a computer by means of a footswitch or similar device. Measurements taken by this contact method were often unreliable as machines were moved by hand and each machine operator applied different amounts of pressure on the probe or adopted differing techniques for the measurement.
A further development was the addition of motors for driving each axis. Operators no longer had to physically touch the machine but could drive each axis using a handbox with joysticks in much the same way as with modern remote controlled cars. Measurement accuracy and precision improved dramatically with the invention of the electronic touch trigger probe. The pioneer of this new probe device was David McMurtry who subsequently formed what is now Renishaw plc. Although still a contact device, the probe had a spring-loaded steel ball (later ruby ball) stylus. As the probe touched the surface of the component, the stylus deflected and simultaneously sent the X,Y,Z coordinate information to the computer. Measurement errors caused by individual operators became fewer, and the stage was set for the introduction of CNC operations and the coming of age of CMMs.
Optical probes are lens-and-CCD systems, which are moved like the mechanical ones, and are aimed at the point of interest, instead of touching the material. The captured image of the surface will be enclosed in the borders of a measuring window, until the residue is adequate to contrast between black and white zones. The dividing curve can be calculated to a point, which is the wanted measuring point in space. The horizontal information on the CCD is 2D (XY) and the vertical position is the position of the complete probing system on the stand Z-drive (or other device component).
Scanning probe systems
There are newer models that have probes that drag along the surface of the part while taking points at specified intervals, known as scanning probes. This method of CMM inspection is often more accurate than the conventional touch-probe method and most times faster as well.
The next generation of scanning, known as noncontact scanning, which includes high speed laser single point triangulation, laser line scanning, and white light scanning, is advancing very quickly. This method uses either laser beams or white light that are projected against the surface of the part. Many thousands of points can then be taken and used not only to check size and position, but to create a 3D image of the part as well. This "point-cloud data" can then be transferred to CAD software to create a working 3D model of the part. These optical scanners are often used on soft or delicate parts or to facilitate reverse engineering.
Micrometrology probes
Probing systems for microscale metrology applications are another emerging area. There are several commercially available coordinate measuring machines that have a microprobe integrated into the system, several specialty systems at government laboratories, and any number of university-built metrology platforms for microscale metrology. Although these machines are good and in many cases excellent metrology platforms with nanometric scales, their primary limitation is a reliable, robust, capable micro/nano probe. Challenges for microscale probing technologies include the need for a high-aspect-ratio probe giving the ability to access deep, narrow features with low contact forces so as to not damage the surface and high precision (nanometer level). Additionally, microscale probes are susceptible to environmental conditions such as humidity and surface interactions such as stiction (caused by adhesion, meniscus, and/or Van der Waals forces among others).
Technologies to achieve microscale probing include scaled-down version of classical CMM probes, optical probes, and a standing wave probe, among others. However, current optical technologies cannot be scaled small enough to measure deep, narrow features, and optical resolution is limited by the wavelength of light. X-ray imaging provides a picture of the feature but no traceable metrology information.
Physical principles
Optical probes and laser probes can be used (if possible in combination), which change CMMs to measuring microscopes or multi-sensor measuring machines. Fringe projection systems, theodolite triangulation systems, and laser distance and triangulation systems are not called measuring machines, but the measuring result is the same: a space point. Laser probes are used to detect the distance between the surface and the reference point on the end of the kinematic chain (that is, the end of the Z-drive component). This can use an interferometrical function, focus variation, light deflection, or a beam-shadowing principle.
Portable coordinate-measuring machines
Whereas traditional CMMs use a probe that moves on three Cartesian axes to measure an object's physical characteristics, portable CMMs use either articulated arms or, in the case of optical CMMs, arm-free scanning systems that use optical triangulation methods and enable total freedom of movement around the object.
Portable CMMs with articulated arms have six or seven axes that are equipped with rotary encoders, instead of linear axes. Portable arms are lightweight (typically less than 20 pounds) and can be carried and used nearly anywhere. However, optical CMMs are increasingly being used in the industry. Designed with compact linear or matrix array cameras (like the Microsoft Kinect), optical CMMs are smaller than portable CMMs with arms, feature no wires, and enable users to easily take 3D measurements of all types of objects located almost anywhere.
Certain nonrepetitive applications such as reverse engineering, rapid prototyping, and large-scale inspection of parts of all sizes are ideally suited for portable CMMs. The benefits of portable CMMs are multifold. Users have the flexibility in taking 3D measurements of all types of parts and in the most remote and difficult locations. They are easy to use and do not require a controlled environment to take accurate measurements. Moreover, portable CMMs tend to cost less than traditional CMMs.
The inherent trade-off of portable CMMs is manual operation (they always require a human to use them). In addition, their overall accuracy can be somewhat lower than that of a bridge-type CMM, making them less suitable for some applications.
Multisensor-measuring machines
Traditional CMM technology using touch probes is today often combined with other measurement technology. This includes laser, video, or white light sensors to provide what is known as multisensor measurement.
Standardization
To verify the performance of a coordinate measurement machine, the ISO 10360 series is available. This series of standards defines the characteristics of the probing system and the length measurement error:
PForm: probing deviation when measuring the form of a sphere
PSize: probing deviation when measuring the size of a sphere
EUni: deviation of measuring length on spheres from one direction
EBi: deviation of measuring length on spheres from left and right
The ISO 10360 series consists of the following parts:
ISO 10360-1 Geometrical product specifications (GPS) -- Acceptance and reverification tests for coordinate measuring machines (CMM) -- Part 1: Vocabulary
ISO 10360-2 Geometrical product specifications (GPS) -- Acceptance and reverification tests for coordinate measuring machines (CMM) -- Part 2: CMMs used for measuring linear dimensions
ISO 10360-7 Geometrical product specifications (GPS) -- Acceptance and reverification tests for coordinate measuring machines (CMM) -- Part 7: CMMs equipped with imaging probing systems
ISO 10360-8 Geometrical product specifications (GPS) -- Acceptance and reverification tests for coordinate measuring systems (CMS) -- Part 8: CMMs with optical distance sensors
See also
Outline of metrology and measurement
List of measuring instruments
Universal measuring machine
3D scanner
References
Industrial machinery
Metrology
Metalworking measuring instruments
Dimensional instruments
Positioning instruments | Coordinate-measuring machine | [
"Physics",
"Mathematics",
"Engineering"
] | 3,017 | [
"Dimensional instruments",
"Physical quantities",
"Quantity",
"Size",
"Industrial machinery"
] |
2,621,463 | https://en.wikipedia.org/wiki/Crystal%20Ball%20function | The Crystal Ball function, named after the Crystal Ball Collaboration (hence the capitalized initial letters), is a probability density function commonly used to model various lossy processes in high-energy physics. It consists of a Gaussian core portion and a power-law low-end tail, below a certain threshold. The function itself and its first derivative are both continuous.
The Crystal Ball function is given by:

f(x; α, n, x̄, σ) = N · exp( −(x − x̄)² / (2σ²) )          for (x − x̄)/σ > −α,
f(x; α, n, x̄, σ) = N · A · ( B − (x − x̄)/σ )^(−n)        for (x − x̄)/σ ≤ −α,

where

A = (n/|α|)^n · exp( −|α|²/2 ),
B = n/|α| − |α|,
N = 1 / ( σ (C + D) ),
C = (n/|α|) · (1/(n − 1)) · exp( −|α|²/2 ),
D = √(π/2) · ( 1 + erf( |α|/√2 ) ).

N (Skwarnicki 1986) is a normalization factor and α, n, x̄, and σ are parameters which are fitted to the data. erf is the error function.
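A direct numerical implementation of the piecewise definition above might look like the following sketch (unnormalized, assuming α > 0 so that the power-law tail lies on the low side; the parameter values in the example are arbitrary):

```python
import math

def crystal_ball(x, alpha, n, xbar, sigma):
    """Unnormalized Crystal Ball shape: Gaussian core with a power-law tail on the
    low side, assuming alpha > 0. The normalization factor N is omitted here."""
    t = (x - xbar) / sigma
    if t > -alpha:
        return math.exp(-0.5 * t * t)
    A = (n / alpha) ** n * math.exp(-0.5 * alpha * alpha)
    B = n / alpha - alpha
    return A * (B - t) ** (-n)

# Arbitrary example parameters
for x in (-5.0, -2.0, 0.0, 2.0):
    print(x, crystal_ball(x, alpha=1.5, n=4.0, xbar=0.0, sigma=1.0))
```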
External links
J. E. Gaiser, Appendix-F Charmonium Spectroscopy from Radiative Decays of the J/Psi and Psi-Prime, Ph.D. Thesis, SLAC-R-255 (1982). (This is a 205-page document in .pdf form – the function is defined on p. 178.)
M. J. Oreglia, A Study of the Reactions psi prime --> gamma gamma psi, Ph.D. Thesis, SLAC-R-236 (1980), Appendix D.
T. Skwarnicki, A study of the radiative CASCADE transitions between the Upsilon-Prime and Upsilon resonances, Ph.D Thesis, DESY F31-86-02(1986), Appendix E.
Functions and mappings
Continuous distributions
Experimental particle physics | Crystal Ball function | [
"Physics",
"Mathematics"
] | 292 | [
"Functions and mappings",
"Mathematical analysis",
"Mathematical objects",
"Experimental physics",
"Particle physics",
"Mathematical relations",
"Experimental particle physics"
] |
2,623,096 | https://en.wikipedia.org/wiki/Spin%20valve | A spin valve is a device, consisting of two or more conducting magnetic materials, whose electrical resistance can change between two values depending on the relative alignment of the magnetization in the layers. The resistance change is a result of the giant magnetoresistive effect. The magnetic layers of the device align "up" or "down" depending on an external magnetic field. In the simplest case, a spin valve consists of a non-magnetic material sandwiched between two ferromagnets, one of which is fixed (pinned) by an antiferromagnet which acts to raise its magnetic coercivity and behaves as a "hard" layer, while the other is free (unpinned) and behaves as a "soft" layer. Due to the difference in coercivity, the soft layer changes polarity at lower applied magnetic field strength than the hard one. Upon application of a magnetic field of appropriate strength, the soft layer switches polarity, producing two distinct states: a parallel, low-resistance state, and an antiparallel, high-resistance state. The invention of spin valves is credited to Dr. Stuart Parkin and his team at IBM Almaden Research Centre. Dr. Parkin is now serving as the Managing Director of the Max Planck Institute of Microstructure Physics in Halle, Germany.
How it works
Spin valves work because of a quantum property of electrons (and other particles) called spin. Due to a split in the density of states of electrons at the Fermi energy in ferromagnets, there is a net spin polarisation. An electric current passing through a ferromagnet therefore carries both charge and a spin component. In comparison, a normal metal has an equal number of electrons with up and down spins so, in equilibrium situations, such materials can sustain a charge current with a zero net spin component. However, by passing a current from a ferromagnet into a normal metal it is possible for spin to be transferred. A normal metal can thus transfer spin between separate ferromagnets, subject to a long enough spin diffusion length.
Spin transmission depends on the alignment of magnetic moments in the ferromagnets. If a current is passing into a ferromagnet whose majority spin is spin up, for example, then electrons with spin up will pass through relatively unhindered, while electrons with spin down will either 'reflect' or spin flip scatter to spin up upon encountering the ferromagnet to find an empty energy state in the new material. Thus if both the fixed and free layers are polarised in the same direction, the device has relatively low electrical resistance, whereas if the applied magnetic field is reversed and the free layer's polarity also reverses, then the device has a higher resistance due to the extra energy required for spin flip scattering.
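The resistance contrast can be illustrated with the simple two-current model, in which the spin-up and spin-down channels conduct in parallel and each ferromagnetic layer presents a low resistance to its majority spins and a high resistance to its minority spins (the resistance values below are illustrative, not measured data):

```python
def parallel(a, b):
    """Resistance of two resistors in parallel."""
    return a * b / (a + b)

def spin_valve_resistances(r_majority, r_minority):
    """Two-current model: spin-up and spin-down channels conduct independently.
    Returns (R_parallel_alignment, R_antiparallel_alignment, GMR ratio)."""
    # Parallel magnetizations: one spin channel crosses two "easy" layers,
    # the other crosses two "hard" layers.
    r_p = parallel(2 * r_majority, 2 * r_minority)
    # Antiparallel magnetizations: each spin channel crosses one easy and one hard layer.
    r_ap = parallel(r_majority + r_minority, r_majority + r_minority)
    return r_p, r_ap, (r_ap - r_p) / r_p

print(spin_valve_resistances(r_majority=1.0, r_minority=4.0))  # illustrative ohm values
```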
Antiferromagnetic and non-magnetic layers
An antiferromagnetic layer is required to pin one of the ferromagnetic layers (i.e., make it fixed or magnetically hard). This results from a large negative exchange coupling energy between ferromagnets and antiferromagnets in contact.
The non-magnetic layer is required to decouple the two ferromagnetic layers so that at least one of them remains free (magnetically soft).
Pseudo spin valves
The basic operating principles of a pseudo spin valve are identical to that of an ordinary spin valve, but instead of changing the magnetic coercivity of the different ferromagnetic layers by pinning one with an antiferromagnetic layer, the two layers are made of different ferromagnets with different coercivities e.g., NiFe and Co. Note that coercivities are largely an extrinsic property of materials and thus determined by processing conditions.
Applications
Spin valves are used in magnetic sensors and hard disk read heads. They are also used in magnetic random access memories (MRAM).
See also
Spin-transfer torque
Magnetic tunnel junction
RKKY interaction
References
Quantum electronics
Spintronics | Spin valve | [
"Physics",
"Materials_science"
] | 837 | [
"Quantum electronics",
"Spintronics",
"Quantum mechanics",
"Condensed matter physics",
"Nanotechnology"
] |
25,225,297 | https://en.wikipedia.org/wiki/Saturation%20velocity | Saturation velocity is the maximum velocity a charge carrier in a semiconductor, generally an electron, attains in the presence of very high electric fields. When this happens, the semiconductor is said to be in a state of velocity saturation. Charge carriers normally move at an average drift speed proportional to the electric field strength they experience temporally. The proportionality constant is known as mobility of the carrier, which is a material property. A good conductor would have a high mobility value for its charge carrier, which means higher velocity, and consequently higher current values for a given electric field strength. There is a limit though to this process and at some high field value, a charge carrier can not move any faster, having reached its saturation velocity, due to mechanisms that eventually limit the movement of the carriers in the material.
As the applied electric field increases from that point, the carrier velocity no longer increases because the carriers lose energy through increased levels of interaction with the lattice, by emitting phonons and even photons as soon as the carrier energy is large enough to do so.
Field effect transistors
Saturation velocity is a very important parameter in the design of semiconductor devices, especially field effect transistors, which are basic building blocks of almost all modern integrated circuits. Typical values of saturation velocity may vary greatly for different materials: for Si it is on the order of 1×10^7 cm/s, for GaAs 1.2×10^7 cm/s, while for 6H-SiC it is near 2×10^7 cm/s. Typical electric field strengths at which carrier velocity saturates are usually on the order of 10-100 kV/cm. Both the saturation field and the saturation velocity of a semiconductor material are typically strong functions of impurities, crystal defects and temperature.
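A widely used empirical description of the approach to saturation is a Caughey–Thomas-type velocity-field relation, sketched below with illustrative silicon-like parameters (assumed values, not taken from this article):

```python
def drift_velocity(E, mu_low=1400.0, v_sat=1.0e7, beta=2.0):
    """Empirical velocity-field relation v(E) = mu*E / (1 + (mu*E/v_sat)**beta)**(1/beta).
    E in V/cm, mu_low in cm^2/(V*s), v_sat in cm/s; parameters are illustrative."""
    v_lin = mu_low * E
    return v_lin / (1.0 + (v_lin / v_sat) ** beta) ** (1.0 / beta)

for E in (1e2, 1e3, 1e4, 1e5):   # fields from 100 V/cm to 100 kV/cm
    print(f"E = {E:8.0f} V/cm -> v = {drift_velocity(E):.3e} cm/s")
```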
Small scale devices
For extremely small scale devices, where the high-field regions may be comparable to or smaller than the average mean free path of the charge carrier, one can observe velocity overshoot, or hot-electron effects, which have become more important as transistor geometries continually decrease to enable the design of faster, larger and more dense integrated circuits. The regime in which the distance between the two terminals the electron moves between is much smaller than the mean free path is sometimes referred to as ballistic transport. There have been numerous attempts in the past to build transistors based on this principle, without much success. Nevertheless, the developing field of nanotechnology and new materials such as carbon nanotubes and graphene offer new hope.
Negative differential resistivity
Though in a semiconductor such as Si the saturation velocity of a carrier is the same as the peak velocity of the carrier, for some other materials with more complex energy band structures this is not true. In GaAs or InP, for example, the carrier drift velocity reaches a maximum as a function of field and then actually begins to decrease as the applied electric field is increased further. Carriers which have gained enough energy are kicked up to a different conduction band which presents a lower drift velocity and eventually a lower saturation velocity in these materials. This results in an overall decrease of current for higher voltage until all electrons are in the "slow" band, and this is the principle behind the operation of a Gunn diode, which can display negative differential resistivity. Due to the transfer of electrons to a different conduction band involved, such devices, usually single-terminal, are referred to as transferred electron devices, or TEDs.
Design considerations
When designing semiconductor devices, especially on a sub-micrometre scale as used in modern microprocessors, velocity saturation is an important design characteristic. Velocity saturation greatly affects the voltage transfer characteristics of a field-effect transistor, which is the basic device used in most integrated circuits. If a semiconductor device enters velocity saturation, an increase in voltage applied to the device will not cause a linear increase in current as would be expected by Ohm's law. Instead, the current may only increase by a small amount, or not at all. It is possible to take advantage of this result when trying to design a device that will pass a constant current regardless of the voltage applied, a current limiter in effect.
References
Transistors
Charge carriers
Physical quantities
Semiconductors | Saturation velocity | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 860 | [
"Physical phenomena",
"Matter",
"Physical quantities",
"Charge carriers",
"Semiconductors",
"Quantity",
"Materials",
"Electrical phenomena",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Physical properties",
"Electrical resistance and conductance"
] |
6,294,249 | https://en.wikipedia.org/wiki/Type-I%20superconductor | The interior of a bulk superconductor cannot be penetrated by a weak magnetic field, a phenomenon known as the Meissner effect. When the applied magnetic field becomes too large, superconductivity breaks down. Superconductors can be divided into two types according to how this breakdown occurs. In type-I superconductors, superconductivity is abruptly destroyed via a first order phase transition when the strength of the applied field rises above a critical value Hc. This type of superconductivity is normally exhibited by pure metals, e.g. aluminium, lead, and mercury. The only alloys known up to now which exhibit type I superconductivity are tantalum silicide (TaSi2). and BeAu
The covalent superconductor SiC:B, silicon carbide heavily doped with boron, is also type-I.
Depending on the demagnetization factor, one may obtain an intermediate state. This state, first described by Lev Landau, is a phase separation into macroscopic non-superconducting and superconducting domains forming a Husimi Q representation.
This behavior is different from type-II superconductors which exhibit two critical magnetic fields. The first, lower critical field occurs when magnetic flux vortices penetrate the material but the material remains superconducting outside of these microscopic vortices. When the vortex density becomes too large, the entire material becomes non-superconducting; this corresponds to the second, higher critical field.
The ratio of the London penetration depth λ to the superconducting coherence length ξ determines whether a superconductor is type-I or type-II. Type-I superconductors are those with κ = λ/ξ < 1/√2, and type-II superconductors are those with κ = λ/ξ > 1/√2.
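In code, the classification criterion is a one-liner; the numerical inputs below are illustrative only:

```python
import math

def superconductor_type(penetration_depth, coherence_length):
    """Classify via the Ginzburg-Landau parameter kappa = lambda / xi."""
    kappa = penetration_depth / coherence_length
    return "type-I" if kappa < 1 / math.sqrt(2) else "type-II"

# Illustrative values (same length units for both arguments)
print(superconductor_type(16, 1600))   # kappa << 1/sqrt(2): type-I
print(superconductor_type(200, 10))    # kappa >> 1/sqrt(2): type-II
```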
References
Superconductivity
Magnetism | Type-I superconductor | [
"Physics",
"Materials_science",
"Engineering"
] | 384 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
6,296,376 | https://en.wikipedia.org/wiki/Triple%20product%20rule | The triple product rule, known variously as the cyclic chain rule, cyclic relation, cyclical rule or Euler's chain rule, is a formula which relates partial derivatives of three interdependent variables. The rule finds application in thermodynamics, where frequently three variables can be related by a function of the form f(x, y, z) = 0, so each variable is given as an implicit function of the other two variables. For example, an equation of state for a fluid relates temperature, pressure, and volume in this manner. The triple product rule for such interrelated variables x, y, and z comes from using a reciprocity relation on the result of the implicit function theorem, and is given by
where each factor is a partial derivative of the variable in the numerator, considered to be a function of the other two.
The advantage of the triple product rule is that by rearranging terms, one can derive a number of substitution identities which allow one to replace partial derivatives which are difficult to analytically evaluate, experimentally measure, or integrate with quotients of partial derivatives which are easier to work with. For example,
Various other forms of the rule are present in the literature; these can be derived by permuting the variables {x, y, z}.
Derivation
An informal derivation follows. Suppose that f(x, y, z) = 0. Write z as a function of x and y. Thus the total differential dz is
dz = (∂z/∂x)_y dx + (∂z/∂y)_x dy.
Suppose that we move along a curve with dz = 0, where the curve is parameterized by x. Thus y can be written in terms of x, so on this curve
dy = (∂y/∂x)_z dx.
Therefore, the equation for dz = 0 becomes
0 = (∂z/∂x)_y dx + (∂z/∂y)_x (∂y/∂x)_z dx.
Since this must be true for all dx, rearranging terms gives
(∂z/∂x)_y = −(∂z/∂y)_x (∂y/∂x)_z.
Dividing by the derivatives on the right hand side gives the triple product rule
(∂x/∂y)_z (∂y/∂z)_x (∂z/∂x)_y = −1.
Note that this proof makes many implicit assumptions regarding the existence of partial derivatives, the existence of the exact differential dz, the ability to construct a curve in some neighborhood with dz = 0, and the nonzero value of partial derivatives and their reciprocals. A formal proof based on mathematical analysis would eliminate these potential ambiguities.
Alternative derivation
Suppose a function f(x, y, z) = 0, where x, y, and z are functions of each other. Write the total differentials of the variables
Substitute into
By using the chain rule one can show the coefficient of on the right hand side is equal to one, thus the coefficient of must be zero
Subtracting the second term and multiplying by its inverse gives the triple product rule
Applications
Example: Ideal Gas Law
The ideal gas law relates the state variables of pressure (P), volume (V), and temperature (T) via
P V = n R T,
which can be written as
f(P, V, T) = P V − n R T = 0,
so each state variable can be written as an implicit function of the other state variables:
P = n R T / V,   V = n R T / P,   T = P V / (n R).
From the above expressions, we have
(∂P/∂V)_T (∂V/∂T)_P (∂T/∂P)_V = ( −n R T / V² )( n R / P )( V / (n R) ) = −n R T / (P V) = −1.
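The same check can be done symbolically, for example with SymPy (assumed available); the sketch below forms the three partial derivatives from the explicit expressions above and simplifies their product:

```python
import sympy as sp

P, V, T, n, R = sp.symbols("P V T n R", positive=True)

# Each state variable written explicitly in terms of the other two (from P V = n R T).
P_expr = n * R * T / V
V_expr = n * R * T / P
T_expr = P * V / (n * R)

product = sp.diff(P_expr, V) * sp.diff(V_expr, T) * sp.diff(T_expr, P)
print(sp.simplify(product.subs(P, n * R * T / V)))  # prints -1
```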
Geometric Realization
A geometric realization of the triple product rule can be found in its close ties to the velocity of a traveling wave
shown on the right at time t (solid blue line) and at a short time later t+Δt (dashed). The wave maintains its shape as it propagates, so that a point at position x at time t will correspond to a point at position x+Δx at time t+Δt,
This equation can only be satisfied for all x and t if , resulting in the formula for the phase velocity
To elucidate the connection with the triple product rule, consider the point p1 at time t and its corresponding point (with the same height) p̄1 at t+Δt. Define p2 as the point at time t whose x-coordinate matches that of p̄1, and define p̄2 to be the corresponding point of p2 as shown in the figure on the right. The distance Δx between p1 and p̄1 is the same as the distance between p2 and p̄2 (green lines), and dividing this distance by Δt yields the speed of the wave.
To compute Δx, consider the two partial derivatives computed at p2,
Dividing these two partial derivatives and using the definition of the slope (rise divided by run) gives us the desired formula for
where the negative sign accounts for the fact that p1 lies behind p2 relative to the wave's motion. Thus, the wave's velocity is given by
For infinitesimal Δt, and we recover the triple product rule
See also
(has another derivation of the triple product rule)
and scalars.
References
Articles containing proofs
Laws of thermodynamics
Multivariable calculus
Theorems in analysis
Theorems in calculus | Triple product rule | [
"Physics",
"Chemistry",
"Mathematics"
] | 937 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical theorems",
"Theorems in calculus",
"Calculus",
"Thermodynamics",
"Articles containing proofs",
"Mathematical problems",
"Multivariable calculus",
"Laws of thermodynamics"
] |
23,897,340 | https://en.wikipedia.org/wiki/International%20Association%20of%20Mathematical%20Physics | The International Association of Mathematical Physics (IAMP) was founded in 1976 to promote research in mathematical physics. It brings together research mathematicians and theoretical physicists, including students. The association's ordinary members are individual researchers, although associate membership is available to organizations and companies. The IAMP is governed by an executive committee elected by the ordinary members.
The association sponsors the International Congress on Mathematical Physics (ICMP), which takes place every three years, and it also supports smaller conferences and workshops. There is a quarterly news bulletin.
IAMP currently awards two kinds of research prizes in mathematical physics at its triennial meetings, the Henri Poincaré Prize (created in 1997) and the Early Career Award (created in 2009).
List of presidents
The presidents of the IAMP since its foundation were:
2024–: Kasia Rejzner
2021–23: Bruno Nachtergaele
2015–20: Robert Seiringer
2012–14: Antti Kupiainen
2009–11: Pavel Exner
2006–08: Giovanni Gallavotti
2003–05: David Brydges
2000–02: Herbert Spohn
1997–99: Elliott Lieb
1991–96: Arthur Jaffe
1988–90: John R. Klauder
1985–87: Konrad Osterwalder
1982–84: Elliott Lieb
1979–81: Huzihiro Araki
1976–78: Walter Thirring
Prizes awarded by IAMP
Henri Poincaré Prize
The Henri Poincaré Prize is sponsored by the Daniel Iagolnitzer Foundation to recognize outstanding contributions in mathematical physics, and contributions which lay the groundwork for novel developments in this broad field. The Prize was also created to recognize and support young people of exceptional promise who have already made outstanding contributions to the field of mathematical physics.
The prize is usually awarded to three individuals every three years at the International Congress on Mathematical Physics (ICMP). The prize committee is appointed by the IAMP.
IAMP Early Career Award
The prize is awarded at the International Congress on Mathematical Physics (ICMP) in recognition of a single achievement in Mathematical Physics, for scientists whose age is less than 35.
List of Past IAMP Congresses (ICMP)
A list of past congresses may be found here.
See also
Mathematical physics
International Congress on Mathematical Physics
Henri Poincaré Prize
External links
Mathematical physics
Mathematical societies
Mathematics organizations
Physics organizations
Physics societies
Scientific organizations established in 1976 | International Association of Mathematical Physics | [
"Physics",
"Mathematics"
] | 487 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
23,897,905 | https://en.wikipedia.org/wiki/International%20Congress%20on%20Mathematical%20Physics | The International Congress on Mathematical Physics (ICMP) is the largest research congress in mathematical physics. It is held every three years, on behalf of the International Association of Mathematical Physics (IAMP).
Prizes
The Henri Poincaré Prize and the IAMP early career award are both delivered at the ICMP.
List of IAMP Congresses (ICMP)
1972: Moscow
1974: Warsaw
1975: Kyoto
1977: Rome
1979: Lausanne
1981: Berlin
1983: Boulder
1986: Marseille
1988: Swansea
1991: Leipzig
1994: Paris
1997: Brisbane
2000: London
2003: Lisbon
2006: Rio de Janeiro
2009: Prague
2012: Aalborg
2015: Santiago
2018: Montreal
2021: Geneva
2024: Strasbourg
References
External links
International Congress of Mathematical Physics (ICMP)
Mathematical physics
Mathematics conferences
Physics conferences | International Congress on Mathematical Physics | [
"Physics",
"Mathematics"
] | 186 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
127,511 | https://en.wikipedia.org/wiki/DNA%20sequencer | A DNA sequencer is a scientific instrument used to automate the DNA sequencing process. Given a sample of DNA, a DNA sequencer is used to determine the order of the four bases: G (guanine), C (cytosine), A (adenine) and T (thymine). This is then reported as a text string, called a read. Some DNA sequencers can also be considered optical instruments as they analyze light signals originating from fluorochromes attached to nucleotides.
The first automated DNA sequencer, invented by Lloyd M. Smith, was introduced by Applied Biosystems in 1987. It used the Sanger sequencing method, a technology which formed the basis of the "first generation" of DNA sequencers and enabled the completion of the human genome project in 2001. These first-generation DNA sequencers are essentially automated electrophoresis systems that detect the migration of labelled DNA fragments. Therefore, these sequencers can also be used in the genotyping of genetic markers where only the length of a DNA fragment(s) needs to be determined (e.g. microsatellites, AFLPs).
The Human Genome Project spurred the development of cheaper, higher-throughput and more accurate platforms, known as Next Generation Sequencers (NGS), to sequence the human genome. These include the 454, SOLiD and Illumina DNA sequencing platforms. Next generation sequencing machines have increased the rate of DNA sequencing substantially compared with the previous Sanger methods. DNA samples can be prepared automatically in as little as 90 minutes, while a human genome can be sequenced at 15 times coverage in a matter of days.
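As a rough illustration of what "15 times coverage" means, average sequencing depth can be estimated as total sequenced bases divided by genome length. The sketch below is a back-of-the-envelope calculation with assumed, illustrative numbers (read count, read length, genome size); it is not tied to any specific platform mentioned here.

```python
# Average sequencing coverage = (number_of_reads * read_length) / genome_size.
# All values below are illustrative assumptions, not vendor specifications.
genome_size = 3.1e9     # approximate human genome size, in bases
read_length = 150       # bases per read (typical short-read length)
num_reads = 310e6       # reads produced in a hypothetical run

coverage = num_reads * read_length / genome_size
print(f"Average coverage: {coverage:.1f}x")   # 15.0x for these numbers
```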
More recent, third-generation DNA sequencers such as PacBio SMRT and Oxford Nanopore offer the possibility of sequencing long molecules, compared to short-read technologies such as Illumina SBS or MGI Tech's DNBSEQ.
Because of limitations in DNA sequencer technology, the reads produced by many of these technologies are short compared to the length of a genome, so the reads must be assembled into longer contigs. The data may also contain errors, caused by limitations in the DNA sequencing technique or by errors during PCR amplification. DNA sequencer manufacturers use a number of different methods to detect which DNA bases are present. The specific protocols applied in different sequencing platforms have an impact on the final data that is generated. Therefore, comparing data quality and cost across different technologies can be a daunting task. Each manufacturer provides its own way of reporting sequencing errors and quality scores. However, errors and scores between different platforms cannot always be compared directly. Since these systems rely on different DNA sequencing approaches, choosing the best DNA sequencer and method will typically depend on the experiment objectives and available budget.
History
The first DNA sequencing methods were developed by Gilbert (1973) and Sanger (1975). Gilbert introduced a sequencing method based on chemical modification of DNA followed by cleavage at specific bases whereas Sanger's technique is based on dideoxynucleotide chain termination. The Sanger method became popular due to its increased efficiency and low radioactivity. The first automated DNA sequencer was the AB370A, introduced in 1986 by Applied Biosystems. The AB370A was able to sequence 96 samples simultaneously, processing 500 kilobases per day and reaching read lengths of up to 600 bases. This was the beginning of the "first generation" of DNA sequencers, which implemented Sanger sequencing, fluorescent dideoxy nucleotides and polyacrylamide gel sandwiched between glass plates - slab gels. The next major advance was the release in 1995 of the AB310 which utilized a linear polymer in a capillary in place of the slab gel for DNA strand separation by electrophoresis. These techniques formed the basis for the completion of the human genome project in 2001. The human genome project spurred the development of cheaper, high throughput and more accurate platforms known as Next Generation Sequencers (NGS). In 2005, 454 Life Sciences released the 454 sequencer, followed by Solexa Genome Analyzer and SOLiD (Supported Oligo Ligation Detection) by Agencourt in 2006. Applied Biosystems acquired Agencourt in 2006, and in 2007, Roche bought 454 Life Sciences, while Illumina purchased Solexa. Ion Torrent entered the market in 2010 and was acquired by Life Technologies (now Thermo Fisher Scientific). BGI also started manufacturing sequencers in China, under its MGI arm, after acquiring Complete Genomics. These are still the most common NGS systems due to their competitive cost, accuracy, and performance.
More recently, a third generation of DNA sequencers was introduced. The sequencing methods applied by these sequencers do not require DNA amplification (polymerase chain reaction – PCR), which speeds up the sample preparation before sequencing and reduces errors. In addition, sequencing data is collected from the reactions caused by the addition of nucleotides in the complementary strand in real time. Two companies introduced different approaches in their third-generation sequencers. Pacific Biosciences sequencers utilize a method called single-molecule real-time (SMRT) sequencing, where sequencing data is produced by light (captured by a camera) emitted when a fluorescently labelled nucleotide is incorporated into the complementary strand by the polymerase. Oxford Nanopore Technologies is another company developing third-generation sequencers using electronic systems based on nanopore sensing technologies.
Manufacturers of DNA sequencers
DNA sequencers have been developed, manufactured, and sold by the following companies, among others.
Roche
The 454 DNA sequencer was the first next-generation sequencer to become commercially successful. It was developed by 454 Life Sciences and purchased by Roche in 2007. 454 utilizes the detection of pyrophosphate released by the DNA polymerase reaction when a nucleotide is added to the strand complementary to the template.
Roche currently manufactures two systems based on their pyrosequencing technology: the GS FLX+ and the GS Junior System. The GS FLX+ System promises read lengths of approximately 1000 base pairs while the GS Junior System promises 400 base pair reads. A predecessor to GS FLX+, the 454 GS FLX Titanium system was released in 2008, achieving an output of 0.7G of data per run, with 99.9% accuracy after quality filter, and a read length of up to 700bp. In 2009, Roche launched the GS Junior, a bench top version of the 454 sequencer with read length up to 400bp, and simplified library preparation and data processing.
One of the advantages of 454 systems is their running speed. Manpower can be reduced with automation of library preparation and semi-automation of emulsion PCR. A disadvantage of the 454 system is that it is prone to errors when estimating the number of bases in a long string of identical nucleotides. This is referred to as a homopolymer error and occurs when there are 6 or more identical bases in a row. Another disadvantage is that the reagents are relatively expensive compared with those of other next-generation sequencers.
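Because homopolymer length is precisely what pyrosequencing struggles to call, a simple quality-control step is to flag positions in a read where runs of identical bases reach the length at which errors become common. The sketch below is an illustrative helper (the six-base threshold follows the figure quoted above); the function name and example read are made up.

```python
# Flag homopolymer runs of length >= min_len in a read (illustrative helper).
from itertools import groupby

def homopolymer_runs(read, min_len=6):
    """Yield (start, base, length) for each run of identical bases >= min_len."""
    pos = 0
    for base, group in groupby(read):
        length = sum(1 for _ in group)
        if length >= min_len:
            yield pos, base, length
        pos += length

# Example read (made up): contains a 7-base poly-A stretch.
read = "ACGTAAAAAAACGTTGCA"
for start, base, length in homopolymer_runs(read):
    print(f"run of {length} x '{base}' starting at position {start}")
```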
In 2013 Roche announced that they would be shutting down development of 454 technology and phasing out 454 machines completely in 2016 when its technology became noncompetitive.
Roche produces a number of software tools optimised for the analysis of 454 sequencing data, including the following:
GS Run Processor converts raw images generated by a sequencing run into intensity values. The process consists of two main steps: image processing and signal processing. The software also applies normalization, signal correction, base-calling and quality scores for individual reads. The software outputs data in Standard Flowgram Format (or SFF) files to be used in data analysis applications (GS De Novo Assembler, GS Reference Mapper or GS Amplicon Variant Analyzer).
GS De Novo Assembler is a tool for de novo assembly of whole-genomes up to 3GB in size from shotgun reads alone or combined with paired end data generated by 454 sequencers. It also supports de novo assembly of transcripts (including analysis) and isoform variant detection.
GS Reference Mapper maps short reads to a reference genome, generating a consensus sequence. The software is able to generate output files for assessment, indicating insertions, deletions and SNPs. It can handle large and complex genomes of any size.
Finally, the GS Amplicon Variant Analyzer aligns reads from amplicon samples against a reference, identifying variants (linked or not) and their frequencies. It can also be used to detect unknown and low-frequency variants. It includes graphical tools for analysis of alignments.
Illumina
Illumina produces a number of next-generation sequencing machines using technology acquired from Manteia Predictive Medicine and developed by Solexa, including the HiSeq, Genome Analyzer IIx, MiSeq and the HiScanSQ, which can also process microarrays.
The technology leading to these DNA sequencers was first released by Solexa in 2006 as the Genome Analyzer. Illumina purchased Solexa in 2007. The Genome Analyzer uses a sequencing by synthesis method. The first model produced 1G per run. During 2009 the output was increased from 20G per run in August to 50G per run in December. In 2010 Illumina released the HiSeq 2000 with an output of 200 and later 600G per run, with a run taking 8 days. At its release the HiSeq 2000 provided one of the cheapest sequencing platforms, at $0.02 per million bases as calculated by the Beijing Genomics Institute.
In 2011 Illumina released a benchtop sequencer called the MiSeq. At its release the MiSeq could generate 1.5G per run with paired end 150bp reads. A sequencing run can be performed in 10 hours when using automated DNA sample preparation.
The Illumina HiSeq uses two software tools to calculate the number and position of DNA clusters to assess the sequencing quality: the HiSeq control system and the real-time analyzer. These methods help to assess if nearby clusters are interfering with each other.
Life Technologies
Life Technologies (now Thermo Fisher Scientific) produces DNA sequencers under the Applied Biosystems and Ion Torrent brands. Applied Biosystems makes the SOLiD next-generation sequencing platform, and Sanger-based DNA sequencers such as the 3500 Genetic Analyzer. Under the Ion Torrent brand, Applied Biosystems produces four next-generation sequencers: the Ion PGM System, Ion Proton System, Ion S5 and Ion S5xl systems. The company was also reported to be developing a new capillary DNA sequencer, called SeqStudio, expected to be released in early 2018.
SOLiD systems was acquired by Applied Biosystems in 2006. SOLiD applies sequencing by ligation and dual base encoding. The first SOLiD system was launched in 2007, generating reading lengths of 35bp and 3G data per run. After five upgrades, the 5500xl sequencing system was released in 2010, considerably increasing read length to 85bp, improving accuracy up to 99.99% and producing 30G per 7-day run.
The limited read length of the SOLiD has remained a significant shortcoming and has to some extent limited its use to experiments where read length is less vital such as resequencing and transcriptome analysis and more recently ChIP-Seq and methylation experiments. The DNA sample preparation time for SOLiD systems has become much quicker with the automation of sequencing library preparations such as the Tecan system.
The colour space data produced by the SOLiD platform can be decoded into DNA bases for further analysis; however, software that considers the original colour space information can give more accurate results. Life Technologies has released BioScope, a data analysis package for resequencing, ChIP-Seq and transcriptome analysis. It uses the MaxMapper algorithm to map the colour space reads.
Beckman Coulter
Beckman Coulter (now Danaher) has previously manufactured chain termination and capillary electrophoresis-based DNA sequencers under the model name CEQ, including the CEQ 8000. The company now produces the GeXP Genetic Analysis System, which uses dye terminator sequencing. This method uses a thermocycler in much the same way as PCR to denature, anneal, and extend DNA fragments, amplifying the sequenced fragments.
Pacific Biosciences
Pacific Biosciences produces the PacBio RS and Sequel sequencing systems using a single-molecule real-time (SMRT) sequencing method. This system can produce read lengths of multiple thousands of base pairs. The higher raw read error rate is corrected using either circular consensus – where the same strand is read over and over again – or optimized assembly strategies. Scientists have reported 99.9999% accuracy with these strategies. The Sequel system was launched in 2015 with an increased capacity and a lower price.
Oxford Nanopore
Oxford Nanopore Technologies' MinION sequencer applies evolving nanopore sequencing technology to nucleic acid analysis. The device is four inches long and gets power from a USB port. MinION decodes DNA directly as the molecule is drawn at the rate of 450 bases/second through a nanopore suspended in a membrane. Changes in electric current indicate which base is present. Initially, the device was 60 to 85 percent accurate, compared with 99.9 percent in conventional machines. Even inaccurate results may prove useful because it produces long read lengths. In early 2021, researchers from the University of British Columbia used special molecular tags and were able to reduce the five-to-15 per cent error rate of the device to less than 0.005 per cent, even when sequencing many long stretches of DNA at a time. There are two further product iterations based on the MinION: the GridION, a slightly larger sequencer that processes up to five MinION flow cells at once, and the PromethION, which uses as many as 100,000 pores in parallel and is more suitable for high-volume sequencing.
MGI
MGI produces high-throughput sequencers for scientific research and clinical applications, such as the DNBSEQ-G50, DNBSEQ-G400, and DNBSEQ-T7, based on its proprietary DNBSEQ technology. It is based upon DNA nanoball sequencing and combinatorial probe anchor synthesis technologies, in which DNA nanoballs (DNBs) are loaded onto a patterned array chip via the fluidic system, and later a sequencing primer is added to the adaptor region of DNBs for hybridization. DNBSEQ-T7 can generate short reads at a very large scale—up to 60 human genomes per day. DNBSEQ-T7 was used to generate 150 bp paired-end reads at 30X coverage to sequence the genome of SARS-CoV-2 (the virus that causes COVID-19) and to identify genetic variants associated with predisposition to severe COVID-19 illness. Using a novel technique, researchers from the China National GeneBank sequenced PCR-free libraries on MGI's PCR-free DNBSEQ arrays to obtain, for the first time, true PCR-free whole-genome sequencing. The MGISEQ-2000 was used in single-cell RNA sequencing to study the underlying pathogenesis and recovery in COVID-19 patients, as published in Nature Medicine.
Comparison
As of December 2019, Illumina was the dominant player in DNA sequencing technology, followed by PacBio, MGI and Oxford Nanopore.
References
DNA sequencing
Genetics techniques
Molecular biology laboratory equipment
Scientific instruments | DNA sequencer | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 3,221 | [
"Genetics techniques",
"Genetic engineering",
"Scientific instruments",
"Measuring instruments",
"Molecular biology laboratory equipment",
"Molecular biology techniques",
"DNA sequencing"
] |
18,640,324 | https://en.wikipedia.org/wiki/Obesity%20paradox | The obesity paradox is the finding in some studies of a lower mortality rate for overweight or obese people within certain subpopulations. The paradox has been observed in people with cardiovascular disease and cancer. Explanations for the paradox range from excess weight being protective to the statistical association being caused by methodological flaws such as confounding, detection bias, reverse causality, or selection bias.
Description
The terminology "reverse epidemiology" was first proposed by Kamyar Kalantar-Zadeh in the journal Kidney International in 2003 and in the Journal of the American College of Cardiology in 2004. It is a contradiction to prevailing medical concepts of prevention of atherosclerosis and cardiovascular disease; however, active prophylactic treatment of heart disease in otherwise healthy, asymptomatic people has been and is controversial in the medical community for several years.
The mechanism responsible for this reversed association is unknown, but it has been theorized that, in chronic kidney disease patients, "The common occurrence of persistent inflammation and protein energy wasting in advanced CKD [chronic kidney disease] seems to a large extent to account for this paradoxical association between traditional risk factors and CV [cardiovascular] outcomes in this patient population." Other research has proposed that the paradox also may be explained by adipose tissue storing lipophilic chemicals that would otherwise be toxic to the body.
The obesity paradox (excluding the cholesterol paradox) was first described in 1999 in overweight and obese people undergoing hemodialysis, and has subsequently been found in those with heart failure, myocardial infarction, acute coronary syndrome, chronic obstructive pulmonary disease (COPD), pulmonary embolisms, and in older nursing home residents.
While obese people have twice the risk of developing heart failure compared to individuals with a normal BMI, once a person experiences heart failure, those with a BMI between 30.0 and 34.9 had lower mortality than those with a normal BMI. This has been attributed to the fact that people often lose weight when they have severe and chronic illness (a syndrome called cachexia). Similar findings have been made in other types of heart disease. Among people with heart disease, those with class I obesity do not have greater rates of further heart problems than people of normal weight. In people with greater degrees of obesity, however, risk of further events is increased. Even after cardiac bypass surgery, no increase in mortality is seen in the overweight and obese. One study found that the improved survival could be explained by the more aggressive treatment obese people receive after a cardiac event. Another found that if one takes into account COPD in those with peripheral artery disease, the benefit of obesity no longer exists.
The obesity paradox is also relevant in discussion of weight loss as a preventative health measure – weight-cycling (a repeated pattern of losing and then regaining weight) is more common in obese people, and has health effects commonly assumed to be caused by obesity, such as hypertension, insulin resistance, and cardiovascular diseases.
Criticisms
Methodology
The obesity paradox has been criticized on the grounds of being an artifact arising from biases in observational studies. Strong confounding by smoking has been noted by several researchers, although others have suggested that smoking does not account for the observed patterns. Since smokers, who are subject to higher mortality rates, also tend to be leaner, inadequate adjustment for smoking would lead to underestimations of the risk ratios associated with the overweight and obese categories of BMI among non-smokers. In an analysis of 1.46 million individuals, restriction to never-smoking participants greatly reduced the mortality estimates in the underweight group, as well as strengthening the estimates in the overweight and obese groups. This study concluded that, for non-Hispanic white adults who have never smoked, the BMI range of 20.0 to 24.9 was associated with the lowest mortality rates. A similar 2016 study found that, of the BMI ranges studied (which ranged from 18.5 to >30), the "normal" 18.5–22.4 BMI range combined with healthy eating, high levels of physical activity, not smoking, and no more than moderate alcohol consumption was associated with the lowest risk of premature death.
Another concern is reverse causation due to illness-induced weight loss. That is, it may not be low BMI that is causing death (and thereby making obesity seem protective) but rather imminent death causing low BMI. Indeed, unintentional weight loss is an extremely significant predictor of mortality. Terminally ill individuals often undergo weight loss before death, and classifying those individuals as lean greatly inflates the mortality rate in the normal and underweight categories of BMI, while lowering the risk in the higher BMI categories. Studies that employ strategies to reduce reverse causation such as excluding sick individuals at baseline and introducing time lag to exclude deaths at the beginning of follow-up have yielded estimates of increased risk for body mass indices above 25 kg/m2.
The obesity paradox may therefore result from people becoming lean due to smoking, sedentary lifestyles, and unhealthy diets – all factors which also negatively impact health.
Critics of the paradox have also argued that studies supporting its existence almost always use BMI as the only measure of obesity. However, because BMI is an imperfect method of measuring obesity, critics argue that studies using other measures of obesity in addition to BMI, such as waist circumference and waist to hip ratio, render the existence of the paradox questionable.
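For reference, the BMI figures quoted throughout this article follow the usual definition BMI = weight (kg) / height (m)². The sketch below is an illustrative classifier using the standard WHO cut-offs; it is included only to make the ranges discussed above concrete and is not part of the original article.

```python
# BMI = weight(kg) / height(m)^2, with standard WHO category cut-offs.
def bmi_category(weight_kg, height_m):
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        label = "underweight"
    elif bmi < 25.0:
        label = "normal weight"
    elif bmi < 30.0:
        label = "overweight"
    elif bmi < 35.0:
        label = "obesity class I"
    elif bmi < 40.0:
        label = "obesity class II"
    else:
        label = "obesity class III"
    return bmi, label

bmi, label = bmi_category(85.0, 1.75)
print(f"BMI {bmi:.1f}: {label}")   # BMI 27.8: overweight
```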
One probable methodological explanation for the obesity paradox with regard to cardiovascular disease is collider stratification bias, which commonly emerges when one restricts or stratifies on a factor (the "collider") that is caused both by the exposure (or its descendants) and by an unmeasured variable that also influences the outcome (or its ancestors / risk factors). In the example of the obesity–cardiovascular disease relationship, obesity is the collider, the outcome is cardiovascular disease, and the unmeasured variables are environmental and genetic factors – given that obesity and cardiovascular disorders are often associated with each other, medical professionals may be reluctant to consider either other causes of cardiovascular disease or other sources of protection against such diseases.
A study from 2018 found that overweight or obese patients supposedly live longer with cardiovascular disease than people of normal weight simply because they get cardiovascular disease at an earlier age: while they survive more years with it, non-obese patients tend not to develop cardiovascular disease until later in life, if at all. In fact, the obese have shorter lifespans because they get cardiovascular disease at an early age and have to live a longer proportion of their life with it. This also points to a misunderstanding regarding the paradox: while the survival rate once sick is indeed higher for those with obesity than for those few non-obese people who have cardiovascular disease, people without obesity usually do not get cardiovascular disease in the first place.
Ties to Coca-Cola
It has also been noted that Coca-Cola has promoted the hypothesis and funded researchers who support it, which has raised questions about what research the company supports and why.
Weight relativism
Dixon et al. have proposed that a paradox does not actually exist, as people can be healthy at a range of sizes. As one study puts it, "There is no 'obesity paradox' to explain, if we accept the premise that varying ideal weight ranges apply to individuals over different stages of the life span, accordingly allowing us to abandon the rigid biologically implausible concept of a single 'ideal weight' (for height) or weight range."
See also
French paradox
Israeli paradox
Low birth-weight paradox (Low birth-weight babies born to smokers have a lower mortality than low birth-weight babies born to non-smokers, because other causes of low birth-weight are more harmful than smoking.)
Katherine Flegal
Social stigma of obesity
References
Further reading
Epidemiology
Hypotheses
Obesity
Health paradoxes | Obesity paradox | [
"Environmental_science"
] | 1,653 | [
"Epidemiology",
"Environmental social science"
] |
18,640,974 | https://en.wikipedia.org/wiki/Uniform%20tilings%20in%20hyperbolic%20plane | In hyperbolic geometry, a uniform hyperbolic tiling (or regular, quasiregular or semiregular hyperbolic tiling) is an edge-to-edge filling of the hyperbolic plane which has regular polygons as faces and is vertex-transitive (transitive on its vertices, isogonal, i.e. there is an isometry mapping any vertex onto any other). It follows that all vertices are congruent, and the tiling has a high degree of rotational and translational symmetry.
Uniform tilings can be identified by their vertex configuration, a sequence of numbers representing the number of sides of the polygons around each vertex. For example, 7.7.7 represents the heptagonal tiling which has 3 heptagons around each vertex. It is also regular since all the polygons are the same size, so it can also be given the Schläfli symbol {7,3}.
Uniform tilings may be regular (if also face- and edge-transitive), quasi-regular (if edge-transitive but not face-transitive) or semi-regular (if neither edge- nor face-transitive). For right triangles (p q 2), there are two regular tilings, represented by the Schläfli symbols {p,q} and {q,p}.
Wythoff construction
There are infinitely many uniform tilings based on the Schwarz triangles (p q r) where 1/p + 1/q + 1/r < 1, where p, q, r are each orders of reflection symmetry at three points of the fundamental domain triangle – the symmetry group is a hyperbolic triangle group.
Each symmetry family contains 7 uniform tilings, defined by a Wythoff symbol or Coxeter-Dynkin diagram, 7 representing combinations of 3 active mirrors. An 8th represents an alternation operation, deleting alternate vertices from the highest form with all mirrors active.
Families with r = 2 contain regular hyperbolic tilings, defined by a Coxeter group such as [7,3], [8,3], [9,3], ... [5,4], [6,4], ....
Hyperbolic families with r = 3 or higher are given by (p q r) and include (4 3 3), (5 3 3), (6 3 3) ... (4 4 3), (5 4 3), ... (4 4 4)....
Hyperbolic triangles (p q r) define compact uniform hyperbolic tilings. In the limit any of p, q or r can be replaced by ∞ which defines a paracompact hyperbolic triangle and creates uniform tilings with either infinite faces (called apeirogons) that converge to a single ideal point, or infinite vertex figure with infinitely many edges diverging from the same ideal point.
More symmetry families can be constructed from fundamental domains that are not triangles.
Selected families of uniform tilings are shown below (using the Poincaré disk model for the hyperbolic plane). Three of them – (7 3 2), (5 4 2), and (4 3 3) – and no others, are minimal in the sense that if any of their defining numbers is replaced by a smaller integer the resulting pattern is either Euclidean or spherical rather than hyperbolic; conversely, any of the numbers can be increased (even to infinity) to generate other hyperbolic patterns.
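The inequality 1/p + 1/q + 1/r < 1 makes the minimality claim above easy to check mechanically: a triangle group (p q r) is spherical, Euclidean or hyperbolic according to whether the sum is greater than, equal to, or less than 1. The sketch below is a small illustrative helper, not taken from the article, that performs this classification (an ∞ entry can be modelled as math.inf).

```python
# Classify a triangle group (p, q, r) by the sign of 1/p + 1/q + 1/r - 1.
import math
from fractions import Fraction

def classify(p, q, r):
    total = sum(0 if x == math.inf else Fraction(1, x) for x in (p, q, r))
    if total > 1:
        return "spherical"
    if total == 1:
        return "Euclidean"
    return "hyperbolic"   # includes paracompact cases with infinite entries

# The three minimal hyperbolic families named above, and examples of what
# happens when one of their defining numbers is lowered.
print(classify(7, 3, 2))         # hyperbolic
print(classify(6, 3, 2))         # Euclidean  (lowering 7 -> 6)
print(classify(5, 4, 2))         # hyperbolic
print(classify(4, 4, 2))         # Euclidean  (lowering 5 -> 4)
print(classify(4, 3, 3))         # hyperbolic
print(classify(3, 3, 3))         # Euclidean  (lowering 4 -> 3)
print(classify(math.inf, 3, 2))  # hyperbolic (paracompact)
```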
Each uniform tiling generates a dual uniform tiling, with many of them also given below.
Right triangle domains
There are infinitely many (p q 2) triangle group families. This article shows the regular tiling up to p, q = 8, and uniform tilings in 12 families: (7 3 2), (8 3 2), (5 4 2), (6 4 2), (7 4 2), (8 4 2), (5 5 2), (6 5 2) (6 6 2), (7 7 2), (8 6 2), and (8 8 2).
Regular hyperbolic tilings
The simplest set of hyperbolic tilings are regular tilings {p,q}, which exist in a matrix with the regular polyhedra and Euclidean tilings. The regular tiling {p,q} has a dual tiling {q,p} across the diagonal axis of the table. Self-dual tilings {2,2}, {3,3}, {4,4}, {5,5}, etc. pass down the diagonal of the table.
(7 3 2)
The (7 3 2) triangle group, Coxeter group [7,3], orbifold (*732) contains these uniform tilings:
(8 3 2)
The (8 3 2) triangle group, Coxeter group [8,3], orbifold (*832) contains these uniform tilings:
(5 4 2)
The (5 4 2) triangle group, Coxeter group [5,4], orbifold (*542) contains these uniform tilings:
(6 4 2)
The (6 4 2) triangle group, Coxeter group [6,4], orbifold (*642) contains these uniform tilings. Because all the elements are even, each uniform dual tiling represents the fundamental domain of a reflective symmetry: *3333, *662, *3232, *443, *222222, *3222, and *642 respectively. All 7 uniform tilings can also be alternated, and those have duals as well.
(7 4 2)
The (7 4 2) triangle group, Coxeter group [7,4], orbifold (*742) contains these uniform tilings:
(8 4 2)
The (8 4 2) triangle group, Coxeter group [8,4], orbifold (*842) contains these uniform tilings. Because all the elements are even, each uniform dual tiling represents the fundamental domain of a reflective symmetry: *4444, *882, *4242, *444, *22222222, *4222, and *842 respectively. All 7 uniform tilings can also be alternated, and those have duals as well.
(5 5 2)
The (5 5 2) triangle group, Coxeter group [5,5], orbifold (*552) contains these uniform tilings:
(6 5 2)
The (6 5 2) triangle group, Coxeter group [6,5], orbifold (*652) contains these uniform tilings:
(6 6 2)
The (6 6 2) triangle group, Coxeter group [6,6], orbifold (*662) contains these uniform tilings:
(8 6 2)
The (8 6 2) triangle group, Coxeter group [8,6], orbifold (*862) contains these uniform tilings.
(7 7 2)
The (7 7 2) triangle group, Coxeter group [7,7], orbifold (*772) contains these uniform tilings:
(8 8 2)
The (8 8 2) triangle group, Coxeter group [8,8], orbifold (*882) contains these uniform tilings:
General triangle domains
There are infinitely many general triangle group families (p q r). This article shows uniform tilings in 9 families: (4 3 3), (4 4 3), (4 4 4), (5 3 3), (5 4 3), (5 4 4), (6 3 3), (6 4 3), and (6 4 4).
(4 3 3)
The (4 3 3) triangle group, Coxeter group [(4,3,3)], orbifold (*433) contains these uniform tilings. Without right angles in the fundamental triangle, the Wythoff constructions are slightly different. For instance in the (4,3,3) triangle family, the snub form has six polygons around a vertex and its dual has hexagons rather than pentagons. In general the vertex figure of a snub tiling in a triangle (p,q,r) is p. 3.q.3.r.3, being 4.3.3.3.3.3 in this case below.
(4 4 3)
The (4 4 3) triangle group, Coxeter group [(4,4,3)], orbifold (*443) contains these uniform tilings.
(4 4 4)
The (4 4 4) triangle group, Coxeter group [(4,4,4)], orbifold (*444) contains these uniform tilings.
(5 3 3)
The (5 3 3) triangle group, Coxeter group [(5,3,3)], orbifold (*533) contains these uniform tilings.
(5 4 3)
The (5 4 3) triangle group, Coxeter group [(5,4,3)], orbifold (*543) contains these uniform tilings.
(5 4 4)
The (5 4 4) triangle group, Coxeter group [(5,4,4)], orbifold (*544) contains these uniform tilings.
(6 3 3)
The (6 3 3) triangle group, Coxeter group [(6,3,3)], orbifold (*633) contains these uniform tilings.
(6 4 3)
The (6 4 3) triangle group, Coxeter group [(6,4,3)], orbifold (*643) contains these uniform tilings.
(6 4 4)
The (6 4 4) triangle group, Coxeter group [(6,4,4)], orbifold (*644) contains these uniform tilings.
Summary of tilings with finite triangular fundamental domains
For a table of all uniform hyperbolic tilings with fundamental domains (p q r), where 2 ≤ p,q,r ≤ 8.
See Template:Finite triangular hyperbolic tilings table
Quadrilateral domains
(3 2 2 2)
Quadrilateral fundamental domains also exist in the hyperbolic plane, with the *3222 orbifold ([∞,3,∞] Coxeter notation) as the smallest family. There are 9 generation locations for uniform tilings within quadrilateral domains. The vertex figure can be extracted from a fundamental domain in 3 cases: (1) corner, (2) mid-edge, and (3) center. When generating points are corners adjacent to order-2 corners, degenerate {2} digon faces at those corners exist but can be ignored. Snub and alternated uniform tilings can also be generated (not shown) if a vertex figure contains only even-sided faces.
Coxeter diagrams of quadrilateral domains are treated as a degenerate tetrahedron graph with 2 of 6 edges labeled as infinity, or as dotted lines. A logical requirement of at least one of two parallel mirrors being active limits the uniform cases to 9, and other ringed patterns are not valid.
(3 2 3 2)
Ideal triangle domains
There are infinitely many triangle group families including infinite orders. This article shows uniform tilings in 9 families: (∞ 3 2), (∞ 4 2), (∞ ∞ 2), (∞ 3 3), (∞ 4 3), (∞ 4 4), (∞ ∞ 3), (∞ ∞ 4), and (∞ ∞ ∞).
(∞ 3 2)
The ideal (∞ 3 2) triangle group, Coxeter group [∞,3], orbifold (*∞32) contains these uniform tilings:
(∞ 4 2)
The ideal (∞ 4 2) triangle group, Coxeter group [∞,4], orbifold (*∞42) contains these uniform tilings:
(∞ 5 2)
The ideal (∞ 5 2) triangle group, Coxeter group [∞,5], orbifold (*∞52) contains these uniform tilings:
(∞ ∞ 2)
The ideal (∞ ∞ 2) triangle group, Coxeter group [∞,∞], orbifold (*∞∞2) contains these uniform tilings:
(∞ 3 3)
The ideal (∞ 3 3) triangle group, Coxeter group [(∞,3,3)], orbifold (*∞33) contains these uniform tilings.
(∞ 4 3)
The ideal (∞ 4 3) triangle group, Coxeter group [(∞,4,3)], orbifold (*∞43) contains these uniform tilings:
(∞ 4 4)
The ideal (∞ 4 4) triangle group, Coxeter group [(∞,4,4)], orbifold (*∞44) contains these uniform tilings.
(∞ ∞ 3)
The ideal (∞ ∞ 3) triangle group, Coxeter group [(∞,∞,3)], orbifold (*∞∞3) contains these uniform tilings.
(∞ ∞ 4)
The ideal (∞ ∞ 4) triangle group, Coxeter group [(∞,∞,4)], orbifold (*∞∞4) contains these uniform tilings.
(∞ ∞ ∞)
The ideal (∞ ∞ ∞) triangle group, Coxeter group [(∞,∞,∞)], orbifold (*∞∞∞) contains these uniform tilings.
Summary of tilings with infinite triangular fundamental domains
For a table of all uniform hyperbolic tilings with fundamental domains (p q r), where 2 ≤ p,q,r ≤ 8, and one or more as ∞.
References
John Horton Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
The EPINET project explores 2D hyperbolic (H²) tilings
Hyperbolic tilings
Uniform tilings
Mathematics-related lists | Uniform tilings in hyperbolic plane | [
"Physics"
] | 2,976 | [
"Tessellation",
"Uniform tilings",
"Hyperbolic tilings",
"Symmetry"
] |
18,641,757 | https://en.wikipedia.org/wiki/Ukrainian%20National%20Chernobyl%20Museum | The Ukrainian National Chernobyl Museum (Ukrayins'kyy natsional'nyy muzey "Chornobyl'") is a history museum in Kyiv, Ukraine, dedicated to the 1986 Chernobyl disaster and its consequences. It houses an extensive collection of visual media, artifacts, scale models, and other items. The museum is designed to educate the public about the many aspects of the disaster. Several exhibits depict the technical progression of the accident. There are also many areas dedicated to the loss of life and the cultural ramifications of the disaster.
Due to the nature of the subject material, the museum provides a visually engaging experience.
The museum occupies an early 20th-century building which formerly housed a fire brigade and was donated in 1992 by the State Fire Protection Guard.
Liquidator Remembrance Book
The museum supports the "Remembrance Book" (, Knyha Pam'yati) – a unique online database of Liquidators (Chernobyl disaster management personnel, some of whom sacrificed their lives) featuring personal pages with photos and brief structured information written on these pages. Data fields include "Radiation damage suffered", "Field of liquidation activity" and "Subsequent fate". The project started in 1997, containing over 5000 entries as of February, 2013. The database is currently available in Ukrainian language only. "Remembrance Book" is neither the only nor the complete nor official liquidators database but probably the only one open to public on the web.
Funding and patrons
The museum is founded and supported by the government of Ukraine and the local government of Kyiv. Private and foreign donations are also common. The museum has also received funding from the Japanese government.
Foreign languages availability
Guided tours in English and other Western languages can be organized, and many exhibit signs have already been translated into English. Recorded audio is translated into English and other languages.
Location and public transport access
The museum is located at 1 Khoryva Lane (provulok Khoryva, 1), in historic Podil neighborhood of the city centre.
The nearest Kyiv Metro station is Kontraktova Ploshcha station on Kontraktova Square, where various Kyiv tram, bus and marshrutka routes come together. Car parking space near the museum is very limited.
Gallery
See also
Chernobyl Nuclear Power Plant
References
External links
Official website
National Chernobyl Museum - museum page on the PRIPYAT.com community
Ukraine.com listing
Kyiv: Chernobyl Museum - article in TripAdvisor
Aftermath of the Chernobyl disaster
National museums of Ukraine
History museums in Ukraine
Museums established in 1992
1992 establishments in Ukraine
Disaster museums
Atomic tourism | Ukrainian National Chernobyl Museum | [
"Technology"
] | 534 | [
"Aftermath of the Chernobyl disaster",
"Environmental impact of nuclear power"
] |
18,643,856 | https://en.wikipedia.org/wiki/Helix%20of%20sustainability | The helix of sustainability is a concept coined to help the manufacturing industry move to more sustainable practices by mapping its models of raw material use and reuse onto those of nature. The environmental benefits of the use of crop-origin sustainable materials have been assumed to be self-evident, but as the debate on food vs fuel shows, the whole product life cycle must be examined in the light of social and environmental effects in addition to technical suitability and profitability.
The helix of sustainability is a concept created as a representation of the total systems approach to gain full advantage from manufacturing with sustainable materials, particularly biopolymers and biocomposites. In 2004, the concept was presented by Professor John Wood, then Chair of the Materials Foresight Panel at a DTI event hosted by the then Secretary of State for Industry (Jacqui Smith). In the same year, it was also used in the European Science Foundation exploratory workshop on environmentally friendly composites.
The advantages of working with crop origin raw materials are readily observed if the social and environmental impacts are considered as well as monetary cost (the Triple bottom line), and the helix of sustainability helps to demonstrate this. For the full potential of biopolymers to be realised it is essential that attention is paid to every aspect of the manufacturing process from design (how to cope with the uncertainties in properties associated with crop origin materials?), manufacture (can existing technologies be used?), through to end-of-life (can the redundant article be fed back into the materials cycle?). The entire supply chain must be considered because decisions taken at the design stage have significant effects right through the life of an article. Low-cost assembly techniques (e.g., snap-fits) may make dismantling or repair uneconomical. However, if, say, an easy-to-dismantle car is built, will there be any effect on the ability of the vehicle to absorb energy in a crash? At an even more fundamental level, what will be the social and environmental effects of the change in crop growing patterns? This low environmental impact approach to manufacturing is seen as an extension of waste reduction techniques, such as lean manufacturing.
Conventional cycles of use and reuse are circular. Consider the mechanical recovery of conventional polymers. A complex infrastructure is needed to recover the material at the end of an article's useful life. At the end of an article's life - say a PET carbonated drink bottle, the article must be separated from the waste stream, either by the consumer who throws it away, or by manual labour at the rubbish dump. It must then be transported to some facility to be reprocessed (using more labour and energy) back into a raw material. The heat and shear forces associated with the process of remanufacture tends to produce material with slightly degraded properties compared to the original material.
For sustainable material articles there is not such a great requirement for a dedicated recovery infrastructure. If a litter lout throws a crop origin biodegradable article on the ground, it will ultimately biodegrade into humus, water, and non-fossil CO2. If the article is placed into a compostable waste stream, the humus can then be used as fertiliser for the next generation of crops; there is also no requirement to sort biopolymer articles as there is with fossil polymer recycling. Note the difference between landfill and compost: the limited biological activity in landfill is slow and mostly anaerobic, resulting in the production of methane, whereas composting is a rapid aerobic process resulting in humus, water and non-fossil CO2. The energy bill for breaking down biodegradables into the fundamental building block molecules and then reassembling them into usable raw materials is large, but it uses direct solar energy rather than metered electricity. There is also no loss of properties with successive journeys through the cycle.
See also
Waste hierarchy
Industrial ecology
Mottainai
Biopolymers
Bioplastics
Non-food crop
References
Industrial ecology | Helix of sustainability | [
"Chemistry",
"Engineering"
] | 825 | [
"Industrial ecology",
"Industrial engineering",
"Environmental engineering"
] |
802,913 | https://en.wikipedia.org/wiki/Semiconductor%20detector | A semiconductor detector in ionizing radiation detection physics is a device that uses a semiconductor (usually silicon or germanium) to measure the effect of incident charged particles or photons.
Semiconductor detectors find broad application for radiation protection, gamma and X-ray spectrometry, and as particle detectors.
Detection mechanism
In semiconductor detectors, ionizing radiation is measured by the number of charge carriers that the radiation sets free in the detector material, which is arranged between two electrodes. Ionizing radiation produces free electrons and electron holes. The number of electron-hole pairs is proportional to the energy of the radiation transferred to the semiconductor. As a result, a number of electrons are transferred from the valence band to the conduction band, and an equal number of holes are created in the valence band. Under the influence of an electric field, electrons and holes travel to the electrodes, where they result in a pulse that can be measured in an outer circuit, as described by the Shockley-Ramo theorem. The holes travel in the opposite direction and can also be measured. As the amount of energy required to create an electron-hole pair is known, and is independent of the energy of the incident radiation, measuring the number of electron-hole pairs allows the energy of the incident radiation to be determined.
The energy required to produce electron-hole pairs is very low compared to the energy required to produce paired ions in a gas detector. Consequently, in semiconductor detectors the statistical variation of the pulse height is smaller and the energy resolution is higher. As the electrons travel fast, the time resolution is also very good, and is dependent upon the rise time. Compared with gaseous ionization detectors, the density of a semiconductor detector is very high, and charged particles of high energy can give off their energy in a semiconductor of relatively small dimensions.
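As a rough illustration of why semiconductor detectors resolve energy so well, the expected number of electron-hole pairs is N ≈ E/ε, where ε is the mean energy per pair, and the statistical spread is further reduced by the Fano factor. The sketch below uses commonly quoted approximate values (ε ≈ 3.6 eV for silicon, ≈ 3.0 eV for germanium, Fano factor ≈ 0.1); these numbers are illustrative assumptions, not figures taken from this article.

```python
# Estimate electron-hole pair statistics for a photon fully absorbed in a detector.
# epsilon = mean energy to create one e-h pair; fano = Fano factor (both approximate).
import math

def pair_statistics(photon_energy_ev, epsilon_ev, fano=0.1):
    n_pairs = photon_energy_ev / epsilon_ev
    sigma_pairs = math.sqrt(fano * n_pairs)          # Fano-limited fluctuation
    fwhm_ev = 2.355 * sigma_pairs * epsilon_ev       # statistical limit on resolution
    return n_pairs, fwhm_ev

for material, eps in [("Si", 3.6), ("Ge", 3.0)]:
    n, fwhm = pair_statistics(662e3, eps)            # 662 keV gamma ray (Cs-137)
    print(f"{material}: ~{n:.0f} pairs, statistical FWHM ~{fwhm:.0f} eV")
```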
Detector types
Silicon detectors
Most silicon particle detectors work, in principle, by doping narrow (usually around 100 micrometers wide) silicon strips to turn them into diodes, which are then reverse biased. As charged particles pass through these strips, they cause small ionization currents that can be detected and measured. Arranging thousands of these detectors around a collision point in a particle accelerator can yield an accurate picture of what paths particles take. Silicon detectors have a much higher resolution in tracking charged particles than older technologies such as cloud chambers or wire chambers. The drawback is that silicon detectors are much more expensive than these older technologies and require sophisticated cooling to reduce leakage currents (a source of noise). They also suffer degradation over time from radiation; however, this can be greatly reduced thanks to the Lazarus effect.
Diamond detectors
Diamond detectors have many similarities with silicon detectors but are expected to offer significant advantages – in particular a high radiation hardness and very low drift currents. They are also suited to neutron detection. At present, however, they are much more expensive and more difficult to manufacture.
Germanium detectors
Germanium detectors are mostly used for gamma spectroscopy in nuclear physics, as well as x-ray spectroscopy. While silicon detectors cannot be thicker than a few millimeters, germanium can have a sensitive layer (depletion region) thickness of centimeters, and therefore can be used as a total absorption detector for gamma rays up to a few MeV. These detectors are also called high-purity germanium detectors (HPGe) or hyperpure germanium detectors. Before current purification techniques were refined, germanium crystals could not be produced with purity sufficient to enable their use as spectroscopy detectors. Impurities in the crystals trap electrons and holes, ruining the performance of the detectors. Consequently, germanium crystals were doped with lithium ions (Ge(Li)), in order to produce an intrinsic region in which the electrons and holes would be able to reach the contacts and produce a signal.
When germanium detectors were first developed, only very small crystals were available. Low efficiency was the result, and germanium detector efficiency is still often quoted in relative terms to a "standard" 3″ x 3″ NaI(Tl) scintillation detector. Crystal growth techniques have since improved, allowing detectors to be manufactured that are as large as or larger than commonly available NaI crystals, although such detectors cost more than €100,000 (US$113,000).
HPGe detectors commonly use lithium diffusion to make an n+ ohmic contact, and boron implantation to make a p+ contact. Coaxial detectors with a central n+ contact are referred to as n-type detectors, while p-type detectors have a p+ central contact. The thickness of these contacts represents a dead layer around the surface of the crystal within which energy depositions do not result in detector signals. The central contact in these detectors is opposite to the surface contact, making the dead layer in n-type detectors smaller than the dead layer in p-type detectors. Typical dead layer thicknesses are several hundred micrometers for a Li diffusion layer and a few tenths of a micrometer for a B implantation layer.
The major drawback of germanium detectors is that they must be cooled to liquid nitrogen temperatures to produce spectroscopic data. At higher temperatures, the electrons can easily cross the band gap in the crystal and reach the conduction band, where they are free to respond to the electric field, producing too much electrical noise to be useful as a spectrometer. Cooling to liquid nitrogen temperature (77K) reduces thermal excitations of valence electrons so that only a gamma ray interaction can give an electron the energy necessary to cross the band gap and reach the conduction band. Cooling with liquid nitrogen is inconvenient, as the detector requires hours to cool down to operating temperature before it can be used, and cannot be allowed to warm up during use. Ge(Li) crystals could never be allowed to warm up, as the lithium would drift out of the crystal, ruining the detector. HPGe detectors can be allowed to warm up to room temperature when not in use.
Commercial systems became available that use advanced refrigeration techniques (for example pulse tube refrigerator) to eliminate the need for liquid nitrogen cooling.
Germanium detectors with multi-strip electrodes, orthogonal on opposing faces, can indicate the 2-D location of the ionization trail within a large single crystal of Ge. Detectors like this have been used in COSI balloon-borne astronomy missions (NASA, 2016) and will be used in an orbital observatory, the Compton Spectrometer and Imager (COSI) (NASA, 2025).
Because germanium detectors are highly efficient in photon detection, they can be used for a variety of additional applications. High-purity germanium detectors are used by Homeland Security to differentiate between naturally occurring radioactive material (NORM) and weaponized or otherwise harmful radioactive material. They are also used in environmental monitoring, owing to concerns about the use of nuclear power. Finally, high-purity germanium detectors are used for medical imaging and nuclear physics research, making them versatile detectors with a wide range of applications.
Cadmium telluride and cadmium zinc telluride detectors
Cadmium telluride (CdTe) and cadmium zinc telluride (CZT) detectors have been developed for use in X-ray spectroscopy and gamma spectroscopy. The high density of these materials means they can effectively attenuate X-rays and gamma-rays with energies of greater than 20 keV that traditional silicon-based sensors are unable to detect. The wide band gap of these materials also means they have high resistivity and are able to operate at, or close to, room temperature (~295K) unlike germanium-based sensors. These detector materials can be used to produce sensors with different electrode structures for imaging and high-resolution spectroscopy. However, CZT detectors are generally unable to match the resolution of germanium detectors, with some of this difference being attributable to poor positive charge-carrier transport to the electrode. Efforts to mitigate this effect have included the development of novel electrodes to negate the need for both polarities of carriers to be collected.
Integrated Systems
Semiconductor detectors are often commercially integrated into larger systems for various radiation measurement applications.
Automated Sample Changing for Germanium Detectors
Gamma spectrometers using HPGe detectors are often used for measurement of low levels of gamma-emitting radionuclides in environmental samples, which requires a low background environment, usually achieved by enclosing the sample and detector in a lead shield known as a 'lead castle'. Automated systems have been developed to sequentially move a number of samples into and out of the lead castle for measurement. Due to the complexities of opening the shield and moving the samples, this automation has traditionally been expensive, but lower-cost autosamplers have recently been introduced.
Radioactive Waste Assay Machines
Semiconductor detectors especially HPGe are often integrated into devices for characterising packaged radioactive waste. This can be as simple as detectors being mounted on a moveable platform to be brought to an area for in-situ measurements and paired with shielding to restrict the field-of-view of the detector to the area of interest for one-shot "open detector geometry" measurements, or for waste in drums, systems such as the Segmented Gamma Scanner (SGS) combine a semiconductor detector with integrated mechatronics to rotate the item and scan the detector across different sections. If the detector field of view is scanned across small areas of the item in multiple axes as is done with a Tomographic Gamma Scanner (TGS), Tomography can be used to extract 3D information about the density and gamma emissions of the item.
Gamma Cameras
Semiconductor detectors are used in some gamma cameras and gamma imaging systems.
See also
Lazarus effect
Pandemonium effect
Synthetic diamonds
Total absorption spectroscopy
X-ray spectroscopy
Microstrip detector
Hybrid pixel detector
Liulin type instruments
References
External links
Silicon Detector powerpoint delivered for EDIT (Excellence in Detectors and Instrumentation Technologies) 2011 at CERN, M. Krammer, F. Hartmann.
Experimental particle physics
Ionising radiation detectors
Medical imaging
Particle detectors
X-ray instrumentation | Semiconductor detector | [
"Physics",
"Technology",
"Engineering"
] | 2,034 | [
"Radioactive contamination",
"X-ray instrumentation",
"Measuring instruments",
"Particle detectors",
"Ionising radiation detectors",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
803,558 | https://en.wikipedia.org/wiki/Helicase-dependent%20amplification | Helicase-dependent amplification (HDA) is a method for in vitro DNA amplification (like the polymerase chain reaction) that takes place at a constant temperature.
Introduction
The polymerase chain reaction is the most widely used method for in vitro DNA amplification for purposes of molecular biology and biomedical research. This process involves the separation of the double-stranded DNA into single strands at high heat (the denaturation step, typically achieved at 95–97 °C), annealing of the primers to the single-stranded DNA (the annealing step), and copying of the single strands by a DNA polymerase to create new double-stranded DNA (the extension step); cycling between these temperatures requires the reaction to be done in a thermal cycler. These bench-top machines are large, expensive and costly to run and maintain, limiting the potential applications of DNA amplification in situations outside the laboratory (e.g., in the identification of potentially hazardous micro-organisms at the scene of investigation, or at the point of care of a patient). Although PCR is usually associated with thermal cycling, the original patent by Mullis et al. disclosed the use of a helicase as a means for denaturation of double-stranded DNA, thereby including isothermal nucleic acid amplification.
In vivo, DNA is replicated by DNA polymerases with various accessory proteins, including a DNA helicase that acts to separate the DNA by unwinding the DNA double helix. HDA was developed from this concept, using a helicase (an enzyme) to denature the DNA.
Methodology
Strands of double-stranded DNA are first separated by a DNA helicase and coated by single-stranded DNA (ssDNA)-binding proteins. In the second step, two sequence-specific primers hybridise to each border of the DNA template. DNA polymerases are then used to extend the primers annealed to the templates to produce double-stranded DNA, and the two newly synthesized DNA products are then used as substrates by DNA helicases, entering the next round of the reaction. Thus, a simultaneous chain reaction develops, resulting in exponential amplification of the selected target sequence (see Vincent et al., 2004 for a schematic diagram).
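Because each round of unwinding, priming and extension can at most double the number of copies of the target, the amplification follows the same idealized exponential growth law as PCR. The sketch below is a generic illustration of that growth, not a model of any specific HDA assay; the starting copy number and per-round efficiency are made-up values.

```python
# Idealized exponential amplification: each round multiplies the number of target
# copies by (1 + efficiency), where efficiency = 1 corresponds to perfect doubling.
def copies_after(rounds, initial_copies=10, efficiency=0.9):
    return initial_copies * (1 + efficiency) ** rounds

for rounds in (10, 20, 30):
    print(f"{rounds} rounds: ~{copies_after(rounds):.2e} copies")
```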
Present progress and the advantages and disadvantages of HDA
Since the publication of its discovery, HDA technology has been used for a "simple, easy to adapt nucleic acid test for the detection of Clostridioides difficile". Other applications include the rapid detection of Staphylococcus aureus by the amplification and detection of a short DNA sequence specific to the bacterium. The advantage of HDA is that it provides a rapid method of amplifying a specific nucleic acid target at a constant (isothermal) temperature, without requiring a thermal cycler. However, the optimisation of primers and sometimes buffers is required beforehand by the researcher. Normally primer and buffer optimisation is tested and achieved through PCR, raising the question of the need to spend extra on a separate system to do the actual amplification. Despite the selling point that HDA obviates the need for a thermal cycler and therefore allows research to be conducted in the field, much of the work required to detect potentially hazardous microorganisms is carried out in a research or hospital laboratory setting regardless. At present, mass diagnoses from a great number of samples cannot yet be achieved by HDA, whereas PCR reactions carried out in a thermal cycler that can hold multi-well sample plates allow for the amplification and detection of the intended DNA target from a maximum of 96 samples. Reagents for HDA are also relatively expensive compared with PCR reagents, more so since they come as a ready-made kit.
External links
References
Genetics techniques
Helicases
Molecular biology techniques
Laboratory techniques
Amplifiers
Biotechnology | Helicase-dependent amplification | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 805 | [
"Genetics techniques",
"Genetic engineering",
"Biotechnology",
"Molecular biology techniques",
"nan",
"Molecular biology",
"Amplifiers"
] |
803,716 | https://en.wikipedia.org/wiki/Photoemission%20electron%20microscopy | Photoemission electron microscopy (PEEM, also called photoelectron microscopy, PEM) is a type of electron microscopy that utilizes local variations in electron emission to generate image contrast. The excitation is usually produced by ultraviolet light, synchrotron radiation or X-ray sources. PEEM measures the absorption coefficient indirectly by collecting the emitted secondary electrons generated in the electron cascade that follows the creation of the primary core hole in the absorption process. PEEM is a surface-sensitive technique because the emitted electrons originate from a shallow layer. In physics, this technique is referred to as PEEM, which goes together naturally with low-energy electron diffraction (LEED) and low-energy electron microscopy (LEEM). In biology, it is called photoelectron microscopy (PEM), which fits with photoelectron spectroscopy (PES), transmission electron microscopy (TEM), and scanning electron microscopy (SEM).
History
Initial development
In 1933, Ernst Brüche reported images of cathodes illuminated by UV light. This work was extended by two of his colleagues, H. Mahl and J. Pohl. Brüche made a sketch of his photoelectron emission microscope in his 1933 paper (Figure 1). This is evidently the first photoelectron emission microscope (PEEM).
Improved techniques
In 1963, Gertrude F. Rempfer designed the electron optics for an early ultrahigh-vacuum (UHV) PEEM. In 1965, G. Burroughs at the Night Vision Laboratory, Fort Belvoir, Virginia built the bakeable electrostatic lenses and metal-sealed valves for PEEM. During the 1960s, in the PEEM as well as the TEM, the specimens were grounded and could be transferred in the UHV environment to several positions for photocathode formation, processing and observation. These electron microscopes were used for only a brief period of time, but the components live on. The first commercially available PEEM was designed and tested by Engel during the 1960s for his thesis work under E. Ruska; it was developed into a marketable product, the "Metioskop KE3", by Balzers in 1971. The electron lenses and voltage divider of the PEEM were incorporated into one version of a PEEM for biological studies in Eugene, Oregon around 1970.
Further research
During the 1970s and 1980s the second-generation (PEEM-2) and third-generation (PEEM-3) microscopes were constructed. PEEM-2 is a conventional, non-aberration-corrected instrument employing electrostatic lenses. It uses a cooled charge-coupled device (CCD) fiber-coupled to a phosphor to detect the electron-optical image. The aberration-corrected microscope PEEM-3 employs a curved electron mirror to counter the lowest-order aberrations of the electron lenses and the accelerating field.
Background
Photoelectric effect
The photoemission or photoelectric effect is a quantum electronic phenomenon in which electrons (photoelectrons) are emitted from matter after the absorption of energy from electromagnetic radiation such as UV light or X-ray.
When UV light or X-ray is absorbed by matter, electrons are excited from core levels into unoccupied states, leaving empty core states. Secondary electrons are generated by the decay of the core hole. Auger processes and inelastic electron scattering create a cascade of low-energy electrons. Some electrons penetrate the sample surface and escape into vacuum. A wide spectrum of electrons is emitted with energies between the energy of the illumination and the work function of the sample. This wide electron distribution is the principal source of image aberration in the microscope.
Quantitative analysis
Using Einstein's method, the following equations are used:
energy of photon = energy needed to remove an electron + kinetic energy of the emitted electron
$$hf = \varphi + E_{k,\max}, \qquad \varphi = h f_0, \qquad E_{k,\max} = \tfrac{1}{2} m v_m^2$$
where
$h$ is the Planck constant;
$f$ is the frequency of the incident photon;
$\varphi$ is the work function;
$E_{k,\max}$ is the maximum kinetic energy of ejected electrons;
$f_0$ is the threshold frequency for the photoelectric effect to occur;
$m$ is the rest mass of the ejected electron;
$v_m$ is the speed of the ejected electron.
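For a rough feel of the numbers involved, the sketch below evaluates the Einstein relation for a hypothetical surface; the 4.5 eV work function and the 254 nm illumination wavelength are illustrative assumptions, not values tied to any particular PEEM experiment.

```python
# Maximum photoelectron kinetic energy and speed from the Einstein photoelectric relation.
# The work function (4.5 eV) and wavelength (254 nm) are illustrative assumptions.
import math

H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt
M_E = 9.1093837015e-31  # electron rest mass, kg

def max_kinetic_energy_ev(wavelength_m: float, work_function_ev: float) -> float:
    """E_k,max = h*f - phi, in eV; a negative result means no photoemission."""
    photon_energy_ev = H * C / wavelength_m / EV
    return photon_energy_ev - work_function_ev

def electron_speed(e_k_ev: float) -> float:
    """Non-relativistic electron speed (m/s) from a kinetic energy given in eV."""
    return math.sqrt(2.0 * e_k_ev * EV / M_E)

if __name__ == "__main__":
    e_k = max_kinetic_energy_ev(254e-9, 4.5)   # UV lamp line on an assumed 4.5 eV surface
    print(f"E_k,max = {e_k:.2f} eV, v_m = {electron_speed(e_k):.2e} m/s")
```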
Electron emission microscopy
Electron emission microscopy is a type of electron microscopy in which the information carrying beam of electrons originates from the specimen. The source of energy causing the electron emission can be heat (thermionic emission), light (photoelectron emission), ions, or neutral particles, but normally excludes field emission and other methods involving a point source or tip microscopy.
Photoelectron imaging
Photoelectron imaging includes any form of imaging in which the source of information is the distribution of points from which electrons are ejected from the specimen by the action of photons. The photoelectron imaging technique with the highest resolution is presently photoemission electron microscopy using UV light.
Photoemission electron microscope
A photoemission electron microscope is a parallel imaging instrument. It creates at any given moment a complete picture of the photoelectron distribution emitted from the imaged surface region.
Light sources
The viewed area of the specimen must be illuminated homogeneously with appropriate radiation (ranging from UV to hard x-rays). UV light is the most common radiation used in PEEM because very bright sources are available, such as mercury lamps. However, other wavelengths (like soft x-rays) are preferred where analytical information is required.
Electron optical column and resolution
The electron optical column contains two or more electrostatic or magnetic electron lenses, corrector elements such as a stigmator and deflector, and an angle-limiting aperture in the plane of one of the lenses.
As in any emission electron microscope, the objective or cathode lens determines the resolution. The latter is dependent on the electron-optical qualities, such as spherical aberrations, and the energy spread of the photoemitted electrons. The electrons are emitted into the vacuum with an angular distribution close to a cosine square function. A significant velocity component parallel to the surface will decrease the lateral resolution. The faster electrons, leaving the surface exactly along the center line of the PEEM, will also negatively influence the resolution due to the chromatic aberration of the cathode lens. The resolution is inversely proportional to the accelerating field strength at the surface but proportional to the energy spread of the electrons. So the resolution r is approximately:
$$r \approx \frac{d\,\Delta E}{e\,U}$$
In the equation, d is the distance between the specimen and the objective, ΔE is the distribution width of the initial electron energies and U is the accelerating voltage.
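As a rough illustration of this relation, the sketch below evaluates the estimate for plausible operating values; the 2 mm specimen–objective gap, 1 eV energy spread and 20 kV accelerating voltage are assumptions chosen only to show the order of magnitude.

```python
# Order-of-magnitude estimate of PEEM lateral resolution, r ~ d * dE / (e * U).
# The specimen-objective distance, energy spread and voltage below are illustrative assumptions.

def peem_resolution(d_m: float, delta_e_ev: float, u_volts: float) -> float:
    """Approximate lateral resolution in metres.

    With the energy spread expressed in eV and the voltage in volts, the
    elementary charge cancels, so delta_e_ev / u_volts is dimensionless.
    """
    return d_m * delta_e_ev / u_volts

if __name__ == "__main__":
    r = peem_resolution(d_m=2e-3, delta_e_ev=1.0, u_volts=20e3)
    print(f"r ~ {r * 1e9:.0f} nm")   # about 100 nm for these assumed values
```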
Besides the cathode or objective lens, situated on the left-hand side of Figure 4, two more lenses are utilized to create an image of the specimen: an intermediate three-electrode lens is used to vary the total magnification from 100× (when the lens is deactivated) up to 1000× when needed. On the right-hand side of Figure 4 is the projector, a three-electrode lens combined with a two-element deceleration lens. The main task of this lens combination is the deceleration of the fast 20 keV electrons to energies for which the image intensifier has its highest sensitivity. Such an image intensifier has its best performance for impinging electrons with kinetic energies of roughly 1 keV.
Energy filter
An energy filter can be added to the instrument in order to select the electrons that will contribute to the image. This option is particularly used for analytical applications of the PEEM. By using an energy filter, a PEEM microscope can be seen as performing imaging ultraviolet photoelectron spectroscopy (UPS) or imaging X-ray photoelectron spectroscopy (XPS). By using this method, spatially resolved photoemission spectra can be acquired with spatial resolutions on the 100 nm scale and with sub-eV energy resolution. Using such an instrument, one can acquire elemental images with chemical-state sensitivity or work function maps. Also, since the photoelectrons are emitted only at the very surface of the material, surface termination maps can be acquired.
Detector
A detector is placed at the end of the electron optical column. Usually, a phosphor screen is used to convert the electron image to a photon image. The choice of phosphor type is governed by resolution considerations. A multichannel plate detector imaged by a CCD camera can substitute for the phosphor screen.
Time-resolved PEEM
Compared to many other electron microscopy techniques, time-resolved PEEM offers a very high temporal resolution of only a few femtoseconds with prospects of advancing it to the attosecond regime. The reason is that temporal electron pulse broadening does not deteriorate the temporal resolution because electrons are only used to achieve a high spatial resolution. The temporal resolution is reached by using very short light pulses in a pump-probe setup. A first pulse optically excites dynamics like surface plasmons on a sample surface and a second pulse probes the dynamics after a certain waiting time by photoemitting electrons. The photoemission rate is influenced by the local excitation level of the sample. Hence, spatial information about the dynamics on the sample can be gained. By repeating this experiment with a series of waiting times between pump and probe pulse, a movie of the dynamics on a sample can be recorded.
Laser pulses in the visible spectral range are typically used in combination with a PEEM. They offer a temporal resolution of a few to 100 fs. In recent years, pulses with shorter wavelengths have been used to achieve a more direct access to the instantaneous electron excitation in the material. Here, a first pulse in the visible excites dynamics near the sample surface and a second pulse with a photon energy significantly above the work function of the material emits the electrons. By employing additional time-of-flight or high-pass energy recording in the PEEM, information about the instantaneous electronic distribution in a nanostructure can be extracted with high spatial and temporal resolution.
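The pump-probe principle can be sketched numerically: to first approximation, the recorded photoemission yield as a function of pump-probe delay is the local excitation decay convolved with the probe pulse envelope. In the toy model below, the 80 fs decay time and 25 fs probe duration are illustrative assumptions, not values from any specific experiment.

```python
# Toy pump-probe trace: an exponentially decaying surface excitation sampled by a
# Gaussian probe pulse at a series of delays. All time constants are assumed values.
import math

def probe_envelope(t_fs: float, fwhm_fs: float) -> float:
    """Gaussian probe intensity envelope with the given full width at half maximum."""
    sigma = fwhm_fs / 2.3548
    return math.exp(-0.5 * (t_fs / sigma) ** 2)

def pump_probe_signal(delay_fs: float, decay_fs: float, fwhm_fs: float,
                      t_min: float = -200.0, t_max: float = 800.0, dt: float = 1.0) -> float:
    """Photoemission yield vs. delay: excitation decay convolved with the probe pulse."""
    signal, t = 0.0, t_min
    while t <= t_max:
        excitation = math.exp(-t / decay_fs) if t >= 0.0 else 0.0  # pump arrives at t = 0
        signal += excitation * probe_envelope(t - delay_fs, fwhm_fs) * dt
        t += dt
    return signal

if __name__ == "__main__":
    for delay in (-50, 0, 50, 100, 200):  # delays in femtoseconds
        print(delay, round(pump_probe_signal(delay, decay_fs=80.0, fwhm_fs=25.0), 2))
```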
Efforts to achieve attosecond temporal resolution, and with it to directly record optical fields around nanostructures with so far unreached spatio-temporal resolution, are still ongoing.
Limitations
The general limitation of PEEM, which it shares with most surface science methods, is that it operates only under fairly restricted vacuum conditions. Whenever electrons are used to excite a specimen or carry information from its surface, there has to be a vacuum with an appropriate mean free path for the electrons. With in-situ PEEM techniques, however, water and aqueous solutions can be observed by PEEM.
The resolution of PEEM is limited to about 10 nm, which results from a spread of the photoelectron emission angle. Angle resolved photoemission spectroscopy (ARPES) is a powerful tool for structure analysis. However, it may be difficult to make angle-resolved and energy-selective PEEM measurements because of a lack of intensity. The availability of synchrotron-radiation light sources can offer exciting possibilities in this regard.
Comparison with other techniques
Transmission electron microscopy (TEM) and scanning electron microscopy (SEM): PEEM differs from these two microscopies by using an electric accelerating field at the surface of specimen. The specimen is part of the electron-optical system.
Low-energy electron microscopy (LEEM) and mirror electron microscopy (MEM): these two electron emission microscopies use electron guns to supply beams that are directed toward the specimen, decelerated, and backscattered from the specimen or reflected just before reaching it. In photoemission electron microscopy (PEEM) the same specimen geometry and immersion lens are used, but the electron guns are omitted.
New PEEM technologies
Time-resolved photoemission electron microscopy (TR-PEEM) is well suited for real-time observation of fast processes on surfaces when equipped with pulsed synchrotron radiation for illumination.
Time-of-flight Photoemission electron microscopy (TOF-PEEM): TOF-PEEM is PEEM using an ultrafast gated CCD camera or a time-and space-resolving counting detector for observing fast processes on surfaces.
Multiphoton Photoemission electron microscopy: Multiphoton PEEM can be employed for the study of localized surface plasmon excitations in nanoclusters or for direct spatial observation of the hot-electron lifetime in structured films using femtosecond lasers.
PEEM in liquids and dense gases: The development of microfabricated thin liquid cells in the late 1990s enabled wide field-of-view transmission X-ray microscopy of liquid and gaseous samples confined between two SiN membranes. In such a configuration, the vacuum side of the second membrane was coated with the photoemitting material and PEEM was used to record the spatial variations of the transmitted light. True PEEM imaging of liquid interfaces with photoelectrons has been realized through ultrathin electron-transparent membranes such as graphene. Further development of UHV-compatible graphene liquid cells has enabled studies of electrochemical and electrified liquid–solid interfaces with standard PEEM setups without the use of differential pumping.
Notes
References
James A. Samson, David L. Ederer (1998). Vacuum Ultraviolet Spectroscopy. Academic Press
Andrzej Wieckowski, Elena R. Savinova, Constantinos G. Vayenas (2003). Catalysis and Electrocatalysis at Nanoparticle Surfaces. CRC Press
Harm Hinrich Rotermund. Imaging of Dynamic Processes on Surface by Light. Surface Science Reports, 29 (1997) 265-364
E. Bauer, M. Mundschau, W. Sweich, W. Telieps. Surface Studies by Low-energy Electron Microscopy (LEEM) and Conventional UV Photoemission Electron Microscopy (PEEM). Ultramicroscopy, 31 (1989) 49-57
W. Engel, M. Kordesch, H.H. Rotermund, S. Kubala, A. von Oertzen. A UHV-compatible photoelectron emission microscope for applications in surface science. Ultramicroscopy, 36 (1991) 148-153
H.H. Rotermund, W. Engel, M. Kordesch, G. Ertl. Imaging of spatio-temporal pattern evolution during carbon monoxide oxidation on platinum. Nature, 343 (1990) 355-357
H.H. Rotermund, W. Engel, S. Jakubith, A. von Oertzen, G. Ertl. Methods and application of UV photoelectron microscopy in heterogeneous catalysis. Ultramicroscopy, 36 (1991) 164-172
O. Renault, N. Barrett, A. Bailly, L.F. Zagonel, D. Mariolle, J.C. Cezar, N.B. Brookes, K. Winkler, B. Krömker and D. Funnemann, Energy-filtered XPEEM with NanoESCA using synchrotron and laboratory X-ray sources: Principles and first demonstrated results; Surface Science, Volume 601, Issue 20, 15 October 2007, Pages 4727–4732.
External links
http://xraysweb.lbl.gov/peem2/webpage/Project/TutorialPEEM.shtml
Electron microscopy
Electron spectroscopy | Photoemission electron microscopy | [
"Physics",
"Chemistry"
] | 3,125 | [
"Electron",
"Electron microscopy",
"Spectrum (physical sciences)",
"Electron spectroscopy",
"Microscopy",
"Spectroscopy"
] |
804,039 | https://en.wikipedia.org/wiki/Dynamical%20friction | In astrophysics, dynamical friction or Chandrasekhar friction, sometimes called gravitational drag, is loss of momentum and kinetic energy of moving bodies through gravitational interactions with surrounding matter in space. It was first discussed in detail by Subrahmanyan Chandrasekhar in 1943.
Intuitive account
An intuition for the effect can be obtained by thinking of a massive object moving through a cloud of smaller lighter bodies. The effect of gravity causes the light bodies to accelerate and gain momentum and kinetic energy (see slingshot effect). By conservation of energy and momentum, we may conclude that the heavier body will be slowed by an amount to compensate. Since there is a loss of momentum and kinetic energy for the body under consideration, the effect is called dynamical friction.
Another equivalent way of thinking about this process is that as a large object moves through a cloud of smaller objects, the gravitational effect of the larger object pulls the smaller objects towards it. There then exists a concentration of smaller objects behind the larger body (a gravitational wake), as it has already moved past its previous position. This concentration of small objects behind the larger body exerts a collective gravitational force on the large object, slowing it down.
Of course, the mechanism works the same for all masses of interacting bodies and for any relative velocities between them. However, while the most probable outcome for an object moving through a cloud is a loss of momentum and energy, as described intuitively above, in the general case it might be either loss or gain. When the body under consideration is gaining momentum and energy the same physical mechanism is called slingshot effect, or gravity assist. This technique is sometimes used by interplanetary probes to obtain a boost in velocity by passing close by a planet.
Chandrasekhar dynamical friction formula
The full Chandrasekhar dynamical friction formula for the change in velocity of the object involves integrating over the phase space density of the field of matter and is far from transparent. The Chandrasekhar dynamical friction formula reads as
$$\frac{d\vec{v}_M}{dt} = -16\pi^2 (\ln\Lambda)\, G^2\, m\,(M+m)\, \frac{1}{v_M^{3}} \left[\int_0^{v_M} f(v)\, v^2\, dv\right] \vec{v}_M$$
where
$G$ is the gravitational constant
$M$ is the mass under consideration
$m$ is the mass of each star in the star distribution
$\vec{v}_M$ is the velocity of the object under consideration, in a frame where the center of gravity of the matter field is initially at rest
$\ln\Lambda$ is the "Coulomb logarithm"
$f(v)$ is the number density distribution of the stars
The result of the equation is the gravitational deceleration produced on the object under consideration by the surrounding stars or celestial bodies; the left-hand side, the rate of change of velocity with time, is an acceleration.
Maxwell's distribution
A commonly used special case is where there is a uniform density in the field of matter, with matter particles significantly lighter than the major particle under consideration, i.e., $m \ll M$, and with a Maxwellian distribution for the velocity of matter particles, i.e.,
$$f(v) = \frac{N}{(2\pi\sigma^{2})^{3/2}}\, e^{-v^{2}/(2\sigma^{2})}$$
where $N$ is the total number of stars and $\sigma$ is the dispersion. In this case, the dynamical friction formula is as follows:
$$\frac{d\vec{v}_M}{dt} = -\frac{4\pi (\ln\Lambda)\, G^{2}\, \rho\, M}{v_M^{3}} \left[\operatorname{erf}(X) - \frac{2X}{\sqrt{\pi}}\, e^{-X^{2}}\right] \vec{v}_M$$
where
$X = v_M/(\sqrt{2}\,\sigma)$ is the ratio of the velocity of the object under consideration to the modal velocity of the Maxwellian distribution.
$\operatorname{erf}$ is the error function.
$\rho$ is the density of the matter field.
In general, a simplified equation for the force from dynamical friction has the form
$$F_{\mathrm{dyn}} \approx C\,\frac{4\pi G^{2} M^{2} \rho \ln\Lambda}{v_M^{2}}$$
where the dimensionless numerical factor $C$ depends on how $v_M$ compares to the velocity dispersion of the surrounding matter.
But note that this simplified expression diverges as $v_M \to 0$; caution should therefore be exercised when using it.
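A minimal numerical sketch of the Maxwellian special case above is given below. The mass, background density, velocity dispersion and Coulomb logarithm are illustrative assumptions chosen only to show how the deceleration is evaluated, not values for any particular astrophysical system.

```python
# Chandrasekhar dynamical friction deceleration for a Maxwellian background:
# |dv/dt| = (4*pi*lnL*G^2*rho*M / v^2) * [erf(X) - 2X/sqrt(pi) * exp(-X^2)],
# with X = v / (sqrt(2)*sigma). All numerical inputs below are illustrative assumptions.
import math

G = 6.674e-11  # gravitational constant, SI units

def dynamical_friction_decel(v: float, mass: float, rho: float,
                             sigma: float, coulomb_log: float) -> float:
    """Magnitude of dv/dt (m/s^2) opposing the motion of a body of mass `mass`."""
    x = v / (math.sqrt(2.0) * sigma)
    bracket = math.erf(x) - 2.0 * x / math.sqrt(math.pi) * math.exp(-x * x)
    return 4.0 * math.pi * coulomb_log * G ** 2 * rho * mass * bracket / v ** 2

if __name__ == "__main__":
    # Hypothetical example: a 10^6 solar-mass object moving at 200 km/s through a
    # stellar background of density 0.1 M_sun/pc^3 and dispersion 150 km/s.
    m_sun, pc = 1.989e30, 3.086e16
    a = dynamical_friction_decel(v=2.0e5, mass=1e6 * m_sun,
                                 rho=0.1 * m_sun / pc ** 3, sigma=1.5e5, coulomb_log=10.0)
    print(f"deceleration of the massive body: {a:.3e} m/s^2")
```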
Density of the surrounding medium
The greater the density of the surrounding medium, the stronger the force from dynamical friction. Similarly, the force is proportional to the square of the mass of the object. One factor of the mass comes from the gravitational force between the object and the wake; the second arises because the more massive the object, the more matter will be pulled into the wake. The force is also proportional to the inverse square of the velocity. This means the fractional rate of energy loss drops rapidly at high velocities. Dynamical friction is, therefore, unimportant for objects that move relativistically, such as photons. This can be rationalized by realizing that the faster the object moves through the medium, the less time there is for a wake to build up behind it.
Applications
Dynamical friction is particularly important in the formation of planetary systems and interactions between galaxies.
Protoplanets
During the formation of planetary systems, dynamical friction between the protoplanet and the protoplanetary disk causes energy to be transferred from the protoplanet to the disk. This results in the inward migration of the protoplanet.
Galaxies
When galaxies interact through collisions, dynamical friction between stars causes matter to sink toward the center of the galaxy and the orbits of stars to be randomized. This process is called violent relaxation and can change two spiral galaxies into one larger elliptical galaxy.
Galaxy clusters
The effect of dynamical friction explains why the brightest (most massive) galaxy tends to be found near the center of a galaxy cluster. The cumulative effect of two-body encounters slows down the galaxy, and the drag effect is greater the larger the galaxy mass. When the galaxy loses kinetic energy, it moves towards the center of the cluster.
However the observed velocity dispersion of galaxies within a galaxy cluster does not depend on the mass of the galaxies. The explanation is that a galaxy cluster relaxes by violent relaxation, which sets the velocity dispersion to a value independent of the galaxy's mass.
Star clusters
The effect of dynamical friction explains why the most massive stars of a star cluster tend to be found near its center. This concentration of more massive stars in the cluster's core tends to favor collisions between stars, which may trigger the runaway collision mechanism to form intermediate-mass black holes. Globular clusters orbiting through the stellar field of a galaxy experience dynamical friction. This drag force causes the cluster to lose energy and spiral in toward the galactic center.
Photons
Fritz Zwicky proposed in 1929 that a gravitational drag effect on photons could be used to explain cosmological redshift as a form of tired light. However, his analysis had a mathematical error, and his approximation to the magnitude of the effect should actually have been zero, as pointed out in the same year by Arthur Stanley Eddington. Zwicky promptly acknowledged the correction, although he continued to hope that a full treatment would be able to show the effect.
It is now known that the effect of dynamical friction on photons or other particles moving at relativistic speeds is negligible, since the magnitude of the drag is inversely proportional to the square of velocity. Cosmological redshift is conventionally understood to be a consequence of the expansion of the universe.
See also
Dynamic friction
Final-parsec problem
Notes and references
External links
Astrophysics
Effects of gravity
Stellar dynamics | Dynamical friction | [
"Physics",
"Astronomy"
] | 1,360 | [
"Astronomical sub-disciplines",
"Astrophysics",
"Stellar dynamics"
] |
804,702 | https://en.wikipedia.org/wiki/Status%20quo%20bias | A status quo bias or default bias is a cognitive bias which results from a preference for the maintenance of one's existing state of affairs. The current baseline (or status quo) is taken as a reference point, and any change from that baseline is perceived as a loss or gain. Corresponding to different alternatives, this current baseline or default option is perceived and evaluated by individuals as a positive.
Status quo bias should be distinguished from a rational preference for the status quo, as when the current state of affairs is objectively superior to the available alternatives, or when imperfect information is a significant problem. A large body of evidence, however, shows that status quo bias frequently affects human decision-making. Status quo bias should also be distinguished from psychological inertia, which refers to a lack of intervention in the current course of affairs.
The bias intersects with other non-rational cognitive processes such as loss aversion, in which losses are weighed more heavily than comparable gains. Further non-rational cognitive processes include existence bias, the endowment effect, longevity, mere exposure, and regret avoidance. Experimental evidence for the detection of status quo bias is seen through the use of the reversal test. A vast number of experimental and field examples exist. Behaviour in regard to economics, retirement plans, health, and ethical choices shows evidence of the status quo bias.
Examples
Status quo experiments have been conducted over many fields with Kahneman, Thaler, and Knetsch (1991) creating experiments on the endowment effect, loss aversion and status quo bias.
Experiments have also been conducted on the effect of status quo bias on contributions to retirement plans, and Fevrier & Gay (2004) studied status quo bias in organ donation consent.
Questionnaire: Samuelson and Zeckhauser (1988) demonstrated status quo bias using a questionnaire in which subjects faced a series of decision problems, which were alternately framed to be with and without a pre-existing status quo position. Subjects tended to remain with the status quo when such a position was offered to them. Results of the experiment further show that the relative advantage of the status quo increases with the number of alternatives given within the choice set. Furthermore, a weaker bias resulted when the individual exhibited a strong discernible preference for a chosen alternative.
Hypothetical choice tasks: Samuelson and Zeckhauser (1988) gave subjects a hypothetical choice task in the following "neutral" version, in which no status quo was defined: "You are a serious reader of the financial pages but until recently you have had few funds to invest. That is when you inherited a large sum of money from your great-uncle. You are considering different portfolios. Your choices are to invest in: a moderate-risk company, a high-risk company, treasury bills, municipal bonds." Other subjects were presented with the same problem but with one of the options designated as the status quo. In this case, the opening passage continued: "A significant portion of this portfolio is invested in a moderate risk company ... (The tax and broker commission consequences of any changes are insignificant.)" The result was that an alternative became much more popular when it was designated as the status quo.
Electric power consumers: California electric power consumers were asked about their preferences regarding trade-offs between service reliability and rates. The respondents fell into two groups, one with much more reliable service than the other. Each group was asked to state a preference among six combinations of reliability and rates, with one of the combinations designated as the status quo. A strong bias to the status quo was observed. Of those in the high-reliability group, 60.2 percent chose the status quo, whereas a mere 5.7 percent chose the low-reliability option that the other group had been experiencing, despite its lower rates. Similarly, of those in the low-reliability group, 58.3 percent chose their low-reliability status quo, and only 5.8 percent chose the high-reliability option.
Automotive insurance consumers: The US states of New Jersey and Pennsylvania inadvertently ran a real-life experiment providing evidence of status quo bias in the early 1990s. As part of tort law reform programs, citizens were offered two options for their automotive insurance: an expensive option giving them full right to sue and a less expensive option with restricted rights to sue. In New Jersey the cheaper insurance was the default and in Pennsylvania the expensive insurance was the default. Johnson, Hershey, Meszaros and Kunreuther (1993) conducted a questionnaire to test whether consumers will stay with the default option for car insurance. They found that only 20% of New Jersey drivers changed from the default option and got the more expensive option. Also, only 25% of Pennsylvanian drivers changed from the default option and got the cheaper insurance. Therefore, framing and status quo bias can have significant financial consequences.
General practitioners: Boonen, Donkers and Schut created two discrete choice experiments for Dutch residents to determine consumers' preferences for general practitioners and whether they would leave their current practitioner. The Dutch health care system was chosen because general practitioners play the role of a gatekeeper. The experiment was conducted to investigate the effect of status quo bias on a consumer's decision to leave their current practitioner, with knowledge of other practitioners and the current relationship with their practitioner determining the role that status quo bias plays.
Through the questionnaire it was shown that respondents were aware of the lack of added benefit associated with their current general practitioner and were aware of the quality differences between potential practitioners. 35% of respondents were willing to pay a copayment to stay with their current general practitioner, while only 30% were willing to switch to another practitioner in exchange for a financial gain. These consumers were willing to pay a considerable amount, up to €17.32, to continue going to their current practitioner. For general practitioners, the value assigned by the consumer to staying with their current one exceeded the total value assigned to all other attributes tested, such as discounts or a certificate of quality.
Within the discrete choice experiment the respondents were offered a choice between their current practitioner and a hypothetical provider with identical attributes. The respondents were 40% more likely to choose their current practitioner than if both options had been hypothetical providers, in which case the probability would be 50% for each. It was found that status quo bias had a large impact on which general practitioner the respondents would choose. Despite being offered positive financial incentives, qualitative incentives or the addition of negative financial incentives, respondents were still extremely hesitant to switch from their current practitioner. The impact of status quo bias thus makes attempts to channel consumers away from the general practitioner they are currently seeing a daunting task.
Explanations
Status quo bias has been attributed to a combination of loss aversion and the endowment effect, two ideas relevant to prospect theory. An individual weighs the potential losses of switching from the status quo more heavily than the potential gains; this is due to the prospect theory value function being steeper in the loss domain. As a result, the individual will prefer not to switch at all. In other words, we tend to oppose change unless the benefits outweigh the risks. However, the status quo bias is maintained even in the absence of gain/loss framing: for example, when subjects were asked to choose the colour of their new car, they tended towards one colour arbitrarily framed as the status quo. Loss aversion, therefore, cannot wholly explain the status quo bias, with other potential causes including regret avoidance, transaction costs and psychological commitment.
Rational routes to status quo maintenance
A status quo bias can also be a rational route if there are cognitive or informational limitations.
Informational limitations
Decision outcomes are rarely certain, nor is the utility they may bring. Because some errors are more costly than others (Haselton & Nettle, 2006), sticking with what worked in the past is a safe option, as long as previous decisions are "good enough".
Cognitive limitations
Cognitive limitations of status quo bias involve the cognitive cost of choice, in which decisions are more susceptible to postponement as increased alternatives are added to the choice set. Moreover, mental effort needed to maintain status quo alternatives would often be lesser and easier, resulting in a superior choice's benefit being outweighed by decision-making cognitive costs. Consequently, maintenance of current or previous state of affairs would be regarded as the easier alternative.
Irrational routes
The irrational maintenance of the status quo bias links and confounds many cognitive biases.
Existence bias
Assumptions of longevity and goodness are part of the status quo bias. People treat existence as a prima facie case for goodness, and aesthetic appeal and longevity increase this preference.
The status quo bias affects people's preferences; people report preferences for what they are likely rather than unlikely to receive. People simply assume, with little reason or deliberation, the goodness of existing states.
Longevity is a corollary of the existence bias: if existence is good, longer existence should be better. This thinking resembles quasi-evolutionary notions of "survival of the fittest", and also the augmentation principle in attribution theory.
Psychological inertia is another reason used to explain a bias towards the status quo. A further explanation is fear of regret in making a wrong decision, e.g. hesitating to commit to a partner because there might be someone better out there.
Mere exposure
Mere exposure is an explanation for the status quo bias. Existing states are encountered more frequently than non-existent states and because of this they will be perceived as more true and evaluated more preferably. One way to increase liking for something is repeated exposure over time.
Loss aversion
Loss aversion also leads to greater regret for action than for inaction; more regret is experienced when a decision changes the status quo than when it maintains it. Together these forces provide an advantage for the status quo; people are motivated to do nothing or to maintain current or previous decisions. Change is avoided, and decision makers stick with what has been done in the past.
Changes from the status quo will typically involve both gains and losses, with the change having good overall consequences if the gains outweigh these losses. A tendency to overemphasize the avoidance of losses will thus favour retaining the status quo, resulting in a status quo bias. Even though choosing the status quo may entail forfeiting certain positive consequences, when these are represented as forfeited "gains" they are psychologically given less weight than the "losses" that would be incurred if the status quo were changed.
The loss aversion explanation for the status quo bias has been challenged by David Gal and Derek Rucker who argue that evidence for loss aversion (i.e., a tendency to avoid losses more than to pursue gains) is confounded with a tendency towards inertia (a tendency to avoid intervention more than to intervene in the course of affairs). Inertia, in this sense, is related to omission bias, except it need not be a bias but might be perfectly rational behavior stemming from transaction costs or lack of incentive to intervene due to fuzzy preferences.
Omission bias
Omission bias may account for some of the findings previously ascribed to status quo bias. Omission bias is diagnosed when a decision maker prefers a harmful outcome that results from an omission to a less harmful outcome that results from an action.
Overall implications of a study conducted by Ilana Ritov and Jonathan Baron, regarding status quo and omission biases, reveal that omission bias may further be diagnosed when the decision maker is unwilling to take preference from any of the available options given to them, thus enabling reduction of the number of decisions where utility comparison and weight is unavoidable.
Detection
The reversal test: when a proposal to change a certain parameter is thought to have bad overall consequences, consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias. The rationale of the reversal test is: if a continuous parameter admits of a wide range of possible values, only a tiny subset of which can be local optima, then it is prima facie implausible that the actual value of that parameter should just happen to be at one of these rare local optima.
Neural activity
A study found that erroneous status quo rejections have a greater neural impact than erroneous status quo acceptances. This asymmetry in the genesis of regret might drive the status quo bias on subsequent decisions.
A study was done using a visual detection task in which subjects tended to favour the default when making difficult, but not easy, decisions. This bias was suboptimal in that more errors were made when the default was accepted. A selective increase in sub-thalamic nucleus (STN) activity was found when the status quo was rejected in the face of heightened decision difficulty. Analysis of effective connectivity showed that inferior frontal cortex, a region more active for difficult decisions, exerted an enhanced modulatory influence on the STN during switches away from the status quo.
Research by University College London scientists that examines the neural pathways involved in 'status quo bias' in the human brain and found that the more difficult the decision we face, the more likely we are not to act.
The study, published in Proceedings of the National Academy of Sciences (PNAS), looked at the decision-making of participants taking part in a tennis 'line judgement' game while their brains were scanned using functional MRI (fMRI).
The 16 study participants were asked to look at a cross between two tramlines on a screen while holding down a 'default' key. They then saw a ball land in the court and had to make a decision as to whether it was in or out. On each trial, the computer signalled which was the current default option – 'in' or 'out'. The participants continued to hold down the key to accept the default and had to release it and change to another key to reject the default. The results showed a consistent bias towards the default, which led to errors. As the task became more difficult, the bias became even more pronounced. The fMRI scans showed that a region of the brain known as the sub-thalamic nucleus (STN) was more active in the cases when the default was rejected. Also, greater flow of information was seen from a separate region sensitive to difficulty (the prefrontal cortex) to the STN. This indicates that the STN plays a key role in overcoming status quo bias when the decision is difficult.
Behavioral economics and the default position
Against this background, two behavioral economists devised an opt-out plan to help employees of a particular company build their retirement savings. In an opt-out plan, the employees are automatically enrolled unless they explicitly ask to be excluded. They found evidence for status quo bias and other associated effects. The impact of defaults on decision making due to status quo bias is not purely due to subconscious bias, as it has been found that even when disclosing the intent of the default to consumers, the effect of the default is not reduced.
An experiment conducted by Sen Geng, regarding status quo bias and decision time allocation, reveals that individuals allocate more attention to default options than to alternatives. This is because individuals, who are mainly risk-averse, seek to attain greater expected utility and decreased subjective uncertainty in making their decision. Furthermore, by optimally allocating more time and asymmetric attention to default options or positions, the individual's estimate of the default's value becomes more precise than the estimates of alternatives. This behaviour thus reflects the individual's asymmetric choice error, and is therefore an indication of status quo bias.
Conflict
Status-quo educational bias can be both a barrier to political progress and a threat to the state's legitimacy. Some argue that the values of stability, compliance, and patriotism underpin important reasons for status quo bias that appeal not to the substantive merits of existing institutions but merely to the fact that those institutions are the status quo.
Relevant fields
The status quo bias is seen in important real life decisions; it has been found to be prominent in data on selections of health care plans and retirement programs.
Politics
There is a belief that preference for the status quo represents a core component of conservative ideology in societies where government power is limited and laws restricting actions of individuals exist. Conversely, in liberal societies, movements to impose restrictions on individuals or governments are met with widespread opposition by those that favor the status quo. Regardless of the type of society, the bias tends to hinder progressive movements in the absence of a reaction or backlash against the powers that be.
Ethics
Status quo bias may be responsible for much of the opposition to human enhancement in general and to genetic cognitive enhancement in particular. Some ethicists argue, however, that status quo bias may not be irrational in such cases. The rationality of status quo bias is also an important question in the ethics of disability.
Education
Education can (sometimes unintentionally) encourage children’s belief in the substantive merits of a particular existing law or political institution, where the effect does not derive from an improvement in their ability or critical thinking about that law or institution. However, this biasing effect is not automatically illegitimate or counterproductive: a balance between social inculcation and openness needs to be maintained.
Given that educational curriculums are developed by Governments and delivered by individuals with their own political thoughts and feelings, the content delivered may be inadvertently affected by bias. When Governments implement certain policies, they become the status quo and are then presented as such to children in the education system. Whether through intentional or unintentional means, when learning about a topic, educators may favour the status quo. They may simply not know the full extent of the arguments against the status quo or may not be able to present an unbiased account of each side because of their personal biases.
Health
An experiment was conducted to determine whether a status quo bias toward current medication, even when better alternatives are offered, exists in a stated-choice study among asthma patients who take prescription combination maintenance medications.
The results of this study indicate that the status quo bias may exist in stated-choice studies, especially with medications that patients must take daily such as asthma maintenance medications. Stated-choice practitioners should include a current medication in choice surveys to control for this bias.
Retirement plans
A study in 1986 examined the effect of status quo bias on those planning their retirement savings when given the yearly choice between two investment funds. Participants were able to choose how to proportionally split their retirement savings between the two funds at the beginning of each year. After each year, they were able to amend their chosen split without switching costs as their preferences changed. Even though the two funds had vastly different returns in both absolute and relative terms, the majority of participants never switched their allocation across the trial period. Status quo bias was also more evident in older participants, as they preferred to stay with their original investment rather than switching as new information came to light.
In negotiation
Korobkin studied the link between negotiation and status quo bias in 1998. His studies show that contract negotiators favor inaction in situations in which a legal standard or contractual default will govern absent action; this involves a bias against alternative solutions. Heifetz and Segev's study in 2004 found support for the existence of a toughness bias, which resembles the endowment effect in the way it affects sellers' behavior.
Price management
Status quo bias plays a maintenance role in the theory–practice gap in price management, as revealed in Dominic Bergers' research on status quo bias and its individual differences from a price management perspective. He identified status quo bias as a possible influencer of the 22 rationality deficits identified and explained by Rullkötter (2009), which are further attributed to deficits within Simon and Fassnacht's (2016) price management process phases. Status quo bias remained an underlying possible cause of 16 of the 22 rationality deficits. Examples of these can be seen within the analysis phase and implementation phase of the price management process.
Bergers reveals that status quo bias within the former price management process phase potentially led to complete reliance on external information sources that existed traditionally; through a price management lens, this bias can be demonstrated when monitoring competitors' pricing. In the latter phase, status quo bias potentially led to the final price being determined by decentralised staff, which is potentially perpetuated by existing system profitability within price management practices.
Mutual fund market
An empirical study conducted by Alexandre Kempf and Stefan Ruenzi examined the presence of status quo bias within the U.S. equity mutual fund market, and the extent to which this depends on the number of alternatives given. Using real data obtained from the U.S. mutual fund market, the study reveals that status quo bias influences fund investors, and that the bias is stronger when the number of alternatives is larger, thereby confirming Samuelson and Zeckhauser's (1988) experimental results.
Economic research
Status quo bias has a significant impact on economic research and policy creation. Anchoring and adjustment theory in economics holds that people's decision-making and outcomes are affected by their initial reference point; for a consumer, the reference point is usually the status quo. Status quo bias results in the default option being better understood by consumers than alternative options, so the status quo option provides less uncertainty and higher expected utility for risk-averse decision makers. Status quo bias is compounded by loss aversion, under which consumers see disadvantages as looming larger than advantages when moving away from the reference point. Economics can also describe the effect of loss aversion graphically: a consumer's utility function for losses is negative and roughly twice as steep as the utility function for gains. Consumers therefore perceive the negative effect of a loss as more significant and stay with the status quo. Choosing the status quo in this way goes against rational consumer choice theory when consumers are not maximising their utility. Rational consumer choice theory underpins many economic decisions by defining a set of rules for consumer behaviour; status quo bias therefore has substantial implications for economic theory.
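A toy calculation of this logic is sketched below: under a reference-dependent evaluation in which losses are weighted about twice as heavily as gains, a switch away from the status quo that trades a gain against a comparable loss is rejected even when the objective net change is positive. The loss-aversion coefficient of 2 and the example gain and loss figures are illustrative assumptions, not estimates from any cited study.

```python
# Toy reference-dependent evaluation of switching away from the status quo.
# Losses are weighted `loss_aversion` times more than gains (2.0 is an assumed value).

def switch_value(gains: float, losses: float, loss_aversion: float = 2.0) -> float:
    """Perceived value of a change relative to the status quo reference point."""
    return gains - loss_aversion * losses

if __name__ == "__main__":
    # Objectively the switch is worth +20, but the weighted loss makes it feel negative,
    # so a loss-averse decision maker keeps the status quo.
    gains, losses = 120.0, 100.0
    print("objective net change:", gains - losses)
    print("perceived value of switching:", switch_value(gains, losses))
```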
See also
Appeal to tradition
Comfort zone
Conventional wisdom
De facto standard
Default effect
Endowment effect
List of cognitive biases
Omission bias
Pro-innovation bias
Situationism (psychology)
Social norm
System justification
References
Further reading
Behavioral finance
Conformity
Prospect theory
Change
Cognitive inertia
Social privilege
Cognitive biases | Status quo bias | [
"Biology"
] | 4,613 | [
"Behavioral finance",
"Behavior",
"Conformity",
"Human behavior"
] |
804,737 | https://en.wikipedia.org/wiki/Endowment%20effect | In psychology and behavioral economics, the endowment effect, also known as divestiture aversion, is the finding that people are more likely to retain an object they own than acquire that same object when they do not own it. The endowment theory can be defined as "an application of prospect theory positing that loss aversion associated with ownership explains observed exchange asymmetries."
This is typically illustrated in two ways. In a valuation paradigm, people's maximum willingness to pay (WTP) to acquire an object is typically lower than the least amount they are willing to accept (WTA) to give up that same object when they own it—even when there is no cause for attachment, or even if the item was only obtained minutes ago. In an exchange paradigm, people given a good are reluctant to trade it for another good of similar value. For example, participants first given a pen of equal expected value to that of a coffee mug were generally unwilling to trade, whilst participants first given the coffee mug were also unwilling to trade it for the pen.
A more controversial third paradigm used to elicit the endowment effect is the mere ownership paradigm, primarily used in experiments in psychology, marketing, and organizational behavior. In this paradigm, people who are randomly assigned to receive a good ("owners") evaluate it more positively than people who are not randomly assigned to receive the good ("controls"). The distinction between this paradigm and the first two is that it is not incentive-compatible. In other words, participants are not explicitly incentivized to reveal the extent to which they truly like or value the good.
The endowment effect can be equated to the behavioural model willingness to accept or pay (WTAP), a formula sometimes used to find out how much a consumer or person is willing to put up with or lose for different outcomes. However, this model has come under recent criticism as potentially inaccurate.
Examples
One of the most famous examples of the endowment effect in the literature is from a study by Daniel Kahneman, Jack Knetsch & Richard Thaler, in which Cornell University undergraduates were given a mug and then offered the chance to sell it or trade it for an equally valued alternative (pens). They found that the amount participants required as compensation for the mug once their ownership of the mug had been established ("willingness to accept") was approximately twice as high as the amount they were willing to pay to acquire the mug ("willingness to pay").
Other examples of the endowment effect include work by Ziv Carmon and Dan Ariely, who found that participants' hypothetical selling price (willingness to accept or WTA) for NCAA final four tournament tickets were 14 times higher than their hypothetical buying price (willingness to pay or WTP). Also, work by Hossain and List (Working Paper) discussed in the Economist in 2010, showed that workers worked harder to maintain ownership of a provisionally awarded bonus than they did for a bonus framed as a potential yet-to-be-awarded gain. In addition to these examples, the endowment effect has been observed using different goods in a wide range of different populations, including children, great apes, and new world monkeys.
Background
The endowment effect has been observed from ancient times:
Psychologists first noted the difference between consumers' WTP and WTA as early as the 1960s. The term endowment effect however was first explicitly coined in 1980 by the economist Richard Thaler in reference to the under-weighting of opportunity costs as well as the inertia introduced into a consumer's choice processes when goods included in their endowment become more highly valued than goods that are not.
At the time Thaler's conceptualisation of the endowment effect was in direct contrast to that of accepted economic theory, which assumed humans were completely rational when making decisions. Through his contrasting viewpoint, Thaler was able to offer a clearer understanding of how humans make economic decisions. In the years that followed, extensive investigations into the endowment effect have been conducted producing a wealth of interesting empirical and theoretical findings.
Theoretical explanations
Loss aversion
The leading explanation for the aforementioned WTP–WTA gap is that of loss aversion. Kahneman and his colleagues first linked the endowment effect to loss aversion: selling an endowed object is experienced as the loss of that object, and because humans are loss-averse, the utility gained from acquiring the same object is smaller than the disutility of giving it up. They go on to suggest that the endowment effect, when considered as a facet of loss aversion, would thus violate the Coase theorem, and was described as inconsistent with standard economic theory, which asserts that a person's willingness to pay (WTP) for a good should be equal to their willingness to accept (WTA) compensation to be deprived of the good, a hypothesis which underlies consumer theory and indifference curves. Another aspect of loss aversion exhibited within the endowment effect is that opportunity costs are often undervalued: the overpricing of the item being sold stems from fixation on losing the item rather than on the gain that goes unattained if the sale falls through.
The correlation between the two theories is so high that the endowment effect is often seen as the presentation of loss aversion in a riskless setting. However, these claims have been disputed and other researchers claim that psychological inertia, differences in reference prices relied on by buyers and sellers, and ownership (attribution of the item to self) and not loss aversion are the key to this phenomenon.
Psychological inertia
David Gal proposed a psychological inertia account of the endowment effect. In this account, sellers require a higher price to part with an object than buyers are willing to pay because neither has a well-defined, precise valuation for the object and therefore there is a range of prices over which neither buyers nor sellers have much incentive to trade. For example, in the case of Kahneman et al.'s (1990) classic mug experiments (where sellers demanded about $7 to part with their mug whereas buyers were only willing to pay, on average, about $3 to acquire a mug) there was likely a range of prices for the mug ($4 to $6) that left the buyers and sellers without much incentive to either acquire or part with it. Buyers and sellers therefore maintained the status quo out of inertia. Conversely, a high price ($7 or more) yielded a meaningful incentive for an owner to part with the mug; likewise, a relatively low price ($3 or less) yielded a meaningful incentive for a buyer to acquire the mug.
Reference-dependent accounts
According to reference-dependent theories, consumers first evaluate the potential change in question as either being a gain or a loss. In line with prospect theory (Tversky and Kahneman, 1979), changes that are framed as losses are weighed more heavily than are the changes framed as gains. Thus an individual owning "A" amount of a good, asked how much he/she would be willing to pay to acquire "B", would be willing to pay a value (B-A) that is lower than the value that he/she would be willing to accept to sell (C-A) units; the value function for perceived gains is not as steep as the value function for perceived losses.
Figure 1 presents this explanation in graphical form. An individual at point A, asked how much he/she would be willing to accept (WTA) as compensation to sell X units and move to point C, would demand greater compensation for that loss than he/she would be willing to pay for an equivalent gain of X units to move him/her to point B. Thus the difference between (B-A) and (C-A) would account for the endowment effect. In other words, he/she expects more money while selling; but wants to pay less while buying the same amount of goods.
Figure 1: Prospect Theory and the Endowment Effect
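A small numerical sketch of this asymmetry is given below, assuming the standard Tversky–Kahneman value function with the commonly cited parameter estimates (α = β = 0.88, λ = 2.25); these parameter values and the one-unit example are illustrative assumptions used only to show the direction and rough size of the WTA–WTP gap.

```python
# Prospect-theory value function: losses loom larger than equal-sized gains.
# alpha, beta, lam are the commonly cited Tversky-Kahneman estimates, used purely
# for illustration; they are not the only possible calibration.

def value(x: float, alpha: float = 0.88, beta: float = 0.88, lam: float = 2.25) -> float:
    """Reference-dependent value of a change x from the status quo (gain if x > 0)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

if __name__ == "__main__":
    x = 1.0
    gain_value = value(x)        # subjective value of acquiring the good
    loss_value = -value(-x)      # subjective cost of parting with the same good
    print(f"gain of {x}: {gain_value:.2f}   loss of {x}: {loss_value:.2f}")
    print(f"an owner therefore demands about {loss_value / gain_value:.2f}x "
          f"more compensation to sell than a buyer is willing to pay")
```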
Neoclassical explanations
Hanemann (1991), develops a neoclassical explanation for the endowment effect, accounting for the effect without invoking prospect theory.
Figure 2 presents this explanation in graphical form. In the figure, two indifference curves for a particular good X and wealth are given. Consider an individual who is given goods X such that they move from point A (where they have X0 of good X) to point B (where they have the same wealth and X1 of good X). Their WTP is represented by the vertical distance from B to C, because (after giving up that amount of wealth) the individual is indifferent between being at A or C. Now consider an individual who gives up goods such that they move from B to A. Their WTA is represented by the (larger) vertical distance from A to D, because (after receiving that much wealth) they are indifferent between being at point B or D. Shogren et al. (1994) reported findings that lend support to Hanemann's hypothesis. However, Kahneman, Knetsch, and Thaler (1991) find that the endowment effect continues even when wealth effects are fully controlled for.
Figure 2: Hanemann's Endowment Effect Explanation
When goods are indivisible, a coalitional game can be set up so that a utility function can be defined on all subsets of the goods.
Hu (2020) shows the endowment effect when the utility function is superadditive, i.e., the value of the whole is greater than the sum of its parts. Hu (2020) also introduces a few unbiased solutions which mitigate endowment bias.
Connection-based, or "psychological ownership" theories
Connection-based theories propose that the attachment or association with the self induced by owning a good is responsible for the endowment effect (for a review, see Morewedge & Giblin, 2015). Work by Morewedge, Shu, Gilbert and Wilson (2009) provides support for these theories, as does work by Maddux et al. (2010). For example, research participants who were given one mug and asked how much they would pay for a second mug ("owner-buyers") were willing to pay as much as "owner-sellers," another group of participants who were given a mug and asked how much they were willing to accept to sell it (both groups valued the mug in question more than buyers who were not given a mug). Others have argued that the short duration of ownership or the highly prosaic items typically used in endowment effect studies are not sufficient to produce such a connection, and have conducted research demonstrating support for those points (e.g. Liersch & Rottenstreich, Working Paper).
Two paths by which attachment or self-associations increase the value of a good have been proposed (Morewedge & Giblin, 2015). An attachment theory suggests that ownership creates a non-transferable balanced association between the self and the good. The good is incorporated into the self-concept of the owner, becoming part of her identity and imbuing it with attributes related to her self-concept. Self-associations may take the form of an emotional attachment to the good. Once an attachment has formed, the potential loss of the good is perceived as a threat to the self. A real-world example of this would be an individual refusing to part with a college T-shirt because it supports one's identity as an alumnus of that university. A second route by which ownership may increase value is through a self-referential memory effect (SRE) – the better encoding and recollection of stimuli associated with the self-concept. People have a better memory for goods they own than goods they do not own. The self-referential memory effect for owned goods may thus act as an endogenous framing effect. During a transaction, attributes of a good may be more accessible to its owners than are other attributes of the transaction. Because most goods have more positive than negative features, this accessibility bias should result in owners more positively evaluating their goods than do non-owners.
Greater sensitivity to market demands for sellers
Sellers may dictate a price based on the desires of multiple potential buyers, whereas buyers may consider their own taste. This can lead to differences between buying and selling prices because the market price is typically higher than one's idiosyncratic price estimate. According to this account, the endowment effect can be viewed as under-pricing for buyers compared to the market price; or over-pricing for sellers compared to their individual taste. Two recent lines of study support this argument. Weaver and Frederick (2012) presented their participants with retail prices of products, and then asked them to specify either their buying or selling price for these products. The results revealed that sellers' valuations were closer to the known retail prices than those of buyers. A second line of studies is a meta-analysis of buying and selling of lotteries. A review of over 30 empirical studies showed that selling prices were closer to the lottery's expected value, which is the normative price of the lottery: hence the endowment effect was consistent with buyers' tendency to under-price lotteries as compared to the normative price. One possible reason for this tendency of buyers to indicate lower prices is their risk aversion. By contrast, sellers may assume that the market is heterogeneous enough to include buyers with potential risk neutrality and therefore adjust their price closer to a risk neutral expected value.
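A rough numerical illustration of the risk-aversion point (the lottery, the square-root utility, and all figures below are assumptions for illustration, not data from the cited meta-analysis):

```python
# Illustrative only: a lottery paying 100 with probability 0.5 and 0 otherwise.
# A seller anchored on the market prices near the expected value; a risk-averse
# buyer with an assumed utility u(x) = sqrt(x) bids the certainty equivalent,
# which is well below the expected value.
import math

p_win, prize, wealth = 0.5, 100.0, 0.0

expected_value = p_win * prize                       # 50.0

def u(x):
    return math.sqrt(x)

expected_utility = p_win * u(wealth + prize) + (1 - p_win) * u(wealth)
certainty_equivalent = expected_utility ** 2         # invert u for sqrt utility

print(f"Expected value (seller anchor): {expected_value:.1f}")
print(f"Risk-averse buyer's certainty equivalent: {certainty_equivalent:.1f}")  # 25.0
```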
Biased information processing theories
Several cognitive accounts of the endowment effect suggest that it is induced by the way endowment status changes the search for, attention to, recollection of, and weighting of information regarding the transaction. Frames evoked by acquisition of a good (e.g., buying, choosing it rather than another good) may increase the cognitive accessibility of information favoring the decision to keep one's money and not acquire the good. By contrast, frames evoked by disposition of the good (e.g., selling) may increase the cognitive accessibility of information favoring the decision to keep the good rather than trade or dispose of it for money (for a review, see Morewedge & Giblin, 2015). For example, Johnson and colleagues (2007) found that prospective mug buyers tended to recall reasons to keep their money before recalling reasons to buy the mug, whereas sellers tended to recall reasons to keep their mug before reasons to sell it for money.
Evolutionary arguments
Huck, Kirchsteiger & Oechssler (2005) have raised the hypothesis that natural selection may favor individuals whose preferences embody an endowment effect, given that it may improve one's bargaining position in bilateral trades. Thus in a small tribal society with a few alternative sellers (i.e. where the buyer may not have the option of moving to an alternative seller), having a predisposition towards embodying the endowment effect may be evolutionarily beneficial. This may be linked with findings (Shogren, et al., 1994) that suggest the endowment effect is less strong when the relatively artificial sense of scarcity induced in experimental settings is lessened. Countervailing evidence for an evolutionary account is provided by studies showing that the endowment effect is moderated by exposure to modern exchange markets (e.g., hunter-gatherer tribes with market exposure are more likely to exhibit the endowment effect than tribes without such exposure), and that the endowment effect is moderated by culture (Maddux et al., 2010).
Criticisms
Some economists have questioned the effect's existence. Hanemann (1991) noted that economic theory only suggests that WTP and WTA should be equal for goods which are close substitutes, so observed differences in these measures for goods such as environmental resources and personal health can be explained without reference to an endowment effect. Shogren, et al. (1994) noted that the experimental technique used by Kahneman, Knetsch and Thaler (1990) to demonstrate the endowment effect created a situation of artificial scarcity. They performed a more robust experiment with the same goods used by Kahneman, Knetsch and Thaler (chocolate bars and mugs) and found little evidence of the endowment effect in substitutable goods, acknowledging the endowment effect as valid for goods without substitutes—non-renewable Earth resources being an example of these. Others have argued that the use of hypothetical questions and experiments involving small amounts of money tells us little about actual behavior (e.g. Hoffman and Spitzer, 1993, p. 69, n. 23) with some research supporting these points (e.g., Kahneman, Knetsch and Thaler, 1990, Harless, 1989) and others not (e.g. Knez, Smith and Williams, 1985). More recently, Plott and Zeiler have challenged the endowment effect theory by arguing that observed disparities between WTA and WTP measures are not reflective of human preferences, but rather such disparities stem from faulty experimental designs.
Implications
The endowment effect has implications at both the individual and corporate level. Its presence can cause market inefficiencies and valuation discrepancies between buyers and sellers, with similar consequences for transactions both small and large.
Individual
Herbert Hovenkamp (1991) has argued that the presence of an endowment effect has significant implications for law and economics, particularly in regard to welfare economics. He argues that the presence of an endowment effect indicates that a person has no indifference curve (see however Hanemann, 1991), rendering the neoclassical tools of welfare analysis useless, and concludes that courts should instead use WTA as a measure of value. Fischel (1995), however, raises the counterpoint that using WTA as a measure of value would deter the development of a nation's infrastructure and economic growth. The endowment effect also changes the shape of indifference curves substantially. Similarly, another study focused on strategic reallocation of endowments analyses how economic agents' welfare could potentially increase if they change their endowment holdings.
Further to this, the endowment effect has been linked to both economic and psychological impacts of various scales. For example, individuals often refuse to sell their house, or inflate its expected value, simply because of the emotional attachment and effort poured into it. This means they might either stay with a property that is less convenient than the alternatives or face increased difficulty in selling it; either scenario negatively affects both the relevant economy and the individual's mental welfare. Alternatively, if a buyer ends up purchasing the item at a WTA level set above the market price, they overspend, which boosts the economy while potentially reducing the individual's welfare yet again.
Business
In recent years the endowment effect has been widely leveraged within e-commerce. Businesses have expanded more rapidly than in previous years through its effective integration into the marketing of products and services. Here consumers are often given a sense of ownership over what the business possesses, thereby unlocking the cognitive bias.
Free trials
By offering free trials of selected services, businesses not only expand the number of users reached, but also give consumers a sense of ownership during the trial period. This psychological perception of ownership makes consumers more reluctant to part with the service when the trial ends, thereby increasing the number of subscribers.
Free return
This marketing strategy makes consumers more likely to purchase the product, since the purchase feels less risky and ownership is easier to imagine. However, once the product is purchased, the sense of ownership makes customers less inclined to return it, even if they experience some dissatisfaction.
Haptic imagery
Various businesses create a sense of ownership by showing customers what a product might look like in a relatable environment. Fashion and furniture businesses rely heavily on haptic imagery to sell their products. While they do not necessarily let customers use the products, they create an image of what could be, either by offering online viewing adjustments or by appealing to one's sense of imagination. This feeling of ownership makes it harder for consumers to let go of the image, and thus of the product.
See also
Escalation of commitment
Mere ownership effect
Loss aversion
Omission bias
Behavioral economics
List of cognitive biases
Sunk costs
Transaction cost
IKEA effect
References
External links
Wright, Josh (2005). The Endowment Effect's Disappearing Act, and (2009) What's Wrong With the Endowment Effect?
The "Mystery" of the Endowment Effect, Per Bylund, December 28, 2011
What Explains Observed Reluctance to Trade? A Comprehensive Literature Review
Cognitive biases
Behavioral finance
Prospect theory | Endowment effect | [
"Biology"
] | 4,132 | [
"Behavioral finance",
"Behavior",
"Human behavior"
] |
805,700 | https://en.wikipedia.org/wiki/Green%27s%20identities | In mathematics, Green's identities are a set of three identities in vector calculus relating the bulk with the boundary of a region on which differential operators act. They are named after the mathematician George Green, who discovered Green's theorem.
Green's first identity
This identity is derived from the divergence theorem applied to the vector field F = ψ ∇φ while using an extension of the product rule that ∇ ⋅ (ψ X) = ∇ψ ⋅ X + ψ ∇ ⋅ X: let φ and ψ be scalar functions defined on some region U ⊂ ℝ³, and suppose that φ is twice continuously differentiable and ψ is once continuously differentiable. Using the product rule above, but letting X = ∇φ, integrate over U. Then
∫_U (ψ Δφ + ∇ψ ⋅ ∇φ) dV = ∮_∂U ψ (∇φ ⋅ n) dS = ∮_∂U ψ ∇φ ⋅ dS,
where Δ is the Laplace operator, ∂U is the boundary of region U, n is the outward-pointing unit normal to the surface element dS, and dS = n dS is the oriented surface element.
This theorem is a special case of the divergence theorem, and is essentially the higher-dimensional equivalent of integration by parts, with ψ and the gradient of φ replacing u and v.
Note that Green's first identity above is a special case of the more general identity derived from the divergence theorem by substituting F = ψ Γ:
∫_U (ψ ∇ ⋅ Γ + Γ ⋅ ∇ψ) dV = ∮_∂U ψ (Γ ⋅ n) dS.
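Because the first identity is easy to verify on simple domains, a symbolic sanity check can be useful. The sketch below uses sympy on the unit square with arbitrarily chosen polynomial test functions; the specific φ and ψ have no special significance.

```python
# Symbolic check of Green's first identity on the unit square U = [0,1] x [0,1]:
#   ∫∫_U (psi * Δphi + grad(psi)·grad(phi)) dA = ∮_∂U psi * (grad(phi)·n) ds
# The test functions phi and psi are arbitrary smooth choices, not special ones.
import sympy as sp

x, y = sp.symbols('x y')
phi = x**2 * y
psi = x + y

lap_phi = sp.diff(phi, x, 2) + sp.diff(phi, y, 2)
integrand = psi * lap_phi + sp.diff(psi, x) * sp.diff(phi, x) + sp.diff(psi, y) * sp.diff(phi, y)
lhs = sp.integrate(integrand, (x, 0, 1), (y, 0, 1))

# Boundary term, edge by edge, with outward unit normals (+x, -x, +y, -y):
rhs = (
    sp.integrate((psi * sp.diff(phi, x)).subs(x, 1), (y, 0, 1))    # right edge
    - sp.integrate((psi * sp.diff(phi, x)).subs(x, 0), (y, 0, 1))  # left edge
    + sp.integrate((psi * sp.diff(phi, y)).subs(y, 1), (x, 0, 1))  # top edge
    - sp.integrate((psi * sp.diff(phi, y)).subs(y, 0), (x, 0, 1))  # bottom edge
)

assert sp.simplify(lhs - rhs) == 0
print(lhs, rhs)  # both integrals evaluate to 2 for these test functions
```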
Green's second identity
If φ and ψ are both twice continuously differentiable on U ⊂ ℝ³, and ε is once continuously differentiable, one may choose F = ψ ε ∇φ − φ ε ∇ψ to obtain
∫_U [ψ ∇ ⋅ (ε ∇φ) − φ ∇ ⋅ (ε ∇ψ)] dV = ∮_∂U ε (ψ ∂φ/∂n − φ ∂ψ/∂n) dS.
For the special case of ε = 1 across U ⊂ ℝ³, this reduces to
∫_U (ψ Δφ − φ Δψ) dV = ∮_∂U (ψ ∇φ − φ ∇ψ) ⋅ dS.
In the equation above, ∂φ/∂n is the directional derivative of φ in the direction of the outward-pointing surface normal n of the surface element dS:
∂φ/∂n = ∇φ ⋅ n.
Explicitly incorporating this definition in Green's second identity with ε = 1 results in
∫_U (ψ Δφ − φ Δψ) dV = ∮_∂U (ψ ∂φ/∂n − φ ∂ψ/∂n) dS.
In particular, this demonstrates that the Laplacian is a self-adjoint operator in the L² inner product for functions vanishing on the boundary, so that the right-hand side of the above identity is zero.
Green's third identity
Green's third identity derives from the second identity by choosing φ = G, where the Green's function G is taken to be a fundamental solution of the Laplace operator, Δ. This means that:
ΔG(x, y) = δ(x − y).
For example, in ℝ³, a solution has the form
G(x, y) = −1 / (4π ‖x − y‖).
Green's third identity states that if ψ is a function that is twice continuously differentiable on U, then
ψ(y) = ∫_U G(y, x) Δψ(x) dVₓ + ∮_∂U [ψ(x) ∂G(y, x)/∂n − G(y, x) ∂ψ(x)/∂n] dSₓ.
A simplification arises if ψ is itself a harmonic function, i.e. a solution to the Laplace equation. Then Δψ = 0 and the identity simplifies to
ψ(y) = ∮_∂U [ψ(x) ∂G(y, x)/∂n − G(y, x) ∂ψ(x)/∂n] dSₓ.
The second term in the integral above can be eliminated if G is chosen to be the Green's function that vanishes on the boundary of U (Dirichlet boundary condition):
ψ(y) = ∮_∂U ψ(x) ∂G(y, x)/∂n dSₓ.
This form is used to construct solutions to Dirichlet boundary condition problems. Solutions for Neumann boundary condition problems may also be simplified, though the divergence theorem applied to the differential equation defining Green's functions shows that the Green's function cannot integrate to zero on the boundary, and hence cannot vanish on the boundary. See Green's functions for the Laplacian for a detailed argument, with an alternative.
It can be further verified that the above identity also applies when is a solution to the Helmholtz equation or wave equation and is the appropriate Green's function. In such a context, this identity is the mathematical expression of the Huygens principle, and leads to Kirchhoff's diffraction formula and other approximations.
On manifolds
Green's identities hold on a Riemannian manifold. In this setting, the first two are
∫_M u Δv dV + ∫_M ⟨∇u, ∇v⟩ dV = ∫_∂M u N v dṼ
∫_M (u Δv − v Δu) dV = ∫_∂M (u N v − v N u) dṼ
where u and v are smooth real-valued functions on M, dV is the volume form compatible with the metric, dṼ is the induced volume form on the boundary of M, N is the outward-oriented unit vector field normal to the boundary, and Δ is the Laplacian.
Green's vector identity
Green's second identity establishes a relationship between second and (the divergence of) first order derivatives of two scalar functions. In differential form
p Δq − q Δp = ∇ ⋅ (p ∇q − q ∇p),
where p and q are two arbitrary twice continuously differentiable scalar fields. This identity is of great importance in physics because continuity equations can thus be established for scalar fields such as mass or energy.
In vector diffraction theory, two versions of Green's second identity are introduced.
One variant invokes the divergence of a cross product and states a relationship in terms of the curl-curl of the field
This equation can be written in terms of the Laplacians,
However, the terms
could not be readily written in terms of a divergence.
The other approach introduces bi-vectors; this formulation requires a dyadic Green function. The derivation presented here avoids these problems.
Consider that the scalar fields in Green's second identity are the Cartesian components of vector fields, i.e.,
Summing up the equation for each component, we obtain
The LHS according to the definition of the dot product may be written in vector form as
The RHS is a bit more awkward to express in terms of vector operators. Due to the distributivity of the divergence operator over addition, the sum of the divergence is equal to the divergence of the sum, i.e.,
Recall the vector identity for the gradient of a dot product,
which, written out in vector components is given by
This result is similar to what we wish to evince in vector terms 'except' for the minus sign. Since the differential operators in each term act either over one vector or the other, the contribution to each term must be
These results can be rigorously proven to be correct through evaluation of the vector components. Therefore, the RHS can be written in vector form as
Putting together these two results, a result analogous to Green's theorem for scalar fields is obtained,
Theorem for vector fields:
The curl of a cross product can be written as
Green's vector identity can then be rewritten as
Since the divergence of a curl is zero, the third term vanishes to yield Green's vector identity:
With a similar procedure, the Laplacian of the dot product can be expressed in terms of the Laplacians of the factors
As a corollary, the awkward terms can now be written in terms of a divergence by comparison with the vector Green equation,
This result can be verified by expanding the divergence of a scalar times a vector on the RHS.
See also
Green's function
Kirchhoff integral theorem
Lagrange's identity (boundary value problem)
References
External links
Green's Identities at Wolfram MathWorld
Vector calculus
Mathematical identities | Green's identities | [
"Mathematics"
] | 1,256 | [
"Mathematical theorems",
"Mathematical identities",
"Mathematical problems",
"Algebra"
] |
805,752 | https://en.wikipedia.org/wiki/Tritium%20radioluminescence | Tritium radioluminescence is the use of gaseous tritium, a radioactive isotope of hydrogen, to create visible light. Tritium emits electrons through beta decay and, when they interact with a phosphor material, light is emitted through the process of phosphorescence. The overall process of using a radioactive material to excite a phosphor and ultimately generate light is called radioluminescence. As tritium illumination requires no electrical energy, it has found wide use in applications such as emergency exit signs, illumination of wristwatches, and portable yet very reliable sources of low intensity light which won't degrade human night vision. Gun sights for night use and small lights (which need to be more reliable than battery powered lights, yet not interfere with night vision or be bright enough to easily give away one's location) used mostly by military personnel fall under the latter application.
History
Tritium was found to be an ideal energy source for self-luminous compounds in 1953 and the idea was patented by Edward Shapiro on 29 October 1953, in the US (2749251 – Source of Luminosity).
Design
Tritium lighting is made using glass tubes with a phosphor layer in them and tritium gas inside the tube. Such a tube is known as a "gaseous tritium light source" (GTLS), or beta light (since the tritium undergoes beta decay), or tritium lamp.
The tritium in a gaseous tritium light source undergoes beta (β) decay, releasing electrons that cause the phosphor layer to phosphoresce.
During manufacture, a length of borosilicate glass tube that has had the internal surface coated with a phosphor-containing material is filled with tritium. The tube is then sealed at the desired length using a carbon dioxide laser. Borosilicate is preferred for its strength and resistance to breakage. In the tube, the tritium gives off a steady stream of electrons due to β decay. These particles excite the phosphor, causing it to emit a low, steady glow.
Tritium is not the only material that can be used for self-powered lighting. Radium was used to make self-luminous paint from the early 20th century to about 1970. Promethium briefly replaced radium as a radiation source. Tritium is the only radiation source used in radioluminescent light sources today due to its low radiological toxicity and commercial availability.
Various preparations of the phosphor compound can be used to produce different colors of light. For example, doping zinc sulfide phosphor with different metals can change the emission wavelength. Some of the colors that have been manufactured in addition to the common phosphors are green, red, blue, yellow, purple, orange, and white.
The GTLSs used in watches give off a small amount of light: Not enough to be seen in daylight, but visible in the dark from a distance of several meters. The average such GTLS has a useful life of 10–20 years. The rate of β emissions decreases by half in each half-life (12.32 years). Also, phosphor degradation will cause the brightness of a tritium tube to drop by more than half in that period. The more tritium is initially placed in the tube, the brighter it is to begin with, and the longer its useful life. Tritium exit signs usually come in three brightness levels guaranteed for 10, 15, or 20-year useful life expectancies. The difference between the signs is how much tritium the manufacturer installs.
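A quick back-of-the-envelope check of these decay figures, assuming the 12.32-year half-life quoted above and, as a simplification, that light output tracks the remaining tritium activity (real tubes dim faster because the phosphor also degrades):

```python
# Fraction of the original tritium activity remaining after t years,
# using the 12.32-year half-life quoted above. Brightness is assumed here to
# track activity only; real tubes dim faster because the phosphor also degrades.
HALF_LIFE_YEARS = 12.32

def remaining_fraction(t_years: float) -> float:
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

for t in (5, 10, 12.32, 15, 20):
    print(f"after {t:>5} years: {remaining_fraction(t):.0%} of initial activity")
# after 12.32 years: 50%; after 20 years: roughly 32%
```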
The light produced by GTLSs varies in color and size. Green usually appears as the brightest color, with a luminance as high as 2 cd/m², while red appears the least bright. For comparison, most consumer desktop liquid crystal displays have luminances of 200 to 300 cd/m². Sizes range from tiny tubes small enough to fit on the hand of a watch to ones the size of a pencil. Large tubes (5 mm diameter and up to 100 mm long) are usually only found in green and, perhaps surprisingly, can be less bright than the standard 22.5 mm × 3 mm tritium tubes because of the lower concentration and high cost of tritium; the smaller size is usually the brightest and is used mainly in commercially available keychains.
Uses
These light sources are most often seen as "permanent" illumination for the hands of wristwatches intended for diving, nighttime, or combat use. They are also used in glowing novelty keychains and in self-illuminated exit signs. They are favored by the military for applications where a power source may not be available, such as for instrument dials in aircraft, compasses, and sights for weapons. In the case of solid tritium light sources, the tritium replaces some of the hydrogen atoms in the paint, which also contains a phosphor such as zinc sulfide.
Tritium lights or beta lights were formerly used in fishing lures. Some flashlights have slots for tritium vials so that the flashlight can be easily located in the dark.
Tritium is used to illuminate the iron sights of some small arms. The reticles of the SA80's optical SUSAT sight and of the LPS 4x6° TIP2 telescopic sight fitted to the PSL rifle, for example, contain a small amount of tritium for the same purpose. The electrons emitted by the radioactive decay of the tritium cause the phosphor to glow, providing a long-lasting (several years), non-battery-powered firearm sight that is visible in dim lighting conditions. The tritium glow is not noticeable in bright conditions such as daylight, however; consequently, some manufacturers have started to integrate fiber-optic sights with tritium vials to provide bright, high-contrast firearm sights in both bright and dim conditions.
Safety
Though these devices contain a radioactive substance, it is currently believed that self-powered lighting does not pose a significant health concern. A 2007 report by the UK government's Health Protection Agency Advisory Group on Ionizing Radiation declared the health risks of tritium exposure to be double that previously set by the International Commission on Radiological Protection, but encapsulated tritium lighting devices, typically taking the form of a luminous glass tube embedded in a thick block of clear plastic, prevent the user from being exposed to the tritium at all unless the device is broken apart.
Tritium presents no external beta radiation threat when encapsulated in non-hydrogen-permeable containers due to its low penetration depth, which is too short to penetrate intact human skin. However, GTLS devices do emit low levels of X-rays due to bremsstrahlung. According to a report by the OECD, any external radiation from a gaseous tritium light device is solely due to bremsstrahlung, usually in the range of 8–14 keV. The bremsstrahlung dose rate cannot be calculated from the properties of tritium alone, as the dose rate and effective energy is dependent on the form of containment. A bare, cylindrical vial GTLS constructed of 0.1 mm thick glass that is 10 mm long and 0.5 mm in diameter will yield a surface dose rate of 100 millirads per hour per curie. If the same vial were instead constructed of 1 mm thick glass and enclosed in a plastic covering that is 2–3 mm thick, the GTLS will yield a surface dose rate of 1 millirad per hour per curie. The dose rate measured from 10 mm away will be two orders of magnitude lower than the measured surface dose rate. Given that the half-value thickness of 10 keV photon radiation in water is about 1.4 mm, the attenuation provided by tissue overlaying blood-forming organs is considerable.
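The attenuation argument in the last sentence can be made quantitative with the quoted half-value thickness. The sketch below assumes simple exponential attenuation and ignores scatter build-up, so it is only an order-of-magnitude illustration:

```python
# Simple exponential attenuation of ~10 keV photons in soft tissue, using the
# half-value thickness of about 1.4 mm quoted for water. This ignores geometry
# and scatter build-up, so it is only an order-of-magnitude illustration.
HVL_MM = 1.4  # half-value thickness for ~10 keV photons in water

def transmitted_fraction(depth_mm: float) -> float:
    return 0.5 ** (depth_mm / HVL_MM)

for depth in (1.4, 5.0, 10.0):
    print(f"{depth:>4} mm of tissue transmits {transmitted_fraction(depth):.3%}")
# ~10 mm of overlying tissue passes well under 1% of the surface dose
```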
The primary danger from tritium arises if it is inhaled, ingested, injected, or absorbed into the body. This results in the absorption of the emitted radiation in a small region of the body, again due to the low penetration depth. The biological half-life of tritium – the time it takes for half of an ingested dose to be expelled from the body – is low, at only 12 days. Tritium excretion can be accelerated further by increasing water intake to 3–4 liters/day. Direct, short-term exposure to small amounts of tritium is mostly harmless. If a tritium tube breaks, one should leave the area and allow the gas to diffuse into the air. Tritium exists naturally in the environment, but in very small quantities.
Legislation
Products containing tritium are controlled by law because tritium is used in boosted fission weapons and thermonuclear weapons (though in quantities several thousand times larger than that in a keychain). In the US, devices such as self-luminous exit signs, gauges, wristwatches, etc. that contain small amounts of tritium are under the jurisdiction of the Nuclear Regulatory Commission, and are subject to possession, distribution, and import and export regulations found in 10 CFR Parts, 30, 32, and 110. They are also subject to regulations for possession, use, and disposal in certain states. Luminous products containing more tritium than needed for a wristwatch are not widely available at retail outlets in the United States.
They are readily sold and used in the UK and US. They are regulated in England and Wales by environmental health departments of local councils. In Australia products containing tritium are licence exempt if they contain less than tritium and have a total activity of less than , except for in safety devices where the limit is total activity.
See also
List of light sources
Radium Girls
References
External links
Cleanup of a broken tritium sign
Radioluminescent items
Luminor 2020 – Debunking Panerai's fictional history of tritium-based lume (Perezcope.com)
Lighting
Nuclear technology
Radioactivity
Radioluminescence | Tritium radioluminescence | [
"Physics",
"Chemistry"
] | 2,063 | [
"Nuclear technology",
"Radioactivity",
"Nuclear physics"
] |
807,570 | https://en.wikipedia.org/wiki/Cryogenic%20fuel | Cryogenic fuels are fuels that require storage at extremely low temperatures in order to maintain them in a liquid state. These fuels are used in machinery that operates in space (e.g. rockets and satellites) where ordinary fuel cannot be used, due to the very low temperatures often encountered in space, and the absence of an environment that supports combustion (on Earth, oxygen is abundant in the atmosphere, whereas human-explorable space is a vacuum where oxygen is virtually non-existent). Cryogenic fuels most often constitute liquefied gases such as liquid hydrogen.
Some rocket engines use regenerative cooling, the practice of circulating their cryogenic fuel around the nozzles before the fuel is pumped into the combustion chamber and ignited. This arrangement was first suggested by Eugen Sänger in the 1940s. All engines in the Saturn V rocket that sent the first crewed missions to the Moon used this design element, which is still in use today for liquid-fueled engines.
Quite often, liquid oxygen is mistakenly called a cryogenic fuel, though it is actually an oxidizer and not a fuel: as in any combustion engine, only the non-oxygen component of the combustion is considered the "fuel", although this distinction is arbitrary.
Russian aircraft manufacturer Tupolev developed a version of its popular Tu-154 design but with a cryogenic fuel system, designated the Tu-155. Using a fuel referred to as liquefied natural gas (LNG), its first flight was in 1989.
Operation
Cryogenic fuels can be placed into two categories: inert and flammable or combustible. Both types exploit the large liquid-to-gas volume ratio that occurs when liquid transitions to gas phase. The feasibility of cryogenic fuels is associated with what is known as a high mass flow rate. With regulation, the high-density energy of cryogenic fuels is utilized to produce thrust in rockets and controllable consumption of fuel. The following sections provide further detail.
Inert
These types of fuels typically use the regulation of gas production and flow to power pistons in an engine. The large increases in pressure are controlled and directed toward the engine's pistons. The pistons move due to the mechanical power transformed from the monitored production of gaseous fuel. A notable example can be seen in Peter Dearman's liquid air vehicle. Some common inert fuels include:
Liquid nitrogen
Liquid air
Liquid helium
Liquid neon
Combustible
These fuels utilize the beneficial liquid cryogenic properties along with the flammable nature of the substance as a source of power. These types of fuel are well known primarily for their use in rockets. Some common combustible fuels include:
Liquid hydrogen
Liquid natural gas (LNG)
Liquid methane
Engine combustion
Combustible cryogenic fuels offer much more utility than most inert fuels can. Liquefied natural gas, like any fuel, will only combust when properly mixed with the right amount of air. For LNG, efficiency depends largely on the methane number, which is the gaseous-fuel equivalent of the octane number. This is determined from the methane content of the liquefied fuel and any other dissolved gases, and it varies with experimentally determined efficiencies. Maximizing efficiency in combustion engines is therefore a matter of determining the proper fuel-to-air ratio and using additions of other hydrocarbons for optimal combustion.
Production efficiency
Gas liquefaction processes have improved over the past decades with the advent of better machinery and control of system heat losses. Typical techniques take advantage of the fact that a gas cools dramatically as its pressure is released in a controlled way. Sufficient pressurization followed by depressurization can liquefy most gases, as exemplified by the Joule–Thomson effect.
Liquefied natural gas
While it is cost-effective to liquefy natural gas for storage, transport, and use, roughly 10 to 15 percent of the gas gets consumed during the process. The optimal process contains four stages of propane refrigeration and two stages of ethylene refrigeration. There can be the addition of an additional refrigerant stage, but the additional costs of equipment are not economically justifiable. Efficiency can be tied to the pure component cascade processes which minimize the overall source to sink temperature difference associated with refrigerant condensing. The optimized process incorporates optimized heat recovery along with the use of pure refrigerants. All process designers of liquefaction plants using proven technologies face the same challenge: to efficiently cool and condense a mixture with a pure refrigerant. In the optimized Cascade process, the mixture to be cooled and condensed is the feed gas. In the propane mixed refrigerant processes, the two mixtures requiring cooling and condensing are the feed gas and the mixed refrigerant. The chief source of inefficiency lies in the heat exchange train during the liquefaction process.
Advantages and disadvantages
Benefits
Cryogenic fuels are environmentally cleaner than gasoline or fossil fuels. Among other things, the greenhouse gas rate could potentially be reduced by 11–20% using LNG as opposed to gasoline when transporting goods.
Along with their eco-friendly nature, they have the potential to significantly decrease transportation costs of inland products because of their abundance compared to that of fossil fuels.
Cryogenic fuels have a higher mass flow rate than fossil fuels and therefore produce more thrust and power when combusted for use in an engine. This means that engines will run farther on less fuel overall than modern gas engines.
Cryogenic fuels are non-pollutants and therefore, if spilled, pose no risk to the environment. There is no need to clean up hazardous waste after a spill.
Potential drawbacks
Some cryogenic fuels, like LNG, are naturally combustible. Ignition of fuel spills could result in a large explosion. This is possible in the case of a car crash with an LNG engine.
Cryogenic storage tanks must be able to withstand high pressure. High-pressure propellant tanks require thicker walls and stronger alloys which make the vehicle tanks heavier, thereby reducing performance.
Despite being non-toxic, cryogenic fuels are denser than air and can therefore lead to asphyxiation. If leaked, the liquid will boil into a very dense, cold gas which, if inhaled, could be fatal.
See also
Cryogenic rocket engine
Liquid rocket propellant
Tupolev Tu-244
References
Fuels
Industrial gases
Cryogenics | Cryogenic fuel | [
"Physics",
"Chemistry"
] | 1,316 | [
"Applied and interdisciplinary physics",
"Chemical energy sources",
"Cryogenics",
"Industrial gases",
"Fuels",
"Chemical process engineering"
] |
807,941 | https://en.wikipedia.org/wiki/Tiltwing | A tiltwing aircraft features a wing that is horizontal for conventional forward flight and rotates up for vertical takeoff and landing. It is similar to the tiltrotor design where only the propeller and engine rotate. Tiltwing aircraft are typically fully capable of VTOL operations.
The tiltwing design offers certain advantages in vertical flight relative to a tiltrotor. Because the slipstream from the rotor strikes the wing on its smallest dimension, the tiltwing is able to apply more of its engine power to lifting the aircraft. For comparison, the V-22 Osprey tiltrotor loses about 10% of its thrust to interference from the wings.
Another advantage of tiltwing aircraft is the ease of transition between VTOL and horizontal flight modes. A tiltrotor must first fly forwards like a helicopter, building airspeed until wing lift is sufficient to allow the nacelles to begin tilting down. As a note, the MV-22 Osprey's stall speed in airplane mode is . Conversely, a tiltwing aircraft can begin the transition from helicopter to airplane at zero forward airspeed. Because of this, the Canadair CL-84 Dynavert was able to take off vertically, then accelerate from zero airspeed to in 8 seconds.
However, the fixed wing of a tiltrotor aircraft offers a superior angle of attack—thus more lift and a shorter takeoff roll—when performing STOL/STOVL operations.
The main drawbacks of tiltwing aircraft are susceptibility to wind gusts in VTOL mode and lower hover efficiency. The wing tilted vertically represents a large surface area for crosswinds to push against. Tiltrotors generally have better hover efficiency than tiltwings, but less than helicopters. This is due to the difference in rotor disk loading.
As of 2014, NASA is testing a diesel-electric hybrid 10-foot 10-rotor tiltwing called the GL-10 Greased Lightning, with most propellers folding during horizontal flight.
List of tiltwing aircraft
Tiltwing designs with rocket, jet, or propeller propulsion
Weserflug P.1003 (1938)
Vertol VZ-2 (1957)
Hiller X-18 (1959)
Kaman K-16B (1959)
LTV XC-142 (1964)
Canadair CL-84 Dynavert (1965)
NASA GL-10 Greased Lightning (2014)
Airbus A³ Vahana (2018)
See also
Thrust vectoring
Tailsitter
Tiltrotor
Tiltjet
Coleopter
PTOL
VTOL
References
Aircraft configurations | Tiltwing | [
"Engineering"
] | 516 | [
"Aircraft configurations",
"Aerospace engineering"
] |
808,374 | https://en.wikipedia.org/wiki/Space%20sunshade | A space sunshade or sunshield is something that diverts or otherwise reduces some of the Sun's radiation, preventing it from hitting the Earth and thereby reducing its insolation, which results in reduced heating. Light can be diverted by different methods. The concept of the construction of sunshade as a method of climate engineering dates back to the years 1923, 1929, 1957 and 1978 by the physicist Hermann Oberth. Space mirrors in orbit around the Earth with a diameter of 100 to 300 km, as designed by Hermann Oberth, were intended to focus sunlight on individual regions of the Earth’s surface or deflect it into space so that the solar radiation is weakened in a specifically controlled manner for individual regions on the Earth’s surface.
First proposed in 1989, another space sunshade concept involves putting a large occulting disc, or technology of equivalent purpose, between the Earth and Sun.
A sunshade could potentially be one climate engineering method for mitigating global warming through solar radiation management, because internationally negotiated reductions in carbon emissions may be insufficient to stem climate change. Sunshades could also be used to produce space solar power, acting as solar power satellites. Proposed shade designs include a single-piece shade and a shade made by a great number of small objects. Most such proposals contemplate a blocking element at the Sun-Earth L1 Lagrangian point.
Modern proposals are based on some form of distributed sunshade composed of lightweight transparent elements or inflatable "space bubbles" manufactured in space to reduce the cost of launching massive objects to space. However it would cost trillions of dollars and no prototype has yet been launched. Critics also argue that building it would be too slow to prevent dangerous levels of global warming.
Proposed designs
Cloud of small spacecraft
One proposed sunshade would be composed of 16 trillion small disks at the Sun-Earth L1 Lagrangian point, 1.5 million kilometers from Earth and between it and the Sun. Each disk is proposed to have a 0.6-meter diameter and a thickness of about 5 micrometers. The mass of each disk would be about a gram, adding up to a total of almost 20 million tonnes. Such a group of small sunshades that blocks 2% of the sunlight, deflecting it off into space, would be enough to halt global warming. If 100 tonnes of disks were launched to low Earth orbit every day, it would take 550 years to launch all of them.
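The quoted mass, launch-time, and area figures are easy to check with simple arithmetic. The per-disk mass below is an assumption (the text says only "about a gram"); taking roughly 1.25 g per disk reproduces the quoted totals:

```python
# Rough consistency check of the figures quoted above for the L1 disk cloud.
# The per-disk mass is assumed; ~1.25 g per disk reproduces the quoted
# ~20 million tonnes and ~550-year launch campaign at 100 tonnes per day.
import math

N_DISKS       = 16e12      # 16 trillion disks
MASS_PER_DISK = 1.25e-3    # kilograms per disk (assumed; "about a gram")
LAUNCH_RATE   = 100e3      # 100 tonnes per day, in kg/day
DISK_DIAMETER = 0.6        # metres

total_mass_kg = N_DISKS * MASS_PER_DISK
print(f"total mass ≈ {total_mass_kg / 1e9:.0f} million tonnes")        # ≈ 20

days = total_mass_kg / LAUNCH_RATE
print(f"launch campaign ≈ {days / 365.25:.0f} years at 100 t/day")     # ≈ 550

area_km2 = N_DISKS * math.pi * (DISK_DIAMETER / 2) ** 2 / 1e6
print(f"combined disk area ≈ {area_km2 / 1e6:.1f} million km²")        # a few million km²
```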
The individual autonomous flyers building up the cloud of sunshades are proposed not to reflect the sunlight but rather to be transparent lenses, deflecting the light slightly so it does not hit Earth. This minimizes the effect of solar radiation pressure on the units, requiring less effort to hold them in place at the L1 point. An optical prototype has been constructed by Roger Angel with funding from NIAC.
The remaining solar pressure and the fact that the L1 point is one of unstable equilibrium, easily disturbed by the wobble of the Earth due to gravitational effects from the Moon, requires the small autonomous flyers to be capable of maneuvering themselves to hold position. A suggested solution is to place mirrors capable of rotation on the surface of the flyers. By using the solar radiation pressure on the mirrors as solar sails and tilting them in the right direction, the flyer will be capable of altering its speed and direction to keep in position.
Such a group of sunshades would need to occupy an area of about 3.8 million square kilometers if placed at the L1 point (see other lower disc size estimates below).
It would still take years to launch enough of the disks into orbit to have any effect. This means a long lead time. Roger Angel of the University of Arizona presented the idea for a sunshade at the U.S. National Academy of Sciences in April 2006 and won a NASA Institute for Advanced Concepts grant for further research in July 2006. Creating this sunshade in space was estimated to cost in excess of US$130 billion over 20 years, with an estimated lifetime of 50-100 years, leading Professor Angel to conclude that "the sunshade is no substitute for developing renewable energy, the only permanent solution. A similar massive level of technological innovation and financial investment could ensure that. But if the planet gets into an abrupt climate crisis that can only be fixed by cooling, it would be good to be ready with some shading solutions that have been worked out."
Researchers from the University of Stuttgart's Institute of Space Systems described in 2021 a roadmap for the development, construction and transport of an international planetary sun shield (IPSS) at Lagrange point 1, which would also serve as a photovoltaic plant. Here too, as with Hermann Oberth, production on the Moon, the use of an electromagnetic Moon slingshot (lunar coilgun), and transport of the components from the Moon to Lagrange point 1 between the Earth and the Sun by means of electric spaceships (or, alternatively, solar sails) are assumed. The authors refer to the many international activities and the chance to put the sunlight shield into operation by 2060.
Lightweight solutions and "Space bubbles"
A more recent design was proposed by Olivia Borgue and Andreas M. Hein in 2022: a distributed sunshade with a mass on the order of 100,000 tons, composed of ultra-thin polymeric films and SiO2 nanotubes. The authors estimated that launching such a mass would require 399 yearly launches of a vehicle such as SpaceX Starship for 10 years.
A 2022 concept by MIT Senseable City Lab proposes using thin-film structures ("space bubbles") manufactured in outer space to solve the problem of launching the required mass to space. MIT scientists led by Carlo Ratti believe deflecting 1.8 percent of solar radiation can fully reverse climate change. The full raft of inflatable bubbles would be roughly the size of Brazil and include a control system to regulate its distance from the Sun and optimise its effects. The shell of the thin-film bubbles would be made of silicon, tested in outer space-like conditions at a pressure of .0028 atm and at -50 degrees Celsius. They plan to investigate low vapor-pressure materials to rapidly inflate the bubbles, such as a silicon-based melt or a graphene-reinforced ionic liquid.
In July 2022, a pair of researchers from MIT Senseable City Lab, Olivia Borgue and Andreas M. Hein, have instead proposed integrating nanotubes made out of silicon dioxide into ultra-thin polymeric films (described as "space bubbles" in the media ), whose semi-transparent nature would allow them to resist the pressure of solar wind at L1 point better than any alternative with the same weight. The use of these "bubbles" would limit the mass of a distributed sunshade roughly the size of Brazil to about 100,000 tons, much lower than the earlier proposals. However, it would still require between 399 and 899 yearly launches of a vehicle such as SpaceX Starship for a period of around 10 years, even though the production of the bubbles themselves would have to be done in space. The flights would not begin until research into production and maintenance of these bubbles is completed, which the authors estimate would require a minimum of 10–15 years. After that, the space shield may be large enough by 2050 to prevent crossing of the threshold.
In 2023, three astronomers revisited the space dust concept, instead advocating for a lunar colony that would continuously mine the Moon in order to eject lunar dust into space on a trajectory where it would interfere with sunlight streaming towards the Earth. Ejections would have to be near-continuous, since the dust would scatter in a matter of days, and about 10 million tons would have to be dug out and launched annually. The authors admit that they lack a background in either climate or rocket science, and the proposal may not be logistically feasible.
One Fresnel lens
Several authors have proposed dispersing light before it reaches the Earth by putting a very large lens in space, perhaps at the L1 point between the Earth and the Sun. This plan was proposed in 1989 by J. T. Early. His design involved making a large (2,000 km) glass occulter from lunar material and placing it at the L1 point. Issues included the large amount of material needed to make the disc and also the energy needed to launch it to its orbit.
In 2004, physicist and science fiction author Gregory Benford calculated that a concave rotating Fresnel lens 1000 kilometres across, yet only a few millimeters thick, floating in space at the L1 point, would reduce the solar energy reaching the Earth by approximately 0.5% to 1%.
The cost of such a lens has been disputed. At a science fiction convention in 2004, Benford estimated that it would cost about US$10 billion up front, and another $10 billion in supportive cost during its lifespan.
One diffraction grating
A similar approach involves placing a very large diffraction grating (thin wire mesh) in space, perhaps at the L1 point between the Earth and the Sun. A proposal for a 3,000 ton diffraction mesh was made in 1997 by Edward Teller, Lowell Wood, and Roderick Hyde, although in 2002 these same authors argued for blocking solar radiation in the stratosphere rather than in orbit given then-current space launch technologies.
Other lower disc size estimates
Recent work by Feinberg (2022) illustrates that smaller disc areas (smaller by a factor of approximately 3.5) are feasible when the background climate response is considered. For example, the background Earth climate would yield less re-radiation and feedback. In addition, disc area sizes can be further reduced by a factor of 50 using an Annual Solar Geoengineering approach, as indicated by Feinberg (2024).
See also
References
Climate change mitigation
Terraforming
Climate engineering | Space sunshade | [
"Engineering"
] | 2,032 | [
"Planetary engineering",
"Geoengineering",
"Terraforming"
] |
808,519 | https://en.wikipedia.org/wiki/Discrete%20category | In mathematics, in the field of category theory, a discrete category is a category whose only morphisms are the identity morphisms:
hom_C(X, X) = {id_X} for all objects X
hom_C(X, Y) = ∅ for all objects X ≠ Y
Since the axioms guarantee that there is always an identity morphism from an object to itself, we can express the above as a condition on the cardinality of the hom-sets:
|hom_C(X, Y)| is 1 when X = Y and 0 when X ≠ Y.
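The defining condition is simple enough to render directly in code. The sketch below is an informal illustration in Python, not a formal category-theory library; identities are represented by tagged pairs.

```python
# Informal sketch of the discrete category on a given set of objects:
# hom(X, Y) is a singleton containing the identity when X == Y, and empty
# otherwise. Identities are represented as tagged pairs ("id", X).
def discrete_category(objects):
    objects = frozenset(objects)

    def hom(X, Y):
        if X in objects and X == Y:
            return {("id", X)}
        return set()

    return hom

hom = discrete_category({"a", "b", "c"})
assert len(hom("a", "a")) == 1   # |hom(X, X)| = 1
assert len(hom("a", "b")) == 0   # |hom(X, Y)| = 0 for X != Y
```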
Some authors prefer a weaker notion, where a discrete category merely needs to be equivalent to such a category.
Simple facts
Any class of objects defines a discrete category when augmented with identity maps.
Any subcategory of a discrete category is discrete. Also, a category is discrete if and only if all of its subcategories are full.
The limit of any functor from a discrete category into another category is called a product, while the colimit is called a coproduct. Thus, for example, the discrete category with just two objects can be used as a diagram or diagonal functor to define a product or coproduct of two objects. Alternately, for a general category C and the discrete category 2, one can consider the functor category C2. The diagrams of 2 in this category are pairs of objects, and the limit of the diagram is the product.
The functor from Set to Cat that sends a set to the corresponding discrete category is left adjoint to the functor sending a small category to its set of objects. (For the right adjoint, see indiscrete category.)
References
Robert Goldblatt (1984). Topoi, the Categorial Analysis of Logic (Studies in logic and the foundations of mathematics, 98). North-Holland. Reprinted 2006 by Dover Publications, and available online at Robert Goldblatt's homepage.
Categories in category theory | Discrete category | [
"Mathematics"
] | 409 | [
"Mathematical structures",
"Category theory",
"Categories in category theory"
] |
808,767 | https://en.wikipedia.org/wiki/Vacuum%20distillation | Vacuum distillation or distillation under reduced pressure is a type of distillation performed under reduced pressure, which allows the purification of compounds not readily distilled at ambient pressures or simply to save time or energy. This technique separates compounds based on differences in their boiling points. This technique is used when the boiling point of the desired compound is difficult to achieve or will cause the compound to decompose. Reduced pressures decrease the boiling point of compounds. The reduction in boiling point can be calculated using a temperature-pressure nomograph using the Clausius–Clapeyron relation.
Laboratory-scale applications
Compounds with a boiling point lower than 150 °C typically are distilled at ambient pressure. For samples with high boiling points, short-path distillation apparatus is commonly employed. This technique is amply illustrated in Organic Synthesis.
Rotary evaporation
Rotary evaporation is a common technique used in laboratories to concentrate or isolate a compound from solution. Many solvents are volatile and can easily be evaporated using rotary evaporation. Even less volatile solvents can be removed by rotary evaporation under high vacuum and with heating. It is also used by environmental regulatory agencies for determining the amount of solvents in paints, coatings and inks.
Safety considerations
Safety is an important consideration when glassware is under vacuum pressure. Scratches and cracks can result in implosions when the vacuum is applied. Wrapping as much of the glassware with tape as is practical helps to prevent dangerous scattering of glass shards in the event of an implosion.
Industrial-scale applications
Industrial-scale vacuum distillation has several advantages. Close boiling mixtures may require many equilibrium stages to separate the key components. One tool to reduce the number of stages needed is to utilize vacuum distillation. Vacuum distillation columns (as depicted in Figures 2 and 3) typically used in oil refineries have diameters ranging up to about 14 meters (46 feet), heights ranging up to about 50 meters (164 feet), and feed rates ranging up to about 25,400 cubic meters per day (160,000 barrels per day).
Vacuum distillation can improve a separation by:
Prevention of product degradation or polymer formation because of reduced pressure leading to lower tower bottoms temperatures,
Reduction of product degradation or polymer formation because of reduced mean residence time especially in columns using packing rather than trays.
Increasing yield, and purity.
Vacuum distillation in petroleum refining
Petroleum crude oil is a complex mixture of hundreds of different hydrocarbon compounds generally having from 3 to 60 carbon atoms per molecule, although there may be small amounts of hydrocarbons outside that range. The refining of crude oil begins with distilling the incoming crude oil in a so-called atmospheric distillation column operating at pressures slightly above atmospheric pressure.
Vacuum distillation can also be referred to as "low-temperature distillation".
In distilling the crude oil, it is important not to subject the crude oil to temperatures above 370 to 380 °C because high molecular weight components in the crude oil will undergo thermal cracking and form petroleum coke at temperatures above that. Formation of coke would result in plugging the tubes in the furnace that heats the feed stream to the crude oil distillation column. Plugging would also occur in the piping from the furnace to the distillation column as well as in the column itself.
The constraint imposed by limiting the column inlet crude oil to a temperature of less than 370 to 380 °C yields a residual oil from the bottom of the atmospheric distillation column consisting entirely of hydrocarbons that boil above 370 to 380 °C.
To further distill the residual oil from the atmospheric distillation column, the distillation must be performed at absolute pressures as low as 10 to 40 mmHg (torr), about 5% of atmospheric pressure, so as to limit the operating temperature to less than 370 to 380 °C.
Figure 2 is a simplified process diagram of a petroleum refinery vacuum distillation column that depicts the internals of the column and Figure 3 is a photograph of a large vacuum distillation column in a petroleum refinery.
The 10 to 40 mmHg absolute pressure in a vacuum distillation column increases the volume of vapor formed per volume of liquid distilled. The result is that such columns have very large diameters.
Distillation columns such as those in Figures 2 and 3 may have diameters of 15 meters or more, heights ranging up to about 50 meters, and feed rates ranging up to about 25,400 cubic meters per day (160,000 barrels per day).
The vacuum distillation column internals must provide good vapor–liquid contacting while, at the same time, maintaining a very low pressure increase from the top of the column to the bottom. Therefore, the vacuum column uses distillation trays only where products are withdrawn from the side of the column (referred to as side draws). Most of the column uses packing material for the vapor–liquid contacting because such packing has a lower pressure drop than distillation trays. This packing material can be either structured sheet metal or randomly dumped packing such as Raschig rings.
The absolute pressure of 10 to 40 mmHg in the vacuum column is most often achieved by using multiple stages of steam jet ejectors.
Many industries, other than the petroleum refining industry, use vacuum distillation on a much smaller scale. Copenhagen-based Empirical Spirits, a distillery founded by former Noma chefs, uses the process to create uniquely flavoured spirits. Their flagship spirit, Helena, is created using Koji, alongside Pilsner Malt and Belgian Saison Yeast.
Large-scale water purification
Vacuum distillation is often used in large industrial plants as an efficient way to remove salt from ocean water, in order to produce fresh water. This is known as desalination. The ocean water is placed under a vacuum to lower its boiling point and has a heat source applied, allowing the fresh water to boil off and be condensed. The condensing of the water vapor prevents the water vapor from filling the vacuum chamber, and allows the effect to run continuously without a loss of vacuum pressure. The heat from condensation of the water vapor is removed by a heat sink, which uses the incoming ocean water as the coolant and thus preheats the feed of ocean water. Some forms of distillation do not use condensers, but instead compress the vapor mechanically with a pump. This acts as a heat pump, concentrating the heat from the vapor and allowing for the heat to be returned and reused by the incoming untreated water source. There are several forms of vacuum distillation of water, with the most common being multiple-effect distillation, vapor-compression desalination, and multi-stage flash distillation.
Molecular distillation
Molecular distillation is vacuum distillation below the pressure of 0.01 torr (1.3 Pa). 0.01 torr is one order of magnitude above high vacuum, where fluids are in the free molecular flow regime, i.e. the mean free path of molecules is comparable to the size of the equipment. The gaseous phase no longer exerts significant pressure on the substance to be evaporated, and consequently, the rate of evaporation no longer depends on pressure. That is, because the continuum assumptions of fluid dynamics no longer apply, mass transport is governed by molecular dynamics rather than fluid dynamics. Thus, a short path between the hot surface and the cold surface is necessary, typically by suspending a hot plate covered with a film of feed next to a cold plate with a line of sight in between.
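The claim that the mean free path becomes comparable to the apparatus can be checked with the standard kinetic-theory estimate; the temperature and molecular diameter below are assumed illustrative values:

```python
# Mean free path λ = k_B * T / (sqrt(2) * pi * d**2 * p) for an ideal gas.
# At 0.01 torr (~1.3 Pa) and an assumed molecular diameter of ~0.4 nm,
# λ is of the order of millimetres, i.e. comparable to a short hot-to-cold gap.
import math

K_B = 1.380649e-23        # J/K
T   = 400.0               # K, assumed evaporation temperature
D   = 4.0e-10             # m, assumed effective molecular diameter
P   = 0.01 * 133.322      # 0.01 torr in pascals

mfp = K_B * T / (math.sqrt(2) * math.pi * D**2 * P)
print(f"mean free path ≈ {mfp * 1000:.1f} mm")
```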
Molecular distillation is used industrially for purification of oils.
Gallery
See also
Continuous distillation
Fractionating column
Fractional distillation
Kugelrohr
References
External links
D1160 Vacuum Distillation
How vacuum distillation works
Pressure-temperature nomograph
Short path distillation , includes a table comparing methods
Distillation
Industrial processes
Vacuum | Vacuum distillation | [
"Physics",
"Chemistry"
] | 1,632 | [
"Distillation",
"Vacuum",
"Matter",
"Separation processes"
] |
1,886,799 | https://en.wikipedia.org/wiki/Entropy%20unit | The entropy unit is a non-S.I. unit of thermodynamic entropy, usually denoted by "e.u." or "eU" and equal to one calorie per kelvin per mole, or 4.184 joules per kelvin per mole. Entropy units are primarily used in chemistry to describe entropy changes.
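A minimal conversion sketch (not part of the article) between entropy units and SI units:

```python
# Minimal sketch: converting between entropy units (cal/(K*mol)) and
# SI units (J/(K*mol)) using the thermochemical calorie.

CAL_TO_JOULE = 4.184  # J per thermochemical calorie

def eu_to_si(entropy_eu: float) -> float:
    """Convert an entropy change from e.u. to J/(K*mol)."""
    return entropy_eu * CAL_TO_JOULE

def si_to_eu(entropy_si: float) -> float:
    """Convert an entropy change from J/(K*mol) to e.u."""
    return entropy_si / CAL_TO_JOULE

print(eu_to_si(1.0))    # 4.184 J/(K*mol)
print(si_to_eu(20.0))   # about 4.78 e.u.
```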
Sources
Units of measurement | Entropy unit | [
"Physics",
"Chemistry",
"Mathematics"
] | 76 | [
"Thermodynamics stubs",
"Quantity",
"Thermodynamics",
"Physical chemistry stubs",
"Units of measurement"
] |
1,889,206 | https://en.wikipedia.org/wiki/Slug%20test | In hydrogeology, a slug test is a particular type of aquifer test where water is quickly added or removed from a groundwater well, and the change in hydraulic head is monitored through time, to determine the near-well aquifer characteristics. It is a method used by hydrogeologists and civil engineers to determine the transmissivity/hydraulic conductivity and storativity of the material the well is completed in.
Method
The "slug" of water can either be added to or removed from the well — the only requirement is that it be done as quickly as possible (the interpretation typically assumes instantaneously), then the water level or pressure is monitored. Depending on the properties of the aquifer and the size of the slug, the water level may return to pre-test levels very quickly (thus complicating accurate collection of water level data).
A slug can be added by either quickly adding a measured amount of water to the well or something which displaces a measured volume (e.g., a long heavy pipe with the ends capped off). An alternative object is a solid polyvinyl chloride (PVC) rod, with sufficient weight to sink into the groundwater. The objective here is to displace water, not merely be "heavy". A slug of water can be removed using a bailer or pump, but this is more difficult to do since it must be done very quickly and the equipment for removing the water (pump or bailer) will likely be in the way of getting water level measurements.
Performance
A slug test is in contrast to standard aquifer tests, which typically involve pumping a well at a constant flowrate, and monitoring the response of the aquifer in nearby monitoring wells. Often slug tests are performed instead of a constant rate test, because:
time constraints (quick results, or results for a large number of wells, are needed),
the well does not or cannot have a pump installed on it (slug tests do not require pumping),
the transmissivity of the material the well is cased in is too low to realistically perform a proper pumping test (common for aquitards or some bedrock monitoring wells), or
the general size (order of magnitude) of the aquifer parameters is all the accuracy that is required.
The size of the slug required is determined by the aquifer properties, the size of the well and the amount of time which is available for the test. For very permeable aquifers, the pulse will dissipate very quickly. If the well has a large diameter, a large volume of water must be added to increase the level in the well a measurable amount.
Interpretation
Because the flow rate into or out of the well is not constant, as is the case in a typical aquifer test, the standard Theis solution does not work.
Mathematically, the Theis equation is the solution of the groundwater flow equation for a step increase in discharge rate at the pumping well; a slug test is instead an instantaneous pulse at the pumping well. This means that a superposition (or more precisely a convolution) of an infinite number of sequential slug tests through time would effectively be a "standard" Theis aquifer test.
There are several known solutions to the slug test problem; a common engineering approximation is the Hvorslev method, which approximates the more rigorous solution to transient aquifer flow with a simple decaying exponential function.
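The following is a minimal sketch of how the Hvorslev approximation is often applied; the specific full-penetration formula, variable names, and example data are assumptions chosen for illustration, not values from the article:

```python
import math

# Minimal sketch of a Hvorslev-style interpretation. Head recovery is modeled
# as H(t)/H0 = exp(-t/T0); hydraulic conductivity then follows from
# K = r_c^2 * ln(L_e / R) / (2 * L_e * t_37), a common form valid for
# L_e / R > 8, where t_37 is the time for the head change to fall to 37%
# of its initial value.

def hvorslev_conductivity(r_casing: float, r_screen: float,
                          screen_length: float, t_37: float) -> float:
    """Hydraulic conductivity K (m/s) from a slug test (Hvorslev method)."""
    if screen_length / r_screen <= 8:
        raise ValueError("this form assumes L_e / R > 8")
    return (r_casing**2 * math.log(screen_length / r_screen)
            / (2.0 * screen_length * t_37))

def t37_from_fit(times, head_ratios):
    """Estimate t_37 by a log-linear fit of H(t)/H0 = exp(-t/T0)."""
    slope = (sum(t * math.log(r) for t, r in zip(times, head_ratios))
             / sum(t * t for t in times))
    return -1.0 / slope      # t_37 equals the basic time lag T0

# Assumed field data: 5 cm casing and screen radius, 1 m screen,
# head ratios recorded every 30 s.
times = [30.0, 60.0, 90.0, 120.0]
ratios = [0.74, 0.55, 0.41, 0.30]
t37 = t37_from_fit(times, ratios)
K = hvorslev_conductivity(0.05, 0.05, 1.0, t37)
print(f"t_37 ~ {t37:.0f} s, K ~ {K:.2e} m/s")
```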
The aquifer parameters obtained from a slug test are typically less representative of the aquifer surrounding the well than an aquifer test which involves pumping in one well and monitoring in another. Complications arise from near-well effects (i.e., well skin and wellbore storage), which may make it difficult to get accurate results from slug test interpretation.
See also
Aquifer test
Well test
Aquifers
Hydrology
Hydraulic engineering
Water wells | Slug test | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 807 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Water wells",
"Civil engineering",
"Aquifers",
"Environmental engineering",
"Hydraulic engineering"
] |
1,890,880 | https://en.wikipedia.org/wiki/Cleavage%20%28crystal%29 | Cleavage, in mineralogy and materials science, is the tendency of crystalline materials to split along definite crystallographic structural planes. These planes of relative weakness are a result of the regular locations of atoms and ions in the crystal, which create smooth repeating surfaces that are visible both in the microscope and to the naked eye. If bonds in certain directions are weaker than others, the crystal will tend to split along the weakly bonded planes. These flat breaks are termed "cleavage". The classic example of cleavage is mica, which cleaves in a single direction along the basal pinacoid, making the layers seem like pages in a book. In fact, mineralogists often refer to "books of mica".
Diamond and graphite provide examples of cleavage. Each is composed solely of a single element, carbon. In diamond, each carbon atom is bonded to four others in a tetrahedral pattern with short covalent bonds. The planes of weakness (cleavage planes) in a diamond are in four directions, following the faces of the octahedron. In graphite, carbon atoms are contained in layers in a hexagonal pattern where the covalent bonds are shorter (and thus even stronger) than those of diamond. However, each layer is connected to the other with a longer and much weaker van der Waals bond. This gives graphite a single direction of cleavage, parallel to the basal pinacoid. So weak is this bond that it is broken with little force, giving graphite a slippery feel as layers shear apart. As a result, graphite makes an excellent dry lubricant.
While all single crystals will show some tendency to split along atomic planes in their crystal structure, if the differences between one direction or another are not large enough, the mineral will not display cleavage. Corundum, for example, displays no cleavage.
Types of cleavage
Cleavage forms parallel to crystallographic planes:
Basal, pinacoidal, or planar cleavage occurs when there is only one cleavage plane. Talc has basal cleavage. Mica (like muscovite or biotite) also has basal cleavage; this is why mica can be peeled into thin sheets.
Prismatic cleavage occurs when there are two cleavage planes in a crystal that intersect at 90 degrees. Spodumene exhibits prismatic cleavage.
Non-prismatic cleavage occurs when there are two cleavage planes in a crystal that do not intersect at 90 degrees (two non-perpendicular directions of cleavage, e.g. 60 and 120 degrees).
Cubic cleavage occurs when there are three cleavage planes intersecting at 90 degrees. Halite (or salt) has cubic cleavage, and therefore, when halite crystals are broken, they will form more cubes.
Rhombohedral cleavage occurs when there are three cleavage planes intersecting at angles that are not 90 degrees. Calcite has rhombohedral cleavage.
Octahedral cleavage occurs when there are four cleavage planes in a crystal. Fluorite exhibits perfect octahedral cleavage. Octahedral cleavage is common for semiconductors. Diamond also has octahedral cleavage.
Dodecahedral cleavage occurs when there are six cleavage planes in a crystal. Sphalerite has dodecahedral cleavage.
Parting
Crystal parting occurs when minerals break along planes of structural weakness due to external stress, along twin composition planes, or along planes of weakness due to the exsolution of another mineral. Parting breaks are very similar in appearance to cleavage, but the cause is different. Cleavage occurs because of design weakness while parting results from growth defects (deviations from the basic crystallographic design). Thus, cleavage will occur in all samples of a particular mineral, while parting is only found in samples with structural defects. Examples of parting include the octahedral parting of magnetite, the rhombohedral and basal parting in corundum, and the basal parting in pyroxenes.
Uses
Cleavage is a physical property traditionally used in mineral identification, both in hand-sized specimen and microscopic examination of rock and mineral studies. As an example, the angles between the prismatic cleavage planes for the pyroxenes (88–92°) and the amphiboles (56–124°) are diagnostic.
Crystal cleavage is of technical importance in the electronics industry and in the cutting of gemstones.
Precious stones are generally cleaved by impact, as in diamond cutting.
Synthetic single crystals of semiconductor materials are generally sold as thin wafers which are much easier to cleave. Simply pressing a silicon wafer against a soft surface and scratching its edge with a diamond scribe is usually enough to cause cleavage; however, when dicing a wafer to form chips, a procedure of scoring and breaking is often followed for greater control. Elemental semiconductors (silicon, germanium, and diamond) are diamond cubic, a space group for which octahedral cleavage is observed. This means that some orientations of wafer allow near-perfect rectangles to be cleaved. Most other commercial semiconductors (GaAs, InSb, etc.) can be made in the related zinc blende structure, with similar cleavage planes.
See also
Cleavage (geology)
References
External links
Mineral galleries: Mineral properties – Cleavage
Crystallography
Mineralogy concepts | Cleavage (crystal) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,066 | [
"Crystallography",
"Condensed matter physics",
"Materials science"
] |
1,890,897 | https://en.wikipedia.org/wiki/Graetz%20number | In fluid dynamics, the Graetz number (Gz) is a dimensionless number that characterizes laminar flow in a conduit. The number is defined as:
$$\mathrm{Gz} = \frac{D_H}{L}\,\mathrm{Re}\,\mathrm{Pr}$$
where
DH is the diameter in round tubes or hydraulic diameter in arbitrary cross-section ducts
L is the length
Re is the Reynolds number and
Pr is the Prandtl number.
This number is useful in determining the thermally developing flow entrance length in ducts. A Graetz number of approximately 1000 or less is the point at which flow would be considered thermally fully developed.
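A minimal sketch of the calculation (the fluid properties below are assumed, approximate values for water):

```python
# Minimal sketch: Gz = (D_H / L) * Re * Pr for laminar pipe flow, with a
# check against the ~1000 criterion for thermally fully developed flow.

def graetz_number(d_h: float, length: float, velocity: float,
                  kinematic_viscosity: float, prandtl: float) -> float:
    reynolds = velocity * d_h / kinematic_viscosity
    return (d_h / length) * reynolds * prandtl

# Water at roughly 20 degC flowing slowly through a 10 mm tube, 2 m downstream.
gz = graetz_number(d_h=0.01, length=2.0, velocity=0.05,
                   kinematic_viscosity=1.0e-6, prandtl=7.0)
print(f"Gz = {gz:.0f}",
      "-> thermally fully developed" if gz < 1000 else "-> still developing")
```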
When used in connection with mass transfer the Prandtl number is replaced by the Schmidt number, Sc, which expresses the ratio of the momentum diffusivity to the mass diffusivity.
The quantity is named after the physicist Leo Graetz.
References
Dimensionless numbers of fluid mechanics
Fluid dynamics | Graetz number | [
"Chemistry",
"Engineering"
] | 177 | [
"Piping",
"Chemical engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
267,484 | https://en.wikipedia.org/wiki/Synthetic%20geometry | Synthetic geometry (sometimes referred to as axiomatic geometry or even pure geometry) is geometry without the use of coordinates. It relies on the axiomatic method for proving all results from a few basic properties initially called postulates, and at present called axioms.
After the 17th-century introduction by René Descartes of the coordinate method, which was called analytic geometry, the term "synthetic geometry" was coined to refer to the older methods that were, before Descartes, the only known ones.
According to Felix Klein
Synthetic geometry is that which studies figures as such, without recourse to formulae, whereas analytic geometry consistently makes use of such formulae as can be written down after the adoption of an appropriate system of coordinates.
The first systematic approach for synthetic geometry is Euclid's Elements. However, it appeared at the end of the 19th century that Euclid's postulates were not sufficient for characterizing geometry. The first complete axiom system for geometry was given only at the end of the 19th century by David Hilbert. At the same time, it appeared that both synthetic methods and analytic methods can be used to build geometry. The fact that the two approaches are equivalent has been proved by Emil Artin in his book Geometric Algebra.
Because of this equivalence, the distinction between synthetic and analytic geometry is no more in use, except at elementary level, or for geometries that are not related to any sort of numbers, such as some finite geometries and non-Desarguesian geometry.
Logical synthesis
The process of logical synthesis begins with some arbitrary but definite starting point. This starting point is the introduction of primitive notions or primitives and axioms about these primitives:
Primitives are the most basic ideas. Typically they include both objects and relationships. In geometry, the objects are things such as points, lines and planes, while a fundamental relationship is that of incidence – of one object meeting or joining with another. The terms themselves are undefined. Hilbert once remarked that instead of points, lines and planes one might just as well talk of tables, chairs and beer mugs, the point being that the primitive terms are just empty placeholders and have no intrinsic properties.
Axioms are statements about these primitives; for example, any two points are together incident with just one line (i.e. that for any two points, there is just one line which passes through both of them). Axioms are assumed true, and not proven. They are the building blocks of geometric concepts, since they specify the properties that the primitives have.
From a given set of axioms, synthesis proceeds as a carefully constructed logical argument. When a significant result is proved rigorously, it becomes a theorem.
Properties of axiom sets
There is no fixed axiom set for geometry, as more than one consistent set can be chosen. Each such set may lead to a different geometry, while there are also examples of different sets giving the same geometry. With this plethora of possibilities, it is no longer appropriate to speak of "geometry" in the singular.
Historically, Euclid's parallel postulate has turned out to be independent of the other axioms. Simply discarding it gives absolute geometry, while negating it yields hyperbolic geometry. Other consistent axiom sets can yield other geometries, such as projective, elliptic, spherical or affine geometry.
Axioms of continuity and "betweenness" are also optional, for example, discrete geometries may be created by discarding or modifying them.
Following the Erlangen program of Klein, the nature of any given geometry can be seen as the connection between symmetry and the content of the propositions, rather than the style of development.
History
Euclid's original treatment remained unchallenged for over two thousand years, until the simultaneous discoveries of the non-Euclidean geometries by Gauss, Bolyai, Lobachevsky and Riemann in the 19th century led mathematicians to question Euclid's underlying assumptions.
One of the early French analysts summarized synthetic geometry this way:
The Elements of Euclid are treated by the synthetic method. This author, after having posed the axioms, and formed the requisites, established the propositions which he proves successively being supported by that which preceded, proceeding always from the simple to compound, which is the essential character of synthesis.
The heyday of synthetic geometry can be considered to have been the 19th century, when analytic methods based on coordinates and calculus were ignored by some geometers such as Jakob Steiner, in favor of a purely synthetic development of projective geometry. For example, the treatment of the projective plane starting from axioms of incidence is actually a broader theory (with more models) than is found by starting with a vector space of dimension three. Projective geometry has in fact the simplest and most elegant synthetic expression of any geometry.
In his Erlangen program, Felix Klein played down the tension between synthetic and analytic methods:
On the Antithesis between the Synthetic and the Analytic Method in Modern Geometry:
The distinction between modern synthesis and modern analytic geometry must no longer be regarded as essential, inasmuch as both subject-matter and methods of reasoning have gradually taken a similar form in both. We choose therefore in the text as common designation of them both the term projective geometry. Although the synthetic method has more to do with space-perception and thereby imparts a rare charm to its first simple developments, the realm of space-perception is nevertheless not closed to the analytic method, and the formulae of analytic geometry can be looked upon as a precise and perspicuous statement of geometrical relations. On the other hand, the advantage to original research of a well formulated analysis should not be underestimated, - an advantage due to its moving, so to speak, in advance of the thought. But it should always be insisted that a mathematical subject is not to be considered exhausted until it has become intuitively evident, and the progress made by the aid of analysis is only a first, though a very important, step.
The close axiomatic study of Euclidean geometry led to the construction of the Lambert quadrilateral and the Saccheri quadrilateral. These structures introduced the field of non-Euclidean geometry where Euclid's parallel axiom is denied. Gauss, Bolyai and Lobachevski independently constructed hyperbolic geometry, where parallel lines have an angle of parallelism that depends on their separation. This study became widely accessible through the Poincaré disc model where motions are given by Möbius transformations. Similarly, Riemann, a student of Gauss's, constructed Riemannian geometry, of which elliptic geometry is a particular case.
Another example concerns inversive geometry as advanced by Ludwig Immanuel Magnus, which can be considered synthetic in spirit. The closely related operation of reciprocation expresses analysis of the plane.
Karl von Staudt showed that algebraic axioms, such as commutativity and associativity of addition and multiplication, were in fact consequences of incidence of lines in geometric configurations. David Hilbert showed that the Desargues configuration played a special role. Further work was done by Ruth Moufang and her students. The concepts have been one of the motivators of incidence geometry.
When parallel lines are taken as primary, synthesis produces affine geometry. Though Euclidean geometry is both an affine and metric geometry, in general affine spaces may be missing a metric. The extra flexibility thus afforded makes affine geometry appropriate for the study of spacetime, as discussed in the history of affine geometry.
In 1955 Herbert Busemann and Paul J. Kelley sounded a nostalgic note for synthetic geometry:
Although reluctantly, geometers must admit that the beauty of synthetic geometry has lost its appeal for the new generation. The reasons are clear: not so long ago synthetic geometry was the only field in which the reasoning proceeded strictly from axioms, whereas this appeal — so fundamental to many mathematically interested people — is now made by many other fields.
For example, college studies now include linear algebra, topology, and graph theory where the subject is developed from first principles, and propositions are deduced by elementary proofs. Expecting to replace synthetic with analytic geometry leads to loss of geometric content.
Today's student of geometry has axioms other than Euclid's available: see Hilbert's axioms and Tarski's axioms.
Ernst Kötter published a (German) report in 1901 on "The development of synthetic geometry from Monge to Staudt (1847)".
Proofs using synthetic geometry
Synthetic proofs of geometric theorems make use of auxiliary constructs (such as helping lines) and concepts such as equality of sides or angles and similarity and congruence of triangles. Examples of such proofs can be found in the articles Butterfly theorem, Angle bisector theorem, Apollonius' theorem, British flag theorem, Ceva's theorem, Equal incircles theorem, Geometric mean theorem, Heron's formula, Isosceles triangle theorem, Law of cosines, and others that are linked to here.
Computational synthetic geometry
In conjunction with computational geometry, a computational synthetic geometry has been founded, having close connection, for example, with matroid theory. Synthetic differential geometry is an application of topos theory to the foundations of differentiable manifold theory.
See also
Foundations of geometry
Incidence geometry
Synthetic differential geometry
Notes
References
Halsted, G. B. (1896) Elementary Synthetic Geometry via Internet Archive
Halsted, George Bruce (1906) Synthetic Projective Geometry, via Internet Archive.
Hilbert & Cohn-Vossen, Geometry and the imagination.
Fields of geometry | Synthetic geometry | [
"Mathematics"
] | 1,967 | [
"Fields of geometry",
"Geometry"
] |
267,637 | https://en.wikipedia.org/wiki/Sinc%20filter | In signal processing, a sinc filter can refer to either a sinc-in-time filter whose impulse response is a sinc function and whose frequency response is rectangular, or to a sinc-in-frequency filter whose impulse response is rectangular and whose frequency response is a sinc function. Calling them according to which domain the filter resembles a sinc avoids confusion. If the domain is unspecified, sinc-in-time is often assumed, or context hopefully can infer the correct domain.
Sinc-in-time
Sinc-in-time is an ideal filter that removes all frequency components above a given cutoff frequency, without attenuating lower frequencies, and has linear phase response. It may thus be considered a brick-wall filter or rectangular filter.
Its impulse response is a sinc function in the time domain:
$$h(t) = 2B\,\operatorname{sinc}(2Bt),$$
while its frequency response is a rectangular function:
$$H(f) = \operatorname{rect}\!\left(\frac{f}{2B}\right),$$
where $B$ (representing its bandwidth) is an arbitrary cutoff frequency.
Its impulse response is given by the inverse Fourier transform of its frequency response:
$$h(t) = \int_{-\infty}^{\infty} H(f)\, e^{i 2\pi f t}\, \mathrm{d}f = 2B\,\operatorname{sinc}(2Bt),$$
where sinc is the normalized sinc function.
Brick-wall filters
An idealized electronic filter with full transmission in the pass band, complete attenuation in the stop band, and abrupt transitions is known colloquially as a "brick-wall filter" (in reference to the shape of the transfer function). The sinc-in-time filter is a brick-wall low-pass filter, from which brick-wall band-pass filters and high-pass filters are easily constructed.
The lowpass filter with brick-wall cutoff at frequency BL has impulse response and transfer function given by:
$$h_{lp}(t) = 2B_L\,\operatorname{sinc}(2 B_L t), \qquad H_{lp}(f) = \operatorname{rect}\!\left(\frac{f}{2 B_L}\right).$$
The band-pass filter with lower band edge BL and upper band edge BH is just the difference of two such sinc-in-time filters (since the filters are zero phase, their magnitude responses subtract directly):
$$h_{bp}(t) = 2B_H\,\operatorname{sinc}(2 B_H t) - 2B_L\,\operatorname{sinc}(2 B_L t).$$
The high-pass filter with lower band edge BH is just a transparent filter minus a sinc-in-time filter, which makes it clear that the Dirac delta function is the limit of a narrow-in-time sinc-in-time filter:
$$h_{hp}(t) = \delta(t) - 2B_H\,\operatorname{sinc}(2 B_H t).$$
Unrealizable
As the sinc-in-time filter has infinite impulse response in both positive and negative time directions, it is non-causal and has an infinite delay (i.e., its compact support in the frequency domain forces its time response not to have compact support meaning that it is ever-lasting) and infinite order (i.e., the response cannot be expressed as a linear differential equation with a finite sum). However, it is used in conceptual demonstrations or proofs, such as the sampling theorem and the Whittaker–Shannon interpolation formula.
Sinc-in-time filters must be approximated for real-world (non-abstract) applications, typically by windowing and truncating an ideal sinc-in-time filter kernel, but doing so reduces its ideal properties. This applies to other brick-wall filters built using sinc-in-time filters.
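A minimal sketch of such an approximation (the cutoff, tap count, and choice of a Hamming window are illustrative, not prescribed by any standard):

```python
import numpy as np

# Minimal sketch: approximate the ideal sinc-in-time low-pass filter by
# truncating its impulse response and shaping it with a window, as described
# above. The cutoff is given as a fraction of the sampling rate.

def windowed_sinc_lowpass(cutoff: float, num_taps: int) -> np.ndarray:
    """FIR low-pass kernel: truncated sinc shaped by a Hamming window."""
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    h = 2.0 * cutoff * np.sinc(2.0 * cutoff * n)   # ideal response, truncated
    h *= np.hamming(num_taps)                      # window tames truncation ripple
    return h / h.sum()                             # normalize to unity gain at DC

# Filter a noisy signal: keep everything below 0.1 of the sample rate.
taps = windowed_sinc_lowpass(cutoff=0.1, num_taps=101)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.02 * np.arange(1000)) + 0.5 * rng.standard_normal(1000)
y = np.convolve(x, taps, mode="same")
```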
Stability
The sinc filter is not bounded-input–bounded-output (BIBO) stable. That is, a bounded input can produce an unbounded output, because the integral of the absolute value of the sinc function is infinite. A bounded input that produces an unbounded output is sgn(sinc(t)). Another is sin(2Bt)u(t), a sine wave starting at time 0, at the cutoff frequency.
Frequency-domain sinc
The simplest implementation of a sinc-in-frequency filter uses a boxcar impulse response to produce a simple moving average (specifically, if the sum is divided by the number of samples), also known as an accumulate-and-dump filter (specifically, if the samples are simply summed without a division). It can be modeled as a FIR filter with all coefficients equal. It is sometimes cascaded to produce higher-order moving averages (see cascaded integrator–comb filter).
This filter can be used for crude but fast and easy downsampling (a.k.a. decimation) by a factor of $N$. The simplicity of the filter is foiled by its mediocre low-pass capabilities. The stop-band contains periodic lobes with gradually decreasing height in between the nulls at multiples of $f_s/N$. The first lobe is −11.3 dB for a 4-sample moving average, −12.8 dB for an 8-sample moving average, and −13.1 dB for a 16-sample moving average. An $N$-sample filter sampled at $f_s$ will alias all non-fully attenuated signal components lying above $f_s/(2N)$ to the baseband ranging from DC to $f_s/(2N)$.
A group averaging filter processing $N$ samples has transmission zeroes evenly spaced by $f_s/N$, with the lowest zero at $f_s/N$ and the highest zero at $f_s/2$ (the Nyquist frequency). Above the Nyquist frequency, the frequency response is mirrored, and the pattern is then repeated periodically above $f_s$ forever.
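A short sketch that reproduces the first-lobe levels quoted above by evaluating the moving-average magnitude response directly (frequencies here are in cycles per sample, so the nulls fall at multiples of 1/N):

```python
import numpy as np

# Minimal sketch: |H(f)| = |sin(pi*N*f)| / (N*|sin(pi*f)|) for an N-sample
# moving average, scanned for the peak of the first stop-band lobe.

def moving_average_response(n_samples: int, freqs: np.ndarray) -> np.ndarray:
    """Magnitude response for f in cycles per sample, 0 < f <= 0.5."""
    return np.abs(np.sin(np.pi * n_samples * freqs)) / (
        n_samples * np.abs(np.sin(np.pi * freqs)))

f = np.linspace(1e-4, 0.5, 20000)
for n in (4, 8, 16):
    h = moving_average_response(n, f)
    lobe = (f > 1.0 / n) & (f < 2.0 / n)      # between the first two nulls
    peak_db = 20 * np.log10(h[lobe].max())
    print(f"N={n:2d}: first lobe peak ~ {peak_db:.1f} dB")
```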
The magnitude of the frequency response (plotted in these graphs) is useful when one wants to know how much frequencies are attenuated. Though the sinc function really oscillates between negative and positive values, negative values of the frequency response simply correspond to a 180-degree phase shift.
An inverse sinc filter may be used for equalization in the digital domain (e.g. a FIR filter) or analog domain (e.g. opamp filter) to counteract undesired attenuation in the frequency band of interest to provide a flat frequency response.
See for application of the sinc kernel as the simplest windowing function.
See also
Lanczos resampling
Aliasing
Anti-aliasing filter
References
External links
Brick Wall Digital Filters and Phase Deviations
Brick-wall filters
Signal processing
Digital signal processing
Filter theory
Filter frequency response | Sinc filter | [
"Technology",
"Engineering"
] | 1,175 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Filter theory"
] |
267,787 | https://en.wikipedia.org/wiki/Rubbing%20alcohol | Rubbing alcohol, also known as surgical spirit in some regions, refers to a group of denatured alcohols commonly used as topical antiseptics. These solutions are primarily composed of either isopropyl alcohol (isopropanol) or ethanol, with isopropyl alcohol being the more widely available formulation. Rubbing alcohol is rendered undrinkable by the addition of bitterants or other denaturants.
In the British Pharmacopoeia, the equivalent product is called surgical spirit. Beyond antiseptic uses, rubbing alcohol has various industrial and household applications. In North American English, the term "rubbing alcohol" generally encompasses both isopropyl and ethanol-based products.
The United States Pharmacopeia (USP) defines "isopropyl rubbing alcohol USP" as containing approximately 70 percent alcohol by volume of pure isopropyl alcohol and defines "rubbing alcohol USP" as containing approximately 70 percent by volume of denatured alcohol. In Ireland and the UK, the comparable preparation is surgical spirit B.P., which the British Pharmacopoeia defines as 95% methylated spirit, 2.5% castor oil, 2% diethyl phthalate, and 0.5% methyl salicylate.
Under its alternative name of "wintergreen oil", methyl salicylate is a common additive to North American rubbing alcohol products. Individual manufacturers are permitted to use their own formulation standards in which the ethanol content for retail bottles of rubbing alcohol is labeled as and ranges from 70 to 99% v/v.
All rubbing alcohols are unsafe for human consumption: isopropyl rubbing alcohols do not contain the ethyl alcohol of alcoholic beverages; ethyl rubbing alcohols are based on denatured alcohol, which is a combination of ethyl alcohol and one or more bitter poisons that make the substance toxic.
History
The term "rubbing alcohol" came into prominence in North America during the Prohibition era of 1920 to 1933, when alcoholic beverages were prohibited throughout the United States. The term "rubbing" emphasized that this alcohol was not intended for consumption. Nevertheless it was well documented as a surrogate alcohol as early as 1925.
Alcohol was already widely used as a liniment for massage. There was no standard formula for rubbing alcohol, which was sometimes perfumed with additives such as wintergreen oil (methyl salicylate).
Properties
All rubbing alcohols are volatile and flammable. Ethyl rubbing alcohol has an extremely bitter taste from additives. The specific gravity of Formula 23-H is between 0.8691 and 0.8771 at .
Isopropyl rubbing alcohols contain from 50% to 99% by volume of isopropyl alcohol, the remainder consisting of water. Boiling points vary with the proportion of isopropyl alcohol from ; likewise, freezing points vary from . Surgical spirit BP boils at .
Naturally colorless, products may contain color additives. They may also contain medically-inactive additives for fragrance, such as wintergreen oil (methyl salicylate), or for other purposes.
US legislation
To protect alcohol tax revenue in the United States, all preparations classified as Rubbing Alcohols (defined as those containing ethanol) must have poisonous additives to limit human consumption in accordance with the requirements of the US Treasury Department, Bureau of Alcohol, Tobacco, and Firearms, using Formula 23-H (8 parts by volume of acetone, 1.5 parts by volume of methyl isobutyl ketone, and 100 parts by volume of ethyl alcohol). It contains 87.5–91% by volume of absolute ethyl alcohol. The rest consists of water and the denaturants, with or without color additives, and perfume oils. Rubbing alcohol contains in each 100 ml more than 355 mg of sucrose octaacetate or more than 1.40 mg of denatonium benzoate. The preparation may be colored with one or more color additives. A suitable stabilizer may also be added.
Warnings
Product labels for rubbing alcohol include a number of warnings about the chemical, including the flammability hazards and its intended use only as a topical antiseptic and not for internal wounds or consumption. It should be used in a well-ventilated area due to inhalation hazards. Poisoning can occur from ingestion, inhalation, absorption, or consumption of rubbing alcohol.
References
External links
Why Is Drinking Rubbing Alcohol Bad?
Antiseptics
Cleaning products
Household chemicals | Rubbing alcohol | [
"Chemistry"
] | 930 | [
"Cleaning products",
"Products of chemical industry"
] |
267,922 | https://en.wikipedia.org/wiki/Prostate-specific%20antigen | Prostate-specific antigen (PSA), also known as gamma-seminoprotein or kallikrein-3 (KLK3), P-30 antigen, is a glycoprotein enzyme encoded in humans by the KLK3 gene. PSA is a member of the kallikrein-related peptidase family and is secreted by the epithelial cells of the prostate gland in men and the paraurethral glands in women.
PSA is produced for the ejaculate, where it liquefies semen in the seminal coagulum and allows sperm to swim freely. It is also believed to be instrumental in dissolving cervical mucus, allowing the entry of sperm into the uterus.
PSA is present in small quantities in the serum of men with healthy prostates, but is often elevated in the presence of prostate cancer or other prostate disorders. PSA is not uniquely an indicator of prostate cancer, but may also detect prostatitis or benign prostatic hyperplasia.
Medical diagnostic uses
Prostate cancer
Screening
Clinical practice guidelines for prostate cancer screening vary and are controversial, in part due to uncertainty as to whether the benefits of screening ultimately outweigh the risks of overdiagnosis and overtreatment. In the United States, the Food and Drug Administration (FDA) has approved the PSA test for annual screening of prostate cancer in men of age 50 and older. The patient is required to be informed of the risks and benefits of PSA testing prior to performing the test.
In the United Kingdom, the National Health Service (NHS) does not mandate, nor advise for PSA test, but allows patients to decide based on their doctor's advice. The NHS does not offer general PSA screening, for similar reasons.
PSA levels between 4 and 10ng/mL (nanograms per milliliter) are considered to be suspicious, and consideration should be given to confirming the abnormal PSA with a repeat test. If indicated, prostate biopsy is performed to obtain a tissue sample for histopathological analysis.
While PSA testing may help 1 in 1,000 avoid death due to prostate cancer, 4 to 5 in 1,000 would die from prostate cancer after 10 years even with screening. This means that PSA screening may reduce mortality from prostate cancer by up to 25%. Expected harms include anxiety for 100–120 receiving false positives, biopsy pain, and other complications from biopsy for false positive tests.
Use of PSA screening tests is also controversial due to questionable test accuracy. The screening can present abnormal results even when a man does not have cancer (known as a false-positive result), or normal results even when a man does have cancer (known as a false-negative result). False-positive test results can cause confusion and anxiety in men, and can lead to unnecessary prostate biopsies, a procedure which causes risk of pain, infection, and hemorrhage. False-negative results can give men a false sense of security, though they may actually have cancer.
Of those found to have prostate cancer, overtreatment is common because most cases of prostate cancer are not expected to cause any symptoms due to low rate of growth of the prostate tumor. Therefore, many will experience the side effects of treatment, such as for every 1000 men screened, 29 will experience erectile dysfunction, 18 will develop urinary incontinence, two will have serious cardiovascular events, one will develop pulmonary embolus or deep venous thrombosis, and one perioperative death. Since the expected harms relative to risk of death are perceived by patients as minimal, men found to have prostate cancer usually (up to 90% of cases) elect to receive treatment.
Risk stratification and staging
Men with prostate cancer may be characterized as low, intermediate, or high risk for having/developing metastatic disease or dying of prostate cancer. PSA level is one of three variables on which the risk stratification is based; the others are the grade of prostate cancer (Gleason grading system) and the stage of cancer based on physical examination and imaging studies. D'Amico criteria for each risk category are:
Low risk: PSA < 10, Gleason score ≤ 6, AND clinical stage ≤ T2a
Intermediate risk: PSA 10-20, Gleason score 7, OR clinical stage T2b/c
High risk: PSA > 20, Gleason score ≥ 8, OR clinical stage ≥ T3
Given the relative simplicity of the 1998 D'Amico criteria (above), other predictive models of risk stratification based on mathematical probability constructs exist or have been proposed to allow for better matching of treatment decisions with disease features.
Studies are being conducted into the incorporation of multiparametric MRI imaging results into nomograms that rely on PSA, Gleason grade, and tumor stage.
Post-treatment monitoring
PSA levels are monitored periodically (e.g., every 6–36 months) after treatment for prostate cancer – more frequently in patients with high-risk disease, less frequently in patients with lower-risk disease. If surgical therapy (i.e., radical prostatectomy) is successful at removing all prostate tissue (and prostate cancer), PSA becomes undetectable within a few weeks. A subsequent rise in PSA level above 0.2 ng/mL is generally regarded as evidence of recurrent prostate cancer after a radical prostatectomy; less commonly, it may simply indicate residual benign prostate tissue.
Following radiation therapy of any type for prostate cancer, some PSA levels might be detected, even when the treatment ultimately proves to be successful. This makes interpreting the relationship between PSA levels and recurrence/persistence of prostate cancer after radiation therapy more difficult. PSA levels may continue to decrease for several years after radiation therapy. The lowest level is referred to as the PSA nadir. A subsequent increase in PSA levels by 2.0ng/mL above the nadir is the currently accepted definition of prostate cancer recurrence after radiation therapy.
Recurrent prostate cancer detected by a rise in PSA levels after curative treatment is referred to as a "biochemical recurrence". The likelihood of developing recurrent prostate cancer after curative treatment is related to the pre-operative variables described in the preceding section (PSA level and grade/stage of cancer). Low-risk cancers are the least likely to recur, but they are also the least likely to have required treatment in the first place.
Prostatitis
PSA levels increase in the setting of prostate infection/inflammation (prostatitis), often markedly (> 100).
Forensic identification of semen
PSA was first identified by researchers attempting to find a substance in seminal fluid that would aid in the investigation of rape cases. PSA is used to indicate the presence of semen in forensic serology. The semen of adult males has PSA levels far in excess of those found in other tissues; therefore, a high level of PSA found in a sample is an indicator that semen may be present. Because PSA is a biomarker that is expressed independently of spermatozoa, it remains useful in identifying semen from vasectomized and azoospermic males.
PSA can also be found at low levels in other body fluids, such as urine and breast milk, thus setting a high minimum threshold of interpretation to rule out false positive results and conclusively state that semen is present. While traditional tests such as crossover electrophoresis have a sufficiently low sensitivity to detect only seminal PSA, newer diagnostic tests developed from clinical prostate cancer screening methods have lowered the threshold of detection down to 4ng/mL. This level of antigen has been shown to be present in the peripheral blood of males with prostate cancer, and rarely in female urine samples and breast milk.
Sources
PSA is produced in the epithelial cells of the prostate, and can be demonstrated in biopsy samples or other histological specimens using immunohistochemistry. Disruption of this epithelium, for example in inflammation or benign prostatic hyperplasia, may lead to some diffusion of the antigen into the tissue around the epithelium, and is the cause of elevated blood levels of PSA in these conditions.
More significantly, PSA remains present in prostate cells after they become malignant. Prostate cancer cells generally have variable or weak staining for PSA, due to the disruption of their normal functioning. Thus, individual prostate cancer cells produce less PSA than healthy cells; the raised serum levels in prostate cancer patients is due to the greatly increased number of such cells, not their individual activity. In most cases of prostate cancer, though, the cells remain positive for the antigen, which can then be used to identify metastasis. Since some high-grade prostate cancers may be entirely negative for PSA, however, histological analysis to identify such cases usually uses PSA in combination with other antibodies, such as prostatic acid phosphatase and CD57.
Mechanism of action
The physiological function of KLK3 is the dissolution of the coagulum, the sperm-entrapping gel composed of semenogelins and fibronectin. Its proteolytic action is effective in liquefying the coagulum so that the sperm can be liberated. The activity of PSA is well regulated. In the prostate, it is present as an inactive pro-form, which is activated through the action of KLK2, another kallikrein-related peptidase. In the prostate, zinc ion concentrations are 10 times higher than in other bodily fluids. Zinc ions have a strong inhibitory effect on the activity of PSA and on that of KLK2, so that PSA is totally inactive.
Further regulation is achieved through pH variations. Although its activity is increased by higher pH, the inhibitory effect of zinc also increases. The pH of semen is slightly alkaline and the concentrations of zinc are high. On ejaculation, semen is exposed to the acidic pH of the vagina, due to the presence of lactic acid. In fertile couples, the final vaginal pH after coitus approaches the 6-7 levels, which coincides well with reduced zinc inhibition of PSA. At these pH levels, the reduced PSA activity is countered by a decrease in zinc inhibition. Thus, the coagulum is slowly liquefied, releasing the sperm in a well-regulated manner.
Biochemistry
Prostate-specific antigen (PSA, also known as kallikrein III, seminin, semenogelase, γ-seminoprotein and P-30 antigen) is a 34-kD glycoprotein produced almost exclusively by the prostate gland. It is a serine protease () enzyme, the gene of which is located on the 19th chromosome (19q13) in humans.
History
The discovery of prostate-specific antigen (PSA) is beset with controversy; as PSA is present in prostatic tissue and semen, it was independently discovered and given different names, thus adding to the controversy.
Flocks was the first to experiment with antigens in the prostate and 10 years later Ablin reported the presence of precipitation antigens in the prostate.
In 1971, Mitsuwo Hara characterized a unique protein in the semen fluid, gamma-seminoprotein. Li and Beling, in 1973, isolated a protein, E1, from human semen in an attempt to find a novel method to achieve fertility control.
In 1978, Sensabaugh identified semen-specific protein p30, but proved that it was similar to E1 protein, and that prostate was the source. In 1979, Wang purified a tissue-specific antigen from the prostate ('prostate antigen').
PSA was first measured quantitatively in the blood by Papsidero in 1980, and Stamey carried out the initial work on the clinical use of PSA as a marker of prostate cancer.
Serum levels
PSA is normally present in the blood at very low levels. The reference range of less than 4ng/mL for the first commercial PSA test, the Hybritech Tandem-R PSA test released in February 1986, was based on a study that found 99% of 472 apparently healthy men had a total PSA level below 4ng/mL.
Increased levels of PSA may suggest the presence of prostate cancer. However, prostate cancer can also be present in the complete absence of an elevated PSA level, in which case the test result would be a false negative.
Obesity has been reported to reduce serum PSA levels. Delayed early detection may partially explain worse outcomes in obese men with early prostate cancer. After treatment, higher BMI also correlates to higher risk of recurrence.
PSA levels can be also increased by prostatitis, irritation, benign prostatic hyperplasia (BPH), and recent ejaculation, producing a false positive result. Digital rectal examination (DRE) has been shown in several studies to produce an increase in PSA. However, the effect is clinically insignificant, since DRE causes the most substantial increases in patients with PSA levels already elevated over 4.0ng/mL. PSA levels are higher during the summer than during the rest of the year.
The "normal" reference ranges for prostate-specific antigen increase with age, as do the usual ranges in cancer (per associated table).
PSA velocity
Despite earlier findings, recent research suggests that the rate of increase of PSA (e.g. >0.35ng/mL/yr, the 'PSA velocity') is not a more specific marker for prostate cancer than the serum level of PSA.
However, the PSA rate of rise may have value in prostate cancer prognosis. Men with prostate cancer whose PSA level increased by more than 2.0ng per milliliter during the year before the diagnosis of prostate cancer have a higher risk of death from prostate cancer despite undergoing radical prostatectomy. PSA velocity (PSAV) was found in a 2008 study to be more useful than the PSA doubling time (PSA DT) to help identify those men with life-threatening disease before start of treatment.
Men who are known to be at risk for prostate cancer, and who decide to plot their PSA values as a function of time (i.e., years), may choose to use a semi-log plot. An exponential growth in PSA values appears as a straight line on a semi-log plot, so that a new PSA value significantly above the straight line signals a switch to a new and significantly higher growth rate, i.e., a higher PSA velocity.
Free PSA
Most PSA in the blood is bound to serum proteins. A small amount is not protein-bound and is called 'free PSA'. In men with prostate cancer, the ratio of free (unbound) PSA to total PSA is decreased. The risk of cancer increases if the free to total ratio is less than 25%. (See graph) The lower the ratio is, the greater the probability of prostate cancer. Measuring the ratio of free to total PSA appears to be particularly promising for eliminating unnecessary biopsies in men with PSA levels between 4 and 10ng/mL. However, both total and free PSA increase immediately after ejaculation, returning slowly to baseline levels within 24 hours.
Inactive PSA
The PSA test in 1994 failed to differentiate between prostate cancer and benign prostate hyperplasia (BPH) and the commercial assay kits for PSA did not provide correct PSA values. Thus with the introduction of the ratio of free-to-total PSA, the reliability of the test has improved. Measuring the activity of the enzyme could add to the ratio of free-to-total PSA and further improve the diagnostic value of test. Proteolytically active PSA has been shown to have an anti-angiogenic effect and certain inactive subforms may be associated with prostate cancer, as shown by MAb 5D3D11, an antibody able to detect forms abundantly represented in sera from cancer patients.
The presence of inactive proenzyme forms of PSA is another potential indicator of disease.
Complexed PSA
PSA exists in serum in the free (unbound) form and in a complex with alpha 1-antichymotrypsin; research has been conducted to see if measurements of complexed PSA are more specific and sensitive biomarkers for prostate cancer than other approaches.
PSA in other biologic fluids and tissues
The term prostate-specific antigen is a misnomer: it is an antigen but is not specific to the prostate. Although present in large amounts in prostatic tissue and semen, it has been detected in other body fluids and tissues.
In women, PSA is found in female ejaculate at concentrations roughly equal to that found in male semen. Other than semen and female ejaculate, the greatest concentrations of PSA in biological fluids are detected in breast milk and amniotic fluid. Low concentrations of PSA have been identified in the urethral glands, endometrium, normal breast tissue and salivary gland tissue. PSA also is found in the serum of women with breast, lung, or uterine cancer and in some patients with renal cancer.
Tissue samples can be stained for the presence of PSA in order to determine the origin of malignant cells that have metastasized.
Interactions
Prostate-specific antigen has been shown to interact with protein C inhibitor.
Prostate-specific antigen interacts with and activates the vascular endothelial growth factors VEGF-C and VEGF-D, which are involved in tumor angiogenesis and in the lymphatic metastasis of tumors.
See also
Tumor markers
References
Further reading
External links
Andrology
Biomarkers
Blood tests
EC 3.4.21
Prostate cancer
Tumor markers
Urology | Prostate-specific antigen | [
"Chemistry",
"Biology"
] | 3,717 | [
"Blood tests",
"Chemical pathology",
"Tumor markers",
"Biomarkers"
] |
268,145 | https://en.wikipedia.org/wiki/Y-%CE%94%20transform | In circuit design, the Y-Δ transform, also written wye-delta and also known by many other names, is a mathematical technique to simplify the analysis of an electrical network. The name derives from the shapes of the circuit diagrams, which look respectively like the letter Y and the Greek capital letter Δ. This circuit transformation theory was published by Arthur Edwin Kennelly in 1899. It is widely used in analysis of three-phase electric power circuits.
The Y-Δ transform can be considered a special case of the star-mesh transform for three resistors. In mathematics, the Y-Δ transform plays an important role in theory of circular planar graphs.
Names
The Y-Δ transform is known by a variety of other names, mostly based upon the two shapes involved, listed in either order. The Y, spelled out as wye, can also be called T or star; the Δ, spelled out as delta, can also be called triangle, Π (spelled out as pi), or mesh. Thus, common names for the transformation include wye-delta or delta-wye, star-delta, star-mesh, or T-Π.
Basic Y-Δ transformation
The transformation is used to establish equivalence for networks with three terminals. Where three elements terminate at a common node and none are sources, the node is eliminated by transforming the impedances. For equivalence, the impedance between any pair of terminals must be the same for both networks. The equations given here are valid for complex as well as real impedances. Complex impedance is a quantity measured in ohms which represents resistance as positive real numbers in the usual manner, and also represents reactance as positive and negative imaginary values.
Equations for the transformation from Δ to Y
The general idea is to compute the impedance $R_Y$ at a terminal node of the Y circuit with impedances $R'$, $R''$ to adjacent nodes in the Δ circuit by
$$R_Y = \frac{R' R''}{\sum R_\Delta}$$
where $R_\Delta$ are all impedances in the Δ circuit. This yields the specific formulae
$$R_1 = \frac{R_b R_c}{R_a + R_b + R_c}, \qquad R_2 = \frac{R_a R_c}{R_a + R_b + R_c}, \qquad R_3 = \frac{R_a R_b}{R_a + R_b + R_c},$$
where $R_1$, $R_2$, $R_3$ are the Y impedances at nodes $N_1$, $N_2$, $N_3$ and $R_a$, $R_b$, $R_c$ are the Δ impedances on the edges opposite those nodes.
Equations for the transformation from Y to Δ
The general idea is to compute an impedance $R_\Delta$ in the Δ circuit by
$$R_\Delta = \frac{R_P}{R_{\mathrm{opposite}}}$$
where $R_P = R_1 R_2 + R_2 R_3 + R_3 R_1$ is the sum of the products of all pairs of impedances in the Y circuit and $R_{\mathrm{opposite}}$ is the impedance of the node in the Y circuit which is opposite the edge with $R_\Delta$. The formulae for the individual edges are thus
$$R_a = \frac{R_1 R_2 + R_2 R_3 + R_3 R_1}{R_1}, \qquad R_b = \frac{R_1 R_2 + R_2 R_3 + R_3 R_1}{R_2}, \qquad R_c = \frac{R_1 R_2 + R_2 R_3 + R_3 R_1}{R_3}.$$
Or, if using admittance instead of resistance:
$$Y_a = \frac{Y_2 Y_3}{Y_1 + Y_2 + Y_3}, \qquad Y_b = \frac{Y_1 Y_3}{Y_1 + Y_2 + Y_3}, \qquad Y_c = \frac{Y_1 Y_2}{Y_1 + Y_2 + Y_3}.$$
Note that the general formula in Y to Δ using admittance is similar to Δ to Y using resistance.
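A minimal sketch of both transforms as code (the resistor naming convention is the one described above, with R1, R2, R3 attached to nodes N1, N2, N3 and Ra, Rb, Rc on the opposite Δ edges; the values are illustrative):

```python
# Minimal sketch of the two transforms; works for complex impedances as well.

def delta_to_wye(ra: complex, rb: complex, rc: complex):
    """Return (R1, R2, R3) of the equivalent Y for a Δ with edges Ra, Rb, Rc."""
    s = ra + rb + rc
    return rb * rc / s, ra * rc / s, ra * rb / s

def wye_to_delta(r1: complex, r2: complex, r3: complex):
    """Return (Ra, Rb, Rc) of the equivalent Δ for a Y with legs R1, R2, R3."""
    p = r1 * r2 + r2 * r3 + r3 * r1   # sum of products of all pairs
    return p / r1, p / r2, p / r3

# Round-tripping recovers the original network (up to rounding error).
print(wye_to_delta(*delta_to_wye(10, 20, 30)))   # ~(10, 20, 30)
```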
A proof of the existence and uniqueness of the transformation
The feasibility of the transformation can be shown as a consequence of the superposition theorem for electric circuits. A short proof, rather than one derived as a corollary of the more general star-mesh transform, can be given as follows. The equivalence lies in the statement that for any external voltages ( and ) applying at the three nodes ( and ), the corresponding currents ( and ) are exactly the same for both the Y and Δ circuit, and vice versa. In this proof, we start with given external currents at the nodes. According to the superposition theorem, the voltages can be obtained by studying the superposition of the resulting voltages at the nodes of the following three problems applied at the three nodes with current:
and
The equivalence can be readily shown by using Kirchhoff's circuit laws, which give $I_1 + I_2 + I_3 = 0$. Now each problem is relatively simple, since it involves only one single ideal current source. To obtain exactly the same outcome voltages at the nodes for each problem, the equivalent resistances in the two circuits must be the same; this can be easily found by using the basic rules of series and parallel circuits:
Though usually six equations are more than enough to express three variables in terms of the other three variables, here it is straightforward to show that these equations indeed lead to the expressions given above.
In fact, the superposition theorem establishes the relation between the values of the resistances, the uniqueness theorem guarantees the uniqueness of such solution.
Simplification of networks
Resistive networks between two terminals can theoretically be simplified to a single equivalent resistor (more generally, the same is true of impedance). Series and parallel transforms are basic tools for doing so, but for complex networks such as the bridge illustrated here, they do not suffice.
The Y-Δ transform can be used to eliminate one node at a time and produce a network that can be further simplified, as shown.
The reverse transformation, Δ-Y, which adds a node, is often handy to pave the way for further simplification as well.
Every two-terminal network represented by a planar graph can be reduced to a single equivalent resistor by a sequence of series, parallel, Y-Δ, and Δ-Y transformations. However, there are non-planar networks that cannot be simplified using these transformations, such as a regular square grid wrapped around a torus, or any member of the Petersen family.
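As an illustration of the bridge simplification described above, the following sketch (with assumed node labels and resistor values) reduces a five-resistor bridge to a single equivalent resistance using one Δ-Y transform followed by series and parallel combinations:

```python
# Minimal sketch: bridge between terminals A and B with R1 = A-C, R2 = A-D,
# R5 = C-D (the bridge arm), R3 = C-B, R4 = D-B.

def delta_to_wye(ra, rb, rc):
    s = ra + rb + rc
    return rb * rc / s, ra * rc / s, ra * rb / s

def parallel(x, y):
    return x * y / (x + y)

def bridge_resistance(r1, r2, r3, r4, r5):
    # Replace the Δ formed by nodes A, C, D (edges r1, r2, r5) with a Y.
    # The edge opposite A is r5, opposite C is r2, opposite D is r1.
    r_a, r_c, r_d = delta_to_wye(r5, r2, r1)
    # The leg at A is in series with the rest; (r_c + r3) parallels (r_d + r4).
    return r_a + parallel(r_c + r3, r_d + r4)

print(bridge_resistance(1, 1, 1, 1, 5))   # balanced bridge -> exactly 1.0
print(bridge_resistance(1, 2, 3, 4, 5))   # unbalanced     -> about 2.394
```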
Graph theory
In graph theory, the Y-Δ transform means replacing a Y subgraph of a graph with the equivalent Δ subgraph. The transform preserves the number of edges in a graph, but not the number of vertices or the number of cycles. Two graphs are said to be Y-Δ equivalent if one can be obtained from the other by a series of Y-Δ transforms in either direction. For example, the Petersen family is a Y-Δ equivalence class.
Demonstration
Δ-load to Y-load transformation equations
To relate from Δ to from Y, the impedance between two corresponding nodes is compared. The impedance in either configuration is determined as if one of the nodes is disconnected from the circuit.
The impedance between N1 and N2 with N3 disconnected in Δ:
To simplify, let be the sum of .
Thus,
The corresponding impedance between N1 and N2 in Y is simple:
hence:
(1)
Repeating for :
(2)
and for :
(3)
From here, the values of can be determined by linear combination (addition and/or subtraction).
For example, adding (1) and (3), then subtracting (2) yields
For completeness:
(4)
(5)
(6)
Y-load to Δ-load transformation equations
Let
.
We can write the Δ to Y equations as
(1)
(2)
(3)
Multiplying the pairs of equations yields
(4)
(5)
(6)
and the sum of these equations is
(7)
Factor from the right side, leaving in the numerator, canceling with an in the denominator.
(8)
Note the similarity between (8) and {(1), (2), (3)}
Divide (8) by (1)
which is the equation for . Dividing (8) by (2) or (3) (expressions for or ) gives the remaining equations.
Δ to Y transformation of a practical generator
During the analysis of balanced three-phase power systems, usually an equivalent per-phase (or single-phase) circuit is analyzed instead due to its simplicity. For that, equivalent wye connections are used for generators, transformers, loads and motors. The stator windings of a practical delta-connected three-phase generator, shown in the following figure, can be converted to an equivalent wye-connected generator, using the six following formulas:
The resulting network is the following. The neutral node of the equivalent network is fictitious, and so are the line-to-neutral phasor voltages. During the transformation, the line phasor currents and the line (or line-to-line or phase-to-phase) phasor voltages are not altered.
If the actual delta generator is balanced, meaning that the internal phasor voltages have the same magnitude and are phase-shifted by 120° between each other and the three complex impedances are the same, then the previous formulas reduce to the four following:
where for the last three equations, the first sign (+) is used if the phase sequence is positive/abc or the second sign (−) is used if the phase sequence is negative/acb.
See also
Star-mesh transform
Network analysis (electrical circuits)
Electrical network, three-phase power, polyphase systems for examples of Y and Δ connections
AC motor for a discussion of the Y-Δ starting technique
References
Notes
Bibliography
William Stevenson, Elements of Power System Analysis 3rd ed., McGraw Hill, New York, 1975,
External links
Star-Triangle Conversion: Knowledge on resistive networks and resistors
Calculator of Star-Triangle transform
Electrical circuits
Electric power
Graph operations
Circuit theorems
Three-phase AC power | Y-Δ transform | [
"Physics",
"Mathematics",
"Engineering"
] | 1,764 | [
"Equations of physics",
"Physical quantities",
"Graph theory",
"Graph operations",
"Power (physics)",
"Electronic engineering",
"Electric power",
"Circuit theorems",
"Mathematical relations",
"Electrical circuits",
"Electrical engineering",
"Physics theorems"
] |
268,344 | https://en.wikipedia.org/wiki/Efficiency | Efficiency is the often measurable ability to avoid making mistakes or wasting materials, energy, efforts, money, and time while performing a task. In a more general sense, it is the ability to do things well, successfully, and without waste.
In more mathematical or scientific terms, it signifies the level of performance that uses the least amount of inputs to achieve the highest amount of output. It often specifically comprises the capability of a specific application of effort to produce a specific outcome with a minimum amount or quantity of waste, expense, or unnecessary effort. Efficiency refers to very different inputs and outputs in different fields and industries. In 2019, the European Commission said: "Resource efficiency means using the Earth's limited resources in a sustainable manner while minimising impacts on the environment. It allows us to create more with less and to deliver greater value with less input."
Writer Deborah Stone notes that efficiency is "not a goal in itself. It is not something we want for its own sake, but rather because it helps us attain more of the things we value."
Efficiency and effectiveness
Efficiency is very often confused with effectiveness. In general, efficiency is a measurable concept, quantitatively determined by the ratio of useful output to total useful input. Effectiveness is the simpler concept of being able to achieve a desired result, which can be expressed quantitatively but does not usually require more complicated mathematics than addition. Efficiency can often be expressed as a percentage of the result that could ideally be expected, for example if no energy were lost due to friction or other causes, in which case 100% of fuel or other input would be used to produce the desired result. In some cases efficiency can be indirectly quantified with a non-percentage value, e.g. specific impulse.
A common but confusing way of distinguishing between efficiency and effectiveness is the saying "Efficiency is doing things right, while effectiveness is doing the right things". This saying indirectly emphasizes that the selection of objectives of a production process is just as important as the quality of that process. This saying, popular in business, however, obscures the more common sense of "effectiveness", which would/should produce the following mnemonic: "Efficiency is doing things right; effectiveness is getting things done". This makes it clear that effectiveness, for example large production numbers, can also be achieved through inefficient processes if, for example, workers are willing or used to working longer hours or with greater physical effort than in other companies or countries or if they can be forced to do so. Similarly, a company can achieve effectiveness, for example large production numbers, through inefficient processes if it can afford to use more energy per product, for example if energy prices or labor costs or both are lower than for its competitors.
Inefficiency
Inefficiency is the absence of efficiency. Kinds of inefficiency include:
Allocative inefficiency refers to a situation in which the distribution of resources between alternatives does not fit with consumer taste (perceptions of costs and benefits). For example, a company may have the lowest costs in "productive" terms, but the result may be inefficient in allocative terms because the "true" or social cost exceeds the price that consumers are willing to pay for an extra unit of the product. This is true, for example, if the firm produces pollution (see also external cost). Consumers would prefer that the firm and its competitors produce less of the product and charge a higher price, to internalize the external cost.
Distributive inefficiency refers to the inefficient distribution of income and wealth within a society. Decreasing marginal utilities of wealth, in theory, suggests that more egalitarian distributions of wealth are more efficient than inegalitarian distributions. Distributive inefficiency is often associated with economic inequality.
Economic inefficiency refers to a situation where "we could be doing a better job," i.e., attaining our goals at lower cost. It is the opposite of economic efficiency. In the latter case, there is no way to do a better job, given the available resources and technology. Sometimes, this type of economic efficiency is referred to as the Koopmans efficiency.
Keynesian inefficiency might be defined as incomplete use of resources (labor, capital goods, natural resources, etc.) because of inadequate aggregate demand. We are not attaining potential output, while suffering from cyclical unemployment. We could do a better job if we applied deficit spending or expansionary monetary policy.
Pareto inefficiency is a situation in which someone could be made better off without making anyone else worse off. In practice, this criterion is difficult to apply in a constantly changing world, so many emphasize Kaldor-Hicks efficiency and inefficiency: a situation is inefficient if someone can be made better off even after compensating those made worse off, regardless of whether the compensation actually occurs.
Productive inefficiency says that we could produce the given output at a lower cost—or could produce more output for a given cost. For example, a company that is inefficient will have higher operating costs and will be at a competitive disadvantage (or have lower profits than other firms in the market). See Sickles and Zelenyuk (2019, Chapter 3) for more extensive discussions.
Resource-market inefficiency refers to barriers that prevent full adjustment of resource markets, so that resources are either unused or misused. For example, structural unemployment results from barriers of mobility in labor markets which prevent workers from moving to places and occupations where there are job vacancies. Thus, unemployed workers can co-exist with unfilled job vacancies.
X-inefficiency refers to inefficiency in the "black box" of production, connecting inputs to outputs. This type of inefficiency says that we could be organizing people or production processes more effectively. Often problems of "morale" or "bureaucratic inertia" cause X-inefficiency.
Productive inefficiency, resource-market inefficiency, and X-inefficiency might be analyzed using data envelopment analysis and similar methods.
Mathematical expression
Efficiency is often measured as the ratio of useful output to total input, which can be expressed with the mathematical formula r=P/C, where P is the amount of useful output ("product") produced per the amount C ("cost") of resources consumed. This may correspond to a percentage if products and consumables are quantified in compatible units, and if consumables are transformed into products via a conservative process. For example, in the analysis of the energy conversion efficiency of heat engines in thermodynamics, the product P may be the amount of useful work output, while the consumable C is the amount of high-temperature heat input. Due to the conservation of energy, P can never be greater than C, and so the efficiency r is never greater than 100% (and in fact must be even less at finite temperatures).
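A small numeric illustration of r = P/C for the heat-engine case described above; the function name and the energy values are illustrative, not taken from the article.

```python
# Sketch: efficiency as the ratio of useful output to input, r = P / C.
# The heat-engine numbers below are illustrative.

def efficiency(useful_output, total_input):
    """Return the efficiency ratio r = P / C (dimensionless if units match)."""
    return useful_output / total_input

work_out_joules = 300.0      # useful work P
heat_in_joules = 1000.0      # high-temperature heat input C
r = efficiency(work_out_joules, heat_in_joules)
print(f"{r:.0%}")            # 30%; necessarily below 100% for a heat engine
```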
In science and technology
In physics
Useful work per quantity of energy, mechanical advantage over ideal mechanical advantage, often denoted by the Greek lowercase letter η (Eta):
Electrical efficiency
Energy conversion efficiency
Mechanical efficiency
Thermal efficiency, ratio of work done to thermal energy consumed
Efficient energy use, the objective of maximising efficiency
In thermodynamics:
Energy conversion efficiency, measure of second law thermodynamic loss
Radiation efficiency, ratio of radiated power to power absorbed at the terminals of an antenna
Volumetric efficiency, in internal combustion engine design
Lift-to-drag ratio
Faraday efficiency, electrolysis
Quantum efficiency, a measure of sensitivity of a photosensitive device
Grating efficiency, a generalization of the reflectance of a mirror, extended to a diffraction grating
In economics
Productivity improving technologies
Economic efficiency, the extent to which waste or other undesirable features are avoided
Market efficiency, the extent to which a given market resembles the ideal of an efficient market
Pareto efficiency, a state of its being impossible to make one individual better off, without making any other individual worse off
Kaldor-Hicks efficiency, a less stringent version of Pareto efficiency
Allocative efficiency, the optimal distribution of goods
Efficiency wages, paying workers more than the market rate for increased productivity
Business efficiency, revenues relative to expenses, etc.
Efficiency Movement, of the Progressive Era (1890–1932), advocated efficiency in the economy, society and government
In other sciences
In computing:
Algorithmic efficiency, optimizing the speed and memory requirements of a computer program.
A non-functional requirement (criterion for quality) in systems design and systems architecture which says something about the resource consumption for given load
Efficiency factor, in data communications
Storage efficiency, effectiveness of computer data storage
Efficiency (statistics), a measure of desirability of an estimator
Material efficiency, compares material requirements between construction projects or physical processes
Administrative efficiency, measuring transparency within public authorities and simplicity of rules and procedures for citizens and businesses
In biology:
Photosynthetic efficiency
Ecological efficiency
See also
Jevons paradox
References
Economic efficiency
Heat transfer
Engineering concepts
Waste management
Waste of resources | Efficiency | [
"Physics",
"Chemistry",
"Engineering"
] | 1,915 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics",
"nan"
] |
268,420 | https://en.wikipedia.org/wiki/Foam | Foams are two-phase material systems where a gas is dispersed in a second, non-gaseous material, specifically, in which gas cells are enclosed by a distinct liquid or solid material. The foam "may contain more or less liquid [or solid] according to circumstances", although in the case of gas-liquid foams, the gas occupies most of the volume. The word derives from the medieval German and otherwise obsolete veim, in reference to the "frothy head forming in the glass once the beer has been freshly poured" (cf. ausgefeimt).
Theories regarding foam formation, structure, and properties—in physics and physical chemistry—differ somewhat between liquid and solid foams in that the former are dynamic (e.g., in their being "continuously deformed"), as a result of gas diffusing between cells, liquid draining from the foam into a bulk liquid, etc. Theories regarding liquid foams have as direct analogs theories regarding emulsions, two-phase material systems in which one liquid is enclosed by another.
In most foams, the volume of gas is large, with thin films of liquid or solid separating the regions of gas. A bath sponge and the head on a glass of beer are examples of foams; soap foams are also known as suds.
Solid foams can be closed-cell or open-cell. In closed-cell foam, the gas forms discrete pockets, each completely surrounded by the solid material. In open-cell foam, gas pockets connect to each other. A bath sponge is an example of an open-cell foam: water easily flows through the entire structure, displacing the air. A sleeping mat is an example of a product composed of closed-cell foam.
Foams are examples of dispersed media. In general, gas is present, so it divides into gas bubbles of different sizes (i.e., the material is polydisperse)—separated by liquid regions that may form films, which become thinner and thinner as the liquid phase drains out of the system. When the principal scale is small, i.e., for a very fine foam, this dispersed medium can be considered a type of colloid.
Foam can also refer to something that is analogous to foam, such as quantum foam.
Structure
A foam is, in many cases, a multi-scale system.
One scale is the bubble: material foams are typically disordered and have a variety of bubble sizes. At larger sizes, the study of idealized foams is closely linked to the mathematical problems of minimal surfaces and three-dimensional tessellations, also called honeycombs. The Weaire–Phelan structure is reported to be the best possible (optimal) unit cell of a perfectly ordered foam, while Plateau's laws describe how soap films form structures in foams.
At lower scale than the bubble is the thickness of the film for metastable foams, which can be considered a network of interconnected films called lamellae. Ideally, the lamellae connect in triads and radiate 120° outward from the connection points, known as Plateau borders.
An even lower scale is the liquid–air interface at the surface of the film. Most of the time this interface is stabilized by a layer of amphiphilic structure, often made of surfactants, particles (Pickering emulsion), or more complex associations.
Formation
Several conditions are needed to produce foam: there must be mechanical work, surface active components (surfactants) that reduce the surface tension, and the formation of foam faster than its breakdown.
To create foam, work (W) is needed to increase the surface area (ΔA):
W = γΔA,
where γ is the surface tension.
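A quick numeric check of W = γΔA; the surface-tension value and the area are assumed, illustrative figures.

```python
# Sketch: work needed to create new surface area, W = γ · ΔA.
gamma = 0.025          # N/m, assumed surfactant-solution surface tension
delta_area = 2.0       # m^2 of new gas-liquid interface
work = gamma * delta_area
print(work, "J")       # 0.05 J
```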
One of the ways foam is created is through dispersion, where a large amount of gas is mixed with a liquid. A more specific method of dispersion involves injecting a gas through a hole in a solid into a liquid. If this process is completed very slowly, then one bubble can be emitted from the orifice at a time as shown in the picture below.
One of the theories for determining the separation time is shown below; however, while this theory produces theoretical data that matches the experimental data, detachment due to capillarity is accepted as a better explanation.
The buoyancy force acts to raise the bubble, which is
FB = Vg(ρ2 − ρ1),
where V is the volume of the bubble, g is the acceleration due to gravity, ρ1 is the density of the gas and ρ2 is the density of the liquid. The force working against the buoyancy force is the surface tension force, which is
FS = 2πrγ,
where γ is the surface tension, and r is the radius of the orifice.
As more air is pushed into the bubble, the buoyancy force grows quicker than the surface tension force. Thus, detachment occurs when the buoyancy force is large enough to overcome the surface tension force.
In addition, if the bubble is treated as a sphere with a radius of R and the volume V = (4/3)πR³ is substituted into the equation above, separation occurs at the moment when
(4/3)πR³g(ρ2 − ρ1) = 2πrγ
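Solving this force balance for the bubble radius R gives the size at detachment. The sketch below does so for an assumed air bubble released from a small orifice in water; all parameter values and the function name are illustrative.

```python
# Sketch: bubble radius at detachment from an orifice, from the force balance
# (4/3)·π·R³·g·(ρ2 − ρ1) = 2·π·r·γ.  Values are illustrative.
import math

def detachment_radius(r_orifice, gamma, rho_liquid, rho_gas, g=9.81):
    return (3.0 * r_orifice * gamma / (2.0 * g * (rho_liquid - rho_gas))) ** (1.0 / 3.0)

R = detachment_radius(r_orifice=0.5e-3, gamma=0.072, rho_liquid=1000.0, rho_gas=1.2)
print(f"{R*1e3:.2f} mm")   # roughly 1.77 mm
```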
Examining this phenomenon from a capillarity viewpoint for a bubble that is being formed very slowly, it can be assumed that the pressure inside the bubble, p, is constant everywhere. The hydrostatic pressure in the liquid is designated by p0. The change in pressure across the interface from gas to liquid is equal to the capillary pressure; hence,
p − p0 = γ(1/R1 + 1/R2),
where R1 and R2 are the radii of curvature and are set as positive. At the stem of the bubble, R3 and R4 are the radii of curvature, also treated as positive. Here the hydrostatic pressure in the liquid has to take into account z, the distance from the top to the stem of the bubble. The new hydrostatic pressure at the stem of the bubble is p0 + (ρ1 − ρ2)gz. The hydrostatic pressure balances the capillary pressure, which is shown below:
Finally, the difference in the top and bottom pressure equals the change in hydrostatic pressure:
At the stem of the bubble, the shape of the bubble is nearly cylindrical; consequently, either R3 or R4 is large while the other radius of curvature is small. As the stem of the bubble grows in length, it becomes more unstable as one of the radii grows and the other shrinks. At a certain point, the vertical length of the stem exceeds the circumference of the stem and, due to the buoyancy forces, the bubble separates and the process repeats.
Stability
Stabilization
The stabilization of foam is caused by van der Waals forces between the molecules in the foam, electrical double layers created by dipolar surfactants, and the Marangoni effect, which acts as a restoring force to the lamellae.
The Marangoni effect depends on the liquid that is foaming being impure. Generally, surfactants in the solution decrease the surface tension. The surfactants also clump together on the surface and form a layer as shown below.
For the Marangoni effect to occur, the foam must be indented as shown in the first picture. This indentation increases the local surface area. Surfactants have a larger diffusion time than the bulk of the solution—so the surfactants are less concentrated in the indentation.
Also, surface stretching makes the surface tension of the indented spot greater than that of the surrounding area. Consequently, since the diffusion time for the surfactants is large, the Marangoni effect has time to take place. The difference in surface tension creates a gradient, which instigates fluid flow from areas of lower surface tension to areas of higher surface tension. The second picture shows the film at equilibrium after the Marangoni effect has taken place.
Curing a foam solidifies it, making it indefinitely stable at STP.
Destabilization
Witold Rybczynski and Jacques Hadamard developed an equation to calculate the velocity of bubbles that rise in foam, with the assumption that the bubbles are spherical with a radius r:
u = 2gr²(ρ2 − ρ1)(η1 + η2) / (3η2(3η1 + 2η2)),
with velocity u in units of centimeters per second. ρ1 and ρ2 are the densities of the gas and liquid respectively in units of g/cm3, η1 and η2 are the dynamic viscosities of the gas and liquid respectively in units of g/cm·s, and g is the acceleration of gravity in units of cm/s2.
However, since the density and viscosity of a liquid are much greater than those of the gas, the density and viscosity of the gas can be neglected, which yields the new equation for the velocity of rising bubbles:
u = gr²ρ2 / (3η2)
However, through experiments it has been shown that a more accurate model for rising bubbles is:
u = 2gr²ρ2 / (9η2)
Deviations are due to the Marangoni effect and capillary pressure, which affect the assumption that the bubbles are spherical.
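The sketch below compares the two predictions for a small air bubble rising in water, using CGS units as in the text; the specific property values and the bubble radius are assumed for illustration.

```python
# Sketch: rise velocity of a small air bubble in water (CGS units, as in the text).
# Standard Hadamard-Rybczynski and Stokes forms are assumed; values are illustrative.
g = 981.0                           # cm/s^2
rho_gas, rho_liq = 0.0012, 1.0      # g/cm^3
eta_gas, eta_liq = 1.8e-4, 1.0e-2   # g/(cm·s)
r = 0.01                            # bubble radius, cm (100 µm)

base = 2.0 * g * r**2 * (rho_liq - rho_gas) / (3.0 * eta_liq)
u_hadamard = base * (eta_gas + eta_liq) / (3.0 * eta_gas + 2.0 * eta_liq)
u_stokes = 2.0 * g * r**2 * (rho_liq - rho_gas) / (9.0 * eta_liq)

print(f"{u_hadamard:.2f} cm/s vs {u_stokes:.2f} cm/s")   # about 3.24 vs 2.18
```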
For the Laplace pressure of a curved gas-liquid interface, the two principal radii of curvature at a point are R1 and R2. With a curved interface, the pressure in one phase is greater than the pressure in the other phase. The capillary pressure Pc is given by the equation
Pc = γ(1/R1 + 1/R2),
where γ is the surface tension. The bubble shown below is a gas (phase 1) in a liquid (phase 2); point A designates the top of the bubble while point B designates the bottom of the bubble.
At the top of the bubble at point A, the pressure in the liquid is assumed to be p0, as well as in the gas. At the bottom of the bubble at point B, the hydrostatic pressure is p0 + ρ1gz in the gas and p0 + ρ2gz in the liquid, where ρ1 and ρ2 are the densities of the gas and liquid respectively. The difference in hydrostatic pressure at the top of the bubble is 0, while the difference in hydrostatic pressure at the bottom of the bubble across the interface is gz(ρ2 − ρ1). Assuming that the radii of curvature at point A are equal and denoted by RA and that the radii of curvature at point B are equal and denoted by RB, then the difference in capillary pressure between point A and point B is
2γ/RA − 2γ/RB.
At equilibrium, the difference in capillary pressure must be balanced by the difference in hydrostatic pressure. Hence,
2γ(1/RA − 1/RB) = gz(ρ2 − ρ1)
Since the density of the gas is less than the density of the liquid, the left-hand side of the equation is always positive. Therefore, the inverse of RA must be larger than the inverse of RB, meaning that the radius of curvature increases from the top of the bubble to the bottom. Therefore, without neglecting gravity, the bubbles cannot be spherical. In addition, as z increases, the difference between RA and RB also increases, which means the bubble deviates more from a spherical shape the larger it grows.
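As a numeric illustration of the balance 2γ(1/RA − 1/RB) = gz(ρ2 − ρ1), the sketch below solves for the radius of curvature at the bottom of a bubble given the radius at the top; the fluid properties and bubble size are assumed, illustrative values.

```python
# Sketch: radius of curvature at the bottom of a bubble from the balance
# 2·γ·(1/R_A − 1/R_B) = g·z·(ρ2 − ρ1).  Values are illustrative (air in water).
g, gamma = 9.81, 0.072          # SI units
rho_gas, rho_liq = 1.2, 1000.0  # kg/m^3
z = 2.0e-3                      # vertical extent of the bubble, m
r_a = 1.0e-3                    # radius of curvature at the top, m

r_b = 1.0 / (1.0 / r_a - g * z * (rho_liq - rho_gas) / (2.0 * gamma))
print(f"R_B is about {r_b*1e3:.2f} mm")   # about 1.16 mm > R_A, so the radius grows downward
```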
Foam destabilization occurs for several reasons. First, gravitation causes drainage of liquid to the foam base, which Rybczynski and Hadamard include in their theory. Foam also destabilizes because osmotic pressure causes drainage from the lamellae to the Plateau borders due to internal concentration differences in the foam, and because Laplace pressure causes diffusion of gas from small to large bubbles due to the pressure difference. In addition, films can break under disjoining pressure. These effects can lead to rearrangement of the foam structure at scales larger than the bubbles, which may be individual (T1 process) or collective (even of the "avalanche" type).
Mechanical properties
Liquid foams
Solid foams
Solid foams, both open-cell and closed-cell, are considered as a sub-class of cellular structures. They often have lower nodal connectivity as compared to other cellular structures like honeycombs and truss lattices, and thus, their failure mechanism is dominated by bending of members. Low nodal connectivity and the resulting failure mechanism ultimately lead to their lower mechanical strength and stiffness compared to honeycombs and truss lattices.
The strength of foams can be impacted by the density, the material used, and the arrangement of the cellular structure (open vs closed and pore isotropy). To characterize the mechanical properties of foams, compressive stress-strain curves are used to measure their strength and ability to absorb energy since this is an important factor in foam based technologies.
Elastomeric foam
For elastomeric cellular solids, as the foam is compressed, first it behaves elastically as the cell walls bend, then as the cell walls buckle there is yielding and breakdown of the material until finally the cell walls crush together and the material ruptures. This is seen in a stress-strain curve as a steep linear elastic regime, a linear regime with a shallow slope after yielding (plateau stress), and an exponentially increasing regime. The stiffness of the material can be calculated from the linear elastic regime where the modulus for open celled foams can be defined by the equation:
where is the modulus of the solid component, is the modulus of the honeycomb structure, is a constant having a value close to one, is the density of the honeycomb structure, and is the density of the solid. The elastic modulus for closed cell foams can be described similarly by:
where the only difference is the exponent in the density dependence. However, in real materials, a closed-cell foam has more material at the cell edges which makes it more closely follow the equation for open-cell foams. The ratio of the density of the honeycomb structure compared with the solid structure has a large impact on the modulus of the material. Overall, foam strength increases with density of the cell and stiffness of the matrix material.
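The relative-density scaling described above can be illustrated with a small power-law sketch. The quadratic exponent used for the open-cell case is the commonly quoted Gibson–Ashby value and is assumed here, since the article's own equations are not reproduced above; the constant C is taken close to one as stated in the text, and the material values are illustrative.

```python
# Sketch: power-law scaling of foam modulus with relative density,
# E_foam / E_solid ≈ C · (ρ_foam / ρ_solid)^n.  The open-cell exponent n = 2
# is the commonly quoted Gibson-Ashby value; C ≈ 1 as stated in the text.

def foam_modulus(e_solid, rho_foam, rho_solid, n=2.0, c=1.0):
    return c * e_solid * (rho_foam / rho_solid) ** n

# Illustrative polymer foam: solid modulus 2 GPa, 10% relative density.
print(foam_modulus(e_solid=2.0e9, rho_foam=100.0, rho_solid=1000.0) / 1e6, "MPa")  # 20.0 MPa
```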
Energy of deformation
Another important property which can be deduced from the stress strain curve is the energy that the foam is able to absorb. The area under the curve (specified to be before rapid densification at the peak stress), represents the energy in the foam in units of energy per unit volume. The maximum energy stored by the foam prior to rupture is described by the equation:
This equation is derived from assuming an idealized foam with engineering approximations from experimental results. Most energy absorption occurs at the plateau stress region after the steep linear elastic regime.
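Because the absorbed energy per unit volume is the area under the compressive stress–strain curve up to densification, it can be estimated numerically from measured data. The sketch below uses an idealized elastic-then-plateau curve as an illustrative stand-in for such data; the modulus and plateau stress are assumed values.

```python
# Sketch: energy absorbed per unit volume = area under the stress-strain curve
# up to densification, estimated with the trapezoidal rule on an idealized curve.
import numpy as np

strain = np.linspace(0.0, 0.6, 200)
e_modulus, plateau_stress = 50e6, 1.0e6                   # Pa, illustrative values
stress = np.minimum(e_modulus * strain, plateau_stress)   # linear elastic, then plateau

# Trapezoidal rule, written out explicitly.
energy_density = float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))
print(f"{energy_density/1e3:.0f} kJ/m^3")                 # about 590 kJ/m^3
```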
Directional dependence
The isotropy of the cellular structure and the absorption of fluids can also have an impact on the mechanical properties of a foam. If there is anisotropy present, then the material's response to stress will be directionally dependent, and thus the stress-strain curve, modulus, and energy absorption will vary depending on the direction of applied force. Also, open-cell structures which have connected pores can allow water or other liquids to flow through the structure, which can also affect the rigidity and energy absorption capabilities.
Applications
Liquid foams
Liquid foams can be used in fire retardant foam, such as those that are used in extinguishing fires, especially oil fires.
The dough of leavened bread has traditionally been understood as a closed-cell foam—yeast causing bread to rise via tiny bubbles of gas that become the bread pores—where the cells do not connect with each other. Cutting the dough releases the gas in the bubbles that are cut, but the gas in the rest of the dough cannot escape. When dough is allowed to rise too far, it becomes an open-cell foam, in which the gas pockets are connected; cutting the dough surface at that point would cause a large volume of gas to escape, and the dough to collapse. Recent research has indicated that the pore structure in bread is 99% interconnected into one large vacuole, thus the closed-cell foam of the moist dough is transformed into an open cell solid foam in the bread.
The unique property of gas-liquid foams having very high specific surface area is exploited in the chemical processes of froth flotation and foam fractionation.
Depopulation
Foam depopulation or foaming is a means of mass killing farm animals by spraying foam over a large area to obstruct breathing and ultimately cause suffocation. It is usually used to attempt to stop disease spread.
Solid foams
Solid foams are a class of lightweight cellular engineering materials. These foams are typically classified into two types based on their pore structure: open-cell-structured foams (also known as reticulated foams) and closed-cell foams. At high enough cell resolutions, any type can be treated as continuous or "continuum" materials and are referred to as cellular solids, with predictable mechanical properties.
Open-cell foams can be used to filter air. For example, a foam embedded with catalyst has been shown to catalytically convert formaldehyde to benign substances when formaldehyde polluted air passes through the open cell structure.
Open-cell-structured foams contain pores that are connected to each other and form an interconnected network that is relatively soft. Open-cell foams fill with whatever gas surrounds them. If filled with air, a relatively good insulator results, but, if the open cells fill with water, insulation properties would be reduced. Recent studies have put the focus on studying the properties of open-cell foams as an insulator material. Wheat gluten/TEOS bio-foams have been produced, showing similar insulator properties as for those foams obtained from oil-based resources. Foam rubber is a type of open-cell foam.
Closed-cell foams do not have interconnected pores. The closed-cell foams normally have higher compressive strength due to their structures. However, closed-cell foams are also, in general more dense, require more material, and as a consequence are more expensive to produce. The closed cells can be filled with a specialized gas to provide improved insulation. The closed-cell structure foams have higher dimensional stability, low moisture absorption coefficients, and higher strength compared to open-cell-structured foams. All types of foam are widely used as core material in sandwich-structured composite materials.
The earliest known engineering use of cellular solids is with wood, which in its dry form is a closed-cell foam composed of lignin, cellulose, and air. From the early 20th century, various types of specially manufactured solid foams came into use. The low density of these foams makes them excellent as thermal insulators and flotation devices and their lightness and compressibility make them ideal as packing materials and stuffings.
An example of the use of azodicarbonamide as a blowing agent is found in the manufacture of vinyl (PVC) and EVA-PE foams, where it plays a role in the formation of air bubbles by breaking down into gas at high temperature.
The random or "stochastic" geometry of these foams makes them good for energy absorption, as well. In the late 20th century to early 21st century, new manufacturing techniques have allowed for geometry that results in excellent strength and stiffness per weight. These new materials are typically referred to as engineered cellular solids.
Syntactic foam
Integral skin foam
Integral skin foam, also known as self-skin foam, is a type of foam with a high-density skin and a low-density core. It can be formed in an open-mold process or a closed-mold process. In the open-mold process, two reactive components are mixed and poured into an open mold. The mold is then closed and the mixture is allowed to expand and cure. Examples of items produced using this process include arm rests, baby seats, shoe soles, and mattresses. The closed-mold process, more commonly known as reaction injection molding (RIM), injects the mixed components into a closed mold under high pressures.
Gallery
Foam scales and properties
See also
Aluminium foam sandwich
Ballistic foam
Chaotic bubble
Defoamer
Foam glass
Metal foam
Nanofoam
Sea foam
Reversibly assembled cellular composite materials
Foam party
Soft matter
References
Further reading
A modern treatise almost exclusively focused on liquid foams.
A treatise termed a classic by Weaire & Hutzler (1999), on solid foams, and the reason they limit their focus to liquid foams.
Note, this source also focuses on liquid foams.
Thomas Hipke, Günther Lange, René Poss: Taschenbuch für Aluminiumschäume. Aluminium-Verlag, Düsseldorf 2007, .
Hannelore Dittmar-Ilgen: Metalle lernen schwimmen. In: Dies.: Wie der Kork-Krümel ans Weinglas kommt. Hirzel, Stuttgart 2006, , S. 74.
External links
Andrew M. Kraynik, Douglas A. Reinelt, Frank van Swol Structure of random monodisperse foam
Colloids | Foam | [
"Physics",
"Chemistry",
"Materials_science"
] | 4,222 | [
"Foams",
"Chemical mixtures",
"Condensed matter physics",
"Colloids"
] |
268,923 | https://en.wikipedia.org/wiki/Elasticity%20%28physics%29 | In physics and materials science, elasticity is the ability of a body to resist a distorting influence and to return to its original size and shape when that influence or force is removed. Solid objects will deform when adequate loads are applied to them; if the material is elastic, the object will return to its initial shape and size after removal. This is in contrast to plasticity, in which the object fails to do so and instead remains in its deformed state.
The physical reasons for elastic behavior can be quite different for different materials. In metals, the atomic lattice changes size and shape when forces are applied (energy is added to the system). When forces are removed, the lattice goes back to the original lower energy state. For rubbers and other polymers, elasticity is caused by the stretching of polymer chains when forces are applied.
Hooke's law states that the force required to deform elastic objects should be directly proportional to the distance of deformation, regardless of how large that distance becomes. This is known as perfect elasticity, in which a given object will return to its original shape no matter how strongly it is deformed. This is an ideal concept only; most materials which possess elasticity in practice remain purely elastic only up to very small deformations, after which plastic (permanent) deformation occurs.
In engineering, the elasticity of a material is quantified by the elastic modulus such as the Young's modulus, bulk modulus or shear modulus which measure the amount of stress needed to achieve a unit of strain; a higher modulus indicates that the material is harder to deform. The SI unit of this modulus is the pascal (Pa). The material's elastic limit or yield strength is the maximum stress that can arise before the onset of plastic deformation. Its SI unit is also the pascal (Pa).
Overview
When an elastic material is deformed due to an external force, it experiences internal resistance to the deformation and restores it to its original state if the external force is no longer applied. There are various elastic moduli, such as Young's modulus, the shear modulus, and the bulk modulus, all of which are measures of the inherent elastic properties of a material as a resistance to deformation under an applied load. The various moduli apply to different kinds of deformation. For instance, Young's modulus applies to extension/compression of a body, whereas the shear modulus applies to its shear. Young's modulus and shear modulus are only for solids, whereas the bulk modulus is for solids, liquids, and gases.
The elasticity of materials is described by a stress–strain curve, which shows the relation between stress (the average restorative internal force per unit area) and strain (the relative deformation). The curve is generally nonlinear, but it can (by use of a Taylor series) be approximated as linear for sufficiently small deformations (in which higher-order terms are negligible). If the material is isotropic, the linearized stress–strain relationship is called Hooke's law, which is often presumed to apply up to the elastic limit for most metals or crystalline materials whereas nonlinear elasticity is generally required to model large deformations of rubbery materials even in the elastic range. For even higher stresses, materials exhibit plastic behavior, that is, they deform irreversibly and do not return to their original shape after stress is no longer applied. For rubber-like materials such as elastomers, the slope of the stress–strain curve increases with stress, meaning that rubbers progressively become more difficult to stretch, while for most metals, the gradient decreases at very high stresses, meaning that they progressively become easier to stretch. Elasticity is not exhibited only by solids; non-Newtonian fluids, such as viscoelastic fluids, will also exhibit elasticity in certain conditions quantified by the Deborah number. In response to a small, rapidly applied and removed strain, these fluids may deform and then return to their original shape. Under larger strains, or strains applied for longer periods of time, these fluids may start to flow like a viscous liquid.
Because the elasticity of a material is described in terms of a stress–strain relation, it is essential that the terms stress and strain be defined without ambiguity. Typically, two types of relation are considered. The first type deals with materials that are elastic only for small strains. The second deals with materials that are not limited to small strains. Clearly, the second type of relation is more general in the sense that it must include the first type as a special case.
For small strains, the measure of stress that is used is the Cauchy stress while the measure of strain that is used is the infinitesimal strain tensor; the resulting (predicted) material behavior is termed linear elasticity, which (for isotropic media) is called the generalized Hooke's law. Cauchy elastic materials and hypoelastic materials are models that extend Hooke's law to allow for the possibility of large rotations, large distortions, and intrinsic or induced anisotropy.
For more general situations, any of a number of stress measures can be used, and it is generally desired (but not required) that the elastic stress–strain relation be phrased in terms of a finite strain measure that is work conjugate to the selected stress measure, i.e., the time integral of the inner product of the stress measure with the rate of the strain measure should be equal to the change in internal energy for any adiabatic process that remains below the elastic limit.
Units
International System
The SI unit for elasticity and the elastic modulus is the pascal (Pa). This unit is defined as force per unit area, generally a measurement of pressure, which in mechanics corresponds to stress. The pascal and therefore elasticity have the dimension L−1⋅M⋅T−2.
For most commonly used engineering materials, the elastic modulus is on the scale of gigapascals (GPa, 109 Pa).
Linear elasticity
As noted above, for small deformations, most elastic materials such as springs exhibit linear elasticity and can be described by a linear relation between the stress and strain. This relationship is known as Hooke's law. A geometry-dependent version of the idea was first formulated by Robert Hooke in 1675 as a Latin anagram, "ceiiinosssttuv". He published the answer in 1678: "Ut tensio, sic vis" meaning "As the extension, so the force", a linear relationship commonly referred to as Hooke's law. This law can be stated as a relationship between tensile force F and corresponding extension displacement x,
F = kx,
where k is a constant known as the rate or spring constant. It can also be stated as a relationship between stress σ and strain ε:
σ = Eε,
where E is known as the Young's modulus.
Although the general proportionality constant between stress and strain in three dimensions is a 4th-order tensor called stiffness, systems that exhibit symmetry, such as a one-dimensional rod, can often be reduced to applications of Hooke's law.
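A minimal numeric illustration of the two one-dimensional statements of Hooke's law given above; the spring constant, the modulus, and the applied values are illustrative.

```python
# Sketch: one-dimensional Hooke's law in both of its common forms.
def spring_force(k, x):
    """F = k·x, force needed to extend a spring by x."""
    return k * x

def stress(e_modulus, strain):
    """σ = E·ε for small, linear-elastic strains."""
    return e_modulus * strain

print(spring_force(k=200.0, x=0.05))             # 10.0 N for a 200 N/m spring
print(stress(e_modulus=200e9, strain=1e-3)/1e6)  # 200.0 MPa for a steel-like E = 200 GPa
```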
Finite elasticity
The elastic behavior of objects that undergo finite deformations has been described using a number of models, such as Cauchy elastic material models, Hypoelastic material models, and Hyperelastic material models. The deformation gradient (F) is the primary deformation measure used in finite strain theory.
Cauchy elastic materials
A material is said to be Cauchy-elastic if the Cauchy stress tensor σ is a function of the deformation gradient F alone:
It is generally incorrect to state that Cauchy stress is a function of merely a strain tensor, as such a model lacks crucial information about material rotation needed to produce correct results for an anisotropic medium subjected to vertical extension in comparison to the same extension applied horizontally and then subjected to a 90-degree rotation; both these deformations have the same spatial strain tensors yet must produce different values of the Cauchy stress tensor.
Even though the stress in a Cauchy-elastic material depends only on the state of deformation, the work done by stresses might depend on the path of deformation. Therefore, Cauchy elasticity includes non-conservative "non-hyperelastic" models (in which work of deformation is path dependent) as well as conservative "hyperelastic material" models (for which stress can be derived from a scalar "elastic potential" function).
Hypoelastic materials
A hypoelastic material can be rigorously defined as one that is modeled using a constitutive equation satisfying the following two criteria:
The Cauchy stress at time depends only on the order in which the body has occupied its past configurations, but not on the time rate at which these past configurations were traversed. As a special case, this criterion includes a Cauchy elastic material, for which the current stress depends only on the current configuration rather than the history of past configurations.
There is a tensor-valued function G such that σ̇ = G(σ, L), in which σ̇ is the material rate of the Cauchy stress tensor, σ is the Cauchy stress, and L is the spatial velocity gradient tensor.
If only these two original criteria are used to define hypoelasticity, then hyperelasticity would be included as a special case, which prompts some constitutive modelers to append a third criterion that specifically requires a hypoelastic model to not be hyperelastic (i.e., hypoelasticity implies that stress is not derivable from an energy potential). If this third criterion is adopted, it follows that a hypoelastic material might admit nonconservative adiabatic loading paths that start and end with the same deformation gradient but do not start and end at the same internal energy.
Note that the second criterion requires only that the function exists. As detailed in the main hypoelastic material article, specific formulations of hypoelastic models typically employ so-called objective rates so that the function exists only implicitly and is typically needed explicitly only for numerical stress updates performed via direct integration of the actual (not objective) stress rate.
Hyperelastic materials
Hyperelastic materials (also called Green elastic materials) are conservative models that are derived from a strain energy density function (W). A model is hyperelastic if and only if it is possible to express the Cauchy stress tensor as a function of the deformation gradient via a relationship of the form
This formulation takes the energy potential (W) as a function of the deformation gradient (). By also requiring satisfaction of material objectivity, the energy potential may be alternatively regarded as a function of the Cauchy-Green deformation tensor (), in which case the hyperelastic model may be written alternatively as
Applications
Linear elasticity is used widely in the design and analysis of structures such as beams, plates and shells, and sandwich composites. This theory is also the basis of much of fracture mechanics.
Hyperelasticity is primarily used to determine the response of elastomer-based objects such as gaskets and of biological materials such as soft tissues and cell membranes.
Factors affecting elasticity
In a given isotropic solid, with known theoretical elasticity for the bulk material in terms of Young's modulus, the effective elasticity will be governed by porosity. Generally a more porous material will exhibit lower stiffness. More specifically, the fraction of pores, their distribution at different sizes and the nature of the fluid with which they are filled give rise to different elastic behaviours in solids.
For isotropic materials containing cracks, the presence of fractures affects the Young and the shear moduli perpendicular to the planes of the cracks, which decrease (Young's modulus faster than the shear modulus) as the fracture density increases, indicating that the presence of cracks makes bodies brittler. Microscopically, the stress–strain relationship of materials is in general governed by the Helmholtz free energy, a thermodynamic quantity. Molecules settle in the configuration which minimizes the free energy, subject to constraints derived from their structure, and, depending on whether the energy or the entropy term dominates the free energy, materials can broadly be classified as energy-elastic and entropy-elastic. As such, microscopic factors affecting the free energy, such as the equilibrium distance between molecules, can affect the elasticity of materials: for instance, in inorganic materials, as the equilibrium distance between molecules at 0 K increases, the bulk modulus decreases. The effect of temperature on elasticity is difficult to isolate, because there are numerous factors affecting it. For instance, the bulk modulus of a material is dependent on the form of its lattice, its behavior under expansion, as well as the vibrations of the molecules, all of which are dependent on temperature.
See also
Notes
References
External links
The Feynman Lectures on Physics Vol. II Ch. 38: Elasticity | Elasticity (physics) | [
"Physics",
"Materials_science"
] | 2,653 | [
"Deformation (mechanics)",
"Physical phenomena",
"Physical properties",
"Elasticity (physics)"
] |
269,002 | https://en.wikipedia.org/wiki/Aquila%20%28constellation%29 | Aquila is a constellation on the celestial equator. Its name is Latin for 'eagle' and it represents the bird that carried Zeus/Jupiter's thunderbolts in Greek-Roman mythology.
Its brightest star, Altair, is one vertex of the Summer Triangle asterism. The constellation is best seen in the northern summer, as it is located along the Milky Way. Because of this location, many clusters and nebulae are found within its borders, but they are dim and galaxies are few.
History
Aquila was one of the 48 constellations described by the second-century astronomer Ptolemy. It had been earlier mentioned by Eudoxus in the fourth century BC and Aratus in the third century BC.
It is now one of the 88 constellations defined by the International Astronomical Union. The constellation was also known as Vultur volans (the flying vulture) to the Romans, not to be confused with Vultur cadens which was their name for Lyra. It is often held to represent the eagle which held Zeus's/Jupiter's thunderbolts in Greco-Roman mythology. Aquila is also associated with the eagle that kidnapped Ganymede, a son of one of the kings of Troy (associated with Aquarius), to Mount Olympus to serve as cup-bearer to the gods.
Ptolemy catalogued 19 stars jointly in this constellation and in the now obsolete constellation of Antinous, which was named in the reign of the emperor Hadrian (AD 117–138), but sometimes erroneously attributed to Tycho Brahe, who catalogued 12 stars in Aquila and seven in Antinous. Hevelius determined 23 stars in the first and 19 in the second.
The Greek Aquila is probably based on the Babylonian constellation of the Eagle, but is sometimes mistakenly thought as a seagull which is located in the same area as the Greek constellation.
Notable features
Stars
Aquila, which lies in the Milky Way, contains many rich starfields and has been the location of many novae.
α Aql (Altair) is the brightest star in this constellation and one of the closest naked-eye stars to Earth at a distance of 17 light-years. Its name comes from the Arabic phrase al-nasr al-tair, meaning "the flying eagle". Altair has a magnitude of 0.76. It is one of the three stars of the Summer Triangle, along with Vega and Deneb. It is an A-type main-sequence star with 1.8 times the mass of the Sun and 11 times its luminosity. The star rotates quickly, and this gives the star an oblate shape where it is flattened towards the poles.
β Aql (Alshain) is a yellow-hued star of magnitude 3.7, 45 light-years from Earth. Its name comes from the Arabic phrase shahin-i tarazu, meaning "the balance"; this name referred to Altair, Alshain, and Tarazed. The primary is a G-type subgiant star with a spectral type of G9.5 IV and the secondary is a red dwarf. The subgiant primary has three times the radius of the Sun and six times the luminosity.
γ Aql (Tarazed) is an orange-hued giant star of around magnitude 2.7, 460 light-years from Earth. Its name, like that of Alshain, comes from the Arabic for "the balance". It is the second-brightest star in the constellation and is an unconfirmed variable star.
ζ Aql (Okab) is a binary star of magnitude 3.0, 83 light-years from Earth. The primary is an A-type main sequence star, and the secondary has half the mass of the Sun.
η Aql is a yellow-white-hued supergiant star, 1200 light-years from Earth. Among the brightest Cepheid variable stars, it has a minimum magnitude of 4.4 and a maximum magnitude of 3.5 with a period of 7.2 days. The variability was originally observed by Edward Pigott in 1784. There are also two companion stars which orbit the supergiant: a B-type main sequence star and an F-type main sequence star.
ρ Aql moved across the border into neighboring Delphinus in 1992, and is an A-type star with a lower metallicity than the Sun.
15 Aql is an optical double star. The primary is an orange-hued giant of magnitude 5.41 and a spectral type of K1 III, 325 light-years from Earth. The secondary is a purple-hued star of magnitude 7.0, 550 light-years from Earth. The pair is easily resolved in small amateur telescopes.
57 Aql is a binary star. The primary is a blue-hued star of magnitude 5.7 and the secondary is a white star of magnitude 6.5. The system is approximately 350 light-years from Earth; the pair is easily resolved in small amateur telescopes. Both stars in the system rotate rapidly.
R Aql is a red-hued giant star 690 light-years from Earth. It is a Mira variable with a minimum magnitude of 12.0, a maximum magnitude of 6.0, and a period around 9 months. It has a diameter of 400 D☉.
V Aql is a typical cool carbon star. It is one of the reddest stars of this kind observable through common amateur telescopes.
FF Aql is a yellow-white-hued supergiant star, 2500 light-years from Earth. It is a Cepheid variable star with a minimum magnitude of 5.7, a maximum magnitude of 5.2, and a period of 4.5 days. It is a spectroscopic binary with a spectral type of F6Ib. A third star is also a member of the system, and there is also a fourth star which is probably unconnected with the main system.
Novae
A bright nova was observed in Aquila in 1918 (Nova Aquilae 1918) and briefly shone brighter than Altair, the brightest star in Aquila. It was first seen by Zygmunt Laskowski and was confirmed on the night of 8 June 1918. Nova Aquilae reached a peak apparent magnitude of −0.5 and was the brightest nova recorded since the invention of the telescope.
Deep-sky objects
Three interesting planetary nebulae lie in Aquila:
NGC 6804 shows a small but bright ring.
NGC 6781 bears some resemblance with the Owl Nebula in Ursa Major. It was discovered by William Herschel in 1788.
NGC 6751, also known as the Glowing Eye, is a planetary nebula. The nebula is estimated to be roughly 0.8 light-years in diameter.
More deep-sky objects:
NGC 6709 is a loose open cluster containing roughly 40 stars, which range in magnitude from 9 to 11. It has an overall magnitude of 6.7; distance estimates range from about 3,000 to 9,100 light-years from Earth. NGC 6709 appears in a rich Milky Way star field and is classified as a Shapley class d and Trumpler class III 2 m cluster. These designations mean that it does not have many stars, is loose, does not show greater concentration at the center, and has a moderate range of star magnitudes. There are 305 confirmed member stars and one candidate red giant.
NGC 6755 is an open cluster of 7.5 m; it is made up of about a dozen stars with magnitudes 12 through 13. It is located approximately 8,060 light-years from the Solar System.
NGC 6760 is a globular cluster of 9.1 m. At least two pulsars have been discovered in the globular cluster, and it has a Shapley-Sawyer Concentration Class of IX.
NGC 6749 is an open cluster.
NGC 6778 is a planetary nebula located about 10,300 light-years away from the Solar System.
NGC 6741 is a planetary nebula.
NGC 6772 is a planetary nebula.
W51 (3C400) is one of the largest stellar nurseries in the Milky Way. Located about 17,000 light-years from Earth, W51 is about 350 light-years – or about 2 quadrillion miles – across. However, it is located in an area so thick with interstellar dust that it is opaque to visible light. Observations by the Chandra X-Ray Observatory and the Spitzer Infrared Telescope reveal that W51 would appear about as large as the full Moon in visible light.
Aquila also holds some extragalactic objects. One of them is the Hercules–Corona Borealis Great Wall, which may be the largest single mass concentration of galaxies known in the Universe. It was discovered in November 2013 and spans about 10 billion light-years, making it the largest and most massive known structure in the Universe.
Other
NASA's Pioneer 11 space probe, which flew by Jupiter and Saturn in the 1970s, is expected to pass near the star Lambda (λ) Aquilae in about 4 million years.
Illustrations
In illustrations of Aquila that represent it as an eagle, a nearly straight line of three stars symbolizes part of the wings. The center and brightest of these three stars is Altair.
Mythology
According to Gavin White, the Babylonian Eagle carried the constellation called the Dead Man in its talons. The author also draws a comparison to the classical stories of Antinous and Ganymede.
In classical Greek mythology, Aquila was identified as Αετός Δίας (Aetos Dios), the eagle that carried the thunderbolts of Zeus and was sent by him to carry the shepherd boy Ganymede, whom he desired, to Mount Olympus; the constellation of Aquarius is sometimes identified with Ganymede.
In the Chinese love story of Qi Xi, Niu Lang (Altair) and his two children (β and γ Aquilae) are separated forever from their wife and mother Zhi Nu (Vega), who is on the far side of the river, the Milky Way.
In Hinduism, the constellation Aquila is identified with the half-eagle half-human deity Garuda.
In ancient Egypt, Aquila possibly was seen as the falcon of Horus. According to Berio, the identification of Aquila as an Egyptian constellation, and not merely Graeco-Babylonian, is corroborated by the Daressy Zodiac. It depicts an outer ring showing the Sphaera Graeca, the familiar Hellenistic zodiac, while the middle ring depicts the Sphaera Barbarica or foreigner's zodiac with the zodiacal signs of the Egyptian dodekaoros which were also recorded by Teucros of Babylon. Under the sign of Sagittarius is the falcon of Horus, presumably because Aquila rises with Sagittarius.
Equivalents
In Chinese astronomy, ζ Aql is located within the Heavenly Market Enclosure (天市垣, Tiān Shì Yuán), and the other stars of the constellation are placed within the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ).
Several different Polynesian equivalents to Aquila as a whole are known. On the island of Futuna, it was called Kau-amonga, meaning "Suspended Burden". Its name references the Futunan name for Orion's belt and sword, Amonga. In Hawaii, Altair was called Humu, translated to English as "to sew, to bind together parts of a fishhook." "Humu" also refers to the hole by which parts of a hook are bound together. Humu-ma was said to influence the astrologers. Pao-toa was the name for the entire constellation in the Marquesas Islands; the name meant "Fatigued Warrior". Also, Polynesian constellations incorporated the stars of modern Aquila. The Pukapuka constellation Tolu, meaning "three", was made up of Alpha, Beta, and Gamma Aquilae. Altair was commonly named among Polynesian peoples, as well. The people of Hawaii called it Humu, the people of the Tuamotus called it Tukituki ("Pound with a hammer") - they named Beta Aquilae Nga Tangata ("The Men") - and the people of Pukapuka called Altair Turu and used it as a navigational star. The Māori people named Altair Poutu-te-rangi, "Pillar of the Sky", because of its important position in their cosmology. It was used differently in different Māori calendars, being the star of February and March in one version and March and April in the other. Altair was also the star that ruled the annual sweet potato harvest.
See also
Aquila (Chinese astronomy)
References
Citations
References
External links
Ian Ridpath's Star Tales – Aquila
The Deep Photographic Guide to the Constellations: Aquila
WIKISKY.ORG: Aquila constellation
Warburg Institute Iconographic Database (medieval and early modern images of Aquila)
The clickable Aquila
Equatorial constellations
Constellations listed by Ptolemy
Articles containing video clips | Aquila (constellation) | [
"Astronomy"
] | 2,758 | [
"Aquila (constellation)",
"Equatorial constellations",
"Constellations listed by Ptolemy",
"Constellations"
] |
269,931 | https://en.wikipedia.org/wiki/Renin%E2%80%93angiotensin%20system | The renin-angiotensin system (RAS), or renin-angiotensin-aldosterone system (RAAS), is a hormone system that regulates blood pressure, fluid, and electrolyte balance, and systemic vascular resistance.
When renal blood flow is reduced, juxtaglomerular cells in the kidneys convert the precursor prorenin (already present in the blood) into renin and secrete it directly into the circulation. Plasma renin then carries out the conversion of angiotensinogen, released by the liver, to a decapeptide called angiotensin I, which has no biological function on its own. Angiotensin I is subsequently converted to the active angiotensin II (an octapeptide) by the angiotensin-converting enzyme (ACE) found on the surface of vascular endothelial cells, predominantly those of the lungs. Angiotensin II has a short life of about 1 to 2 minutes. Then, it is rapidly degraded into a heptapeptide called angiotensin III by angiotensinases which are present in red blood cells and vascular beds in many tissues.
Angiotensin III increases blood pressure and stimulates aldosterone secretion from the adrenal cortex; it has 100% of the adrenocortical stimulating activity and 40% of the vasopressor activity of angiotensin II.
Angiotensin IV also has adrenocortical and vasopressor activities.
Angiotensin II is a potent vasoconstrictive peptide that causes blood vessels to narrow, resulting in increased blood pressure. Angiotensin II also stimulates the secretion of the hormone aldosterone from the adrenal cortex. Aldosterone causes the renal tubules to increase the reabsorption of sodium which in consequence causes the reabsorption of water into the blood, while at the same time causing the excretion of potassium (to maintain electrolyte balance). This increases the volume of extracellular fluid in the body, which also increases blood pressure.
If the RAS is abnormally active, blood pressure will be too high. Several classes of drugs, including ACE inhibitors, angiotensin II receptor blockers (ARBs), and renin inhibitors, interrupt different steps in this system to lower blood pressure. These drugs are among the primary means of controlling high blood pressure, heart failure, kidney failure, and the harmful effects of diabetes.
Activation
The system can be activated when there is a loss of blood volume or a drop in blood pressure (such as in hemorrhage or dehydration). This loss of pressure is interpreted by baroreceptors in the carotid sinus. It can also be activated by a decrease in the filtrate sodium chloride (NaCl) concentration or a decreased filtrate flow rate that will stimulate the macula densa to signal the juxtaglomerular cells to release renin.
If the perfusion of the juxtaglomerular apparatus in the kidney's macula densa decreases, then the juxtaglomerular cells (granular cells, modified pericytes in the glomerular capillary) release the enzyme renin.
Renin cleaves a decapeptide from angiotensinogen, a globular protein. The decapeptide is known as angiotensin I.
Angiotensin I is then converted to an octapeptide, angiotensin II by angiotensin-converting enzyme (ACE), which is thought to be found mainly in endothelial cells of the capillaries throughout the body, within the lungs and the epithelial cells of the kidneys. One study in 1992 found ACE in all blood vessel endothelial cells.
Angiotensin II is the major bioactive product of the renin–angiotensin system, binding to receptors on intraglomerular mesangial cells, causing these cells to contract along with the blood vessels surrounding them; and to receptors on the zona glomerulosa cells, causing the release of aldosterone from the zona glomerulosa in the adrenal cortex. Angiotensin II acts as an endocrine, autocrine/paracrine, and intracrine hormone.
Cardiovascular effects
Angiotensin I may have some minor activity, but angiotensin II is the major bio-active product. Angiotensin II has a variety of effects on the body:
Throughout the body, angiotensin II is a potent vasoconstrictor of arterioles.
In the kidneys, angiotensin II constricts glomerular arterioles, having a greater effect on efferent arterioles than afferent. As with most other capillary beds in the body, the constriction of afferent arterioles increases the arteriolar resistance, raising systemic arterial blood pressure and decreasing the blood flow. However, the kidneys must continue to filter enough blood despite this drop in blood flow, necessitating mechanisms to keep glomerular blood pressure up. To do this, angiotensin II constricts efferent arterioles, which forces blood to build up in the glomerulus, increasing glomerular pressure. The glomerular filtration rate (GFR) is thus maintained, and blood filtration can continue despite lowered overall kidney blood flow. Because the filtration fraction, which is the ratio of the glomerular filtration rate (GFR) to the renal plasma flow (RPF), has increased, there is less plasma fluid in the downstream peritubular capillaries. This in turn leads to a decreased hydrostatic pressure and increased oncotic pressure (due to unfiltered plasma proteins) in the peritubular capillaries. The effect of decreased hydrostatic pressure and increased oncotic pressure in the peritubular capillaries will facilitate increased reabsorption of tubular fluid.
Angiotensin II decreases medullary blood flow through the vasa recta. This decreases the washout of NaCl and urea in the kidney medullary space. Thus, higher concentrations of NaCl and urea in the medulla facilitate increased absorption of tubular fluid. Furthermore, increased reabsorption of fluid into the medulla will increase passive reabsorption of sodium along the thick ascending limb of the Loop of Henle.
Angiotensin II stimulates Na+/H+ exchangers located on the apical membranes (facing the tubular lumen) of cells in the proximal tubule and the thick ascending limb of the loop of Henle, in addition to Na+ channels in the collecting ducts. This will ultimately lead to increased sodium reabsorption.
Angiotensin II stimulates the hypertrophy of renal tubule cells, leading to further sodium reabsorption.
In the adrenal cortex, angiotensin II acts to cause the release of aldosterone. Aldosterone acts on the tubules (e.g., the distal convoluted tubules and the cortical collecting ducts) in the kidneys, causing them to reabsorb more sodium and water from the urine. This increases blood volume and, therefore, increases blood pressure. In exchange for the reabsorbing of sodium to blood, potassium is secreted into the tubules, becomes part of urine and is excreted.
Angiotensin II causes the release of anti-diuretic hormone (ADH), also called vasopressin – ADH is made in the hypothalamus and released from the posterior pituitary gland. As its name suggests, it also exhibits vaso-constrictive properties, but its main course of action is to stimulate reabsorption of water in the kidneys. ADH also acts on the central nervous system to increase an individual's appetite for salt, and to stimulate the sensation of thirst.
These effects directly act together to increase blood pressure and are opposed by atrial natriuretic peptide (ANP).
Local renin–angiotensin systems
Locally expressed renin–angiotensin systems have been found in a number of tissues, including the kidneys, adrenal glands, the heart, vasculature and nervous system, and have a variety of functions, including local cardiovascular regulation, in association or independently of the systemic renin–angiotensin system, as well as non-cardiovascular functions. Outside the kidneys, renin is predominantly picked up from the circulation but may be secreted locally in some tissues; its precursor prorenin is highly expressed in tissues and more than half of circulating prorenin is of extrarenal origin, but its physiological role besides serving as precursor to renin is still unclear. Outside the liver, angiotensinogen is picked up from the circulation or expressed locally in some tissues; with renin they form angiotensin I, and locally expressed angiotensin-converting enzyme, chymase or other enzymes can transform it into angiotensin II. This process can be intracellular or interstitial.
In the adrenal glands, it is likely involved in the paracrine regulation of aldosterone secretion; in the heart and vasculature, it may be involved in remodeling or vascular tone; and in the brain, where it is largely independent of the circulatory RAS, it may be involved in local blood pressure regulation. In addition, both the central and peripheral nervous systems can use angiotensin for sympathetic neurotransmission. Other places of expression include the reproductive system, the skin and digestive organs. Medications aimed at the systemic system may affect the expression of those local systems, beneficially or adversely.
Fetal renin–angiotensin system
In the fetus, the renin–angiotensin system is predominantly a sodium-losing system, as angiotensin II has little or no effect on aldosterone levels. Renin levels are high in the fetus, while angiotensin II levels are significantly lower; this is due to the limited pulmonary blood flow, preventing ACE (found predominantly in the pulmonary circulation) from having its maximum effect.
Clinical significance
ACE inhibitors (inhibitors of angiotensin-converting enzyme) are often used to reduce the formation of the more potent angiotensin II. Captopril is an example of an ACE inhibitor. ACE cleaves a number of other peptides, and in this capacity is an important regulator of the kinin–kallikrein system; as such, blocking ACE can lead to side effects.
Angiotensin II receptor antagonists, also known as angiotensin receptor blockers, can be used to prevent angiotensin II from acting on its receptors.
Direct renin inhibitors can also be used for hypertension. The drugs that inhibit renin are aliskiren and the investigational remikiren.
Vaccines against angiotensin II, for example CYT006-AngQb, have been investigated.
See also
Discovery and development of angiotensin receptor blockers
Atrial natriuretic peptide: When the atrium stretches, blood pressure is considered to be increased and sodium is excreted to lower blood pressure.
Bainbridge reflex: In response to stretching of the right atrium wall, heart rate increases, lowering venous blood pressure.
Baroreflex: When the stretch receptors in the aortic arch and carotid sinus increase, the blood pressure is considered to be elevated and the heart rate decreases to lower blood pressure.
Antidiuretic hormone: The hypothalamus detects the extracellular fluid hyperosmolality and the posterior pituitary gland secretes antidiuretic hormone to increase water reabsorption in the collecting duct.
References
Further reading
External links
Biochemical reactions
Cardiovascular physiology
Endocrinology
Human homeostasis | Renin–angiotensin system | [
"Chemistry",
"Biology"
] | 2,518 | [
"Human homeostasis",
"Biochemistry",
"Homeostasis",
"Biochemical reactions"
] |
270,041 | https://en.wikipedia.org/wiki/Prony%20equation | The Prony equation (named after Gaspard de Prony) is a historically important equation in hydraulics, used to calculate the head loss due to friction within a given run of pipe. It is an empirical equation developed by Frenchman Gaspard de Prony in the 19th century:
$h_f = \frac{L}{D}\left(aV + bV^2\right)$
where hf is the head loss due to friction, calculated from: the ratio of the length to diameter of the pipe L/D, the velocity of the flow V, and two empirical factors a and b to account for friction.
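As a minimal numerical sketch (not part of the original formulation), the relation can be evaluated directly once a and b have been fitted; the coefficient values below are placeholders chosen only for illustration:

```python
def prony_head_loss(length_m: float, diameter_m: float, velocity_m_s: float,
                    a: float, b: float) -> float:
    """Head loss h_f = (L/D) * (a*V + b*V**2) per the Prony equation.

    a and b are empirical friction factors; the values used below are
    placeholders and would come from experiment for a real pipe.
    """
    return (length_m / diameter_m) * (a * velocity_m_s + b * velocity_m_s ** 2)

# Example with made-up coefficients:
h_f = prony_head_loss(length_m=100.0, diameter_m=0.3, velocity_m_s=2.0,
                      a=1.7e-5, b=3.5e-4)
print(f"head loss ~ {h_f:.3f} m")
```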
This equation has been supplanted in modern hydraulics by the Darcy–Weisbach equation, which used it as a starting point.
References
. The Prony equation and its replacement by the Darcy–Weisbach equation are on pp. 11–12.
Eponymous equations of physics
Equations of fluid dynamics | Prony equation | [
"Physics",
"Chemistry",
"Mathematics"
] | 167 | [
"Equations of fluid dynamics",
"Equations of physics",
"Applied mathematics",
"Eponymous equations of physics",
"Applied mathematics stubs",
"Fluid dynamics"
] |
270,054 | https://en.wikipedia.org/wiki/Formal%20verification | In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of a system with respect to a certain formal specification or property, using formal methods of mathematics.
Formal verification is a key incentive for formal specification of systems, and is at the core of formal methods.
It represents an important dimension of analysis and verification in electronic design automation and is one approach to software verification. The use of formal verification enables the highest Evaluation Assurance Level (EAL7) in the framework of common criteria for computer security certification.
Formal verification can be helpful in proving the correctness of systems such as: cryptographic protocols, combinational circuits, digital circuits with internal memory, and software expressed as source code in a programming language. Prominent examples of verified software systems include the CompCert verified C compiler and the seL4 high-assurance operating system kernel.
The verification of these systems is done by ensuring the existence of a formal proof of a mathematical model of the system. Examples of mathematical objects used to model systems are: finite-state machines, labelled transition systems, Horn clauses, Petri nets, vector addition systems, timed automata, hybrid automata, process algebra, formal semantics of programming languages such as operational semantics, denotational semantics, axiomatic semantics and Hoare logic.
Approaches
Model checking
Model checking involves a systematic and exhaustive exploration of the mathematical model. Such exploration is possible for finite models, but also for some infinite models, where infinite sets of states can be effectively represented finitely by using abstraction or taking advantage of symmetry. Usually, this consists of exploring all states and transitions in the model, by using smart and domain-specific abstraction techniques to consider whole groups of states in a single operation and reduce computing time. Implementation techniques include state space enumeration, symbolic state space enumeration, abstract interpretation, symbolic simulation, abstraction refinement. The properties to be verified are often described in temporal logics, such as linear temporal logic (LTL), Property Specification Language (PSL), SystemVerilog Assertions (SVA), or computational tree logic (CTL). The great advantage of model checking is that it is often fully automatic; its primary disadvantage is that it does not in general scale to large systems; symbolic models are typically limited to a few hundred bits of state, while explicit state enumeration requires the state space being explored to be relatively small.
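As a toy illustration of explicit state space enumeration (a sketch only, not any particular model-checking tool), the following exhaustively explores the reachable states of a small transition system and checks a safety property:

```python
from collections import deque

# Toy explicit-state model checking: breadth-first exploration of the
# reachable state space of a two-bit counter that wraps modulo 3, checking
# the safety property "the state (1, 1) is never reached from (0, 0)".

def successors(state):
    a, b = state
    value = (2 * a + b + 1) % 3          # single transition: increment mod 3
    yield divmod(value, 2)

def check_safety(initial, bad, successors):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if state == bad:
            return False                  # counterexample reached
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True                           # exhaustive over all reachable states

print(check_safety((0, 0), (1, 1), successors))   # True: (1, 1) is unreachable
```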
Deductive verification
Another approach is deductive verification. It consists of generating from the system and its specifications (and possibly other annotations) a collection of mathematical proof obligations, the truth of which imply conformance of the system to its specification, and discharging these obligations using either proof assistants (interactive theorem provers) (such as HOL, ACL2, Isabelle, Coq or PVS), or automatic theorem provers, including in particular satisfiability modulo theories (SMT) solvers. This approach has the disadvantage that it may require the user to understand in detail why the system works correctly, and to convey this information to the verification system, either in the form of a sequence of theorems to be proved or in the form of specifications (invariants, preconditions, postconditions) of system components (e.g. functions or procedures) and perhaps subcomponents (such as loops or data structures).
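A minimal sketch of discharging a single proof obligation with an SMT solver, assuming the z3-solver Python bindings are installed; the Hoare triple and variable names are invented for illustration:

```python
# Discharging one verification condition with an SMT solver.
# Hoare triple: {x >= 0}  x := x + 1  {x > 0}
# Verification condition (by weakest precondition): x >= 0  ==>  x + 1 > 0.
# The condition is valid iff its negation is unsatisfiable.

from z3 import Int, Implies, Not, Solver, unsat

x = Int("x")
vc = Implies(x >= 0, x + 1 > 0)

solver = Solver()
solver.add(Not(vc))

if solver.check() == unsat:
    print("verification condition proved")
else:
    print("not proved:", solver.check())
```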
Application to software
Formal verification of software programs involves proving that a program satisfies a formal specification of its behavior. Subareas of formal verification include deductive verification (see above), abstract interpretation, automated theorem proving, type systems, and lightweight formal methods. A promising type-based verification approach is dependently typed programming, in which the types of functions include (at least part of) those functions' specifications, and type-checking the code establishes its correctness against those specifications. Fully featured dependently typed languages support deductive verification as a special case.
Another complementary approach is program derivation, in which efficient code is produced from functional specifications by a series of correctness-preserving steps. An example of this approach is the Bird–Meertens formalism, and this approach can be seen as another form of program synthesis.
These techniques can be sound, meaning that the verified properties can be logically deduced from the semantics, or unsound, meaning that there is no such guarantee. A sound technique yields a result only once it has covered the entire space of possibilities. An example of an unsound technique is one that covers only a subset of the possibilities, for instance only integers up to a certain number, and give a "good-enough" result. Techniques can also be decidable, meaning that their algorithmic implementations are guaranteed to terminate with an answer, or undecidable, meaning that they may never terminate. By bounding the scope of possibilities, unsound techniques that are decidable might be able to be constructed when no decidable sound techniques are available.
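The following sketch illustrates the trade-off: a bounded, hence decidable, check that is unsound because it examines only finitely many inputs (the property under test is a classic textbook example chosen for illustration):

```python
# Bounded (hence decidable) but unsound checking: only a finite range of
# inputs is examined, so "no violation found" is not a proof for all inputs.

def is_prime(k: int) -> bool:
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def property_holds(n: int) -> bool:
    # Claim under test: n**2 + n + 41 is prime for every natural number n.
    return is_prime(n * n + n + 41)

BOUND = 39
violations = [n for n in range(BOUND + 1) if not property_holds(n)]
print("violations up to", BOUND, ":", violations)   # [] -- looks fine...
# ...but the claim is false: n = 40 gives 41*41. A sound technique would have
# to cover the whole (infinite) input space before declaring success.
```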
Verification and validation
Verification is one aspect of testing a product's fitness for purpose. Validation is the complementary aspect. Often one refers to the overall checking process as V & V.
Validation: "Are we trying to make the right thing?", i.e., is the product specified to the user's actual needs?
Verification: "Have we made what we were trying to make?", i.e., does the product conform to the specifications?
The verification process consists of static/structural and dynamic/behavioral aspects. E.g., for a software product one can inspect the source code (static) and run against specific test cases (dynamic). Validation usually can be done only dynamically, i.e., the product is tested by putting it through typical and atypical usages ("Does it satisfactorily meet all use cases?").
Automated program repair
Program repair is performed with respect to an oracle, encompassing the desired functionality of the program which is used for validation of the generated fix. A simple example is a test-suite—the input/output pairs specify the functionality of the program. A variety of techniques are employed, most notably using satisfiability modulo theories (SMT) solvers, and genetic programming, using evolutionary computing to generate and evaluate possible candidates for fixes. The former method is deterministic, while the latter is randomized.
Program repair combines techniques from formal verification and program synthesis. Fault-localization techniques in formal verification are used to compute program points which might be possible bug-locations, which can be targeted by the synthesis modules. Repair systems often focus on a small pre-defined class of bugs in order to reduce the search space. Industrial use is limited owing to the computational cost of existing techniques.
Industry use
The growth in complexity of designs increases the importance of formal verification techniques in the hardware industry. At present, formal verification is used by most or all leading hardware companies, but its use in the software industry is still languishing. This could be attributed to the greater need in the hardware industry, where errors have greater commercial significance. Because of the potential subtle interactions between components, it is increasingly difficult to exercise a realistic set of possibilities by simulation. Important aspects of hardware design are amenable to automated proof methods, making formal verification easier to introduce and more productive.
Several operating systems have been formally verified:
NICTA's Secure Embedded L4 microkernel, sold commercially as seL4 by OK Labs; OSEK/VDX based real-time operating system ORIENTAIS by East China Normal University; Green Hills Software's Integrity operating system; and SYSGO's PikeOS.
In 2016, a team led by Zhong Shao at Yale developed a formally verified operating system kernel called CertiKOS.
As of 2017, formal verification has been applied to the design of large computer networks through a mathematical model of the network, and as part of a new network technology category, intent-based networking. Network software vendors that offer formal verification solutions include Cisco, Forward Networks, and Veriflow Systems.
The SPARK programming language provides a toolset which enables software development with formal verification and is used in several high-integrity systems.
The CompCert C compiler is a formally verified C compiler implementing the majority of ISO C.
See also
Automated theorem proving
Model checking
List of model checking tools
Formal equivalence checking
Proof checker
Property Specification Language
Static code analysis
Temporal logic in finite-state verification
Post-silicon validation
Intelligent verification
Runtime verification
Software verification
Hardware verification
References
Electronic circuit verification
Formal methods
Logic in computer science
Theoretical computer science | Formal verification | [
"Mathematics",
"Engineering"
] | 1,751 | [
"Logic in computer science",
"Theoretical computer science",
"Applied mathematics",
"Mathematical logic",
"Software engineering",
"Formal methods"
] |
271,046 | https://en.wikipedia.org/wiki/Resonance%20%28chemistry%29 | In chemistry, resonance, also called mesomerism, is a way of describing bonding in certain molecules or polyatomic ions by the combination of several contributing structures (or forms, also variously known as resonance structures or canonical structures) into a resonance hybrid (or hybrid structure) in valence bond theory. It has particular value for analyzing delocalized electrons where the bonding cannot be expressed by one single Lewis structure. The resonance hybrid is the accurate structure for a molecule or ion; it is an average of the theoretical (or hypothetical) contributing structures.
Overview
Under the framework of valence bond theory, resonance is an extension of the idea that the bonding in a chemical species can be described by a Lewis structure. For many chemical species, a single Lewis structure, consisting of atoms obeying the octet rule, possibly bearing formal charges, and connected by bonds of positive integer order, is sufficient for describing the chemical bonding and rationalizing experimentally determined molecular properties like bond lengths, angles, and dipole moment. However, in some cases, more than one Lewis structure could be drawn, and experimental properties are inconsistent with any one structure. In order to address this type of situation, several contributing structures are considered together as an average, and the molecule is said to be represented by a resonance hybrid in which several Lewis structures are used collectively to describe its true structure. For instance, in NO2–, nitrite anion, the two N–O bond lengths are equal, even though no single Lewis structure has two N–O bonds with the same formal bond order. However, its measured structure is consistent with a description as a resonance hybrid of the two major contributing structures shown above: it has two equal N–O bonds of 125 pm, intermediate in length between a typical N–O single bond (145 pm in hydroxylamine, H2N–OH) and N–O double bond (115 pm in nitronium ion, [O=N=O]+). According to the contributing structures, each N–O bond is an average of a formal single and formal double bond, leading to a true bond order of 1.5. By virtue of this averaging, the Lewis description of the bonding in NO2– is reconciled with the experimental fact that the anion has equivalent N–O bonds.
The resonance hybrid represents the actual molecule as the "average" of the contributing structures, with bond lengths and partial charges taking on intermediate values compared to those expected for the individual Lewis structures of the contributors, were they to exist as "real" chemical entities. The contributing structures differ only in the formal apportionment of electrons to the atoms, and not in the actual physically and chemically significant electron or spin density. While contributing structures may differ in formal bond orders and in formal charge assignments, all contributing structures must have the same number of valence electrons and the same spin multiplicity.
Because electron delocalization lowers the potential energy of a system, any species represented by a resonance hybrid is more stable than any of the (hypothetical) contributing structures. Electron delocalization stabilizes a molecule because the electrons are more evenly spread out over the molecule, decreasing electron-electron repulsion. The difference in potential energy between the actual species and the (computed) energy of the contributing structure with the lowest potential energy is called the resonance energy or delocalization energy. The magnitude of the resonance energy depends on assumptions made about the hypothetical "non-stabilized" species and the computational methods used and does not represent a measurable physical quantity, although comparisons of resonance energies computed under similar assumptions and conditions may be chemically meaningful.
Molecules with an extended π system such as linear polyenes and polyaromatic compounds are well described by resonance hybrids as well as by delocalised orbitals in molecular orbital theory.
Resonance vs isomerism
Resonance is to be distinguished from isomerism. Isomers are molecules with the same chemical formula but are distinct chemical species with different arrangements of atomic nuclei in space. Resonance contributors of a molecule, on the other hand, can only differ in the way electrons are formally assigned to atoms in the Lewis structure depictions of the molecule. Specifically, when a molecular structure is said to be represented by a resonance hybrid, it does not mean that electrons of the molecule are "resonating" or shifting back and forth between several sets of positions, each one represented by a Lewis structure. Rather, it means that the set of contributing structures represents an intermediate structure (a weighted average of the contributors), with a single, well-defined geometry and distribution of electrons. It is incorrect to regard resonance hybrids as rapidly interconverting isomers, even though the term "resonance" might evoke such an image. (As described below, the term "resonance" originated as a classical physics analogy for a quantum mechanical phenomenon, so it should not be construed too literally.) Symbolically, the double-headed arrow A ↔ B is used to indicate that A and B are contributing forms of a single chemical species (as opposed to an equilibrium arrow, e.g., A ⇌ B; see below for details on usage).
A non-chemical analogy is illustrative: one can describe the characteristics of a real animal, the narwhal, in terms of the characteristics of two mythical creatures: the unicorn, a creature with a single horn on its head, and the leviathan, a large, whale-like creature. The narwhal is not a creature that goes back and forth between being a unicorn and being a leviathan, nor do the unicorn and leviathan have any physical existence outside the collective human imagination. Nevertheless, describing the narwhal in terms of these imaginary creatures provides a reasonably good description of its physical characteristics.
Due to confusion with the physical meaning of the word resonance, as no entities actually physically "resonate", it has been suggested that the term resonance be abandoned in favor of delocalization and resonance energy abandoned in favor of delocalization energy. A resonance structure becomes a contributing structure and the resonance hybrid becomes the hybrid structure. The double headed arrows would be replaced by commas to illustrate a set of structures, as arrows of any type may suggest that a chemical change is taking place.
Representation in diagrams
In diagrams, contributing structures are typically separated by double-headed arrows (↔). The arrow should not be confused with the right and left pointing equilibrium arrow (). All structures together may be enclosed in large square brackets, to indicate they picture one single molecule or ion, not different species in a chemical equilibrium.
Alternatively to the use of contributing structures in diagrams, a hybrid structure can be used. In a hybrid structure, pi bonds that are involved in resonance are usually pictured as curves or dashed lines, indicating that these are partial rather than normal complete pi bonds. In benzene and other aromatic rings, the delocalized pi-electrons are sometimes pictured as a solid circle.
History
The concept first appeared in 1899 in Johannes Thiele's "Partial Valence Hypothesis" to explain the unusual stability of benzene which would not be expected from August Kekulé's structure proposed in 1865 with alternating single and double bonds. Benzene undergoes substitution reactions, rather than addition reactions as typical for alkenes. He proposed that the carbon-carbon bond in benzene is intermediate of a single and double bond.
The resonance proposal also helped explain the number of isomers of benzene derivatives. For example, Kekulé's structure would predict four dibromobenzene isomers, including two ortho isomers with the brominated carbon atoms joined by either a single or a double bond. In reality there are only three dibromobenzene isomers and only one is ortho, in agreement with the idea that there is only one type of carbon-carbon bond, intermediate between a single and a double bond.
The mechanism of resonance was introduced into quantum mechanics by Werner Heisenberg in 1926 in a discussion of the quantum states of the helium atom. He compared the structure of the helium atom with the classical system of resonating coupled harmonic oscillators. In the classical system, the coupling produces two modes, one of which is lower in frequency than either of the uncoupled vibrations; quantum mechanically, this lower frequency is interpreted as a lower energy. Linus Pauling used this mechanism to explain the partial valence of molecules in 1928, and developed it further in a series of papers in 1931-1933. The alternative term mesomerism popular in German and French publications with the same meaning was introduced by C. K. Ingold in 1938, but did not catch on in the English literature. The current concept of mesomeric effect has taken on a related but different meaning. The double headed arrow was introduced by the German chemist Fritz Arndt who preferred the German phrase zwischenstufe or intermediate stage.
Resonance theory dominated over the competing Hückel method for two decades because it was easier to understand for chemists without a background in fundamental physics, even if they could not grasp the concept of quantum superposition and confused it with tautomerism. Pauling and Wheland themselves characterized Erich Hückel's approach as "cumbersome" at the time, and his lack of communication skills contributed: when Robert Robinson sent him a friendly request, he responded arrogantly that he was not interested in organic chemistry.
In the Soviet Union, resonance theory – especially as developed by Pauling – was attacked in the early 1950s as being contrary to the Marxist principles of dialectical materialism, and in June 1951 the Soviet Academy of Sciences under the leadership of Alexander Nesmeyanov convened a conference on the chemical structure of organic compounds, attended by 400 physicists, chemists, and philosophers, where "the pseudo-scientific essence of the theory of resonance was exposed and unmasked".
Major and minor contributors
One contributing structure may resemble the actual molecule more than another (in the sense of energy and stability). Structures with a low value of potential energy are more stable than those with high values and resemble the actual structure more. The most stable contributing structures are called major contributors. Energetically unfavourable and therefore less favorable structures are minor contributors. With rules listed in rough order of diminishing importance, major contributors are generally structures that
obey as much as possible the octet rule (8 valence electrons around each atom rather than having deficiencies or surplus, or 2 electrons for Period 1 elements);
have a maximum number of covalent bonds;
carry a minimum of formally charged atoms, with the separation for unlike and like charges minimized and maximized, respectively;
place negative charge, if any, on the most electronegative atoms and positive charge, if any, on the most electropositive;
do not deviate substantially from idealized bond lengths and angles (e.g., the relative unimportance of Dewar-type resonance contributors for benzene);
maintain aromatic substructures locally while avoiding anti-aromatic ones (see Clar sextet and biphenylene).
A maximum of eight valence electrons is strict for the Period 2 elements Be, B, C, N, O, and F, as is a maximum of two for H and He and effectively for Li as well. The issue of expansion of the valence shell of third period and heavier main group elements is controversial. A Lewis structure in which a central atom has a valence electron count greater than eight traditionally implies the participation of d orbitals in bonding. However, the consensus opinion is that while they may make a marginal contribution, the participation of d orbitals is unimportant, and the bonding of so-called hypervalent molecules is, for the most part, better explained by charge-separated contributing forms that depict three-center four-electron bonding. Nevertheless, by tradition, expanded octet structures are still commonly drawn for functional groups like sulfoxides, sulfones, and phosphorus ylides, for example. Regarded as a formalism that does not necessarily reflect the true electronic structure, such depictions are preferred by the IUPAC over structures featuring partial bonds, charge separation, or dative bonds.
Equivalent contributors contribute equally to the actual structure, while the importance of nonequivalent contributors is determined by the extent to which they conform to the properties listed above. A larger number of significant contributing structures and a more voluminous space available for delocalized electrons lead to stabilization (lowering of the energy) of the molecule.
Examples
Aromatic molecules
In benzene the two cyclohexatriene Kekulé structures, first proposed by Kekulé, are taken together as contributing structures to represent the total structure. In the hybrid structure on the right, the dashed hexagon replaces three double bonds, and represents six electrons in a set of three molecular orbitals of π symmetry, with a nodal plane in the plane of the molecule.
In furan a lone pair of the oxygen atom interacts with the π orbitals of the carbon atoms. The curved arrows depict the permutation of delocalized π electrons, which results in different contributors.
Electron-rich molecules
The ozone molecule is represented by two contributing structures. In reality the two terminal oxygen atoms are equivalent and the hybrid structure is drawn on the right with a charge of −1/2 on both oxygen atoms, partial double bonds drawn with a full and a dashed line, and bond order 1.5.
For hypervalent molecules, the rationalization described above can be applied to generate contributing structures to explain the bonding in such molecules. Shown below are the contributing structures of a 3c-4e bond in xenon difluoride.
$\mathsf{F{-}Xe \quad F^- \;\longleftrightarrow\; F^- \quad Xe{-}F}$
Electron-deficient molecules
The allyl cation has two contributing structures with a positive charge on the terminal carbon atoms. In the hybrid structure their charge is +1/2. The full positive charge can also be depicted as delocalized among three carbon atoms.
The diborane molecule is described by contributing structures, each with electron-deficiency on different atoms. This reduces the electron-deficiency on each atom and stabilizes the molecule. Below are the contributing structures of an individual 3c-2e bond in diborane.
Reactive intermediates
Often, reactive intermediates such as carbocations and free radicals have more delocalized structure than their parent reactants, giving rise to unexpected products. The classical example is allylic rearrangement. When 1 mole of HCl adds to 1 mole of 1,3-butadiene, in addition to the ordinarily expected product 3-chloro-1-butene, we also find 1-chloro-2-butene. Isotope labelling experiments have shown that what happens here is that the additional double bond shifts from 1,2 position to 2,3 position in some of the product. This and other evidence (such as NMR in superacid solutions) shows that the intermediate carbocation must have a highly delocalized structure, different from its mostly classical (delocalization exists but is small) parent molecule. This cation (an allylic cation) can be represented using resonance, as shown above.
This observation of greater delocalization in less stable molecules is quite general. The excited states of conjugated dienes are stabilised more by conjugation than their ground states, causing them to become organic dyes.
A well-studied example of delocalization that does not involve π electrons (hyperconjugation) can be observed in the non-classical 2-norbornyl cation. Another example is methanium (CH5+). These can be viewed as containing three-center two-electron bonds and are represented either by contributing structures involving rearrangement of σ electrons or by a special notation, a Y that has the three nuclei at its three points.
Delocalized electrons are important for several reasons; a major one is that an expected chemical reaction may not occur because the electrons delocalize to a more stable configuration, resulting in a reaction that happens at a different location. An example is the Friedel–Crafts alkylation of benzene with 1-chloro-2-methylpropane; the carbocation rearranges to a tert-butyl group stabilized by hyperconjugation, a particular form of delocalization.
Benzene
Bond lengths
Comparing the two contributing structures of benzene, all single and double bonds are interchanged. Bond lengths can be measured, for example using X-ray diffraction. The average length of a C–C single bond is 154 pm; that of a C=C double bond is 133 pm. In localized cyclohexatriene, the carbon–carbon bonds should be alternating 154 and 133 pm. Instead, all carbon–carbon bonds in benzene are found to be about 139 pm, a bond length intermediate between single and double bond. This mixed single and double bond (or triple bond) character is typical for all molecules in which bonds have a different bond order in different contributing structures. Bond lengths can be compared using bond orders. For example, in cyclohexane the bond order is 1 while that in benzene is 1 + (3 ÷ 6) = 1.5. Consequently, benzene has more double bond character and hence has a shorter bond length than cyclohexane.
Resonance energy
Resonance (or delocalization) energy is the amount of energy needed to convert the true delocalized structure into that of the most stable contributing structure. The empirical resonance energy can be estimated by comparing the enthalpy change of hydrogenation of the real substance with that estimated for the contributing structure.
The complete hydrogenation of benzene to cyclohexane via 1,3-cyclohexadiene and cyclohexene is exothermic; 1 mole of benzene delivers 208.4 kJ (49.8 kcal).
Hydrogenation of one mole of double bonds delivers 119.7 kJ (28.6 kcal), as can be deduced from the last step, the hydrogenation of cyclohexene. In benzene, however, 23.4 kJ (5.6 kcal) are needed to hydrogenate one mole of double bonds. The difference, being 143.1 kJ (34.2 kcal), is the empirical resonance energy of benzene. Because 1,3-cyclohexadiene also has a small delocalization energy (7.6 kJ or 1.8 kcal/mol) the net resonance energy, relative to the localized cyclohexatriene, is a bit higher: 151 kJ or 36 kcal/mol.
This measured resonance energy is also the difference between the hydrogenation energy of three 'non-resonance' double bonds and the measured hydrogenation energy:
(3 × 119.7) − 208.4 = 150.7 kJ/mol (36 kcal).
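The same bookkeeping can be reproduced numerically; the short sketch below simply re-evaluates the hydrogenation enthalpies quoted above:

```python
# Empirical resonance energy of benzene from hydrogenation enthalpies (kJ/mol).
cyclohexene_double_bond = 119.7   # one "non-resonance" double bond
benzene_measured = 208.4          # benzene -> cyclohexane

resonance_energy = 3 * cyclohexene_double_bond - benzene_measured
print(f"{resonance_energy:.1f} kJ/mol")   # 150.7
```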
Regardless of their exact values, resonance energies of various related compounds provide insights into their bonding. The resonance energies for pyrrole, thiophene, and furan are, respectively, 88, 121, and
67 kJ/mol (21, 29, and 16 kcal/mol). Thus, these heterocycles are far less aromatic than benzene, as is manifested in the lability of these rings.
Quantum mechanical description in valence bond (VB) theory
Resonance has a deeper significance in the mathematical formalism of valence bond theory (VB). Quantum mechanics requires that the wavefunction of a molecule obey its observed symmetry. If a single contributing structure does not achieve this, resonance is invoked.
For example, in benzene, valence bond theory begins with the two Kekulé structures which do not individually possess the sixfold symmetry of the real molecule. The theory constructs the actual wave function as a linear superposition of the wave functions representing the two structures. As both Kekulé structures have equal energy, they are equal contributors to the overall structure – the superposition is an equally weighted average, or a 1:1 linear combination of the two in the case of benzene. The symmetric combination gives the ground state, while the antisymmetric combination gives the first excited state, as shown.
In general, the superposition is written with undetermined coefficients, which are then variationally optimized to find the lowest possible energy for the given set of basis wave functions. When more contributing structures are included, the molecular wave function becomes more accurate and more excited states can be derived from different combinations of the contributing structures.
Comparison with molecular orbital (MO) theory
In molecular orbital theory, the main alternative to valence bond theory, the molecular orbitals (MOs) are approximated as sums of all the atomic orbitals (AOs) on all the atoms; there are as many MOs as AOs. Each AOi has a weighting coefficient ci that indicates the AO's contribution to a particular MO. For example, in benzene, the MO model gives us 6 π MOs which are combinations of the 2pz AOs on each of the 6 C atoms. Thus, each π MO is delocalized over the whole benzene molecule and any electron occupying an MO will be delocalized over the whole molecule. This MO interpretation has inspired the picture of the benzene ring as a hexagon with a circle inside. When describing benzene, the VB concept of localized σ bonds and the MO concept of delocalized π orbitals are frequently combined in elementary chemistry courses.
The contributing structures in the VB model are particularly useful in predicting the effect of substituents on π systems such as benzene. They lead to the models of contributing structures for an electron-withdrawing group and electron-releasing group on benzene. The utility of MO theory is that a quantitative indication of the charge from the π system on an atom can be obtained from the squares of the weighting coefficient ci on atom Ci. Charge qi ≈ ci². The reason for squaring the coefficient is that if an electron is described by an AO, then the square of the AO gives the electron density. The AOs are adjusted (normalized) so that AO² = 1, and qi ≈ (ciAOi)² ≈ ci². In benzene, qi = 1 on each C atom. With an electron-withdrawing group qi < 1 on the ortho and para C atoms and qi > 1 for an electron-releasing group.
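A minimal numerical sketch of this qi ≈ ci² bookkeeping for benzene, using a bare Hückel π-only model (α = 0, β = −1 in arbitrary units); it is an illustration of the idea rather than the source of any value quoted above:

```python
import numpy as np

# Hückel model of benzene: 6 carbon 2p_z orbitals, nearest-neighbour coupling
# beta = -1, on-site energy alpha = 0 (arbitrary units).
n = 6
H = np.zeros((n, n))
for i in range(n):
    H[i, (i + 1) % n] = H[(i + 1) % n, i] = -1.0

energies, C = np.linalg.eigh(H)     # columns of C are the MO coefficients
occupied = C[:, :3]                 # 6 pi electrons -> 3 doubly occupied MOs

# pi charge on each atom: q_i = sum over occupied MOs of 2 * c_i**2
q = 2.0 * np.sum(occupied ** 2, axis=1)
print(np.round(q, 6))               # [1. 1. 1. 1. 1. 1.] -- one pi electron per C
```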
Coefficients
Weighting of the contributing structures in terms of their contribution to the overall structure can be calculated in multiple ways: using ab initio methods derived from valence bond theory, from the natural bond orbital (NBO) approach of Weinhold (NBO5), or from empirical calculations based on the Hückel method. A Hückel-method-based program for teaching resonance is available on the HuLiS Web site.
Charge delocalization
In the case of ions it is common to speak about delocalized charge (charge delocalization). An example of delocalized charge in ions can be found in the carboxylate group, wherein the negative charge is centered equally on the two oxygen atoms. Charge delocalization in anions is an important factor determining their reactivity (generally: the higher the extent of delocalization the lower the reactivity) and, specifically, the acid strength of their conjugate acids. As a general rule, the better delocalized is the charge in an anion the stronger is its conjugate acid. For example, the negative charge in perchlorate anion () is evenly distributed among the symmetrically oriented oxygen atoms (and a part of it is also kept by the central chlorine atom). This excellent charge delocalization combined with the high number of oxygen atoms (four) and high electronegativity of the central chlorine atom leads to perchloric acid being one of the strongest known acids with a pKa value of −10.
The extent of charge delocalization in an anion can be quantitatively expressed via the WAPS (weighted average positive sigma) parameter; an analogous WANS (weighted average negative sigma) parameter is used for cations.
WAPS and WANS values are given in e/Å4. Larger values indicate more localized charge in the corresponding ion.
See also
Hückel molecular orbital theory
Conjugated system
Fluxional molecule
Avoided crossing
External links
References
Chemical bonding
Physical chemistry
Electronic structure methods | Resonance (chemistry) | [
"Physics",
"Chemistry",
"Materials_science"
] | 5,043 | [
"Applied and interdisciplinary physics",
"Quantum chemistry",
"Quantum mechanics",
"Computational physics",
"Electronic structure methods",
"Computational chemistry",
"Condensed matter physics",
"nan",
"Chemical bonding",
"Physical chemistry"
] |
271,143 | https://en.wikipedia.org/wiki/Fresnel%20integral | The Fresnel integrals S(x) and C(x) are two transcendental functions named after Augustin-Jean Fresnel that are used in optics and are closely related to the error function (erf). They arise in the description of near-field Fresnel diffraction phenomena and are defined through the following integral representations:
$S(x) = \int_0^x \sin\left(t^2\right)\,dt, \qquad C(x) = \int_0^x \cos\left(t^2\right)\,dt.$
The parametric curve (C(t), S(t)) is the Euler spiral or clothoid, a curve whose curvature varies linearly with arclength.
The term Fresnel integral may also refer to the complex definite integral
$\int_{-\infty}^{\infty} e^{iax^2}\,dx = \sqrt{\frac{\pi}{a}}\,e^{i\pi/4},$
where a is real and positive; this can be evaluated by closing a contour in the complex plane and applying Cauchy's integral theorem.
Definition
The Fresnel integrals admit the following power series expansions that converge for all x:
$S(x) = \sum_{n=0}^{\infty} (-1)^n \frac{x^{4n+3}}{(2n+1)!\,(4n+3)}, \qquad C(x) = \sum_{n=0}^{\infty} (-1)^n \frac{x^{4n+1}}{(2n)!\,(4n+1)}.$
Some widely used tables use πt²/2 instead of t² for the argument of the integrals defining S(x) and C(x). This changes their limits at infinity from ½·√(π/2) to ½ and the arc length for the first spiral turn from √(2π) to 2 (at t = 2). These alternative functions are usually known as normalized Fresnel integrals.
Euler spiral
The Euler spiral, also known as a Cornu spiral or clothoid, is the curve generated by a parametric plot of S(t) against C(t). The Euler spiral was first studied in the mid 18th century by Leonhard Euler in the context of Euler–Bernoulli beam theory. A century later, Marie Alfred Cornu constructed the same spiral as a nomogram for diffraction computations.
From the definitions of Fresnel integrals, the infinitesimals dx and dy are thus:
$dx = \cos\left(t^2\right)dt, \qquad dy = \sin\left(t^2\right)dt.$
Thus the length of the spiral measured from the origin can be expressed as
$L = \int_0^{t_0} \sqrt{dx^2 + dy^2} = \int_0^{t_0} dt = t_0.$
That is, the parameter t is the curve length measured from the origin (0, 0), and the Euler spiral has infinite length. The vector (cos(t²), sin(t²)) also expresses the unit tangent vector along the spiral, giving the tangent angle θ = t². Since t is the curve length, the curvature κ can be expressed as
$\kappa = \frac{d\theta}{dt} = 2t.$
Thus the rate of change of curvature with respect to the curve length is
$\frac{d\kappa}{dt} = 2.$
An Euler spiral has the property that its curvature at any point is proportional to the distance along the spiral, measured from the origin. This property makes it useful as a transition curve in highway and railway engineering: if a vehicle follows the spiral at unit speed, the parameter in the above derivatives also represents the time. Consequently, a vehicle following the spiral at constant speed will have a constant rate of angular acceleration.
Sections from Euler spirals are commonly incorporated into the shape of rollercoaster loops to make what are known as clothoid loops.
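A short numerical sketch (not from the original text) that traces the Euler spiral by integrating the defining integrals and checks that the curvature grows linearly with arc length under the t² parameterization used above:

```python
import numpy as np

# Trace the Euler spiral x(t) = C(t), y(t) = S(t) with the t**2 convention,
# integrating cos(u**2) and sin(u**2) with the trapezoidal rule.
t = np.linspace(0.0, 5.0, 5001)
dt = t[1] - t[0]
x = np.concatenate(([0.0], np.cumsum((np.cos(t[:-1] ** 2) + np.cos(t[1:] ** 2)) * dt / 2)))
y = np.concatenate(([0.0], np.cumsum((np.sin(t[:-1] ** 2) + np.sin(t[1:] ** 2)) * dt / 2)))

# Arc length equals t, and the curvature is 2*t: check it numerically at t = 1.
i = 1000                                   # t[i] == 1.0
dxdt, dydt = np.gradient(x, dt), np.gradient(y, dt)
d2x, d2y = np.gradient(dxdt, dt), np.gradient(dydt, dt)
kappa = (dxdt * d2y - dydt * d2x) / (dxdt ** 2 + dydt ** 2) ** 1.5
print(round(float(kappa[i]), 3))           # ~2.0, i.e. curvature = 2 * t at t = 1
```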
Properties
S(x) and C(x) are odd functions of x,
which can be readily seen from the fact that their power series expansions have only odd-degree terms, or alternatively because they are antiderivatives of even functions that also are zero at the origin.
Asymptotics of the Fresnel integrals as x → ∞ are given by the formulas:
$S(x) \sim \sqrt{\frac{\pi}{8}} - \frac{\cos\left(x^2\right)}{2x}, \qquad C(x) \sim \sqrt{\frac{\pi}{8}} + \frac{\sin\left(x^2\right)}{2x}.$
Using the power series expansions above, the Fresnel integrals can be extended to the domain of complex numbers, where they become entire functions of the complex variable z.
The Fresnel integrals can be expressed using the error function as follows:
$C(x) + i\,S(x) = \frac{\sqrt{\pi}}{2}\,e^{i\pi/4}\,\operatorname{erf}\!\left(e^{-i\pi/4}x\right),$
or equivalently
$C(x) = \frac{\sqrt{\pi}}{4}\left[e^{i\pi/4}\operatorname{erf}\!\left(e^{-i\pi/4}x\right) + e^{-i\pi/4}\operatorname{erf}\!\left(e^{i\pi/4}x\right)\right], \qquad S(x) = \frac{\sqrt{\pi}}{4i}\left[e^{i\pi/4}\operatorname{erf}\!\left(e^{-i\pi/4}x\right) - e^{-i\pi/4}\operatorname{erf}\!\left(e^{i\pi/4}x\right)\right].$
Limits as x approaches infinity
The integrals defining C(x) and S(x) cannot be evaluated in closed form in terms of elementary functions, except in special cases. The limits of these functions as x goes to infinity are known:
$\lim_{x\to\infty} C(x) = \lim_{x\to\infty} S(x) = \sqrt{\frac{\pi}{8}} = \frac{1}{2}\sqrt{\frac{\pi}{2}}.$
This can be derived with any one of several methods. One of them uses a contour integral of the function $e^{-z^2}$ around the boundary of the sector-shaped region in the complex plane formed by the positive x-axis, the bisector of the first quadrant y = x with x ≥ 0, and a circular arc of radius R centered at the origin.
As R goes to infinity, the integral along the circular arc tends to 0:
$\left|\int_{\text{arc}} e^{-z^2}\,dz\right| = \left|\int_0^{\pi/4} e^{-R^2 e^{2it}}\,iRe^{it}\,dt\right| \le R\int_0^{\pi/4} e^{-R^2\cos 2t}\,dt \le R\int_0^{\pi/4} e^{-R^2\left(1-\frac{4t}{\pi}\right)}\,dt = \frac{\pi}{4R}\left(1-e^{-R^2}\right) \to 0,$
where polar coordinates $z = Re^{it}$ were used and Jordan's inequality was utilised for the second inequality. The integral along the real axis tends to the half Gaussian integral
$\int_0^{\infty} e^{-t^2}\,dt = \frac{\sqrt{\pi}}{2}.$
Note too that because the integrand is an entire function on the complex plane, its integral along the whole contour is zero. Overall, we must have
$\int_{L} e^{-z^2}\,dz = \int_0^{\infty} e^{-t^2}\,dt = \frac{\sqrt{\pi}}{2},$
where L denotes the bisector of the first quadrant, as in the diagram. To evaluate the left hand side, parametrize the bisector as
$z = te^{i\pi/4} = \frac{\sqrt{2}}{2}(1+i)\,t,$
where t ranges from 0 to +∞. Note that the square of this expression is just it². Therefore, substitution gives the left hand side as
$\int_0^{\infty} e^{-it^2}\,e^{i\pi/4}\,dt.$
Using Euler's formula to take real and imaginary parts of $e^{-it^2}$ gives this as
$\int_0^{\infty}\left(\cos t^2 - i\sin t^2\right)\left(\frac{\sqrt{2}}{2} + i\,\frac{\sqrt{2}}{2}\right)dt = \frac{\sqrt{2}}{2}\int_0^{\infty}\left[\left(\cos t^2 + \sin t^2\right) + i\left(\cos t^2 - \sin t^2\right)\right]dt = \frac{\sqrt{\pi}}{2} + 0i,$
where we have written 0i to emphasize that the original Gaussian integral's value is completely real with zero imaginary part. Letting
$I_C = \int_0^{\infty}\cos t^2\,dt, \qquad I_S = \int_0^{\infty}\sin t^2\,dt,$
and then equating real and imaginary parts produces the following system of two equations in the two unknowns $I_C$ and $I_S$:
$\frac{\sqrt{2}}{2}\left(I_C + I_S\right) = \frac{\sqrt{\pi}}{2}, \qquad \frac{\sqrt{2}}{2}\left(I_C - I_S\right) = 0.$
Solving this for $I_C$ and $I_S$ gives the desired result.
Generalization
The integral
is a confluent hypergeometric function and also an incomplete gamma function
which reduces to Fresnel integrals if real or imaginary parts are taken:
The leading term in the asymptotic expansion is
and therefore
For , the imaginary part of this equation in particular is
with the left-hand side converging for and the right-hand side being its analytical extension to the whole plane less where lie the poles of .
The Kummer transformation of the confluent hypergeometric function is
with
Numerical approximation
For computation to arbitrary precision, the power series is suitable for small argument. For large argument, asymptotic expansions converge faster. Continued fraction methods may also be used.
For computation to particular target precision, other approximations have been developed. Cody developed a set of efficient approximations based on rational functions that give relative errors down to . A FORTRAN implementation of the Cody approximation that includes the values of the coefficients needed for implementation in other languages was published by van Snyder. Boersma developed an approximation with error less than .
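As a small numerical cross-check (a sketch, not one of the published approximations cited above), the power series can be compared against SciPy's implementation; note that scipy.special.fresnel uses the normalized convention with argument πt²/2:

```python
import numpy as np
from math import factorial
from scipy.special import fresnel

# Power-series evaluation of the normalized Fresnel integrals
#   S(x) = int_0^x sin(pi t^2 / 2) dt,  C(x) = int_0^x cos(pi t^2 / 2) dt,
# which is the convention used by scipy.special.fresnel.
def fresnel_series(x: float, terms: int = 30):
    s = sum((-1) ** n * (np.pi / 2) ** (2 * n + 1) * x ** (4 * n + 3)
            / (factorial(2 * n + 1) * (4 * n + 3)) for n in range(terms))
    c = sum((-1) ** n * (np.pi / 2) ** (2 * n) * x ** (4 * n + 1)
            / (factorial(2 * n) * (4 * n + 1)) for n in range(terms))
    return s, c

x = 1.2
print(fresnel_series(x))   # small-argument series
print(fresnel(x))          # scipy returns (S(x), C(x)) in the same convention
```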
Applications
The Fresnel integrals were originally used in the calculation of the electromagnetic field intensity in an environment where light bends around opaque objects. More recently, they have been used in the design of highways and railways, specifically their curvature transition zones, see track transition curve. Other applications are rollercoasters or calculating the transitions on a velodrome track to allow rapid entry to the bends and gradual exit.
Gallery
See also
Böhmer integral
Fresnel zone
Track transition curve
Euler spiral
Zone plate
Dirichlet integral
Notes
References
(Uses πt²/2 instead of t².)
External links
Cephes, free/open-source C++/C code to compute Fresnel integrals among other special functions. Used in SciPy and ALGLIB.
Faddeeva Package, free/open-source C++/C code to compute complex error functions (from which the Fresnel integrals can be obtained), with wrappers for Matlab, Python, and other languages.
Integral calculus
Spirals
Physical optics
Special functions
Special hypergeometric functions
Analytic functions
Diffraction | Fresnel integral | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,359 | [
"Special functions",
"Spectrum (physical sciences)",
"Calculus",
"Combinatorics",
"Crystallography",
"Diffraction",
"Spectroscopy",
"Integral calculus"
] |
3,570,657 | https://en.wikipedia.org/wiki/Wigner%27s%20theorem | Wigner's theorem, proved by Eugene Wigner in 1931, is a cornerstone of the mathematical formulation of quantum mechanics. The theorem specifies how physical symmetries such as rotations, translations, and CPT transformations are represented on the Hilbert space of states.
The physical states in a quantum theory are represented by unit vectors in Hilbert space up to a phase factor, i.e. by the complex line or ray the vector spans. In addition, by the Born rule the squared absolute value of the unit vector's inner product with a unit eigenvector, or equivalently the cosine squared of the angle between the lines the vectors span, corresponds to the transition probability. Ray space, in mathematics known as projective Hilbert space, is the space of all unit vectors in Hilbert space up to the equivalence relation of differing by a phase factor. By Wigner's theorem, any transformation of ray space that preserves the absolute value of the inner products can be represented by a unitary or antiunitary transformation of Hilbert space, which is unique up to a phase factor. As a consequence, the representation of a symmetry group on ray space can be lifted to a projective representation or sometimes even an ordinary representation on Hilbert space.
Rays and ray space
It is a postulate of quantum mechanics that state vectors in complex separable Hilbert space that are scalar nonzero multiples of each other represent the same pure state, i.e., the vectors Ψ and λΨ, with λ ≠ 0, represent the same state. By multiplying a state vector with nonzero scalars, one obtains a set of vectors called the ray
$R_\Psi = \{\lambda\Psi : \lambda \in \mathbb{C},\ \lambda \neq 0\}.$
Two nonzero vectors Ψ and Φ define the same ray if and only if they differ by some nonzero complex number: Φ = λΨ with λ ≠ 0.
Alternatively, we can consider a ray as a set of vectors with norm 1, a unit ray, by intersecting the line R_Ψ with the unit sphere of the Hilbert space,
$S(H) = \{\Phi \in H : \|\Phi\| = 1\}.$
Two unit vectors Ψ and Ψ′ then define the same unit ray if they differ by a phase factor: Ψ′ = e^{iα}Ψ.
This is the more usual picture in physics.
The set of rays is in one to one correspondence with the set of unit rays and we can identify them.
There is also a one-to-one correspondence between physical pure states and (unit) rays given by
$R_\Psi \leftrightarrow P_\Psi,$
where P_Ψ is the orthogonal projection on the line ℂΨ. In either interpretation, if Φ ∈ R_Ψ or P_Φ = P_Ψ, then Φ is a representative of R_Ψ.
The space of all rays is a projective Hilbert space called the ray space. It can be defined in several ways. One may define an equivalence relation ∼ on H ∖ {0} by
$\Psi \sim \Phi \iff \Phi = \lambda\Psi \ \text{ for some } \lambda \in \mathbb{C},\ \lambda \neq 0,$
and define ray space as the quotient set
$\mathbf{P}(H) = \left(H \setminus \{0\}\right)/{\sim}.$
Alternatively, for an equivalence relation ≈ on the sphere S(H), the unit ray space is an incarnation of ray space defined (making no notational distinction with ray space) as the set of equivalence classes
$\mathbf{P}(H) = S(H)/{\approx}, \qquad \Psi \approx \Phi \iff \Phi = e^{i\alpha}\Psi.$
A third equivalent definition of ray space is as pure state ray space, i.e. as density matrices that are orthogonal projections of rank 1:
$\mathbf{P}(H) = \{P : P^2 = P = P^\dagger,\ \operatorname{tr} P = 1\}.$
If H is n-dimensional, i.e., H ≅ $\mathbb{C}^n$, then the ray space is isomorphic to the complex projective space $\mathbb{C}P^{n-1}$. For example, the states of a two-level system,
$\Psi = \alpha\Psi_1 + \beta\Psi_2, \qquad |\alpha|^2 + |\beta|^2 = 1,$
generate points on the Bloch sphere, isomorphic to the Riemann sphere $\mathbb{C}P^1$.
Ray space (i.e. projective space) is not a vector space but rather a set of vector lines (vector subspaces of dimension one) in a vector space of dimension . For example, for every two vectors and ratio of complex numbers (i.e. element of ) there is a well defined ray . As such, for distinct rays (i.e. linearly independent lines) there is a projective line of rays of the form in : all 1-dimensional complex lines in the 2-dimensional complex plane spanned by and . Contrarily to the case of vector spaces, however, an independent spanning set does not suffice for defining coordinates (see: projective frame).
The Hilbert space structure on H defines additional structure on ray space. Define the ray correlation (or ray product)
$R_\Psi \cdot R_\Phi = \frac{|\langle \Psi, \Phi \rangle|}{\|\Psi\|\,\|\Phi\|},$
where ⟨ , ⟩ is the Hilbert space inner product, and Ψ, Φ are representatives of R_Ψ and R_Φ. Note that the righthand side is independent of the choice of representatives.
The physical significance of this definition is that according to the Born rule, another postulate of quantum mechanics, the transition probability between normalised states Ψ and Φ in Hilbert space is given by
$P(\Psi \to \Phi) = |\langle \Psi, \Phi \rangle|^2,$
i.e. we can define Born's rule on ray space by
$P(R_\Psi \to R_\Phi) = \left(R_\Psi \cdot R_\Phi\right)^2.$
Geometrically, we can define an angle θ with 0 ≤ θ ≤ π/2 between the lines
R_Ψ and R_Φ by cos θ = R_Ψ · R_Φ. The angle θ then turns out to satisfy the triangle inequality and defines a metric structure on ray space which comes from a Riemannian metric, the Fubini–Study metric.
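A small numerical illustration (not part of the formal development above) of the fact that the ray product, and hence the Born-rule transition probability, does not depend on the phase of the chosen representatives:

```python
import numpy as np

# The ray product of two states depends only on the rays, not on the
# particular normalized representatives: multiplying either vector by a
# phase factor leaves |<psi, phi>| unchanged.
rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v)

psi = normalize(rng.normal(size=3) + 1j * rng.normal(size=3))
phi = normalize(rng.normal(size=3) + 1j * rng.normal(size=3))

ray_product = abs(np.vdot(psi, phi))
ray_product_rephased = abs(np.vdot(np.exp(1j * 0.7) * psi,
                                   np.exp(1j * 2.1) * phi))

print(np.isclose(ray_product, ray_product_rephased))   # True
print("transition probability:", ray_product ** 2)     # Born rule on rays
```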
Symmetry transformations
Loosely speaking, a symmetry transformation is a change in which "nothing happens" or a "change in our point of view" that does not change the outcomes of possible experiments. For example, translating a system in a homogeneous environment should have no qualitative effect on the outcomes of experiments made on the system. Likewise for rotating a system in an isotropic environment. This becomes even clearer when one considers the mathematically equivalent passive transformations, i.e. simply changes of coordinates and let the system be. Usually, the domain and range Hilbert spaces are the same. An exception would be (in a non-relativistic theory) the Hilbert space of electron states that is subjected to a charge conjugation transformation. In this case the electron states are mapped to the Hilbert space of positron states and vice versa. However this means that the symmetry acts on the direct sum of the Hilbert spaces.
A transformation of a physical system is a transformation of states, hence mathematically a transformation, not of the Hilbert space, but of its ray space. Hence, in quantum mechanics, a transformation of a physical system gives rise to a bijective ray transformation
Since the composition of two physical transformations and the reversal of a physical transformation are also physical transformations, the set of all ray transformations so obtained is a group acting on . Not all bijections of are permissible as symmetry transformations, however. Physical transformations must preserve Born's rule.
For a physical transformation, the transition probabilities in the transformed and untransformed systems should be preserved:
A bijective ray transformation is called a symmetry transformation if and only if it preserves the ray product.
A geometric interpretation is that a symmetry transformation is an isometry of ray space.
Some facts about symmetry transformations that can be verified using the definition:
The product of two symmetry transformations, i.e. two symmetry transformations applied in succession, is a symmetry transformation.
Any symmetry transformation has an inverse.
The identity transformation is a symmetry transformation.
Multiplication of symmetry transformations is associative.
The set of symmetry transformations thus forms a group, the symmetry group of the system. Some important frequently occurring subgroups in the symmetry group of a system are realizations of
The symmetric group with its subgroups. This is important for the exchange of particle labels.
The Poincaré group. It encodes the fundamental symmetries of spacetime [NB: a symmetry is defined above as a map on the ray space describing a given system, the notion of symmetry of spacetime has not been defined and is not clear].
Internal symmetry groups like SU(2) and SU(3). They describe so-called internal symmetries, such as isospin and color charge, peculiar to quantum mechanical systems.
These groups are also referred to as symmetry groups of the system.
Statement of Wigner's theorem
Preliminaries
Some preliminary definitions are needed to state the theorem. A transformation between Hilbert spaces is unitary if it is bijective, linear, and preserves the inner product.
If the domain and range Hilbert spaces coincide, it reduces to a unitary operator whose inverse is equal to its adjoint.
Likewise, a transformation between Hilbert spaces is antiunitary if it is bijective, antilinear, and maps the inner product to its complex conjugate.
Given a unitary transformation between Hilbert spaces, define
This is a symmetry transformation since
In the same way an antiunitary transformation between Hilbert spaces induces a symmetry transformation. One says that a transformation between Hilbert spaces is compatible with the transformation between ray spaces if or equivalently
for all .
Statement
Wigner's theorem states a converse of the above:
Proofs can be found in the references listed below.
Antiunitary transformations are less prominent in physics. They are all related to a reversal of the direction of the flow of time.
Remark 1: The significance of the uniqueness part of the theorem is that it specifies the degree of uniqueness of the representation on . For example, one might be tempted to believe that
would be admissible, with for but this is not the case according to the theorem. In fact such a would not be additive.
Remark 2: Whether must be represented by a unitary or antiunitary operator is determined by topology. If , the second cohomology has a unique generator such that for one (equivalently, for every) complex projective line , one has . Since is a homeomorphism, also generates and so we have . If is unitary, then while if is antilinear then .
Remark 3: Wigner's theorem is in close connection with the fundamental theorem of projective geometry
Representations and projective representations
If is a symmetry group (in this latter sense of being embedded as a subgroup of the symmetry group of the system acting on ray space), and if with , then
where the are ray transformations. From the uniqueness part of Wigner's theorem, one has for the compatible representatives ,
where ω(g, h) is a phase factor.
The function ω is called a 2-cocycle or Schur multiplier. A map satisfying the above relation for some vector space is called a projective representation or a ray representation. If ω ≡ 1, then it is called a representation.
One should note that the terminology differs between mathematics and physics. In the linked article, the term projective representation has a slightly different meaning, but the term as presented here enters as an ingredient and the mathematics per se is of course the same. If the realization of the symmetry group, , is given in terms of action on the space of unit rays , then it is a projective representation in the mathematical sense, while its representative on Hilbert space is a projective representation in the physical sense.
Applying the last relation (several times) to the product and appealing to the known associativity of multiplication of operators on , one finds
They also satisfy
Upon redefinition of the phases,
which is allowed by the last theorem, one finds
where the hatted quantities are defined by
Utility of phase freedom
The following rather technical theorems and many more can be found, with accessible proofs, in the references.
The freedom of choice of phases can be used to simplify the phase factors. For some groups the phase can be eliminated altogether.
In the case of the Lorentz group and its subgroup the rotation group SO(3), phases can, for projective representations, be chosen such that ω(g, h) = ±1. For their respective universal covering groups, SL(2,C) and Spin(3), it is according to the theorem possible to have ω = 1, i.e. they are proper representations.
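A standard illustration of such an irremovable sign is the spin-1/2 projective representation of SO(3). The following sketch (in Python, assuming NumPy and SciPy; the choice of rotations is arbitrary) composes two rotations by π about the x- and y-axes, which in SO(3) equals a rotation by π about the z-axis, and shows that the corresponding SU(2) matrices agree only up to the phase ω = −1:

# Illustrative sketch: the spin-1/2 projective phase for SO(3).
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def U(pauli, angle):
    # SU(2) element representing a rotation by `angle` about the axis of the given Pauli matrix
    return expm(-1j * angle / 2 * pauli)

# Rotation by pi about x followed by pi about y equals a rotation by pi about z in SO(3) ...
lhs = U(sy, np.pi) @ U(sx, np.pi)
rhs = U(sz, np.pi)
# ... but the SU(2) representatives agree only up to a phase factor omega.
omega = lhs[0, 0] / rhs[0, 0]
print(np.round(omega, 6))          # -1
assert np.allclose(lhs, omega * rhs)

Working in SU(2) itself, no such sign appears: the covering group carries a proper (ordinary) representation.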
The study of redefinition of phases involves group cohomology. Two functions related as the hatted and non-hatted versions of above are said to be cohomologous. They belong to the same second cohomology class, i.e. they are represented by the same element in , the second cohomology group of . If an element of contains the trivial function , then it is said to be trivial. The topic can be studied at the level of Lie algebras and Lie algebra cohomology as well.
Assuming the projective representation is weakly continuous, two relevant theorems can be stated. An immediate consequence of (weak) continuity is that the identity component is represented by unitary operators.
Modifications and generalizations
Wigner's theorem applies to automorphisms on the Hilbert space of pure states. Theorems by Kadison and Simon apply to the space of mixed states (trace-class positive operators) and use slightly different notions of symmetry.
See also
Particle physics and representation theory
Remarks
Notes
References
Further reading
Hilbert spaces
Theorems in quantum mechanics | Wigner's theorem | [
"Physics",
"Mathematics"
] | 2,403 | [
"Theorems in quantum mechanics",
"Equations of physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Hilbert spaces",
"Physics theorems"
] |
3,573,402 | https://en.wikipedia.org/wiki/Choke%20valve | In internal combustion engines with carburetors, a choke valve or choke modifies the air pressure in the intake manifold, thereby altering the air–fuel ratio entering the engine. Choke valves are generally used in naturally aspirated engines to supply a richer fuel mixture when starting the engine. Most choke valves in engines are butterfly valves mounted upstream of the carburetor jet to produce a higher partial vacuum, which increases the fuel draw.
In heavy industrial or fluid engineering contexts, including oil and gas production, a choke valve or choke is a particular design of valve with a solid cylinder placed inside another slotted or perforated cylinder.
Carburetor
A choke valve is sometimes installed in the carburetor of internal combustion engines. Its purpose is to restrict the flow of air, thereby enriching the fuel-air mixture while starting the engine. Depending on engine design and application, the valve can be activated manually by the operator of the engine (via a lever or pull handle) or automatically by a temperature-sensitive mechanism called an automatic choke.
Choke valves are important for naturally-aspirated gasoline engines because small droplets of gasoline do not evaporate well within a cold engine. By restricting the flow of air into the throat of the carburetor, the choke valve reduces the pressure inside the throat, which causes a proportionally greater amount of fuel to be pushed from the main jet into the combustion chamber during cold-running operation. Once the engine is warm (from combustion), opening the choke valve restores the carburetor to normal operation, supplying fuel and air in the correct stoichiometric ratio for clean, efficient combustion.
The term "choke" is applied to the carburetor's enrichment device even when it works by a totally different method. Commonly, SU carburettors have "chokes" that work by lowering the fuel jet to a narrower part of the needle. Some others work by introducing an additional fuel route to the constant depression chamber.
Chokes were nearly universal in automobiles until fuel injection began to supplant carburetors. Choke valves are still common in other internal-combustion engines, including most small portable engines, motorcycles, small propeller-driven airplanes, riding lawn mowers, and normally-aspirated marine engines.
Industrial
In the extraction of petroleum (and other heavy-duty fluid handling contexts), a choke valve (or "choke") is an adjustable flow limiter that is designed to operate at a large pressure drop, at a large flow rate, for a long time. A choke is often a part of the "Christmas tree" at the wellhead.
The most familiar choke design is a solid cylinder (called a "plug" or "stem") that closely fits inside another cylinder that has multiple small holes through it (the "cage"). Gradually withdrawing the plug uncovers more and more holes, progressively reducing the resistance to flow. If the holes are regularly placed, then the relationship between the position of the valve and the flow coefficient (Cv) (the flow rate per unit pressure) is roughly linear. Another design places a closely fitted cylindrical "sleeve" around the outside of the cage rather than a plug inside the cage. A choke may also include a conical valve and valve seat, to ensure complete shutoff.
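As a rough illustrative sketch of this geometry (in Python; the number of holes and the fully open flow coefficient are invented values), the fraction of hole area uncovered by the plug, and hence approximately the flow coefficient Cv, grows roughly linearly with plug travel when the holes are evenly spaced:

# Illustrative sketch: approximate Cv of a plug-and-cage choke versus plug travel,
# assuming evenly spaced holes of equal area and Cv proportional to uncovered area.
n_holes = 40              # total holes in the cage (assumed)
cv_full_open = 25.0       # Cv with every hole uncovered (assumed)

def approx_cv(travel_fraction):
    """Fraction of plug travel (0 = fully inserted, 1 = fully withdrawn) -> approximate Cv."""
    uncovered = round(n_holes * travel_fraction)   # holes exposed so far
    return cv_full_open * uncovered / n_holes

for travel in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(travel, approx_cv(travel))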
Fluids flowing into the cage (through all uncovered holes) enter from all sides, producing fluid jets. The jets collide at the center of the cage cylinder, dissipating most of their energy through fluid impinging on fluid, producing less friction and cavitation erosion of the metal valve body. For highly erosive or corrosive fluids, chokes can be made of tungsten carbide or inconel.
References
External links
Short video of a choke valve operating on a chainsaw carburetor from HowStuffWorks
Automotive engine technologies
Carburettors
Valves | Choke valve | [
"Physics",
"Chemistry"
] | 795 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
4,819,374 | https://en.wikipedia.org/wiki/Standard%20time%20%28manufacturing%29 | In industrial engineering, the standard time is the time required by an average skilled operator, working at
a normal pace, to perform a specified task using a prescribed method. It includes appropriate allowances to allow the person to recover from fatigue and, where necessary, an additional allowance to cover contingent elements which may occur but have not been observed.
Standard time = normal time + allowance
Where;
Normal time = average time × rating factor (take rating factor between 1.1 and 1.2)
Usage of the standard time
Staffing (or workforce planning): the number of workers required cannot accurately be determined unless the time required to process the existing work is known.
Line balancing (or production leveling): the correct number of workstations for optimum work flow depends on the processing time, or standard, at each workstation.
Materials requirement planning (MRP): MRP systems cannot operate properly without accurate work standards.
System simulation: simulation models cannot accurately simulate operation unless times for all operations are known.
Wage payment: comparing expected performance with actual performance requires the use of work standards.
Cost accounting: work standards are necessary for determining not only the labor component of costs, but also the correct allocation of production costs to specific products.
Employee evaluation: in order to assess whether individual employees are performing as well as they should, a performance standard is necessary against which to measure the level of performance.
Techniques to establish a standard time
The standard time can be determined using the following techniques:
Time study
Predetermined motion time system aka PMTS or PTS
Standard data system
Work sampling
Method of calculation
The Standard Time is the product of three factors:
Observed time: The time measured to complete the task.
Performance rating factor: The pace at which the person is working, expressed relative to a normal pace. 90% means working slower than normal, 110% means working faster than normal, and 100% is normal. This factor is determined by an experienced observer who is trained to assess the rating.
Personal, fatigue, and delay (PFD) allowance.
The standard time can then be calculated as: standard time = (observed time × performance rating factor) × (1 + PFD allowance).
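A minimal numerical sketch of this calculation (in Python; the observed time, rating and allowance values are invented for illustration):

# Illustrative sketch: computing a standard time from observed time, rating and allowance.
def standard_time(observed_time, rating_factor, pfd_allowance):
    """observed_time in minutes, rating_factor e.g. 1.10 for 110%, pfd_allowance e.g. 0.15 for 15%."""
    normal_time = observed_time * rating_factor
    return normal_time * (1.0 + pfd_allowance)

print(standard_time(4.0, 1.10, 0.15))   # 5.06 minutes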
References
Citations
Groover, M. P. (2007). Work systems: the methods, measurement and management of work, Prentice Hall,
Salvendy, G. (Ed.) (2001). Handbook of Industrial Engineering: Technology and Operations Management, third edition, John Wiley & Sons, Hoboken, NJ.
Zandin, K. (Ed.) (2001). Maynard's Industrial Engineering Handbook, fifth edition, McGraw-Hill, New York, NY.
External links
Standard Performance
Industrial engineering
Time and motion study | Standard time (manufacturing) | [
"Engineering"
] | 533 | [
"Time and motion study",
"Industrial engineering"
] |
4,819,510 | https://en.wikipedia.org/wiki/Integral%20linearity | A measurement system consists of a sensor, to input the physical parameter that is of interest, and an output to a medium that is suitable for reading by the system that needs to know the value of the parameter. (This could be a device to convert the temperature of the surrounding air or water into the visually readable height of a column of mercury in a small tube, for example; but the conversion could also be made to an electronic encoding of the parameter, for reading by a computer system.)
The integral linearity is then a measure of the fidelity of the conversion that is performed by the measuring system. It is the relation of the output to the input over a range expressed as a percentage of the full-scale measurements. Integral linearity is a measure of the device's deviation from ideal linear behaviour.
The most commonly used form of integral linearity is independent linearity.
In the context of a digital-to-analog converter (DAC) or an analog-to-digital converter (ADC), independent linearity is fitted to minimize the deviation with respect to the ideal behaviour with no constraints. Other types of integral linearity place constraints on the symmetry or end points of the linear fit with respect to the actual data.
In the case of position sensors, two general types exist. Differences between the two regarding independent linearity essentially relate to the type of mechanical interface - linear or rotary. For rotary position sensors, as a shaft (or in the case of magnetic sensors, a magnet) is turned over a defined mechanical range in a direction causing an increasing response, an output voltage changes from a minimum to maximum value. The variation from an ideal linear relationship as this device is changed from minimum to maximum range end-points is the independent linearity error. It is measured in a practical sense as deviation of output voltage as a percentage of input voltage with the maximum value as the range is traversed, usually being referred to in a device's specifications. The same description holds for linear position sensors except that a straight rod (or magnet) is moved along the length of the sensor or as it extends from the end of a linear position sensor.
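As a rough numerical sketch of how independent linearity can be estimated (in Python with NumPy; the sensor data below are synthetic), an unconstrained straight line is fitted to the output-versus-input data and the largest deviation is reported as a percentage of full scale:

# Illustrative sketch: independent linearity error as the maximum deviation from an
# unconstrained least-squares line, expressed as a percentage of full-scale output.
import numpy as np

position = np.linspace(0.0, 10.0, 11)                 # input, e.g. mm of travel (synthetic)
voltage = 0.5 * position + 0.02 * np.sin(position)    # sensor output with a small nonlinearity

slope, intercept = np.polyfit(position, voltage, 1)   # best-fit line, no end-point constraints
deviation = voltage - (slope * position + intercept)
full_scale = voltage.max() - voltage.min()

independent_linearity_pct = 100.0 * np.max(np.abs(deviation)) / full_scale
print(independent_linearity_pct)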
Notes
Measurement | Integral linearity | [
"Physics",
"Mathematics"
] | 439 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
4,821,621 | https://en.wikipedia.org/wiki/PAX3 | The PAX3 (paired box gene 3) gene encodes a member of the paired box or PAX family of transcription factors. The PAX family consists of nine human (PAX1-PAX9) and nine mouse (Pax1-Pax9) members arranged into four subfamilies. Human PAX3 and mouse Pax3 are present in a subfamily along with the highly homologous human PAX7 and mouse Pax7 genes. The human PAX3 gene is located in the 2q36.1 chromosomal region, and contains 10 exons within a 100 kb region.
Transcript splicing
Alternative splicing and processing generates multiple PAX3 isoforms that have been detected at the mRNA level. PAX3e is the longest isoform and consists of 10 exons that encode a 505 amino acid protein. In other mammalian species, including mouse, the longest mRNAs correspond to the human PAX3c and PAX3d isoforms, which consist of the first 8 or 9 exons of the PAX3 gene, respectively. Shorter PAX3 isoforms include mRNAs that skip exon 8 (PAX3g and PAX3h) and mRNAs containing 4 or 5 exons (PAX3a and PAX3b). In limited studies comparing isoform expression, PAX3d is expressed at the highest levels. From a functional standpoint, PAX3c, PAX3d, and PAX3h stimulate activities such as cell growth whereas PAX3e and PAX3g inhibit these activities, and PAX3a and PAX3b show no activity or inhibit these endpoints.
A common alternative splice affecting the PAX3 mRNA involves the sequence CAG at the 5’ end of exon 3. This splice either includes or excludes these three bases, thus resulting in the presence or absence of a glutamine residue in the paired box motif. Limited sequencing studies of full-length human cDNAs identified this splicing event as a variant of the PAX3d isoform, and this spliced isoform has been separately termed the PAX3i isoform. The Q+ and Q− isoforms of PAX3 are generally co-expressed in cells. At the functional level, the Q+ isoform shows similar or less DNA binding and transcriptional activation than the Q− isoform.
Protein structure and function
PAX3 encodes a transcription factor with an N-terminal DNA binding domain consisting of a paired box (PD) encoded by exons 2, 3, and 4, and an octapeptide and complete homeodomain (HD) encoded by exons 5 and 6. In addition, the PAX3 protein has a C-terminal transcriptional activation domain encoded by exons 7 and 8. The highly conserved PD consists of a 128 amino acid region that binds to DNA sequences related to the TCACGC/G motif. The HD motif usually consists of 60 amino acids and binds to sequences containing a TAAT core motif. The combination of these two DNA binding domains enable the PAX3 protein to recognize longer sequences containing PD and HD binding sites. In the C-terminus of PAX3, there is a proline, serine and threonine (PST)-rich region measuring 78 amino acids that functions to stimulate transcriptional activity. There are also transcriptional repression domains in the HD and N-terminal region (including the first half of the PD) that repress the C-terminal transcriptional activation domain.
PAX3 functions as a transcriptional activator for most target genes, but also may repress a smaller subset of target genes. These expression changes are effected through binding of PAX3 to specific recognition sites, which are situated in various genomic locations. Some binding sites are located in or near target genes, such as the 5’ promoter, first intron and 3’ untranslated region. A substantial number of PAX3 binding sites are located at larger distances upstream and downstream of target genes. Among the PAX3 target genes, there is one group associated with muscle development and a second group associated with neural and melanocyte development. The proteins encoded by these target genes regulate various functional activities in these lineages, including differentiation, proliferation, migration, adhesion, and apoptosis.
PAX3 interacts with other nuclear proteins, which modulate PAX3 transcriptional activity. Dimerization of PAX3 with another PAX3 molecule or a PAX7 molecule enables binding to a palindromic HD binding site (TAATCAATTA). Interaction of PAX3 with other transcription factors (such as SOX10) or chromatin factors (such as PAX3/7BP) enables synergistic activation of PAX3 target genes. In contrast, binding of PAX3 to co-repressors, such as calmyrin, inhibits activation of PAX3 target genes. These co-repressors may function by altering chromatin structure at target genes, inhibiting PAX3 recognition of its DNA binding site or directly altering PAX3 transcriptional activity.
Finally, PAX3 protein expression and function can be modulated by post-translational modifications. PAX3 can be phosphorylated at serines 201, 205 and 209 by kinases such as GSK3b, which in some settings will increase PAX3 protein stability. In addition, PAX3 can also undergo ubiquitination and acetylation at lysines 437 and 475, which regulates protein stability and function.
Table 1. Representative PAX3 transcriptional target genes.
Expression during development
During development, one of the major lineages expressing Pax3 is the skeletal muscle lineage. Pax3 expression is first seen in the pre-somitic paraxial mesoderm, and then ultimately becomes restricted to the dermomyotome, which forms from the dorsal region of the somites. To form skeletal muscle in central body segments, PAX3-expressing cells detach from the dermomyotome and then Pax3 expression is turned off as Myf5 and MyoD1 expression is activated. To form other skeletal muscles, Pax3-expressing cells detach from the dermomyotome and migrate to more distant sites, such as the limbs and diaphragm. A subset of these Pax3-expressing dermomyotome-derived cells also serves as an ongoing progenitor pool for skeletal muscle growth during fetal development. During later developmental stages, myogenic precursors expressing Pax3 and/or Pax7 form satellite cells within the skeletal muscle, which contribute to postnatal muscle growth and muscle regeneration. These adult satellite cells remain quiescent until injury occurs, and then are stimulated to divide and regenerate the injured muscle.
Pax3 is also involved in the development of the nervous system. Expression of Pax3 is first detected in the dorsal region of the neural groove and, as this neural groove deepens to form the neural tube, Pax3 is expressed in the dorsal portion of the neural tube. As the neural tube enlarges, Pax3 expression is localized to proliferative cells in the inner ventricular zone and then this expression is turned off as these cells migrate to more superficial regions. Pax3 is expressed along the length of the neural tube and throughout much of the developing brain, and this expression is subsequently turned off during later developmental stages in a rostral to caudal direction.
During early development, Pax3 expression also occurs at the lateral and posterior margins of the neural plate, which is the region from which the neural crest arises. Pax3 is later expressed by various cell types and structures arising from the neural crest, such as melanoblasts, Schwann cell precursors, and dorsal root ganglia. In addition, Pax3-expressing cells derived from the neural crest contribute to the formation of other structures, such as the inner ear, mandible and maxilla.
Pax3 controls the location of the nasion (a facial feature between the eyes and at the top of the nose), and is associated with the presence of a unibrow.
Germline mutations in disease
Germline mutations of the Pax3 gene cause the splotch phenotype in mice. At the molecular level, this phenotype is caused by point mutations or deletions that alter or abolish Pax3 transcriptional function. In the heterozygous state, the splotch phenotype is characterized by white patches in the belly, tail and feet. These white spots are attributed to localized deficiencies in pigment-forming melanocytes resulting from neural crest cell defects. In the homozygous state, these Pax3 mutations cause embryonic lethality, which is associated with prominent neural tube closure defects and abnormalities of neural crest-derived structures, such as melanocytes, dorsal root ganglia and enteric ganglia. Heart malformations also result from the loss of cardiac neural crest cells, which normally contribute to the cardiac outflow tract and innervation of the heart. Finally, limb musculature does not develop in the homozygotes and axial musculature demonstrates varying abnormalities. These myogenic effects are caused by increased cell death of myogenic precursors in the dermomyotome and diminished migration from the dermomyotome.
Germline mutations of the PAX3 gene occur in the human disease Waardenburg syndrome, which consists of four autosomal dominant genetic disorders (WS1, WS2, WS3 and WS4). Of the four subtypes, WS1 and WS3 are usually caused by PAX3 mutations. All four subtypes are characterized by hearing loss, eye abnormalities and pigmentation disorders. In addition, WS1 is frequently associated with a midfacial alteration called dystopia canthorum, while WS3 (Klein-Waardenburg syndrome) is frequently distinguished by musculoskeletal abnormalities affecting the upper limbs. Most WS1 cases are caused by heterozygous PAX3 mutations while WS3 is caused by either partial or total deletion of PAX3 and contiguous genes or by smaller PAX3 mutations in the heterozygous or homozygous state. These PAX3 mutations in WS1 and WS3 include missense, nonsense and splicing mutations; small insertions; and small or gross deletions. Though these changes are usually not recurrent, the mutations generally occur in exons 2 through 6 with exon 2 mutations being most common. As these exons encode the paired box and homeodomain, these mutations often affect DNA binding function.
Mutations in human cancer
Alveolar rhabdomyosarcoma (ARMS) is an aggressive soft tissue sarcoma that occurs in children and is usually characterized by a recurrent t(2;13)(q35;q14) chromosomal translocation. This 2;13 translocation breaks and rejoins portions of the PAX3 and FOXO1 genes to generate a PAX3-FOXO1 fusion gene that expresses a PAX3-FOXO1 fusion transcript encoding a PAX3-FOXO1 fusion protein. PAX3 and FOXO1 encode transcription factors, and the translocation results in a fusion transcription factor containing the N-terminal PAX3 DNA-binding domain and the C-terminal FOXO1 transactivation domain. A smaller subset of ARMS cases is associated with less common fusions of PAX7 to FOXO1 or rare fusions of PAX3 to other transcription factors, such as NCOA1. Compared to the wild-type PAX3 protein, the PAX3-FOXO1 fusion protein more potently activates PAX3 target genes. In ARMS cells, PAX3-FOXO1 usually functions as a transcriptional activator and excessively increases expression of downstream target genes. In addition, PAX3-FOXO1 binds along with MYOD1, MYOG and MYCN as well as chromatin structural proteins, such as CHD4 and BRD4, to contribute to the formation of super enhancers in the vicinity of a subset of these target genes. These dysregulated target genes contribute to tumorigenesis by altering signaling pathways that affect proliferation, cell death, myogenic differentiation, and migration.
A t(2;4)(q35;q31.1) chromosomal translocation that fuses the PAX3 and MAML3 genes occurs in biphenotypic sinonasal sarcoma (BSNS), a low-grade adult malignancy associated with both myogenic and neural differentiation. MAML3 encodes a transcriptional coactivator involved in Notch signaling. The PAX3-MAML3 fusion juxtaposes the N-terminal PAX3 DNA binding domain with the C-terminal MAML3 transactivation domain to create another potent activator of target genes with PAX3 binding sites. Of note, PAX3 is rearranged without MAML3 involvement in a smaller subset of BSNS cases, and some of these variant cases contain a PAX3-NCOA1 or PAX3-FOXO1 fusion. Though PAX3-FOXO1 and PAX3-NCOA1 fusions can be formed in both ARMS and BSNS, there are differences in the pattern of activated downstream target genes suggesting that the cell environment has an important role in modulating the output of these fusion transcription factors.
In addition to tumors with PAX3-related fusion genes, there are several other tumor categories that express the wild-type PAX3 gene. The presence of PAX3 expression in some tumors can be explained by their derivation from developmental lineages normally expressing wild-type PAX3. For example, PAX3 is expressed in cancers associated with neural tube-derived lineages, (e.g., glioblastoma), neural crest-derived lineages (e.g., melanoma) and myogenic lineages (e.g., embryonal rhabdomyosarcoma). However, PAX3 is also expressed in other cancer types without a clear relationship to a PAX3-expressing developmental lineages, such as breast carcinoma and osteosarcoma. In these wild-type PAX3-expressing cancers, PAX3 function impacts on the control of proliferation, apoptosis, differentiation and motility. Therefore, wild-type PAX3 exerts a regulatory role in tumorigenesis and tumor progression, which may be related to its role in normal development.
Notes
References
Further reading
External links
Developmental genes and proteins | PAX3 | [
"Biology"
] | 3,102 | [
"Induced stem cells",
"Developmental genes and proteins"
] |
4,821,932 | https://en.wikipedia.org/wiki/Product%20structure%20modeling | Product structure is a hierarchical decomposition of a product, typically known as the bill of materials (BOM).
As business becomes more responsive to unique consumer tastes and derivative products grow to meet the unique configurations, BOM management can become unmanageable. For manufacturers, a bill of materials (BOM) is a critical product information record that lists the raw materials, assemblies, components, parts and the quantities of each needed to manufacture a product.
Advanced modeling techniques are necessary to cope with configurable products, where changing a small part of a product can have multiple impacts on other product structure models. Within this entry, concepts of the model are written in capital letters to mark them as such.
Several concepts are related to the subject of product structure modeling. All these concepts are discussed in this section. These concepts are divided into two main aspects. First the product breakdown is discussed which involves all the physical aspects of a product. Second, different views at the product structure are indicated.
Product breakdown
Figure 1 illustrates the concepts that are important to the structure of a product. This is a meta-data model, which can be used for modeling the instances in a specific case of product structuring.
The core of the product structure is illustrated by the product components (items) and their relationships. Thus, this involves the linking between items related to the product.
The assembly can consist of subassemblies and parts, whereas subassemblies can also consist of other subassemblies or parts. Thus, this is typically hierarchically ordered. These concepts are generalized into the concept of item. This classification is overlapping, because a subassembly could be a part in another assembly configuration.
Due to differentiation and variation of items several concepts must be indicated into the product breakdown structure. Three concepts are involved in this differentiation, namely alternatives, variants and revisions. An alternative of an item is considered as a substitute for that particular item, whereas a variant is another option of an item which the consumer can choose. When an error occurs at a part or subassembly, it needs to be revised. This revision indicates the change history of the item.
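A minimal data-structure sketch of these concepts (in Python; the class name Item, the flatten helper and the example quantities are illustrative assumptions, not part of any particular PDM system): ITEMs may contain other ITEMs through quantified relationships, and variants can be attached to an item.

# Illustrative sketch: a tiny product-structure (BOM) model with items, child
# relationships and variants. Names and quantities are invented.
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    children: list = field(default_factory=list)   # (Item, quantity) relationships
    variants: list = field(default_factory=list)   # alternative options of this item

    def add(self, child, quantity=1):
        self.children.append((child, quantity))

    def flatten(self, multiplier=1):
        """Yield (item, total quantity) for every item below this one."""
        for child, qty in self.children:
            yield child, qty * multiplier
            yield from child.flatten(qty * multiplier)

engine = Item("engine 1.6", variants=["engine 1.8"])
engine.add(Item("screw"), 12)
car = Item("car")
car.add(engine)
car.add(Item("body"))

for item, qty in car.flatten():
    print(item.name, qty)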
Product structure views
Product structure views are made upon several activity domains within the company. Due to the fact not everyone in the company has to have a detailed overview of the product several components with their attributes can be extracted.
When the Master Structure is made out of the several items of the product assembly, multiple views can be made upon this Master Structure. Thus this Master Structure contains every item in detail, which is important to the Assembly of the product.
The modeling process
The process of constructing the product model consists of six main activities, which can be decomposed in several sub-activities. The next table describes these activities and the sub-activities within them provided with a description about this activity.
Process-data model
When combining the activities with the concepts of the product structure model it will result in a process-data diagram. This diagram displays the steps which need to be taken within the process of product structure modeling together with the deliverables, at the right side, which are outcomes of these activities.
Example
This example discusses the product structure modeling within car manufacturing. This will be discussed through the main activities which are identified within the process of product structure modeling.
Define product components
First, all components are identified and indicated. In the area of car manufacturing, the product components are as follows. A car (ASSEMBLY) consists of several SUBASSEMBLIES such as the body and the engine of the car. The engine, for example, is assembled from several parts such as screws and small pipes.
Define product assortment
In the case of car manufacturing, instances of these concepts can be made. An engine, for example, has several alternatives: a car manufacturer can choose between an engine made in America and one made in Japan.
Within these different engines, variants exist. Initially an engine can be made as a 1.6 engine, but a variant, such as a 1.8 engine, can be derived from it. Thus the 1.6 engine is used as the base concept for the new 1.8 engine.
Product structuring
An example of a correlation between items within car manufacturing can be indicated as follows. The engine is connected to the body with several screws. Thus, these two items must be linked by the concept of a relationship.
Create master structure
After structuring the product with all the listed items and the relationships between them, these must be combined into one MASTER STRUCTURE which contains all of the details of the product. In the case of the car, all items from engine to screw must be documented in one MASTER STRUCTURE.
Documenting
When the MASTER STRUCTURE of the car is created, one must link this structure with documents which contain the product definition of this specific car. Primarily, this consists of an extensive description of the car which is linked to the MASTER STRUCTURE of this product.
Define product structure views
In the case of the car manufacturer, multiple views can be derived from the car assembly. For example, a structure from a sales point of view will need more detail about the functions and characteristics of the car rather than detailed information about the body. Thus a sales manager needs information about the color of the car or the type of gear (automatic or manual).
From a purchasing view, more information is needed about the mixing of the paint rather than the general color, which is only needed for the customer. The purchasing department also needs more information about the suppliers of the components used in manufacturing the car, so that it can easily see where each component is used and which supplier it comes from.
See also
ISO 10303
Assembly modeling
Bill of materials (BOM)
Product breakdown structure
Bibliography
Hvam, L. (1999). A procedure for building product models. Robotics and Computer-Integrated Manufacturing, 15, pp. 77-87
Peltonen, H. (2000), Concepts and an Implementation for Product Data Management. Acta Polytechnica Scandinavica, Mathematics and Computing Series No. 105, pp. 188
Rampersad, H.K. (1995). Concentric Design of Robotic Assembly Systems. Journal of Manufacturing Systems, 14(4), pp. 230-243
Svensson, D., & Malmqvist, J. (2002). Strategies for Product Structure Management at Manufacturing Firms. Journal of Computing and Information Science in Engineering, 2(1), 50-58.
References
Computer-aided design
Product lifecycle management | Product structure modeling | [
"Engineering"
] | 1,293 | [
"Computer-aided design",
"Design engineering"
] |
4,824,021 | https://en.wikipedia.org/wiki/Clover%20%28detector%29 | A clover detector is a gamma-ray detector that consists of 4 coaxial N-type high purity germanium (Ge) crystals each machined to shape and mounted in a common cryostat to form a structure resembling a four-leaf clover.
The clover is the first composite Ge detector. It remains widely used in the detectors of particle accelerators, where multiple clover modules form an array all around the target to capture the rays. More complex composite detectors, such as the 7-element hexagonal cluster detector used on the Euroball, offer even better data.
Operation
A gamma ray may interact with a single Ge crystal and deposit its full energy. The resulting charge collected will then be proportional to this energy. However, through the process of Compton scattering, a gamma ray may interact with two (or possibly more) crystals resulting in the energy (and thus the liberated charge) being shared by the crystals. In this case, a process known as add-back, where the charge collected by each of the crystals is summed, can be used to determine the energy of the incident gamma ray.
In addition to add-back, the Clover also uses Compton suppression using a fence of BGO detectors. A gamma ray can escape the Clover array via Compton scattering, causing incorrectly low charge values. A BGO detector can detect the escaped ray and have the computer ignore the wrong reading from the Clover.
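A schematic sketch of add-back with Compton suppression (in Python; the function name addback and the energies in keV are invented for illustration) sums the charges collected by the four crystals for one event and discards the event if a BGO shield element fired:

# Illustrative sketch: add-back with BGO Compton suppression for a single clover event.
def addback(crystal_energies_keV, bgo_fired):
    """Return the reconstructed gamma-ray energy, or None if the event is vetoed."""
    if bgo_fired:            # part of the energy escaped into the suppression shield
        return None
    return sum(crystal_energies_keV)

# Full energy shared between two crystals by Compton scattering: accepted, 1332 keV total
print(addback([811.0, 521.0, 0.0, 0.0], bgo_fired=False))
# Scattered gamma ray escaped and was seen by the BGO fence: event vetoed
print(addback([811.0, 0.0, 0.0, 0.0], bgo_fired=True))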
Advantages
There are a number of advantages offered by using clover detectors as opposed to the more conventional single crystal germanium detectors. Large volume high purity single crystals of Ge can be expensive. By mounting four smaller crystals in a common cryostat a detector of a given volume can be created at a reduced cost. In addition, the individual smaller Ge crystals present a smaller solid angle than a large volume Ge detector thus significantly reducing the effects of Doppler broadening on the resulting spectra. Doppler broadening is further reduced using techniques that exploit the individual readings from the smaller crystals.
A clover detector can also be used to determine the electric or magnetic nature of the incident photons (e.g. if the gamma ray is an electric quadrupole or a magnetic dipole) as the Compton scattering process for these two types of radiation is different. This is called Compton polarimetry.
References
External links
CLover Array for Radioactive ION beams
Spectrometers
Particle detectors | Clover (detector) | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 476 | [
"Spectrum (physical sciences)",
"Particle detectors",
"Measuring instruments",
"Spectrometers",
"Spectroscopy"
] |
4,825,691 | https://en.wikipedia.org/wiki/Reverse%20osmosis%20plant | A reverse osmosis plant is a manufacturing plant where the process of reverse osmosis takes place. Reverse osmosis is a common process to purify or desalinate contaminated water by forcing water through a membrane. Water produced by reverse osmosis may be used for a variety of purposes, including desalination, wastewater treatment, concentration of contaminants, and the reclamation of dissolved minerals. An average modern reverse osmosis plant needs six kilowatt-hours of electricity to desalinate one cubic metre of water. The process also results in an amount of salty briny waste. The challenge for these plants is to find ways to reduce energy consumption, use sustainable energy sources, improve the process of desalination and to innovate in the area of waste management to deal with the waste. Self-contained water treatment plants using reverse osmosis, called reverse osmosis water purification units, are normally used in a military context.
System Operation
Reverse osmosis plants require a variety of pre-treatment techniques including softening, dechlorination, and anti-scalent treatment. Following pre-treatment, high levels of pressure send water through a semi-permeable membrane, which retains all contaminants but lets pure water pass through. Energy requirements depend on the concentration of salts and contaminants in the influent water; higher concentrations requires more energy to treat.
In operation
In 1977 Cape Coral, Florida became the first municipality in the United States to use the RO process on a large scale with an initial operating capacity of 11,356 m³ (3 million gallons) per day. By 1985, due to the rapid growth in population of Cape Coral, the city had the largest low pressure reverse osmosis plant in the world, capable of producing 56,782 m³ (15 million gallons) per day.
In Israel at Ashkelon on the Mediterranean coast, the world's largest reverse osmosis plant is producing 396,000 m³ of water a day at around possibly US$0.50 per m³.
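For scale, a back-of-the-envelope sketch (in Python; this simply combines the plant output quoted here with the typical six kilowatt-hours per cubic metre figure from the introduction, and is not an official consumption figure):

# Illustrative estimate using figures quoted in this article.
output_m3_per_day = 396_000          # Ashkelon plant output
energy_kwh_per_m3 = 6                # typical modern reverse osmosis plant
daily_energy_mwh = output_m3_per_day * energy_kwh_per_m3 / 1000
print(daily_energy_mwh)              # roughly 2,376 MWh per day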
In western Saudi Arabia at Yanbu, production started in 1999 at 106,904 m³ of water a day. Later in 2009 with some expansion the production reached to 132,000 m³ of water a day.
In Sindh Province, Pakistan, the provincial government has installed 382 reverse osmosis plants in the province, of which 207 are installed in backward areas of Sindh, including the districts of Thar, Thatta, Badin, Sukkur, Shaheed Benazirabad, Naushahro Feroze, and others, while 726 are in the final stage of completion.
In China a desalination plant was planned for Tianjin in 2010, to produce 100,000 m³ of desalinated seawater a day. In Spain in 2004, 20 reverse osmosis plants were planned to be built along the Costas, expecting to meet slightly over 1% of Spain's total water needs.
Nearly 17% of drinking water in Perth, Australia comes from a reverse osmosis plant that desalinates sea water. Perth is an ideal candidate for reverse osmosis plants as it has a relatively dry and arid climate where conventional freshwater resources are scarce, yet it is surrounded by ocean. Western Australia's Water Corporation announced the Perth desalination plant in April 2005. At the time, it was the largest desalination plant using reverse osmosis technology in the southern hemisphere.
References
External links
"Water, water, everywhere" by Fred Pearce for Prospect Magazine
Water treatment
Membrane technology | Reverse osmosis plant | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 728 | [
"Separation processes",
"Water treatment",
"Water pollution",
"Membrane technology",
"Environmental engineering",
"Water technology"
] |
4,826,806 | https://en.wikipedia.org/wiki/Semi-log%20plot | In science and engineering, a semi-log plot/graph or semi-logarithmic plot/graph has one axis on a logarithmic scale, the other on a linear scale. It is useful for data with exponential relationships, where one variable covers a large range of values.
All equations of the form y = λa^(γx) form straight lines when plotted semi-logarithmically, since taking logs of both sides gives
log_a(y) = γx + log_a(λ).
This is a line with slope γ and vertical intercept log_a(λ). The logarithmic scale is usually labeled in base 10, occasionally in base 2.
A log–linear (sometimes log–lin) plot has the logarithmic scale on the y-axis, and a linear scale on the x-axis; a linear–log (sometimes lin–log) is the opposite. The naming is output–input (y–x), the opposite order from (x, y).
On a semi-log plot the spacing of the scale on the y-axis (or x-axis) is proportional to the logarithm of the number, not the number itself. It is equivalent to converting the y values (or x values) to their log, and plotting the data on linear scales. A log–log plot uses the logarithmic scale for both axes, and hence is not a semi-log plot.
Equations
The equation of a line on a linear–log plot, where the abscissa axis is scaled logarithmically (with a logarithmic base of n), would be
F(x) = m log_n(x) + b.
The equation for a line on a log–linear plot, with an ordinate axis logarithmically scaled (with a logarithmic base of n), would be:
log_n(F(x)) = mx + b, that is, F(x) = n^(mx + b).
Finding the function from the semi–log plot
Linear–log plot
On a linear–log plot, pick some fixed point (x0, F0), where F0 is shorthand for F(x0), somewhere on the straight line in the above graph, and further some other arbitrary point (x1, F1) on the same graph. The slope formula of the plot is:
m = (F1 − F0) / (log_n(x1) − log_n(x0))
which leads to
F1 − F0 = m log_n(x1 / x0)
or
F1 = F0 + m log_n(x1 / x0),
which means that
F(x) = F0 + m log_n(x / x0).
In other words, F is proportional to the logarithm of x times the slope of the straight line of its lin–log graph, plus a constant. Specifically, a straight line on a lin–log plot containing points (F0, x0) and (F1, x1) will have the function:
F(x) = F0 + (F1 − F0) log_n(x / x0) / log_n(x1 / x0).
Log–linear plot
On a log–linear plot (logarithmic scale on the y-axis), pick some fixed point (x0, F0), where F0 is shorthand for F(x0), somewhere on the straight line in the above graph, and further some other arbitrary point (x1, F1) on the same graph. The slope formula of the plot is:
m = (log_n(F1) − log_n(F0)) / (x1 − x0)
which leads to
log_n(F1 / F0) = m (x1 − x0).
Notice that n^(log_n(F1)) = F1. Therefore, the logs can be inverted to find:
F1 / F0 = n^(m (x1 − x0))
or
F1 = F0 · n^(m (x1 − x0)).
This can be generalized for any point, instead of just F1:
F(x) = F0 · n^(m (x − x0)).
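The following sketch (in Python with NumPy; the base, the two sample points and their values are invented) recovers such an exponential law from two points read off a log–linear plot, using the two-point slope above with base n = 10:

# Illustrative sketch: reconstructing F(x) = F0 * n**(m*(x - x0)) from two points
# on a log-linear (semi-log) plot, here with base n = 10.
import numpy as np

n = 10.0
x0, F0 = 1.0, 2.0          # first point read off the plot (assumed values)
x1, F1 = 5.0, 200.0        # second point (assumed values)

m = (np.log(F1) / np.log(n) - np.log(F0) / np.log(n)) / (x1 - x0)   # slope on the semi-log plot

def F(x):
    return F0 * n ** (m * (x - x0))

# The reconstructed curve passes through both points and is a straight line in log space.
assert np.isclose(F(x1), F1)
print(m, F(3.0))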
Real-world examples
Phase diagram of water
In physics and chemistry, a plot of logarithm of pressure against temperature can be used to illustrate the various phases of a substance, as in the following for water:
2009 "swine flu" progression
While ten is the most common base, there are times when other bases are more appropriate, as in this example:
Notice that while the horizontal (time) axis is linear, with the dates evenly spaced, the vertical (cases) axis is logarithmic, with the evenly spaced divisions being labelled with successive powers of two. The semi-log plot makes it easier to see when the infection has stopped spreading at its maximum rate, i.e. the straight line on this exponential plot, and starts to curve to indicate a slower rate. This might indicate that some form of mitigation action is working, e.g. social distancing.
Microbial growth
In biology and biological engineering, the change in numbers of microbes due to asexual reproduction and nutrient exhaustion is commonly illustrated by a semi-log plot. Time is usually the independent axis, with the logarithm of the number or mass of bacteria or other microbe as the dependent variable. This forms a plot with four distinct phases, as shown below.
See also
Nomograph, more complicated graphs
Nonlinear regression#Transformation, for converting a nonlinear form to a semi-log form amenable to non-iterative calculation
Log–log plot
References
Charts
Technical drawing
Statistical charts and diagrams
Non-Newtonian calculus | Semi-log plot | [
"Mathematics",
"Engineering"
] | 937 | [
"Design engineering",
"Calculus",
"Non-Newtonian calculus",
"Civil engineering",
"Technical drawing"
] |
4,826,846 | https://en.wikipedia.org/wiki/Wavefront%20coding | In optics and signal processing, wavefront coding refers to the use of a phase modulating element in conjunction with deconvolution to extend the depth of field of a digital imaging system such as a video camera.
Wavefront coding falls under the broad category of computational photography as a technique to enhance the depth of field.
Encoding
The wavefront of a light wave passing through the camera system is modulated using optical elements that introduce a spatially varying optical path length. The modulating elements must be placed at or near the plane of the aperture stop or pupil so that the same modulation is introduced for all field angles across the field-of-view. This modulation corresponds to a change in complex argument of the pupil function of such an imaging device, and it can be engineered with different goals in mind: e.g. extending the depth of focus.
Linear phase mask
Wavefront coding with linear phase masks works by creating an optical transfer function that encodes distance information.
Cubic phase mask
Wavefront Coding with cubic phase masks works to blur the image uniformly using a cubic shaped waveplate so that the intermediate image, the optical transfer function, is out of focus by a constant amount. Digital image processing then removes the blur and introduces noise depending upon the physical characteristics of the processor. Dynamic range is sacrificed to extend the depth of field depending upon the type of filter used. It can also correct optical aberration.
The mask was developed by using the ambiguity function and the stationary phase method.
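A minimal numerical sketch of a cubic phase mask (in Python with NumPy; the grid size and the mask strength alpha are arbitrary assumed values): the pupil phase α(u³ + v³) is applied across a circular aperture, and a point spread function is obtained from the Fourier transform of the pupil function.

# Illustrative sketch: pupil function with a cubic phase mask and its point spread function.
import numpy as np

N = 256
u = np.linspace(-1, 1, N)
U, V = np.meshgrid(u, u)                      # normalised pupil coordinates
aperture = (U**2 + V**2) <= 1.0               # circular aperture stop

alpha = 20.0                                  # cubic mask strength (assumed)
pupil = aperture * np.exp(1j * alpha * (U**3 + V**3))

# Incoherent PSF = |Fourier transform of the pupil function|^2 (up to scaling)
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.sum()
print(psf.shape, psf.max())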
History
The technique was pioneered by radar engineer Edward Dowski and his thesis adviser Thomas Cathey at the University of Colorado in the United States in the 1990s. The University filed a patent on the invention. Cathey, Dowski and Merc Mercure founded a company to commercialize the method called CDM-Optics, and licensed the invention from the University. The company was acquired in 2005 by OmniVision Technologies, which has released wavefront-coding-based mobile camera chips as TrueFocus sensors.
TrueFocus sensors are able to simulate older autofocus technologies that use rangefinders and narrow depth of fields. In fact, the technology theoretically allows for any number of combinations of focal points per pixel for effect. It is the only technology not limited to EDoF (Extended-Depth-of-Field).
References
External links
Wavefront coding finds increasing use (Laser Focus World)
Signal processing | Wavefront coding | [
"Technology",
"Engineering"
] | 480 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
4,827,920 | https://en.wikipedia.org/wiki/JUICE%20%28software%29 | JUICE is a widely used non-commercial software package for editing and analysing phytosociological data.
It was developed at the Masaryk University in Brno, Czech Republic in 1998, and is fully described in an English manual. It makes use of the previously developed TURBOVEG software (for entering and storing such data) and offers a quite powerful tool for vegetation data analysis, including:
creation of synoptic tables
determination of diagnostic species according to their fidelity
calculation of Ellenberg indicator values for relevés, various indices of alpha and beta diversity
classification of relevés using TWINSPAN or cluster analysis
expert system for vegetation classification based on COCKTAIL method etc.
See also
Phytosociology
Phytogeography
Biogeography
External links
Tichy, L. 2002. JUICE, software for vegetation classification. J. Veg. Sci. 13: 451-453. (Basic scientific article on the program).
Uses in scientific journals
Pyšek P., Jarošík V., Chytrý M., Kropáč Z., Tichý L. & Wild J. 2005. Alien plants in temperate weed communities: prehistoric and recent invaders occupy different habitats. Ecology 86: 772–785.
Ewald, J A critique for phytosociology Journal of Vegetation Science (April 2003) 14(2)291-296
Science software
Botany
Biogeography
Ecological data | JUICE (software) | [
"Biology"
] | 287 | [
"Biogeography",
"Plants",
"Botany"
] |
8,280,217 | https://en.wikipedia.org/wiki/Renner%E2%80%93Teller%20effect | The Renner-Teller effect is a phenomenon in molecular spectroscopy where a pair of electronic states that become degenerate at linearity are coupled by rovibrational motion.
The Renner-Teller effect is observed in the spectra of molecules that have electronic states that allow vibration through a linear configuration. For such molecules electronic states that are doubly degenerate at linearity (Π, Δ, ..., etc.) will split into two close-lying nondegenerate states for non-linear configurations. As part of the Renner–Teller effect, the rovibronic levels of such a pair of states will be strongly Coriolis coupled by the rotational kinetic energy operator causing a breakdown of the Born–Oppenheimer approximation. This is to be contrasted with the Jahn–Teller effect which occurs for polyatomic molecules in electronic states that allow vibration through a symmetric nonlinear configuration, where the electronic state is degenerate, and which further involves a breakdown of the Born-Oppenheimer approximation but here caused by the vibrational kinetic energy operator.
In its original formulation, the Renner–Teller effect was discussed for a triatomic molecule in an electronic state that is a linear Π-state at equilibrium. The 1934 article by Rudolf Renner was one of the first that considered dynamic effects that go beyond the Born–Oppenheimer approximation, in which the nuclear and electronic motions in a molecule are uncoupled. Renner chose an electronically excited state of the carbon dioxide molecule (CO2) that is a linear Π-state at equilibrium for his studies. The products of purely electronic and purely nuclear rovibrational states served as the zeroth-order (no rovibronic coupling) wave functions in Renner's study. The rovibronic coupling acts as a perturbation.
Renner is the only author of the 1934 paper that first described the effect, so it can be called simply the Renner effect. Renner did this work as a PhD student under the supervision of Edward Teller and presumably Teller was perfectly happy not to be a coauthor. However, in 1933 Gerhard Herzberg and Teller had recognized that the potential of a triatomic linear molecule in a degenerate electronic state at linearity splits into two when the molecule is bent. A year later this effect was worked out in detail by Renner. Herzberg refers to this as the "Renner–Teller" effect in one of his influential books, and this name is most commonly used.
While Renner's theoretical study concerns an excited electronic state of carbon dioxide that is linear at equilibrium, the first observation of the Renner–Teller effect was in an electronic state of the NH2 molecule that is bent at equilibrium.
Much has been published about the Renner–Teller effect since its first experimental observation in 1959; see the bibliography on pages 412-413 of the textbook by Bunker and Jensen. Section 13.4 of this textbook discusses both the Renner–Teller effect (called the Renner effect) and the Jahn–Teller effect.
See also
References
External links
English translation of Renner's paper (1934).
H. Hettema's English translation of Renner's paper (1934) on Google books
The original Renner–Teller effect, Paul E. S. Wormer, University of Nijmegen (2003)
Molecular physics
Spectroscopy
Edward Teller | Renner–Teller effect | [
"Physics",
"Chemistry",
"Astronomy"
] | 699 | [
"Spectroscopy stubs",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Astronomy stubs",
" molecular",
"nan",
"Atomic",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs",
" and optical physics"
] |
8,282,634 | https://en.wikipedia.org/wiki/Carbon%20film%20%28technology%29 | Carbon films are thin film coatings which consist predominantly of the chemical element carbon. They include plasma polymer films, amorphous carbon films (diamond-like carbon, DLC), CVD diamond films as well as graphite films.
Carbon films are produced by deposition using gas-phase deposition processes, in most cases taking place in a vacuum: chemical vapor deposition, CVD or physical vapor deposition, PVD. They are deposited in the form of thin films with film thicknesses of just a few micrometres.
Carbon films make it possible to implement a large number of surface functions, especially in the field of tribology - in other words, in applications where wear is a major factor.
Further reading
Standards
ISO 20523:2017 Carbon based films — Classification and designations
External links
Website of ISO with details about ISO 20523 including list of content
Webportal with basic information about the different types of carbon films
Allotropes of carbon
Thin films | Carbon film (technology) | [
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 194 | [
"Allotropes of carbon",
"Allotropes",
"Materials science",
"Nanotechnology",
"Planes (geometry)",
"Thin films"
] |
8,283,008 | https://en.wikipedia.org/wiki/GoPubMed | GoPubMed was a knowledge-based search engine for biomedical texts. The
Gene Ontology (GO) and Medical Subject Headings (MeSH) served as "Table of contents" in order to structure the millions of articles in the MEDLINE database. MeshPubMed was at one point a separate project, but the two were merged.
The technologies used in GoPubMed were generic and could in general be applied to any kind of texts and any kind of knowledge bases. The system was developed at the Technische Universität Dresden by Michael Schroeder and his team at Transinsight.
GoPubMed was recognized with the 2009 red dot: best of the best award in the category communication design – graphical user interfaces and interactive tool. Transinsight was recognized with the German Innovation Prize IT for its developments in Enterprise Semantic Intelligence at CeBIT 2011.
References
External links
GoPubMed
Defunct internet search engines
Health informatics
Bioinformatics | GoPubMed | [
"Chemistry",
"Engineering",
"Biology"
] | 198 | [
"Biological engineering",
"Bioinformatics stubs",
"Biotechnology stubs",
"Health informatics",
"Biochemistry stubs",
"Bioinformatics",
"Medical technology"
] |
8,286,430 | https://en.wikipedia.org/wiki/Adaptive%20algorithm | An adaptive algorithm is an algorithm that changes its behavior at the time it is run, based on information available and on an a priori defined reward mechanism (or criterion). Such information could be the history of recently received data, information on the available computational resources, or other run-time acquired (or a priori known) information related to the environment in which it operates.
Among the most used adaptive algorithms is the Widrow–Hoff least mean squares (LMS) algorithm, which represents a class of stochastic gradient-descent algorithms used in adaptive filtering and machine learning. In adaptive filtering, the LMS is used to mimic a desired filter by finding the filter coefficients that minimize the mean square of the error signal (the difference between the desired and the actual signal).
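As a rough, self-contained sketch of the LMS update (not drawn from any particular implementation; the signal names, tap count and step size mu are arbitrary choices for the example), the filter coefficients are nudged along the instantaneous error gradient at every sample:

    import numpy as np

    def lms_filter(x, d, num_taps=4, mu=0.02):
        """Adapt FIR filter weights so that the filter output tracks the desired signal d."""
        w = np.zeros(num_taps)                         # filter coefficients, updated sample by sample
        y = np.zeros(len(x))                           # filter output
        e = np.zeros(len(x))                           # error signal d - y
        for n in range(num_taps - 1, len(x)):
            window = x[n - num_taps + 1:n + 1][::-1]   # most recent samples first
            y[n] = w @ window                          # current filter output
            e[n] = d[n] - y[n]                         # instantaneous error
            w = w + mu * e[n] * window                 # stochastic gradient-descent (LMS) update
        return w, y, e

    # Toy usage: identify an unknown 4-tap FIR system from its noisy output.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    true_taps = np.array([0.5, -0.3, 0.2, 0.1])
    d = np.convolve(x, true_taps)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    w_est, _, _ = lms_filter(x, d)
    print(w_est)                                       # approaches true_taps as adaptation proceeds

With a sufficiently small step size the estimated coefficients converge towards those of the system being mimicked.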
For example, stable partition runs in O(n log n) time when no additional memory is available, but given O(n) additional memory it can run in O(n) time. As implemented by the C++ Standard Library, stable_partition is adaptive: it acquires as much memory as it can get (up to what it would need at most) and applies the algorithm using that available memory. Another example is adaptive sort, whose behavior changes according to the presortedness of its input.
An example of an adaptive algorithm in radar systems is the constant false alarm rate (CFAR) detector.
In machine learning and optimization, many algorithms are adaptive or have adaptive variants, which usually means that the algorithm parameters such as learning rate are automatically adjusted according to statistics about the optimisation thus far (e.g. the rate of convergence). Examples include adaptive simulated annealing, adaptive coordinate descent, adaptive quadrature, AdaBoost, Adagrad, Adadelta, RMSprop, and Adam.
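As an illustration of such parameter adaptation, a minimal Adagrad-style loop (the function names and the toy objective are invented for this sketch) divides each coordinate's step by the square root of its accumulated squared gradients, so coordinates with consistently large gradients receive progressively smaller steps:

    import numpy as np

    def adagrad_minimize(grad, x0, base_lr=0.5, steps=200, eps=1e-8):
        """Minimal Adagrad loop: each coordinate's effective learning rate shrinks
        with the squared gradients accumulated for that coordinate so far."""
        x = np.array(x0, dtype=float)
        g_sq_sum = np.zeros_like(x)                        # running sum of squared gradients
        for _ in range(steps):
            g = grad(x)
            g_sq_sum += g ** 2
            x -= base_lr * g / (np.sqrt(g_sq_sum) + eps)   # adapted per-coordinate step
        return x

    # Toy usage: minimize the badly scaled quadratic f(x) = 50*x[0]**2 + 0.5*x[1]**2.
    grad = lambda x: np.array([100.0 * x[0], 1.0 * x[1]])
    print(adagrad_minimize(grad, [1.0, 1.0]))              # both coordinates shrink towards 0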
In data compression, adaptive coding algorithms such as Adaptive Huffman coding or Prediction by partial matching can take a stream of data as input, and adapt their compression technique based on the symbols that they have already encountered.
In signal processing, the Adaptive Transform Acoustic Coding (ATRAC) codec used in MiniDisc recorders is called "adaptive" because the window length (the size of an audio "chunk") can change according to the nature of the sound being compressed, to try to achieve the best-sounding compression strategy.
See also
Adaptation (computer science)
Adaptive filter
Adaptive grammar
Adaptive optimization
References
Algorithms | Adaptive algorithm | [
"Mathematics",
"Engineering"
] | 489 | [
"Algorithms",
"Mathematical logic",
"Software engineering stubs",
"Applied mathematics",
"Software engineering"
] |
15,788,537 | https://en.wikipedia.org/wiki/Supercell%20%28crystal%29 | In solid-state physics and crystallography, a crystal structure is described by a unit cell repeating periodically over space. There are an infinite number of choices for unit cells, with different shapes and sizes, which can describe the same crystal, and different choices can be useful for different purposes.
Say that a crystal structure is described by a unit cell U. Another unit cell S is a supercell of unit cell U, if S is a cell which describes the same crystal, but has a larger volume than cell U. Many methods which use a supercell perturb it somehow to determine properties which cannot be determined by the initial cell. For example, during phonon calculations by the small displacement method, phonon frequencies in crystals are calculated using force values on slightly displaced atoms in the supercell. Another very important example of a supercell is the conventional cell of body-centered (bcc) or face-centered (fcc) cubic crystals.
Unit cell transformation
The basis vectors of unit cell U can be transformed to basis vectors of supercell S by linear transformation
where is a transformation matrix whose elements should all be integers; when its determinant equals one, the transformation preserves the volume. For example, the matrix
transforms a primitive cell to a body-centered one. Another particular case of the transformation is a diagonal matrix. This is called diagonal supercell expansion and corresponds to repeating the initial cell along its own crystallographic axes.
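A short numerical sketch of the diagonal expansion (the simple cubic starting cell and the repetition factors 2, 2, 3 are arbitrary choices for illustration) applies the integer matrix to the basis vectors and checks the volume ratio via its determinant:

    import numpy as np

    # Rows of `cell` are the basis vectors a1, a2, a3 of the initial unit cell
    # (a simple cubic cell with lattice constant 1, chosen purely for illustration).
    cell = np.eye(3)

    # Diagonal supercell expansion: repeat the cell 2x, 2x and 3x along its own axes.
    P = np.diag([2, 2, 3])                 # integer transformation matrix
    supercell = P @ cell                   # basis vectors of the supercell

    print(int(round(np.linalg.det(P))))    # 12 -> the supercell has 12 times the volume
    print(supercell)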
Application
Supercells are also commonly used in computational models of crystal defects to allow the use of periodic boundary conditions.
See also
Crystal structure
Bravais lattice
Primitive cell
Space group
References
External links
IUCR online dictionary of crystallography
Crystallography | Supercell (crystal) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 351 | [
"Materials science stubs",
"Materials science",
"Crystallography stubs",
"Crystallography",
"Condensed matter physics"
] |
22,390,444 | https://en.wikipedia.org/wiki/Glass%20coloring%20and%20color%20marking | Glass coloring and color marking may be obtained in several ways.
by the addition of coloring ions,
by precipitation of nanometer-sized colloids (so-called striking glasses such as "gold ruby" or red "selenium ruby"),
by colored inclusions (as in milk glass and smoked glass)
by light scattering (as in phase separated glass)
by dichroic coatings (see dichroic glass), or
by colored coatings
Coloring ions
Ordinary soda-lime glass appears colorless to the naked eye when it is thin, although iron oxide impurities produce a green tint which can be viewed in thick pieces or with the aid of scientific instruments. Further metals and metal oxides can be added to glass during its manufacture to change its color which can enhance its aesthetic appeal. Examples of these additives are listed below:
Iron(II) oxide may be added to glass resulting in bluish-green glass which is frequently used in beer bottles. Together with chromium it gives a richer green color, used for wine bottles.
Sulfur, together with carbon and iron salts, is used to form iron polysulfides and produce amber glass ranging from yellowish to almost black. In borosilicate glasses rich in boron, sulfur imparts a blue color. With calcium it yields a deep yellow color.
Manganese can be added in small amounts to remove the green tint given by iron, or in higher concentrations to give glass an amethyst color. Manganese is one of the oldest glass additives, and purple manganese glass was used since early Egyptian history.
Manganese dioxide, which is black, is used to remove the green color from the glass; in a very slow process this is converted to sodium permanganate, a dark purple compound. In New England some houses built more than 300 years ago have window glass which is lightly tinted violet because of this chemical change, and such glass panes are prized as antiques. This process is widely confused with the formation of "desert amethyst glass", in which glass exposed to desert sunshine with a high ultraviolet component develops a delicate violet tint. Details of the process and the composition of the glass vary and so do the results, because it is not a simple matter to obtain or produce properly controlled specimens.
Small concentrations of cobalt (0.025 to 0.1%) yield blue glass. The best results are achieved when using glass containing potash. Very small amounts can be used for decolorizing.
2 to 3% of copper oxide produces a turquoise color.
Nickel, depending on the concentration, produces blue, or violet, or even black glass. Lead crystal with added nickel acquires purplish color. Nickel together with a small amount of cobalt was used for decolorizing of lead glass.
Chromium is a very powerful colorizing agent, yielding dark green or in higher concentrations even black color. Together with tin oxide and arsenic it yields emerald green glass. Chromium aventurine, in which aventurescence is achieved by growth of large parallel chromium(III) oxide plates during cooling, is made from glass with added chromium oxide in amount above its solubility limit in glass.
Cadmium together with sulphur forms cadmium sulfide and results in deep yellow color, often used in glazes. However, cadmium is toxic. Together with selenium and sulphur it yields shades of bright red and orange.
Adding titanium produces yellowish-brown glass. Titanium, rarely used on its own, is more often employed to intensify and brighten other colorizing additives.
Uranium (0.1 to 2%) can be added to give glass a fluorescent yellow or green color. Uranium glass is typically not radioactive enough to be dangerous, but if ground into a powder, such as by polishing with sandpaper, and inhaled, it can be carcinogenic. When used in lead glass with a very high proportion of lead, it produces a deep red color.
Didymium gives green color (used in UV filters) or lilac red.
Striking glasses
Selenium, like manganese, can be used in small concentrations to decolorize glass, or in higher concentrations to impart a reddish color, caused by selenium nanoparticles dispersed in glass. It is a very important agent to make pink and red glass. When used together with cadmium sulfide, it yields a brilliant red color known as "Selenium Ruby".
Pure metallic copper produces a very dark red, opaque glass, which is sometimes used as a substitute for gold in the production of ruby-colored glass.
Metallic gold, in very small concentrations (around 0.001%, or 10 ppm), produces a rich ruby-colored glass ("Ruby Gold" or "Rubino Oro"), while lower concentrations produces a less intense red, often marketed as "cranberry". The color is caused by the size and dispersion of gold particles. Ruby gold glass is usually made of lead glass with added tin.
Silver compounds such as silver nitrate and silver halides can produce a range of colors from orange-red to yellow. The way the glass is heated and cooled can significantly affect the colors produced by these compounds. Also photochromic lenses and photosensitive glass are based on silver.
Purple of Cassius is a purple pigment formed by the reaction of gold salts with tin(II) chloride.
Coloring added to glass
The principal methods of this are enamelled glass, essentially a technique for painting patterns or images, used both on glass vessels and on stained glass; glass paint, typically in black; and silver stain, giving yellows to oranges on stained glass. All of these are fired in a kiln or furnace to fix them, and can be extremely durable when properly applied. This is not true of "cold-painted" glass, using oil paint or other mixtures, which rarely lasts more than a few centuries.
Colored inclusions
Tin oxide with antimony and arsenic oxides produce an opaque white glass (milk glass), first used in Venice to produce an imitation porcelain, very often then painted with enamels. Similarly, some smoked glasses may be based on dark-colored inclusions, but with ionic coloring it is also possible to produce dark colors (see above).
Color caused by scattering
Glass containing two or more phases with different refractive indices shows coloring based on the Tyndall effect and explained by the Mie theory, if the dimensions of the phases are similar or larger than the wavelength of visible light. The scattered light is blue and violet as seen in the image, while the transmitted light is yellow and red.
Dichroic glass
Dichroic glass has one or several coatings in the nanometer-range (for example metals, metal oxides, or nitrides) which give the glass dichroic optical properties. Also the blue appearance of some automobile windshields is caused by dichroism.
See also
Crystal field theory - physical explanation of coloring
Color of medieval stained glass
Hydrogen darkening
Hydroxyl ion absorption
Transparent materials
References
Glass engineering and science
Glass chemistry | Glass coloring and color marking | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,459 | [
"Glass engineering and science",
"Glass chemistry",
"Materials science"
] |
22,398,341 | https://en.wikipedia.org/wiki/Fire-safe%20polymers | Fire-safe polymers are polymers that are resistant to degradation at high temperatures. There is a need for fire-resistant polymers in the construction of small, enclosed spaces such as skyscrapers, boats, and airplane cabins. In these tight spaces, the ability to escape in the event of a fire is compromised, increasing fire risk. In fact, some studies report that about 20% of victims of airplane crashes are killed not by the crash itself but by ensuing fires. Fire-safe polymers also find application as adhesives in aerospace materials, insulation for electronics, and in military materials such as canvas tenting.
Some fire-safe polymers naturally exhibit an intrinsic resistance to decomposition, while others are synthesized by incorporating fire-resistant additives and fillers. Current research in developing fire-safe polymers is focused on modifying various properties of the polymers such as ease of ignition, rate of heat release, and the evolution of smoke and toxic gases. Standard methods for testing polymer flammability vary among countries; in the United States common fire tests include the UL 94 small-flame test, the ASTM E 84 Steiner Tunnel, and the ASTM E 622 National Institute of Standards and Technology (NIST) smoke chamber. Research on developing fire-safe polymers with more desirable properties is concentrated at the University of Massachusetts Amherst and at the Federal Aviation Administration where a long-term research program on developing fire-safe polymers was begun in 1995. The Center for UMass/Industry Research on Polymers (CUMIRP) was established in 1980 in Amherst, MA as a concentrated cluster of scientists from both academia and industry for the purpose of polymer science and engineering research.
History
Early history
Controlling the flammability of different materials has been a subject of interest since 450 B.C. when Egyptians attempted to reduce the flammability of wood by soaking it in potassium aluminum sulfate (alum). Between 450 B.C. and the early 20th century, other materials used to reduce the flammability of different materials included mixtures of alum and vinegar; clay and hair; clay and gypsum; alum, ferrous sulfate, and gypsum; and ammonium chloride, ammonium phosphate, borax, and various acids. These early attempts found application in reducing the flammability of wood for military materials, theater curtains, and other textiles, for example. Important milestones during this early work include the first patent for a mixture for controlling flammability issued to Obadiah Wyld in 1735, and the first scientific exploration of controlling flammability, which was undertaken by Joseph Louis Gay-Lussac in 1821.
Developments since WWII
Research on fire-retardant polymers was bolstered by the need for new types of synthetic polymers in World War II. The combination of a halogenated paraffin and antimony oxide was found to be successful as a fire retardant for canvas tenting. Synthesis of polymers, such as polyesters, with fire retardant monomers were also developed around this time. Incorporating flame-resistant additives into polymers became a common and relatively cheap way to reduce the flammability of polymers, while synthesizing intrinsically fire-resistant polymers has remained a more expensive alternative, although the properties of these polymers are usually more efficient at deterring combustion.
Polymer combustion
General mechanistic scheme
Traditional polymers decompose under heat and produce combustible products; thus, they are able to originate and easily propagate fire (as shown in Figure 1).
The combustion process begins when heating a polymer yields volatile products. If these products are sufficiently concentrated, within the flammability limits, and at a temperature above the ignition temperature, then combustion proceeds. As long as the heat supplied to the polymer remains sufficient to sustain its thermal decomposition at a rate exceeding that required to feed the flame, combustion will continue.
Purpose and methods of fire-retardant systems
The purpose is to control heat below the critical level. To achieve this, one can create an endothermic environment, produce non-combustible products, or add chemicals that would remove fire-propagating radicals (H and OH), to name a few. These specific chemicals can be added into the polymer molecules permanently (see Intrinsically Fire-Resistant Polymers) or as additives and fillers (see Flame-Retardant Additives and Fillers).
Role of oxygen
Oxygen catalyzes the pyrolysis of polymers at low concentration and initiates oxidation at high concentration. Transition concentrations are different for different polymers (e.g., for polypropylene, between 5% and 15%). Additionally, polymers exhibit a structure-dependent relationship with oxygen. Some structures are intrinsically more sensitive to decomposition upon reaction with oxygen. The amount of access that oxygen has to the surface of the polymer also plays a role in polymer combustion. Oxygen is better able to interact with the polymer before a flame has actually been ignited.
Role of heating rate
In most cases, results from a typical heating rate (e.g. 10°C/min for mechanical thermal degradation studies) do not differ significantly from those obtained at higher heating rates. The extent of reaction can, however, be influenced by the heating rate. For example, some reactions may not occur with a low heating rate due to evaporation of the products.
Role of pressure
Volatile products are removed more efficiently under low pressure, which means the stability of the polymer might have been compromised. Decreased pressure also slows down decomposition of high boiling products.
Intrinsically fire-resistant polymers
The polymers that are most efficient at resisting combustion are those that are synthesized as intrinsically fire-resistant. However, these types of polymers can be difficult as well as costly to synthesize. Modifying different properties of the polymers can increase their intrinsic fire-resistance; increasing rigidity or stiffness, the use of polar monomers, and/or hydrogen bonding between the polymer chains can all enhance fire-resistance.
Linear, single-stranded polymers with cyclic aromatic components
Most intrinsically fire-resistant polymers are made by incorporation of aromatic cycles or heterocycles, which lend rigidity and stability to the polymers. Polyimides, polybenzoxazoles (PBOs), polybenzimidazoles, and polybenzthiazoles (PBTs) are examples of polymers made with aromatic heterocycles (Figure 2). Polymers made with aromatic monomers have a tendency to condense into chars upon combustion, decreasing the amount of flammable gas that is released. Syntheses of these types of polymers generally employ prepolymers which are further reacted to form the fire-resistant polymers.
Ladder polymers
Ladder polymers are a subclass of polymers made with aromatic cycles or heterocycles. Ladder polymers generally have one of two types of general structures, as shown in Figure 3. One type of ladder polymer links two polymer chains with periodic covalent bonds. In another type, the ladder polymer consists of a single chain that is double-stranded. Both types of ladder polymers exhibit good resistance to decomposition from heat because the chains do not necessarily fall apart if one covalent bond is broken. However, this makes the processing of ladder polymers difficult because they are not easily melted. These difficulties are compounded because ladder polymers are often highly insoluble.
Inorganic and semiorganic polymers
Inorganic and semiorganic polymers often employ silicon-nitrogen, boron-nitrogen, and phosphorus-nitrogen monomers. The non-burning characteristics of the inorganic components of these polymers contribute to their controlled flammability. For example, instead of forming toxic, flammable gasses in abundance, polymers prepared with incorporation of cyclotriphosphazene rings give a high char yield upon combustion. Polysialates (polymers containing frameworks of aluminum, oxygen, and silicon) are another type of inorganic polymer that can be thermally stable up to temperatures of 1300-1400 °C.
Flame-retardant additives and fillers
Additives are divided into two basic types depending on the interaction of the additive and polymer. Reactive flame retardants are compounds that are chemically built into the polymer. They usually contain heteroatoms. Additive flame retardants, on the other hand, are compounds that are not covalently bound to the polymer; the flame retardant and the polymer are just physically mixed together.
Only a few elements are widely used in this field: aluminum, phosphorus, nitrogen, antimony, chlorine, bromine, and in specific applications magnesium, zinc and carbon. One prominent advantage of the flame retardants (FRs) derived from these elements is that they are relatively easy to manufacture. They are used in significant quantities: in 2013, world consumption of FRs amounted to around 1.8–2.1 million tonnes, with sales of 4.9–5.2 billion USD. Market studies estimated that demand for FRs would rise by about 5–7% per year, to 2.4–2.6 million tonnes by 2016–2018, with estimated sales of 6.1–7.1 billion USD.
The most important flame retardant systems used act either in the gas phase, where they remove the high-energy radicals H and OH from the flame, or in the solid phase, where they shield the polymer by forming a charred layer and thus protect the polymer from being attacked by oxygen and heat.
Flame retardants based on bromine or chlorine, as well as a number of phosphorus compounds, act chemically in the gas phase and are very efficient. Others act only in the condensed phase, such as metal hydroxides (aluminum trihydrate, or ATH, magnesium hydroxide, or MDH, and boehmite), metal oxides and salts (zinc borate and zinc oxide, zinc hydroxystannate), as well as expandable graphite and some nanocomposites (see below). Phosphorus and nitrogen compounds are also effective in the condensed phase, and as they may also act in the gas phase, they are quite efficient flame retardants. Overviews of the main flame retardant families, their modes of action and their applications, as well as further handbooks on these topics, are available in the literature.
A good example for a very efficient phosphorus-based flame retardant system acting in the gas and condensed phases is aluminium diethyl phosphinate in conjunction with synergists such as melamine polyphosphate (MPP) and others. These phosphinates are mainly used to flame retard polyamides (PA) and polybutylene terephthalate (PBT) for flame retardant applications in electrical engineering/electronics (E&E).
Natural fiber-containing composites
Besides providing satisfactory mechanical properties and renewability, natural fibers are easier to obtain and much cheaper than man-made materials. Moreover, they are more environmentally friendly. Recent research focuses on application of different types of fire retardants during the manufacturing process as well as applications of fire retardants (especially intumescent coatings) at the finishing stage.
Nanocomposites
Nanocomposites have become a hotspot in the research of fire-safe polymers because of their relatively low cost and high flexibility for multifunctional properties. Gilman and colleagues did the pioneering work by demonstrating the improvement of fire-retardancy by having nanodispersed montmorillonite clay in the polymer matrix. Later, organomodified clays, TiO2 nanoparticles, silica nanoparticles, layered double hydroxides, carbon nanotubes and polyhedral silsesquioxanes were proved to work as well. Recent research has suggested that combining nanoparticles with traditional fire retardants (e.g., intumescents) or with surface treatment (e.g., plasma treatment) effectively decreases flammability.
Problems with additives and fillers
Although effective at reducing flammability, flame-retardant additives and fillers have disadvantages as well. Their poor compatibility, high volatility and other deleterious effects can change the properties of polymers. In addition, many fire retardants produce soot and carbon monoxide during combustion. Halogen-containing materials raise even more concerns about environmental pollution.
See also
Plastics
Fireproofing
Phenol formaldehyde resin
Pyrolysis
Combustion
Fire-retardant gel
Fire test
References
External links
Fire-Safety Branch of the Federal Aviation Administration
Polymers
Fire protection
Flame retardants | Fire-safe polymers | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,583 | [
"Building engineering",
"Polymers",
"Fire protection",
"Polymer chemistry"
] |
22,399,366 | https://en.wikipedia.org/wiki/Symmetric%20Boolean%20function | In mathematics, a symmetric Boolean function is a Boolean function whose value does not depend on the order of its input bits, i.e., it depends only on the number of ones (or zeros) in the input. For this reason they are also known as Boolean counting functions.
There are 2^(n+1) symmetric n-ary Boolean functions. Instead of the truth table, traditionally used to represent Boolean functions, one may use a more compact representation for an n-variable symmetric Boolean function: the (n + 1)-vector, whose i-th entry (i = 0, ..., n) is the value of the function on an input vector with i ones. Mathematically, the symmetric Boolean functions correspond one-to-one with the functions that map n + 1 elements to two elements.
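A minimal sketch of this compact representation (the helper and function names are invented for illustration) builds the (n + 1)-vector by evaluating the function on one representative input per weight class:

    from itertools import product

    def value_vector(f, n):
        """(n + 1)-vector whose k-th entry is f evaluated on any input with k ones;
        for a symmetric f, one representative input per weight class suffices."""
        return [f(tuple([1] * k + [0] * (n - k))) for k in range(n + 1)]

    # Example: the three-variable majority function (1 iff at least two inputs are 1).
    maj3 = lambda x: int(sum(x) >= 2)
    vec = value_vector(maj3, 3)
    print(vec)                             # [0, 0, 1, 1]

    # Sanity check: every input of weight k gives the k-th entry of the vector.
    assert all(maj3(x) == vec[sum(x)] for x in product((0, 1), repeat=3))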
Symmetric Boolean functions are used to classify Boolean satisfiability problems.
Special cases
A number of special cases are recognized:
Majority function: their value is 1 on input vectors with more than n/2 ones
Threshold functions: their value is 1 on input vectors with k or more ones for a fixed k
All-equal and not-all-equal functions: their value is 1 when the inputs do (not) all have the same value
Exact-count functions: their value is 1 on input vectors with k ones for a fixed k
One-hot or 1-in-n function: their value is 1 on input vectors with exactly one one
One-cold function: their value is 1 on input vectors with exactly one zero
Congruence functions: their value is 1 on input vectors with the number of ones congruent to k mod m for fixed k, m
Parity function: their value is 1 if the input vector has odd number of ones
The n-ary versions of AND, OR, XOR, NAND, NOR and XNOR are also symmetric Boolean functions.
Properties
In the following, denotes the value of the function when applied to an input vector of weight .
Weight
The weight of the function can be calculated from its value vector:
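Assuming the standard relation in which the weight sums, over each number of ones k, the value at k times the binomial coefficient C(n, k), a short sketch is:

    from math import comb

    def weight(value_vec):
        """Number of inputs on which the function equals 1, assuming
        weight(f) = sum over k of v_f(k) * C(n, k)."""
        n = len(value_vec) - 1
        return sum(v * comb(n, k) for k, v in enumerate(value_vec))

    print(weight([0, 0, 1, 1]))            # 4: the 3-variable majority is 1 on C(3,2) + C(3,3) inputs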
Algebraic normal form
The algebraic normal form either contains all monomials of certain order , or none of them; i.e. the Möbius transform of the function is also a symmetric function. It can thus also be described by a simple (n+1) bit vector, the ANF vector . The ANF and value vectors are related by a Möbius relation: where denotes all the weights k whose base-2 representation is covered by the base-2 representation of m (a consequence of Lucas’ theorem). Effectively, an n-variable symmetric Boolean function corresponds to a log(n)-variable ordinary Boolean function acting on the base-2 representation of the input weight.
For example, for three-variable functions:
So the three variable majority function with value vector (0, 0, 1, 1) has ANF vector (0, 0, 1, 0), i.e.:
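Assuming the usual form of this Möbius relation, in which entry m of the ANF vector is the XOR of the value-vector entries v(k) over all k covered by m in base 2, the quoted majority example can be reproduced with a short sketch (helper name invented for illustration):

    def anf_vector(value_vec):
        """Moebius transform over GF(2): entry m is the XOR of v(k) over all k whose
        base-2 representation is covered by that of m (i.e. k & m == k)."""
        n = len(value_vec) - 1
        return [sum(value_vec[k] for k in range(m + 1) if k & m == k) % 2
                for m in range(n + 1)]

    v = [0, 0, 1, 1]                       # value vector of the three-variable majority
    print(anf_vector(v))                   # [0, 0, 1, 0]: only the degree-2 monomials appear
    print(anf_vector(anf_vector(v)))       # [0, 0, 1, 1]: the transform is its own inverse mod 2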
Unit hypercube polynomial
The coefficients of the real polynomial agreeing with the function on are given by: For example, the three variable majority function polynomial has coefficients (0, 0, 1, -2):
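The quoted coefficients can be checked numerically under the assumption that the coefficient of order t multiplies the sum of all degree-t monomials of the inputs (a sketch, with names chosen for illustration):

    from itertools import combinations, product

    a = [0, 0, 1, -2]                      # claimed coefficients, one per monomial degree

    def poly(x):
        # a[t] multiplies the sum of all degree-t monomials of the input variables
        return sum(a[t] * sum(1 for idx in combinations(range(len(x)), t)
                              if all(x[i] for i in idx))
                   for t in range(len(a)))

    maj3 = lambda x: int(sum(x) >= 2)
    assert all(poly(x) == maj3(x) for x in product((0, 1), repeat=3))
    print("coefficients (0, 0, 1, -2) reproduce the three-variable majority on {0,1}^3")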
Examples
See also
Symmetric function
References
Boolean algebra
Cryptography | Symmetric Boolean function | [
"Mathematics",
"Engineering"
] | 669 | [
"Boolean algebra",
"Cybersecurity engineering",
"Cryptography",
"Applied mathematics",
"Mathematical logic",
"Fields of abstract algebra"
] |
22,399,369 | https://en.wikipedia.org/wiki/Parity%20function | In Boolean algebra, a parity function is a Boolean function whose value is one if and only if the input vector has an odd number of ones. The parity function of two inputs is also known as the XOR function.
The parity function is notable for its role in theoretical investigation of circuit complexity of Boolean functions.
The output of the parity function is the parity bit.
Definition
The -variable parity function is the Boolean function with the property that if and only if the number of ones in the vector is odd.
In other words, is defined as follows:
where denotes exclusive or.
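Equivalently, the parity of a finite bit vector is simply the XOR of all of its bits, as in the following sketch (function name chosen for illustration):

    from functools import reduce
    from operator import xor

    def parity(bits):
        """1 iff the vector contains an odd number of ones, i.e. the XOR of all bits."""
        return reduce(xor, bits, 0)

    print(parity([1, 0, 1]))               # 0 -> two ones (even)
    print(parity([1, 1, 1, 0]))            # 1 -> three ones (odd)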
Properties
Parity only depends on the number of ones and is therefore a symmetric Boolean function.
The n-variable parity function and its negation are the only Boolean functions for which all disjunctive normal forms have the maximal number of 2^(n − 1) monomials of length n and all conjunctive normal forms have the maximal number of 2^(n − 1) clauses of length n.
Computational complexity
Some of the earliest work in computational complexity was the 1961 bound of Bella Subbotovskaya, showing that the size of a Boolean formula computing parity must be at least . This work uses the method of random restrictions. This exponent of has been increased through careful analysis to by Paterson and Zwick (1993) and then to by Håstad (1998).
In the early 1980s, Merrick Furst, James Saxe and Michael Sipser and independently Miklós Ajtai established super-polynomial lower bounds on the size of constant-depth Boolean circuits for the parity function, i.e., they showed that polynomial-size constant-depth circuits cannot compute the parity function. Similar results were also established for the majority, multiplication and transitive closure functions, by reduction from the parity function.
established tight exponential lower bounds on the size of constant-depth Boolean circuits for the parity function. Håstad's Switching Lemma is the key technical tool used for these lower bounds and Johan Håstad was awarded the Gödel Prize for this work in 1994.
The precise result is that depth- circuits with AND, OR, and NOT gates require size to compute the parity function.
This is asymptotically almost optimal as there are depth- circuits computing parity which have size .
Infinite version
An infinite parity function is a function mapping every infinite binary string to 0 or 1, having the following property: if and are infinite binary strings differing only on a finite number of coordinates, then if and only if and differ on an even number of coordinates.
Assuming the axiom of choice it can be proved that parity functions exist, and there are many of them; as many as the number of all functions from to . It is enough to take one representative per equivalence class of the relation defined as follows: if and differ at a finite number of coordinates. Having such representatives, we can map all of them to ; the rest of the values are deduced unambiguously.
Another construction of an infinite parity function can be done using a non-principal ultrafilter on . The existence of non-principal ultrafilters on follows from – and is strictly weaker than – the axiom of choice. For any we consider the set . The infinite parity function is defined by mapping to if and only if is an element of the ultrafilter.
It is necessary to assume at least some amount of choice to prove that infinite parity functions exist. If is an infinite parity function and we consider the inverse image as a subset of the Cantor space , then is a non-measurable set and does not have the property of Baire. Without the axiom of choice, it is consistent (relative to ZF) that all subsets of the Cantor space are measurable and have the property of Baire and thus that no infinite parity function exists; this holds in the Solovay model, for instance.
See also
Walsh function, a continuous equivalent
Parity bit, the output of the function
Piling-up lemma, a statistical property for independent inputs
Multiway switching, a physical implementation often used to control lighting
Related topics:
Error Correction
Error Detection
References
Boolean algebra
Circuit complexity
Functions and mappings | Parity function | [
"Mathematics"
] | 856 | [
"Boolean algebra",
"Functions and mappings",
"Mathematical analysis",
"Mathematical logic",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations"
] |
12,926,469 | https://en.wikipedia.org/wiki/Co-modality | Co-modality is a notion introduced by the European Commission in 2006 in the field of the transport policy to define an approach of the globality of the transport modes and of their combinations.
Description
For the European Commission, co-modality refers to the "use of different modes on their own and in combination" with the aim of obtaining "an optimal and sustainable utilisation of resources".
This notion introduces a new approach to European transport policy, in which one does not seek, as in the 2001 white paper, to set transport modes against one another (i.e. road transport against its alternatives), but rather to find an optimum that exploits the domains of relevance of the various transport modes and of their combinations.
Controversy
The transition from the support of intermodality and multimodality, as set out in the 2001 white paper, to the notion of co-modality has been seen by many observers of the transport sector as a sign of the abandonment of a policy oriented towards developing alternatives to road transport.
See also
Cycle lane
References
External links
White paper European transport policy for 2010 : time to decide
Mid-term review of the 2001 Transport White Paper Keep Europe Moving
Opinion of the Committee of the Regions on the mid-term review of the European Commission's 2001 transport white paper
Transportation planning
Intermodal transport | Co-modality | [
"Physics"
] | 270 | [
"Physical systems",
"Transport",
"Intermodal transport"
] |
12,928,115 | https://en.wikipedia.org/wiki/Hydroelasticity | In fluid dynamics and elasticity, hydroelasticity or flexible fluid-structure interaction (FSI), is a branch of science which is concerned with the motion of deformable bodies through liquids. The theory of hydroelasticity has been adapted from aeroelasticity, to describe the effect of structural response of the body on the fluid around it.
Definition
It is the analysis of the time-dependent interaction between hydrodynamic and elastic structural forces. The vibration of floating and submerged ocean structures and vessels falls within this field of naval architecture.
Importance
Hydroelasticity is of concern in various areas of marine technology such as:
High-speed craft.
Ships, in which the phenomena of springing and whipping affect fatigue and extreme loading
Large scale floating structures such as floating airports, floating bridges and buoyant tunnels.
Marine Risers.
Cable systems and umbilicals for remotely operated or tethered underwater vehicles.
Seismic cable systems.
Flexible containers for water transport, oil spill recovery and other purposes.
Areas of research
Analytical and numerical methods in FSI.
Techniques for laboratory and in-service investigations.
Stochastic methods.
Hydroelasticity-based prediction of Wave Loads and Responses.
Impact, sloshing and shock.
Flow induced vibration (FIV).
Tsunami and seaquake induced responses of large marine structures.
Devices for energy extraction.
Current research
Analysis and design of marine structures or systems necessitates integration of hydrodynamics and structural mechanics; i.e. hydroelasticity plays the key role. There has been significant recent progress in research into the hydroelastic phenomena, and the topic of hydroelasticity is of considerable current interest.
Institutes and laboratories
Norwegian University of Science and Technology (NTNU), Trondheim, Norway
University of Southampton, Southampton, UK.
MARINTEK : Marine Technology Centre, Trondheim, Norway
MARIN : Maritime Research Institute Netherlands.
MIT
University of Michigan.
Indian Institute of Technology Kharagpur, India.
Saint Petersburg State University, Russia.
National Maritime Research Institute, Japan.
Research Institute of Applied Mechanics, Kyushu University, Japan.
Computational Fluid Dynamics Laboratory, National Taiwan University of Science and Technology, Taiwan.
Lee Dynamics, Houston, Texas, USA
Conferences
HYDROELAS : International conference on Hydroelasticity in marine technology.
FSI : International conference on fluid-structure interaction.
OT : Offshore Technology Conference.
ISOPE : International Society of Offshore and Polar Engineers conference.
Journals
Journal of Sound and Vibration.
Journal of Ship Research.
Applied Ocean research.
Journal of Engineering Mechanics.
IEEE Journal of Oceanic Engineering.
Journal of Fluids and Structures
References
R.E.D.Bishop and W.G.Price, "Hydroelasticity of ships"; Cambridge University Press, 1979, .
Fumiki Kitō, "Principles of hydro-elasticity", Tokyo : Memorial Committee for Retirement of Dr. F. Kito; Distributed by Yokendo Co., 1970, LCCN 79566961.
Edited by S.K.Chakrabarti and C.A.Brebbia, "Fluid structure interaction", Southampton; Boston: WIT, c2001, .
Edited by S.K.Chakrabarti and C.A.Brebbia, "Fluid structure interaction and moving boundary problems IV", Southampton : WIT, c2007, .
Edited by Subrata K. Chakrabarti, "Handbook of offshore engineering", Amsterdam; London : Elsevier, 2005, .
Subrata K. Chakrabarti, "Hydrodynamics of offshore structures", Southampton : Computational Mechanics; Berlin : Springer Verlag, c1987, .
Subrata K. Chakrabarti, "Nonlinear methods in offshore engineering", Amsterdam; New York : Elsevier, 1990, .
Edited by S.K. Chakrabarti, "Numerical models in fluid-structure interaction", Southampton, UK; Boston : WIT, c2005, .
Subrata Kumar Chakrabarti, "Offshore structure modeling", Singapore; River Edge, N.J. : World Scientific, c1994, (OCoLC)ocm30491315.
Subrata K. Chakrabarti, "The theory and practice of hydrodynamics and vibration", River Edge, N.J. : World Scientific, c2002, .
D. Karmakar, J. Bhattacharjee and T. Sahoo, "Expansion formulae for wave structure interaction problems with applications in hydroelasticity ", Intl. J. Engng. Science, 2007: 45(10), 807–828.
Storhaug, Gaute, "Experimental investigation of wave induced vibrations and their effect on the fatigue loading of ships", PhD dissertation, NTNU, 2007:133, .
Storhaug, Gaute et al. "Measurements of wave induced hull girder vibrations of an ore carrier in different trades", Journal of Offshore Mechanics and Arctic Engineering, Nov. 2007.
Ottó Haszpra, "Modelling hydroelastic vibrations", London; San Francisco : Pitman, 1979, .
Hirdaris, S.E., Price, W.G and Temarel, P. (2003). Two- and three-dimensional hydroelastic modelling of a bulker in regular waves. Marine Structures 16(8):627-658, doi:10.1016/j.marstruc.2004.01.005
Hirdaris, S.E. and Temarel, P. (2009). Hydroelasticity of Ships - recent advances and future trends. Proceedings (Part M) of the Institution of Mechanical Engineers : Journal of Engineering for the Maritime Environment, 223(3):305-330, doi:10.1243/14750902JEME160
Temarel, P. and Hirdaris, S.E. Eds.(2009). Hydroelasticity in Marine Technology - Proceedings of the 5th International Conference HYELAS'09, Published by the University of Southampton - UK,
Fluid dynamics | Hydroelasticity | [
"Chemistry",
"Engineering"
] | 1,249 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
12,928,483 | https://en.wikipedia.org/wiki/Infra%20Corporation | Infra Corporation is a division of EMC Corporation that produces infraEnterprise, which is a multi-tier web-based IT Service Management software tool. The software is based on ITIL and it implements a number of ITIL processes, including Service Desk management (including Incident Management and Problem Management), Change Management, Release Management, Configuration Management (including Federated CMDB), Availability Management and Service Level Management. The tool also includes a knowledge base module (known as the "knowledge bank"), which complies with principles of Knowledge-Centered Support (KCS).
History
Infra Corporation was first established in 1991 in Australia, and now has regional head offices in North America, Australia, the UK and Europe and a worldwide network of partners and distributors.
Merger and acquisitions
Infra was acquired by Hopkinton, Massachusetts-based EMC Corporation on 10 March 2008, in a move viewed by analysts as part of EMC's ongoing strategy to establish itself as an IT management solution provider.
VMware acquired Ionix Service Manager (formerly Infra) in 2010 and subsequently re-branded the tool VMware Service Manager. Support for this product began on 1 July 2010, and end of support has been announced for its latest, and believed to be last, version 9.x, as of 8 March 2017.
In July 2014, it was announced that VMware and IT Service Management software company Alemba had entered into an agreement to hand control of the support and development of VMware Service Manager to Alemba.
Under the terms of this agreement, Alemba has taken over all operational aspects of VMware Service Manager, including support, account management and consultancy. Full product support and a development roadmap will now continue past the previous end of availability date of March 2017.
Alemba rebranded and relicensed the VMware Service Manager product as vFire Core. In December 2014, Alemba announced the release of vFire Core 9.2.0, the first major release of the tool under Alemba's ownership.
Awards and recognition
In 2002, infraEnterprise was awarded PinkVerify ITIL certification from Pink Elephant, an independent consulting firm specialising in ITIL and PRINCE2. Infra has won a number of awards. In 2007, they were awarded the Network Computing magazine "Helpdesk Product of the Year" for infraEnterprise, and were awarded HDI's Best Business Use of Business Support Technology in 2006 at the 11th Annual Help Desk and IT Support Excellence Awards.
References
External links
Infra Corporation
Infra Benelux
Infra France
VMware Alemba Agreement.
Alemba Release vFire Core
Alemba
Information technology management
Dell EMC
Defunct software companies of the United States | Infra Corporation | [
"Technology"
] | 554 | [
"Information technology",
"Information technology management"
] |
25,229,064 | https://en.wikipedia.org/wiki/Evolution%20of%20cells | Evolution of cells refers to the evolutionary origin and subsequent evolutionary development of cells. Cells first emerged at least 3.8 billion years ago, approximately 750 million years after Earth was formed.
The first cells
The initial development of the cell marked the passage from prebiotic chemistry to partitioned units resembling modern cells. The final transition to living entities that fulfill all the definitions of modern cells depended on the ability to evolve effectively by natural selection. This transition has been called the Darwinian transition.
If life is viewed from the point of view of replicator molecules, cells satisfy two fundamental conditions: protection from the outside environment and confinement of biochemical activity. The former condition is needed to keep complex molecules stable in a varying and sometimes aggressive environment; the latter is fundamental for the evolution of biocomplexity. If freely floating molecules that code for enzymes are not enclosed in cells, the enzymes will automatically benefit neighboring replicator molecules as well. Thus, the consequences of diffusion in non-partitioned lifeforms would result in "parasitism by default." Therefore, the selection pressure on replicator molecules will be lower, as the 'lucky' molecule that produces the better enzyme does not fully leverage its advantage over its close neighbors. In contrast, if the molecule is enclosed in a cell membrane, the enzymes coded will be available only to itself. That molecule will uniquely benefit from the enzymes it codes for, increasing individuality and thus accelerating natural selection.
Partitioning may have begun from cell-like spheroids formed by proteinoids, which are observed by heating amino acids with phosphoric acid as a catalyst. They bear much of the basic features provided by cell membranes. Proteinoid-based protocells enclosing RNA molecules could have been the first cellular life forms on Earth.
Another possibility is that the shores of the ancient coastal waters may have been a suitable environment for the initial development of cells. Waves breaking on the shore create a delicate foam composed of bubbles. Shallow coastal waters also tend to be warmer, further concentrating the molecules through evaporation. While bubbles made mostly of water tend to burst quickly, oily bubbles are much more stable. The phospholipid, the primary material of cell membranes, is an example of a common oily compound prevalent in the prebiotic seas.
Both of these options require the presence of massive amounts of chemicals and organic material in order to form cells. A large gathering of organic molecules most likely came from what scientists now call the prebiotic soup. The prebiotic soup refers to the collection of every organic compound that appeared on Earth after it was formed. This soup would have most likely contained the compounds necessary to form early cells.
Phospholipids are composed of a hydrophilic head on one end and a hydrophobic tail on the other. They can come together to form a bilayer membrane. A lipid monolayer bubble can only contain oil and is not conducive to harboring water-soluble organic molecules. On the other hand, a lipid bilayer bubble can contain water and was a likely precursor to the modern cell membrane. If a protein was introduced that increased the integrity of its parent bubble, then that bubble had an advantage. Primitive reproduction may have occurred when the bubbles burst, releasing the results of the experiment into the surrounding medium. Once enough of the right compounds were released into the medium, the development of the first prokaryotes, eukaryotes, and multi-cellular organisms could be achieved.
However, the first cell membrane could not have been composed of phospholipids due to their low permeability, as ions would not be able to pass through the membrane. Rather, it is suggested they were composed of fatty acids, as these can freely exchange ions, allowing geochemically sustained proton gradients at alkaline hydrothermal vents that might lead to prebiotic chemical reactions via CO2 fixation.
Community metabolism
The common ancestor of the now existing cellular lineages (eukaryotes, bacteria, and archaea) may have been a community of organisms that readily exchanged components and genes. It would have contained:
Autotrophs that produced organic compounds from CO2, either photosynthetically or by inorganic chemical reactions;
Heterotrophs that obtained organics from leakage of other organisms
Saprotrophs that absorbed nutrients from decaying organisms
Phagotrophs that were sufficiently complex to envelop and digest particulate nutrients, including other organisms.
The eukaryotic cell seems to have evolved from a symbiotic community of prokaryotic cells. DNA-bearing organelles like mitochondria and chloroplasts are remnants of ancient symbiotic oxygen-breathing bacteria and cyanobacteria, respectively, where at least part of the rest of the cell may have been derived from an ancestral archaean prokaryote cell. This concept is often termed the endosymbiotic theory. There is still debate about whether organelles like the hydrogenosome predated the origin of mitochondria, or vice versa: see the hydrogen hypothesis for the origin of eukaryotic cells.
How the current lineages of microbes evolved from this postulated community is currently unsolved, but subject to intense research by biologists, stimulated by the great flow of new discoveries in genome science.
Genetic code and the RNA world
Modern evidence suggests that early cellular evolution occurred in a biological realm radically distinct from modern biology. It is thought that in this ancient realm, the current genetic role of DNA was largely filled by RNA, and catalysis was also largely mediated by RNA (that is, by ribozyme counterparts of enzymes). This concept is known as the RNA world hypothesis.
According to this hypothesis, the ancient RNA world transitioned into the modern cellular world via the evolution of protein synthesis, followed by replacement of many cellular ribozyme catalysts by protein-based enzymes. Proteins are much more flexible in catalysis than RNA due to the existence of diverse amino acid side chains with distinct chemical characteristics. The RNA record in existing cells appears to preserve some 'molecular fossils' from this RNA world. These RNA fossils include the ribosome itself (in which RNA catalyzes peptide-bond formation), the modern ribozyme catalyst RNase P, and RNAs.
The nearly universal genetic code preserves some evidence for the RNA world. For instance, recent studies of transfer RNAs, the enzymes that charge them with amino acids (the first step in protein synthesis), and the way these components recognize and exploit the genetic code have been used to suggest that the universal genetic code emerged before the evolution of the modern amino acid activation method for protein synthesis. The first RNA polymers probably emerged prior to 4.17 Gya if life originated in freshwater environments similar to Darwin's warm little pond.
Sexual reproduction
The evolution of sexual reproduction may be a primordial and fundamental characteristic of eukaryotes, including single cell eukaryotes. Based on a phylogenetic analysis, Dacks and Roger proposed that facultative sex was present in the common ancestor of all eukaryotes. Hofstatter and Lehr reviewed evidence supporting the hypothesis that all eukaryotes can be regarded as sexual, unless proven otherwise.
Sexual reproduction may have arisen in early protocells with RNA genomes (RNA world). Initially, each protocell would likely have contained one RNA genome (rather than multiple) since this maximizes the growth rate. However, the occurrence of damages to the RNA which block RNA replication or interfere with ribozyme function would make it advantageous to fuse periodically with another protocell to restore reproductive ability. This early, simple form of genetic recovery is similar to that occurring in extant segmented single-stranded RNA viruses (see influenza A virus).
As duplex DNA became the predominant form of the genetic material, the mechanism of genetic recovery evolved into the more complex process of meiotic recombination, found today in most species. It thus appears likely that sexual reproduction arose early in the evolution of cells and has had a continuous evolutionary history.
Horizontal gene transfer
Horizontal gene transfer (HGT) is the movement of genetic information between different organisms of the same species, mainly bacteria. It is not the movement of genetic information between a parent and its offspring, but transfer by other means. In contrast to animals, which reproduce and evolve through sexual reproduction, bacteria evolve by sharing DNA with other bacteria or by taking it up from their environment.
There are three common mechanisms of transferring genetic material by HGT:
Transformation: The bacteria assimilates DNA from the environment into their own
Conjugation: Bacteria directly transfer genes from one cell to another
Transduction: Bacteriophages (virus) move genes from one bacterial cell to another
Once one of these mechanisms has occurred, the bacteria continue to multiply, can develop resistance, and evolve by natural selection. HGT is the main cause of the assimilation of certain genetic material and of the spread of antibiotic resistance genes (ARGs).
Canonical patterns
Although the evolutionary origins of the major lineages of modern cells are disputed, the primary distinctions between the three major lineages of cellular life (called domains) are firmly established.
In each of these three domains, DNA replication, transcription, and translation all display distinctive features. There are three versions of ribosomal RNAs, and generally three versions of each ribosomal protein, one for each domain of life. These three versions of the protein synthesis apparatus are called the canonical patterns, and the existence of these canonical patterns provides the basis for a definition of the three domains - Bacteria, Archaea, and Eukarya (or Eukaryota) - of currently existing cells.
Using genomics to infer early lines of evolution
Instead of relying on a single gene such as the small-subunit ribosomal RNA (SSU rRNA) gene to reconstruct early evolution, or a few genes, scientific effort has shifted to analyzing complete genome sequences.
Evolutionary trees based only on SSU rRNA alone do not capture the events of early eukaryote evolution accurately, and the progenitors of the first nucleated cells are still uncertain. For instance, analysis of the complete genome of the eukaryote yeast shows that many of its genes are more closely related to bacterial genes than they are to archaea, and it is now clear that archaea were not the simple progenitors of the eukaryotes, in contradiction to earlier findings based on SSU rRNA and limited samples of other genes.
One hypothesis is that the first nucleated cell arose from two distinctly different ancient prokaryotic (non-nucleated) species that had formed a symbiotic relationship with one another to carry out different aspects of metabolism. One partner of this symbiosis is proposed to be a bacterial cell, and the other an archaeal cell. It is postulated that this symbiotic partnership progressed via the cellular fusion of the partners to generate a chimeric or hybrid cell with a membrane-bound internal structure that was the forerunner of the nucleus. The next stage in this scheme was the transfer of both partner genomes into the nucleus and their fusion with one another. Several variations of this hypothesis for the origin of nucleated cells have been suggested. Other biologists dispute this conception and emphasize the community metabolism theme, the idea that early living communities would comprise many entities quite different from extant cells and would have shared their genetic material more extensively than current microbes.
Quotes
"The First Cell arose in the previously prebiotic world with the coming together of several entities that gave a single vesicle the unique chance to carry out three essential and quite different life processes. These were: (a) to copy informational macromolecules, (b) to carry out specific catalytic functions, and (c) to couple energy from the environment into usable chemical forms. These would foster subsequent cellular evolution and metabolism. Each of these three essential processes probably originated and was lost many times prior to The First Cell, but only when these three occurred together was life jump-started and Darwinian evolution of organisms began." (Koch and Silver, 2005)
"The evolution of modern cells is arguably the most challenging and important problem the field of Biology has ever faced. In Darwin's day the problem could hardly be imagined. For much of the 20th century it was intractable. In any case, the problem lay buried in the catch-all rubric "origin of life"--where, because it is a biological not a (bio)chemical problem, it was effectively ignored. Scientific interest in cellular evolution started to pick up once the universal phylogenetic tree, the framework within which the problem had to be addressed, was determined. But it was not until microbial genomics arrived on the scene that biologists could actually do much about the problem of cellular evolution." (Carl Woese, 2002)
References
Further reading
External links
Life on Earth
The universal nature of biochemistry
Endosymbiosis and The Origin of Eukaryotes
Origins of the Eukarya.
Cell biology
Evolutionary biology | Evolution of cells | [
"Biology"
] | 2,658 | [
"Evolutionary biology",
"Cell biology"
] |
25,230,405 | https://en.wikipedia.org/wiki/C8H15NO3 | The molecular formula C8H15NO3 (molar mass: 173.21 g/mol, exact mass: 173.1052 u) may refer to:
Acetylleucine
Levacetylleucine
Swainsonine
SCH-50911
Molecular formulas | C8H15NO3 | [
"Physics",
"Chemistry"
] | 60 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |