id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
3,702,470 | https://en.wikipedia.org/wiki/Curtin%E2%80%93Hammett%20principle | The Curtin–Hammett principle is a principle in chemical kinetics proposed by David Yarrow Curtin and Louis Plack Hammett. It states that, for a reaction that has a pair of reactive intermediates or reactants that interconvert rapidly (as is usually the case for conformational isomers), each going irreversibly to a different product, the product ratio will depend both on the difference in energy between the two conformers and the energy barriers from each of the rapidly equilibrating isomers to their respective products. Stated another way, the product distribution reflects the difference in energy between the two rate-limiting transition states. As a result, the product distribution will not necessarily reflect the equilibrium distribution of the two intermediates. The Curtin–Hammett principle has been invoked to explain selectivity in a variety of stereo- and regioselective reactions. The relationship between the (apparent) rate constants and equilibrium constant is known as the Winstein-Holness equation.
Definition
The Curtin–Hammett principle applies to systems in which different products are formed from two substrates in equilibrium with one another. The rapidly interconverting reactants can have any relationship between themselves (stereoisomers, constitutional isomers, conformational isomers, etc.). Product formation must be irreversible, and the different products must be unable to interconvert.
For example, given species A and B that equilibrate rapidly while A turns irreversibly into C, and B turns irreversibly into D:
C ←[k1] A ⇌[K] B →[k2] D
K is the equilibrium constant between A and B, and k1 and k2 are the rate constants for the formation of C and D, respectively. When the rate of interconversion between A and B is much faster than either k1 or k2, then the Curtin–Hammett principle tells us that the C:D product ratio is not equal to the equilibrium A:B reactant ratio, but is instead determined by the relative energies of the transition states (i.e., difference in the absolute energies of the transition states). If reactants A and B were at identical energies, the product ratio would depend only on the activation barriers of the reactions leading to each respective product. However, in a real-world scenario, the two reactants are likely at somewhat different energy levels, although the barrier to their interconversion must be low for the Curtin–Hammett scenario to apply. In this case, the product distribution depends both on the equilibrium ratio of A to B and on the relative activation barriers going to the corresponding products C and D. Both factors are taken into account by the difference in the energies of the transition states (ΔΔG‡ in the figure below).
The reaction coordinate free energy profile of a typical reaction under Curtin-Hammett control is represented by the following figure:
The ratio of products only depends on the value labeled ΔΔG‡ in the figure: C will be the major product, because the energy of TS1 is lower than the energy of TS2. A common but false assertion is that the product distribution does not in any way reflect the relative free energies of substrates A and B; in fact, it reflects the relative free energies of the substrates and the relative activation energies. This misunderstanding may stem from failing to appreciate the distinction between "the difference of energies of activation" and "the difference in transition state energies". Although these quantities may at first appear synonymous, the latter takes into account the equilibrium constant for interconversion of A and B, while the former does not.
Mathematically, the product ratio can be expressed as a function of K, k1, and k2 or in terms of the corresponding energies ΔG°, ΔG1‡, and ΔG2‡. By combining terms, the product ratio can be rewritten in terms of the quantity ΔΔG‡ alone, where ΔΔG‡ = (ΔG2‡ – ΔG1‡) + ΔG°. Inspection of the energy diagram (shown above) makes it apparent that ΔΔG‡ is precisely the difference in transition state energies.
Derivation
A generic reaction under Curtin–Hammett can be described by the following parameters:
C ←[k1] A ⇌[K] B →[k2] D
In order for rapid equilibration to be a good assumption, the rate of conversion from the less stable of A or B to the product C or D must be at least 10 times slower than the rate of equilibration between A and B.
The rate of formation of compound C from A is given as

d[C]/dt = k1[A],

and that of D from B as

d[D]/dt = k2[B] ≈ k2K[A],

with the second approximate equality following from the assumption of rapid equilibration, under which [B] ≈ K[A]. Under this assumption, the ratio of the products is then

[D]/[C] = k2[B]/(k1[A]) ≈ k2K/k1.
In other words, because equilibration is fast compared to product formation, [B]/[A] ≈ K throughout the reaction. As a result, the instantaneous rate ratio (d[D]/dt)/(d[C]/dt) ≈ k2K/k1 also remains roughly constant throughout the reaction. In turn, integration with respect to time implies that the product ratio [D]/[C] likewise takes on an approximately constant value through the course of the reaction, namely [D]/[C] ≈ k2K/k1.
In terms of the ground state and transition state energies, the product ratio can therefore be written as:

[D]/[C] = (k2/k1)·K = e^(−[(ΔG2‡ − ΔG1‡) + ΔG°]/RT).

Importantly, inspection of the energy diagram above allows us to identify

ΔΔG‡ = (ΔG2‡ − ΔG1‡) + ΔG°

with the energy difference of the transition states, giving us a simplified equation that captures the essence of the Curtin–Hammett principle:

[D]/[C] = e^(−ΔΔG‡/RT).
Thus, although the product ratio depends on the equilibrium constant between A and B and the difference in energy between the barriers from A to C and from B to D, both of these factors are automatically taken into account by the energy difference of the transition states leading to the products, ΔΔG‡.
Classes of reactions under Curtin–Hammett control
Three main classes of reactions can be explained by the Curtin–Hammett principle: the more stable conformer may react more quickly, the less stable conformer may react more quickly, or both conformers may react at the same rate.
Case I: More stable conformer reacts more quickly
One category of reactions under Curtin–Hammett control includes transformations in which the more stable conformer reacts more quickly. This occurs when the transition state from the major intermediate to its respective product is lower in energy than the transition state from the minor intermediate to the other possible product. The major product is then derived from the major conformer, and the product distribution does not mirror the equilibrium conformer distribution.
Example: piperidine oxidation
An example of a Curtin–Hammett scenario in which the more stable conformational isomer reacts more quickly is observed during the oxidation of piperidines. In the case of N-methyl piperidine, inversion at nitrogen between diastereomeric conformers is much faster than the rate of amine oxidation. The conformation which places the methyl group in the equatorial position is 3.16 kcal/mol more stable than the axial conformation. The product ratio of 95:5 indicates that the more stable conformer leads to the major product.
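For a rough sense of scale (assuming the oxidation is run near room temperature, 298 K, which is an assumption made here for illustration), a 95:5 product ratio corresponds to a difference in transition state energies of

ΔΔG‡ = RT·ln(95/5) ≈ (0.593 kcal/mol) × 2.94 ≈ 1.7 kcal/mol.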
Case II: Less stable conformer reacts more quickly
A second category of reactions under Curtin–Hammett control includes those in which the less stable conformer reacts more quickly. In this case, despite an energetic preference for the less reactive species, the major product is derived from the higher-energy species. An important implication is that the product of a reaction can be derived from a conformer that is at sufficiently low concentration as to be unobservable in the ground state.
Example: tropane alkylation
The alkylation of tropanes with methyl iodide is a classic example of a Curtin–Hammett scenario in which a major product can arise from a less stable conformation. Here, the less stable conformer reacts via a more stable transition state to form the major product. Therefore, the ground state conformational distribution does not reflect the product distribution.
Case III: Both conformers react at the same rate
It is hypothetically possible that two different conformers in equilibrium could react through transition states that are equal in energy. In this case, both conformers would react at the same rate, and product selectivity would be lost: the two products would form in an essentially 1:1 ratio, regardless of the equilibrium distribution of the ground-state conformers.
Example: SN2 reaction of cyclohexyl iodide
Ernest L. Eliel has proposed that the hypothetical reaction of cyclohexyl iodide with radiolabeled iodide would result in a completely symmetric transition state. Because both the equatorial and axial-substituted conformers would react through the same transition state, ΔΔG‡ would equal zero. By the Curtin–Hammett principle, the distribution of products should then be 50% axial substituted and 50% equatorial substituted. However, equilibration of the products precludes observation of this phenomenon.
Example: radical methylation
When ground state energies are different but transition state energies are similar, selectivity will be degraded in the transition state, and poor overall selectivity may be observed. For instance, high selectivity for one ground state conformer is observed in the following radical methylation reaction.
The conformer in which A(1,3) strain is minimized is at an energy minimum, giving 99:1 selectivity in the ground state. However, transition state energies depend both on the presence of A(1,3) strain and on steric hindrance associated with the incoming methyl radical. In this case, these two factors are in opposition, and the difference in transition state energies is small compared to the difference in ground state energies. As a result, poor overall selectivity is observed in the reaction.
Application to stereoselective and regioselective reactions
The Curtin–Hammett principle is used to explain the selectivity ratios for some stereoselective reactions.
Application to dynamic kinetic resolution
The Curtin–Hammett principle can explain the observed dynamics in transformations employing dynamic kinetic resolution, such as the Noyori asymmetric hydrogenation and enantioselective lithiation.
Noyori asymmetric hydrogenation
Rapid equilibration between enantiomeric conformers and irreversible hydrogenation place the reaction under Curtin–Hammett control. The use of a chiral catalyst results in a higher-energy and a lower-energy transition state for hydrogenation of the two enantiomers. The transformation occurs via the lower-energy transition state to form the product as a single enantiomer.
Consistent with the Curtin–Hammett principle, the ratio of products depends on the relative energies of the two transition states of the irreversible step of the reaction, and does not reflect the equilibrium distribution of substrate conformers. The relative free energy profile of one example of the Noyori asymmetric hydrogenation is shown below:
Enantioselective lithiation
Dynamic kinetic resolution under Curtin–Hammett conditions has also been applied to enantioselective lithiation reactions. In the reaction below, it was observed that product enantioselectivities were independent of the chirality of the starting material. The use of (−)-sparteine is essential to enantioselectivity, with racemic product being formed in its absence.
Equilibration between the two alkyllithium complexes was demonstrated by the observation that enantioselectivity remained constant over the course of the reaction. Were the two reactant complexes not rapidly interconverting, enantioselectivity would erode over time as the faster-reacting conformer was depleted.
Application to regioselective acylation
The Curtin–Hammett principle has been invoked to explain regioselectivity in the acylation of 1,2-diols. Ordinarily, the less-hindered site of an asymmetric 1,2-diol would experience more rapid esterification due to reduced steric hindrance between the diol and the acylating reagent. Developing a selective esterification of the most substituted hydroxyl group is a useful transformation in synthetic organic chemistry, particularly in the synthesis of carbohydrates and other polyhydroxylated compounds. Stannylene acetals have been used to efficiently achieve this transformation.
The asymmetric diol is first treated with a tin reagent to produce the dibutylstannylene acetal. This compound is then treated with one equivalent of acyl chloride to produce the stannyl monoester. Two isomers of the stannyl ester are accessible, and can undergo rapid interconversion through a tetrahedral intermediate. Initially, the less stable isomer predominates, as it is formed more quickly from the stannyl acetal. However, allowing the two isomers to equilibrate results in an excess of the more stable primary alkoxy stannane in solution. The reaction is then quenched irreversibly, with the less hindered primary alkoxy stannane reacting more rapidly. This results in selective production of the more-substituted monoester. This is a Curtin–Hammett scenario in which the more stable isomer also reacts more rapidly.
Application to asymmetric epoxidation
The epoxidation of asymmetric alkenes has also been studied as an example of Curtin–Hammett kinetics. In a computational study of the diastereoselective epoxidation of chiral allylic alcohols by titanium peroxy complexes, the computed difference in transition state energies between the two conformers was 1.43 kcal/mol. Experimentally, the observed product ratio was 91:9 in favor of the product derived from the lower-energy transition state. This product ratio is consistent with the computed difference in transition state energies. This is an example in which the conformer favored in the ground state, which experiences reduced A(1,3) strain, reacts through a lower-energy transition state to form the major product.
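As a consistency check (assuming the reaction is run near room temperature, 298 K, which is an assumption made here for illustration), a transition-state energy difference of 1.43 kcal/mol predicts a product ratio of about

e^(1.43/0.593) ≈ 11, i.e. roughly 92:8,

in good agreement with the observed 91:9 ratio.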
Synthetic applications
Synthesis of AT2433-A1
The Curtin–Hammett principle has been invoked to explain selectivity in a variety of synthetic pathways. One example is observed en route to the antitumor antibiotic AT2433-A1, in which a Mannich-type cyclization proceeds with excellent regioselectivity. Studies demonstrate that the cyclization step is irreversible in the solvent used to run the reaction, suggesting that Curtin–Hammett kinetics can explain the product selectivity.
Synthesis of kapakahines B and F
A Curtin–Hammett scenario was invoked to explain selectivity in the syntheses of kapakahines B and F, two cyclic peptides isolated from marine sponges. The structure of each of the two compounds contains a twisted 16-membered macrocycle.
A key step in the syntheses is selective amide bond formation to produce the correct macrocycle. In Phil Baran's enantioselective synthesis of kapakahines B and F, macrocycle formation was proposed to occur via two isomers of the substrate. The more easily accessible, lower energy isomer led to the undesired product, whereas the less stable isomer formed the desired product. However, because the amide-bond-forming step was irreversible and the barrier to isomerization was low, the major product was derived from the faster-reacting intermediate. This is an example of a Curtin–Hammett scenario in which the less-stable intermediate is significantly more reactive than the more stable intermediate that predominates in solution. Because substrate isomerization is fast, throughout the course of the reaction excess substrate of the more stable form can be converted into the less stable form, which then undergoes rapid and irreversible amide bond formation to produce the desired macrocycle. This strategy provided the desired product in >10:1 selectivity.
Synthesis of (+)-griseofulvin
In the first enantioselective synthesis of (+)-Griseofulvin, a potent antifungal agent, a Curtin–Hammett situation was observed. A key step in the synthesis is the rhodium-catalyzed formation of an oxonium ylide, which then undergoes a [2,3] sigmatropic rearrangement en route to the desired product. However, the substrate contains two ortho-alkoxy groups, either of which could presumably participate in oxonium ylide generation.
Obtaining high selectivity for the desired product was possible, however, due to differences in the activation barriers for the step following ylide formation. If the ortho-methoxy group undergoes oxonium ylide formation, a 1,4-methyl shift can then generate an undesired product. The oxonium ylide formed from the other ortho-alkoxy group is primed to undergo a [2,3] sigmatropic rearrangement to yield the desired compound. Pirrung and coworkers reported complete selectivity for the desired product over the product resulting from a 1,4-methyl shift. This result suggests that oxonium ylide formation is reversible, but that the subsequent step is irreversible. The symmetry-allowed [2,3] sigmatropic rearrangement must follow a pathway that is lower in activation energy than the 1,4-methyl shift, explaining the exclusive formation of the desired product.
Synthesis of (+)-allocyathin B2
A potential Curtin-Hammett scenario was also encountered during the enantioselective total synthesis of (+)-allocyathin B2 by the Trost group. The pivotal step in the synthesis was a Ru-catalyzed diastereoselective cycloisomerization. The reaction could result in the formation of two possible double bond isomers. The reaction provided good selectivity for the desired isomer, with results consistent with a Curtin-Hammett scenario. Initial oxidative cycloruthenation and beta-hydride elimination produce a vinyl-ruthenium hydride. Hydride insertion allows for facile alkene isomerization. It is unlikely that the reaction outcome mirrors the stability of the intermediates, as the large CpRu group experiences unfavorable steric interactions with the nearby isopropyl group. Instead, a Curtin–Hammett situation applies, in which the isomer favored in equilibrium does not lead to the major product. Reductive elimination is favored from the more reactive, less stable intermediate, as strain relief is maximized in the transition state. This produces the desired double bond isomer.
See also
Transition state theory
Chemical kinetics
Gibbs free energy
References
External links
https://web.archive.org/web/20111005191716/http://www.joe-harrity.staff.shef.ac.uk/meetings/CurtinHammettreview.pdf
https://web.archive.org/web/20120402124752/http://evans.harvard.edu/pdf/smnr_2009_WZOREK_JOSEPH.pdf
Chemical kinetics
Physical organic chemistry | Curtin–Hammett principle | [
"Chemistry"
] | 4,094 | [
"Chemical reaction engineering",
"Chemical kinetics",
"Physical organic chemistry"
] |
3,707,854 | https://en.wikipedia.org/wiki/Wigner%20crystal | A Wigner crystal is the solid (crystalline) phase of electrons first predicted by Eugene Wigner in 1934. A gas of electrons moving in a uniform, inert, neutralizing background (i.e. Jellium Model) will crystallize and form a lattice if the electron density is less than a critical value. This is because the potential energy dominates the kinetic energy at low densities, so the detailed spatial arrangement of the electrons becomes important. To minimize the potential energy, the electrons form a bcc (body-centered cubic) lattice in 3D, a triangular lattice in 2D and an evenly spaced lattice in 1D. Most experimentally observed Wigner clusters exist due to the presence of the external confinement, i.e. external potential trap. As a consequence, deviations from the b.c.c or triangular lattice are observed. A crystalline state of the 2D electron gas can also be realized by applying a sufficiently strong magnetic field. However, it is still not clear whether it is the Wigner crystallization that has led to observation of insulating behaviour in magnetotransport measurements on 2D electron systems, since other candidates are present, such as Anderson localization.
More generally, a Wigner crystal phase can also refer to a crystal phase occurring in non-electronic systems at low density. In contrast, most crystals melt as the density is lowered. Examples seen in the laboratory are charged colloids or charged plastic spheres.
Description
A uniform electron gas at zero temperature is characterised by a single dimensionless parameter, the so-called Wigner–Seitz radius rs = a/aB, where a is the average inter-particle spacing and aB is the Bohr radius. The kinetic energy of an electron gas scales as 1/rs², as can be seen for instance by considering a simple Fermi gas. The potential energy, on the other hand, is proportional to 1/rs. When rs becomes larger at low density, the latter becomes dominant and forces the electrons as far apart as possible. As a consequence, they condense into a close-packed lattice. The resulting electron crystal is called the Wigner crystal.
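A quick sketch of this scaling argument (taking the Fermi-gas kinetic energy per electron and the Coulomb energy between neighbouring electrons as representative scales):

Ekin ~ ħ²/(2m·a²) ∝ 1/rs²,  Epot ~ e²/(4πε0·a) ∝ 1/rs,  so Epot/Ekin ∝ rs,

which is why the Coulomb repulsion takes over at low density (large rs).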
Based on the Lindemann criterion one can find an estimate for the critical rs. The criterion states that the crystal melts when the root-mean-square displacement of the electrons is about a quarter of the lattice spacing a. On the assumption that vibrations of the electrons are approximately harmonic, one can use that for a quantum harmonic oscillator the root-mean-square displacement in the ground state (in 3D) is given by

⟨r²⟩ = 3ħ/(2mω),

with ħ the reduced Planck constant, m the electron mass and ω the characteristic frequency of the oscillations. The latter can be estimated by considering the electrostatic potential energy for an electron displaced by r from its lattice point. Say that the Wigner–Seitz cell associated to the lattice point is approximately a sphere of radius a/2. The uniform, neutralizing background then gives rise to a smeared positive charge of density ρ = e/((4/3)π(a/2)³), with e the electron charge. The resulting potential energy of the displaced electron is, up to a constant,

V(r) = e²r²/(8πε0(a/2)³),

with ε0 the vacuum permittivity. Comparing this to the energy of a harmonic oscillator, (1/2)mω²r², one can read off

ω² = e²/(4πε0·m·(a/2)³),

or, combining this with the result from the quantum harmonic oscillator for the root-mean-square displacement,

⟨r²⟩ = (3ħ/2)·√(4πε0·(a/2)³/(m·e²)).

The Lindemann criterion then gives the estimate that rs > 40 is required to give a stable Wigner crystal. Quantum Monte Carlo simulations indicate that the uniform electron gas actually crystallizes at rs = 106 in 3D and rs = 31 in 2D.
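For orientation (an order-of-magnitude sketch using the 3D Wigner–Seitz relation (4/3)π(rs·aB)³ = 1/n and aB ≈ 0.053 nm), rs = 106 corresponds to a critical electron density of roughly

n ≈ 3/(4π·(106 × 0.053 nm)³) ≈ 1 × 10^18 cm⁻³,

far below typical conduction-electron densities in simple metals (of order 10^22–10^23 cm⁻³), which is why Wigner crystallization is not seen in ordinary bulk metals.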
For classical systems at elevated temperatures one uses the average interparticle interaction in units of the temperature, Γ = e²/(4πε0·a·kBT). The Wigner transition occurs at Γ = 170 in 3D and Γ = 125 in 2D. It is believed that ions, such as those of iron, form a Wigner crystal in the interiors of white dwarf stars.
Experimental realisation
In practice, it is difficult to experimentally realize a Wigner crystal because quantum mechanical fluctuations overpower the Coulomb repulsion and quickly cause disorder. Low electron density is needed. One notable example occurs in quantum dots with low electron densities or high magnetic fields where electrons will spontaneously localize in some situations, forming a so-called rotating "Wigner molecule", a crystalline-like state adapted to the finite size of the quantum dot.
Wigner crystallization in a two-dimensional electron gas under high magnetic fields was predicted (and was observed experimentally) to occur for small filling factors (less than ν ≈ 1/5) of the lowest Landau level. For larger fractional fillings, the Wigner crystal was thought to be unstable relative to the fractional quantum Hall effect (FQHE) liquid states. A Wigner crystal was observed in the immediate neighborhood of the large fractional filling ν = 1/3, and led to a new understanding (based on the pinning of a rotating Wigner molecule) for the interplay between quantum-liquid and pinned-solid phases in the lowest Landau level.
Another experimental realisation of the Wigner crystal occurred in single-electron transistors with very low currents, where a 1D Wigner crystal formed. The current due to each electron can be directly detected experimentally.
Additionally, experiments using quantum wires (short quantum wires are sometimes referred to as ‘quantum point contacts’ (QPCs)) have led to suggestions of Wigner crystallization in 1D systems. In the experiment performed by Hew et al., a 1D channel was formed by confining electrons in both directions transverse to the electron transport, by the band structure of the GaAs/AlGaAs heterojunction and the potential from the QPC. The device design allowed the electron density in the 1D channel to vary relatively independently of the strength of the transverse confining potential, thus allowing experiments to be performed in the regime in which Coulomb interactions between electrons dominate the kinetic energy. Conductance through a QPC shows a series of plateaux quantized in units of the conductance quantum, 2e²/h. However, this experiment reported a disappearance of the first plateau (resulting in a jump in conductance of 4e²/h), which was attributed to the formation of two parallel rows of electrons. In a strictly 1D system, electrons occupy equidistant points along a line, i.e. a 1D Wigner crystal. As the electron density increases, the Coulomb repulsion becomes large enough to overcome the electrostatic potential confining the 1D Wigner crystal in the transverse direction, leading to a lateral rearrangement of the electrons into a double-row structure. The evidence of a double row observed by Hew et al. may point towards the beginnings of a Wigner crystal in a 1D system.
In 2018, a transverse magnetic focusing that combines charge and spin detection was used to directly probe a Wigner crystal and its spin properties in 1D quantum wires with tunable width. It provides direct evidence and a better understanding of the nature of zigzag Wigner crystallization by unveiling both the structural and the spin phase diagrams.
Direct evidence for the formation of small Wigner crystals was reported in 2019.
In 2024, physicists managed to directly image a Wigner crystal with a scanning tunneling microscope.
Wigner crystal materials
Some layered Van der Waals materials, such as transition metal dichalcogenides, have intrinsically large rs values which exceed the 2D theoretical Wigner crystal limit rs=31~38. The origin of the large rs is partly due to the suppressed kinetic energy arising from a strong electron-phonon interaction which leads to polaronic band narrowing, and partly due to the low carrier density n at low temperatures. The charge density wave (CDW) state in such materials, such as 1T-TaS2, with a sparsely filled √13×√13 superlattice and rs=70~100, may be considered to be better described in terms of a Wigner crystal than the more traditional charge density wave. This viewpoint is supported both by modelling and by systematic scanning tunnelling microscopy measurements. Thus, Wigner crystal superlattices in so-called CDW systems may be considered to be the first direct observation of ordered electron states localised by mutual Coulomb interaction. An important criterion for Wigner crystallization is the depth of charge modulation, which depends on the material, and only systems where rs exceeds the theoretical limit can be regarded as Wigner crystals.
In 2020, a direct image of a Wigner crystal observed by microscopy was obtained in molybdenum diselenide/molybdenum disulfide (MoSe2/MoS2) moiré heterostructures.
A 2021 experiment created a Wigner crystal near 0K by confining electrons using a monolayer sheet of molybdenum diselenide. The sheet was sandwiched between two graphene electrodes and a voltage was applied. The resulting electron spacing was around 20 nanometers, as measured by the stationary appearance of light-agitated excitons.
Another 2021 experiment reported quantum Wigner crystals where quantum fluctuations dominate over the thermal fluctuation in two coupled layers of molybdenum diselenide without any magnetic field. The researchers documented both thermal and quantum melting of the Wigner crystal in this experiment.
References
Condensed matter physics | Wigner crystal | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,901 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
21,186,771 | https://en.wikipedia.org/wiki/Collision/reaction%20cell | A collision/reaction cell is a device used in inductively coupled plasma mass spectrometry to remove interfering ions through ion/neutral reactions.
Dynamic reaction cell
The collision reaction cell, also known by the trade name "dynamic reaction cell" (DRC), was introduced by Perkin-Elmer on their Elan DRC (followed by Elan DRC II and Elan DRC-e) instrument. The dynamic reaction cell is a chamber placed before the traditional quadrupole chamber of an ICP-MS device, for eliminating isobaric interferences. The chamber has a quadrupole and can be filled with reaction (or collision) gases (ammonia, methane, oxygen or hydrogen), with one gas type at a time or a mixture of two of them, which reacts with the introduced sample, eliminating some of the interference.
The DRC is characterized by two main parameters that can be modified: RPq (the corresponding q parameter from the Mathieu equation) and RPa (the corresponding a parameter from the Mathieu equation). These parameters are set through the voltages applied to the quadrupole rods; the flow rate of the reaction gas is a further adjustable setting.
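For reference, the a and q stability parameters in question are conventionally written as (a textbook sketch for a singly charged ion; U is the DC voltage and V the RF amplitude applied to the rods, Ω the RF angular frequency, m the ion mass and r0 the quadrupole field radius, all assumed here for illustration):

a = 8eU/(m·r0²·Ω²),  q = 4eV/(m·r0²·Ω²),

so RPa and RPq are adjusted through the DC and RF voltages on the cell's rods.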
Ammonia gas is typically chosen to mitigate the majority of interferences. However, for specific isotopes other gases may be required for better results, or a mathematical correction may be applied if no gas offers a satisfactory advantage.
Collisional reaction interface (CRI) or mini-Collision/Reaction Cell
The proprietary collisional reaction interface (CRI) used in the Bruker ICP-MS Aurora M90 destroys interfering ions. These ions are removed by injecting a collisional gas (He), or a reactive gas (H2), or a mixture of the two, directly into the plasma as it flows through the skimmer cone and/or the sampler cone. Supplying the reactive/collisional gas into the tip of the skimmer cone induces extra collisions and reactions that destroy polyatomic ions in the passing plasma. Fundamentally, the CRI is a mini collision/reaction cell installed in front of the parabolic ion mirror optics.
Axial field technology
Axial field technology (AFT) is a patented improvement of the DRC made by Perkin-Elmer, which consists of two supplementary rods placed in the DRC cell, smaller than the normal quadrupole rods, with the purpose of "pushing" the ions faster towards the exit by generating a supplementary electric potential, minimizing the time the ions spend in the DRC and improving analysis speed. The supplementary potential of the AFT rods does not contribute significantly to the global energy, but drastically improves ion passage time.
Collision cell technology with kinetic energy discrimination
Thermo Scientific's XSeries2 instrument utilizes a collision/reaction cell for interference removal, consisting of a non-consumable hexapole and chicane ion deflector, which takes the ion beam off-axis and leads to low instrument backgrounds of <0.5 integrated counts per second (icps) at vacant masses such as 5 and 220. This hexapole is inherently part of the Thermo lens system and is present in the ion path, regardless of the use of the collision cell. The collision/reaction gas mixtures can be 1% NH3 in He, 7% H2 in He and 100% H2, where the NH3 and H2 are reactive gases and the He is a collisional gas. The 3rd generation cell utilizes kinetic energy discrimination, which employs running the quadrupole bias slightly less negative (more positive) than the hexapole bias. Polyatomic ions generated within the plasma can have larger atomic radii than analyte ions of similar mass, i.e. the interferent NaAr+ (mass 63) is larger than the analyte Cu+ (mass 63). Thus, when using a collisional/reactive gas mixture, these larger species undergo more collisions/reactions in the cell, in which they lose increasingly more energy, and are then excluded from the quadrupole mass filter by the kinetic energy barrier.
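In energy terms (a simplified sketch; the symbols below are illustrative rather than actual instrument settings), an ion leaving the cell is transmitted only if its remaining axial kinetic energy exceeds the small potential step between the cell and the mass filter:

Ekin(after collisions) > e·(Vquadrupole bias − Vhexapole bias).

Because the larger polyatomic ions undergo more collisions, and so lose more kinetic energy than atomic analyte ions of the same m/z, they fail to clear this barrier and are removed.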
Octopole reaction system
Another implementation of this type of interference removal is an octopole (instead of a quadrupole) collision cell, implemented by Agilent's 7500 series. The octopole reaction system (ORS) uses only helium or hydrogen and the volume of the cell is smaller than that of a DRC. The small molecules of helium and hydrogen collide with the large, unwanted polyatomic ions formed in the plasma and break them up into other ions that can be separated in the quadrupole mass analyser. However, unlike the DRC, the ORS is based only on collision reactions and not on chemical reactions.
References
Mass spectrometry | Collision/reaction cell | [
"Physics",
"Chemistry"
] | 975 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
21,188,237 | https://en.wikipedia.org/wiki/Hybrid%20mass%20spectrometer | A hybrid mass spectrometer is a device for tandem mass spectrometry that consists of a combination of two or more m/z separation devices of different types.
Notation
The different m/z separation elements of a hybrid mass spectrometer can be represented by a shorthand notation. The symbol Q represents a quadrupole mass analyzer, q is a radio frequency collision quadrupole, TOF is a time-of-flight mass spectrometer, B is a magnetic sector and E is an electric sector.
Sector quadrupole
A sector instrument can be combined with a collision quadrupole and quadrupole mass analyzer to form a hybrid instrument. A BEqQ configuration, with a magnetic sector (B), an electric sector (E), a collision quadrupole (q) and an m/z selection quadrupole (Q), has been constructed, and an instrument with two electric sectors (BEEQ) has been described.
Quadrupole time-of-flight
A triple quadrupole mass spectrometer with the final quadrupole replaced by a time-of-flight device is known as a quadrupole time-of-flight instrument. Such an instrument can be represented as QqTOF.
Ion trap time-of-flight
In an ion trap instrument, ions are trapped in a quadrupole ion trap and then injected into the TOF. The trap can be 3-D or a linear trap.
Linear ion trap and Fourier transform mass analyzers
A linear ion trap combined with a Fourier transform ion cyclotron resonance or Orbitrap mass spectrometer is marketed by Thermo Scientific as the LTQ FT and LTQ Orbitrap, respectively.
References
Mass spectrometry
Tandem mass spectrometry | Hybrid mass spectrometer | [
"Physics",
"Chemistry"
] | 358 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Tandem mass spectrometry",
"Mass spectrometry",
"Matter"
] |
21,188,370 | https://en.wikipedia.org/wiki/Fuel | A fuel is any material that can be made to react with other substances so that it releases energy as thermal energy or to be used for work. The concept was originally applied solely to those materials capable of releasing chemical energy but has since also been applied to other sources of heat energy, such as nuclear energy (via nuclear fission and nuclear fusion).
The heat energy released by reactions of fuels can be converted into mechanical energy via a heat engine. Other times, the heat itself is valued for warmth, cooking, or industrial processes, as well as the illumination that accompanies combustion. Fuels are also used in the cells of organisms in a process known as cellular respiration, where organic molecules are oxidized to release usable energy. Hydrocarbons and related organic molecules are by far the most common source of fuel used by humans, but other substances, including radioactive metals, are also utilized.
Fuels are contrasted with other substances or devices storing potential energy, such as those that directly release electrical energy (such as batteries and capacitors) or mechanical energy (such as flywheels, springs, compressed air, or water in a reservoir).
History
The first known use of fuel was the combustion of firewood by Homo erectus nearly two million years ago. Throughout most of human history only fuels derived from plants or animal fat were used by humans. Charcoal, a wood derivative, has been used since at least 6,000 BCE for melting metals. It was only supplanted by coke, derived from coal, as European forests started to become depleted around the 18th century. Charcoal briquettes are now commonly used as a fuel for barbecue cooking.
Crude oil was distilled by Persian chemists, with clear descriptions given in Arabic handbooks such as those of Muhammad ibn Zakarīya Rāzi. He described the process of distilling crude oil/petroleum into kerosene, as well as other hydrocarbon compounds, in his Kitab al-Asrar (Book of Secrets). Kerosene was also produced during the same period from oil shale and bitumen by heating the rock to extract the oil, which was then distilled. Rāzi also gave the first description of a kerosene lamp using crude mineral oil, referring to it as the "naffatah".
The streets of Baghdad were paved with tar, derived from petroleum that became accessible from natural fields in the region. In the 9th century, oil fields were exploited in the area around modern Baku, Azerbaijan. These fields were described by the Arab geographer Abu al-Hasan 'Alī al-Mas'ūdī in the 10th century, and by Marco Polo in the 13th century, who described the output of those wells as hundreds of shiploads.
With the development of the steam engine in the United Kingdom in 1769, coal came into more common use, the combustion of which releases chemical energy that can be used to turn water into steam. Coal was later used to drive ships and locomotives. By the 19th century, gas extracted from coal was being used for street lighting in London. In the 20th and 21st centuries, the primary use of coal is to generate electricity, providing 40% of the world's electrical power supply in 2005.
Fossil fuels were rapidly adopted during the Industrial Revolution, because they were more concentrated and flexible than traditional energy sources, such as water power. They have become a pivotal part of our contemporary society, with most countries in the world burning fossil fuels in order to produce power, but are falling out of favor due to the global warming and related effects that are caused by burning them.
Currently the trend has been towards renewable fuels, such as biofuels like alcohols.
Chemical
Chemical fuels are substances that release energy by reacting with substances around them, most notably by the process of combustion.
Chemical fuels are divided in two ways. First, by their physical properties, as a solid, liquid or gas. Secondly, on the basis of their occurrence: primary (natural fuel) and secondary (artificial fuel). Thus, a general classification of chemical fuels is:
Solid fuel
Solid fuel refers to various types of solid material that are used as fuel to produce energy and provide heating, usually released through combustion. Solid fuels include wood, charcoal, peat, coal, hexamine fuel tablets, and pellets made from wood (see wood pellets), corn, wheat, rye and other grains. Solid-fuel rocket technology also uses solid fuel (see solid propellants). Solid fuels have been used by humanity for many years to create fire. Coal was the fuel source which enabled the Industrial Revolution, from firing furnaces, to running steam engines. Wood was also extensively used to run steam locomotives. Both peat and coal are still used in electricity generation today.
The use of some solid fuels (e.g. coal) is restricted or prohibited in some urban areas, due to unsafe levels of toxic emissions. The use of other solid fuels such as wood is decreasing as heating technology and the availability of good quality fuel improve. In some areas, smokeless coal is often the only solid fuel used. In Ireland, peat briquettes are used as smokeless fuel. They are also used to start a coal fire.
Liquid fuels
Liquid fuels are combustible or energy-generating molecules that can be harnessed to create mechanical energy, usually producing kinetic energy. They take the shape of their container, and it is their fumes (vapors), not the liquids themselves, that are flammable.
Most liquid fuels in widespread use are derived from the fossilized remains of dead plants and animals by exposure to heat and pressure inside the Earth's crust. However, there are several types, such as hydrogen fuel (for automotive uses), ethanol, jet fuel and bio-diesel, which are all categorized as liquid fuels. Emulsified fuels of oil in water, such as orimulsion, have been developed as a way to make heavy oil fractions usable as liquid fuels. Many liquid fuels play a primary role in transportation and the economy.
Some common properties of liquid fuels are that they are easy to transport and can be handled easily. They are also relatively easy to use for all engineering applications and in home use. Fuels like kerosene are rationed in some countries, for example in government-subsidized shops in India for home use.
Conventional diesel is similar to gasoline in that it is a mixture of aliphatic hydrocarbons extracted from petroleum. Kerosene is used in kerosene lamps and as a fuel for cooking, heating, and small engines. Natural gas, composed chiefly of methane, can only exist as a liquid at very low temperatures (regardless of pressure), which limits its direct use as a liquid fuel in most applications. LP gas is a mixture of propane and butane, both of which are easily compressible gases under standard atmospheric conditions. It offers many of the advantages of compressed natural gas (CNG) but is denser than air, does not burn as cleanly, and is much more easily compressed. Commonly used for cooking and space heating, LP gas and compressed propane are seeing increased use in motorized vehicles. Propane is the third most commonly used motor fuel globally.
Fuel gas
Fuel gas is any one of a number of fuels that are gaseous under ordinary conditions. Many fuel gases are composed of hydrocarbons (such as methane or propane), hydrogen, carbon monoxide, or mixtures thereof. Such gases are sources of potential heat energy or light energy that can be readily transmitted and distributed through pipes from the point of origin directly to the place of consumption. Fuel gas is contrasted with liquid fuels and from solid fuels, though some fuel gases are liquefied for storage or transport. While their gaseous nature can be advantageous, avoiding the difficulty of transporting solid fuel and the dangers of spillage inherent in liquid fuels, it can also be dangerous. It is possible for a fuel gas to be undetected and collect in certain areas, leading to the risk of a gas explosion. For this reason, odorizers are added to most fuel gases so that they may be detected by a distinct smell. The most common type of fuel gas in current use is natural gas.
Biofuels
Biofuel can be broadly defined as solid, liquid, or gas fuel consisting of, or derived from biomass. Biomass can also be used directly for heating or power—known as biomass fuel. Biofuel can be produced from any carbon source that can be replenished rapidly e.g. plants. Many different plants and plant-derived materials are used for biofuel manufacture.
Perhaps the earliest fuel employed by humans is wood. Evidence shows controlled fire was used up to 1.5 million years ago at Swartkrans, South Africa. It is unknown which hominid species first used fire, as both Australopithecus and an early species of Homo were present at the sites. As a fuel, wood has remained in use up until the present day, although it has been superseded for many purposes by other sources. Wood has an energy density of 10–20 MJ/kg.
Recently biofuels have been developed for use in automotive transport (for example bioethanol and biodiesel), but there is widespread public debate about how carbon neutral these fuels are.
Fossil fuels
Fossil fuels are hydrocarbons, primarily coal and petroleum (liquid petroleum or natural gas), formed from the fossilized remains of ancient plants and animals by exposure to high heat and pressure in the absence of oxygen in the Earth's crust over hundreds of millions of years. Commonly, the term fossil fuel also includes hydrocarbon-containing natural resources that are not derived entirely from biological sources, such as tar sands. These latter sources are properly known as mineral fuels.
Fossil fuels contain high percentages of carbon and include coal, petroleum, and natural gas.
They range from volatile materials with low carbon:hydrogen ratios like methane, to liquid petroleum to nonvolatile materials composed of almost pure carbon, like anthracite coal. Methane can be found in hydrocarbon fields, alone, associated with oil, or in the form of methane clathrates. Fossil fuels formed from the fossilized remains of dead plants by exposure to heat and pressure in the Earth's crust over millions of years. This biogenic theory was first introduced by German scholar Georg Agricola in 1556 and later by Mikhail Lomonosov in the 18th century.
It was estimated by the Energy Information Administration that in 2007 primary sources of energy consisted of petroleum 36.0%, coal 27.4%, natural gas 23.0%, amounting to an 86.4% share for fossil fuels in primary energy consumption in the world. Non-fossil sources in 2006 included hydroelectric 6.3%, nuclear 8.5%, and others (geothermal, solar, tidal, wind, wood, waste) amounting to 0.9%. World energy consumption was growing about 2.3% per year.
Fossil fuels are non-renewable resources because they take millions of years to form, and reserves are being depleted much faster than new ones are being made. So we must conserve these fuels and use them judiciously. The production and use of fossil fuels raise environmental concerns. A global movement toward the generation of renewable energy is therefore under way to help meet increased energy needs. The burning of fossil fuels produces around 21.3 billion tonnes (21.3 gigatonnes) of carbon dioxide (CO2) per year, but it is estimated that natural processes can only absorb about half of that amount, so there is a net increase of 10.65 billion tonnes of atmospheric carbon dioxide per year (one tonne of atmospheric carbon is equivalent to 44/12, or about 3.7, tonnes of CO2; this is the ratio of the molecular weight of CO2 to the atomic weight of carbon). Carbon dioxide is one of the greenhouse gases that enhances radiative forcing and contributes to global warming, causing the average surface temperature of the Earth to rise in response, which the vast majority of climate scientists agree will cause major adverse effects.
Fuels are a source of energy.
The International Energy Agency (IEA) predicts that fossil fuel prices will decline, with oil stabilizing around $75 to $80 per barrel as electric vehicle adoption surges and renewable energy expands. Additionally, the IEA anticipates a notable increase in liquefied natural gas capacity, enhancing Europe’s energy diversification.
Energy
The amount of energy from different types of fuel depends on the stoichiometric ratio, the chemically correct air and fuel ratio to ensure complete combustion of fuel, and its specific energy, the energy per unit mass.
Notes
1 MJ ≈ 0.28 kWh ≈ 0.37 HPh.
(The fuel-air ratio (FAR) is the reciprocal of the air-fuel ratio (AFR).)
λ is the air-fuel equivalence ratio, and λ=1 means that it is assumed that the fuel and the oxidising agent (oxygen in air) are present in exactly the correct proportions so that they are both fully consumed in the reaction.
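As a rough illustrative calculation (using approximate textbook values for gasoline, quoted here only for illustration): gasoline has a specific energy of about 44 MJ/kg and a stoichiometric air-fuel ratio of about 14.7:1, so complete combustion of 1 kg of gasoline at λ = 1 consumes roughly 15 kg of air and releases

E ≈ 1 kg × 44 MJ/kg = 44 MJ ≈ 12 kWh.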
Nuclear
Nuclear fuel is any material that is consumed to derive nuclear energy. In theory, a wide variety of substances could be a nuclear fuel, as they can be made to release nuclear energy under the right conditions. However, the materials commonly referred to as nuclear fuels are those that will produce energy without being placed under extreme duress. Nuclear fuel can be "burned" by nuclear fission (splitting nuclei apart) or fusion (combining nuclei together) to derive nuclear energy. "Nuclear fuel" can refer to the fuel itself, or to physical objects (for example bundles composed of fuel rods) composed of the fuel material, mixed with structural, neutron moderating, or neutron-reflecting materials.
Nuclear fuel has the highest energy density of all practical fuel sources.
Fission
The most common type of nuclear fuel used by humans is heavy fissile elements that can be made to undergo nuclear fission chain reactions in a nuclear fission reactor; nuclear fuel can refer to the material or to physical objects (for example fuel bundles composed of fuel rods) composed of the fuel material, perhaps mixed with structural, neutron moderating, or neutron reflecting materials.
When some of these fuels are struck by neutrons, they are in turn capable of emitting neutrons when they break apart. This makes possible a self-sustaining chain reaction that releases energy at a controlled rate in a nuclear reactor, or at a very rapid uncontrolled rate in a nuclear weapon.
The most common fissile nuclear fuels are uranium-235 (235U) and plutonium-239 (239Pu). The actions of mining, refining, purifying, using, and ultimately disposing of nuclear fuel together make up the nuclear fuel cycle. Not all types of nuclear fuels create energy from nuclear fission. Plutonium-238 and some other elements are used to produce small amounts of nuclear energy by radioactive decay in radioisotope thermoelectric generators and other types of atomic batteries.
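A representative induced-fission event for the most common fuel, uranium-235, is (one of many possible fragment pairs, shown here as an illustrative example):

n + ²³⁵U → ¹⁴¹Ba + ⁹²Kr + 3n + about 200 MeV,

with most of the roughly 200 MeV released appearing as kinetic energy of the fission fragments.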
Fusion
In contrast to fission, some light nuclides such as tritium (3H) can be used as fuel for nuclear fusion. This involves two or more nuclei combining into larger nuclei.
Fuels that produce energy by this method are currently not utilized by humans, but they are the main source of fuel for stars. Fusion fuels are light elements such as hydrogen whose nuclei combine easily. Energy is required to start fusion by raising the temperature so high that nuclei can collide with enough energy to stick together before being repelled by their electric charge. This process is called fusion and it can give out energy.
In stars that undergo nuclear fusion, fuel consists of atomic nuclei that can release energy by the absorption of a proton or neutron. In most stars the fuel is provided by hydrogen, which can combine to form helium through the proton-proton chain reaction or by the CNO cycle. When the hydrogen fuel is exhausted, nuclear fusion can continue with progressively heavier elements, although the net energy released is lower because of the smaller difference in nuclear binding energy. Once iron-56 or nickel-56 nuclei are produced, no further energy can be obtained by nuclear fusion as these have the highest nuclear binding energies. Any nuclei heavier than 56Fe and 56Ni would thus absorb energy instead of giving it off when fused. Therefore, fusion stops and the star dies. In attempts by humans, fusion is only carried out with hydrogen (2H (deuterium) or 3H (tritium)) to form helium-4, as this reaction gives out the most net energy. Magnetic confinement (ITER), inertial confinement (heating by laser) and heating by strong electric currents are the popular methods.
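The reaction referred to here is the deuterium-tritium reaction (standard textbook values, quoted for illustration):

²H + ³H → ⁴He (3.5 MeV) + n (14.1 MeV),

releasing about 17.6 MeV per fusion event, most of it carried by the neutron.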
Liquid fuels for transportation
Most transportation fuels are liquids, because vehicles usually require high energy density, which occurs naturally in liquids and solids. Vehicles typically use internal combustion engines, and these engines require clean-burning fuels. The fuels that are easiest to burn cleanly are typically liquids and gases. Thus, liquids meet the requirements of being both energy-dense and clean-burning. In addition, liquids (and gases) can be pumped, which means handling is easily mechanized, and thus less laborious. As there is a general movement towards a low carbon economy, the use of liquid fuels such as hydrocarbons is coming under scrutiny.
See also
Alcohol fuel
Algae fuel
Alternative fuels
Ammonia
Bitumen-based fuel
Cryogenic fuel
Electrofuel
Filling station
Fossil fuel phase-out
Fuel card
Fuel cell
Fuel container
Fuel management systems
Fuel oil
Fuel poverty
Hydrogen economy
Hydrotreated vegetable oil
Hypergolic fuel
List of energy topics
Low-carbon economy
Marine fuel management
Propellant
Recycled fuel
Refuse-derived fuel
World energy resources and consumption
Footnotes
Works cited
IPCC, AR5 Climate Change 2013: The Physical Science Basis.
IPCC, Global Warming of 1.5 °C.
References
Further reading
Council Directive 80/1268/EEC Fuel consumption of motor vehicles.
Energy development | Fuel | [
"Chemistry"
] | 3,649 | [
"Fuels",
"Chemical energy sources"
] |
21,189,621 | https://en.wikipedia.org/wiki/SMD%20LED | The light from white LED lamps and LED strip lights is usually provided by industry standard surface-mounted device LEDs (SMD LEDs). Non-SMD types of LED lighting also exist, such as COB (chip on board) and MCOB (multi-COB).
Surface-mounted device LED modules are described by the dimensions of the LED package. A single multicolor module may have three individual LEDs within that package, one each of red, green and blue, to allow many colors or shades of white to be selected, by varying the brightness of the individual LEDs. LED brightness may be increased by using a higher driving current, at the cost of reducing the device's lifespan.
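As a rough illustration of the brightness-mixing idea, the following sketch drives the three dies of a 3-in-1 RGB module with independent PWM duty cycles. It is hypothetical: set_channel_duty, the channel numbering and the 8-bit duty-cycle convention are assumptions made for this example rather than a real driver API, and the function is stubbed out so that the sketch is self-contained.

#include <cstdint>
#include <iostream>

// Assumed platform primitive: set the PWM duty cycle of one LED die (0 = off, 255 = full brightness).
// Stubbed out here; on real hardware this would program a PWM peripheral.
void set_channel_duty(int channel, std::uint8_t duty)
{
    std::cout << "channel " << channel << " -> duty " << static_cast<int>(duty) << '\n';
}

// Mix a colour by varying the brightness of the red (0), green (1) and blue (2) dies independently.
void set_rgb_color(std::uint8_t red, std::uint8_t green, std::uint8_t blue)
{
    set_channel_duty(0, red);
    set_channel_duty(1, green);
    set_channel_duty(2, blue);
}

int main()
{
    set_rgb_color(255, 240, 200); // an approximately warm-white shade
}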
References
Electronic design
Electronics manufacturing
LED lamps | SMD LED | [
"Engineering"
] | 153 | [
"Electronic design",
"Electronic engineering",
"Electronics manufacturing",
"Design"
] |
21,191,343 | https://en.wikipedia.org/wiki/Variadic%20template | In computer programming, variadic templates are templates that take a variable number of arguments.
Variadic templates are supported by C++ (since the C++11 standard), and the D programming language.
C++
The variadic template feature of C++ was designed by Douglas Gregor and Jaakko Järvi and was later standardized in C++11.
Prior to C++11, templates (classes and functions) could only take a fixed number of arguments, which had to be specified when a template was first declared. C++11 allows template definitions to take an arbitrary number of arguments of any type.
template<typename... Values> class tuple; // takes zero or more arguments
The above template class will take any number of typenames as its template parameters. Here, an instance of the above template class is instantiated with three type arguments:
tuple<int, std::vector<int>, std::map<std::string, std::vector<int>>> some_instance_name;
The number of arguments can be zero, so tuple<> some_instance_name; will also work.
If the variadic template should only allow a positive number of arguments, then this definition can be used:
template<typename First, typename... Rest> class tuple; // takes one or more arguments
Variadic templates may also apply to functions, thus not only providing a type-safe add-on to variadic functions (such as printf), but also allowing a function called with printf-like syntax to process non-trivial objects.
template<typename... Params> void my_printf(const std::string &str_format, Params... parameters);
The ellipsis (...) operator has two roles. When it occurs to the left of the name of a parameter, it declares a parameter pack. Using the parameter pack, the user can bind zero or more arguments to the variadic template parameters. Parameter packs can also be used for non-type parameters. By contrast, when the ellipsis operator occurs to the right of a template or function call argument, it unpacks the parameter packs into separate arguments, like the args... in the body of my_printf below. In practice, the use of an ellipsis operator in the code causes the whole expression that precedes the ellipsis to be repeated for every subsequent argument unpacked from the argument pack, with the expressions separated by commas.
The use of variadic templates is often recursive. The variadic parameters themselves are not readily available to the implementation of a function or class. Therefore, the typical mechanism for defining something like a C++11 variadic printf replacement would be as follows:
#include <iostream>
#include <stdexcept>

// base case
void my_printf(const char *s)
{
while (*s)
{
if (*s == '%')
{
if (*(s + 1) == '%')
++s;
else
throw std::runtime_error("invalid format string: missing arguments");
}
std::cout << *s++;
}
}
// recursive
template<typename T, typename... Args>
void my_printf(const char *s, T value, Args... args)
{
while (*s)
{
if (*s == '%')
{
if (*(s + 1) != '%')
{
// pretend to parse the format: only works on 2-character format strings ( %d, %f, etc ); fails with %5.4f
s += 2;
// print the value
std::cout << value;
// called even when *s is 0 but does nothing in that case (and ignores extra arguments)
my_printf(s, args...);
return;
}
++s;
}
std::cout << *s++;
}
}
This is a recursive template. Notice that the variadic template version of my_printf calls itself, or (in the event that args... is empty) calls the base case.
There is no simple mechanism to iterate over the values of the variadic template. However, there are several ways to translate the argument pack into a single argument that can be evaluated separately for each parameter. Usually this will rely on function overloading, or — if the function can simply pick one argument at a time — using a dumb expansion marker:
template<typename... Args> inline void pass(Args&&...) {}
which can be used as follows:
template<typename... Args> inline void expand(Args&&... args)
{
pass(some_function(args)...);
}
expand(42, "answer", true);
which will expand to something like:
pass(some_function(arg1), some_function(arg2), some_function(arg3) /* etc... */ );
The use of this "pass" function is necessary, since the expansion of the argument pack proceeds by separating the function call arguments by commas, which are not equivalent to the comma operator. Therefore, will never work. Moreover, the solution above will only work when the return type of is not . Furthermore, the calls will be executed in an unspecified order, because the order of evaluation of function arguments is undefined. To avoid the unspecified order, brace-enclosed initializer lists can be used, which guarantee strict left-to-right order of evaluation. An initializer list requires a non- return type, but the comma operator can be used to yield for each expansion element.
struct pass
{
template<typename ...T> pass(T...) {}
};
pass{(some_function(args), 1)...};
Instead of executing a function, a lambda expression may be specified and executed in place, which allows executing arbitrary sequences of statements in-place.
pass{([&](){ std::cout << args << std::endl; }(), 1)...};
However, in this particular example, a lambda function is not necessary. A more ordinary expression can be used instead:
pass{(std::cout << args << std::endl, 1)...};
In C++17, these can be rewritten using fold expressions on the comma operator:
([&](){
std::cout << args << std::endl;
}(), ...);
((std::cout << args << std::endl), ...);
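As a further hedged illustration of C++17 fold expressions, not taken from the original example, an argument pack can also be folded over an ordinary binary operator such as +; the trailing 0 handles the empty pack:
template<typename... Args>
constexpr auto sum(Args... args)
{
    return (args + ... + 0); // binary right fold over operator+
}
static_assert(sum(1, 2, 3) == 6);
static_assert(sum() == 0);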
Another way is to use overloading with "termination versions" of functions. This is more universal, but requires a bit more code and more effort to create. One function receives one argument of some type and the argument pack, whereas the other receives neither. (If both had the same list of initial parameters, the call would be ambiguous — a variadic parameter pack alone cannot disambiguate a call.) For example:
void func() {} // termination version
template<typename Arg1, typename... Args>
void func(const Arg1& arg1, const Args&... args)
{
process( arg1 );
func(args...); // note: arg1 does not appear here!
}
If the call contains at least one argument, it will redirect to the second version; a parameter pack can be empty, in which case the call will simply redirect to the termination version, which will do nothing.
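A minimal sketch of how this idiom might be used; process is a hypothetical helper that is not part of the original text, and it must be declared before func so that the unqualified call process(arg1) can find it:
#include <iostream>
// hypothetical helper: print one argument per line
template<typename T>
void process(const T& value)
{
    std::cout << value << '\n';
}
// func();              // termination version: does nothing
// func(1, "two", 3.5); // prints each argument on its own line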
Variadic templates can also be used in an exception specification, a base class list, or the initialization list of a constructor. For example, a class can specify the following:
template <typename... BaseClasses>
class ClassName : public BaseClasses...
{
public:
ClassName (BaseClasses&&... base_classes)
: BaseClasses(base_classes)...
{}
};
The unpack operator will replicate the types for the base classes of ClassName, such that this class will be derived from each of the types passed in. Also, the constructor must take a reference to each base class, so as to initialize the base classes of ClassName.
With regard to function templates, the variadic parameters can be forwarded. When combined with universal references (see above), this allows for perfect forwarding:
template<typename TypeToConstruct>
struct SharedPtrAllocator
{
template<typename ...Args>
std::shared_ptr<TypeToConstruct> construct_with_shared_ptr(Args&&... params)
{
return std::shared_ptr<TypeToConstruct>(new TypeToConstruct(std::forward<Args>(params)...));
}
};
This unpacks the argument list into the constructor of TypeToConstruct. The std::forward<Args>(params)... syntax perfectly forwards arguments as their proper types, even with regard to rvalue-ness, to the constructor. The unpack operator will propagate the forwarding syntax to each parameter. This particular factory function automatically wraps the allocated memory in a std::shared_ptr for a degree of safety with regard to memory leaks.
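A hypothetical usage sketch (Widget and its constructor arguments are illustrative and not part of the original example):
#include <memory>
#include <string>
struct Widget
{
    Widget(int id, std::string name) : id_(id), name_(std::move(name)) {}
    int id_;
    std::string name_;
};
void example()
{
    SharedPtrAllocator<Widget> factory;
    auto w = factory.construct_with_shared_ptr(7, std::string("gizmo"));
    // w is a std::shared_ptr<Widget> owning Widget{7, "gizmo"}
}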
Additionally, the number of arguments in a template parameter pack can be determined as follows:
template<typename ...Args>
struct SomeStruct
{
static const int size = sizeof...(Args);
};
The expression SomeStruct<Type1, Type2>::size will yield 2, while SomeStruct<>::size will give 0.
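A minimal compile-time check of this behaviour, using hypothetical type arguments:
static_assert(SomeStruct<int, char>::size == 2, "two template arguments");
static_assert(SomeStruct<>::size == 0, "empty parameter pack");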
D
Definition
The definition of variadic templates in D is similar to their C++ counterpart:
template VariadicTemplate(Args...) { /* Body */ }
Likewise, any argument can precede the argument list:
template VariadicTemplate(T, string value, alias symbol, Args...) { /* Body */ }
Basic usage
Variadic arguments are very similar to a constant array in their usage. They can be iterated over, accessed by an index, have a length property, and can be sliced. Operations are interpreted at compile time, which means the operands cannot be runtime values (such as function parameters).
Anything which is known at compile time can be passed as a variadic argument. This makes variadic arguments similar to template alias arguments, but more powerful, as they also accept basic types (char, short, int...).
Here is an example that prints the string representation of the variadic parameters. StringOf and StringOf2 produce equal results.
static int s_int;
struct Dummy {}
void main()
{
pragma(msg, StringOf!("Hello world", uint, Dummy, 42, s_int));
pragma(msg, StringOf2!("Hello world", uint, Dummy, 42, s_int));
}
template StringOf(Args...)
{
enum StringOf = Args[0].stringof ~ StringOf!(Args[1..$]);
}
template StringOf()
{
enum StringOf = "";
}
template StringOf2(Args...)
{
static if (Args.length == 0)
enum StringOf2 = "";
else
enum StringOf2 = Args[0].stringof ~ StringOf2!(Args[1..$]);
}
Outputs:
"Hello world"uintDummy42s_int
"Hello world"uintDummy42s_int
AliasSeq
Variadic templates are often used to create a sequence of aliases, named AliasSeq.
The definition of an AliasSeq is actually very straightforward:
alias AliasSeq(Args...) = Args;
This structure allows one to manipulate a list of variadic arguments that will auto-expand. The arguments must either be symbols or values known at compile time. This includes values, types, functions or even non-specialized templates. This allows any operation you would expect:
import std.meta;
void main()
{
// Note: AliasSeq can't be modified, and an alias can't be rebound, so we'll need to define new names for our modifications.
alias numbers = AliasSeq!(1, 2, 3, 4, 5, 6);
// Slicing
alias lastHalf = numbers[$ / 2 .. $];
static assert(lastHalf == AliasSeq!(4, 5, 6));
// AliasSeq auto expansion
alias digits = AliasSeq!(0, numbers, 7, 8, 9);
static assert(digits == AliasSeq!(0, 1, 2, 3, 4, 5, 6, 7, 8, 9));
// std.meta provides templates to work with AliasSeq, such as anySatisfy, allSatisfy, staticMap, and Filter.
alias evenNumbers = Filter!(isEven, digits);
static assert(evenNumbers == AliasSeq!(0, 2, 4, 6, 8));
}
template isEven(int number)
{
enum isEven = (0 == (number % 2));
}
See also
For articles on variadic constructs other than templates
Variadic function
Variadic macro in the C preprocessor
References
External links
Working draft for the C++ language, January 16, 2012
Variadic Templates in D language
Computer programming
Articles with example C++ code
Articles with example D code | Variadic template | [
"Technology",
"Engineering"
] | 3,020 | [
"Software engineering",
"Computer programming",
"Computers"
] |
21,193,115 | https://en.wikipedia.org/wiki/Cannula%20transfer | Cannula transfer or cannulation is a set of air-free techniques used with a Schlenk line, in transferring liquid or solution samples between reaction vessels via cannulae, avoiding atmospheric contamination. While the syringes are not the same as cannulae, the techniques remain relevant.
Two methods of cannula transfer are popular: vacuum, and pressure. Both utilize differences in pressures between two vessels to push the fluid through. Often, the main difficulty encountered is slow transfer due to the high viscosity of the fluid.
Equipment
Septa
Septa (singular: septum) are rubber stoppers which seal flasks or bottles. They give an airtight seal, preventing the ingress of the atmosphere, but are able to be pierced by sharp needles or cannulae.
Cannula
Cannulae are hollow flexible tubes. Their bore is usually 16-22 gauge thick. They are commonly made of stainless steel or PTFE for their chemical resistance. Stainless steel cannulae are usually 2–3 feet long, due to their relative inflexibility, while PTFE cannulae can be much shorter. The ends are usually sharp and non-coring, allowing them to easily pierce a rubber septum, without being clogged by rubber particles. Flat tips tend to provide more complete transfer of fluids.
Stainless steel cannulae tend to collapse when cut with wire cutters. They are best cut using pipecutters of appropriate size. Other workers recommend deeply scoring the cannula with a triangular file, then sharply snapping the weakened section.
Needles and syringes
Wide-bore needles of similar gauge are often used. Unlike hypodermic-type needles sometimes used in the chemistry laboratory, these needles tend to be reused due to cost. Long needles may be flexible enough to be bent in U-shapes; shorter needles often are not.
Polypropylene syringes made for medical applications are the least expensive. The material is relatively solvent-resistant, but because such syringes are designed primarily for aqueous solutions, some degradation or leaching by the contents may occur. In particular, the black rubber seal may swell and cause the plunger to seize.
All-glass gas-tight syringes have better solvent resistance, although they tend to leak more than plastic syringes. Greases used on the barrel may leach into the contents. Glass syringes with a teflon seal at the plunger are available as well, but they are more expensive. They tend to be used for microsyringes (usually containing less than 100 μL). Luer fittings are preferred, as needles are locked in even under higher pressure, e.g. when transferring viscous liquids.
Cleaning and storage
Cannulae and needles should be quickly flushed out with an appropriate solvent to prevent undetectable corrosion damage to the stainless steel. Since they are usually used for air-sensitive work, they are commonly kept in a hot oven, to reduce the adsorption of water molecules. Before use, they are usually subjected to three vacuum-refill cycles to remove any traces of air.
Cannula transfer methods
This technique has been described with illustrated detail.
Vacuum based
The two ends of the cannula are inserted through the septa covering donating and receiving flasks. The cannula extends below the surface of the fluid to be transferred. A vacuum is applied to the receiving flask, and the low pressure relative to the donating flask causes the fluid to flow through the cannula.
Vacuum transfers risk drawing air into the system, spoiling the air-free environment. Loss of the fluid by evaporation is another problem, although less so where the fluid is a neat liquid, than a solution of known concentration.
Positive pressure
The receiving flask is connected to its own gas bubbler, while the donating flask is connected to a source of inert gas. By increasing the inert gas pressure, the pressure within the donating flask is raised higher than the receiving flask, and the fluid is forced through the cannula.
Pressure transfers can be slow. Inert gas lines are usually vented out of a gas bubbler placed in-line to prevent overpressure. The vents need to be isolated by capping the bubbler outlet, or stopping the egress of inert gas with a stopcock or pinch clamp, to ensure sufficient pressure to complete the transfer. The use of a mercury bubbler instead of one filled with oil used to be popular, but is out of favor due to the difficulty in dealing with mercury spills.
Syphoning
By carefully filling the cannula fully with either above techniques, then allowing the pressures within the vessels to equalize, a syphon may be set up. This arrangement allows the slow addition of a fluid to a reaction vessel; the rate of addition may be controlled by adjusting the relative height of the donor vessel.
Handling pyrophoric material
While handling pyrophoric material (e.g. tert-butyllithium and trimethylaluminum), traces of the compound at the tip of the needle or cannula may ignite, and cause a clog. Some workers prefer to contain the tip of the needle or cannula in a short glass tube flushed with an inert gas, and sealed via two septa.
Instead of exposing the needle tip to the air, it is withdrawn into the inerted tube. Where desired, it may be inserted into a flask via two septa (one on the tube, one on the flask). Used this way, needle tip fires are eliminated, reducing the obvious hazards. Also, there is a reduced tendency for the needle tip to clog due to the reaction of traces of the reagent with air to give salts.
Filtration
Filtration is most easily accomplished using a syringe filter. PTFE filters tend to be most chemically resistant; nylon filters are less so.
Using a cannula, a filter stick may be used. A filter stick is a short length of glass tubing sealed on one end with a septum, and sealed on the other with filter paper, or a sintered glass frit.
For larger volumes, it may be preferable to connect the donor and receiving flasks via ground glass joints to a sintered glass filter tube.
Gallery
Air-sensitive cannulas:
1: Pressure in (gas in) 2: Pressure out (oil bubbler orange) 3: Higher flask with transfer liquid (yellow) to transfer 4: Lower receiving flask/transferred liquid (yellow)
5: Liquid transfer cannula 6: Septum (orange) on transfer flask 7: Septum (orange) on receiving flask 8: Pressure-control regulator/stopcock
9: Tubing/ gas-line (not shown for clarity, arrows show connectivity) 10: Gas cannula 11: 2-way syringe stopcock 12: Gas-tight syringe
13: Gas/pressure removed from flask 4 14: Gas/pressure added to flask 3
O = Open stopcock; X = Closed stopcock; black-arrow = Gas flow direction, orange arrow = Liquid flow direction
References
Further reading
Air-free techniques
Laboratory techniques | Cannula transfer | [
"Chemistry",
"Engineering"
] | 1,509 | [
"Vacuum systems",
"Air-free techniques",
"nan"
] |
6,592,812 | https://en.wikipedia.org/wiki/Thermal%20contact%20conductance | In physics, thermal contact conductance is the study of heat conduction between solid or liquid bodies in thermal contact. The thermal contact conductance coefficient, hc, is a property indicating the thermal conductivity, or ability to conduct heat, between two bodies in contact. The inverse of this property is termed thermal contact resistance.
Definition
When two solid bodies come in contact, such as A and B in Figure 1, heat flows from the hotter body to the colder body. From experience, the temperature profile along the two bodies varies, approximately, as shown in the figure. A temperature drop is observed at the interface between the two surfaces in contact. This phenomenon is said to be a result of a thermal contact resistance existing between the contacting surfaces. Thermal contact resistance is defined as the ratio between this temperature drop and the average heat flow across the interface.
According to Fourier's law, the heat flow between the bodies is found by the relation:
q = −k·A·(dT/dx),
where q is the heat flow, k is the thermal conductivity, A is the cross sectional area and dT/dx is the temperature gradient in the direction of flow.
From considerations of conservation of energy, the heat flow between the two bodies in contact, bodies A and B, is found as:
q = (T1 − T3) / ( ΔxA/(kA·A) + 1/(hc·A) + ΔxB/(kB·A) ),
where T1 and T3 are the temperatures at the outer surfaces of the two bodies, ΔxA and ΔxB are their thicknesses, and hc is the contact conductance coefficient at the interface.
One may observe that the heat flow is directly related to the thermal conductivities of the bodies in contact, kA and kB, the contact area A, and the thermal contact resistance, 1/hc, which, as previously noted, is the inverse of the thermal conductance coefficient, hc.
Importance
Most experimentally determined values of the thermal contact resistance fall between 0.000005 and 0.0005 m²·K/W (the corresponding range of thermal contact conductance is 200,000 to 2,000 W/(m²·K)). To know whether the thermal contact resistance is significant or not, magnitudes of the thermal resistances of the layers are compared with typical values of thermal contact resistance. Thermal contact resistance is significant and may dominate for good heat conductors such as metals but can be neglected for poor heat conductors such as insulators.
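As an illustrative order-of-magnitude comparison (the numbers are rounded textbook values, not taken from the original text): a 1 cm thick aluminium plate (k ≈ 237 W/(m·K)) has a conductive resistance per unit area of 0.01/237 ≈ 4×10⁻⁵ m²·K/W, comparable to typical contact resistances, so the interface can dominate the overall temperature drop. A 1 cm layer of glass-wool insulation (k ≈ 0.04 W/(m·K)) has a resistance of 0.01/0.04 = 0.25 m²·K/W, several hundred to many thousand times larger than typical contact resistances, which can then be neglected.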
Thermal contact conductance is an important factor in a variety of applications, largely because many physical systems contain a mechanical combination of two materials. Some of the fields where contact conductance is of importance are:
Electronics
Electronic packaging
Heat sinks
Brackets
Industry
Nuclear reactor cooling
Gas turbine cooling
Internal combustion engines
Heat exchangers
Thermal insulation
Press hardening of automotive steels
Flight
Hypersonic flight vehicles
Thermal supervision for space vehicles
Residential/building science
Performance of building envelopes
Factors influencing contact conductance
Thermal contact conductance is a complicated phenomenon, influenced by many factors. Experience shows that the most important ones are as follows:
Contact pressure
For thermal transport between two contacting bodies, such as particles in a granular medium, the contact pressure, and the true contact area that arises from it, is the factor with the most influence on overall contact conductance. Governed by the interface's normal contact stiffness, the true contact area increases as the contact pressure grows, and the contact conductance grows with it (the contact resistance becomes smaller).
Since the contact pressure is the most important factor, most studies, correlations and mathematical models for measurement of contact conductance are done as a function of this factor.
The thermal contact resistance of certain sandwich kinds of materials that are manufactured by rolling under high temperatures may sometimes be ignored because the decrease in thermal conductivity between them is negligible.
Interstitial materials
No truly smooth surfaces really exist, and surface imperfections are visible under a microscope. As a result, when two bodies are pressed together, contact is only performed in a finite number of points, separated by relatively large gaps, as can be shown in Fig. 2. Since the actual contact area is reduced, another resistance for heat flow exists. The gases/fluids filling these gaps may largely influence the total heat flow across the interface. The thermal conductivity of the interstitial material and its pressure, examined through reference to the Knudsen number, are the two properties governing its influence on contact conductance, and thermal transport in heterogeneous materials in general.
In the absence of interstitial materials, as in a vacuum, the contact resistance will be much larger, since flow through the intimate contact points is dominant.
Surface roughness, waviness and flatness
One can characterise a surface that has undergone certain finishing operations by three main properties of: roughness, waviness, and fractal dimension. Among these, roughness and fractality are of most importance, with roughness often indicated in terms of a rms value, and surface fractality denoted generally by Df. The effect of surface structures on thermal conductivity at interfaces is analogous to the concept of electrical contact resistance, also known as ECR, involving contact patch restricted transport of phonons rather than electrons.
Surface deformations
When the two bodies come in contact, surface deformation may occur on both bodies. This deformation may either be plastic or elastic, depending on the material properties and the contact pressure. When a surface undergoes plastic deformation, contact resistance is lowered, since the deformation causes the actual contact area to increase.
Surface cleanliness
The presence of dust particles, acids, etc., can also influence the contact conductance.
Measurement of thermal contact conductance
Going back to Formula 2, calculation of the thermal contact conductance may prove difficult, even impossible, due to the difficulty in measuring the contact area (a product of surface characteristics, as explained earlier). Because of this, contact conductance/resistance is usually found experimentally, using a standard apparatus.
The results of such experiments are usually published in the engineering literature, in journals such as the Journal of Heat Transfer, the International Journal of Heat and Mass Transfer, etc. Unfortunately, a centralized database of contact conductance coefficients does not exist, a situation which sometimes causes companies to use outdated, irrelevant data, or to neglect contact conductance altogether.
CoCoE (Contact Conductance Estimator), a project founded to solve this problem and create a centralized database of contact conductance data and a computer program that uses it, was started in 2006.
Thermal boundary conductance
While a finite thermal contact conductance is due to voids at the interface, surface waviness, and surface roughness, etc., a finite conductance exists even at near ideal interfaces as well. This conductance, known as thermal boundary conductance, is due to the differences in electronic and vibrational properties between the contacting materials. This conductance is generally much higher than thermal contact conductance, but becomes important in nanoscale material systems.
See also
Heat transfer
References
External links
Project CoCoE - Free software to estimate TCC
Heat conduction
Thermodynamics
Physical quantities
Heat transfer | Thermal contact conductance | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,347 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Physical quantities",
"Quantity",
"Thermodynamics",
"Heat conduction",
"Physical properties",
"Dynamical systems"
] |
6,593,472 | https://en.wikipedia.org/wiki/Structure%20theorem%20for%20Gaussian%20measures | In mathematics, the structure theorem for Gaussian measures shows that the abstract Wiener space construction is essentially the only way to obtain a strictly positive Gaussian measure on a separable Banach space. It was proved in the 1970s by Kallianpur–Sato–Stefan and Dudley–Feldman–le Cam.
There is an earlier result due to H. Satô (1969), which proves that "any Gaussian measure on a separable Banach space is an abstract Wiener measure in the sense of L. Gross". The result by Dudley et al. generalizes this result to the setting of Gaussian measures on a general topological vector space.
Statement of the theorem
Let γ be a strictly positive Gaussian measure on a separable Banach space (E, || ||). Then there exists a separable Hilbert space (H, 〈 , 〉) and a map i : H → E such that i : H → E is an abstract Wiener space with γ = i∗(γH), where γH is the canonical Gaussian cylinder set measure on H.
References
Banach spaces
Probability theorems
Theorems in measure theory | Structure theorem for Gaussian measures | [
"Mathematics"
] | 236 | [
"Theorems in mathematical analysis",
"Theorems in measure theory",
"Theorems in probability theory",
"Mathematical problems",
"Mathematical theorems"
] |
6,593,900 | https://en.wikipedia.org/wiki/Distributed%20temperature%20sensing | Distributed temperature sensing systems (DTS) are optoelectronic devices which measure temperatures by means of optical fibres functioning as linear sensors. Temperatures are recorded along the optical sensor cable, thus not at points, but as a continuous profile. A high accuracy of temperature determination is achieved over great distances. Typically the DTS systems can locate the temperature to a spatial resolution of 1 m with accuracy to within ±1 °C at a resolution of 0.01 °C. Measurement distances of greater than 30 km can be monitored and some specialised systems can provide even tighter spatial resolutions. Thermal changes along the optical fibre cause a local variation in the refractive index, which in turn leads to the inelastic scattering of the light propagating through it. Heat is held in the form of molecular or lattice vibrations in the material. Molecular vibrations at high frequencies (10 THz) are responsible for Raman scattering. Low frequency vibrations (10–30 GHz) cause Brillouin scattering. Energy is exchanged between the light travelling through the fibre and the material itself and cause a frequency shift in the incident light. This frequency shift can then be used to measure temperature changes along the fibre.
Measuring principle—Raman effect
Physical measurement dimensions, such as temperature or pressure and tensile forces, can affect glass fibres and locally change the characteristics of light transmission in the fibre. As a result of the damping of the light in the quartz glass fibres through scattering, the location of an external physical effect can be determined so that the optical fibre can be employed as a linear sensor.
Optical fibres are made from doped quartz glass. Quartz glass is a form of silicon dioxide (SiO2) with amorphous solid structure. Thermal effects induce lattice oscillations within the solid. When light falls onto these thermally excited molecular oscillations, an interaction occurs between the light particles (photons) and the electrons of the molecule. Light scattering, also known as Raman scattering, occurs in the optical fibre. Unlike incident light, this scattered light undergoes a spectral shift by an amount equivalent to the resonance frequency of the lattice oscillation.
The light scattered back from the fibre optic therefore contains three different spectral shares:
the Rayleigh scattering with the wavelength of the laser source used,
the Stokes line components from photons shifted to longer wavelength (lower frequency), and
the anti-Stokes line components with photons shifted to shorter wavelength (higher frequency) than the Rayleigh scattering.
The intensity of the so-called anti-Stokes band is temperature-dependent, while the so-called Stokes band is practically independent of temperature. The local temperature of the optical fibre is derived from the ratio of the anti-Stokes and Stokes light intensities.
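As a hedged numerical sketch, not taken from the original text: once the anti-Stokes/Stokes intensity ratio has been measured, the temperature can be estimated relative to a calibration point, assuming the simple single-exponential Raman model and a Raman shift of roughly 440 cm⁻¹ for silica; real instruments additionally correct for the differential attenuation of the two bands.
#include <cmath>
#include <iostream>
// Estimate temperature (K) from the measured anti-Stokes/Stokes ratio,
// given a reference ratio measured at a known temperature.
double dts_temperature(double ratio, double ref_ratio, double ref_temp_kelvin)
{
    const double k_B = 1.380649e-23;            // Boltzmann constant, J/K
    const double h = 6.62607015e-34;            // Planck constant, J*s
    const double c = 2.99792458e10;             // speed of light, cm/s
    const double raman_shift = 440.0;           // assumed Raman shift of silica, cm^-1
    const double delta_E = h * c * raman_shift; // energy separation of the bands, J
    const double inv_T = 1.0 / ref_temp_kelvin
                       - (k_B / delta_E) * std::log(ratio / ref_ratio);
    return 1.0 / inv_T;
}
int main()
{
    // hypothetical values: reference section at 293.15 K, measured ratio 5% higher
    std::cout << dts_temperature(1.05 * 0.8, 0.8, 293.15) << " K\n"; // about 300 K
}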
Measuring principle—OTDR and OFDR technology
There are two basic principles of measurement for distributed sensing technology, OTDR (optical time-domain reflectometry) and OFDR (optical frequency-domain reflectometry). For distributed temperature sensing, a code-correlation technology is often employed which carries elements of both principles.
OTDR was developed more than 20 years ago and has become the industry standard for telecom loss measurements; it detects the Rayleigh backscattering signals, which are very dominant compared with the Raman signal. The principle for OTDR is quite simple and is very similar to the time-of-flight measurement used for radar. Essentially a narrow laser pulse generated either by semiconductor or solid state lasers is sent into the fibre and the backscattered light is analysed. From the time it takes the backscattered light to return to the detection unit it is possible to determine the location of the temperature event.
Alternative DTS evaluation units deploy the method of Optical Frequency Domain Reflectometry (OFDR). The OFDR system provides information on the local characteristic only when the backscatter signal detected during the entire measurement time is measured as a function of frequency in a complex fashion, and then subjected to Fourier transformation. The essential principles of OFDR technology are the quasi continuous wave mode employed by the laser and the narrow-band detection of the optical backscatter signal. This is offset by the technically difficult measurement of the Raman scattered light and rather complex signal processing, due to the FFT calculation with higher linearity requirements for the electronic components.
Code Correlation DTS sends on/off sequences of limited length into the fiber. The codes are chosen to have suitable properties, e.g. binary Golay code. In contrast to OTDR technology, the optical energy is spread over a code rather than packed into a single pulse. Thus a light source with lower peak power compared to OTDR technology can be used, e.g. long-life compact semiconductor lasers. The detected backscatter needs to be transformed back into a spatial profile (similar to OFDR technology), e.g. by cross-correlation. In contrast to OFDR technology, the emission is finite (for example 128 bits), which prevents weak backscatter from distant fibre sections being swamped by strong backscatter from nearby sections, improving the shot noise and the signal-to-noise ratio.
Using these techniques it is possible to analyse distances of greater than 30 km from one system and to measure temperature resolutions of less than 0.01°C.
Construction of sensing cable and system integration
The temperature measuring system consists of a controller (laser source, pulse generator for OTDR or code generator for Code Correlation or modulator and HF mixer for OFDR, optical module, receiver and micro-processor unit) and a quartz glass fibre as line-shaped temperature sensor. The fibre optic cable (can be 70 km in length) is passive in nature and has no individual sensing points and therefore can be manufactured based on standard telecoms fibres. This offers excellent economies of scale. Because the system designer/integrator does not have to worry about the precise location of each sensing point the cost for designing and installing a sensing system based on distributed fibre optic sensors is greatly reduced from that of traditional sensors. Additionally, because the sensing cable has no moving parts and design lives of >30 years, the maintenance and operation costs are also considerably less than for conventional sensors. Additional benefits of fibre optic sensing technology are that it is immune to electromagnetic interference, vibration and is safe for use in hazardous zones (the laser power falls below the levels that can cause ignition), thus making these sensors ideal for use in industrial sensing applications.
With regards to the construction of the sensing cable, although it is based on standard fibre optics, care must be taken in the design of the individual sensing cable to ensure that adequate protection is provided for the fibre. This must take into account operating temperature (standard cables operate to 85 °C, but it is possible to measure up to 700 °C with the correct design), gaseous environment (hydrogen can cause deterioration of the measurement through "hydrogen darkening", i.e. attenuation of the silica glass compounds) and mechanical protection.
Most of the available DTS systems have flexible system architectures and are relatively simple to integrate into industrial control systems such as SCADA. In the oil and gas industry an XML based file standard (WITSML) has been developed for transfer of data from DTS instruments. The standard is maintained by Energistics.
Laser safety and operation of system
When operating a system based on optical measurements such as optical DTS, laser safety requirements need to be considered for permanent installations. Many systems use low power laser design, e.g. with classification as laser safety class 1M, which can be applied by anyone (no approved laser safety officers required). Some systems are based on higher power lasers of a 3B rating, which although safe for use by approved laser safety officers, may not be suitable for permanent installations.
The advantage of purely passive optical sensor technology is the lack of electric or electromagnetic interaction. Some DTS systems on the market use a special low power design and are inherently safe in explosive environments, e.g. certified to ATEX directive Zone 0.
For use in fire detection application, regulations usually require certified systems according to relevant standards, such as EN 54-5 or EN 54-22 (Europe), UL521 or FM (USA), cUL521 (Canada) and/or other national or local standards.
For temperature estimation
Temperature distributions can be used to develop models based on the proper orthogonal decomposition method or principal component analysis. This allows the temperature distribution to be reconstructed by measuring at only a few spatial locations.
Applications
Distributed temperature sensing can be deployed successfully in multiple industrial segments:
Oil and gas production—permanent downhole monitoring, coil tubing optical enabled deployed intervention systems, slickline optical cable deployed intervention systems.
Power cable and transmission line monitoring (ampacity optimisation)
Fire detection in tunnels, industrial conveyor belts and special hazard buildings
Industrial induction furnace surveillance
Integrity of liquid natural gas (LNG) carriers and terminals
Leakage detection at dikes and dams
Temperature monitoring in plant and process engineering, including transmission pipelines
Storage tanks and vessels
More recently, DTS has been applied for environmental monitoring as well:
Stream temperature
Groundwater source detection and sediment scouring and deposition
Temperature profiles in a mine shaft and over lakes and glaciers
Deep rainforest ambient temperature at various foliage densities
Temperature profiles in an underground mine, Australia
Temperature profiles in ground loop heat exchangers (used for ground coupled heating and cooling systems)
See also
Distributed acoustic sensing
Fiber Bragg grating
Fiber optic sensor
Time-domain reflectometer
Well logging
WITSML
References
Sensors
Fiber optics
Measuring instruments
Fire detection and alarm
Petroleum production
Electrical components
Power cables | Distributed temperature sensing | [
"Technology",
"Engineering"
] | 1,924 | [
"Electrical components",
"Measuring instruments",
"Electrical engineering",
"Sensors",
"Components"
] |
6,594,053 | https://en.wikipedia.org/wiki/Cover-coding | Cover-coding is a technique for obscuring the data that is transmitted over an insecure link, to reduce the risks of snooping. An example of cover-coding would be for the sender to perform a bitwise XOR (exclusive OR) of the original data with a password or random number which is known to both sender and receiver. The resulting cover-coded data is then transmitted from sender to the receiver, who uncovers the original data by performing a further bitwise XOR (exclusive OR) operation on the received data using the same password or random number.
ISO 18000-6C (EPC Class 1 Generation 2) RFID tags protect some operations with a cover code.
The reader requests a random number from the tag, and the tag responds with a new random number. The reader then covers subsequent communications by applying a bitwise XOR of this number to the data it sends. Cover coding is secure if the tag signal cannot be intercepted and the random number is not re-used.
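A minimal sketch of the XOR cover-coding step (the values are illustrative; the field widths and message framing of a real air-interface protocol are not modelled):
#include <cstdint>
#include <iostream>
// XOR the data with the random number to cover it; XOR again with the same number to uncover it.
std::uint16_t cover(std::uint16_t data, std::uint16_t random_number)
{
    return data ^ random_number;
}
int main()
{
    std::uint16_t secret = 0x3A7C;                // hypothetical data to protect
    std::uint16_t rn = 0x9F21;                    // hypothetical random number from the tag
    std::uint16_t covered = cover(secret, rn);    // what is transmitted over the air
    std::uint16_t recovered = cover(covered, rn); // receiver applies the same XOR
    std::cout << std::hex << covered << " -> " << recovered << '\n'; // recovered == secret
}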
Compared to the loud transmissions from the reader, tag backscatter is much weaker and difficult, though not impossible, to intercept.
References
Cryptography | Cover-coding | [
"Mathematics",
"Engineering"
] | 245 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
6,599,910 | https://en.wikipedia.org/wiki/Gene%20delivery | Gene delivery is the process of introducing foreign genetic material, such as DNA or RNA, into host cells. Gene delivery must reach the genome of the host cell to induce gene expression. Successful gene delivery requires the foreign genetic material to remain stable within the host cell, where it can either integrate into the genome or replicate independently of it. This requires foreign DNA to be synthesized as part of a vector, which is designed to enter the desired host cell and deliver the transgene to that cell's genome. Vectors utilized as the method for gene delivery can be divided into two categories, recombinant viruses and synthetic vectors (viral and non-viral).
In complex multicellular eukaryotes (more specifically Weissmanists), if the transgene is incorporated into the host's germline cells, the resulting host cell can pass the transgene to its progeny. If the transgene is incorporated into somatic cells, the transgene will stay with the somatic cell line, and thus its host organism.
Gene delivery is a necessary step in gene therapy for the introduction or silencing of a gene to promote a therapeutic outcome in patients and also has applications in the genetic modification of crops. There are many different methods of gene delivery for various types of cells and tissues.
History
Viral based vectors emerged in the 1980s as a tool for transgene expression. In 1983, Albert Siegel described the use of viral vectors in plant transgene expression although viral manipulation via cDNA cloning was not yet available. The first virus to be used as a vaccine vector was the vaccinia virus in 1984 as a way to protect chimpanzees against hepatitis B. Non-viral gene delivery was first reported on in 1943 by Avery et al. who showed cellular phenotype change via exogenous DNA exposure.
Methods
There are a variety of methods available to deliver genes to host cells. When genes are delivered to bacteria or plants the process is called transformation and when it is used to deliver genes to animals it is called transfection. This is because transformation has a different meaning in relation to animals, indicating progression to a cancerous state. For some bacteria no external methods are needed to introduce genes as they are naturally able to take up foreign DNA. Most cells require some sort of intervention to make the cell membrane permeable to DNA and allow the DNA to be stably inserted into the host's genome.
Chemical
Chemical based methods of gene delivery can use natural or synthetic compounds to form particles that facilitate the transfer of genes into cells. These synthetic vectors have the ability to electrostatically bind DNA or RNA and compact the genetic information to accommodate larger genetic transfers. Chemical vectors usually enter cells by endocytosis and can protect genetic material from degradation.
Heat shock
One of the simplest methods involves altering the environment of the cell and then stressing it by giving it a heat shock. Typically the cells are incubated in a solution containing divalent cations (often calcium chloride) under cold conditions, before being exposed to a heat pulse. Calcium chloride partially disrupts the cell membrane, which allows the recombinant DNA to enter the host cell. It is suggested that exposing the cells to divalent cations in cold conditions may change or weaken the cell surface structure, making it more permeable to DNA. The heat-pulse is thought to create a thermal imbalance across the cell membrane, which forces the DNA to enter the cells through either cell pores or the damaged cell wall.
Calcium phosphate
Another simple method involves using calcium phosphate to bind the DNA and then exposing it to cultured cells. The solution, along with the DNA, is encapsulated by the cells and a small amount of DNA can be integrated into the genome.
Liposomes and polymers
Liposomes and polymers can be used as vectors to deliver DNA into cells. Positively charged liposomes bind with the negatively charged DNA, while polymers can be designed that interact with DNA. They form lipoplexes and polyplexes respectively, which are then up-taken by the cells. The two systems can also be combined. Polymer-based non-viral vectors uses polymers to interact with DNA and form polyplexes.
Nanoparticles
The use of engineered inorganic and organic nanoparticles is another non-viral approach for gene delivery.
Physical
Artificial gene delivery can be mediated by physical methods which uses force to introduce genetic material through the cell membrane.
Electroporation
Electroporation is a method of promoting competence. Cells are briefly shocked with an electric field of 10-20 kV/cm, which is thought to create holes in the cell membrane through which the plasmid DNA may enter. After the electric shock, the holes are rapidly closed by the cell's membrane-repair mechanisms.
Biolistics
Another method used to transform plant cells is biolistics, where particles of gold or tungsten are coated with DNA and then shot into young plant cells or plant embryos. Some genetic material enters the cells and transforms them. This method can be used on plants that are not susceptible to Agrobacterium infection and also allows transformation of plant plastids. Plant cells can also be transformed using electroporation, which uses an electric shock to make the cell membrane permeable to plasmid DNA. Due to the damage caused to the cells and DNA, the transformation efficiency of biolistics and electroporation is lower than that of agrobacterial transformation.
Microinjection
Microinjection is where DNA is injected through the cell's nuclear envelope directly into the nucleus.
Sonoporation
Sonoporation is the transient permeation of cell membranes assisted by ultrasound, typically in the presence of gas microbubbles. Sonoporation allows for the entry of genetic material into cells.
Photoporation
Photoporation is when laser pulses are used to create pores in a cell membrane to allow entry of genetic material.
Magnetofection
Magnetofection uses magnetic particles complexed with DNA and an external magnetic field concentrate nucleic acid particles into target cells.
Hydroporation
A hydrodynamic capillary effect can be used to manipulate cell permeability.
Agrobacterium
In plants the DNA is often inserted using Agrobacterium-mediated recombination, taking advantage of the Agrobacterium T-DNA sequence that allows natural insertion of genetic material into plant cells. Plant tissue is cut into small pieces and soaked in a fluid containing suspended Agrobacterium. The bacteria will attach to many of the plant cells exposed by the cuts. The bacteria use conjugation to transfer a DNA segment called T-DNA from their plasmid into the plant. The transferred DNA is piloted to the plant cell nucleus and integrated into the host plant's genomic DNA. The plasmid T-DNA is integrated semi-randomly into the genome of the host cell.
By modifying the plasmid to express the gene of interest, researchers can insert their chosen gene stably into the plant's genome. The only essential parts of the T-DNA are its two small (25 base pair) border repeats, at least one of which is needed for plant transformation. The genes to be introduced into the plant are cloned into a plant transformation vector that contains the T-DNA region of the plasmid. An alternative method is agroinfiltration.
Viral delivery
Virus mediated gene delivery utilizes the ability of a virus to inject its DNA inside a host cell and takes advantage of the virus' own ability to replicate and implement their own genetic material. Viral methods of gene delivery are more likely to induce an immune response, but they have high efficiency. Transduction is the process that describes virus-mediated insertion of DNA into the host cell. Viruses are a particularly effective form of gene delivery because the structure of the virus prevents degradation via lysosomes of the DNA it is delivering to the nucleus of the host cell. In gene therapy a gene that is intended for delivery is packaged into a replication-deficient viral particle to form a viral vector. Viruses used for gene therapy to date include retrovirus, adenovirus, adeno-associated virus and herpes simplex virus. However, there are drawbacks to using viruses to deliver genes into cells. Viruses can only deliver very small pieces of DNA into the cells, it is labor-intensive and there are risks of random insertion sites, cytopathic effects and mutagenesis.
Viral vector based gene delivery uses a viral vector to deliver genetic material to the host cell. This is done by using a virus that contains the desired gene and removing the part of the viruses genome that is infectious. Viruses are efficient at delivering genetic material to the host cell's nucleus, which is vital for replication.
RNA-based viral vectors
RNA-based viral vectors were developed because of the ability to transcribe directly from infectious RNA transcripts. RNA vectors are expressed quickly and in the targeted form, since no processing is required. Retroviral vectors, including oncoretroviral, lentiviral and human foamy virus vectors, are RNA-based viral vectors that reverse-transcribe and integrate into the host genome, permitting long-term transgene expression.
DNA-based viral vectors
DNA-based viral vectors include Adenoviridae, adeno-associated virus and herpes simplex virus.
Applications
Gene therapy
Several of the methods used to facilitate gene delivery have applications for therapeutic purposes. Gene therapy utilizes gene delivery to deliver genetic material with the goal of treating a disease or condition in the cell. Gene delivery in therapeutic settings utilizes non-immunogenic vectors capable of cell specificity that can deliver an adequate amount of transgene expression to cause the desired effect.
Advances in genomics have enabled a variety of new methods and gene targets to be identified for possible applications. DNA microarrays used in a variety of next-gen sequencing can identify thousands of genes simultaneously, with analytical software looking at gene expression patterns, and orthologous genes in model species to identify function. This has allowed a variety of possible vectors to be identified for use in gene therapy. As a method for creating a new class of vaccine, gene delivery has been utilized to generate a hybrid biosynthetic vector to deliver a possible vaccine. This vector overcomes traditional barriers to gene delivery by combining E. coli with a synthetic polymer to create a vector that maintains plasmid DNA while having an increased ability to avoid degradation by target cell lysosomes.
See also
Gene targeting
Minicircle
Plasmid
Transgene
Vector (molecular biology)
Viral vector
References
Further reading
External links
The 10th US-Japan Symposium on Drug Delivery Systems
Nature: Gene Delivery
Genetic Science Learning Center: Gene Delivery
Lateral Gene Transfer
Genome Editing
NIH: How does gene therapy work?
Applied genetics
Biotechnology | Gene delivery | [
"Chemistry",
"Biology"
] | 2,197 | [
"Genetics techniques",
"Biotechnology",
"Molecular biology techniques",
"nan",
"Gene delivery"
] |
6,601,335 | https://en.wikipedia.org/wiki/Partial%20specific%20volume | The partial specific volume expresses the variation of the extensive volume of a mixture with respect to the composition of the masses. It is the partial derivative of volume with respect to the mass of the component of interest. The total volume can be written as
V = Σi mi v̄i,
where v̄i is the partial specific volume of component i, defined as
v̄i = (∂V/∂mi), taken at constant temperature, pressure and masses of the other components.
The PSV is usually measured in millilitres (mL) per gram (g); proteins > 30 kDa can be assumed to have a partial specific volume of 0.708 mL/g. Experimental determination is possible by measuring the natural frequency of a U-shaped tube filled successively with air, buffer and protein solution.
Properties
The mass-weighted sum of the partial specific volumes of the components of a mixture or solution is the inverse of the density of the mixture, namely the specific volume of the mixture: v = 1/ρ = Σi wi v̄i, where wi is the mass fraction of component i.
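As an illustrative calculation (the composition and the value used for water are assumptions for the example): a solution that is 10% protein by mass (v̄ ≈ 0.73 mL/g) and 90% water (v̄ ≈ 1.00 mL/g) has a specific volume of about 0.1 × 0.73 + 0.9 × 1.00 ≈ 0.97 mL/g, corresponding to a density of roughly 1/0.97 ≈ 1.03 g/mL.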
See also
Partial molar property
Apparent molar property
References
Mass density | Partial specific volume | [
"Physics",
"Chemistry",
"Biology"
] | 163 | [
"Mechanical quantities",
"Physical quantities",
"Biotechnology stubs",
"Mass",
"Intensive quantities",
"Volume-specific quantities",
"Biochemistry stubs",
"Density",
"Biochemistry",
"Mass density",
"Matter"
] |
18,907,541 | https://en.wikipedia.org/wiki/Zeta%20Phoenicis | Zeta Phoenicis (ζ Phoenicis, abbreviated Zet Phe, ζ Phe) is a multiple star system in the constellation of Phoenix. It is visible to the naked eye. Based upon parallax measurements made by the Hipparcos spacecraft, it is located a few hundred light-years away.
Zeta Phoenicis A is itself an Algol-type eclipsing binary star. It consists of two B-type main sequence stars that orbit each other. The larger and brighter of the two (Zeta Phoenicis Aa) is formally named Wurren. When one star passes in front of the other, it blocks some of that star's light. As a result, the pair's apparent magnitude fluctuates between 3.9 and 4.4 with a period of 1.6697739 days (its orbital period).
The system most likely contains four stars with two other telescopic components of apparent magnitude 7.2 and 8.2 at angular separations of 0.8 and 6.4 arcseconds from the main pair. The closer (Zeta Phoenicis B) is an A-type main-sequence star with an orbital period around the main pair of about 210 years, as well as an eccentricity of about 0.35. The further (Zeta Phoenicis C) is an F-type main-sequence star with an orbital period of over 5,000 years.
Nomenclature
ζ Phoenicis (Latinised to Zeta Phoenicis) is the system's Bayer designation. The designations of the three constituents as ζ Phoenicis A, B and C, and those of A components—ζ Phoenicis Aa and Ab—derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
The system bore the traditional name Wurren in the culture of the Wardaman people of the Northern Territory of Australia, meaning child, but in this context refers to a "Little Fish", a star adjacent to Achernar (Gawalyan = porcupine or echidna) to whom little fish provides water. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Wurren for the component Zeta Phoenicis Aa on 19 November 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese star nomenclature, occasioned by the adaptation of the European southern-hemisphere constellations into the Chinese system, 水委 (Shuǐ Wěi), meaning Crooked Running Water, refers to an asterism consisting of Zeta Phoenicis, Alpha Eridani (Achernar) and Eta Phoenicis. Consequently, Zeta Phoenicis itself is known as 水委二 (Shuǐ Wěi èr, the Second Star of Crooked Running Water).
References
Der Brockhaus. Astronomie. 2006, p. 334.
Phoenicis, Zeta
Algol variables
Phoenix (constellation)
B-type main-sequence stars
005348
006882
Durchmusterung objects
0338 | Zeta Phoenicis | [
"Astronomy"
] | 652 | [
"Phoenix (constellation)",
"Constellations"
] |
18,909,129 | https://en.wikipedia.org/wiki/Stable%20vector%20bundle | In mathematics, a stable vector bundle is a (holomorphic or algebraic) vector bundle that is stable in the sense of geometric invariant theory. Any holomorphic vector bundle may be built from stable ones using Harder–Narasimhan filtration. Stable bundles were defined by David Mumford and later built upon by David Gieseker, Fedor Bogomolov, Thomas Bridgeland and many others.
Motivation
One of the motivations for analyzing stable vector bundles is their nice behavior in families. In fact, Moduli spaces of stable vector bundles can be constructed using the Quot scheme in many cases, whereas the stack of vector bundles is an Artin stack whose underlying set is a single point.
Here's an example of a family of vector bundles which degenerates poorly. If we tensor the Euler sequence of P^1 by O(−1), there is an exact sequence
0 → O(−1) → O ⊕ O → O(1) → 0
which represents a non-zero element v ∈ Ext^1(O(1), O(−1)), since the trivial exact sequence (representing the zero vector) is
0 → O(−1) → O(−1) ⊕ O(1) → O(1) → 0.
If we consider the family of vector bundles E_t given by the extensions with class t·v for t ∈ A^1, there are short exact sequences
0 → O(−1) → E_t → O(1) → 0
whose middle terms are isomorphic to O ⊕ O generically, but to O(−1) ⊕ O(1) at the origin. This kind of jumping of numerical invariants does not happen in moduli spaces of stable vector bundles.
Stable vector bundles over curves
A slope of a holomorphic vector bundle W over a nonsingular algebraic curve (or over a Riemann surface) is a rational number μ(W) = deg(W)/rank(W). A bundle W is stable if and only if
μ(V) < μ(W)
for all proper non-zero subbundles V of W,
and is semistable if
μ(V) ≤ μ(W)
for all proper non-zero subbundles V of W. Informally this says that a bundle is stable if it is "more ample" than any proper subbundle, and is unstable if it contains a "more ample" subbundle.
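For a concrete example, not taken from the original text: on the projective line, the bundle W = O(1) ⊕ O(−1) has slope μ(W) = 0, but it contains the subbundle O(1) with μ(O(1)) = 1 > 0, so W is unstable. By contrast, every line bundle is automatically stable, since it has no proper non-zero subbundles.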
If W and V are semistable vector bundles and μ(W) > μ(V), then there are no nonzero maps W → V.
Mumford proved that the moduli space of stable bundles of given rank and degree over a nonsingular curve is a quasiprojective algebraic variety. The cohomology of the moduli space of stable vector bundles over a curve was described by Harder and Narasimhan using algebraic geometry over finite fields, and by Atiyah and Bott using the Narasimhan–Seshadri approach.
Stable vector bundles in higher dimensions
If X is a smooth projective variety of dimension m and H is a hyperplane section, then a vector bundle (or a torsion-free sheaf) W is called stable (or sometimes Gieseker stable) if
χ(V(nH))/rank(V) < χ(W(nH))/rank(W) for sufficiently large n,
for all proper non-zero subbundles (or subsheaves) V of W, where χ denotes the Euler characteristic of an algebraic vector bundle and the vector bundle V(nH) means the n-th twist of V by H. W is called semistable if the above holds with < replaced by ≤.
Slope stability
For bundles on curves the stability defined by slopes and by growth of Hilbert polynomial coincide. In higher dimensions, these two notions are different and have different advantages. Gieseker stability has an interpretation in terms of geometric invariant theory, while μ-stability has better properties for tensor products, pullbacks, etc.
Let X be a smooth projective variety of dimension n, H its hyperplane section. A slope of a vector bundle (or, more generally, a torsion-free coherent sheaf) E with respect to H is a rational number defined as
μ(E) = c1(E)·H^(n−1)/rank(E),
where c1 is the first Chern class. The dependence on H is often omitted from the notation.
A torsion-free coherent sheaf E is μ-semistable if for any nonzero subsheaf F ⊆ E the slopes satisfy the inequality μ(F) ≤ μ(E). It's μ-stable if, in addition, for any nonzero subsheaf F ⊆ E of smaller rank the strict inequality μ(F) < μ(E) holds. This notion of stability may be called slope stability, μ-stability, occasionally Mumford stability or Takemoto stability.
For a vector bundle E the following chain of implications holds: E is μ-stable ⇒ E is stable ⇒ E is semistable ⇒ E is μ-semistable.
Harder-Narasimhan filtration
Let E be a vector bundle over a smooth projective curve X. Then there exists a unique filtration by subbundles
0 = E0 ⊂ E1 ⊂ ... ⊂ Ek = E
such that the associated graded components Fi := Ei+1/Ei are semistable vector bundles and the slopes decrease, μ(Fi) > μ(Fi+1). This filtration was introduced by Harder and Narasimhan and is called the Harder-Narasimhan filtration. Two vector bundles with isomorphic associated gradeds are called S-equivalent.
On higher-dimensional varieties the filtration also always exist and is unique, but the associated graded components may no longer be bundles. For Gieseker stability the inequalities between slopes should be replaced with inequalities between Hilbert polynomials.
Kobayashi–Hitchin correspondence
Narasimhan–Seshadri theorem says that stable bundles on a projective nonsingular curve are the same as those that have projectively flat unitary irreducible connections. For bundles of degree 0 projectively flat connections are flat and thus stable bundles of degree 0 correspond to irreducible unitary representations of the fundamental group.
Kobayashi and Hitchin conjectured an analogue of this in higher dimensions. It was proved for projective nonsingular surfaces by Donaldson, who showed that in this case a vector bundle is stable if and only if it has an irreducible Hermitian–Einstein connection.
Generalizations
It's possible to generalize (μ-)stability to non-smooth projective schemes and more general coherent sheaves using the Hilbert polynomial. Let X be a projective scheme, d a natural number, E a coherent sheaf on X with dim Supp(E) = d. Write the Hilbert polynomial of E as PE(m) = Σi αi(E)/(i!) m^i, the sum running over 0 ≤ i ≤ d. Define the reduced Hilbert polynomial pE := PE/αd(E).
A coherent sheaf E is semistable if the following two conditions hold:
E is pure of dimension d, i.e. all associated primes of E have dimension d;
for any proper nonzero subsheaf F ⊆ E the reduced Hilbert polynomials satisfy pF(m) ≤ pE(m) for large m.
A sheaf is called stable if the strict inequality pF(m) < pE(m) holds for large m.
Let Cohd(X) be the full subcategory of coherent sheaves on X with support of dimension ≤ d. The slope of an object F in Cohd may be defined using the coefficients of the Hilbert polynomial as μ̂(F) := αd−1(F)/αd(F) if αd(F) ≠ 0, and 0 otherwise. The dependence of μ̂ on d is usually omitted from the notation.
A coherent sheaf E with dim Supp(E) = d is called μ-semistable if the following two conditions hold:
the torsion of E is in dimension ≤ d-2;
for any nonzero subobject F ⊆ E in the quotient category Cohd(X)/Cohd-1(X) we have μ̂(F) ≤ μ̂(E).
E is μ-stable if the strict inequality μ̂(F) < μ̂(E) holds for all proper nonzero subobjects of E.
Note that Cohd is a Serre subcategory for any d, so the quotient category exists. A subobject in the quotient category in general doesn't come from a subsheaf, but for torsion-free sheaves the original definition and the general one for d = n are equivalent.
There are also other directions for generalizations, for example Bridgeland's stability conditions.
One may define stable principal bundles in analogy with stable vector bundles.
See also
Kobayashi–Hitchin correspondence
Corlette–Simpson correspondence
Quot scheme
References
Algebraic geometry | Stable vector bundle | [
"Mathematics"
] | 1,634 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
18,909,631 | https://en.wikipedia.org/wiki/Augmentation%20%28pharmacology%29 | Augmentation, in the context of the pharmacological management of psychiatry, refers to the combination of two or more drugs to achieve better treatment results. Examples include:
Prescribing an atypical antipsychotic when someone is already taking a selective serotonin reuptake inhibitor for the treatment of depression.
Prescribing estrogen for someone already being treated with antipsychotics for the management of schizophrenia.
Giving an adenosine A2A receptor antagonist on top of existing treatment for Parkinson's disease.
In pharmacology, the term is occasionally used to describe treatments that increase (augment) the concentration of some substance in the body. This might be done when someone is deficient in a hormone, enzyme, or other endogenous substance. For example:
Use of DOPA decarboxylase inhibitors (DDCIs) in addition to L-DOPA, to reduce the conversion of L-DOPA outside the brain.
Giving alpha-1 antitrypsin to someone with alpha-1 antitrypsin deficiency.
References
Clinical pharmacology | Augmentation (pharmacology) | [
"Chemistry"
] | 215 | [
"Pharmacology",
"Clinical pharmacology"
] |
2,740,708 | https://en.wikipedia.org/wiki/Strict%202-category | In category theory, a strict 2-category is a category with "morphisms between morphisms", that is, where each hom-set itself carries the structure of a category. It can be formally defined as a category enriched over Cat (the category of categories and functors, with the monoidal structure given by product of categories).
The concept of 2-category was first introduced by Charles Ehresmann in his work on enriched categories in 1965. The more general concept of bicategory (or weak 2-category), where composition of morphisms is associative only up to a 2-isomorphism, was introduced in 1968 by Jean Bénabou.
Definition
A 2-category C consists of:
A class of 0-cells (or objects) A, B, ....
For all objects A and B, a category C(A, B). The objects of this category are called 1-cells and its morphisms are called 2-cells; the composition in this category is usually written ∘ and called vertical composition or composition along a 1-cell.
For any object A there is a functor from the terminal category (with one object and one arrow) to C(A, A) that picks out the identity 1-cell idA on A and its identity 2-cell. In practice these two are often denoted simply by the same symbol, idA.
For all objects A, B and C, there is a functor ∗ : C(B, C) × C(A, B) → C(A, C), called horizontal composition or composition along a 0-cell, which is associative and admits the identity 1- and 2-cells picked out above as identities. Here, associativity means that the horizontal composite of three composable cells is independent of which of the two pairwise compositions is performed first. The composition symbol ∗ is often omitted, the horizontal composite of 2-cells α and β being written simply as βα.
The 0-cells, 1-cells, and 2-cells terminology is replaced by 0-morphisms, 1-morphisms, and 2-morphisms in some sources (see also Higher category theory).
The notion of 2-category differs from the more general notion of a bicategory in that composition of 1-cells (horizontal composition) is required to be strictly associative, whereas in a bicategory it needs only be associative up to a 2-isomorphism. The axioms of a 2-category are consequences of their definition as Cat-enriched categories:
Vertical composition is associative and unital, the units being the identity 2-cells idf on the 1-cells f.
Horizontal composition is also (strictly) associative and unital, the units being the identity 2-cells on the identity 1-cells idA.
The interchange law holds; i.e. whenever the composites are defined, composing 2-cells vertically and then horizontally gives the same result as composing them horizontally and then vertically: (β′ ∘ α′) ∗ (β ∘ α) = (β′ ∗ β) ∘ (α′ ∗ α), where ∘ denotes vertical and ∗ horizontal composition.
The interchange law follows from the fact that horizontal composition ∗ is a functor between hom categories. It can be drawn as a pasting diagram as follows:
Here the left-hand diagram denotes the vertical composition of horizontal composites, the right-hand diagram denotes the horizontal composition of vertical composites, and the diagram in the centre is the customary representation of both. The 2-cells are drawn with double arrows ⇒, the 1-cells with single arrows →, and the 0-cells with points.
Examples
The category Ord (of preordered sets) is a 2-category since preordered sets can easily be interpreted as categories.
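The following sketch (illustrative only; the preorder, maps, and function names are invented for the example) spells out a closely related point of view: Ord becomes a 2-category when, for monotone maps f, g : P → Q, one declares a unique 2-cell f ⇒ g exactly when f(x) ≤ g(x) for all x. Vertical composition is then transitivity of this pointwise order, horizontal composition is composition of maps, and the interchange law holds automatically because between any two parallel 1-cells there is at most one 2-cell.

```python
# Illustrative sketch: Ord as a (locally posetal) strict 2-category.
# 0-cells: preordered sets; 1-cells: monotone maps; 2-cells: pointwise order.
# All names here are invented for the example.

def is_monotone(f, leq_src, leq_tgt, elements):
    """Check that f preserves the preorder."""
    return all(leq_tgt(f(x), f(y))
               for x in elements for y in elements if leq_src(x, y))

def two_cell(f, g, leq_tgt, elements):
    """There is a (unique) 2-cell f => g iff f(x) <= g(x) for every x."""
    return all(leq_tgt(f(x), g(x)) for x in elements)

def horizontal(g, f):
    """Horizontal composition of 1-cells: ordinary composition of maps."""
    return lambda x: g(f(x))

# 0-cell: the preordered set P = {0, 1, 2, 3} with the usual order.
P = [0, 1, 2, 3]
leq = lambda a, b: a <= b

# Some 1-cells P -> P (monotone maps).
f  = lambda x: x               # identity
f2 = lambda x: min(x + 1, 3)   # shift, capped at 3
g  = lambda x: 0               # constant map
g2 = lambda x: x               # identity again

assert all(is_monotone(h, leq, leq, P) for h in (f, f2, g, g2))

# 2-cells: alpha : f => f2 and beta : g => g2 exist (pointwise <=).
assert two_cell(f, f2, leq, P)
assert two_cell(g, g2, leq, P)

# Horizontal composition of the 2-cells: the composite maps are again
# pointwise comparable, i.e. beta * alpha : g.f => g2.f2 exists.
assert two_cell(horizontal(g, f), horizontal(g2, f2), leq, P)

# Interchange is automatic here: the hom-categories are posets, so any two
# parallel 2-cells, when they exist, are equal.
print("Ord behaves as a strict 2-category on this example.")
```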
Category of small categories
The archetypal 2-category is the category of small categories, with natural transformations serving as 2-morphisms; typically 2-morphisms are denoted by Greek letters such as α for this reason.
The objects (0-cells) are all small categories, and for all objects A and B the category Cat(A, B) is a functor category. In this context, vertical composition is the composition of natural transformations.
Doctrines
In mathematics, a doctrine is simply a 2-category which is heuristically regarded as a system of theories. For example, algebraic theories, as invented by William Lawvere, form a doctrine, as do multi-sorted theories, operads, categories, and toposes.
The objects of the 2-category are called theories, the 1-morphisms from a theory A to a theory B are called models of A in B, and the 2-morphisms are called morphisms between models.
The distinction between a 2-category and a doctrine is really only heuristic: one does not typically consider a 2-category to be populated by theories as objects and models as morphisms. It is this vocabulary that makes the theory of doctrines worthwhile.
For example, the 2-category Cat of categories, functors, and natural transformations is a doctrine. One sees immediately that all presheaf categories are categories of models.
As another example, one may take the subcategory of Cat consisting only of categories with finite products as objects and product-preserving functors as 1-morphisms. This is the doctrine of multi-sorted algebraic theories. If one only wanted 1-sorted algebraic theories, one would restrict the objects to only those categories that are generated under products by a single object.
Doctrines were discovered by Jonathan Mock Beck.
See also
n-category
References
Footnotes
Generalised algebraic models, by Claudia Centazzo.
External links
Higher category theory | Strict 2-category | [
"Mathematics"
] | 1,063 | [
"Higher category theory",
"Mathematical structures",
"Category theory"
] |
2,741,241 | https://en.wikipedia.org/wiki/Phosphorescent%20organic%20light-emitting%20diode | Phosphorescent organic light-emitting diodes (PHOLED) are a type of organic light-emitting diode (OLED) that use the principle of phosphorescence to obtain higher internal efficiencies than fluorescent OLEDs. This technology is currently under development by many industrial and academic research groups.
Method of operation
Like all types of OLED, phosphorescent OLEDs emit light due to the electroluminescence of an organic semiconductor layer in an electric current. Electrons and holes are injected into the organic layer at the electrodes and form excitons, a bound state of the electron and hole.
Electrons and holes are both fermions with half integer spin. An exciton is formed by the coulombic attraction between the electron and the hole, and it may either be in a singlet state or a triplet state, depending on the spin states of these two bound species. Statistically, there is a 25% probability of forming a singlet state and 75% probability of forming a triplet state. Decay of the excitons results in the production of light through spontaneous emission.
In OLEDs using fluorescent organic molecules only, the decay of triplet excitons is quantum mechanically forbidden by selection rules, meaning that the lifetime of triplet excitons is long and phosphorescence is not readily observed. Hence it would be expected that in fluorescent OLEDs only the formation of singlet excitons results in the emission of useful radiation, placing a theoretical limit on the internal quantum efficiency (the percentage of excitons formed that result in emission of a photon) of 25%.
However, phosphorescent OLEDs generate light from both triplet and singlet excitons, allowing the internal quantum efficiency of such devices to reach nearly 100%.
This is commonly achieved by doping a host molecule with an organometallic complex. These contain a heavy metal atom at the centre of the molecule, for example platinum or iridium, of which the green emitting complex Ir(mppy)3 is just one of many examples. The large spin–orbit interaction experienced by the molecule due to this heavy metal atom facilitates intersystem crossing, a process which mixes the singlet and triplet character of excited states. This reduces the lifetime of the triplet state, therefore phosphorescence is readily observed.
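A back-of-the-envelope illustration of the efficiency limits discussed above (the 20% light-outcoupling figure is an assumed, typical value for a simple planar OLED stack, not a number from this article):

```python
# Spin statistics for exciton formation: an electron and a hole each carry
# spin 1/2, so the pair has 2 x 2 = 4 spin combinations, which split into
# 1 singlet state and 3 triplet states.
spin_states = 4
singlet_fraction = 1 / spin_states        # 0.25
triplet_fraction = 3 / spin_states        # 0.75

# Internal quantum efficiency (IQE) limits
iqe_fluorescent    = singlet_fraction                      # only singlets emit -> 25%
iqe_phosphorescent = singlet_fraction + triplet_fraction   # both emit -> 100%

# Assumed outcoupling efficiency for a simple planar device (illustrative).
outcoupling = 0.20
print(f"Fluorescent EQE limit   ~ {iqe_fluorescent * outcoupling:.0%}")    # ~5%
print(f"Phosphorescent EQE limit ~ {iqe_phosphorescent * outcoupling:.0%}") # ~20%
```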
Applications
Due to their potentially high level of energy efficiency, even when compared to other OLEDs, PHOLEDs are being studied for potential use in large-screen displays such as computer monitors or television screens, as well as general lighting needs. One potential use of PHOLEDs as lighting devices is to cover walls with large area PHOLED light panels. This would allow entire rooms to glow uniformly, rather than require the use of light bulbs which distribute light unequally throughout a room. The United States Department of Energy has recognized the potential for massive energy savings via the use of this technology and therefore has awarded $200,000 USD in contracts to develop PHOLED products for general lighting applications.
Challenges
One problem that currently hampers the widespread adoption of this highly energy efficient technology is that the average lifetimes of red and green PHOLEDs are often tens of thousands of hours longer than those of blue PHOLEDs. This may cause displays to become visually distorted much sooner than would be acceptable for a commercially viable device.
References
Optical diodes
Display technology
Molecular electronics | Phosphorescent organic light-emitting diode | [
"Chemistry",
"Materials_science",
"Engineering"
] | 703 | [
"Molecular physics",
"Molecular electronics",
"Electronic engineering",
"Display technology",
"Nanotechnology"
] |
2,742,758 | https://en.wikipedia.org/wiki/Ibritumomab%20tiuxetan | Ibritumomab tiuxetan, sold under the trade name Zevalin, is a monoclonal antibody radioimmunotherapy treatment for non-Hodgkin's lymphoma. The drug uses the monoclonal mouse IgG1 antibody ibritumomab in conjunction with the chelator tiuxetan, to which a radioactive isotope (either yttrium-90 or indium-111) is added. Tiuxetan is a modified version of DTPA whose carbon backbone contains an isothiocyanatobenzyl and a methyl group.
Medical use
Ibritumomab tiuxetan is used to treat relapsed or refractory, low grade or transformed B cell non-Hodgkin's lymphoma (NHL), a lymphoproliferative disorder, and previously untreated follicular NHL in adults who achieve a partial or complete response to first-line chemotherapy.
The treatment starts with an infusion of rituximab. This may be followed by an administration of indium-111 labeled ibritumomab tiuxetan (111In replaces the 90Y component) to allow the distribution of the medication to be imaged on a gamma camera, before the actual therapy is administered.
Mechanism of action
The antibody binds to the CD20 antigen found on the surface of normal and malignant B cells (but not B cell precursors), allowing radiation from the attached isotope (mostly beta emission) to kill it and some nearby cells. In addition, the antibody itself may trigger cell death via antibody-dependent cell-mediated cytotoxicity (ADCC), complement-dependent cytotoxicity (CDC), and apoptosis. Together, these actions eliminate B cells from the body, allowing a new population of healthy B cells to develop from lymphoid stem cells.
History
Developed by IDEC Pharmaceuticals, now part of Biogen Idec, ibritumomab tiuxetan was the first radioimmunotherapy drug approved by the US Food and Drug Administration (FDA) in 2002 to treat cancer. It was approved for the treatment of people with relapsed or refractory, low-grade or follicular B-cell non-Hodgkin's lymphoma (NHL), including people with rituximab refractory follicular NHL. It was given marketing authorization by the European Medicines Agency in 2004 for the treatment of adults with rituximab relapsed or refractory CD20+ follicular B-cell non-Hodgkin's lymphoma. The authorization lapsed in July 2024, after it was not marketed for more than three consecutive years.
In September 2009, ibritumomab tiuxetan received approval from the FDA for an expanded label to include previously untreated people with a chemotherapy response.
Society and culture
Economics
Ibritumomab tiuxetan is under patent protection and not available in generic form. When approved, it was the most expensive medication available given in a single dose, costing over €30,000 for the average dose. Compared with other monoclonal antibody treatments (many of which are well over $40,000 for a course of therapy), it may be considered cost effective.
References
Monoclonal antibodies for tumors
Antibody-drug conjugates
Yttrium compounds
Indium compounds
Radiopharmaceuticals | Ibritumomab tiuxetan | [
"Chemistry",
"Biology"
] | 736 | [
"Antibody-drug conjugates",
"Chemicals in medicine",
"Radiopharmaceuticals",
"Medicinal radiochemistry"
] |
2,743,476 | https://en.wikipedia.org/wiki/Atmospheric%20escape | Atmospheric escape is the loss of planetary atmospheric gases to outer space. A number of different mechanisms can be responsible for atmospheric escape; these processes can be divided into thermal escape, non-thermal (or suprathermal) escape, and impact erosion. The relative importance of each loss process depends on the planet's escape velocity, its atmosphere composition, and its distance from its star. Escape occurs when molecular kinetic energy overcomes gravitational energy; in other words, a molecule can escape when it is moving faster than the escape velocity of its planet. Categorizing the rate of atmospheric escape in exoplanets is necessary to determining whether an atmosphere persists, and so the exoplanet's habitability and likelihood of life.
Thermal escape mechanisms
Thermal escape occurs if the molecular velocity due to thermal energy is sufficiently high. Thermal escape happens at all scales, from the molecular level (Jeans escape) to bulk atmospheric outflow (hydrodynamic escape).
Jeans escape
One classical thermal escape mechanism is Jeans escape, named after British astronomer Sir James Jeans, who first described this process of atmospheric loss. In a quantity of gas, the average velocity of any one molecule is measured by the gas's temperature, but the velocities of individual molecules change as they collide with one another, gaining and losing kinetic energy. The variation in kinetic energy among the molecules is described by the Maxwell distribution. The kinetic energy (E), mass (m), and velocity (v) of a molecule are related by E = ½mv². Individual molecules in the high tail of the distribution (where a few particles have much higher speeds than the average) may reach escape velocity and leave the atmosphere, provided they can escape before undergoing another collision; this happens predominantly in the exosphere, where the mean free path is comparable in length to the pressure scale height. The number of particles able to escape depends on the molecular concentration at the exobase, which is limited by diffusion through the thermosphere.
Three factors strongly contribute to the relative importance of Jeans escape: mass of the molecule, escape velocity of the planet, and heating of the upper atmosphere by radiation from the parent star. First, heavier molecules are less likely to escape because they move slower than lighter molecules at the same temperature. This is why hydrogen escapes from an atmosphere more easily than carbon dioxide. Second, a planet with a larger mass tends to have more gravity, so the escape velocity tends to be greater, and fewer particles will gain the energy required to escape. This is why the gas giant planets still retain significant amounts of hydrogen, which escapes more readily from Earth's atmosphere. Finally, the distance a planet orbits from a star also plays a part; a close planet has a hotter atmosphere, with higher velocities and hence a greater likelihood of escape. A distant body has a cooler atmosphere, with lower velocities, and less chance of escape.
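As a rough numerical illustration of these factors (the exobase temperature of 1000 K and altitude of 500 km are assumed, representative values rather than figures from this article), one can compare the most probable thermal speed of a gas with the local escape velocity; the ratio of the squared speeds is the dimensionless quantity often called the Jeans parameter, and the escape flux falls off very steeply as it grows, so heavy species with large values are effectively retained.

```python
import math

# Physical constants (SI)
k_B = 1.380649e-23       # Boltzmann constant, J/K
G   = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.6735e-27         # mass of a hydrogen atom, kg

# Assumed, representative values for Earth's exobase (illustrative only)
T_exo   = 1000.0                     # exobase temperature, K
M_earth = 5.972e24                   # Earth mass, kg
r_exo   = 6.371e6 + 500e3            # exobase radius ~ surface + 500 km, m

def most_probable_speed(mass, T):
    """Peak of the Maxwell speed distribution, v = sqrt(2 k T / m)."""
    return math.sqrt(2 * k_B * T / mass)

v_esc = math.sqrt(2 * G * M_earth / r_exo)   # escape velocity at the exobase

for name, mass in [("H", m_H), ("O", 16 * m_H)]:
    v_th = most_probable_speed(mass, T_exo)
    jeans_lambda = (v_esc / v_th) ** 2        # = G M m / (k_B T r)
    print(f"{name}: v_th = {v_th/1e3:.1f} km/s, "
          f"v_esc = {v_esc/1e3:.1f} km/s, lambda = {jeans_lambda:.0f}")

# Typical output: hydrogen has lambda of order 10 (slow but steady escape),
# while atomic oxygen has lambda of order 100 (Jeans escape is negligible).
```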
Hydrodynamic escape
An atmosphere with high pressure and temperature can also undergo hydrodynamic escape. In this case, a large amount of thermal energy, usually through extreme ultraviolet radiation, is absorbed by the atmosphere. As molecules are heated, they expand upwards and are further accelerated until they reach escape velocity. In this process, lighter molecules can drag heavier molecules with them through collisions as a larger quantity of gas escapes. Hydrodynamic escape has been observed for exoplanets close to their host star, including the hot Jupiter HD 209458b.
Non-thermal (suprathermal) escape
Escape can also occur due to non-thermal interactions. Most of these processes occur due to photochemistry or charged particle (ion) interactions.
Photochemical escape
In the upper atmosphere, high energy ultraviolet photons can react more readily with molecules. Photodissociation can break a molecule into smaller components and provide enough energy for those components to escape. Photoionization produces ions, which can get trapped in the planet's magnetosphere or undergo dissociative recombination. In the first case, these ions may undergo escape mechanisms described below. In the second case, the ion recombines with an electron, releases energy, and can escape.
Sputtering escape
Excess kinetic energy from the solar wind can impart sufficient energy to eject atmospheric particles, similar to sputtering from a solid surface. This type of interaction is more pronounced in the absence of a planetary magnetosphere, as the electrically charged solar wind is deflected by magnetic fields, which mitigates the loss of atmosphere.
Charge exchange escape
Ions in the solar wind or magnetosphere can charge exchange with molecules in the upper atmosphere. A fast-moving ion can capture the electron from a slow atmospheric neutral, creating a fast neutral and a slow ion. The slow ion is trapped on the magnetic field lines, but the fast neutral can escape.
Polar wind escape
Atmospheric molecules can also escape from the polar regions on a planet with a magnetosphere, due to the polar wind. Near the poles of a magnetosphere, the magnetic field lines are open, allowing a pathway for ions in the atmosphere to exhaust into space. The ambipolar electric field accelerates any ions in the ionosphere, launching them along these open field lines.
Impact erosion
The impact of a large meteoroid can lead to the loss of atmosphere. If a collision is sufficiently energetic, it is possible for ejecta, including atmospheric molecules, to reach escape velocity.
In order to have a significant effect on atmospheric escape, the radius of the impacting body must be larger than the scale height. The projectile can impart momentum, and thereby facilitate escape of the atmosphere, in three main ways: (a) the meteoroid heats and accelerates the gas it encounters as it travels through the atmosphere, (b) solid ejecta from the impact crater heat atmospheric particles through drag as they are ejected, and (c) the impact creates vapor which expands away from the surface. In the first case, the heated gas can escape in a manner similar to hydrodynamic escape, albeit on a more localized scale. Most of the escape from impact erosion occurs due to the third case. The maximum atmosphere that can be ejected is above a plane tangent to the impact site.
Dominant atmospheric escape and loss processes in the Solar System
Earth
Atmospheric escape of hydrogen on Earth is due to charge exchange escape (~60–90%), Jeans escape (~10–40%), and polar wind escape (~10–15%), currently losing about 3 kg/s of hydrogen. The Earth additionally loses approximately 50 g/s of helium primarily through polar wind escape. Escape of other atmospheric constituents is much smaller. A Japanese research team in 2017 found evidence of a small number of oxygen ions on the moon that came from the Earth.
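Scaled up from the per-second figures quoted above, these rates remain tiny compared with the total mass of the atmosphere (the 5.1 × 10^18 kg value used below is a commonly quoted round figure, included only for comparison):

```python
seconds_per_year = 365.25 * 24 * 3600

h_loss_rate  = 3.0        # kg/s of hydrogen (figure quoted above)
he_loss_rate = 0.050      # kg/s of helium  (figure quoted above)

h_per_year  = h_loss_rate  * seconds_per_year      # ~9.5e7 kg, ~95,000 tonnes
he_per_year = he_loss_rate * seconds_per_year      # ~1.6e6 kg, ~1,600 tonnes

atmosphere_mass = 5.1e18   # kg, commonly quoted round value (assumption)
print(f"Hydrogen: {h_per_year:.2e} kg/yr, Helium: {he_per_year:.2e} kg/yr")
print(f"Fraction of the atmosphere lost per year: "
      f"{(h_per_year + he_per_year) / atmosphere_mass:.1e}")
```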
In about 1 billion years, the Sun will be 10% brighter than it is now, making Earth hot enough to dramatically increase the amount of water vapor in the atmosphere, where solar ultraviolet light will dissociate H2O, allowing the hydrogen to gradually escape into space until the oceans dry up.
Venus
Recent models indicate that hydrogen escape on Venus is almost entirely due to suprathermal mechanisms, primarily photochemical reactions and charge exchange with the solar wind. Oxygen escape is dominated by charge exchange and sputtering escape. Venus Express measured the effect of coronal mass ejections on the rate of atmospheric escape of Venus, and researchers found a factor of 1.9 increase in escape rate during periods of increased coronal mass ejections compared with calmer space weather.
Mars
Primordial Mars also suffered from the cumulative effects of multiple small impact erosion events, and recent observations with MAVEN suggest that 66% of the 36Ar in the Martian atmosphere has been lost over the last 4 billion years due to suprathermal escape, and the amount of CO2 lost over the same time period is around 0.5 bar or more.
The MAVEN mission has also explored the current rate of atmospheric escape of Mars. Jeans escape plays an important role in the continued escape of hydrogen on Mars, contributing to a loss rate that varies between 160 - 1800 g/s. Jeans escape of hydrogen can be significantly modulated by lower atmospheric processes, such as gravity waves, convection, and dust storms. Oxygen loss is dominated by suprathermal methods: photochemical (~1300 g/s), charge exchange (~130 g/s), and sputtering (~80 g/s) escape combine for a total loss rate of ~1500 g/s. Other heavy atoms, such as carbon and nitrogen, are primarily lost due to photochemical reactions and interactions with the solar wind.
Titan and Io
Saturn's moon Titan and Jupiter's moon Io have atmospheres and are subject to atmospheric loss processes. They have no magnetic fields of their own, but orbit planets with powerful magnetic fields, which protects a given moon from the solar wind when its orbit is within the bow shock. However, Titan spends roughly half of its orbital period outside of the bow shock, subjected to the unimpeded solar wind. The kinetic energy gained from pick-up and sputtering associated with the solar wind increases thermal escape throughout the orbit of Titan, causing neutral hydrogen to escape. The escaped hydrogen maintains an orbit following in the wake of Titan, creating a neutral hydrogen torus around Saturn. Io, in its orbit around Jupiter, encounters a plasma cloud. Interaction with the plasma cloud induces sputtering, kicking off sodium particles. The interaction produces a stationary banana-shaped charged sodium cloud along a part of the orbit of Io.
Observations of exoplanet atmospheric escape
Studies of exoplanets have measured atmospheric escape as a means of determining atmospheric composition and habitability. The most common method is Lyman-alpha line absorption. Much as exoplanets are discovered using the dimming of a distant star's brightness (transit), looking specifically at wavelengths corresponding to hydrogen absorption describes the amount of hydrogen present in a sphere around the exoplanet. This method indicates that the hot Jupiters HD209458b and HD189733b and Hot Neptune GJ436b are experiencing significant atmospheric escape.
In 2018 it was discovered with the Hubble Space Telescope that atmospheric escape can also be measured with the 1083 nm Helium triplet. This wavelength is much more accessible from ground-based high-resolution spectrographs, when compared to the ultraviolet Lyman-alpha lines. The wavelength around the helium triplet has also the advantage that it is not severely affected by interstellar absorption, which is an issue for Lyman-alpha. Helium has on the other hand the disadvantage that it requires knowledge about the hydrogen-helium ratio to model the mass-loss of the atmosphere. Helium escape was measured around many giant exoplanets, including WASP-107b, WASP-69b and HD 189733b. It has also been detected around some mini-Neptunes, such as TOI-560 b and HD 63433 c.
Other atmospheric loss mechanisms
Sequestration is not a form of escape from the planet, but a loss of molecules from the atmosphere and into the planet. It occurs on Earth when water vapor condenses to form rain or glacial ice, when carbon dioxide is sequestered in sediments or cycled through the oceans, or when rocks are oxidized (for example, by increasing the oxidation states of ferric rocks from Fe2+ to Fe3+). Gases can also be sequestered by adsorption, where fine particles in the regolith capture gas which adheres to the surface particles.
References
Further reading
Ingersoll, Andrew P. (2013). Planetary Climates. Princeton, N.J.: Princeton University Press.
Concepts in astrophysics
Atmosphere | Atmospheric escape | [
"Physics"
] | 2,387 | [
"Concepts in astrophysics",
"Astrophysics"
] |
2,744,989 | https://en.wikipedia.org/wiki/Dense%20plasma%20focus | A dense plasma focus (DPF) is a type of plasma generating system originally developed as a fusion power device starting in the early 1960s. The system demonstrated scaling laws that suggested it would not be useful in the commercial power role, and since the 1980s it has been used primarily as a fusion teaching system, and as a source of neutrons and X-rays.
The original concept was developed in 1954 by N.V. Filippov, who noticed the effect while working on early pinch machines in the USSR. A major research program on DPF was carried out in the USSR through the late 1950s, and continues to this day. A different version of the same basic concept was independently discovered in the US by J.W. Mather in the early 1960s. This version saw some development in the 1970s, and variations continue to be developed.
The basic design derives from the z-pinch concept. Both the DPF and pinch use large electrical currents run through a gas to cause it to ionize into a plasma and then pinch down on itself to increase the density and temperature of the plasma. The DPF differs largely in form; most devices use two concentric cylinders and form the pinch at the end of the central cylinder. In contrast, z-pinch systems generally use a single cylinder, sometimes a torus, and pinch the plasma into the center.
The plasma focus is similar to the high-intensity plasma gun device (HIPGD) (or just plasma gun), which ejects plasma in the form of a plasmoid, without pinching it. A comprehensive review of the dense plasma focus and its diverse applications has been made by Krishnan in 2012.
Pinch concept
Pinch-based devices are the earliest systems to be seriously developed for fusion research, starting with very small machines built in London in 1948. These normally took one of two forms; linear pinch machines are straight tubes with electrodes at both ends to apply the current into the plasma, whereas toroidal pinch machines are donut-shaped machines with large magnets wrapped around them that supply the current via magnetic induction.
In both types of machines, a large burst of current is applied to a dilute gas inside the tube. This current initially ionizes the gas into a plasma. Once the ionization is complete, which occurs in microseconds, the plasma begins to conduct a current. Due to the Lorentz force, this current creates a magnetic field that causes the plasma to "pinch" itself down into a filament, similar to a lightning bolt. This process increases the density of the plasma very rapidly, causing its temperature to increase.
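The field responsible for the compression follows from elementary magnetostatics (standard textbook relations, not taken from this article): a channel carrying a total current I is wrapped in an azimuthal magnetic field whose pressure squeezes the plasma inward,

```latex
% Azimuthal field at radius r around a current channel, and its magnetic pressure
B_\theta(r) \;=\; \frac{\mu_0 I}{2\pi r},
\qquad
p_{\mathrm{mag}} \;=\; \frac{B_\theta^{\,2}}{2\mu_0}.
```

Larger currents and smaller radii therefore give stronger inward compression, which is why the filament keeps contracting until pressure balance or instabilities intervene.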
Early devices quickly demonstrated a problem with the stability of this process. As the current began to flow in the plasma, magnetic effects known as the "sausage" and "kink" appeared that caused the plasma to become unstable and eventually hit the sides of the container. When this occurred, the hot plasma would cause atoms of the metal or glass to spall off and enter the fuel, rapidly cooling the plasma. Unless the plasma could be made stable, this loss process would make fusion impossible.
In the mid-1950s, two possible solutions appeared. In the fast-pinch concept, a linear device would undergo the pinch so quickly that the plasma as a whole would not move, instead only the outermost layer would begin to pinch, creating a shock wave that would continue the process after the current was removed. In the stabilized pinch, new magnetic fields would be added that would mix with the current's field and create a more stable configuration. In testing, neither of these systems worked, and the pinch route to fusion was largely abandoned by the early 1960s.
DPF concept
During experiments on a linear pinch machine, Filippov noticed that certain arrangements of the electrodes and tube would cause the plasma to form into new shapes. This led to the DPF concept.
In a typical DPF machine, there are two cylindrical electrodes. The inner one, often solid, is physically separated from the outer by an insulating disk at one end of the device. It is left open at the other end. The end result is something like a coffee mug with a half hot dog standing on its end in the middle of the mug.
When current is applied, it begins to arc at the path of least resistance, at the end near the insulator disk. This causes the gas in the area to rapidly ionize, and current begins to flow through it to the outer electrode. The current creates a magnetic field that begins to push the plasma down the tube towards the open end. It reaches the end in microseconds.
When it reaches the end, it continues moving for a short time, but the endpoints of the current sheet remain attached to the end of the cylinders. This causes the plasma sheet to bow out into a shape not unlike an umbrella or the cap of a mushroom.
At this point further movement stops, and the continuing current instead begins to pinch the section near the central electrode. Eventually this causes the former ring-shaped area to compress down into a vertical post extending off the end of the inner electrode. In this area the density is greatly increased.
The whole process proceeds at many times the speed of sound in the ambient gas. As the current sheath continues to move axially, the portion in contact with the anode slides across the face of the anode, axisymmetrically. When the imploding front of the shock wave coalesces onto the axis, a reflected shock front emanates from the axis until it meets the driving current sheath which then forms the axisymmetric boundary of the pinched, or focused, hot plasma column.
The dense plasma column (akin to the Z-pinch) rapidly pinches and undergoes instabilities and breaks up. The intense electromagnetic radiation and particle bursts, collectively referred to as multi-radiation, occur during the dense plasma and breakup phases. These critical phases last typically tens of nanoseconds for a small (kJ, 100 kA) focus machine to around a microsecond for a large (MJ, several MA) focus machine.
The process, including axial and radial phases, may last, for the Mather DPF machine, a few microseconds (for a small focus) to 10 microseconds for a larger focus machine. A Filippov focus machine has a very short axial phase compared to a Mather focus.
Applications
When operated using deuterium, intense bursts of X-rays and charged particles are emitted, as are nuclear fusion byproducts including neutrons. There is ongoing research that demonstrates potential applications as a soft X-ray source for next-generation microelectronics lithography, surface micromachining, pulsed X-ray and neutron source for medical and security inspection applications and materials modification, among others.
For nuclear weapons applications, dense plasma focus devices can be used as an external neutron source. Other applications include simulation of nuclear explosions (for testing of the electronic equipment) and a short and intense neutron source useful for non-contact discovery or inspection of nuclear materials (uranium, plutonium).
Characteristics
An important characteristic of the dense plasma focus is that the energy density of the focused plasma is practically a constant over the whole range of machines, from sub-kilojoule machines to megajoule machines, when these machines are tuned for optimal operation. This means that a small table-top-sized plasma focus machine produces essentially the same plasma characteristics (temperature and density) as the largest plasma focus. Of course the larger machine will produce the larger volume of focused plasma with a corresponding longer lifetime and more radiation yield.
Even the smallest plasma focus has essentially the same dynamic characteristics as larger machines, producing the same plasma characteristics and the same radiation products. This is due to the scalability of plasma phenomena.
See also plasmoid, the self-contained magnetic plasma ball that may be produced by a dense plasma focus.
Design parameters
The fact that the plasma energy density is constant throughout the range of plasma focus devices, from big to small, is related to the value of a design parameter that needs to be kept at a certain value if the plasma focus is to operate efficiently.
The critical 'speed' design parameter for neutron-producing devices is S = I/(a·p^0.5), where I is the current, a is the anode radius, and p is the gas density or pressure.
For example, for neutron-optimised operation in deuterium the value of this critical parameter, experimentally observed over a range of machines from kilojoules to hundreds of kilojoules, is: 9 kA/(mm·Torr^0.5), or 780 kA/(m·Pa^0.5), with a remarkably small deviation of 10% over such a large range of sizes of machines.
Thus if we have a peak current of 180 kA we require an anode radius of 10 mm with a deuterium fill pressure of 4 Torr. The length of the anode has then to be matched to the risetime of the capacitor current in order to allow an average axial transit speed of the current sheath of just over 50 mm/μs. Thus a capacitor risetime of 3 μs requires a matched anode length of 160 mm.
The above example of peak current of 180 kA rising in 3 μs, anode radius and length of respectively 10 and 160 mm are close to the design parameters of the UNU/ICTP PFF (United Nations University/International Centre for Theoretical Physics Plasma Fusion Facility). This small table-top device was designed as a low-cost integrated experimental system for training and transfer to initiate/strengthen experimental plasma research in developing countries.
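A small sketch of this sizing exercise (the 9 kA/(mm·Torr^0.5) drive parameter and the roughly 50 mm/μs axial sheath speed come from the text above; the function and variable names are invented for the example):

```python
DRIVE_PARAMETER = 9.0   # kA / (mm * Torr**0.5), quoted optimum for deuterium
AXIAL_SPEED     = 53.0  # mm / microsecond, "just over 50 mm/us" per the text

def fill_pressure_torr(peak_current_kA, anode_radius_mm):
    """Deuterium fill pressure that keeps I / (a * sqrt(p)) at the quoted optimum."""
    return (peak_current_kA / (DRIVE_PARAMETER * anode_radius_mm)) ** 2

def anode_length_mm(current_risetime_us):
    """Anode length matched to the capacitor risetime at the axial sheath speed."""
    return AXIAL_SPEED * current_risetime_us

# Example from the text: 180 kA peak current, 10 mm anode radius, 3 us risetime.
p = fill_pressure_torr(180.0, 10.0)   # -> 4.0 Torr
L = anode_length_mm(3.0)              # -> ~160 mm
print(f"fill pressure ~ {p:.1f} Torr, anode length ~ {L:.0f} mm")
```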
It can be noted that the square of the drive parameter is a measure of the "plasma energy density".
On the other hand, another proposed parameter is the so-called "energy density parameter" E/a^3, where E is the energy stored in the capacitor bank and a is the anode radius. For neutron-optimised operation in deuterium, the value of this parameter, experimentally observed over a range of machines from tens of joules to hundreds of kilojoules, is of the order of 10^9 to 10^11 J/m^3; for example, for a capacitor bank of 3 kJ, the anode radius is of the order of 12 mm. For the machines surveyed by Soto, this parameter ranges from 3.6×10^9 to 7.6×10^11 J/m^3. The wide range arises because it is a "storage energy density", which translates into plasma energy density with different efficiency depending on the widely differing performance of different machines. Thus, achieving the necessary plasma energy density (which is found to be nearly constant for optimized neutron production) requires widely differing initial storage energy densities.
Current research
A network of ten identical DPF machines operates in eight countries around the world. This network produces research papers on topics including machine optimization & diagnostics (soft X-rays, neutrons, electron and ion beams), applications (microlithography, micromachining, materials modification and fabrication, imaging & medical, astrophysical simulation) as well as modeling & computation. The network was organized by Sing Lee in 1986 and is coordinated by the Asian African Association for Plasma Training, AAAPT. A simulation package, the Lee Model, has been developed for this network but is applicable to all plasma focus devices. The code typically produces excellent agreement between computed and measured results, and is available for downloading as a Universal Plasma Focus Laboratory Facility. The Institute for Plasma Focus Studies IPFS was founded on 25 February 2008 to promote correct and innovative use of the Lee Model code and to encourage the application of plasma focus numerical experiments. IPFS research has already extended numerically-derived neutron scaling laws to multi-megajoule experiments. These await verification. Numerical experiments with the code have also resulted in the compilation of a global scaling law indicating that the well-known neutron saturation effect is better correlated to a scaling deterioration mechanism. This is due to the increasing dominance of the axial phase dynamic resistance as capacitor bank impedance decreases with increasing bank energy (capacitance). In principle, the resistive saturation could be overcome by operating the pulse power system at a higher voltage.
The International Centre for Dense Magnetised Plasmas (ICDMP) in Warsaw, Poland, operates several plasma focus machines for an international research and training programme. Among these machines is one with an energy capacity of 1 MJ, making it one of the largest plasma focus devices in the world.
In Argentina there is an Inter-institutional Program for Plasma Focus Research since 1996, coordinated by a National Laboratory of Dense Magnetized Plasmas (www.pladema.net) in Tandil, Buenos Aires. The Program also cooperates with the Chilean Nuclear Energy Commission, and networks the Argentine National Energy Commission, the Scientific Council of Buenos Aires, the University of Center, the University of Mar del Plata, The University of Rosario, and the Institute of Plasma Physics of the University of Buenos Aires. The program operates six Plasma Focus Devices, developing applications, in particular ultra-short tomography and substance detection by neutron pulsed interrogation. PLADEMA also contributed during the last decade with several mathematical models of Plasma Focus. The thermodynamic model was able to develop for the first time design maps combining geometrical and operational parameters, showing that there is always an optimum gun length and charging pressure which maximize the neutron emission. Currently there is a complete finite-elements code validated against numerous experiments, which can be used confidently as a design tool for Plasma Focus.
In Chile, at the Chilean Nuclear Energy Commission, plasma focus experiments have been extended to sub-kilojoule devices, and the scaling rules have been stretched down to the region of less than one joule.
Their studies have shown that it is possible to scale the plasma focus over a wide range of energies and sizes while keeping the same values of ion density, magnetic field, plasma sheath velocity, Alfvén speed, and energy per particle. Therefore, fusion reactions can be obtained even in ultraminiature devices (driven by generators of 0.1 J, for example), just as they are in bigger devices (driven by generators of 1 MJ). However, the stability of the plasma pinch depends strongly on the size and energy of the device. A rich plasma phenomenology has been observed in the table-top plasma focus devices developed at the Chilean Nuclear Energy Commission: filamentary structures, toroidal singularities, plasma bursts, and plasma jet generation. In addition, possible applications are being explored using these kinds of small plasma devices: development of portable generators as non-radioactive sources of neutrons and X-rays for field applications, pulsed radiation applied to biological studies, plasma focus as a neutron source for nuclear fusion-fission hybrid reactors, and the use of plasma focus devices as plasma accelerators for studies of materials under intense fusion-relevant pulses. The Chilean Nuclear Energy Commission also operates the SPEED-2 facility, the largest plasma focus facility in the southern hemisphere.
Since the beginning of 2009, a number of new plasma focus machines have been/are being commissioned including the INTI Plasma Focus in Malaysia, the NX3 in Singapore, the first plasma focus to be commissioned in a US university in recent times, the KSU Plasma Focus at Kansas State University which recorded its first fusion neutron emitting pinch on New Year's Eve 2009 and the IR-MPF-100 plasma focus (115kJ) in Iran.
Fusion power
Several groups proposed that fusion power based on the DPF could be economically viable, possibly even with low-neutron fuel cycles like p-B11. The feasibility of net power from p-B11 in the DPF requires that the bremsstrahlung losses be reduced by quantum mechanical effects induced by an extremely strong magnetic field "frozen into the plasma". The high magnetic field also results in a high rate of emission of cyclotron radiation, but at the densities envisioned, where the plasma frequency is larger than the cyclotron frequency, most of this power will be reabsorbed before being lost from the plasma. Another advantage claimed is the capability of direct conversion of the energy of the fusion products into electricity, with an efficiency potentially above 70%.
Lawrenceville Plasma Physics
Experiments and computer simulations to investigate the capability of DPF for fusion power are underway at Lawrenceville Plasma Physics (LPP) under the direction of Eric Lerner, who explained his "Focus Fusion" approach in a 2007 Google Tech Talk. On November 14, 2008, Lerner received funding for continued research, to test the scientific feasibility of Focus Fusion.
On October 15, 2009, the DPF device "Focus Fusion-1" achieved its first pinch. On January 28, 2011, LPP published initial results including experimental shots with considerably higher fusion yields than the historical DPF trend. In March, 2012, the company announced that it had achieved temperatures of 1.8 billion degrees, beating the old record of 1.1 billion that had survived since 1978. In 2016 the company announced that it had achieved a fusion yield of 0.25 joules. In 2017 the company reduced impurities by mass by 3x and ion numbers by 10x. Fusion yield increased by 50%. Fusion yield doubled compared to other plasma focus devices with the same 60 kJ energy input. In addition, mean ion energy increased to a record of 240 ± 20 keV for any confined fusion plasma. A deuterium-nitrogen mix and corona-discharge pre-ionization reduced the fusion yield standard deviation by 4x to about 15%.
In 2019, the team conducted a series of experiments replacing tungsten electrodes with beryllium electrodes (termed Focus Fusion 2B). After 44 shots, the electrode formed a much thinner 10 nm oxide layer with correspondingly fewer impurities and less electrode erosion than with tungsten electrodes. Fusion yield reached 0.1 joule. Yield generally increased and impurities decreased with an increasing number of shots.
History
1958: Petrov D.P., Filippov N.V., Filippova T.I., Khrabrov V.A., "Powerful pulsed gas discharge in chambers with conducting walls" (in Russian). In: Plasma Physics and Problems of Controlled Thermonuclear Reactions, USSR Academy of Sciences, 1958, vol. 4, pp. 170–181.
1958: Hannes Alfvén: Proceedings of the Second International Conference on Peaceful Uses of Atomic Energy (United Nations), 31, 3
1960: H Alfven, L Lindberg and P Mitlid, "Experiments with plasma rings" (1961) Journal of Nuclear Energy. Part C, Plasma Physics, Accelerators, Thermonuclear Research, Volume 1, Issue 3, pp. 116–120
1960: Lindberg, L., E. Witalis and C. T. Jacobsen, "Experiments with plasma rings" (1960) Nature 185:452.
1961: Hannes Alfvén: Plasma Ring Experiment in "On the Origin of Cosmic Magnetic Fields" (1961) Astrophysical Journal, vol. 133, p. 1049
1961: Lindberg, L. & Jacobsen, C., "On the Amplification of the Poloidal Magnetic Flux in a Plasma" (1961) Astrophysical Journal, vol. 133, p. 1043
1962: Filippov. N.V., et al., "Dense, High-Temperature Plasma in a Noncylindrical 2-pinch Compression" (1962) 'Nuclear Fusion Supplement'. Pt. 2, 577
1969: Buckwald, Robert Allen, "Dense Plasma Focus Formation by Disk Symmetry" (1969) Thesis, Ohio State University.
Notes
External links
Institute for Plasma Focus Studies (IPFS).
Research papers published in 2011 by IPFS staff.
The Plasma Focus-Trending into the Future
Dimensions and Lifetime of the Plasma Focus
Plasma Radiation Source Lab at the National Institute of Education in Singapore
Plasma Focus Laboratory, International Centre for Dense Magnetised Plasmas, Warsaw, Poland
Optics and Plasma Physics Group, Pontificia Universidad Católica de Chile
Paper by Leopoldo Soto (Chilean Nuclear Energy Commission, Thermonucluar Plasma Department): New trends and future perspectives on plasma focus research
Focus Fusion Society
Abdus Salam ICTP Plasma Focus Laboratory.
Numerical Simulation Package: Universal Plasma Focus Laboratory Facility at INTI-UC.
Dense Plasma Focus Network in Argentina.
Fusion Energy Site with links.
Google talk by Eric J. Lerner, President of Lawrenceville Plasma Physics and Executive Director of the Focus Fusion Society
Magnetic confinement fusion
Neutron sources
Plasma technology and applications
Soviet inventions | Dense plasma focus | [
"Physics"
] | 4,320 | [
"Plasma technology and applications",
"Plasma physics"
] |
25,560,578 | https://en.wikipedia.org/wiki/Metamaterial%20cloaking | Metamaterial cloaking is the usage of metamaterials in an invisibility cloak. This is accomplished by manipulating the paths traversed by light through a novel optical material. Metamaterials direct and control the propagation and transmission of specified parts of the light spectrum and demonstrate the potential to render an object seemingly invisible. Metamaterial cloaking, based on transformation optics, describes the process of shielding something from view by controlling electromagnetic radiation. Objects in the defined location are still present, but incident waves are guided around them without being affected by the object itself.
Electromagnetic metamaterials
Electromagnetic metamaterials respond to chosen parts of radiated light, also known as the electromagnetic spectrum, in a manner that is difficult or impossible to achieve with natural materials. In other words, these metamaterials can be further defined as artificially structured composite materials, which exhibit interaction with light usually not available in nature (electromagnetic interactions). At the same time, metamaterials have the potential to be engineered and constructed with desirable properties that fit a specific need. That need will be determined by the particular application.
The artificial structure for cloaking applications is a lattice design – a sequentially repeating network – of identical elements. Additionally, for microwave frequencies, these materials are analogous to crystals for optics. Also, a metamaterial is composed of a sequence of elements and spacings, which are much smaller than the selected wavelength of light. The selected wavelength could be radio frequency, microwave, or other radiations, now just beginning to reach into the visible frequencies. Macroscopic properties can be directly controlled by adjusting characteristics of the rudimentary elements, and their arrangement on, or throughout the material. Moreover, these metamaterials are a basis for building very small cloaking devices in anticipation of larger devices, adaptable to a broad spectrum of radiated light.
Hence, although light consists of an electric field and a magnetic field, ordinary optical materials, such as optical microscope lenses, have a strong reaction only to the electric field. The corresponding magnetic interaction is essentially nil. This results in only the most common optical effects, such as ordinary refraction with common diffraction limitations in lenses and imaging.
Since the beginning of optical sciences, centuries ago, the ability to control the light with materials has been limited to these common optical effects. Metamaterials, on the other hand, are capable of a very strong interaction, or coupling, with the magnetic component of light. Therefore, the range of response to radiated light is expanded beyond the ordinary optical limitations that are described by the sciences of physical optics and optical physics. In addition, as artificially constructed materials, both the magnetic and electric components of the radiated light can be controlled at will, in any desired fashion as it travels, or more accurately propagates, through the material. This is because a metamaterial's behavior is typically formed from individual components, and each component responds independently to a radiated spectrum of light. At this time, however, metamaterials are limited. Cloaking across a broad spectrum of frequencies has not been achieved, including the visible spectrum. Dissipation, absorption, and dispersion are also current drawbacks, but this field is still in its optimistic infancy.
Metamaterials and transformation optics
The field of transformation optics is founded on the effects produced by metamaterials.
Transformation optics has its beginnings in the conclusions of two research endeavors. They were published on May 25, 2006, in the same issue of Science, a peer-reviewed journal. The two papers present tenable theories on bending or distorting light to electromagnetically conceal an object. Both papers notably map the initial configuration of the electromagnetic fields onto a Cartesian mesh. Twisting the Cartesian mesh, in essence, transforms the coordinates of the electromagnetic fields, which in turn conceals a given object. Hence, with these two papers, transformation optics is born.
Transformation optics subscribes to the capability of bending light, or electromagnetic waves and energy, in any preferred or desired fashion, for a desired application. Maxwell's equations do not vary even though coordinates transform. Instead it is the values of the chosen parameters of the materials which "transform", or alter, during a certain time period. So, transformation optics developed from the capability to choose the parameters for a given material. Hence, since Maxwell's equations retain the same form, it is the successive values of the parameters, permittivity and permeability, which change over time. Furthermore, permittivity and permeability are in a sense responses to the electric and magnetic fields of a radiated light source respectively, among other descriptions. The precise degree of electric and magnetic response can be controlled in a metamaterial, point by point. Since so much control can be maintained over the responses of the material, this leads to an enhanced and highly flexible gradient-index material. Conventionally predetermined refractive index of ordinary materials instead become independent spatial gradients in a metamaterial, which can be controlled at will. Therefore, transformation optics is a new method for creating novel and unique optical devices.
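In practice, the chosen coordinate transformation is turned into material parameters by the standard transformation-optics prescription (stated here in its usual form; it is not spelled out in this article): if x′ = x′(x) is the spatial transformation with Jacobian matrix Λ, the permittivity and permeability tensors of the required metamaterial are

```latex
% Standard transformation-optics material prescription (textbook form, not from this article);
% Lambda is the Jacobian of the spatial map x -> x'(x)
\varepsilon' \;=\; \frac{\Lambda\,\varepsilon\,\Lambda^{\mathsf T}}{\det\Lambda},
\qquad
\mu' \;=\; \frac{\Lambda\,\mu\,\Lambda^{\mathsf T}}{\det\Lambda},
```

so that Maxwell's equations keep their form in the new coordinates while the light follows the distorted grid.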
Science of cloaking devices
The purpose of a cloaking device is to hide something, so that a defined region of space is invisibly isolated from passing electromagnetic fields (or sound waves), as with metamaterial cloaking.
Cloaking objects, or making them appear invisible with metamaterials, is roughly analogous to a magician's sleight of hand, or his tricks with mirrors. The object or subject doesn't really disappear; the vanishing is an illusion. With the same goal, researchers employ metamaterials to create directed blind spots by deflecting certain parts of the light spectrum (electromagnetic spectrum). It is the light spectrum, as the transmission medium, that determines what the human eye can see.
In other words, light is refracted or reflected determining the view, color, or illusion that is seen. The visible extent of light is seen in a chromatic spectrum such as the rainbow. However, visible light is only part of a broad spectrum, which extends beyond the sense of sight. For example, there are other parts of the light spectrum which are in common use today. The microwave spectrum is employed by radar, cell phones, and wireless Internet. The infrared spectrum is used for thermal imaging technologies, which can detect a warm body amidst a cooler night time environment, and infrared illumination is combined with specialized digital cameras for night vision. Astronomers employ the terahertz band for submillimeter observations to answer deep cosmological questions.
Furthermore, electromagnetic energy is light energy, but only a small part of it is visible light. This energy travels in waves. Shorter wavelengths, such as visible light and infrared, carry more energy per photon than longer waves, such as microwaves and radio waves. For the sciences, the light spectrum is known as the electromagnetic spectrum.
The properties of optics and light
Prisms, mirrors, and lenses have a long history of altering the diffracted visible light that surrounds all. However, the control exhibited by these ordinary materials is limited. Moreover, the one material which is common among these three types of directors of light is conventional glass. Hence, these familiar technologies are constrained by the fundamental, physical laws of optics. With metamaterials in general, and the cloaking technology in particular, it appears these barriers disintegrate with advancements in materials and technologies never before realized in the natural physical sciences. These unique materials became notable because electromagnetic radiation can be bent, reflected, or skewed in new ways. The radiated light could even be slowed or captured before transmission. In other words, new ways to focus and project light and other radiation are being developed. Furthermore, the expanded optical powers presented in the science of cloaking objects appear to be technologically beneficial across a wide spectrum of devices already in use. This means that every device with basic functions that rely on interaction with the radiated electromagnetic spectrum could technologically advance. With these beginning steps a whole new class of optics has been established.
Interest in the properties of optics and light
Interest in the properties of optics and light dates back almost 2,000 years, to Ptolemy (AD 85–165). In his work entitled Optics, he writes about the properties of light, including reflection, refraction, and color. He developed a simplified equation for refraction without trigonometric functions. About 800 years later, in AD 984, Ibn Sahl discovered a law of refraction mathematically equivalent to Snell's law. He was followed by the most notable Islamic scientist, Ibn Al-Haytham (c.965–1039), who is considered to be "one of the few most outstanding figures in optics in all times". He made significant advances in the science of physics in general, and optics in particular. He anticipated the universal laws of light articulated by seventeenth century scientists by hundreds of years.
In the seventeenth century both Willebrord Snellius and Descartes were credited with discovering the law of refraction. It was Snellius who noted that Ptolemy's equation for refraction was inexact. Consequently, these laws have been passed along, unchanged for about 400 years, like the laws of gravity.
Perfect cloak and theory
Electromagnetic radiation and matter have a symbiotic relationship. Radiation does not simply act on a material, nor is it simply acted upon by a given material. Radiation interacts with matter. Cloaking applications which employ metamaterials alter how objects interact with the electromagnetic spectrum. The guiding vision for the metamaterial cloak is a device that directs the flow of light smoothly around an object, like water flowing past a rock in a stream, without reflection, rendering the object invisible. In reality, the simple cloaking devices of the present are imperfect, and have limitations.
One challenge up to the present date has been the inability of metamaterials, and cloaking devices, to interact at frequencies, or wavelengths, within the visible light spectrum.
Challenges presented by the first cloaking device
The principle of cloaking, with a cloaking device, was first proved (demonstrated) at frequencies in the microwave radiation band on October 19, 2006. This demonstration used a small cloaking device. Its height was less than one half inch (< 13 mm) and its diameter five inches (125 mm), and it successfully diverted microwaves around itself. The object to be hidden from view, a small cylinder, was placed in the center of the device. The invisibility cloak deflected microwave beams so they flowed around the cylinder inside with only minor distortion, making it appear almost as if nothing were there at all.
Such a device typically involves surrounding the object to be cloaked with a shell which affects the passage of light near it. There was reduced reflection of electromagnetic waves (microwaves), from the object. Unlike a homogeneous natural material with its material properties the same everywhere, the cloak's material properties vary from point to point, with each point designed for specific electromagnetic interactions (inhomogeneity), and are different in different directions (anisotropy). This accomplishes a gradient in the material properties. The associated report was published in the journal Science.
Although a successful demonstration, three notable limitations can be shown. First, since its effectiveness was only in the microwave spectrum, the small object is somewhat invisible only at microwave frequencies. This means invisibility had not been achieved for the human eye, which sees only within the visible spectrum, because the wavelengths of the visible spectrum are substantially shorter than microwaves. However, this was considered the first step toward a cloaking device for visible light, although more advanced nanotechnology-related techniques would be needed due to light's short wavelengths. Second, only small objects can be made to appear as the surrounding air. In the case of the 2006 proof-of-cloaking demonstration, the object hidden from view, a copper cylinder, had to be less than five inches in diameter and less than one half inch tall. Third, cloaking can only occur over a narrow frequency band for any given demonstration. This means that a broadband cloak, which works across the electromagnetic spectrum, from radio frequencies to microwave to the visible spectrum and to x-ray, is not available at this time. This is due to the dispersive nature of present-day metamaterials. The coordinate transformation (transformation optics) requires extraordinary material parameters that are only approachable through the use of resonant elements, which are inherently narrow band and dispersive at resonance.
Usage of metamaterials
At the very beginning of the new millennium, metamaterials were established as an extraordinary new medium that expanded control capabilities over matter. Hence, metamaterials are used for cloaking applications for a few reasons. First, the parameter known as material response has a broader range. Second, the material response can be controlled at will.
Third, optical components, such as lenses, respond within a certain defined range to light. As stated earlier, the range of response has been known and studied going back to Ptolemy, eighteen hundred years ago. The range of response could not be effectively exceeded, because natural materials proved incapable of doing so. In scientific studies and research, one way to communicate the range of response is the refractive index of a given optical material. Every natural material so far allows only a positive refractive index. Metamaterials, on the other hand, are an innovation able to achieve a negative refractive index, a zero refractive index, and fractional values between zero and one. Hence, metamaterials extend the material response, among other capabilities.
However, negative refraction is not the effect that creates invisibility-cloaking. It is more accurate to say that gradations of refractive index, when combined, create invisibility-cloaking. Fourth, and finally, metamaterials demonstrate the capability to deliver chosen responses at will.
Device
Before actually building the device, theoretical studies were conducted. The following is one of two studies accepted simultaneously by a scientific journal, as well being distinguished as one of the first published theoretical works for an invisibility cloak.
Controlling electromagnetic fields
The exploitation of "light", the electromagnetic spectrum, is accomplished with common objects and materials which control and direct the electromagnetic fields. For example, a glass lens in a camera is used to produce an image, a metal cage may be used to screen sensitive equipment, and radio antennas are designed to transmit and receive daily FM broadcasts. Homogeneous materials that manipulate or modulate electromagnetic radiation, such as glass lenses, are limited in how far they can be refined to correct for aberrations. Combinations of inhomogeneous lens materials are able to employ gradient refractive indices, but the ranges tend to be limited.
Metamaterials were introduced about a decade ago, and these expand control of parts of the electromagnetic spectrum, from microwave to terahertz to infrared. Theoretically, metamaterials, as a transmission medium, will eventually expand control and direction of electromagnetic fields into the visible spectrum. Hence, a design strategy was introduced in 2006 to show that a metamaterial can be engineered with arbitrarily assigned positive or negative values of permittivity and permeability, which can also be independently varied at will. Then direct control of electromagnetic fields becomes possible, which is relevant to novel and unusual lens design, as well as a component of the scientific theory for cloaking of objects from electromagnetic detection.
Each component responds independently to a radiated electromagnetic wave as it travels through the material, resulting in electromagnetic inhomogeneity for each component. Each component has its own response to the external electric and magnetic fields of the radiated source. Since these components are smaller than the radiated wavelength it is understood that a macroscopic view includes an effective value for both permittivity and permeability. These materials obey the laws of physics, but behave differently from normal materials. Metamaterials are artificial materials engineered to provide properties which "may not be readily available in nature". These materials usually gain their properties from structure rather than composition, using the inclusion of small inhomogeneities to enact effective macroscopic behavior.
The structural units of metamaterials can be tailored in shape and size. Their composition, and their form or structure, can be finely adjusted. Inclusions can be designed, and then placed at desired locations in order to vary the function of a given material. Because the lattice constant is small, the cells are smaller than the wavelength of the radiated light.
The design strategy has at its core inhomogeneous composite metamaterials which direct, at will, conserved quantities of electromagnetism. These quantities are, specifically, the electric displacement field D, the magnetic induction field B, and the Poynting vector S. Theoretically, when regarding the conserved quantities, or fields, the metamaterial exhibits a twofold capability. First, the fields can be concentrated in a given direction. Second, they can be made to avoid or surround objects, returning without perturbation to their original path. These results are consistent with Maxwell's equations and go beyond the ray approximation of geometrical optics. Accordingly, in principle, these effects can encompass all forms of electromagnetic radiation phenomena on all length scales.
The hypothesized design strategy begins with intentionally choosing a configuration of an arbitrary number of embedded sources. These sources become localized responses of permittivity, ε, and magnetic permeability, μ. The sources are embedded in an arbitrarily selected transmission medium with dielectric and magnetic characteristics. As an electromagnetic system the medium can then be schematically represented as a grid.
Suppose the first requirement is to move a uniform electric field through space in a definite direction, avoiding an object or obstacle. Next, embed the system in an elastic medium that can be warped, twisted, pulled, or stretched as desired. The initial condition of the fields is recorded on a Cartesian mesh. As the elastic medium is distorted in one, or a combination, of the described possibilities, the same pulling and stretching process is recorded by the Cartesian mesh. The same set of contortions can now be recorded as a coordinate transformation:
a (x,y,z), b (x,y,z), c (x,y,z), d (x,y,z) ....
Hence, the permittivity ε and permeability μ are proportionally recalibrated by a common factor; less precisely, the same occurs with the refractive index. Renormalized values of permittivity and permeability are applied in the new coordinate system. For the renormalization equations see ref. #.
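The renormalization referred to above follows, in the standard transformation-optics formulation, the rule ε′ = J ε Jᵀ / det(J) (and likewise for μ), where J is the Jacobian of the coordinate transformation. The following Python sketch illustrates that rule numerically; the stretching map, the sample point, and all variable names are illustrative assumptions rather than the specific design discussed in this article.

```python
import numpy as np

def jacobian(transform, point, h=1e-6):
    """Central-difference estimate of the Jacobian of a coordinate transform at a point."""
    point = np.asarray(point, dtype=float)
    J = np.zeros((3, 3))
    for j in range(3):
        step = np.zeros(3)
        step[j] = h
        J[:, j] = (transform(point + step) - transform(point - step)) / (2 * h)
    return J

def transformed_material(eps, mu, transform, point):
    """Renormalize material tensors under a coordinate transform: eps' = J eps J^T / det J."""
    J = jacobian(transform, point)
    det = np.linalg.det(J)
    return J @ eps @ J.T / det, J @ mu @ J.T / det

# Illustrative example: a gentle stretching of space along x (not a cloak design).
stretch = lambda r: np.array([1.5 * r[0], r[1], r[2]])
eps_p, mu_p = transformed_material(np.eye(3), np.eye(3), stretch, [0.3, 0.2, 0.1])
print(np.round(eps_p, 3))   # free space becomes an anisotropic effective medium
```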
Application to cloaking devices
Given the above parameters of operation, the system, a metamaterial, can now be shown to be able to conceal an object of arbitrary size. Its function is to manipulate incoming rays, which are about to strike the object. These incoming rays are instead electromagnetically steered around the object by the metamaterial, which then returns them to their original trajectory. As part of the design it can be assumed that no radiation leaves the concealed volume of space, and no radiation can enter the space. As illustrated by the function of the metamaterial, any radiation attempting to penetrate is steered around the space or the object within the space, returning to the initial direction. It appears to any observer that the concealed volume of space is empty, even with an object present there. An arbitrary object may be hidden because it remains untouched by external radiation.
A sphere with radius R1 is chosen as the object to be hidden. The cloaking region is to be contained within the annulus R1 < r < R2. A simple transformation that achieves the desired result can be found by taking all fields in the region r < R2 and compressing them into the region R1 < r < R2. The coordinate transformation does not alter Maxwell's equations; only the values of ε and μ change, becoming functions of position within the cloaking shell.
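As a minimal numerical sketch of the compression just described, assume the simple linear radial map r′ = R1 + r(R2 − R1)/R2, which sends the whole ball r < R2 into the shell R1 < r′ < R2; the radii below are arbitrary, and the closed-form radial permittivity quoted in the comments is the value commonly given for this particular linear map, included only for illustration.

```python
import numpy as np

R1, R2 = 1.0, 2.0   # inner (hidden) and outer cloak radii, arbitrary units

def compress(r):
    """Radial map taking the region r < R2 into the shell R1 < r' < R2."""
    return R1 + r * (R2 - R1) / R2

print(compress(0.0))   # 1.0: the centre is mapped onto the inner boundary R1
print(compress(R2))    # 2.0: the outer boundary R2 is left fixed

# For this linear map, the commonly quoted anisotropic cloak parameters are
#   eps_r = mu_r = (R2/(R2-R1)) * ((r-R1)/r)**2,   eps_t = mu_t = R2/(R2-R1).
r = np.linspace(R1 + 1e-9, R2, 5)
eps_r = (R2 / (R2 - R1)) * ((r - R1) / r) ** 2
print(np.round(eps_r, 3))   # vanishes at the inner surface and grows towards the outer one
```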
Cloaking hurdles
There are issues to be dealt with to achieve invisibility cloaking. One issue, related to ray tracing, is the anisotropic effects of the material on the electromagnetic rays entering the "system". Parallel bundles of rays headed directly for the center are abruptly curved and, along with neighboring rays, are forced into tighter and tighter arcs. This is due to rapid changes in the now shifting and transforming permittivity ε and permeability μ. The second issue is that, while the selected metamaterials are capable of working within the parameters of the anisotropic effects and the continual shifting of ε and μ, the values for ε and μ cannot be very large or very small. The third issue is that the selected metamaterials are currently unable to achieve broad, frequency-spectrum capabilities. This is because the rays must curve around the "concealed" sphere and therefore have longer trajectories than rays traversing free space, or air. However, the rays must arrive around the other side of the sphere in phase with the beginning radiated light. If this is happening, then the phase velocity exceeds the velocity of light in a vacuum, which is the speed limit of the universe. (Note that this does not violate the laws of physics.) With a required absence of frequency dispersion, the group velocity would be identical with the phase velocity; but the group velocity can never exceed the velocity of light, hence the analytical parameters are effective for only one frequency.
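As a rough worked example of the phase constraint just mentioned: if a ray is idealized as taking a semicircular detour of radius R around the concealed region while the undisturbed ray crosses the diameter 2R, the detour is π/2 times longer, so the phase velocity inside the cloak would have to be roughly 1.57 times c (an effective index of about 0.64) for the two to arrive in phase. The figures below are only this idealization, not a property of any particular cloak design.

```python
from math import pi

R = 1.0                   # radius of the concealed sphere, arbitrary units
straight_path = 2 * R     # undisturbed ray crossing the diameter
detour_path = pi * R      # idealized semicircular detour around the sphere

ratio = detour_path / straight_path
print(round(ratio, 3))      # 1.571: required phase velocity in units of c
print(round(1 / ratio, 3))  # 0.637: corresponding effective refractive index (< 1)
```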
Optical conformal mapping and ray tracing in transformation media
The goal then is to create no discernible difference between a concealed volume of space and the propagation of electromagnetic waves through empty space. It would appear that achieving a perfectly concealed (100%) hole, where an object could be placed and hidden from view, is not probable. The problem is the following: in order to carry images, light propagates in a continuous range of directions. The scattering data of electromagnetic waves, after bouncing off an object or hole, is unique compared to light propagating through empty space, and is therefore easily perceived. Light propagating through empty space is consistent only with empty space. This includes microwave frequencies.
Although mathematical reasoning shows that perfect concealment is not probable because of the wave nature of light, this problem does not apply to electromagnetic rays, i.e., the domain of geometrical optics. Imperfections can be made arbitrarily and exponentially small for objects that are much larger than the wavelength of light.
Mathematically, this implies n < 1, because the rays follow the shortest path and hence in theory create a perfect concealment. In practice, a certain amount of acceptable visibility occurs, as noted above. The range of the refractive index of the dielectric (optical material) needs to span a wide spectrum to achieve concealment, with the illusion created by wave propagation across empty space. The places where n < 1 would provide the shortest path for the ray around the object without phase distortion. Artificial propagation of empty space could be reached in the microwave-to-terahertz range. In stealth technology, impedance matching could result in absorption of beamed electromagnetic waves rather than their reflection, and hence evasion of detection by radar. These general principles can also be applied to sound waves, where the index n describes the ratio of the local phase velocity of the wave to the bulk value. Hence, it would be useful to protect a space from detection by any sound source, which also implies protection from sonar. Furthermore, these general principles are applicable in diverse fields such as electrostatics, fluid mechanics, classical mechanics, and quantum chaos.
Mathematically, it can be shown that the wave propagation is indistinguishable from empty space where light rays propagate along straight lines. The medium performs an optical conformal mapping to empty space.
Microwave frequencies
The next step, then, is to actually conceal an object by controlling electromagnetic fields.
Now, the demonstrated and theoretical ability for controlled electromagnetic fields has opened a new field, transformation optics. This nomenclature is derived from coordinate transformations used to create variable pathways for the propagation of light through a material. This demonstration is based on previous theoretical prescriptions, along with the accomplishment of the prism experiment. One possible application of transformation optics and materials is electromagnetic cloaking for the purpose of rendering a volume or object undetectable to incident radiation, including radiated probing.
This demonstration, the first actually to conceal an object with electromagnetic fields, used the method of purposely designed spatial variation, an effect of embedding purposely designed electromagnetic sources in the metamaterial.
As discussed earlier, the fields produced by the metamaterial are compressed into a shell (coordinate transformations) surrounding the now concealed volume. Earlier this was supported theory; this experiment demonstrated that the effect actually occurs. Maxwell's equations keep their form under the transformational coordinates; only the permittivity tensor and permeability tensor are affected, which then become spatially variant and directionally dependent along different axes.
Before the actual demonstration, the experimental limits of the transformational fields were computationally determined, in addition to simulations, as both were used to determine the effectiveness of the cloak.
A month prior to this demonstration, the results of an experiment to spatially map the internal and external electromagnetic fields of a negative refractive metamaterial were published in September 2006. This was innovative because prior to this the microwave fields had been measured only externally. In this September experiment the permittivity and permeability of the microstructures (instead of the external macrostructure) of the metamaterial samples were measured, as well as the scattering by the two-dimensional negative index metamaterials. This gave an average effective refractive index, consistent with treating the metamaterial as homogeneous.
Employing this technique for this experiment, spatial mapping of phases and amplitudes of the microwave radiations interacting with metamaterial samples was conducted. The performance of the cloak was confirmed by comparing the measured field maps to simulations.
For this demonstration, the concealed object was a conducting cylinder at the inner radius of the cloak. As the largest possible object designed for this volume of space, it has the most substantial scattering properties. The conducting cylinder was effectively concealed in two dimensions.
Infrared frequencies
In the metamaterials literature, the term optical frequency covers a range from the far infrared, through the near infrared and the visible spectrum, to at least a portion of the ultraviolet. To date, when the literature refers to optical frequencies, these are almost always frequencies in the infrared, which lie below the visible spectrum. In 2009 a group of researchers announced cloaking at optical frequencies; in this case the cloaking frequency was centered at 1500 nm (1.5 micrometers), in the infrared.
Sonic frequencies
A laboratory metamaterial device applicable to ultrasound waves was demonstrated in January 2011. It can be applied to sound wavelengths corresponding to frequencies from 40 to 80 kHz.
The metamaterial acoustic cloak is designed to hide objects submerged in water. The metamaterial cloaking mechanism bends and twists sound waves by intentional design.
The cloaking mechanism consists of 16 concentric rings in a cylindrical configuration. Each ring has acoustic circuits. It is intentionally designed to guide sound waves in two dimensions.
Each ring has a different index of refraction. This causes sound waves to vary their speed from ring to ring. "The sound waves propagate around the outer ring, guided by the channels in the circuits, which bend the waves to wrap them around the outer layers of the cloak". It forms an array of cavities that slow the speed of the propagating sound waves. An experimental cylinder was submerged and then disappeared from sonar. Other objects of various shape and density were also hidden from the sonar. The acoustic cloak demonstrated effectiveness for frequencies of 40 kHz to 80 kHz.
In 2014 researchers created a 3D acoustic cloak from stacked plastic sheets dotted with repeating patterns of holes. The pyramidal geometry of the stack and the hole placement provide the effect.
Invisibility in diffusive light scattering media
In 2014, scientists demonstrated good cloaking performance in murky water, showing that an object shrouded in fog can disappear completely when appropriately coated with metamaterial. This is due to the random scattering of light, such as that which occurs in clouds, fog, milk, frosted glass, etc., combined with the properties of the metamaterial coating. When light is diffused, a thin coat of metamaterial around an object can make it essentially invisible under a range of lighting conditions.
Cloaking attempts
Broadband ground-plane cloak
If a transformation to quasi-orthogonal coordinates is applied to Maxwell's equations in order to conceal a perturbation on a flat conducting plane rather than a singular point, as in the first demonstration of a transformation optics-based cloak, then an object can be hidden underneath the perturbation. This is sometimes referred to as a "carpet" cloak.
As noted above, the original cloak demonstrated utilized resonant metamaterial elements to meet the effective material constraints. Utilizing a quasi-conformal transformation in this case, rather than the non-conformal original transformation, changed the required material properties. Unlike the original (singular expansion) cloak, the "carpet" cloak required less extreme material values. The quasi-conformal carpet cloak required anisotropic, inhomogeneous materials which only varied in permittivity. Moreover, the permittivity was always positive. This allowed the use of non-resonant metamaterial elements to create the cloak, significantly increasing the bandwidth.
An automated process, guided by a set of algorithms, was used to construct a metamaterial consisting of thousands of elements, each with its own geometry. Developing the algorithm allowed the manufacturing process to be automated, which resulted in fabrication of the metamaterial in nine days. The previous device used in 2006 was rudimentary in comparison, and the manufacturing process required four months in order to create the device. These differences are largely due to the different form of transformation: the original 2006 cloak transformed a singular point, while the ground-plane version transforms a plane, and the transformation in the carpet cloak was quasi-conformal, rather than non-conformal.
Other theories of cloaking
Other theories of cloaking discuss various science and research based theories for producing an electromagnetic cloak of invisibility. Theories presented employ transformation optics, event cloaking, dipolar scattering cancellation, tunneling light transmittance, sensors and active sources, and acoustic cloaking.
Institutional research
The research in the field of metamaterials has diffused out into the American government science research departments, including the US Naval Air Systems Command, US Air Force, and US Army. Many scientific institutions are involved including:
California Institute of Technology
Massachusetts Institute of Technology
Colorado State University
Pennsylvania State University
Duke University
Harvard University
Aalto University
Imperial College London
Max Planck Society
MSU Faculty of Physics
National Institute of Standards and Technology
Nederlandse Organisatie voor Wetenschappelijk Onderzoek
University College London
University of California, Berkeley
University of California, Irvine
University of California, Los Angeles
University of California, San Diego
University of Colorado
University of Delaware
University of Rochester
Funding for research into this technology is provided by the following American agencies:
Air Force Research Laboratory
Defense Advanced Research Projects Agency
Director of Central Intelligence
National Geospatial-Intelligence Agency
Naval Air Systems Command
Office of Naval Research
Through this research, it has been realized that a method for controlling electromagnetic fields can be applied to escape detection by radiated probing or sonar technology, and to improve communications in the microwave range; the same method is relevant to superlens design and to the cloaking of objects within and from electromagnetic fields.
In the news
On October 20, 2006, the day after Duke University achieved enveloping and "disappearing" an object in the microwave range, the story was reported by Associated Press. Media outlets covering the story included USA Today, MSNBC's Countdown With Keith Olbermann: Sight Unseen, The New York Times with Cloaking Copper, Scientists Take Step Toward Invisibility, (London) The Times with Don't Look Now—Visible Gains in the Quest for Invisibility, Christian Science Monitor with Disappear Into Thin Air? Scientists Take Step Toward Invisibility, Australian Broadcasting, Reuters with Invisibility Cloak a Step Closer, and the (Raleigh) News & Observer with 'Invisibility Cloak a Step Closer.
On November 6, 2006, the Duke University research and development team was selected as part of the Scientific American best 50 articles of 2006.
In the month of November 2009, "research into designing and building unique 'metamaterials' has received a £4.9 million funding boost. Metamaterials can be used for invisibility 'cloaking' devices, sensitive security sensors that can detect tiny quantities of dangerous substances, and flat lenses that can be used to image tiny objects much smaller than the wavelength of light."
In November 2010, scientists at the University of St Andrews in Scotland reported the creation of a flexible cloaking material they call "Metaflex", which may bring industrial applications significantly closer.
In 2014, the world's first 3D acoustic cloaking device was built by Duke engineers.
See also
History of metamaterials
Acoustic metamaterials
Chirality
Metamaterial
Metamaterial absorber
Metamaterial antennas
Nonlinear metamaterials
Photonic crystal
Photonic metamaterials
Plasmonic metamaterials
Seismic metamaterials
Split-ring resonator
Superlens
Terahertz metamaterials
Theories of cloaking
Transformation optics
Tunable metamaterials
Academic journals
Metamaterials (journal)
Metamaterials books
Metamaterials Handbook
Metamaterials: Physics and Engineering Explorations
References
Further reading
148 pages. "Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Physics in the Graduate School of Duke University 2009"
External links
Defining metamaterials
Manipulating the Near Field with Metamaterials Slide show, with audio available, by Dr. John Pendry, Imperial College, London
Researchers propose mimicking the cosmos with metamaterials
Electromagnetism
Metamaterials
Stealth technology | Metamaterial cloaking | [
"Physics",
"Materials_science",
"Engineering"
] | 7,025 | [
"Electromagnetism",
"Physical phenomena",
"Metamaterials",
"Materials science",
"Fundamental interactions"
] |
25,562,365 | https://en.wikipedia.org/wiki/Lisp%20Algebraic%20Manipulator | The Lisp Algebraic Manipulator (also known as LAM) was created by Ray d'Inverno, who had written Atlas LISP Algebraic Manipulation (ALAM was designed in 1970). LAM later became the basis for the interactive computer package SHEEP.
Notes
Computer algebra systems
Tensors | Lisp Algebraic Manipulator | [
"Mathematics",
"Engineering"
] | 60 | [
"Computer algebra systems",
"Tensors",
"Mathematical software"
] |
25,564,115 | https://en.wikipedia.org/wiki/Cephaeline | Cephaeline is an alkaloid that is found in Cephaelis ipecacuanha and other plant species including Psychotria acuminata. Cephaeline induces vomiting by stimulating the stomach lining and is found in commercial products such as syrup of ipecac. Chemically, it is closely related to emetine.
Poison treatment
Cephaeline in the form of syrup of ipecac was once commonly recommended as an emergency treatment for accidental poisoning, but its use has been phased out due to its ineffectiveness.
References
Isoquinoline alkaloids
Hydroxyarenes
Norsalsolinol ethers
Emetics
Drugs with no legal status | Cephaeline | [
"Chemistry"
] | 141 | [
"Alkaloids by chemical classification",
"Tetrahydroisoquinoline alkaloids"
] |
25,564,608 | https://en.wikipedia.org/wiki/Journal%20of%20Hydraulic%20Engineering | The Journal of Hydraulic Engineering, formerly the Journal of the Hydraulics Division (1956–1982), is a peer-reviewed scientific journal published by the American Society of Civil Engineers. Topics range from flows in closed conduits to free-surface flows (canals, rivers, lakes, and estuaries) to environmental fluid dynamics. Topics include transport processes involving fluids (multiphase flows) such as sediment and contaminant transport, and heat and gas transfers. Emphasis is placed on the presentation of concepts, methods, techniques, and results that advance knowledge and/or are suitable for general application in the hydraulic engineering profession.
History
One of ASCE's flagship journals, it began publication in 1956; its origins go back to the first volume of Transactions of the American Society of Civil Engineers, published in 1892.
Indexes
The journal is indexed in Google Scholar, Baidu, Elsevier (Ei Compendex), Clarivate Analytics (Web of Science), ProQuest, Civil engineering database, TRDI, OCLC (WorldCat), IET/INSPEC, Crossref, Scopus, and EBSCOHost.
References
External links
ASCE Library
ASCE website
Civil engineering journals
Academic journals established in 1875
Monthly journals
English-language journals
American Society of Civil Engineers academic journals | Journal of Hydraulic Engineering | [
"Engineering"
] | 270 | [
"Civil engineering journals",
"Civil engineering"
] |
523,430 | https://en.wikipedia.org/wiki/Constructible%20polygon | In mathematics, a constructible polygon is a regular polygon that can be constructed with compass and straightedge. For example, a regular pentagon is constructible with compass and straightedge while a regular heptagon is not. There are infinitely many constructible polygons, but only 31 with an odd number of sides are known.
Conditions for constructibility
Some regular polygons are easy to construct with compass and straightedge; others are not. The ancient Greek mathematicians knew how to construct a regular polygon with 3, 4, or 5 sides, and they knew how to construct a regular polygon with double the number of sides of a given regular polygon. This led to the question being posed: is it possible to construct all regular polygons with compass and straightedge? If not, which n-gons (that is, polygons with n edges) are constructible and which are not?
Carl Friedrich Gauss proved the constructibility of the regular 17-gon in 1796. Five years later, he developed the theory of Gaussian periods in his Disquisitiones Arithmeticae. This theory allowed him to formulate a sufficient condition for the constructibility of regular polygons. Gauss stated without proof that this condition was also necessary, but never published his proof.
A full proof of necessity was given by Pierre Wantzel in 1837. The result is known as the Gauss–Wantzel theorem: A regular n-gon can be constructed with compass and straightedge if and only if n is the product of a power of 2 and any number of distinct (unequal) Fermat primes. Here, a power of 2 is a number of the form 2^m, where m ≥ 0 is an integer. A Fermat prime is a prime number of the form 2^(2^m) + 1, where m ≥ 0 is an integer. The number of Fermat primes involved can be 0, in which case n is a power of 2.
In order to reduce a geometric problem to a problem of pure number theory, the proof uses the fact that a regular n-gon is constructible if and only if the cosine cos(2π/n) is a constructible number—that is, can be written in terms of the four basic arithmetic operations and the extraction of square roots. Equivalently, a regular n-gon is constructible if any root of the nth cyclotomic polynomial is constructible.
Detailed results by Gauss's theory
Restating the Gauss–Wantzel theorem:
A regular n-gon is constructible with straightedge and compass if and only if n = 2^k p1 p2 ... pt, where k and t are non-negative integers, and the pi (when t > 0) are distinct Fermat primes.
The five known Fermat primes are:
F0 = 3, F1 = 5, F2 = 17, F3 = 257, and F4 = 65537 .
Since there are 31 nonempty subsets of the five known Fermat primes, there are 31 known constructible polygons with an odd number of sides.
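The 31 odd values can be generated directly by multiplying together every non-empty subset of the five known Fermat primes; the short Python check below (helper names are arbitrary) reproduces the count and the smallest and largest members of the list given further down.

```python
from itertools import combinations
from math import prod

fermat_primes = [3, 5, 17, 257, 65537]

odd_constructible = sorted(
    prod(subset)
    for r in range(1, len(fermat_primes) + 1)
    for subset in combinations(fermat_primes, r)
)
print(len(odd_constructible))    # 31
print(odd_constructible[:6])     # [3, 5, 15, 17, 51, 85]
print(odd_constructible[-1])     # 4294967295, the product of all five known Fermat primes
```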
The next twenty-eight Fermat numbers, F5 through F32, are known to be composite.
Thus a regular n-gon is constructible if
n = 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60, 64, 68, 80, 85, 96, 102, 120, 128, 136, 160, 170, 192, 204, 240, 255, 256, 257, 272, 320, 340, 384, 408, 480, 510, 512, 514, 544, 640, 680, 768, 771, 816, 960, 1020, 1024, 1028, 1088, 1280, 1285, 1360, 1536, 1542, 1632, 1920, 2040, 2048, ... ,
while a regular n-gon is not constructible with compass and straightedge if
n = 7, 9, 11, 13, 14, 18, 19, 21, 22, 23, 25, 26, 27, 28, 29, 31, 33, 35, 36, 37, 38, 39, 41, 42, 43, 44, 45, 46, 47, 49, 50, 52, 53, 54, 55, 56, 57, 58, 59, 61, 62, 63, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 81, 82, 83, 84, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 97, 98, 99, 100, 101, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 121, 122, 123, 124, 125, 126, 127, ... .
Connection to Pascal's triangle
Since there are five known Fermat primes, we know of 31 numbers that are products of distinct Fermat primes, and hence 31 constructible odd-sided regular polygons. These are 3, 5, 15, 17, 51, 85, 255, 257, 771, 1285, 3855, 4369, 13107, 21845, 65535, 65537, 196611, 327685, 983055, 1114129, 3342387, 5570645, 16711935, 16843009, 50529027, 84215045, 252645135, 286331153, 858993459, 1431655765, 4294967295. As John Conway commented in The Book of Numbers, these numbers, when written in binary, are equal to the first 32 rows of the modulo-2 Pascal's triangle, minus the top row, which corresponds to a monogon. (Because of this, the 1s in such a list form an approximation to the Sierpiński triangle.) This pattern breaks down after this, as the next Fermat number is composite (4294967297 = 641 × 6700417), so the following rows do not correspond to constructible polygons. It is unknown whether any more Fermat primes exist, and it is therefore unknown how many odd-sided constructible regular polygons exist. In general, if there are q Fermat primes, then there are 2^q − 1 constructible regular polygons with an odd number of sides.
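Both observations are easy to check computationally: reading the rows of Pascal's triangle modulo 2 as binary numbers reproduces the list above, and the stated factorization shows why the pattern stops at the 32nd row. The sketch below verifies only the first few rows; everything beyond that rests on no further Fermat primes being known.

```python
from math import comb

def pascal_row_mod2_as_binary(k):
    """Read row k of Pascal's triangle modulo 2 as a binary number."""
    bits = [comb(k, i) % 2 for i in range(k + 1)]
    return int("".join(map(str, bits)), 2)

print([pascal_row_mod2_as_binary(k) for k in range(1, 8)])
# [3, 5, 15, 17, 51, 85, 255] -- the smallest odd-sided constructible polygons
print(641 * 6700417)   # 4294967297, the composite Fermat number F5
```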
General theory
In the light of later work on Galois theory, the principles of these proofs have been clarified. It is straightforward to show from analytic geometry that constructible lengths must come from base lengths by the solution of some sequence of quadratic equations. In terms of field theory, such lengths must be contained in a field extension generated by a tower of quadratic extensions. It follows that a field generated by constructions will always have degree over the base field that is a power of two.
In the specific case of a regular n-gon, the question reduces to the question of constructing a length
cos(2π/n),
which is a trigonometric number and hence an algebraic number. This number lies in the n-th cyclotomic field — and in fact in its real subfield, which is a totally real field and a rational vector space of dimension
½ φ(n),
where φ(n) is Euler's totient function. Wantzel's result comes down to a calculation showing that φ(n) is a power of 2 precisely in the cases specified.
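Wantzel's criterion is straightforward to test by computer: a regular n-gon is constructible exactly when φ(n) is a power of two. A self-contained sketch (the helper functions are written out here only for completeness, and the names are arbitrary):

```python
def totient(n):
    """Euler's totient function, computed by trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def constructible(n):
    """True iff the regular n-gon is constructible with compass and straightedge."""
    t = totient(n)
    return n >= 3 and (t & (t - 1)) == 0   # power-of-two test on phi(n)

print([n for n in range(3, 70) if constructible(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60, 64, 68]
```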
As for the construction of Gauss, when the Galois group is a 2-group it follows that it has a sequence of subgroups of orders
1, 2, 4, 8, ...
that are nested, each in the next (a composition series, in group theory terminology), something simple to prove by induction in this case of an abelian group. Therefore, there are subfields nested inside the cyclotomic field, each of degree 2 over the one before. Generators for each such field can be written down by Gaussian period theory. For example, for n = 17 there is a period that is a sum of eight roots of unity, one that is a sum of four roots of unity, and one that is the sum of two, which is
cos(2π/17).
Each of those is a root of a quadratic equation in terms of the one before. Moreover, these equations have real rather than complex roots, so in principle can be solved by geometric construction: this is because the work all goes on inside a totally real field.
In this way the result of Gauss can be understood in current terms; for actual calculation of the equations to be solved, the periods can be squared and compared with the 'lower' periods, in a quite feasible algorithm.
Compass and straightedge constructions
Compass and straightedge constructions are known for all known constructible polygons. If n = pq with p = 2 or p and q coprime, an n-gon can be constructed from a p-gon and a q-gon.
If p = 2, draw a q-gon and bisect one of its central angles. From this, a 2q-gon can be constructed.
If p > 2, inscribe a p-gon and a q-gon in the same circle in such a way that they share a vertex. Because p and q are coprime, there exist integers a and b such that ap + bq = 1. Then 2aπ/q + 2bπ/p = 2π/pq. From this, a pq-gon can be constructed, as illustrated in the sketch below.
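A small numerical illustration of the Bézout argument in the last item, using p = 3 and q = 5 to recover the central angle of the 15-gon; the extended Euclidean helper is standard and the particular values are arbitrary.

```python
from math import gcd, pi

def bezout(p, q):
    """Extended Euclidean algorithm: return (a, b) with a*p + b*q == gcd(p, q)."""
    old_r, r = p, q
    old_a, a = 1, 0
    old_b, b = 0, 1
    while r:
        k = old_r // r
        old_r, r = r, old_r - k * r
        old_a, a = a, old_a - k * a
        old_b, b = b, old_b - k * b
    return old_a, old_b

p, q = 3, 5                      # coprime: build a 15-gon from a triangle and a pentagon
a, b = bezout(p, q)
assert a * p + b * q == gcd(p, q) == 1
# a copies of the pentagon's central angle plus b copies of the triangle's central angle
# give exactly the central angle of the 15-gon:
angle = a * (2 * pi / q) + b * (2 * pi / p)
print(abs(angle - 2 * pi / (p * q)) < 1e-12)   # True
```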
Thus one only has to find a compass and straightedge construction for n-gons where n is a Fermat prime.
The construction for an equilateral triangle is simple and has been known since antiquity; see Equilateral triangle.
Constructions for the regular pentagon were described both by Euclid (Elements, ca. 300 BC), and by Ptolemy (Almagest, ca. 150 AD).
Although Gauss proved that the regular 17-gon is constructible, he did not actually show how to do it. The first construction is due to Erchinger, a few years after Gauss's work.
The first explicit constructions of a regular 257-gon were given by Magnus Georg Paucker (1822) and Friedrich Julius Richelot (1832).
A construction for a regular 65537-gon was first given by Johann Gustav Hermes (1894). The construction is very complex; Hermes spent 10 years completing the 200-page manuscript.
Gallery
From left to right, constructions of a 15-gon, 17-gon, 257-gon and 65537-gon. Only the first stage of the 65537-gon construction is shown; the constructions of the 15-gon, 17-gon, and 257-gon are given completely.
Other constructions
The concept of constructibility as discussed in this article applies specifically to compass and straightedge constructions. More constructions become possible if other tools are allowed. The so-called neusis constructions, for example, make use of a marked ruler. The constructions are a mathematical idealization and are assumed to be done exactly.
A regular polygon with n sides can be constructed with ruler, compass, and angle trisector if and only if n = 2^r 3^s p1 p2 ... pk, where r, s, k ≥ 0 and where the pi are distinct Pierpont primes greater than 3 (primes of the form 2^t 3^u + 1). These polygons are exactly the regular polygons that can be constructed with conic sections, and the regular polygons that can be constructed with paper folding. The first numbers of sides of these polygons are:
3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 24, 26, 27, 28, 30, 32, 34, 35, 36, 37, 38, 39, 40, 42, 45, 48, 51, 52, 54, 56, 57, 60, 63, 64, 65, 68, 70, 72, 73, 74, 76, 78, 80, 81, 84, 85, 90, 91, 95, 96, 97, 102, 104, 105, 108, 109, 111, 112, 114, 117, 119, 120, 126, 128, 130, 133, 135, 136, 140, 144, 146, 148, 152, 153, 156, 160, 162, 163, 168, 170, 171, 180, 182, 185, 189, 190, 192, 193, 194, 195, 204, 208, 210, 216, 218, 219, 221, 222, 224, 228, 234, 238, 240, 243, 247, 252, 255, 256, 257, 259, 260, 266, 270, 272, 273, 280, 285, 288, 291, 292, 296, ...
See also
Polygon
Carlyle circle
References
External links
Regular Polygon Formulas, Ask Dr. Math FAQ.
Carl Schick: Weiche Primzahlen und das 257-Eck : eine analytische Lösung des 257-Ecks. Zürich : C. Schick, 2008. .
65537-gon, exact construction for the 1st side, using the Quadratrix of Hippias and GeoGebra as additional aids, with brief description (German)
Euclidean plane geometry
Carl Friedrich Gauss | Constructible polygon | [
"Mathematics"
] | 2,861 | [
"Constructible polygons",
"Planes (geometry)",
"Euclidean plane geometry"
] |
523,879 | https://en.wikipedia.org/wiki/Simply%20connected%20space | In topology, a topological space is called simply connected (or 1-connected, or 1-simply connected) if it is path-connected and every path between two points can be continuously transformed into any other such path while preserving the two endpoints in question. Intuitively, this corresponds to a space that has no disjoint parts and no holes that go completely through it, because two paths going around different sides of such a hole cannot be continuously transformed into each other. The fundamental group of a topological space is an indicator of the failure for the space to be simply connected: a path-connected topological space is simply connected if and only if its fundamental group is trivial.
Definition and equivalent formulations
A topological space X is called simply connected if it is path-connected and any loop in X defined by f : S^1 → X can be contracted to a point: there exists a continuous map F : D^2 → X such that F restricted to S^1 is f. Here, S^1 and D^2 denote the unit circle and closed unit disk in the Euclidean plane, respectively.
An equivalent formulation is this: X is simply connected if and only if it is path-connected, and whenever p : [0,1] → X and q : [0,1] → X are two paths (that is, continuous maps) with the same start and endpoint (p(0) = q(0) and p(1) = q(1)), then p can be continuously deformed into q while keeping both endpoints fixed. Explicitly, there exists a homotopy F : [0,1] × [0,1] → X such that F(x, 0) = p(x) and F(x, 1) = q(x).
A topological space X is simply connected if and only if X is path-connected and the fundamental group of X at each point is trivial, i.e. consists only of the identity element. Similarly, X is simply connected if and only if for all points x, y in X, the set of morphisms Hom(x, y) in the fundamental groupoid of X has only one element.
In complex analysis: an open subset X ⊆ C is simply connected if and only if both X and its complement in the Riemann sphere are connected. The set of complex numbers with imaginary part strictly greater than zero and less than one furnishes an example of an unbounded, connected, open subset of the plane whose complement is not connected. It is nevertheless simply connected. A relaxation of the requirement that X be connected leads to an exploration of open subsets of the plane with connected extended complement. For example, a (not necessarily connected) open set has a connected extended complement exactly when each of its connected components is simply connected.
Informal discussion
Informally, an object in our space is simply connected if it consists of one piece and does not have any "holes" that pass all the way through it. For example, neither a doughnut nor a coffee cup (with a handle) is simply connected, but a hollow rubber ball is simply connected. In two dimensions, a circle is not simply connected, but a disk and a line are. Spaces that are connected but not simply connected are called non-simply connected or multiply connected.
The definition rules out only handle-shaped holes. A sphere (or, equivalently, a rubber ball with a hollow center) is simply connected, because any loop on the surface of a sphere can contract to a point even though it has a "hole" in the hollow center. The stronger condition, that the object has no holes of any dimension, is called contractibility.
Examples
The Euclidean plane R^2 is simply connected, but R^2 minus the origin is not. If n ≥ 3, then both R^n and R^n minus the origin are simply connected.
Analogously: the n-dimensional sphere S^n is simply connected if and only if n ≥ 2.
Every convex subset of R^n is simply connected.
A torus, the (elliptic) cylinder, the Möbius strip, the projective plane and the Klein bottle are not simply connected.
Every topological vector space is simply connected; this includes Banach spaces and Hilbert spaces.
For n ≥ 2, the special orthogonal group SO(n, R) is not simply connected and the special unitary group SU(n) is simply connected.
The one-point compactification of R is not simply connected (even though R is simply connected).
The long line is simply connected, but its compactification, the extended long line is not (since it is not even path connected).
Properties
A surface (two-dimensional topological manifold) is simply connected if and only if it is connected and its genus (the number of handles of the surface) is 0.
A universal cover of any (suitable) space X is a simply connected space which maps to X via a covering map.
If X and Y are homotopy equivalent and X is simply connected, then so is Y.
The image of a simply connected set under a continuous function need not be simply connected. Take for example the complex plane under the exponential map: the image is C \ {0}, which is not simply connected.
The notion of simple connectedness is important in complex analysis because of the following facts:
Cauchy's integral theorem states that if U is a simply connected open subset of the complex plane C and f : U → C is a holomorphic function, then f has an antiderivative F on U, and the value of every line integral in U with integrand f depends only on the end points u and v of the path, and can be computed as F(v) − F(u). The integral thus does not depend on the particular path connecting u and v (a numerical illustration follows this list).
The Riemann mapping theorem states that any non-empty open simply connected subset of (except for itself) is conformally equivalent to the unit disk.
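The role simple connectedness plays in the first item above can be illustrated numerically: a loop integral of an entire function such as f(z) = z vanishes, while 1/z, holomorphic only on the punctured (non-simply-connected) plane, picks up 2πi around the origin, so its integrals are path-dependent. The sketch below uses a crude midpoint rule; the step count and contour radius are arbitrary choices.

```python
import cmath

def loop_integral(f, radius=1.0, n=20000):
    """Midpoint-rule approximation of the integral of f around the circle |z| = radius."""
    total = 0j
    for k in range(n):
        t0 = 2 * cmath.pi * k / n
        t1 = 2 * cmath.pi * (k + 1) / n
        z0 = radius * cmath.exp(1j * t0)
        z1 = radius * cmath.exp(1j * t1)
        total += f((z0 + z1) / 2) * (z1 - z0)
    return total

print(abs(loop_integral(lambda z: z)))     # ~0: holomorphic on the simply connected plane
print(loop_integral(lambda z: 1 / z))      # ~2*pi*i: the puncture at the origin is detected
print(2j * cmath.pi)                       # reference value
```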
The notion of simple connectedness is also a crucial condition in the Poincaré conjecture.
See also
References
Algebraic topology
Properties of topological spaces | Simply connected space | [
"Mathematics"
] | 1,064 | [
"Properties of topological spaces",
"Algebraic topology",
"Space (mathematics)",
"Topological spaces",
"Fields of abstract algebra",
"Topology"
] |
524,003 | https://en.wikipedia.org/wiki/Internal%20and%20external%20angles | In geometry, an angle of a polygon is formed by two adjacent sides. For a simple polygon (non-self-intersecting), regardless of whether it is convex or non-convex, this angle is called an internal angle (or interior angle) if a point within the angle is in the interior of the polygon. A polygon has exactly one internal angle per vertex.
If every internal angle of a simple polygon is less than a straight angle ( radians or 180°), then the polygon is called convex.
In contrast, an external angle (also called a turning angle or exterior angle) is an angle formed by one side of a simple polygon and a line extended from an adjacent side.
Properties
The sum of the internal angle and the external angle on the same vertex is π radians (180°).
The sum of all the internal angles of a simple polygon is π(n−2) radians or 180(n–2) degrees, where n is the number of sides. The formula can be proved by using mathematical induction: starting with a triangle, for which the angle sum is 180°, then replacing one side with two sides connected at another vertex, and so on.
The sum of the external angles of any simple polygon, if only one of the two external angles is assumed at each vertex, is 2π radians (360°).
The measure of the exterior angle at a vertex is unaffected by which side is extended: the two exterior angles that can be formed at a vertex by extending alternately one side or the other are vertical angles and thus are equal.
Extension to crossed polygons
The interior angle concept can be extended in a consistent way to crossed polygons such as star polygons by using the concept of directed angles. In general, the interior angle sum in degrees of any closed polygon, including crossed (self-intersecting) ones, is then given by 180(n–2k)°, where n is the number of vertices, and the strictly positive integer k is the number of total (360°) revolutions one undergoes by walking around the perimeter of the polygon. In other words, the sum of all the exterior angles is 2πk radians or 360k degrees. Example: for ordinary convex polygons and concave polygons, k = 1, since the exterior angle sum is 360°, and one undergoes only one full revolution by walking around the perimeter.
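Both angle-sum statements can be checked numerically by summing the signed turning (exterior) angles while walking once around a polygon: the total is 360k degrees, from which the interior sum 180(n − 2k) degrees follows. In the sketch below the square and the {5/2} pentagram are arbitrary test cases.

```python
from math import atan2, cos, sin, pi, tau

def total_turning(points):
    """Sum of signed exterior (turning) angles along a closed polygonal walk."""
    n = len(points)
    total = 0.0
    for i in range(n):
        ax, ay = points[i]
        bx, by = points[(i + 1) % n]
        cx, cy = points[(i + 2) % n]
        heading_in = atan2(by - ay, bx - ax)
        heading_out = atan2(cy - by, cx - bx)
        total += (heading_out - heading_in + pi) % tau - pi   # signed turn in [-pi, pi)
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
pentagram = [(cos(tau * 2 * i / 5), sin(tau * 2 * i / 5)) for i in range(5)]  # {5/2} star

print(round(total_turning(square) / tau, 6))     # 1.0 -> k = 1, interior sum 180(4-2) = 360 deg
print(round(total_turning(pentagram) / tau, 6))  # 2.0 -> k = 2, interior sum 180(5-4) = 180 deg
```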
References
External links
Internal angles of a triangle
Interior angle sum of polygons: a general formula - Provides an interactive Java activity that extends the interior angle sum formula for simple closed polygons to include crossed (complex) polygons.
Angle
Euclidean plane geometry
Elementary geometry
Polygons | Internal and external angles | [
"Physics",
"Mathematics"
] | 564 | [
"Geometric measurement",
"Scalar physical quantities",
"Planes (geometry)",
"Physical quantities",
"Euclidean plane geometry",
"Elementary mathematics",
"Elementary geometry",
"Wikipedia categories named after physical quantities",
"Angle"
] |
524,124 | https://en.wikipedia.org/wiki/Hypercharge | In particle physics, the hypercharge (a portmanteau of hyperonic and charge) Y of a particle is a quantum number conserved under the strong interaction. The concept of hypercharge provides a single charge operator that accounts for properties of isospin, electric charge, and flavour. The hypercharge is useful to classify hadrons; the similarly named weak hypercharge has an analogous role in the electroweak interaction.
Definition
Hypercharge is one of two quantum numbers of the SU(3) model of hadrons, alongside isospin I. Isospin alone was sufficient for two quark flavours, namely up and down, whereas presently six flavours of quarks are known.
SU(3) weight diagrams are two-dimensional, with the coordinates referring to two quantum numbers: I3 (also known as Iz), which is the third component of isospin, and Y, which is the hypercharge (defined by strangeness S, charm C, bottomness B′, topness T′, and baryon number B). Mathematically, hypercharge is Y = B + S + C + B′ + T′.
Strong interactions conserve hypercharge (and weak hypercharge), but weak interactions do not.
Relation with electric charge and isospin
The Gell-Mann–Nishijima formula relates isospin and electric charge:
Q = I3 + Y/2,
where I3 is the third component of isospin and Q is the particle's charge.
Isospin creates multiplets of particles whose average charge is related to the hypercharge by:
⟨Q⟩ = Y/2,
since the hypercharge is the same for all members of a multiplet, and the average of the I3 values is 0.
These definitions in their original form hold only for the three lightest quarks.
SU(3) model in relation to hypercharge
The SU(2) model has multiplets characterized by a quantum number J, which is the total angular momentum. Each multiplet consists of substates with equally-spaced values of Jz, forming a symmetric arrangement seen in atomic spectra and isospin. This formalizes the observation that certain strong baryon decays were not observed, leading to the prediction of the mass, strangeness and charge of the Ω− baryon.
The SU(3) has supermultiplets containing SU(2) multiplets. SU(3) now needs two numbers to specify all its sub-states which are denoted by λ1 and λ2.
λ1 specifies the number of points in the topmost side of the hexagon, while λ2 specifies the number of points on the bottom side.
Examples
The nucleon group (protons with Q = +1 and neutrons with Q = 0) have an average charge of +1/2, so they both have hypercharge Y = 1 (since baryon number B = +1, and S = C = B′ = T′ = 0). From the Gell-Mann–Nishijima formula we know that the proton has isospin I3 = +1/2, while the neutron has I3 = −1/2.
This also works for quarks: For the up quark, with a charge of +2/3 and an I3 of +1/2, we deduce a hypercharge of 1/3, due to its baryon number (since three quarks make a baryon, each quark has a baryon number of 1/3).
For a strange quark, with electric charge −1/3, a baryon number of 1/3, and strangeness −1, we get a hypercharge Y = −2/3, so we deduce that I3 = 0. That means that a strange quark makes an isospin singlet of its own (the same happens with charm, bottom and top quarks), while up and down constitute an isospin doublet.
All other quarks have hypercharge Y = 1/3 (under the original definition Y = B + S).
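A small sketch checking the worked examples above against the Gell-Mann–Nishijima formula, using only the three light quarks and the original definition Y = B + S; the quantum-number table simply restates the standard values quoted in this section, and the particle choices are arbitrary.

```python
from fractions import Fraction as F

# Light-quark quantum numbers: electric charge Q, third isospin component I3,
# baryon number B and strangeness S.
QUARKS = {
    "u": {"Q": F(2, 3),  "I3": F(1, 2),  "B": F(1, 3), "S": 0},
    "d": {"Q": F(-1, 3), "I3": F(-1, 2), "B": F(1, 3), "S": 0},
    "s": {"Q": F(-1, 3), "I3": F(0),     "B": F(1, 3), "S": -1},
}

def check(name, content):
    """Sum the quark quantum numbers and verify Q = I3 + Y/2 with Y = B + S."""
    Q  = sum(QUARKS[q]["Q"]  for q in content)
    I3 = sum(QUARKS[q]["I3"] for q in content)
    Y  = sum(QUARKS[q]["B"]  for q in content) + sum(QUARKS[q]["S"] for q in content)
    print(f"{name}: Y = {Y}, Q = {Q}, I3 + Y/2 = {I3 + Y / 2}")

check("proton (uud)",  "uud")   # Y = 1, Q = 1 = I3 + Y/2
check("neutron (udd)", "udd")   # Y = 1, Q = 0 = I3 + Y/2
check("Lambda (uds)",  "uds")   # Y = 0, Q = 0 = I3 + Y/2
```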
Practical obsolescence
Hypercharge was a concept developed in the 1960s, to organize groups of particles in the "particle zoo" and to develop ad hoc conservation laws based on their observed transformations. With the advent of the quark model, it is now obvious that strong hypercharge, Y, is the following combination of the numbers of up (nu), down (nd), strange (ns), charm (nc), top (nt) and bottom (nb) quarks, each count taken as quarks minus antiquarks:
Y = (nu + nd)/3 − 2ns/3 + 4nc/3 − 2nb/3 + 4nt/3
In modern descriptions of hadron interaction, it has become more natural to draw Feynman diagrams that trace through the individual constituent quarks (which are conserved) composing the interacting baryons and mesons, rather than to count strong hypercharge quantum numbers. Weak hypercharge, however, remains an essential part of understanding the electroweak interaction.
References
Nuclear physics
Quarks
Standard Model
Electroweak theory | Hypercharge | [
"Physics"
] | 909 | [
"Standard Model",
"Physical phenomena",
"Electroweak theory",
"Particle physics",
"Fundamental interactions",
"Nuclear physics"
] |
524,246 | https://en.wikipedia.org/wiki/Pathogenesis | In pathology, pathogenesis is the process by which a disease or disorder develops. It can include factors which contribute not only to the onset of the disease or disorder, but also to its progression and maintenance. The word comes from the Greek πάθος (pathos, "suffering, disease") and γένεσις (genesis, "creation, origin").
Description
Types of pathogenesis include microbial infection, inflammation, malignancy and tissue breakdown. For example, bacterial pathogenesis is the process by which bacteria cause infectious illness.
Most diseases are caused by multiple processes. For example, certain cancers arise from dysfunction of the immune system (skin tumors and lymphoma after a renal transplant, which requires immunosuppression). Streptococcus pneumoniae is spread through contact with respiratory secretions, such as saliva, mucus, or cough droplets from an infected person, and colonizes the upper respiratory tract, where it begins to multiply.
The pathogenic mechanisms of a disease (or condition) are set in motion by the underlying causes, which if controlled would allow the disease to be prevented. Often, a potential cause is identified by epidemiological observations before a pathological link can be drawn between the cause and the disease. The pathological perspective can be directly integrated into an epidemiological approach in the interdisciplinary field of molecular pathological epidemiology. Molecular pathological epidemiology can help to assess pathogenesis and causality by means of linking a potential risk factor to molecular pathologic signatures of a disease. Thus, the molecular pathological epidemiology paradigm can advance the area of causal inference.
See also
Causal inference
Epidemiology
Molecular pathological epidemiology
Molecular pathology
Pathology
Pathophysiology
Salutogenesis
References
Further reading
Pathology | Pathogenesis | [
"Biology"
] | 338 | [
"Pathology"
] |
524,445 | https://en.wikipedia.org/wiki/Thrombopoietin | Thrombopoietin (THPO) also known as megakaryocyte growth and development factor (MGDF) is a protein that in humans is encoded by the THPO gene.
Thrombopoietin is a glycoprotein hormone produced by the liver and kidney which regulates the production of platelets. It stimulates the production and differentiation of megakaryocytes, the bone marrow cells that bud off large numbers of platelets.
Megakaryocytopoiesis is the cellular development process that leads to platelet production. The protein encoded by this gene is a humoral growth factor necessary for megakaryocyte proliferation and maturation, as well as for thrombopoiesis. This protein is the ligand for c-Mpl (CD110), the product of the myeloproliferative leukemia virus oncogene MPL.
Genetics
The thrombopoietin gene is located on the long arm of chromosome 3 (q26.3-27). Abnormalities in this gene occur in some hereditary forms of thrombocytosis (high platelet count) and in some cases of leukemia. The first 155 amino acids of the protein share homology with erythropoietin.
Function and regulation
Thrombopoietin is produced in the liver by both parenchymal cells and sinusoidal endothelial cells, as well as in the kidney by proximal convoluted tubule cells. Small amounts are also made by striated muscle and bone marrow stromal cells. In the liver, its production is augmented by interleukin 6 (IL-6). However, the liver and the kidney are the primary sites of thrombopoietin production.
Thrombopoietin regulates the differentiation of megakaryocytes and platelets, but studies on the removal of the thrombopoietin receptor show that its effects on hematopoiesis are more versatile.
Its negative feedback is different from that of most hormones in endocrinology: the effector regulates the hormone directly. Thrombopoietin is bound to the surface of platelets and megakaryocytes by the mpl receptor (CD110). Inside platelets it is destroyed, while inside megakaryocytes it gives the signal for their maturation and, consequently, more platelet production. The binding of the hormone to these cells thereby reduces further megakaryocyte exposure to the hormone. Therefore, rising and falling platelet and megakaryocyte concentrations regulate the thrombopoietin levels. Low platelet and megakaryocyte counts lead to a higher degree of thrombopoietin exposure for the undifferentiated bone marrow cells, leading to differentiation into megakaryocytes and further maturation of these cells. On the other hand, high platelet and megakaryocyte concentrations lead to more thrombopoietin destruction and thus less availability of thrombopoietin to the bone marrow.
TPO, like EPO, plays a role in brain development. It promotes apoptosis of newly generated neurons, an effect counteracted by EPO and neurotrophins.
Therapeutic use
Despite numerous trials, thrombopoietin has not been found to be useful therapeutically. Theoretical uses include the procurement of platelets for donation, and recovery of platelet counts after myelosuppressive chemotherapy.
Trials of a modified recombinant form, megakaryocyte growth and differentiation factor (MGDF), were stopped when healthy volunteers developed autoantibodies to endogenous thrombopoietin and then developed thrombocytopenia. Romiplostim and Eltrombopag, structurally different compounds that stimulate the same pathway, are used instead.
A quadrivalent peptide analogue is being investigated, as well as several small-molecule agents, and several non-peptide ligands of c-Mpl, which act as thrombopoietin analogues.
Discovery
Thrombopoietin was cloned by five independent teams in 1994. Before its identification, its function had been hypothesized for as long as 30 years as being linked to the cell surface receptor c-Mpl, and in older publications thrombopoietin is described as the c-Mpl ligand (the agent that binds to the c-Mpl molecule). Thrombopoietin is one of the Class I hematopoietic cytokines.
See also
Thrombopoietic agent
References
Further reading
External links
Longer summary on thrombopoietin
Growth factors
Thrombopoietin receptor agonists | Thrombopoietin | [
"Chemistry"
] | 970 | [
"Growth factors",
"Signal transduction"
] |
524,501 | https://en.wikipedia.org/wiki/Independent%20set%20%28graph%20theory%29 | In graph theory, an independent set, stable set, coclique or anticlique is a set of vertices in a graph, no two of which are adjacent. That is, it is a set of vertices such that no edge connects any two vertices in the set. Equivalently, each edge in the graph has at most one endpoint in the set. A set is independent if and only if it is a clique in the graph's complement. The size of an independent set is the number of vertices it contains. Independent sets have also been called "internally stable sets", of which "stable set" is a shortening.
A maximal independent set is an independent set that is not a proper subset of any other independent set.
A maximum independent set is an independent set of largest possible size for a given graph G. This size is called the independence number of G and is usually denoted by α(G). The optimization problem of finding such a set is called the maximum independent set problem. It is a strongly NP-hard problem. As such, it is unlikely that there exists an efficient algorithm for finding a maximum independent set of a graph.
Every maximum independent set also is maximal, but the converse implication does not necessarily hold.
Properties
Relationship to other graph parameters
A set is independent if and only if it is a clique in the graph’s complement, so the two concepts are complementary. In fact, sufficiently large graphs with no large cliques have large independent sets, a theme that is explored in Ramsey theory.
A set is independent if and only if its complement is a vertex cover. Therefore, the sum of the size of the largest independent set and the size of a minimum vertex cover is equal to the number of vertices in the graph.
A vertex coloring of a graph G corresponds to a partition of its vertex set into independent subsets. Hence the minimum number of colors needed in a vertex coloring, the chromatic number χ(G), is at least the quotient of the number of vertices in G and the independence number α(G).
In a bipartite graph with no isolated vertices, the number of vertices in a maximum independent set equals the number of edges in a minimum edge covering; this is Kőnig's theorem.
Maximal independent set
An independent set that is not a proper subset of another independent set is called maximal. Such sets are dominating sets. Every graph contains at most 3^(n/3) maximal independent sets, but many graphs have far fewer.
The number of maximal independent sets in n-vertex cycle graphs is given by the Perrin numbers, and the number of maximal independent sets in n-vertex path graphs is given by the Padovan sequence. Therefore, both numbers are proportional to powers of 1.324718..., the plastic ratio.
Finding independent sets
In computer science, several computational problems related to independent sets have been studied.
In the maximum independent set problem, the input is an undirected graph, and the output is a maximum independent set in the graph. If there are multiple maximum independent sets, only one need be output. This problem is sometimes referred to as "vertex packing".
In the maximum-weight independent set problem, the input is an undirected graph with weights on its vertices and the output is an independent set with maximum total weight. The maximum independent set problem is the special case in which all weights are one.
In the maximal independent set listing problem, the input is an undirected graph, and the output is a list of all its maximal independent sets. The maximum independent set problem may be solved using as a subroutine an algorithm for the maximal independent set listing problem, because the maximum independent set must be included among all the maximal independent sets.
In the independent set decision problem, the input is an undirected graph and a number k, and the output is a Boolean value: true if the graph contains an independent set of size k, and false otherwise.
The first three of these problems are all important in practical applications; the independent set decision problem is not, but is necessary in order to apply the theory of NP-completeness to problems related to independent sets.
Maximum independent sets and maximum cliques
The independent set problem and the clique problem are complementary: a clique in G is an independent set in the complement graph of G and vice versa. Therefore, many computational results may be applied equally well to either problem. For example, the results related to the clique problem have the following corollaries:
The independent set decision problem is NP-complete, and hence it is not believed that there is an efficient algorithm for solving it.
The maximum independent set problem is NP-hard and it is also hard to approximate.
Despite the close relationship between maximum cliques and maximum independent sets in arbitrary graphs, the independent set and clique problems may be very different when restricted to special classes of graphs. For instance, for sparse graphs (graphs in which the number of edges is at most a constant times the number of vertices in any subgraph), the maximum clique has bounded size and may be found exactly in linear time; however, for the same classes of graphs, or even for the more restricted class of bounded degree graphs, finding the maximum independent set is MAXSNP-complete, implying that, for some constant c (depending on the degree) it is NP-hard to find an approximate solution that comes within a factor of c of the optimum.
Exact algorithms
The maximum independent set problem is NP-hard. However, it can be solved more efficiently than the O(n^2 2^n) time that would be given by a naive brute force algorithm that examines every vertex subset and checks whether it is an independent set.
As of 2017 it can be solved in time O(1.1996^n) using polynomial space. When restricted to graphs with maximum degree 3, it can be solved in time O(1.0836^n).
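A minimal sketch of the naive brute-force search mentioned above; the example graph at the end is invented. It is exponential-time and only usable on very small graphs, but it makes the definition of the problem concrete.

```python
from itertools import combinations

def maximum_independent_set(vertices, edges):
    """Exact search: try vertex subsets from largest to smallest and return
    the first subset that contains no edge of the graph."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(vertices), -1, -1):
        for subset in combinations(vertices, size):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return set(subset)
    return set()

# 5-cycle: the independence number is 2.
print(maximum_independent_set(range(5), [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
```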
For many classes of graphs, a maximum weight independent set may be found in polynomial time. Famous examples are claw-free graphs, P5-free graphs and perfect graphs. For chordal graphs, a maximum weight independent set can be found in linear time.
Modular decomposition is a good tool for solving the maximum weight independent set problem; the linear time algorithm on cographs is the basic example for that. Another important tool is the use of clique separators, as described by Tarjan.
Kőnig's theorem implies that in a bipartite graph the maximum independent set can be found in polynomial time using a bipartite matching algorithm.
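A sketch of this bipartite approach: a simple augmenting-path matching (Kuhn's algorithm) followed by Kőnig's alternating-path construction of a minimum vertex cover, whose complement is a maximum independent set. A production implementation would use Hopcroft–Karp or a graph library instead; the small path graph at the end is an invented example.

```python
def maximum_independent_set_bipartite(n_left, n_right, adj):
    """Maximum independent set of a bipartite graph via Kőnig's theorem.
    adj[u] lists the right-side neighbours of left vertex u."""
    match_of_right = [-1] * n_right        # left partner of each right vertex

    def augment(u, seen):
        # Kuhn's augmenting-path step for maximum bipartite matching.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_of_right[v] == -1 or augment(match_of_right[v], seen):
                match_of_right[v] = u
                return True
        return False

    for u in range(n_left):
        augment(u, set())

    matched_left = set(match_of_right) - {-1}

    # Kőnig's construction: alternating paths from unmatched left vertices,
    # following non-matching edges left-to-right and matching edges back.
    reachable_left = {u for u in range(n_left) if u not in matched_left}
    reachable_right = set()
    stack = list(reachable_left)
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in reachable_right:
                reachable_right.add(v)
                w = match_of_right[v]
                if w != -1 and w not in reachable_left:
                    reachable_left.add(w)
                    stack.append(w)

    # Minimum vertex cover = (L minus Z) union (R intersect Z); the maximum
    # independent set is its complement.
    return reachable_left, set(range(n_right)) - reachable_right

# Path a-b-c-d with left side {a, c} = {0, 1} and right side {b, d} = {0, 1}.
print(maximum_independent_set_bipartite(2, 2, {0: [0], 1: [0, 1]}))
```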
Approximation algorithms
In general, the maximum independent set problem cannot be approximated to a constant factor in polynomial time (unless P = NP). In fact, Max Independent Set in general is Poly-APX-complete, meaning it is as hard as any problem that can be approximated to a polynomial factor. However, there are efficient approximation algorithms for restricted classes of graphs.
In planar graphs
In planar graphs, the maximum independent set may be approximated to within any approximation ratio c < 1 in polynomial time; similar polynomial-time approximation schemes exist in any family of graphs closed under taking minors.
In bounded degree graphs
In bounded degree graphs, effective approximation algorithms are known with approximation ratios that are constant for a fixed value of the maximum degree; for instance, a greedy algorithm that forms a maximal independent set by, at each step, choosing the minimum degree vertex in the graph and removing its neighbors, achieves an approximation ratio of (Δ+2)/3 on graphs with maximum degree Δ. Approximation hardness bounds are also known for such instances. Indeed, even Max Independent Set on 3-regular 3-edge-colorable graphs is APX-complete.
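The minimum-degree greedy rule just described can be sketched as follows; the star graph used as the example is invented.

```python
def min_degree_greedy(adj):
    """Repeatedly pick a vertex of minimum degree in the remaining graph and
    delete it together with its neighbours.  On graphs of maximum degree D
    this achieves a (D + 2)/3 approximation ratio."""
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    chosen = set()
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))
        chosen.add(v)
        removed = remaining[v] | {v}
        for u in removed:
            remaining.pop(u, None)
        for nbrs in remaining.values():
            nbrs -= removed
    return chosen

# Star K_{1,4}: the greedy picks a degree-1 leaf first and ends up with all
# four leaves, which is optimal here.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(min_degree_greedy(star))
```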
In interval intersection graphs
An interval graph is a graph in which the nodes are 1-dimensional intervals (e.g. time intervals) and there is an edge between two intervals if and only if they intersect. An independent set in an interval graph is just a set of non-overlapping intervals. The problem of finding maximum independent sets in interval graphs has been studied, for example, in the context of job scheduling: given a set of jobs that has to be executed on a computer, find a maximum set of jobs that can be executed without interfering with each other. This problem can be solved exactly in polynomial time using earliest deadline first scheduling.
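A short sketch of the earliest-deadline-first rule for interval graphs; the job list is invented, and intervals are treated as half-open so that touching endpoints do not count as an overlap.

```python
def max_non_overlapping(intervals):
    """Maximum independent set in an interval graph: sort by right endpoint
    and keep every interval that starts no earlier than the end of the last
    interval kept (the classic activity-selection greedy)."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen

jobs = [(0, 3), (2, 5), (5, 7), (6, 9), (8, 11)]
print(max_non_overlapping(jobs))   # [(0, 3), (5, 7), (8, 11)]
```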
In geometric intersection graphs
A geometric intersection graph is a graph in which the nodes are geometric shapes and there is an edge between two shapes if and only if they intersect. An independent set in a geometric intersection graph is just a set of disjoint (non-overlapping) shapes. The problem of finding maximum independent sets in geometric intersection graphs has been studied, for example, in the context of Automatic label placement: given a set of locations in a map, find a maximum set of disjoint rectangular labels near these locations.
Finding a maximum independent set in intersection graphs is still NP-complete, but it is easier to approximate than the general maximum independent set problem. Surveys of this line of work are available in the literature.
In d-claw-free graphs
A d-claw in a graph is a set of d+1 vertices, one of which (the "center") is connected to the other d vertices, but the other d vertices are not connected to each other. A d-claw-free graph is a graph that does not have a d-claw subgraph. Consider the algorithm that starts with an empty set, and incrementally adds an arbitrary vertex to it as long as it is not adjacent to any existing vertex. In d-claw-free graphs, every added vertex invalidates at most d-1 vertices from the maximum independent set; therefore, this trivial algorithm attains a (d-1)-approximation ratio for the maximum independent set. In fact, it is possible to get much better approximation ratios:
Neuwohner presented a polynomial time algorithm that, for any constant ε>0, finds a (d/2-1/63,700,992+ε)-approximation for the maximum weight independent set in a d-claw free graph.
Cygan presented a quasi-polynomial time algorithm that, for any ε>0, attains a (d+ε)/3 approximation.
Finding maximal independent sets
The problem of finding a maximal independent set can be solved in polynomial time by a trivial greedy algorithm, which also parallelizes well. All maximal independent sets can be found in time O(3^(n/3)) = O(1.4423^n).
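A recursive enumeration in the spirit of the Bron–Kerbosch algorithm (run on non-neighbourhoods instead of neighbourhoods) lists every maximal independent set; this is a sketch, without the pivoting used in optimized implementations, and the 4-cycle example is invented.

```python
def maximal_independent_sets(vertices, adj):
    """List every maximal independent set of a small graph.  The output size
    can reach 3^(n/3), so this is only practical for small inputs."""
    result = []

    def expand(current, candidates, excluded):
        if not candidates and not excluded:
            result.append(set(current))
            return
        for v in list(candidates):
            keep = lambda u: u != v and u not in adj[v]   # non-neighbours of v
            expand(current | {v},
                   {u for u in candidates if keep(u)},
                   {u for u in excluded if keep(u)})
            candidates.remove(v)
            excluded.add(v)

    expand(set(), set(vertices), set())
    return result

# 4-cycle 0-1-2-3-0: the maximal independent sets are {0, 2} and {1, 3}.
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(maximal_independent_sets(cycle4.keys(), cycle4))
```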
Counting independent sets
The counting problem #IS asks, given an undirected graph, how many independent sets it contains. This problem is intractable: it is ♯P-complete, already on graphs with maximum degree three. It is further known that, assuming that NP is different from RP, the problem cannot be tractably approximated, in the sense that it does not have a fully polynomial-time randomized approximation scheme (FPRAS), even on graphs with maximum degree six; however, it does have a fully polynomial-time approximation scheme (FPTAS) in the case where the maximum degree is five. The problem #BIS, of counting independent sets on bipartite graphs, is also ♯P-complete, already on graphs with maximum degree three.
It is not known whether #BIS admits a FPRAS.
The question of counting maximal independent sets has also been studied.
Applications
The maximum independent set problem and its complement, the minimum vertex cover problem, are involved in proving the computational complexity of many theoretical problems. They also serve as useful models for real-world optimization problems; for example, the maximum independent set problem is a useful model for discovering stable genetic components for designing engineered genetic systems.
See also
An independent set of edges is a set of edges of which no two have a vertex in common. It is usually called a matching.
A vertex coloring is a partition of the vertex set into independent sets.
Notes
References
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
External links
Challenging Benchmarks for Maximum Clique, Maximum Independent Set, Minimum Vertex Cover and Vertex Coloring
Independent Set and Vertex Cover, Hanan Ayad.
Graph theory objects
NP-complete problems
Computational problems in graph theory | Independent set (graph theory) | [
"Mathematics"
] | 2,482 | [
"Computational problems in graph theory",
"Graph theory objects",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Mathematical relations",
"Mathematical problems",
"NP-complete problems"
] |
525,149 | https://en.wikipedia.org/wiki/Codimension | In mathematics, codimension is a basic geometric idea that applies to subspaces in vector spaces, to submanifolds in manifolds, and suitable subsets of algebraic varieties.
For affine and projective algebraic varieties, the codimension equals the height of the defining ideal. For this reason, the height of an ideal is often called its codimension.
The dual concept is relative dimension.
Definition
Codimension is a relative concept: it is only defined for one object inside another. There is no “codimension of a vector space (in isolation)”, only the codimension of a vector subspace.
If W is a linear subspace of a finite-dimensional vector space V, then the codimension of W in V is the difference between the dimensions: codim(W) = dim(V) − dim(W).
It is the complement of the dimension of W, in that, with the dimension of W, it adds up to the dimension of the ambient space V: dim(W) + codim(W) = dim(V).
Similarly, if N is a submanifold or subvariety in M, then the codimension of N in M is codim(N) = dim(M) − dim(N).
Just as the dimension of a submanifold is the dimension of the tangent bundle (the number of dimensions that you can move on the submanifold), the codimension is the dimension of the normal bundle (the number of dimensions you can move off the submanifold).
More generally, if W is a linear subspace of a (possibly infinite dimensional) vector space V then the codimension of W in V is the dimension (possibly infinite) of the quotient space V/W, which is more abstractly known as the cokernel of the inclusion. For finite-dimensional vector spaces, this agrees with the previous definition, codim(W) = dim(V/W) = dim(V) − dim(W),
and is dual to the relative dimension as the dimension of the kernel.
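For finite-dimensional subspaces this definition is easy to compute numerically; the following sketch (an illustration, not from the article) obtains the codimension of a span inside R^n as n minus the rank of the spanning vectors.

```python
import numpy as np

def codimension(spanning_vectors, ambient_dim):
    """Codimension of the subspace spanned by the given vectors inside R^n,
    computed as dim V - dim W = n - rank."""
    W = np.atleast_2d(spanning_vectors)
    return ambient_dim - np.linalg.matrix_rank(W)

# A plane spanned by two independent vectors in R^5 has codimension 3.
plane = [[1, 0, 0, 0, 0],
         [0, 1, 1, 0, 0]]
print(codimension(plane, 5))   # 3
```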
Finite-codimensional subspaces of infinite-dimensional spaces are often useful in the study of topological vector spaces.
Additivity of codimension and dimension counting
The fundamental property of codimension lies in its relation to intersection: if W1 has codimension k1, and W2 has codimension k2, then if U is their intersection with codimension j we have
max (k1, k2) ≤ j ≤ k1 + k2.
In fact j may take any integer value in this range. This statement is more perspicuous than the translation in terms of dimensions, because the RHS is just the sum of the codimensions. In words
codimensions (at most) add.
If the subspaces or submanifolds intersect transversally (which occurs generically), codimensions add exactly.
This statement is called dimension counting, particularly in intersection theory.
Dual interpretation
In terms of the dual space, it is quite evident why dimensions add. The subspaces can be defined by the vanishing of a certain number of linear functionals, which if we take to be linearly independent, their number is the codimension. Therefore, we see that U is defined by taking the union of the sets of linear functionals defining the Wi. That union may introduce some degree of linear dependence: the possible values of j express that dependence, with the RHS sum being the case where there is no dependence. This definition of codimension in terms of the number of functions needed to cut out a subspace extends to situations in which both the ambient space and subspace are infinite dimensional.
In other language, which is basic for any kind of intersection theory, we are taking the union of a certain number of constraints. We have two phenomena to look out for:
the two sets of constraints may not be independent;
the two sets of constraints may not be compatible.
The first of these is often expressed as the principle of counting constraints: if we have a number N of parameters to adjust (i.e. we have N degrees of freedom), and a constraint means we have to 'consume' a parameter to satisfy it, then the codimension of the solution set is at most the number of constraints. We do not expect to be able to find a solution if the predicted codimension, i.e. the number of independent constraints, exceeds N (in the linear algebra case, there is always a trivial, null vector solution, which is therefore discounted).
The second is a matter of geometry, on the model of parallel lines; it is something that can be discussed for linear problems by methods of linear algebra, and for non-linear problems in projective space, over the complex number field.
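The constraint-counting principle above can be checked numerically: define two subspaces of R^n as null spaces of random linear functionals and stack the functionals to cut out the intersection. In the generic (transversal) case the functionals remain independent and the codimensions add exactly. The dimensions and random seed below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10

# W1 and W2 are the null spaces of k1 and k2 random linear functionals on R^n.
k1, k2 = 3, 4
A1 = rng.standard_normal((k1, n))
A2 = rng.standard_normal((k2, n))

codim_W1 = np.linalg.matrix_rank(A1)                       # 3 with probability 1
codim_W2 = np.linalg.matrix_rank(A2)                       # 4 with probability 1
codim_intersection = np.linalg.matrix_rank(np.vstack([A1, A2]))

# Generic case: the stacked functionals stay independent, so 3 + 4 = 7.
print(codim_W1, codim_W2, codim_intersection)
```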
In geometric topology
Codimension also has some clear meaning in geometric topology: on a manifold, codimension 1 is the dimension of topological disconnection by a submanifold, while codimension 2 is the dimension of ramification and knot theory. In fact, the theory of high-dimensional manifolds, which starts in dimension 5 and above, can alternatively be said to start in codimension 3, because higher codimensions avoid the phenomenon of knots. Since surgery theory requires working up to the middle dimension, once one is in dimension 5, the middle dimension has codimension greater than 2, and hence one avoids knots.
This quip is not vacuous: the study of embeddings in codimension 2 is knot theory, and difficult, while the study of embeddings in codimension 3 or more is amenable to the tools of high-dimensional geometric topology, and hence considerably easier.
See also
Glossary of differential geometry and topology
References
Algebraic geometry
Geometric topology
Linear algebra
Dimension
Dimension theory | Codimension | [
"Physics",
"Mathematics"
] | 1,171 | [
"Geometric measurement",
"Algebraic geometry",
"Physical quantities",
"Geometric topology",
"Fields of abstract algebra",
"Topology",
"Theory of relativity",
"Linear algebra",
"Dimension",
"Algebra"
] |
525,234 | https://en.wikipedia.org/wiki/Singular%20perturbation | In mathematics, a singular perturbation problem is a problem containing a small parameter that cannot be approximated by setting the parameter value to zero. More precisely, the solution cannot be uniformly approximated by an asymptotic expansion of the form Σ_n δ_n(ε) ψ_n(x)
as ε → 0. Here ε is the small parameter of the problem and the δ_n(ε) are a sequence of functions of ε of increasing order, such as δ_n(ε) = ε^n. This is in contrast to regular perturbation problems, for which a uniform approximation of this form can be obtained. Singularly perturbed problems are generally characterized by dynamics operating on multiple scales. Several classes of singular perturbations are outlined below.
The term "singular perturbation" was
coined in the 1940s by Kurt Otto Friedrichs and Wolfgang R. Wasow.
Methods of analysis
A perturbed problem whose solution can be approximated on the whole problem domain, whether space or time, by a single asymptotic expansion has a regular perturbation. Most often in applications, an acceptable approximation to a regularly perturbed problem is found by simply replacing the small parameter by zero everywhere in the problem statement. This corresponds to taking only the first term of the expansion, yielding an approximation that converges, perhaps slowly, to the true solution as decreases. The solution to a singularly perturbed problem cannot be approximated in this way: As seen in the examples below, a singular perturbation generally occurs when a problem's small parameter multiplies its highest operator. Thus naively taking the parameter to be zero changes the very nature of the problem. In the case of differential equations, boundary conditions cannot be satisfied; in algebraic equations, the possible number of solutions is decreased.
Singular perturbation theory is a rich and ongoing area of exploration for mathematicians, physicists, and other researchers. The methods used to tackle problems in this field are many. The more basic of these include the method of matched asymptotic expansions and WKB approximation for spatial problems, and in time, the Poincaré–Lindstedt method, the method of multiple scales and periodic averaging.
The numerical methods for solving singular perturbation problems are also very popular.
For books on singular perturbation in ODE and PDE's, see for example Holmes, Introduction to Perturbation Methods, Hinch, Perturbation methods or Bender and Orszag, Advanced Mathematical Methods for Scientists and Engineers.
Examples of singular perturbative problems
Each of the examples described below shows how a naive perturbation analysis, which assumes that the problem is regular instead of singular, will fail. Some show how the problem may be solved by more sophisticated singular methods.
Vanishing coefficients in ordinary differential equations
Differential equations that contain a small parameter that premultiplies the highest order term typically exhibit boundary layers, so that the solution evolves in two different scales. For example, consider the boundary value problem
Its solution for a small positive value of ε is the solid curve shown in the accompanying figure. Note that the solution changes rapidly near the origin. If we naively set ε = 0, we would get the solution labelled "outer", which does not model the boundary layer, where x is close to zero. For more details that show how to obtain the uniformly valid approximation, see method of matched asymptotic expansions.
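The boundary value problem itself has dropped out of the text above, so the following sketch assumes the standard textbook example εy'' + (1 + ε)y' + y = 0 with y(0) = 0 and y(1) = 1, which may or may not be the one in the original figure. It compares the exact solution with the outer solution y = e^(1−x) obtained by setting ε = 0.

```python
import numpy as np

eps = 0.05

def exact(x):
    # Closed-form solution of the assumed problem: characteristic roots are
    # -1 and -1/eps, fitted to the boundary conditions y(0) = 0, y(1) = 1.
    return (np.exp(-x) - np.exp(-x / eps)) / (np.exp(-1) - np.exp(-1 / eps))

def outer(x):
    # Naive eps = 0 limit: y' + y = 0 with y(1) = 1.
    return np.exp(1 - x)

for x in [0.0, 0.02, 0.05, 0.1, 0.3, 1.0]:
    print(f"x = {x:4.2f}   exact = {exact(x):5.3f}   outer = {outer(x):5.3f}")
# The outer solution is accurate away from x = 0 but misses the boundary
# layer of width O(eps), where the true solution rises rapidly from 0.
```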
Examples in time
An electrically driven robot manipulator can have slower mechanical dynamics and faster electrical dynamics, thus exhibiting two time scales. In such cases, we can divide the system into two subsystems, one corresponding to faster dynamics and other corresponding to slower dynamics, and then design controllers for each one of them separately. Through a singular perturbation technique, we can make these two subsystems independent of each other, thereby simplifying the control problem.
Consider a class of system described by the following set of equations:
with 0 < ε ≪ 1. The second equation indicates that the dynamics of the second variable are much faster than those of the first. A theorem due to Tikhonov states that, with the correct conditions on the system, it will initially and very quickly approximate the solution to the equations
on some interval of time and that, as decreases toward zero, the system will approach the solution more closely in that same interval.
Examples in space
In fluid mechanics, the properties of a slightly viscous fluid are dramatically different outside and inside a narrow boundary layer. Thus the fluid exhibits multiple spatial scales.
Reaction–diffusion systems in which one reagent diffuses much more slowly than another can form spatial patterns marked by areas where a reagent exists, and areas where it does not, with sharp transitions between them. In ecology, predator-prey models such as
where is the prey and is the predator, have been shown to exhibit such patterns.
Algebraic equations
Consider the problem of finding all roots of the polynomial . In the limit , this cubic degenerates into the quadratic with roots at . Substituting a regular perturbation series
in the equation and equating like powers of ε only yields corrections to these two roots:
To find the other root, singular perturbation analysis must be used. We must then deal with the fact that the equation degenerates into a quadratic when we let ε tend to zero; in that limit one of the roots escapes to infinity. To prevent this root from becoming invisible to the perturbative analysis, we must rescale the variable to keep track of the escaping root, so that in terms of the rescaled variable it does not escape. We define a rescaled variable in which the exponent is chosen so that we rescale just fast enough for the root to sit at a finite value of the rescaled variable in the limit of ε to zero, yet not so fast that it collapses to zero together with the other two roots. In terms of the rescaled variable we have
We can see that for small values of the rescaling exponent the polynomial is dominated by the lower degree terms, while at the right exponent the highest order term becomes as dominant as the quadratic term, and together they dominate the remaining term. This point, where the highest order term no longer vanishes in the limit to zero because it becomes equally dominant to another term, is called significant degeneration; this yields the correct rescaling to make the remaining root visible. This choice yields
Substituting the perturbation series
yields
We are then interested in the root at ; the double root at are the two roots that we've found above that collapse to zero in the limit of an infinite rescaling. Calculating the first few terms of the series then yields
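The specific cubic used above is not recoverable from the stripped formulas, so the sketch below uses an assumed example with the same structure, ε x^3 + x^2 − 1 = 0: for ε → 0 it degenerates into the quadratic x^2 − 1 = 0 with roots ±1, while the third root escapes to infinity like −1/ε, which the rescaling makes visible.

```python
import numpy as np

eps = 1e-3

# All three roots of the full cubic eps*x^3 + x^2 - 1 = 0, compared with the
# two regular-perturbation roots and the rescaled singular root.
roots = np.roots([eps, 1.0, 0.0, -1.0])
print("all three roots:       ", np.sort_complex(roots))
print("regular-perturbation:  ", [-1.0, 1.0])
print("singular root estimate:", -1.0 / eps)
```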
References
Differential equations
Nonlinear control
Perturbation theory | Singular perturbation | [
"Physics",
"Mathematics"
] | 1,299 | [
"Mathematical objects",
"Differential equations",
"Equations",
"Quantum mechanics",
"Perturbation theory"
] |
525,887 | https://en.wikipedia.org/wiki/Density%20of%20states | In condensed matter physics, the density of states (DOS) of a system describes the number of allowed modes or states per unit energy range. The density of states is defined as D(E) = N(E)/V, where N(E)δE is the number of states in the system of volume V whose energies lie in the range from E to E + δE. It is mathematically represented as a distribution by a probability density function, and it is generally an average over the space and time domains of the various states occupied by the system. The density of states is directly related to the dispersion relations of the properties of the system. High DOS at a specific energy level means that many states are available for occupation.
Generally, the density of states of matter is continuous. In isolated systems however, such as atoms or molecules in the gas phase, the density distribution is discrete, like a spectral density. Local variations, most often due to distortions of the original system, are often referred to as local densities of states (LDOSs).
Introduction
In quantum mechanical systems, waves, or wave-like particles, can occupy modes or states with wavelengths and propagation directions dictated by the system. For example, in some systems, the interatomic spacing and the atomic charge of a material might allow only electrons of certain wavelengths to exist. In other systems, the crystalline structure of a material might allow waves to propagate in one direction, while suppressing wave propagation in another direction. Often, only specific states are permitted. Thus, it can happen that many states are available for occupation at a specific energy level, while no states are available at other energy levels.
Looking at the density of states of electrons at the band edge between the valence and conduction bands in a semiconductor, for an electron in the conduction band, an increase of the electron energy makes more states available for occupation. Alternatively, the density of states is discontinuous for an interval of energy, which means that no states are available for electrons to occupy within the band gap of the material. This condition also means that an electron at the conduction band edge must lose at least the band gap energy of the material in order to transition to another state in the valence band.
This determines if the material is an insulator or a metal in the dimension of the propagation. The result of the number of states in a band is also useful for predicting the conduction properties. For example, in a one dimensional crystalline structure an odd number of electrons per atom results in a half-filled top band; there are free electrons at the Fermi level resulting in a metal. On the other hand, an even number of electrons exactly fills a whole number of bands, leaving the rest empty. If then the Fermi level lies in an occupied band gap between the highest occupied state and the lowest empty state, the material will be an insulator or semiconductor.
Depending on the quantum mechanical system, the density of states can be calculated for electrons, photons, or phonons, and can be given as a function of either energy or the wave vector . To convert between the DOS as a function of the energy and the DOS as a function of the wave vector, the system-specific energy dispersion relation between and must be known.
In general, the topological properties of the system such as the band structure, have a major impact on the properties of the density of states. The most well-known systems, like neutron matter in neutron stars and free electron gases in metals (examples of degenerate matter and a Fermi gas), have a 3-dimensional Euclidean topology. Less familiar systems, like two-dimensional electron gases (2DEG) in graphite layers and the quantum Hall effect system in MOSFET type devices, have a 2-dimensional Euclidean topology. Even less familiar are carbon nanotubes, the quantum wire and Luttinger liquid with their 1-dimensional topologies. Systems with 1D and 2D topologies are likely to become more common, assuming developments in nanotechnology and materials science proceed.
Definition
The density of states related to volume and countable energy levels is defined as:
Because the smallest allowed change of momentum for a particle in a box of dimension and length is , the volume-related density of states for continuous energy levels is obtained in the limit as
Here, is the spatial dimension of the considered system and the wave vector.
For isotropic one-dimensional systems with parabolic energy dispersion, the density of states is In two dimensions the density of states is a constant while in three dimensions it becomes
Equivalently, the density of states can also be understood as the derivative of the microcanonical partition function (that is, the total number of states with energy less than ) with respect to the energy:
The number of states with energy (degree of degeneracy) is given by:
where the last equality only applies when the mean value theorem for integrals is valid.
Symmetry
There is a large variety of systems and types of states for which DOS calculations can be done.
Some condensed matter systems possess a structural symmetry on the microscopic scale which can be exploited to simplify calculation of their densities of states. In spherically symmetric systems, the integrals of functions are one-dimensional because all variables in the calculation depend only on the radial parameter of the dispersion relation. Fluids, glasses and amorphous solids are examples of a symmetric system whose dispersion relations have a rotational symmetry.
Measurements on powders or polycrystalline samples require evaluation and calculation functions and integrals over the whole domain, most often a Brillouin zone, of the dispersion relations of the system of interest. Sometimes the symmetry of the system is high, which causes the shape of the functions describing the dispersion relations of the system to appear many times over the whole domain of the dispersion relation. In such cases the effort to calculate the DOS can be reduced by a great amount when the calculation is limited to a reduced zone or fundamental domain. The Brillouin zone of the face-centered cubic lattice (FCC) in the figure on the right has the 48-fold symmetry of the point group Oh with full octahedral symmetry. This configuration means that the integration over the whole domain of the Brillouin zone can be reduced to a 48-th part of the whole Brillouin zone. As a crystal structure periodic table shows, there are many elements with a FCC crystal structure, like diamond, silicon and platinum and their Brillouin zones and dispersion relations have this 48-fold symmetry. Two other familiar crystal structures are the body-centered cubic lattice (BCC) and hexagonal closed packed structures (HCP) with cubic and hexagonal lattices, respectively. The BCC structure has the 24-fold pyritohedral symmetry of the point group Th. The HCP structure has the 12-fold prismatic dihedral symmetry of the point group D3h. A complete list of symmetry properties of a point group can be found in point group character tables.
In general it is easier to calculate a DOS when the symmetry of the system is higher and the number of topological dimensions of the dispersion relation is lower. The DOS of dispersion relations with rotational symmetry can often be calculated analytically. This result is fortunate, since many materials of practical interest, such as steel and silicon, have high symmetry.
In anisotropic condensed matter systems such as a single crystal of a compound, the density of states could be different in one crystallographic direction than in another. These causes the anisotropic density of states to be more difficult to visualize, and might require methods such as calculating the DOS for particular points or directions only, or calculating the projected density of states (PDOS) to a particular crystal orientation.
k-space topologies
The density of states is dependent upon the dimensional limits of the object itself. In a system described by three orthogonal parameters (three dimensions), the units of DOS are Energy^−1·Volume^−1; in a two-dimensional system, the units of DOS are Energy^−1·Area^−1; in a one-dimensional system, the units of DOS are Energy^−1·Length^−1. The referenced volume is the volume of k-space; the space enclosed by the constant energy surface of the system derived through a dispersion relation that relates E to k. An example of a 3-dimensional k-space is given in Fig. 1. It can be seen that the dimensionality of the system confines the momentum of particles inside the system.
Density of wave vector states (sphere)
The calculation for DOS starts by counting the allowed states at a certain that are contained within inside the volume of the system. This procedure is done by differentiating the whole k-space volume in n-dimensions at an arbitrary , with respect to . The volume, area or length in 3, 2 or 1-dimensional spherical -spaces are expressed by
for a -dimensional -space with the topologically determined constants
for linear, disk and spherical symmetrical shaped functions in 1, 2 and 3-dimensional Euclidean -spaces respectively.
According to this scheme, the density of wave vector states is, through differentiating with respect to , expressed by
The 1, 2 and 3-dimensional density of wave vector states for a line, disk, or sphere are explicitly written as
One state is large enough to contain particles having wavelength λ. The wavelength is related to k through the relationship k = 2π/λ.
In a quantum system the length of λ will depend on a characteristic spacing of the system L that is confining the particles. Finally the density of states N is multiplied by a factor , where is a constant degeneracy factor that accounts for internal degrees of freedom due to such physical phenomena as spin or polarization. If no such phenomenon is present then . Vk is the volume in k-space whose wavevectors are smaller than the smallest possible wavevectors decided by the characteristic spacing of the system.
Density of energy states
To finish the calculation for DOS find the number of states per unit sample volume at an energy inside an interval . The general form of DOS of a system is given as
The scheme sketched so far only applies to monotonically rising and spherically symmetric dispersion relations. In general the dispersion relation is not spherically symmetric and in many cases it isn't continuously rising either. To express D as a function of E the inverse of the dispersion relation has to be substituted into the expression of as a function of k to get the expression of as a function of the energy. If the dispersion relation is not spherically symmetric or continuously rising and can't be inverted easily then in most cases the DOS has to be calculated numerically. More detailed derivations are available.
Dispersion relations
The dispersion relation for electrons in a solid is given by the electronic band structure.
The kinetic energy of a particle depends on the magnitude and direction of the wave vector k, the properties of the particle and the environment in which the particle is moving. For example, the kinetic energy of an electron in a Fermi gas is given by
where m is the electron mass. The dispersion relation is a spherically symmetric parabola and it is continuously rising so the DOS can be calculated easily.
For longitudinal phonons in a string of atoms the dispersion relation of the kinetic energy in a 1-dimensional k-space, as shown in Figure 2, is given by
where is the oscillator frequency, the mass of the atoms, the inter-atomic force constant and inter-atomic spacing. For small values of the dispersion relation is linear:
When the energy is
With the transformation and small this relation can be transformed to
Isotropic dispersion relations
The two examples mentioned here can be expressed like
This expression is a kind of dispersion relation because it interrelates two wave properties and it is isotropic because only the length and not the direction of the wave vector appears in the expression. The magnitude of the wave vector is related to the energy as:
Accordingly, the volume of n-dimensional -space containing wave vectors smaller than is:
Substitution of the isotropic energy relation gives the volume of occupied states
Differentiating this volume with respect to the energy gives an expression for the DOS of the isotropic dispersion relation
Parabolic dispersion
In the case of a parabolic dispersion relation (p = 2), such as applies to free electrons in a Fermi gas, the resulting density of states, , for electrons in a n-dimensional systems is
for , with for .
In 1-dimensional systems the DOS diverges at the bottom of the band as drops to . In 2-dimensional systems the DOS turns out to be independent of . Finally for 3-dimensional systems the DOS rises as the square root of the energy.
Including the prefactor , the expression for the 3D DOS is
where is the total volume, and includes the 2-fold spin degeneracy.
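The E^(d/2 − 1) scaling described in this subsection can be checked numerically. The following sketch (an illustration, not from the article) samples k on a uniform grid in d dimensions, uses the parabolic dispersion E = k^2/2 in reduced units (ħ = m = 1), and histograms the energies; the grid sizes and energies probed are arbitrary choices.

```python
import numpy as np

def dos_histogram(dim, n_k, k_max=1.0, n_bins=25):
    """Histogram of energies over a uniform k-grid; up to prefactors this
    approximates the density of states of the parabolic dispersion."""
    axis = np.linspace(-k_max, k_max, n_k)
    grids = np.meshgrid(*([axis] * dim))
    energy = sum(g**2 for g in grids).ravel() / 2.0
    counts, edges = np.histogram(energy, bins=n_bins,
                                 range=(0.0, 0.5 * k_max**2))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / (counts.sum() * np.diff(edges))

for dim, n_k in ((1, 4001), (2, 401), (3, 81)):
    centers, dos = dos_histogram(dim, n_k)
    e_lo, e_hi = 0.1, 0.4
    ratio = (dos[np.argmin(abs(centers - e_hi))] /
             dos[np.argmin(abs(centers - e_lo))])
    print(f"{dim}D: DOS(0.4)/DOS(0.1) = {ratio:.2f} "
          f"(expected about {(e_hi / e_lo) ** (dim / 2 - 1):.2f})")
# Divergent toward the band bottom in 1D, flat in 2D, square-root-like in 3D.
```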
Linear dispersion
In the case of a linear relation (p = 1), such as applies to photons, acoustic phonons, or to some special kinds of electronic bands in a solid, the DOS in 1, 2 and 3 dimensional systems is related to the energy as:
Distribution functions
The density of states plays an important role in the kinetic theory of solids. The product of the density of states and the probability distribution function is the number of occupied states per unit volume at a given energy for a system in thermal equilibrium. This value is widely used to investigate various physical properties of matter. The following are examples, using two common distribution functions, of how applying a distribution function to the density of states can give rise to physical properties.
Fermi–Dirac statistics: The Fermi–Dirac probability distribution function, Fig. 4, is used to find the probability that a fermion occupies a specific quantum state in a system at thermal equilibrium. Fermions are particles which obey the Pauli exclusion principle (e.g. electrons, protons, neutrons). The distribution function can be written as f(E) = 1 / (exp((E − μ)/(kB T)) + 1).
Here μ is the chemical potential (also denoted as EF and called the Fermi level when T = 0), kB is the Boltzmann constant, and T is temperature. Fig. 4 illustrates how the product of the Fermi-Dirac distribution function and the three-dimensional density of states for a semiconductor can give insight into physical properties such as carrier concentration and energy band gaps.
Bose–Einstein statistics: The Bose–Einstein probability distribution function is used to find the probability that a boson occupies a specific quantum state in a system at thermal equilibrium. Bosons are particles which do not obey the Pauli exclusion principle (e.g. phonons and photons). The distribution function can be written as f(E) = 1 / (exp((E − μ)/(kB T)) − 1).
From these two distributions it is possible to calculate properties such as the internal energy per unit volume , the number of particles , specific heat capacity , and thermal conductivity . The relationships between these properties and the product of the density of states and the probability distribution, denoting the density of states by instead of , are given by
is dimensionality, is sound velocity and is mean free path.
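As a concrete example of combining a density of states with a distribution function, the sketch below integrates the product of the 3D parabolic DOS and the Fermi-Dirac occupancy to get a carrier density. The chemical potential and temperature are illustrative values for a free-electron gas, not data for any particular material.

```python
import numpy as np

hbar = 1.054571817e-34        # J s
m_e = 9.1093837015e-31        # kg
eV = 1.602176634e-19          # J per eV

kT = 0.025                    # eV, roughly room temperature (assumed)
mu = 1.0                      # eV, assumed chemical potential

E = np.linspace(1e-6, 3.0, 200000)                                  # eV
g = (2 * m_e) ** 1.5 / (2 * np.pi**2 * hbar**3) * np.sqrt(E * eV)   # states / (J m^3), spin included
f = 1.0 / (np.exp((E - mu) / kT) + 1.0)                             # Fermi-Dirac occupancy

n = np.sum(g * f) * (E[1] - E[0]) * eV                              # electrons per m^3
n_zero_T = (2 * m_e * mu * eV / hbar**2) ** 1.5 / (3 * np.pi**2)    # filled Fermi sphere
print(f"n at finite T = {n:.3e} m^-3")
print(f"n at T = 0    = {n_zero_T:.3e} m^-3")
```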
Applications
The density of states appears in many areas of physics, and helps to explain a number of quantum mechanical phenomena.
Quantization
Calculating the density of states for small structures shows that the distribution of electrons changes as dimensionality is reduced. For quantum wires, the DOS for certain energies actually becomes higher than the DOS for bulk semiconductors, and for quantum dots the electrons become quantized to certain energies.
Photonic crystals
The photon density of states can be manipulated by using periodic structures with length scales on the order of the wavelength of light. Some structures can completely inhibit the propagation of light of certain colors (energies), creating a photonic band gap: the DOS is zero for those photon energies. Other structures can inhibit the propagation of light only in certain directions to create mirrors, waveguides, and cavities. Such periodic structures are known as photonic crystals. In nanostructured media the concept of local density of states (LDOS) is often more relevant than that of DOS, as the DOS varies considerably from point to point.
Computational calculation
Interesting systems are in general complex, for instance compounds, biomolecules, polymers, etc. Because of the complexity of these systems the analytical calculation of the density of states is in most of the cases impossible. Computer simulations offer a set of algorithms to evaluate the density of states with a high accuracy. One of these algorithms is called the Wang and Landau algorithm.
Within the Wang and Landau scheme no previous knowledge of the density of states is required. One proceeds as follows: the cost function (for example the energy) of the system is discretized. Each time the bin i is reached one updates the running estimate of the density of states, g(E_i), by g(E_i) → g(E_i) f,
where f is called the modification factor. As soon as each bin in the histogram is visited a certain number of times (10-15), the modification factor is reduced by some criterion, for instance f_{n+1} = f_n^{1/2},
where n denotes the n-th update step. The simulation finishes when the modification factor is less than a certain threshold, for instance when f is smaller than about 1 + 10^−8.
The Wang and Landau algorithm has some advantages over other common algorithms such as multicanonical simulations and parallel tempering. For example, the density of states is obtained as the main product of the simulation. Additionally, Wang and Landau simulations are completely independent of the temperature. This feature allows one to compute the density of states of systems with a very rough energy landscape, such as proteins.
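A minimal Wang-Landau sketch for the 2D Ising model on a small periodic lattice; the lattice size, flatness criterion, batch length and stopping threshold are arbitrary illustrative choices, and a production run would use a much smaller final modification factor.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 4
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    # Nearest-neighbour Ising energy with periodic boundaries, each bond once.
    return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

n_sites = L * L
e_bins = np.arange(-2 * n_sites, 2 * n_sites + 1, 4)
index = {e: i for i, e in enumerate(e_bins)}

log_g = np.zeros(len(e_bins))      # running estimate of ln g(E)
hist = np.zeros(len(e_bins))       # visit histogram
ln_f = 1.0                         # ln of the modification factor
E = energy(spins)

while ln_f > 1e-3:
    for _ in range(10000):
        i, j = rng.integers(L, size=2)
        dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        new_E = E + dE
        # Accept with probability min(1, g(E_old) / g(E_new)).
        if new_E in index and \
           rng.random() < np.exp(log_g[index[E]] - log_g[index[new_E]]):
            spins[i, j] *= -1
            E = new_E
        log_g[index[E]] += ln_f
        hist[index[E]] += 1
    visited = hist[hist > 0]
    if visited.min() > 0.8 * visited.mean():   # crude flatness check
        hist[:] = 0
        ln_f /= 2.0                            # f -> sqrt(f)

print("relative ln g(E):", np.round(log_g - log_g.min(), 1))
```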
Mathematically the density of states is formulated in terms of a tower of covering maps.
Local density of states
An important feature of the definition of the DOS is that it can be extended to any system. One of its properties is translational invariance, which means that the density of states is homogeneous and is the same at each point of the system. But this is just a particular case, and the LDOS gives a wider description with a heterogeneous density of states through the system.
Concept
Local density of states (LDOS) describes a space-resolved density of states. In materials science, for example, this term is useful when interpreting the data from a scanning tunneling microscope (STM), since this method is capable of imaging electron densities of states with atomic resolution. According to crystal structure, this quantity can be predicted by computational methods, as for example with density functional theory.
A general definition
In a local density of states the contribution of each state is weighted by the density of its wave function at the point. becomes
the factor of means that each state contributes more in the regions where the density is high. An average over of this expression will restore the usual formula for a DOS. The LDOS is useful in inhomogeneous systems, where contains more information than alone.
For a one-dimensional system with a wall, the sine waves give
where .
In a three-dimensional system with the expression is
In fact, we can generalise the local density of states further to
this is called the spectral function and it's a function with each wave function separately in its own variable. In more advanced theory it is connected with the Green's functions and provides a compact representation of some results such as optical absorption.
Solid state devices
LDOS can be used to gain insight into a solid-state device. For example, the figure on the right illustrates the LDOS of a transistor as it turns on and off in a ballistic simulation. The LDOS has a clear boundary in the source and drain, which corresponds to the location of the band edge. In the channel, the DOS increases as the gate voltage increases and the potential barrier goes down.
Optics and photonics
In optics and photonics, the concept of local density of states refers to the states that can be occupied by a photon. For light it is usually measured by fluorescence methods, near-field scanning methods or by cathodoluminescence techniques. Different photonic structures have different LDOS behaviors with different consequences for spontaneous emission. In photonic crystals, near-zero LDOS are expected, inhibiting spontaneous emission.
Similar LDOS enhancement is also expected in plasmonic cavities.
However, in disordered photonic nanostructures the LDOS behaves differently. It fluctuates spatially, and its statistics are proportional to the scattering strength of the structures.
In addition, the relationship with the mean free path of the scattering is not straightforward, as the LDOS can still be strongly influenced by the short-range details of strong disorder in the form of a strong Purcell enhancement of the emission. Finally, for plasmonic disorder this effect is much stronger for LDOS fluctuations, and it can be observed as a strong near-field localization.
See also
References
Further reading
Chen, Gang. Nanoscale Energy Transport and Conversion. New York: Oxford, 2005
Streetman, Ben G. and Sanjay Banerjee. Solid State Electronic Devices. Upper Saddle River, NJ: Prentice Hall, 2000.
Muller, Richard S. and Theodore I. Kamins. Device Electronics for Integrated Circuits. New York: John Wiley and Sons, 2003.
Kittel, Charles and Herbert Kroemer. Thermal Physics. New York: W.H. Freeman and Company, 1980
Sze, Simon M. Physics of Semiconductor Devices. New York: John Wiley and Sons, 1981
External links
Online lecture:ECE 606 Lecture 8: Density of States by M. Alam
Scientists shed light on glowing materials How to measure the Photonic LDOS
Statistical mechanics
Physical quantities
Electronic band structures | Density of states | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 4,365 | [
"Electron",
"Physical phenomena",
"Physical quantities",
"Quantity",
"Electronic band structures",
"Condensed matter physics",
"Statistical mechanics",
"Physical properties"
] |
20,095,931 | https://en.wikipedia.org/wiki/Eight-vertex%20model | In statistical mechanics, the eight-vertex model is a generalization of the ice-type (six-vertex) models. It was discussed by Sutherland and Fan & Wu, and solved by Rodney Baxter in the zero-field case.
Description
As with the ice-type models, the eight-vertex model is a square lattice model, where each state is a configuration of arrows at a vertex. The allowed vertices have an even number of arrows pointing towards the vertex; these include the six inherited from the ice-type model (1-6), sinks (7), and sources (8).
We consider a lattice, with vertices and edges. Imposing periodic boundary conditions requires that the states 7 and 8 occur equally often, as do states 5 and 6, and thus can be taken to have the same energy. For the zero-field case the same is true for the two other pairs of states. Each vertex has an associated energy and Boltzmann weight
giving the partition function over the lattice as
where the outer summation is over all allowed configurations of vertices in the lattice. In this general form the partition function remains unsolved.
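Although the general partition function has no closed form, it can be evaluated by brute force on a very small lattice. The sketch below (an illustration, not from the article) enumerates all arrow configurations on a periodic L × L lattice in the zero-field case; the assignment of the four "mixed" two-in configurations to the a and b classes follows one common convention, and swapping a and b merely relabels the weights.

```python
from itertools import product

def eight_vertex_Z(L, a, b, c, d):
    """Zero-field eight-vertex partition function on an L x L torus by
    enumerating every arrow configuration.  Vertices with an odd number of
    inward arrows are forbidden (weight zero)."""
    n = L * L
    Z = 0.0
    for bits in product((0, 1), repeat=2 * n):
        h = bits[:n]   # h[i*L+j] = 1: horizontal edge east of (i, j) points right
        v = bits[n:]   # v[i*L+j] = 1: vertical edge north of (i, j) points up
        weight = 1.0
        for i in range(L):
            for j in range(L):
                from_left = h[i * L + (j - 1) % L] == 1
                from_right = h[i * L + j] == 0
                from_below = v[((i - 1) % L) * L + j] == 1
                from_above = v[i * L + j] == 0
                k = from_left + from_right + from_below + from_above
                if k % 2:                      # odd in-degree: not allowed
                    weight = 0.0
                    break
                if k in (0, 4):
                    weight *= d                # source or sink (types 7, 8)
                elif from_left == from_right:
                    weight *= c                # both in-arrows collinear (5, 6)
                elif from_left == from_below:
                    weight *= a                # types 1, 2 in one convention
                else:
                    weight *= b                # types 3, 4
            if weight == 0.0:
                break
        Z += weight
    return Z

# On a 2 x 2 torus, 32 of the 256 arrow configurations have an even number of
# inward arrows at every vertex, so with a = b = c = d = 1 the result is 32.
print(eight_vertex_Z(2, 1.0, 1.0, 1.0, 1.0))   # 32.0
print(eight_vertex_Z(2, 2.0, 1.0, 1.0, 0.5))   # generic zero-field weights
```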
Solution in the zero-field case
The zero-field case of the model corresponds physically to the absence of external electric fields. Hence, the model remains unchanged under the reversal of all arrows. The states 1 and 2, and 3 and 4, consequently must occur as pairs. The vertices may be assigned arbitrary weights
The solution is based on the observation that rows in transfer matrices commute, for a certain parametrization of these four Boltzmann weights. This came about as a modification of an alternate solution for the six-vertex model which makes use of elliptic theta functions.
Commuting transfer matrices
The proof relies on the fact that when and , for quantities
the transfer matrices and (associated with the weights , , , and , , , ) commute. Using the star-triangle relation, Baxter reformulated this condition as equivalent to a parametrization of the weights given as
for fixed modulus and and variable . Here snh is the hyperbolic analogue of sn, given by
and and are theta functions of modulus . The associated transfer matrix thus is a function of alone; for all ,
The matrix function
The other crucial part of the solution is the existence of a nonsingular matrix-valued function , such that for all complex the matrices commute with each other and the transfer matrices, and satisfy
where
The existence and commutation relations of such a function are demonstrated by considering pair propagations through a vertex, and periodicity relations of the theta functions, in a similar way to the six-vertex model.
Explicit solution
The commutation of matrices in () allow them to be diagonalised, and thus eigenvalues can be found. The partition function is calculated from the maximal eigenvalue, resulting in a free energy per site of
for
where and are the complete elliptic integrals of moduli and .
The eight vertex model was also solved in quasicrystals.
Equivalence with an Ising model
There is a natural correspondence between the eight-vertex model, and the Ising model with 2-spin and 4-spin nearest neighbor interactions. The states of this model are spins on faces of a square lattice. The analogue of 'edges' in the eight-vertex model are products of spins on adjacent faces:
The most general form of the energy for this model is
where , , , describe the horizontal, vertical and two diagonal 2-spin interactions, and describes the 4-spin interaction between four faces at a vertex; the sum is over the whole lattice.
We denote horizontal and vertical spins (arrows on edges) in the eight-vertex model , respectively, and define up and right as positive directions. The restriction on vertex states is that the product of four edges at a vertex is 1; this automatically holds for Ising "edges." Each configuration then corresponds to a unique , configuration, whereas each , configuration gives two choices of configurations.
Equating general forms of Boltzmann weights for each vertex , the following relations between the and , , , , define the correspondence between the lattice models:
It follows that in the zero-field case of the eight-vertex model, the horizontal and vertical interactions in the corresponding Ising model vanish.
These relations give the equivalence between the partition functions of the eight-vertex model and the (2,4)-spin Ising model. Consequently, a solution in either model would lead immediately to a solution in the other.
See also
Six-vertex model
Transfer-matrix method
Ising model
Notes
References
Exactly solvable models
Statistical mechanics
Lattice models | Eight-vertex model | [
"Physics",
"Materials_science"
] | 938 | [
"Statistical mechanics",
"Condensed matter physics",
"Lattice models",
"Computational physics"
] |
20,096,022 | https://en.wikipedia.org/wiki/Ice-type%20model | In statistical mechanics, the ice-type models or six-vertex models are a family of vertex models for crystal lattices with hydrogen bonds. The first such model was introduced by Linus Pauling in 1935 to account for the residual entropy of water ice. Variants have been proposed as models of certain ferroelectric and antiferroelectric crystals.
In 1967, Elliott H. Lieb found the exact solution to a two-dimensional ice model known as "square ice". The exact solution in three dimensions is only known for a special "frozen" state.
Description
An ice-type model is a lattice model defined on a lattice of coordination number 4. That is, each vertex of the lattice is connected by an edge to four "nearest neighbours". A state of the model consists of an arrow on each edge of the lattice, such that the number of arrows pointing inwards at each vertex is 2. This restriction on the arrow configurations is known as the ice rule. In graph theoretic terms, the states are Eulerian orientations of an underlying 4-regular undirected graph. The partition function also counts the number of nowhere-zero 3-flows.
For two-dimensional models, the lattice is taken to be the square lattice. For more realistic models, one can use a three-dimensional lattice appropriate to the material being considered; for example, the hexagonal ice lattice is used to analyse ice.
At any vertex, there are six configurations of the arrows which satisfy the ice rule (justifying the name "six-vertex model"). The valid configurations for the (two-dimensional) square lattice are the following:
The energy of a state is understood to be a function of the configurations at each vertex. For square lattices, one assumes that the total energy is given by
E = n1 ε1 + n2 ε2 + ... + n6 ε6, for some constants ε1, ..., ε6, where ni here denotes the number of vertices with the i-th configuration from the above figure. The value εi is the energy associated with vertex configuration number i.
One aims to calculate the partition function of an ice-type model, which is given by the formula
Z = Σ exp(−E / (kB T)), where the sum is taken over all states of the model, E is the energy of the state, kB is the Boltzmann constant, and T is the system's temperature.
Typically, one is interested in the thermodynamic limit in which the number N of vertices approaches infinity. In that case, one instead evaluates the free energy per vertex f in the limit as N → ∞, where f is given by f = −kB T lim N^(−1) log Z.
Equivalently, one evaluates the partition function per vertex W in the thermodynamic limit, where W = lim Z^(1/N).
The values f and W are related by f = −kB T log W.
Physical justification
Several real crystals with hydrogen bonds satisfy the ice model, including ice and potassium dihydrogen phosphate (KDP). Indeed, such crystals motivated the study of ice-type models.
In ice, each oxygen atom is connected by a bond to four hydrogens, and each bond contains one hydrogen atom between the terminal oxygens. The hydrogen occupies one of two symmetrically located positions, neither of which is in the middle of the bond. Pauling argued that the allowed configuration of hydrogen atoms is such that there are always exactly two hydrogens close to each oxygen, thus making the local environment imitate that of a water molecule, H2O. Thus, if we take the oxygen atoms as the lattice vertices and the hydrogen bonds as the lattice edges, and if we draw an arrow on a bond which points to the side of the bond on which the hydrogen atom sits, then ice satisfies the ice model. Similar reasoning applies to show that KDP also satisfies the ice model.
In recent years, ice-type models have been explored as descriptions of pyrochlore spin ice and artificial spin ice systems, in which geometrical frustration in the interactions between bistable magnetic moments ("spins") leads to "ice-rule" spin configurations being favoured. Recently such analogies have been extended to explore the circumstances under which spin-ice systems may be accurately described by the Rys F-model.
Specific choices of vertex energies
On the square lattice, the energies associated with vertex configurations 1-6 determine the relative probabilities of states, and thus can influence the macroscopic behaviour of the system. The following are common choices for these vertex energies.
The ice model
When modeling ice, one takes ε1 = ε2 = ... = ε6 = 0, as all permissible vertex configurations are understood to be equally likely. In this case, the partition function equals the total number of valid states. This model is known as the ice model (as opposed to an ice-type model).
The KDP model of a ferroelectric
Slater argued that KDP could be represented by an ice-type model with energies
ε1 = ε2 = 0 and ε3 = ε4 = ε5 = ε6 > 0.
For this model (called the KDP model), the most likely state (the least-energy state) has all horizontal arrows pointing in the same direction, and likewise for all vertical arrows. Such a state is a ferroelectric state, in which all hydrogen atoms have a preference for one fixed side of their bonds.
Rys F model of an antiferroelectric
The Rys F model is obtained by setting
ε1 = ε2 = ε3 = ε4 > 0 and ε5 = ε6 = 0.
The least-energy state for this model is dominated by vertex configurations 5 and 6. For such a state, adjacent horizontal bonds necessarily have arrows in opposite directions and similarly for vertical bonds, so this state is an antiferroelectric state.
The zero field assumption
If there is no ambient electric field, then the total energy of a state should remain unchanged under a charge reversal, i.e. under flipping all arrows. Thus one may assume without loss of generality that
ε1 = ε2,  ε3 = ε4,  ε5 = ε6.
This assumption is known as the zero field assumption, and holds for the ice model, the KDP model, and the Rys F model.
History
The ice rule was introduced by Linus Pauling in 1935 to account for the residual entropy of ice that had been measured by William F. Giauque and J. W. Stout. The residual entropy, S0, of ice is given by the formula
S0 = kB ln W,
where kB is the Boltzmann constant, N is the number of oxygen atoms in the piece of ice, which is always taken to be large (the thermodynamic limit), and W is the number of configurations of the hydrogen atoms according to Pauling's ice rule. Without the ice rule we would have W = 2^(2N), since the number of hydrogen atoms is 2N and each hydrogen has two possible locations. Pauling estimated that the ice rule reduces this to W = (3/2)^N, a number that agrees extremely well with the Giauque–Stout measurement of S0 (0.82 ± 0.05 cal/(K·mol), against Pauling's calculated value of about 0.81 cal/(K·mol)). It can be said that Pauling's calculation of S0 for ice is one of the simplest, yet most accurate applications of statistical mechanics to real substances ever made. The question that remained was whether, given the model, Pauling's calculation of W, which was very approximate, would be sustained by a rigorous calculation. This became a significant problem in combinatorics.
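Pauling's figure of 3/2 follows from a short independence estimate, sketched here in the standard way: without the ice rule, the 2N bonds with two hydrogen positions each give 2^(2N) = 4^N arrangements, while at any single vertex only 6 of the 2^4 = 16 possible arrow configurations obey the ice rule; treating the vertices as independent therefore multiplies the count by a factor of 6/16 per vertex, giving
W ≈ (4 · 6/16)^N = (3/2)^N.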
Both the three-dimensional and two-dimensional models were computed numerically by John F. Nagle in 1966, who found that W = 1.50685 ± 0.00015 in three dimensions and W = 1.540 ± 0.001 in two dimensions (W here being the per-vertex count defined above). Both are amazingly close to Pauling's rough calculation, 1.5.
In 1967, Lieb found the exact solution of three two-dimensional ice-type models: the ice model, the Rys F model, and the KDP model. The solution for the ice model gave the exact value of W in two dimensions as
W = (4/3)^(3/2) = 1.5396007...,
which is known as Lieb's square ice constant.
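These counts can be checked by brute force on very small lattices. The following minimal Python sketch (the lattice sizes and the choice of periodic boundary conditions are illustrative assumptions; the enumeration is exponential and only feasible for tiny lattices, whose per-vertex estimates approach the limiting value only slowly) enumerates all arrow configurations of an n × n periodic square lattice and keeps those obeying the ice rule.

```python
import itertools

def count_ice_states(n):
    # n x n square lattice with periodic boundary conditions:
    # one "right" edge and one "up" edge leave each vertex (2*n*n edges in total).
    edges = []
    for x in range(n):
        for y in range(n):
            edges.append(((x, y), ((x + 1) % n, y)))  # horizontal edge
            edges.append(((x, y), (x, (y + 1) % n)))  # vertical edge

    count = 0
    for bits in itertools.product((0, 1), repeat=len(edges)):
        indeg = {(x, y): 0 for x in range(n) for y in range(n)}
        for (a, b), bit in zip(edges, bits):
            indeg[b if bit else a] += 1          # bit = 1: arrow a -> b, bit = 0: arrow b -> a
        if all(v == 2 for v in indeg.values()):  # ice rule: exactly two arrows point into each vertex
            count += 1
    return count

for n in (2, 3):
    states = count_ice_states(n)
    print(n, states, states ** (1.0 / (n * n)))  # rough per-vertex estimate of W
```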
Later in 1967, Bill Sutherland generalised Lieb's solution of the three specific ice-type models to a general exact solution for square-lattice ice-type models satisfying the zero field assumption.
Still later in 1967, C. P. Yang generalised Sutherland's solution to an exact solution for square-lattice ice-type models in a horizontal electric field.
In 1969, John Nagle derived the exact solution for a three-dimensional version of the KDP model, for a specific range of temperatures. For such temperatures, the model is "frozen" in the sense that (in the thermodynamic limit) the energy per vertex and entropy per vertex are both zero. This is the only known exact solution for a three-dimensional ice-type model.
Relation to eight-vertex model
The eight-vertex model, which has also been exactly solved, is a generalisation of the (square-lattice) six-vertex model: to recover the six-vertex model from the eight-vertex model, set the energies for vertex configurations 7 and 8 to infinity. Six-vertex models have been solved in some cases for which the eight-vertex model has not; for example, Nagle's solution for the three-dimensional KDP model and Yang's solution of the six-vertex model in a horizontal field.
Boundary conditions
The ice model provides an important 'counterexample' in statistical mechanics:
the bulk free energy in the thermodynamic limit depends on boundary conditions. The model was analytically solved for periodic boundary conditions, anti-periodic, ferromagnetic and domain wall boundary conditions. The six-vertex model with domain wall boundary conditions on a square lattice has specific significance in combinatorics: it helps to enumerate alternating sign matrices. In this case the partition function can be represented as a determinant of a matrix (whose dimension is equal to the size of the lattice), but in other cases the enumeration does not come out in such a simple closed form.
Clearly, the largest W is given by free boundary conditions (no constraint at all on the configurations on the boundary), but the same W occurs, in the thermodynamic limit, for periodic boundary conditions, as used originally to derive the value of W above.
3-colorings of a lattice
The number of states of an ice-type model on the internal edges of a finite simply connected union of squares of a lattice is equal to one third of the number of ways to 3-color the squares, with no two adjacent squares having the same color. This correspondence between states and colorings is due to Andrew Lenard and is given as follows. If a square has color i = 0, 1, or 2, then the arrow on the edge to an adjacent square goes left or right (according to an observer in the square) depending on whether the color in the adjacent square is i+1 or i−1 mod 3. There are 3 possible ways to color a fixed initial square, and once this initial color is chosen this gives a 1:1 correspondence between colorings and arrangements of arrows satisfying the ice-type condition.
See also
Eight-vertex model
Notes
Further reading
Exactly solvable models
Statistical mechanics
Lattice models
Ice | Ice-type model | [
"Physics",
"Materials_science"
] | 2,142 | [
"Statistical mechanics",
"Condensed matter physics",
"Lattice models",
"Computational physics"
] |
20,101,779 | https://en.wikipedia.org/wiki/Time-stretch%20analog-to-digital%20converter | The time-stretch analog-to-digital converter (TS-ADC), also known as the time-stretch enhanced recorder (TiSER), is an analog-to-digital converter (ADC) system that has the capability of digitizing very high bandwidth signals that cannot be captured by conventional electronic ADCs. Alternatively, it is also known as the photonic time-stretch (PTS) digitizer, since it uses an optical frontend. It relies on the process of time-stretch, which effectively slows down the analog signal in time (or compresses its bandwidth) before it can be digitized by a standard electronic ADC.
Background
There is a huge demand for very high-speed analog-to-digital converters (ADCs), as they are needed for test and measurement equipment in laboratories and in high speed data communications systems. Most of the ADCs are based purely on electronic circuits, which have limited speeds and add a lot of impairments, limiting the bandwidth of the signals that can be digitized and the achievable signal-to-noise ratio. In the TS-ADC, this limitation is overcome by time-stretching the analog signal, which effectively slows down the signal in time prior to digitization. By doing so, the bandwidth (and carrier frequency) of the signal is compressed. Electronic ADCs that would have been too slow to digitize the original signal can now be used to capture and process this slowed down signal.
Operation principle
The time-stretch processor, which is generally an optical frontend, stretches the signal in time. It also divides the signal into multiple segments using a filter, for example, a wavelength-division multiplexing (WDM) filter, to ensure that the stretched replica of the original analog signal segments do not overlap each other in time after stretching. The time-stretched and slowed down signal segments are then converted into digital samples by slow electronic ADCs. Finally, these samples are collected by a digital signal processor (DSP) and rearranged in a manner such that output data is the digital representation of the original analog signal. Any distortion added to the signal by the time-stretch preprocessor is also removed by the DSP.
An optical front-end is commonly used to accomplish this process of time-stretching. An ultrashort optical pulse (typically 100 to 200 femtoseconds long), also called a supercontinuum pulse, which has a broad optical bandwidth, is time-stretched by dispersing it in a highly dispersive medium (such as a dispersion compensating fiber). This process results in (an almost) linear time-to-wavelength mapping in the stretched pulse, because different wavelengths travel at different speeds in the dispersive medium. The obtained pulse is called a chirped pulse as its frequency is changing with time, and it is typically a few nanoseconds long. The analog signal is modulated onto this chirped pulse using an electro-optic intensity modulator. Subsequently, the modulated pulse is stretched further in the second dispersive medium which has much higher dispersion value. Finally, this obtained optical pulse is converted to the electrical domain by a photodetector, giving the stretched replica of the original analog signal.
For continuous operation, a train of supercontinuum pulses is used. The chirped pulses arriving at the electro-optic modulator should be wide enough (in time) such that the trailing edge of one pulse overlaps the leading edge of the next pulse. For segmentation, optical filters separate the signal into multiple wavelength channels at the output of the second dispersive medium. For each channel, a separate photodetector and backend electronic ADC is used. Finally the output of these ADCs are passed on to the DSP which generates the desired digital output.
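The bandwidth compression achieved by this arrangement can be sketched with simple stretch-factor arithmetic; all of the fiber and converter parameters below are illustrative assumptions rather than values taken from any particular system.

```python
# Photonic time-stretch back-of-the-envelope sketch (all values assumed for illustration).
D = 100.0      # dispersion parameter of both fibers, ps/(nm*km)
L1 = 5.0       # length of the first dispersive fiber (before the modulator), km
L2 = 45.0      # length of the second dispersive fiber (after the modulator), km

# Stretch (magnification) factor: ratio of total dispersion to the dispersion
# accumulated before the modulator.
M = (D * L1 + D * L2) / (D * L1)

f_in = 50e9          # analog input bandwidth, Hz (illustrative)
adc_rate = 10e9      # sample rate of each backend electronic ADC, Sa/s (illustrative)

print(f"stretch factor M = {M:.1f}")
print(f"bandwidth seen by the electronic ADC: {f_in / M / 1e9:.1f} GHz")
print(f"effective sample rate after reassembly: {adc_rate * M / 1e9:.0f} GSa/s")
```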
Impulse response of the photonic time-stretch (PTS) system
The PTS processor is based on specialized analog optical (or microwave photonic) fiber links such as those used in cable TV distribution. While the dispersion of fiber is a nuisance in conventional analog optical links, time-stretch technique exploits it to slow down the electrical waveform in the optical domain. In the cable TV link, the light source is a continuous-wave (CW) laser. In PTS, the source is a chirped pulse laser.
In a conventional analog optical link, dispersion causes the upper and lower modulation sidebands, f_optical ± f_electrical, to slip in relative phase. At certain frequencies, their beats with the optical carrier interfere destructively, creating nulls in the frequency response of the system. For practical systems the first null is at tens of GHz, which is sufficient for handling most electrical signals of interest. Although it may seem that the dispersion penalty places a fundamental limit on the impulse response (or the bandwidth) of the time-stretch system, it can be eliminated. The dispersion penalty vanishes with single-sideband modulation. Alternatively, one can use the modulator's secondary (inverse) output port to eliminate the dispersion penalty, in much the same way as two antennas can eliminate spatial nulls in wireless communication (hence the two antennas on top of a WiFi access point). This configuration is termed phase diversity. Combining the complementary outputs using a maximal ratio combining (MRC) algorithm results in a transfer function with a flat response in the frequency domain. Thus, the impulse response (bandwidth) of a time-stretch system is limited only by the bandwidth of the electro-optic modulator, which is about 120 GHz, a value that is adequate for capturing most electrical waveforms of interest.
Extremely large stretch factors can be obtained using long lengths of fiber, but at the cost of larger loss—a problem that has been overcome by employing Raman amplification within the dispersive fiber itself, leading to the world's fastest real-time digitizer. Also, using PTS, capture of very high-frequency signals with a world record resolution in 10-GHz bandwidth range has been achieved.
Comparison with time lens imaging
Another technique, temporal imaging using a time lens, can also be used to slow down (mostly optical) signals in time. The time-lens concept relies on the mathematical equivalence between spatial diffraction and temporal dispersion, the so-called space-time duality. A lens held at a distance from an object produces a magnified image of the object. The lens imparts a quadratic phase shift to the spatial frequency components of the optical waves; in conjunction with the free space propagation (object to lens, lens to eye), this generates a magnified image. Owing to the mathematical equivalence between paraxial diffraction and temporal dispersion, an optical waveform can be temporally imaged by a three-step process of dispersing it in time, subjecting it to a phase shift that is quadratic in time (the time lens itself), and dispersing it again. Theoretically, a focused aberration-free image is obtained under a specific condition when the two dispersive elements and the phase shift satisfy the temporal equivalent of the classic lens equation. Alternatively, the time lens can be used without the second dispersive element to transfer the waveform's temporal profile to the spectral domain, analogous to the property that an ordinary lens produces the spatial Fourier transform of an object at its focal points.
In contrast to the time-lens approach, PTS is not based on the space-time duality – there is no lens equation that needs to be satisfied to obtain an error-free slowed-down version of the input waveform. Time-stretch technique also offers continuous-time acquisition performance, a feature needed for mainstream applications of oscilloscopes.
Another important difference between the two techniques is that the time lens requires the input signal to be subjected to a large amount of dispersion before further processing. For electrical waveforms, electronic devices with the required characteristics, namely (1) a high dispersion-to-loss ratio, (2) uniform dispersion, and (3) broad bandwidth, do not exist. This renders the time-lens approach unsuitable for slowing down wideband electrical waveforms. In contrast, PTS does not have such a requirement; it was developed specifically for slowing down electrical waveforms and enabling high-speed digitizers.
Relation to phase stretch transform
The phase stretch transform, or PST, is a computational approach to signal and image processing. One of its utilities is feature detection and classification. The phase stretch transform is a spin-off from research on the time-stretch dispersive Fourier transform. It transforms the image by emulating propagation through a diffractive medium with an engineered 3D dispersive property (refractive index).
Application to imaging and spectroscopy
In addition to wideband A/D conversion, photonic time-stretch (PTS) is also an enabling technology for high-throughput real-time instrumentation such as imaging and spectroscopy. The first artificial-intelligence-assisted high-speed phase microscopy was demonstrated to improve the accuracy of distinguishing cancer cells from blood cells by simultaneous measurement of phase and intensity spatial profiles. The world's fastest optical imaging method, called serial time-encoded amplified microscopy (STEAM), makes use of the PTS technology to acquire images using a single-pixel photodetector and a commercial ADC.
Wavelength-time spectroscopy, which also relies on photonic time-stretch technique, permits real-time single-shot measurements of rapidly evolving or fluctuating spectra.
Time stretch quantitative phase imaging (TS-QPI) is an imaging technique based on time-stretch technology for simultaneous measurement of phase and intensity spatial profiles. In time stretched imaging, the object's spatial information is encoded in the spectrum of laser pulses within a pulse duration of sub-nanoseconds. Each pulse representing one frame of the camera is then stretched in time so that it can be digitized in real-time by an electronic analog-to-digital converter (ADC). The ultra-fast pulse illumination freezes the motion of high-speed cells or particles in flow to achieve blur-free imaging.
References
Further reading
G. C. Valley, "Photonic analog-to-digital converters," Opt. Express, vol. 15, no. 5, pp. 1955–1982, March 2007.
Photonic Bandwidth Compression for Instantaneous Wideband A/D Conversion (PHOBIAC) project.
Short time Fourier transform for time-frequency analysis of ultrawideband signals
Photonics
Analog circuits
Optical devices
Fiber-optic communications
Electronic engineering
Measuring instruments | Time-stretch analog-to-digital converter | [
"Materials_science",
"Technology",
"Engineering"
] | 2,174 | [
"Glass engineering and science",
"Computer engineering",
"Optical devices",
"Analog circuits",
"Measuring instruments",
"Electronic engineering",
"Electrical engineering"
] |
22,672,945 | https://en.wikipedia.org/wiki/Cytel | Cytel is a multinational statistical software developer and contract research organization, headquartered in Cambridge, Massachusetts, USA. Cytel provides clinical trial design and implementation services, and statistical software products primarily for the biotech and pharmaceutical development markets.
Cytel specializes in adaptive trials – a type of randomized clinical trial that allows modifications of ongoing trials while aiming to preserve the statistical validity and integrity of the study. Based on either frequentist or Bayesian statistics, adaptive trial designs are now widely accepted by government regulatory agencies including the United States Food and Drug Administration (FDA), European Medicines Agency (EMA), and Medicines and Healthcare products Regulatory Agency (MHRA) in early and later stage clinical studies.
As of January 2024, Cytel asserts that its software products and services are used by 30 large biopharmaceutical companies. With a presence spanning North America, Europe, and Asia, the company has a workforce exceeding 1,900 employees.
Background
Company founders Cyrus Mehta, Ph.D. and Nitin Patel, Ph.D. are among the pioneering statisticians credited for developing the underlying statistical methods behind so-called “flexible” designs: group sequential and adaptive trials.
As of 2024, Cytel statisticians have collectively published over 140 papers in peer-reviewed statistical and medical journals.
Cytel consulting
Cytel's consulting arm focuses on optimizing approaches for biopharma clinical research development objectives. Functional elements their Strategic Consulting team claims to provide include:
Adaptive Trial Design and Implementation
Collaborative Research Projects
Program and Portfolio Optimization
Regulatory Interactions
Multiplicity
Missing Data
DMC Membership
Independent Statistical Committee
Advanced Real-world Analytics
Health Economics and Outcomes Research
Clinical research services
Cytel's clinical research services arm focuses on improving the probability of success for biopharma clinical research development efforts. Functional elements their clinical research services team claims to provide include:
Support for DMCs
Randomization Services
Clinical Data Management
Biostatistics
Statistical Programming
Medical Writing
CDISC Migration
Regulatory Submissions
Quantitative Pharmacology
Pharmacometrics
Data Science
Complex and Innovative Designs
Software products
East Horizon
In 2021, Cytel released Solara, the industry's first-to-market clinical trial strategy platform for simulation-guided clinical study design and selection. In 2024, Cytel expanded the capabilities of Solara by incorporating statistical tests from its Windows-based software East and added the ability to extend its native tests by pulling in custom R functions, and rebranded the product East Horizon.
East
East clinical trial statistical software supports the design, simulation and monitoring of adaptive, group sequential and fixed sample size trials. As of 2024, East 6.5 is in use at over 140 pharmaceutical and biotechnology companies, research centers and regulatory agencies including the FDA's Center for Drug Evaluation and Research, Center for Biologics Evaluation and Research and Center for Devices and Radiological Health divisions.
First introduced by the Cytel Software Corporation in 1995 as “East DOS”, the name is derived from the benefit of 'early stopping' a trial due to futility: a failure of the tested treatment to demonstrate significant improvement over an existing treatment and/or placebo.
Enforesys
Introduced by Cytel in 2015, Enforesys is a feasibility study decision-making tool for predicting recruitment milestones. Enforesys uses historical study site-level data and simulation models to calculate a numerical probability of success for study enrollment strategies.
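As a rough illustration of how a simulation-based enrollment forecast of this general kind can work (this sketch is not Cytel's algorithm, and every parameter below, including the site count, enrollment rate, horizon and target, is an assumption made for illustration), a simple Monte Carlo model draws total enrollment from assumed site-level rates and reports the fraction of simulated trials that reach the target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative enrollment-strategy inputs (all assumed):
n_sites, months = 20, 12
rate_per_site = 1.1          # assumed mean patients enrolled per site per month
target = 250                 # enrollment target for the study
trials = 10_000              # number of Monte Carlo replications

# Draw total enrollment for each simulated trial and estimate P(success).
totals = rng.poisson(lam=rate_per_site * n_sites * months, size=trials)
prob_success = np.mean(totals >= target)
print(f"estimated probability of reaching {target} patients in {months} months: {prob_success:.2f}")
```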
Compass
Compass is used by biostatisticians and clinicians to plan and design earlier stage adaptive clinical trials (traditionally known as phase 1 human tolerance and phase 2 dose-selection studies).
Compass was the first commercially-offered adaptive trial composition software with both frequentist and Bayesian methods. Other key capabilities include R code integration, trial simulation compute engines, plus various tables, charts and graphs to visualize and communicate trial design attributes.
StatXact
Statistical software based on the exact branch of statistics used for small-sample categorical and nonparametric data problem-solving. Used by statisticians and researchers in all fields of study, StatXact now has 150 different non-parametric statistical tests and procedures.
Initially offered in 1989 as StatXact DOS, StatXact 12 was released in 2021. The StatXact PROCs variant integrates with the popular SAS statistical software.
LogXact
A logistic regression predictive modeling software package suited particularly to cases involving small samples and/or missing data. Logistic regression is used extensively in the medical and social sciences as well as marketing applications to predict subject behavior.
First made available in 1996 under the name LogXact Turbo, LogXact was introduced in 2007 and is currently in its eleventh release. The LogXact PROCs variant integrates with the popular SAS statistical software.
ACES
Cytel's web-based Access Controlled Execution System. ACES simplifies compliance with the related FDA guidance and EMA guidelines by providing a secure means of communicating a clinical trial's interim analysis results and recommendations between the Data Monitoring Committee (DMC/DSMB), Independent Statistical Center and clinical team members. The validated system automatically creates an audit trail, allowing regulators to readily determine "who saw what and when".
OKGO
OKGO is the first commercially available software to support the implementation of a quantitative go/no-go decision-making framework in clinical trials.
Locations
United States
Cambridge, Massachusetts (HQ)
Seattle, WA (Axio Research)
Canada
Toronto, ON
Vancouver, BC
Europe
Geneva, CH
Paris, FR
Barcelona, ES
Basel, CH
London, UK
Rotterdam, NL
Asia
Shanghai, CN
Singapore, SG
India
Pune
Hyderabad
Ahmedabad
See also
Biostatistics
Clinical research
Drug development
Journal of the American Statistical Association
Randomization
Sample size determination
References
Biotechnology companies of the United States
Companies based in Massachusetts
Contract research organizations
Companies formerly listed on the Nasdaq
Drug discovery companies
Statistical service organizations | Cytel | [
"Chemistry"
] | 1,176 | [
"Drug discovery companies",
"Drug discovery"
] |
22,673,422 | https://en.wikipedia.org/wiki/Dynamics%20of%20the%20celestial%20spheres | Ancient, medieval and Renaissance astronomers and philosophers developed many different theories about the dynamics of the celestial spheres. They explained the motions of the various nested spheres in terms of the materials of which they were made, external movers such as celestial intelligences, and internal movers such as motive souls or impressed forces. Most of these models were qualitative, although a few of them incorporated quantitative analyses that related speed, motive force and resistance.
The celestial material and its natural motions
In considering the physics of the celestial spheres, scholars followed two different views about the material composition of the celestial spheres. For Plato, the celestial regions were made "mostly out of fire" on account of fire's mobility. Later Platonists, such as Plotinus, maintained that although fire moves naturally upward in a straight line toward its natural place at the periphery of the universe, when it arrived there, it would either rest or move naturally in a circle. This account was compatible with Aristotle's meteorology of a fiery region in the upper air, dragged along underneath the circular motion of the lunar sphere. For Aristotle, however, the spheres themselves were made entirely of a special fifth element, Aether (Αἰθήρ), the bright, untainted upper atmosphere in which the gods dwell, as distinct from the dense lower atmosphere, Aer (Ἀήρ). While the four terrestrial elements (earth, water, air and fire) gave rise to the generation and corruption of natural substances by their mutual transformations, aether was unchanging, moving always with a uniform circular motion that was uniquely suited to the celestial spheres, which were eternal. Earth and water had a natural heaviness (gravitas), which they expressed by moving downward toward the center of the universe. Fire and air had a natural lightness (levitas), such that they moved upward, away from the center. Aether, being neither heavy nor light, moved naturally around the center.
The causes of celestial motion
As early as Plato, philosophers considered the heavens to be moved by immaterial agents. Plato believed the cause to be a world-soul, created according to mathematical principles, which governed the daily motion of the heavens (the motion of the Same) and the opposed motions of the planets along the zodiac (the motion of the Different). Aristotle proposed the existence of divine unmoved movers which act as final causes; the celestial spheres mimic the movers, as best they could, by moving with uniform circular motion. In his Metaphysics, Aristotle maintained that an individual unmoved mover would be required to insure each individual motion in the heavens. While stipulating that the number of spheres, and thus gods, is subject to revision by astronomers, he estimated the total as 47 or 55, depending on whether one followed the model of Eudoxus or Callippus. In On the Heavens, Aristotle presented an alternate view of eternal circular motion as moving itself, in the manner of Plato's world-soul, which lent support to three principles of celestial motion: an internal soul, an external unmoved mover, and the celestial material (aether).
Later Greek interpreters
In his Planetary Hypotheses, Ptolemy rejected the Aristotelian concept of an external prime mover, maintaining instead that the planets have souls and move themselves with a voluntary motion. Each planet sends out motive emissions that direct its own motion and the motions of the epicycle and deferent that make up its system, just as a bird sends out emissions to its nerves that direct the motions of its feet and wings.
John Philoponus (490–570) considered that the heavens were made of fire, not of aether, yet maintained that circular motion is one of the two natural motions of fire. In a theological work, On the Creation of the World (De opificio mundi), he denied that the heavens are moved by either a soul or by angels, proposing that "it is not impossible that God, who created all these things, imparted a motive force to the Moon, the Sun, and other stars – just as the inclination to heavy and light bodies, and the movements due to the internal soul to all living beings – in order that the angels do not move them by force." This is interpreted as an application of the concept of impetus to the motion of the celestial spheres. In an earlier commentary on Aristotle's Physics, Philoponus compared the innate power or nature that accounts for the rotation of the heavens to the innate power or nature that accounts for the fall of rocks.
Islamic interpreters
The Islamic philosophers al-Farabi and Avicenna (d. 1037), following Plotinus, maintained that Aristotle's movers, called intelligences, came into being through a series of emanations beginning with God. A first intelligence emanated from God, and from the first intelligence emanated a sphere, its soul, and a second intelligence. The process continued down through the celestial spheres until the sphere of the Moon, its soul, and a final intelligence. They considered that each sphere was moved continually by its soul, seeking to emulate the perfection of its intelligence. Avicenna maintained that besides an intelligence and its soul, each sphere was also moved by a natural inclination (mayl).
An interpreter of Aristotle from Muslim Spain, al-Bitruji, proposed a radical transformation of astronomy that did away with epicycles and eccentrics, in which the celestial spheres were driven by a single unmoved mover at the periphery of the universe. The spheres thus moved with a "natural nonviolent motion". The mover's power diminished with increasing distance from the periphery so that the lower spheres lagged behind in their daily motion around the Earth; this power reached even as far as the sphere of water, producing the tides.
More influential for later Christian thinkers were the teachings of Averroes (1126–1198), who agreed with Avicenna that the intelligences and souls combine to move the spheres but rejected his concept of emanation. Considering how the soul acts, he maintained that the soul moves its sphere without effort, for the celestial material has no tendency to a contrary motion.
Later, the mutakallim Adud al-Din al-Iji (1281–1355) rejected the principle of uniform and circular motion, following the Ash'ari doctrine of atomism, which maintained that all physical effects were caused directly by God's will rather than by natural causes. He maintained that the celestial spheres were "imaginary things" and "more tenuous than a spider's web". His views were challenged by al-Jurjani (1339–1413), who argued that even if the celestial spheres "do not have an external reality, yet they are things that are correctly imagined and correspond to what [exists] in actuality."
Medieval Western Europe
In the Early Middle Ages, Plato's picture of the heavens was dominant among European philosophers, which led Christian thinkers to question the role and nature of the world-soul. With the recovery of Aristotle's works in the twelfth and thirteenth centuries, Aristotle's views supplanted the earlier Platonism, and a new set of questions regarding the relationships of the unmoved movers to the spheres and to God emerged.
In the early phases of the Western recovery of Aristotle, Robert Grosseteste (d. 1253), influenced by medieval Platonism and by the astronomy of al-Bitruji, rejected the idea that the heavens are moved by either souls or intelligences. Adam Marsh's (d. 1259) treatise On the Ebb and Flow of the Sea, which was formerly attributed to Grosseteste, maintained al-Bitruji's opinion that the celestial spheres and the seas are moved by a peripheral mover whose motion weakens with distance.
Thomas Aquinas (d. 1274), following Avicenna, interpreted Aristotle to mean that there were two immaterial substances responsible for the motion of each celestial sphere, a soul that was an integral part of its sphere, and an intelligence that was separate from its sphere. The soul shares the motion of its sphere and causes the sphere to move through its love and desire for the unmoved separate intelligence. Avicenna, al-Ghazali, Moses Maimonides, and most Christian scholastic philosophers identified Aristotle's intelligences with the angels of revelation, thereby associating an angel with each of the spheres. Moreover, Aquinas rejected the idea that celestial bodies are moved by an internal nature, similar to the heaviness and lightness that moves terrestrial bodies. Attributing souls to the spheres was theologically controversial, as that could make them animals. After the Condemnations of 1277, most philosophers came to reject the idea that the celestial spheres had souls.
Robert Kilwardby (d. 1279) discussed three alternative explanations of the motions of the celestial spheres, rejecting the views that celestial bodies are animated and are moved by their own spirits or souls, or that the celestial bodies are moved by angelic spirits, which govern and move them. He maintained, instead, that "celestial bodies are moved by their own natural inclinations similar to weight." Just as heavy bodies are naturally moved by their own weight, which is an intrinsic active principle, so the celestial bodies are naturally moved by a similar intrinsic principle. Since the heavens are spherical, the only motion that could be natural to them is rotation. Kilwardby's idea had been earlier held by another Oxford scholar, John Blund (d. 1248).
In two slightly different discussions, John Buridan suggested that when God created the celestial spheres, he began to move them, impressing in them a circular impetus that would be neither corrupted nor diminished, since there was neither an inclination to other movements nor any resistance in the celestial region. He noted that this would allow God to rest on the seventh day, but he left the matter to be resolved by the theologians.
Nicole Oresme (d. 1382) explained the motion of the spheres in traditional terms of the action of intelligences but noted that, contrary to Aristotle, some intelligences are moved; for example, the intelligence that moves the Moon's epicycle shares the motion of the lunar orb in which the epicycle is embedded. He related the spheres' motions to the proportion of motive power to resistance that was impressed in each sphere when God created the heavens. In discussing the relation of the moving power of the intelligence, the resistance of the sphere, and the circular velocity, he said "this ratio ought not to be called a ratio of force to resistance except by analogy, because an intelligence moves by will alone ... and the heavens do not resist it."
According to Grant, except for Oresme, scholastic thinkers did not consider the force-resistance model to be properly applicable to the motion of celestial bodies, although some, such as Bartholomeus Amicus, thought analogically in terms of force and resistance. By the end of the Middle Ages it was the common opinion among philosophers that the celestial bodies were moved by external intelligences, or angels, and not by some kind of an internal mover.
The movers and Copernicanism
Although Nicolaus Copernicus (1473–1543) transformed Ptolemaic astronomy and Aristotelian cosmology by moving the Earth from the center of the universe, he retained both the traditional model of the celestial spheres and the medieval Aristotelian views of the causes of its motion. Copernicus follows Aristotle to maintain that circular motion is natural to the form of a sphere. However, he also appears to have accepted the traditional philosophical belief that the spheres are moved by an external mover.
Johannes Kepler's (1571–1630) cosmology eliminated the celestial spheres, but he held that the planets were moved both by an external motive power, which he located in the Sun, and a motive soul associated with each planet. In an early manuscript discussing the motion of Mars, Kepler considered the Sun to cause the circular motion of the planet. He then attributed the inward and outward motion of the planet, which transforms its overall motion from circular to oval, to a moving soul in the planet since the motion is "not a natural motion, but more of an animate one". In various writings, Kepler often attributed a kind of intelligence to the inborn motive faculties associated with the stars.
In the aftermath of Copernicanism the planets came to be seen as bodies moving freely through a very subtle aethereal medium. Although many scholastics continued to maintain that intelligences were the celestial movers, they now associated the intelligences with the planets themselves, rather than with the celestial spheres.
See also
Christian angelic hierarchy
Notes
References
Primary sources
Aristotle, Metaphysics
Aristotle, On the heavens
Plato, Timaeus
Secondary sources
Ancient Greek astronomy
Early scientific cosmologies
Physical cosmology | Dynamics of the celestial spheres | [
"Physics",
"Astronomy"
] | 2,673 | [
"Astrophysics",
"Theoretical physics",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
22,674,559 | https://en.wikipedia.org/wiki/Material%20flow | Material flow is the description of the transportation of raw materials, pre-fabricates, parts, components, integrated objects and final products as a flow of entities. The term applies mainly to advanced modeling of supply chain management.
As industrial material flow can easily become very complex, several different specialized simulation tools have been developed for complex systems. Typical tools are:
AnyLogic
AutoMod for logistics systems
Plant Simulation for production system
References
Control engineering
Industrial ecology
Industrial engineering
Systems ecology
Resource economics | Material flow | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 96 | [
"Systems ecology",
"Industrial engineering",
"Control engineering",
"Environmental engineering",
"Industrial ecology",
"Environmental social science"
] |
30,138,821 | https://en.wikipedia.org/wiki/Quantum%20cognition | Quantum cognition uses the mathematical formalism of quantum probability theory to model psychological phenomena when classical probability theory fails. The field focuses on modeling phenomena in cognitive science that have resisted traditional techniques or where traditional models seem to have reached a barrier (e.g., human memory), and modeling preferences in decision theory that seem paradoxical from a traditional rational point of view (e.g., preference reversals). Since the use of a quantum-theoretic framework is for modeling purposes, the identification of quantum structures in cognitive phenomena does not presuppose the existence of microscopic quantum processes in the human brain.
Quantum cognition can be applied to model cognitive phenomena such as information processing by the human brain, language, decision making, human memory, concepts and conceptual reasoning, human judgment, and perception.
Challenges for classical probability theory
Classical probability theory is a rational approach to inference which does not easily explain some observations of human inference in psychology.
Some cases where quantum probability theory has advantages include the conjunction fallacy, the disjunction fallacy, the failures of the sure-thing principle, and question-order bias in judgement.
Conjunction fallacy
If participants in a psychology experiment are told about "Linda", described as looking like a feminist but not like a bank teller, and are then asked to rank the probabilities that Linda is a feminist, a bank teller, or a feminist and a bank teller, they respond with values that indicate Pr(feminist) > Pr(feminist and bank teller) > Pr(bank teller).
Rational classical probability theory makes the incorrect prediction: it expects humans to rank the conjunction less probable than the bank teller option. Many variations of this experiment demonstrate that the fallacy represents human cognition in this case and not an artifact of one presentation.
Quantum cognition models this probability-estimation scenario with quantum probability theory, which treats the conjunction as a sequential judgment and can rank the sequential probability, Pr(feminist, then bank teller), above the direct probability, Pr(bank teller). The idea is that a person's understanding of "bank teller" is affected by the context of the question involving "feminist". The two questions are "incompatible": to treat them with classical theory would require separate reasoning steps.
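A minimal two-dimensional illustration of this idea is sketched below; the state vector and the two outcome rays, and in particular the angles, are assumptions chosen purely for illustration rather than values fitted to experimental data.

```python
import numpy as np

def proj(v):
    """Projector onto the ray spanned by vector v."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

deg = np.pi / 180.0
# Illustrative rays in a 2-D real Hilbert space (angles are assumptions):
bank_teller = np.array([1.0, 0.0])                              # "bank teller" outcome ray
feminist    = np.array([np.cos(45 * deg), np.sin(45 * deg)])    # "feminist" outcome ray
psi         = np.array([np.cos(85 * deg), np.sin(85 * deg)])    # Linda's state: nearly orthogonal to "bank teller"

P_B, P_F = proj(bank_teller), proj(feminist)

p_bank_direct = np.linalg.norm(P_B @ psi) ** 2             # judge "bank teller" on its own
p_sequential  = np.linalg.norm(P_B @ (P_F @ psi)) ** 2     # judge "feminist", then "bank teller"

print(f"P(bank teller)                = {p_bank_direct:.3f}")
print(f"P(feminist, then bank teller) = {p_sequential:.3f}")   # larger: the conjunction 'fallacy'
```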
Main subjects of research
Quantum-like models of information processing
The quantum cognition concept is based on the observation that various cognitive phenomena are more adequately described by quantum probability theory than by the classical probability theory (see examples below). Thus, the quantum formalism is considered an operational formalism that describes non-classical processing of probabilistic data.
Here, contextuality is the key word (see the monograph of Khrennikov for detailed representation of this viewpoint). Quantum mechanics is fundamentally contextual. Quantum systems do not have objective properties which can be defined independently of measurement context. As has been pointed out by Niels Bohr, the whole experimental arrangement must be taken into account. Contextuality implies existence of incompatible mental variables, violation of the classical law of total probability, and constructive or destructive interference effects. Thus, the quantum cognition approach can be considered an attempt to formalize contextuality of mental processes, by using the mathematical apparatus of quantum mechanics.
Decision making
Suppose a person is given an opportunity to play two rounds of the following gamble: a coin toss will determine whether the subject wins $200 or loses $100. Suppose the subject has decided to play the first round, and does so. Some subjects are then given the result (win or lose) of the first round, while other subjects are not yet given any information about the results. The experimenter then asks whether the subject wishes to play the second round. Performing this experiment with real subjects gives the following results:
When subjects believe they won the first round, the majority of subjects choose to play again on the second round.
When subjects believe they lost the first round, the majority of subjects choose to play again on the second round.
Given these two separate choices, according to the sure thing principle of rational decision theory, they should also play the second round even if they don't know or think about the outcome of the first round.
But, experimentally, when subjects are not told the results of the first round, the majority of them decline to play a second round.
This finding violates the law of total probability, yet it can be explained as a quantum interference effect in a manner similar to the explanation for the results from double-slit experiment in quantum physics. Similar violations of the sure-thing principle are seen in empirical studies of the Prisoner's Dilemma and have likewise been modeled in terms of quantum interference.
The above deviations from classical rational expectations in agents’ decisions under uncertainty produce well known paradoxes in behavioral economics, that is, the Allais, Ellsberg and Machina paradoxes. These deviations can be explained if one assumes that the overall conceptual landscape influences the subject's choice in a neither predictable nor controllable way. A decision process is thus an intrinsically contextual process, hence it cannot be modeled in a single Kolmogorovian probability space, which justifies the employment of quantum probability models in decision theory. More explicitly, the paradoxical situations above can be represented in a unified Hilbert space formalism where human behavior under uncertainty is explained in terms of genuine quantum aspects, namely, superposition, interference, contextuality and incompatibility.
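The interference account can be made concrete with the quantum-like law of total probability, which adds an interference term to the classical mixture. In the sketch below, the conditional probabilities and the phase are illustrative assumptions rather than reported data.

```python
import numpy as np

# Illustrative inputs (assumed, not measured data):
p_win, p_lose = 0.5, 0.5          # believed chance of having won/lost the first round
p_play_given_win  = 0.69          # willingness to play again after a known win
p_play_given_lose = 0.59          # willingness to play again after a known loss

classical = p_win * p_play_given_win + p_lose * p_play_given_lose

# Quantum-like law of total probability with an interference term:
# P(play) = classical + 2*sqrt(P(win)P(play|win)P(lose)P(play|lose)) * cos(theta)
theta = 2.0   # radians; a free phase parameter normally fitted to behaviour, assumed here
interference = 2 * np.sqrt(p_win * p_play_given_win * p_lose * p_play_given_lose) * np.cos(theta)

print(f"classical (sure-thing) prediction: {classical:.2f}")
print(f"quantum-like prediction:           {classical + interference:.2f}")  # can fall below both conditionals
```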
Considering automated decision making, quantum decision trees have different structure compared to classical decision trees. Data can be analyzed to see if a quantum decision tree model fits the data better.
Human probability judgments
Quantum probability provides a new way to explain human probability judgment errors including the conjunction and disjunction errors. A conjunction error occurs when a person judges the probability of a likely event L and an unlikely event U to be greater than the unlikely event U; a disjunction error occurs when a person judges the probability of a likely event L to be greater than the probability of the likely event L or an unlikely event U. Quantum probability theory is a generalization of Bayesian probability theory because it is based on a set of von Neumann axioms that relax some of the classic Kolmogorov axioms. The quantum model introduces a new fundamental concept to cognition—the compatibility versus incompatibility of questions and the effect this can have on the sequential order of judgments. Quantum probability provides a simple account of conjunction and disjunction errors as well as many other findings such as order effects on probability judgments.
The liar paradox - The contextual influence of a human subject on the truth behavior of a cognitive entity is explicitly exhibited by the so-called liar paradox, that is, the truth value of a sentence like "this sentence is false". One can show that the true-false state of this paradox is represented in a complex Hilbert space, while the typical oscillations between true and false are dynamically described by the Schrödinger equation.
Knowledge representation
Concepts are basic cognitive phenomena, which provide the content for inference, explanation, and language understanding. Cognitive psychology has researched different approaches for understanding concepts including exemplars, prototypes, and neural networks, and different fundamental problems have been identified, such as the experimentally tested non classical behavior for the conjunction and disjunction of concepts, more specifically the Pet-Fish problem or guppy effect, and the overextension and underextension of typicality and membership weight for conjunction and disjunction. By and large, quantum cognition has drawn on quantum theory in three ways to model concepts.
Exploit the contextuality of quantum theory to account for the contextuality of concepts in cognition and language and the phenomenon of emergent properties when concepts combine
Use quantum entanglement to model the semantics of concept combinations in a non-decompositional way, and to account for the emergent properties/associates/inferences in relation to concept combinations
Use quantum superposition to account for the emergence of a new concept when concepts are combined, and as a consequence put forward an explanatory model for the Pet-Fish problem situation, and the overextension and underextension of membership weights for the conjunction and disjunction of concepts.
The large amount of data collected by Hampton on the combination of two concepts can be modeled in a specific quantum-theoretic framework in Fock space where the observed deviations from classical set (fuzzy set) theory, the above-mentioned over- and under- extension of membership weights, are explained in terms of contextual interactions, superposition, interference, entanglement and emergence. And, more, a cognitive test on a specific concept combination has been performed which directly reveals, through the violation of Bell's inequalities, quantum entanglement between the component concepts.
Semantic analysis and information retrieval
This research has had a deep impact on the understanding and initial development of a formalism to obtain semantic information when dealing with concepts, their combinations and variable contexts in a corpus of unstructured documents. This conundrum of natural language processing (NLP) and information retrieval (IR) on the web, and databases in general, can be addressed using the mathematical formalism of quantum theory. As basic steps, (a) K. Van Rijsbergen introduced a quantum structure approach to IR, (b) Widdows and Peters utilised a quantum logical negation for a concrete search system, and (c) Aerts and Czachor identified quantum structure in semantic space theories, such as latent semantic analysis. Since then, the employment of techniques and procedures induced from the mathematical formalisms of quantum theory (Hilbert space, quantum logic and probability, non-commutative algebras, etc.) in fields such as IR and NLP has produced significant results.
History
Ideas for applying the formalisms of quantum theory to cognition first appeared in the 1990s by Diederik Aerts and his collaborators Jan Broekaert, Sonja Smets and Liane Gabora, by Harald Atmanspacher, Robert Bordley, and Andrei Khrennikov. A special issue on Quantum Cognition and Decision appeared in the Journal of Mathematical Psychology (2009, vol 53.), which planted a flag for the field. A few books related to quantum cognition have been published including those by Khrennikov (2004, 2010), Ivancivic and Ivancivic (2010), Busemeyer and Bruza (2012), E. Conte (2012). The first Quantum Interaction workshop was held at Stanford in 2007 organized by Peter Bruza, William Lawless, C. J. van Rijsbergen, and Don Sofge as part of the 2007 AAAI Spring Symposium Series. This was followed by workshops at Oxford in 2008, Saarbrücken in 2009, at the 2010 AAAI Fall Symposium Series held in Washington, D.C., 2011 in Aberdeen, 2012 in Paris, and 2013 in Leicester. Tutorials also were presented annually beginning in 2007 until 2013 at the annual meeting of the Cognitive Science Society. A Special Issue on Quantum models of Cognition appeared in 2013 in the journal Topics in Cognitive Science.
See also
References
Further reading
External links
Quantum information theory
Cognitive modeling
Cognitive science
Decision theory
Quantum mind | Quantum cognition | [
"Physics"
] | 2,235 | [
"Quantum mind",
"Quantum mechanics"
] |
30,139,095 | https://en.wikipedia.org/wiki/Mann%20Eddy | The Mann Eddy is a very small feature of ocean currents in the Atlantic. It is a persistent clockwise circulation in the middle of the North Atlantic ocean, specifically "a mesoscale anticyclone, adjacent to the path of the North Atlantic Current (NAC) in the Newfoundland basin". The eddy has persisted since its initial discovery in 1967.
The peak in the Eddy Kinetic Energy (EKE) associated with the Mann Eddy lies at around 43°N 43°W, within the North Atlantic Current travelling to the North along the Grand Banks.
The oceanographer Dr Rory Bingham from Newcastle University (UK) describes it as "a persistent pocket of water in the Atlantic that just goes around and around."
References
Physical oceanography
Currents of the Atlantic Ocean | Mann Eddy | [
"Physics"
] | 155 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
30,139,196 | https://en.wikipedia.org/wiki/Mode%20coupling | In the term mode coupling, as used in physics and electrical engineering, the word "mode" refers to eigenmodes of an idealized, "unperturbed", linear system. The superposition principle says that eigenmodes of linear systems are independent of each other: it is possible to excite or to annihilate a specific mode without influencing any other mode; there is no dissipation. In most real systems, however, there is at least some perturbation that causes energy transfer between different modes. This perturbation, interpreted as an interaction between the modes, is what is called "mode coupling".
Important applications are:
In fiber optics
In lasers (compare mode-locking)
In condensed-matter physics, critical slowing down can be described by a Coupled mode theory.
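The energy transfer described above can be illustrated with a minimal coupled-mode calculation for two degenerate modes; the frequency and coupling values below are arbitrary illustrative choices, not values tied to any particular physical system.

```python
import numpy as np

# Two idealized modes with equal natural frequency, weakly coupled.
# Coupled-mode equations:  d/dt [a1, a2] = -i * H @ [a1, a2]
omega, kappa = 1.0, 0.02            # frequency and coupling strength (assumed values)
H = np.array([[omega, kappa],
              [kappa, omega]])

a0 = np.array([1.0, 0.0], dtype=complex)   # all energy starts in mode 1
vals, vecs = np.linalg.eigh(H)             # eigenmodes of the coupled system

for t in (0.0, np.pi / (4 * kappa), np.pi / (2 * kappa)):
    # exact evolution a(t) = exp(-i H t) a(0), via the eigendecomposition of H
    a_t = vecs @ (np.exp(-1j * vals * t) * (vecs.conj().T @ a0))
    print(f"t = {t:7.1f}   |a1|^2 = {abs(a_t[0])**2:.3f}   |a2|^2 = {abs(a_t[1])**2:.3f}")
# The coupling transfers energy back and forth between the two modes (full transfer at t = pi/(2*kappa)).
```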
See also
Nonlinear optics
Nonlinear acoustics
Equilibrium mode distribution
References
Condensed matter physics
Nonlinear optics
Fiber-optic communications | Mode coupling | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 188 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
30,141,033 | https://en.wikipedia.org/wiki/C9H8N2 | The molecular formula C9H8N2 (molar mass: 144.17 g/mol, exact mass: 144.0687 u) may refer to:
4-Aminoquinoline
8-Aminoquinoline
Molecular formulas | C9H8N2 | [
"Physics",
"Chemistry"
] | 66 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
30,148,162 | https://en.wikipedia.org/wiki/Self-drying%20concrete%20technology | Self-drying concrete technology is found in certain cementitious patching and leveling materials and tile-setting mortars used in the flooring industry. Self-drying technology allows the cement mix to consume all of its mix water while curing, eliminating the need for excess water to evaporate prior to installing flooring. Traditional floor coverings, such as VCT, sheet vinyl, carpet and ceramic tile, can be installed before the material is completely dry and as soon as it hardens, which typically happens in the first two hours after placement.
Traditional concrete has a water:cement ratio of about 0.5, which refers to the weight of the water divided by the weight of the cement. A water:cement ratio of 0.5 provides good workability while keeping the amount of excess water in the mix fairly low. Without at least this much extra water, the concrete would be too dry to place.
The chemical reaction of Portland cement and water, known as hydration, which is necessary for the strengthening of the concrete, requires a water:cement ratio of only about 0.25. With a water:cement ratio of 0.5, there is twice as much water in the concrete mix as is needed for hydration. This excess water needs to evaporate before flooring can be installed. Note that the commonly cited 28-day period defines only the designed strength of the concrete and says nothing about its dryness; a 10-year-old concrete slab can contain more moisture than a 28-day-old slab. In contrast, a self-drying concrete blend consumes all of its mix water at a water:cement ratio of up to 0.6, maintaining good workability while allowing flooring to be installed before it is completely dry.
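A small worked example makes the excess-water arithmetic explicit; the 100 kg cement mass below is an arbitrary illustrative figure, and the 0.5 and 0.25 ratios are the approximate values quoted above.

```python
# Back-of-the-envelope comparison (illustrative numbers only):
cement = 100.0                      # kg of Portland cement in the mix

w_c_conventional = 0.5              # typical water:cement ratio for workability
w_c_hydration    = 0.25             # approximate water actually consumed by hydration

mix_water      = w_c_conventional * cement   # 50 kg of water added to the mix
hydration_need = w_c_hydration * cement      # ~25 kg chemically bound during curing
excess_water   = mix_water - hydration_need  # ~25 kg must evaporate before flooring

print(f"mix water: {mix_water} kg, bound by hydration: {hydration_need} kg, "
      f"excess to evaporate: {excess_water} kg")
```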
There are also cement products that are partially self-drying, meaning that they use a high percentage of their mix water for hydration as opposed to using 100% of it. This type of product might be used when the flooring does not need to be installed the same day but must still be installed more quickly than traditional concrete would allow. For instance, products that are 80% self-drying allow flooring to be installed the next day, typically after a 16-hour cure.
Self-drying technology was developed by Ardex in Germany and was introduced in the United States in 1978.
Concrete
Composite materials | Self-drying concrete technology | [
"Physics",
"Engineering"
] | 490 | [
"Structural engineering",
"Composite materials",
"Materials",
"Concrete",
"Matter"
] |
30,151,814 | https://en.wikipedia.org/wiki/SkQ | SkQ is a class of mitochondria-targeted antioxidants, developed by Professor Vladimir Skulachev and his team. In a broad sense, SkQ is a lipophilic cation, linked via saturated hydrocarbon chain to an antioxidant. Due to its lipophilic properties, SkQ can effectively penetrate through various cell membranes. The positive charge provides directed transport of the whole molecule including antioxidant moiety into the negatively charged mitochondrial matrix. Substances of this type, various drugs that are based on them, as well as methods of their use are patented in Russia and other countries such as United States, China, Japan, and in Europe. Sometimes the term SkQ is used in a narrow sense for the denomination of a cationic derivative of the plant antioxidant plastoquinone.
History
In 1969, the use of triphenylphosphonium (TPP, the cation of triphenylphosphine) was proposed for the first time. This low-molecular-weight compound consists of a positively charged phosphorus atom surrounded by three hydrophobic phenyl groups, and it accumulates in mitochondria. In 1970, the use of TPP for the targeted delivery of compounds to the mitochondrial matrix was proposed. In 1974, TPP, as well as its derivatives and other penetrating ions, were named "Skulachev's Ions" by the American biochemist David E. Green.
In 1999, the first work was published on the directed delivery to mitochondria of the antioxidant alpha-tocopherol linked to TPP by a hydrocarbon chain. The compound was named TPPB or MitoVitE. Several years later, MitoQ, an improved mitochondria-targeted compound, was synthesized. Its antioxidant part is represented by ubiquinone, which is linked to TPP by a 10-carbon aliphatic chain.
In the early 2000s, a group of researchers led by Prof. V. P. Skulachev at Moscow State University began the development of SkQ, a mitochondria-targeted antioxidant similar to MitoQ but with the ubiquinone replaced by plastoquinone (a more active analog of ubiquinone derived from plant chloroplasts). Since 2005, several modified SkQ compounds have been synthesized and tested in vitro; the efficiency and antioxidant effects of the tested compounds were hundreds of times higher than those of previous analogs. All of these compounds have abbreviated names derived from the name Skulachev (Sk), the letter for quinone (Q) and a designation of the modification (a letter and/or numeric symbol, for example R1 for a derivative of rhodamine and plastoquinone). The largest amount of data has been obtained for SkQ1 and SkQR1.
Later, SkQ properties were tested in vitro on fibroblasts and in vivo in different organisms, including mice, drosophilids, yeast, and many others. SkQ was found to protect cells from death caused by oxidative stress and to be effective in treating age-related diseases in animals.
Development of pharmaceuticals based on SkQ began in 2008. In 2012, the Ministry of Health of the Russian Federation approved the use of the eye drops "Visomitin", based on SkQ1, for the treatment of dry eye syndrome and early-stage cataract. Testing of the efficacy of SkQ-based drugs against other diseases is in progress in both Russia and the United States.
In 2016, phase 1 of a clinical trial of an oral drug containing SkQ1 was conducted in Russia. In 2017, it was found that SkQ has a strong antibacterial effect and is able to inhibit the activity of multidrug resistance enzymes in bacteria. Since 2019, the Skulachev project has been developing mitochondrial antioxidants in several areas: synthesis and testing of new SkQ compounds, and testing their effects on a variety of model systems and in different diseases.
Classification
An SkQ compound consists of three parts: an antioxidant, an aliphatic carbon linker, and a lipophilic cation.
SkQ compounds and structurally similar substances can be grouped by the type of cation, the length of the linker, and the type of antioxidant.
By type of cation
The lipophilic cation determines the efficiency of penetration through membranes into the mitochondrial matrix. The best properties are shown by SkQ compounds with the triphenylphosphonium (TPP) ion: MitoQ, SkQ1, and others. Similar penetration efficiency has been shown for compounds with rhodamine 19, such as SkQR1. Rhodamine is fluorescent, so its derivatives are used in the visualization of mitochondria. SkQ derivatives with acetylcarnitine (SkQ2M) or tributylammonium (SkQ4) as the lipophilic cation have weak penetrating properties.
Cations with well-known medicinal properties, berberine and palmatine, were also tested; the resulting derivatives, SkQBerb and SkQPalm, do not differ much in their properties from SkQ1 and SkQR1.
The length of the linker
In SkQ compounds, a decamethylene linker (an aliphatic chain of 10 carbon atoms) is used. Shortening the chain worsens the penetrating ability of the ions; SkQ5, for example, has a pentamethylene linker. Molecular dynamics simulations of the molecule in the membrane have shown that a linker length of 10 carbon atoms is optimal for the antioxidant properties of SkQ1: the quinone residue is then located right next to the C9 or C13 atoms of the membrane fatty acids that have to be protected from oxidative damage.
The type of antioxidant
Compounds without an antioxidant moiety are used as controls for the effects of SkQ compounds. For example, C12-TPP and C12R1 penetrate the mitochondria but do not inhibit oxidation. Interestingly, these compounds partially reproduce the positive effects of SkQ. This happens due to the phenomenon of soft depolarization (mild uncoupling) of the mitochondrial membrane. For historical reasons, the compounds with tocopherol and ubiquinone are called MitoVitE and MitoQ, although formally they can be attributed to the SkQ class. MitoQ is traditionally used for comparison with SkQ compounds.
The highest antioxidant activity was shown for the compounds with thymoquinone (SkQT1 and SkQTK1). Thymoquinone is a derivative of plastoquinone with one methyl substituent in the aromatic ring. Next in antioxidant activity are the plastoquinone compounds (SkQ1 and SkQR1), with two methyl substituents. SkQ3, with three methyl substituents, is a less active compound, and SkQB, without methyl substituents, exhibits the weakest antioxidant properties.
In general, SkQ-like compounds can be arranged by their antioxidant activity as follows: SkQB < MitoQ < DMMQ ≈ SkQ3 < SkQ1 < SkQT.
Mechanism of action
The positive effects of SkQ are associated with the following properties:
penetration into the mitochondria, the main source of reactive oxygen species (ROS) in the cell
inhibition of ROS at the site of their formation in two different ways:
direct neutralization of ROS due to the oxidation of plastoquinone,
reduction of mitochondrial membrane potential
Penetration into the mitochondria
Due to their lipophilic properties, SkQ substances can penetrate the lipid bilayer. Transport is driven by the membrane electrical potential acting on the positive charge of SkQ. Mitochondria are the only intracellular organelles with a negative charge, so SkQ effectively penetrates and accumulates there.
The accumulation coefficient can be estimated using the Nernst equation, taking into account that the potential across the plasma membrane of the cell is about 60 mV (the cytoplasm is negatively charged) and the potential across the mitochondrial membrane is about 180 mV (the matrix is negatively charged). As a result, the electric-potential-driven concentration gradient of SkQ between the extracellular medium and the mitochondrial matrix is about 10⁴.
It should also be taken into account that SkQ has a high lipid/water distribution coefficient, about 10⁴. Taking this into account, the total concentration gradient of SkQ into the inner leaflet of the inner mitochondrial membrane can be up to 10⁸.
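The estimate above can be reproduced with the Nernst equation for a singly charged cation. The following minimal Python sketch uses the membrane potentials and partition coefficient quoted in the text; the temperature of 310 K and the function name are assumptions made only for illustration.

import math

# Accumulation of a monovalent cation across membranes, following the estimate above.
R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 310.0      # absolute temperature, K (assumed)

def nernst_ratio(delta_psi_mv):
    """Concentration ratio across a membrane at potential delta_psi (in mV)."""
    return math.exp(F * (delta_psi_mv / 1000.0) / (R * T))

plasma = nernst_ratio(60.0)     # roughly 10-fold across the plasma membrane
mito = nernst_ratio(180.0)      # roughly 1000-fold across the inner mitochondrial membrane
electric_gradient = plasma * mito          # about 10^4 from extracellular medium to matrix
total_gradient = electric_gradient * 1e4   # times the lipid/water partition coefficient, up to ~10^8

print(f"electric gradient ~ {electric_gradient:.1e}, total ~ {total_gradient:.1e}")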
Direct inhibition of ROS
Oxidation of organic substances by ROS is a chain process. Several types of reactive free radicals participate in these chain reactions: peroxyl (RO2•), alkoxyl (RO•), and alkyl (R•) radicals, together with ROS such as the superoxide anion and singlet oxygen.
One of the main targets of ROS is cardiolipin, a polyunsaturated phospholipid of the inner mitochondrial membrane that is especially sensitive to peroxidation. After a radical attack on the C11 atom of its linoleic acid, cardiolipin forms a peroxyl radical, which is stabilized at positions C9 and C13 by the neighboring double bonds.
SkQ1 is positioned in the mitochondrial membrane so that its plastoquinone residue lies right next to C9 or C13 of cardiolipin (depending on the SkQ conformation); it can therefore quench the cardiolipin peroxyl radical quickly and effectively.
Another important property of SkQ is its recyclability. After neutralizing ROS, the SkQ antioxidant moiety is converted to its oxidized form (plastoquinone or semiquinone). It can then be quickly re-reduced by complex III of the respiratory chain. Thus, owing to the operation of the respiratory chain, SkQ exists mainly in its reduced, active form.
Uncoupling properties
In some cases (for example, in experiments on the lifespan of Drosophila or plant models) compound C12-TPP (without the plastoquinone residue) can successfully substitute for SkQ1.
This phenomenon is explained by the fact that any hydrophobic compound with a delocalized positive charge is able to transfer fatty acid anions from one side of the membrane to the other, thus lowering the transmembrane potential. This is called uncoupling of respiration and ATP synthesis on the mitochondrial membrane. In the cell, this function is normally performed by uncoupling proteins (UCPs, including thermogenin from brown fat adipocytes) and the ATP/ADP antiporter.
Weak depolarization of the membrane leads to a several-fold reduction in the amount of ROS produced by mitochondria.
Pro-oxidant effect
At high concentrations (micromolar and above), SkQ compounds exhibit pro-oxidant properties, stimulating ROS production.
The advantage of SkQ1 is that the gap between the concentrations at which it acts as an antioxidant and as a pro-oxidant is about 1000-fold. Experiments on mitochondria have shown that SkQ1 begins to exhibit antioxidant properties at concentrations as low as 1 nM and pro-oxidant properties at about 1 μM. For comparison, this "concentration window" for MitoQ is only about 2–5-fold: MitoQ shows antioxidant activity only from about 0.3 μM and pro-oxidant effects at 0.6–1.0 μM.
Anti-inflammatory effect
In several experimental models (including experiments on laboratory animals) SkQ1 and SkQR1 showed a pronounced anti-inflammatory effect.
Suppression of multiple drug resistance
SkQ1 and C12-TPP are substrates of ABC-transporters. The main function of these enzymes is the protection of cells from xenobiotics. Lipophilic cations compete with other substrates of these carriers and thus weaken the protection of cells from external influences.
Use
Medicine
SkQ is able to delay the development of several traits of aging and to increase the life span of a variety of animals. Depending on the type of SkQ molecule, the substance may reduce early mortality, increase life expectancy, and extend the maximum age of experimental animals. In various experiments, SkQ has also slowed the development of several age-dependent pathologies and signs of aging.
It was shown that SkQ accelerates wound healing, as well as treats age-related diseases such as osteoporosis, cataracts, retinopathy, and others.
At the end of 2008, preparations for the official approval of SkQ-based pharmaceuticals in Russia began. The efficacy of the eye drops against dry eye syndrome was confirmed in subsequent double-blind placebo-controlled studies: an international multicenter study in Russia and Ukraine and a phase II study in the United States. A clinical study on patients with age-related cataract was also conducted successfully. As of 2019, clinical studies are in progress in Russia for two improved versions of SkQ1-based eye drops: Visomitin Forte (a phase II study in patients with age-related macular degeneration) and Visomitin Ultra (a phase I clinical study).
In 2018–2021, two attempts at phase III clinical trials in the United States failed to show any statistically significant results, among 452 (VISTA-1/NCT03764735) and 610 (VISTA-2/NCT04206020) participants respectively.
Cosmetology
SkQ1 is included in the composition of cosmetic products such as Mitovitan Active, Mitovitan, and Exomitin.
Veterinary
The drug "Visomitin" on the basis of SkQ1 used in veterinary practice for the treatment of ophthalmologic diseases in pets. In particular, the effectiveness is shown for the treatment of retinopathy in dogs, cats, and horses.
Other uses
Experiments have shown an unexpected effect of SkQ on plants. The substance stimulated differentiation (in the treatment of callus) and seed germination (patent US 8,557,733) and increased the yield of various crops (Ph.D. thesis of A. I. Uskov).
See also
Geroprotector
References
External links
Biomedical project "Skulachev ions" in Moscow
Membrane biology
Mitochondria
Dietary antioxidants
Senescence | SkQ | [
"Chemistry",
"Biology"
] | 3,076 | [
"Mitochondria",
"Membrane biology",
"Senescence",
"Cellular processes",
"Molecular biology",
"Metabolism"
] |
307,065 | https://en.wikipedia.org/wiki/Tissue%20engineering | Tissue engineering is a biomedical engineering discipline that uses a combination of cells, engineering, materials methods, and suitable biochemical and physicochemical factors to restore, maintain, improve, or replace different types of biological tissues. Tissue engineering often involves the use of cells placed on tissue scaffolds in the formation of new viable tissue for a medical purpose, but is not limited to applications involving cells and tissue scaffolds. While it was once categorized as a sub-field of biomaterials, having grown in scope and importance, it can be considered as a field of its own.
While most definitions of tissue engineering cover a broad range of applications, in practice, the term is closely associated with applications that repair or replace portions of or whole tissues (i.e. organs, bone, cartilage, blood vessels, bladder, skin, muscle etc.). Often, the tissues involved require certain mechanical and structural properties for proper functioning. The term has also been applied to efforts to perform specific biochemical functions using cells within an artificially-created support system (e.g. an artificial pancreas, or a bio artificial liver). The term regenerative medicine is often used synonymously with tissue engineering, although those involved in regenerative medicine place more emphasis on the use of stem cells or progenitor cells to produce tissues.
Overview
A commonly applied definition of tissue engineering, as stated by Langer and Vacanti, is "an interdisciplinary field that applies the principles of engineering and life sciences toward the development of biological substitutes that restore, maintain, or improve [Biological tissue] function or a whole organ". In addition, Langer and Vacanti also state that there are three main types of tissue engineering: cells, tissue-inducing substances, and a cells + matrix approach (often referred to as a scaffold). Tissue engineering has also been defined as "understanding the principles of tissue growth, and applying this to produce functional replacement tissue for clinical use". A further description goes on to say that an "underlying supposition of tissue engineering is that the employment of natural biology of the system will allow for greater success in developing therapeutic strategies aimed at the replacement, repair, maintenance, or enhancement of tissue function".
Developments in the multidisciplinary field of tissue engineering have yielded a novel set of tissue replacement parts and implementation strategies. Scientific advances in biomaterials, stem cells, growth and differentiation factors, and biomimetic environments have created unique opportunities to fabricate or improve existing tissues in the laboratory from combinations of engineered extracellular matrices ("scaffolds"), cells, and biologically active molecules. Among the major challenges now facing tissue engineering is the need for more complex functionality, biomechanical stability, and vascularization in laboratory-grown tissues destined for transplantation.
Etymology
The historical origin of the term is unclear as the definition of the word has changed throughout the past few decades. The term first appeared in a 1984 publication that described the organization of an endothelium-like membrane on the surface of a long-implanted, synthetic ophthalmic prosthesis.
The first modern use of the term as recognized today was in 1985 by the researcher, physiologist and bioengineer Yuan-Cheng Fung of the Engineering Research Center. He proposed the joining of the terms tissue (in reference to the fundamental relationship between cells and organs) and engineering (in reference to the field of modification of said tissues). The term was officially adopted in 1987.
History
Ancient era (pre-17th century)
A rudimentary understanding of the inner workings of human tissues may date back further than most would expect. As early as the Neolithic period, sutures were being used to close wounds and aid in healing. Later on, societies such as ancient Egypt developed better materials for sewing up wounds, such as linen sutures. Around 2500 BC in ancient India, skin grafts were developed by cutting skin from the buttock and suturing it to wound sites in the ear, nose, or lips. Ancient Egyptians often grafted skin from corpses onto living humans and even attempted to use honey as a type of antibiotic and grease as a protective barrier to prevent infection. In the 1st and 2nd centuries AD, Gallo-Romans developed wrought iron implants, and dental implants have been found in ancient Maya remains.
Enlightenment (17th century–19th century)
While these ancient societies had developed techniques that were far ahead of their time, they still lacked a mechanistic understanding of how the body reacted to these procedures. This mechanistic approach came along in tandem with the development of the empirical method of science pioneered by René Descartes. Sir Isaac Newton began to describe the body as a "physiochemical machine" and postulated that disease was a breakdown in the machine.
In the 17th century, Robert Hooke discovered the cell and a letter from Benedict de Spinoza brought forward the idea of the homeostasis between the dynamic processes in the body. Hydra experiments performed by Abraham Trembley in the 18th century began to delve into the regenerative capabilities of cells. During the 19th century, a better understanding of how different metals reacted with the body led to the development of better sutures and a shift towards screw and plate implants in bone fixation. Further, it was first hypothesized in the mid-1800s that cell-environment interactions and cell proliferation were vital for tissue regeneration.
Modern era (20th and 21st centuries)
As technology advances, researchers continually adapt their approaches, and tissue engineering has continued to evolve over centuries. Early work relied on samples taken directly from human or animal cadavers. Now, tissue engineers are able to remake many of the tissues of the body using modern techniques such as microfabrication and three-dimensional bioprinting in conjunction with native tissue cells or stem cells. These advances have allowed researchers to generate new tissues much more efficiently. For example, these techniques allow for more personalization, which provides better biocompatibility, a decreased immune response, and improved cellular integration and longevity. These techniques are expected to continue to evolve, as microfabrication and bioprinting have over the past decade.
In 1960, Wichterle and Lim were the first to publish experiments on hydrogels for biomedical applications by using them in contact lens construction. Work on the field developed slowly over the next two decades, but later found traction when hydrogels were repurposed for drug delivery. In 1984, Charles Hull developed bioprinting by converting a Hewlett-Packard inkjet printer into a device capable of depositing cells in 2-D. Three-dimensional (3-D) printing is a type of additive manufacturing which has since found various applications in medical engineering, due to its high precision and efficiency. With biologist James Thomson's development of the first human stem cell lines in 1998, followed by transplantation of the first laboratory-grown internal organs in 1999 and creation of the first bioprinter in 2003 by the University of Missouri, which printed spheroids without the need for scaffolds, 3-D bioprinting became more widely used in the medical field than ever before. So far, scientists have been able to print mini organoids and organs-on-chips that have provided practical insights into the functions of the human body. Pharmaceutical companies are using these models to test drugs before moving on to animal studies. However, a fully functional and structurally similar organ has not yet been printed. A team at the University of Utah has reportedly printed ears and successfully transplanted them onto children born with defects that left their ears partially developed.
Today, hydrogels are considered the preferred choice of bio-ink for 3-D bioprinting, since they mimic cells' natural ECM while also providing the strong mechanical properties needed to sustain 3-D structures. Furthermore, hydrogels in conjunction with 3-D bioprinting allow researchers to produce different scaffolds which can be used to form new tissues or organs. 3-D printed tissues still face many challenges, such as adding vasculature. Meanwhile, 3-D printing parts of tissues is expected to improve our understanding of the human body, thus accelerating both basic and clinical research.
Examples
As defined by Langer and Vacanti, examples of tissue engineering fall into one or more of three categories: "just cells," "cells and scaffold," or "tissue-inducing factors."
In vitro meat: Edible artificial animal muscle tissue cultured in vitro.
Bioartificial liver device, "Temporary Liver", Extracorporeal Liver Assist Device (ELAD): The human hepatocyte cell line (C3A line) in a hollow fiber bioreactor can mimic the hepatic function of the liver for acute instances of liver failure. A fully capable ELAD would temporarily function as an individual's liver, thus avoiding transplantation and allowing regeneration of their own liver.
Artificial pancreas: Research involves using islet cells to regulate the body's blood sugar, particularly in cases of diabetes. Biochemical factors may be used to cause human pluripotent stem cells to differentiate into (turn into) cells that function similarly to beta cells, the cells within an islet responsible for producing insulin.
Artificial bladders: Anthony Atala (Wake Forest University) has successfully implanted artificial bladders, constructed of cultured cells seeded onto a bladder-shaped scaffold, into seven out of approximately 20 human test subjects as part of a long-term experiment.
Cartilage: lab-grown cartilage, cultured in vitro on a scaffold, was successfully used as an autologous transplant to repair patients' knees.
Scaffold-free cartilage: Cartilage generated without the use of exogenous scaffold material. In this methodology, all material in the construct is cellular produced directly by the cells.
Bioartificial heart: Doris Taylor's lab constructed a biocompatible rat heart by re-cellularising a de-cellularised rat heart. The scaffold and cells were placed in a bioreactor, where the construct matured into a partially or fully transplantable organ. The work was called a "landmark". The lab first stripped the cells from a rat heart (a process called "decellularization") and then injected rat stem cells into the decellularized rat heart.
Tissue-engineered blood vessels: Blood vessels that have been grown in a lab and can be used to repair damaged blood vessels without eliciting an immune response. Tissue engineered blood vessels have been developed by many different approaches. They could be implanted as pre-seeded cellularized blood vessels, as acellular vascular grafts made with decellularized vessels or synthetic vascular grafts.
Artificial skin constructed from human skin cells embedded in a hydrogel, such as in the case of bio-printed constructs for battlefield burn repairs.
Artificial bone marrow: Bone marrow cultured in vitro to be transplanted serves as a "just cells" approach to tissue engineering.
Tissue engineered bone: A structural matrix can be composed of metals such as titanium, polymers of varying degradation rates, or certain types of ceramics. Materials are often chosen to recruit osteoblasts to aid in reforming the bone and returning biological function. Various types of cells can be added directly into the matrix to expedite the process.
Laboratory-grown penis: Decellularized scaffolds of rabbit penises were recellularised with smooth muscle and endothelial cells. The organ was then transplanted to live rabbits and functioned comparably to the native organ, suggesting potential as treatment for genital trauma.
Oral mucosa tissue engineering uses a cells and scaffold approach to replicate the 3 dimensional structure and function of oral mucosa.
Cells as building blocks
Cells are one of the main components for the success of tissue engineering approaches. Tissue engineering uses cells as strategies for the creation or replacement of new tissue. Examples include fibroblasts used for skin repair or renewal, chondrocytes used for cartilage repair (MACI, an FDA-approved product), and hepatocytes used in liver support systems.
Cells can be used alone or with support matrices for tissue engineering applications. An adequate environment for promoting cell growth, differentiation, and integration with the existing tissue is a critical factor for cell-based building blocks. Manipulation of any of these cell processes creates alternative avenues for the development of new tissue (e.g., cell reprogramming of somatic cells, vascularization).
Isolation
Techniques for cell isolation depend on the cell source. Centrifugation and apheresis are techniques used for extracting cells from biofluids (e.g., blood). Extracting cells from tissues or organs, by contrast, requires digestion of the extracellular matrix (ECM), typically with enzymes, before centrifugation or apheresis. Trypsin and collagenase are the most common enzymes used for tissue digestion. While trypsin is temperature dependent, collagenase is less sensitive to changes in temperature.
Cell sources
Primary cells are those directly isolated from host tissue. These cells provide an ex vivo model of cell behavior without any genetic, epigenetic, or developmental changes, making them a closer replication of in vivo conditions than cells derived by other methods. However, these same features can also make them difficult to study. They are mature cells, often terminally differentiated, meaning that for many cell types proliferation is difficult or impossible. Additionally, the microenvironments these cells exist in are highly specialized, often making replication of these conditions difficult.
Secondary cells: A portion of cells from a primary culture is moved to a new vessel to continue being cultured. Medium from the primary culture is removed, the desired cells are obtained, and they are then cultured in a new vessel with fresh growth medium. A secondary cell culture is useful for ensuring that cells have both the room and the nutrients they require to grow. Secondary cultures are most notably used whenever a larger quantity of cells than can be found in the primary culture is desired. Secondary cells share the constraints of primary cells (see above) but carry an added risk of contamination when transferred to a new vessel.
Genetic classifications of cells
Autologous: The donor and the recipient of the cells are the same individual. Cells are harvested, cultured or stored, and then reintroduced to the host. As a result of the host's own cells being reintroduced, an antigenic response is not elicited. The body's immune system recognizes these re-implanted cells as its own, and does not target them for attack. Autologous cell dependence on host cell health and donor site morbidity may be deterrents to their use. Adipose-derived and bone marrow-derived mesenchymal stem cells are commonly autologous in nature, and can be used in a myriad of ways, from helping repair skeletal tissue to replenishing beta cells in diabetic patients.
Allogenic: Cells are obtained from the body of a donor of the same species as the recipient. While there are some ethical constraints to the use of human cells for in vitro studies (i.e. human brain tissue chimera development), the employment of dermal fibroblasts from human foreskin demonstrates an immunologically safe and thus a viable choice for allogenic tissue engineering of the skin.
Xenogenic: These cells are isolated from a species different from that of the recipient. A notable example of xenogeneic tissue utilization is cardiovascular implant construction using animal cells. Chimeric human-animal farming raises ethical concerns, including the potential for enhanced consciousness resulting from implanting human organs in animals.
Syngeneic or isogenic: These cells are derived from an identical genetic background, which imparts an immunologic benefit similar to that of autologous cell lines (see above). Autologous cells can be considered syngeneic, but the classification also extends to non-autologously derived cells such as those from an identical twin, from genetically identical (cloned) research models, or induced stem cells (iSC) as related to the donor.
Stem cells
Stem cells are undifferentiated cells with the ability to divide in culture and give rise to different forms of specialized cells. Stem cells are divided into "adult" and "embryonic" stem cells according to their source. While there is still a large ethical debate related to the use of embryonic stem cells, it is thought that another alternative source, induced pluripotent stem cells, may be useful for the repair of diseased or damaged tissues, or may be used to grow new organs.
Totipotent cells are stem cells which can divide into further stem cells or differentiate into any cell type in the body, including extra-embryonic tissue.
Pluripotent cells are stem cells which can differentiate into any cell type in the body except extra-embryonic tissue. Induced pluripotent stem cells (iPSCs) are a subclass of pluripotent stem cells, resembling embryonic stem cells (ESCs), that have been derived from adult differentiated cells. iPSCs are created by altering the expression of transcription factors in adult cells until they become like embryonic stem cells.
Multipotent stem cells can differentiate into any cell within the same class, such as blood or bone. A common example of multipotent cells is mesenchymal stem cells (MSCs).
Scaffolds
Scaffolds are materials that have been engineered to cause desirable cellular interactions to contribute to the formation of new functional tissues for medical purposes. Cells are often 'seeded' into these structures capable of supporting three-dimensional tissue formation. Scaffolds mimic the extracellular matrix of the native tissue, recapitulating the in vivo milieu and allowing cells to influence their own microenvironments. They usually serve at least one of the following purposes: allowing cell attachment and migration, delivering and retaining cells and biochemical factors, enabling diffusion of vital cell nutrients and expressed products, and exerting certain mechanical and biological influences to modify the behaviour of the cell phase.
In 2009, an interdisciplinary team led by the thoracic surgeon Thorsten Walles implanted the first bioartificial transplant that provides an innate vascular network for post-transplant graft supply successfully into a patient awaiting tracheal reconstruction.
To achieve the goal of tissue reconstruction, scaffolds must meet some specific requirements. High porosity and adequate pore size are necessary to facilitate cell seeding and diffusion throughout the whole structure of both cells and nutrients. Biodegradability is often an essential factor since scaffolds should preferably be absorbed by the surrounding tissues without the necessity of surgical removal. The rate at which degradation occurs has to coincide as much as possible with the rate of tissue formation: this means that while cells are fabricating their own natural matrix structure around themselves, the scaffold is able to provide structural integrity within the body and eventually it will break down leaving the newly formed tissue which will take over the mechanical load. Injectability is also important for clinical uses.
Recent research on organ printing is showing how crucial a good control of the 3D environment is to ensure reproducibility of experiments and offer better results.
Materials
Material selection is an essential aspect of producing a scaffold. The materials utilized can be natural or synthetic and can be biodegradable or non-biodegradable. Additionally, they must be biocompatible, meaning that they do not cause any adverse effects to cells. Silicone, for example, is a synthetic, non-biodegradable material commonly used as a drug delivery material, while gelatin is a biodegradable, natural material commonly used in cell-culture scaffolds.
The material needed for each application is different and depends on the desired mechanical properties of the scaffold. Tissue engineering of long bone defects, for example, requires a rigid scaffold with a compressive strength similar to that of cortical bone (100–150 MPa), which is much higher than that required for a scaffold for skin regeneration.
There are a few versatile synthetic materials used for many different scaffold applications. One of the most commonly used is polylactic acid (PLA), a synthetic polyester which degrades within the human body to form lactic acid, a naturally occurring chemical that is easily removed from the body. Similar materials are polyglycolic acid (PGA) and polycaprolactone (PCL): their degradation mechanism is similar to that of PLA, but PCL degrades more slowly and PGA degrades more quickly. PLA is commonly combined with PGA to create poly(lactic-co-glycolic acid) (PLGA). This is especially useful because the degradation rate of PLGA can be tailored by altering the weight percentages of PLA and PGA: more PLA gives slower degradation, more PGA gives faster degradation. This tunability, along with its biocompatibility, makes PLGA an extremely useful material for scaffold creation.
Scaffolds may also be constructed from natural materials: in particular different derivatives of the extracellular matrix have been studied to evaluate their ability to support cell growth. Protein based materials – such as collagen, or fibrin, and polysaccharidic materials- like chitosan or glycosaminoglycans (GAGs), have all proved suitable in terms of cell compatibility. Among GAGs, hyaluronic acid, possibly in combination with cross linking agents (e.g. glutaraldehyde, water-soluble carbodiimide, etc.), is one of the possible choices as scaffold material.
Due to the covalent attachment of thiol groups to these polymers, they can crosslink via disulfide bond formation. The use of thiolated polymers (thiomers) as scaffold material for tissue engineering was initially introduced at the 4th Central European Symposium on Pharmaceutical Technology in Vienna in 2001. As thiomers are biocompatible, exhibit cellular mimicking properties, and efficiently support proliferation and differentiation of various cell types, they are extensively used as scaffolds for tissue engineering. Furthermore, thiomers such as thiolated hyaluronic acid and thiolated chitosan have been shown to exhibit wound-healing properties and are the subject of numerous clinical trials. Additionally, a fragment of an extracellular matrix protein, such as the RGD peptide, can be coupled to a non-bioactive material to promote cell attachment. Another form of scaffold is decellularized tissue, produced by a process in which chemicals are used to extract cells from tissues, leaving just the extracellular matrix. This has the benefit of a fully formed matrix specific to the desired tissue type. However, the decellularised scaffold may present immune problems with future introduced cells.
Synthesis
A number of different methods have been described in the literature for preparing porous structures to be employed as tissue engineering scaffolds. Each of these techniques presents its own advantages, but none are free of drawbacks.
Nanofiber self-assembly
Molecular self-assembly is one of the few methods for creating biomaterials with properties similar in scale and chemistry to that of the natural in vivo extracellular matrix (ECM), a crucial step toward tissue engineering of complex tissues. Moreover, these hydrogel scaffolds have shown superiority in in vivo toxicology and biocompatibility compared to traditional macro-scaffolds and animal-derived materials.
Textile technologies
These techniques include all the approaches that have been successfully employed for the preparation of non-woven meshes of different polymers. In particular, non-woven polyglycolide structures have been tested for tissue engineering applications: such fibrous structures have been found useful to grow different types of cells. The principal drawbacks are related to the difficulties in obtaining high porosity and regular pore size.
Solvent casting and particulate leaching
Solvent casting and particulate leaching (SCPL) allows for the preparation of structures with regular porosity, but with limited thickness. First, the polymer is dissolved into a suitable organic solvent (e.g. polylactic acid could be dissolved into dichloromethane), then the solution is cast into a mold filled with porogen particles. Such porogen can be an inorganic salt like sodium chloride, crystals of saccharose, gelatin spheres or paraffin spheres. The size of the porogen particles will affect the size of the scaffold pores, while the polymer to porogen ratio is directly correlated to the amount of porosity of the final structure. After the polymer solution has been cast the solvent is allowed to fully evaporate, then the composite structure in the mold is immersed in a bath of a liquid suitable for dissolving the porogen: water in the case of sodium chloride, saccharose and gelatin or an aliphatic solvent like hexane for use with paraffin. Once the porogen has been fully dissolved, a porous structure is obtained. Other than the small thickness range that can be obtained, another drawback of SCPL lies in its use of organic solvents which must be fully removed to avoid any possible damage to the cells seeded on the scaffold.
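As the paragraph above notes, the polymer-to-porogen ratio sets the porosity of the final SCPL structure. The following minimal Python sketch makes that relationship concrete, assuming the porogen leaches out completely and that pore volume equals porogen volume; the density values and function name are illustrative assumptions only.

# Theoretical porosity of an SCPL scaffold from the polymer-to-porogen ratio.
# Assumes complete leaching of the porogen; densities are illustrative only.
def scpl_porosity(polymer_mass_g, porogen_mass_g,
                  polymer_density_g_cm3=1.25,   # assumed, roughly that of polylactic acid
                  porogen_density_g_cm3=2.16):  # assumed, roughly that of sodium chloride
    """Return the expected pore volume fraction after the porogen is dissolved away."""
    v_polymer = polymer_mass_g / polymer_density_g_cm3
    v_porogen = porogen_mass_g / porogen_density_g_cm3
    return v_porogen / (v_polymer + v_porogen)

# A 1:9 polymer-to-porogen mass ratio gives a highly porous scaffold (about 84% pores here).
print(f"porosity = {scpl_porosity(1.0, 9.0):.0%}")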
Gas foaming
To overcome the need to use organic solvents and solid porogens, a technique using gas as a porogen has been developed. First, disc-shaped structures made of the desired polymer are prepared by means of compression molding using a heated mold. The discs are then placed in a chamber where they are exposed to high pressure CO2 for several days. The pressure inside the chamber is gradually restored to atmospheric levels. During this procedure the pores are formed by the carbon dioxide molecules that abandon the polymer, resulting in a sponge-like structure. The main problems resulting from such a technique are caused by the excessive heat used during compression molding (which prohibits the incorporation of any temperature labile material into the polymer matrix) and by the fact that the pores do not form an interconnected structure.
Emulsification freeze-drying
This technique does not require the use of a solid porogen like SCPL. First, a synthetic polymer is dissolved into a suitable solvent (e.g. polylactic acid in dichloromethane), then water is added to the polymeric solution and the two liquids are mixed in order to obtain an emulsion. Before the two phases can separate, the emulsion is cast into a mold and quickly frozen by means of immersion into liquid nitrogen. The frozen emulsion is subsequently freeze-dried to remove the dispersed water and the solvent, thus leaving a solidified, porous polymeric structure. While emulsification combined with freeze-drying allows for faster preparation than SCPL (since it does not require a time-consuming leaching step), it still requires the use of solvents. Moreover, pore size is relatively small and porosity is often irregular. Freeze-drying by itself is also a commonly employed technique for the fabrication of scaffolds. In particular, it is used to prepare collagen sponges: collagen is dissolved into acidic solutions of acetic acid or hydrochloric acid that are cast into a mold, frozen with liquid nitrogen and then lyophilized.
Thermally induced phase separation
Similar to the previous technique, the TIPS phase separation procedure requires the use of a solvent with a low melting point that is easy to sublime. For example, dioxane could be used to dissolve polylactic acid, then phase separation is induced through the addition of a small quantity of water: a polymer-rich and a polymer-poor phase are formed. Following cooling below the solvent melting point and some days of vacuum-drying to sublime the solvent, a porous scaffold is obtained. Liquid-liquid phase separation presents the same drawbacks as emulsification/freeze-drying.
Electrospinning
Electrospinning is a highly versatile technique that can be used to produce continuous fibers ranging in diameter from a few microns to a few nanometers. In a typical electrospinning set-up, the desired scaffold material is dissolved within a solvent and placed within a syringe. This solution is fed through a needle and a high voltage is applied to the tip and to a conductive collection surface. The buildup of electrostatic forces within the solution causes it to eject a thin fibrous stream towards the oppositely charged or grounded collection surface. During this process the solvent evaporates, leaving solid fibers that form a highly porous network. This technique is highly tunable, with variation to solvent, voltage, working distance (distance from the needle to collection surface), flow rate of solution, solute concentration, and collection surface. This allows for precise control of fiber morphology.
At commercial scale, for throughput reasons, 40 or sometimes 96 needles operate at once. The bottlenecks in such set-ups are (1) maintaining the above variables uniformly across all needles and (2) the formation of "beads" along fibers that are intended to have a uniform diameter. By modifying variables such as the distance to the collector, the magnitude of the applied voltage, or the solution flow rate, researchers can dramatically change the overall scaffold architecture.
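To make the tunable variables listed above concrete, the sketch below collects them into a single configuration record in Python. The class name and all example values are illustrative assumptions, not recommended process settings.

from dataclasses import dataclass

# Configuration record grouping the electrospinning parameters named in the text.
@dataclass
class ElectrospinningConfig:
    solvent: str                    # solvent the scaffold material is dissolved in
    solute_concentration_wt: float  # polymer concentration, wt%
    voltage_kV: float               # potential applied between needle tip and collector
    working_distance_cm: float      # needle-to-collector distance
    flow_rate_mL_per_h: float       # syringe pump feed rate
    collector: str                  # type of collection surface

example = ElectrospinningConfig(
    solvent="dichloromethane",        # assumed
    solute_concentration_wt=10.0,     # assumed
    voltage_kV=15.0,                  # assumed
    working_distance_cm=15.0,         # assumed
    flow_rate_mL_per_h=1.0,           # assumed
    collector="grounded flat plate",  # assumed
)
print(example)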
Historically, research on electrospun fibrous scaffolds dates back to at least the late 1980s when Simon showed that electrospinning could be used to produce nano- and submicron-scale fibrous scaffolds from polymer solutions specifically intended for use as in vitro cell and tissue substrates. This early use of electrospun lattices for cell culture and tissue engineering showed that various cell types would adhere to and proliferate upon polycarbonate fibers. It was noted that as opposed to the flattened morphology typically seen in 2D culture, cells grown on the electrospun fibers exhibited a more rounded 3-dimensional morphology generally observed of tissues in vivo.
CAD/CAM technologies
Because most of the above techniques are limited when it comes to the control of porosity and pore size, computer assisted design and manufacturing techniques have been introduced to tissue engineering. First, a three-dimensional structure is designed using CAD software. The porosity can be tailored using algorithms within the software. The scaffold is then realized by using ink-jet printing of polymer powders or through Fused Deposition Modeling of a polymer melt.
A 2011 study by El-Ayoubi et al. investigated a "3D-plotting technique to produce (biocompatible and biodegradable) poly-L-Lactide macroporous scaffolds with two different pore sizes" via solid free-form fabrication (SFF) with computer-aided design (CAD), to explore therapeutic articular cartilage replacement as an "alternative to conventional tissue repair". The study found that the smaller the pore size, paired with mechanical stress in a bioreactor (to induce in vivo-like conditions), the higher the cell viability, suggesting potential therapeutic functionality by decreasing recovery time and increasing transplant effectiveness.
Laser-assisted bioprinting
In a 2012 study, Koch et al. focused on whether Laser-assisted BioPrinting (LaBP) can be used to build multicellular 3D patterns in a natural matrix, and whether the generated constructs function and form tissue. LaBP arranges small volumes of living cell suspensions in set high-resolution patterns. The investigation was successful; the researchers foresee that "generated tissue constructs might be used for in vivo testing by implanting them into animal models" (14). As of this study, only human skin tissue has been synthesized, though researchers project that "by integrating further cell types (e.g. melanocytes, Schwann cells, hair follicle cells) into the printed cell construct, the behavior of these cells in a 3D in vitro microenvironment similar to their natural one can be analyzed", which is useful for drug discovery and toxicology studies.
Self-assembled recombinant spider silk nanomembranes
Gustafsson et al. demonstrated free-standing, bioactive membranes of cm-sized area but only 250 nm thickness, formed by self-assembly of spider silk at the interface of an aqueous solution. The membranes uniquely combine nanoscale thickness, biodegradability, ultrahigh strain and strength, and permeability to proteins, and they promote rapid cell adherence and proliferation. The authors demonstrated growing a coherent layer of keratinocytes. These spider silk nanomembranes have also been used to create a static in-vitro model of a blood vessel.
Tissue engineering in situ
In situ tissue regeneration is defined as the implantation of biomaterials (alone or in combination with cells and/or biomolecules) into the tissue defect, using the surrounding microenvironment of the organism as a natural bioreactor. This approach has found application in bone regeneration, allowing the formation of cell-seeded constructs directly in the operating room.
Assembly methods
A persistent problem within tissue engineering is mass transport limitations. Engineered tissues generally lack an initial blood supply, thus making it difficult for any implanted cells to obtain sufficient oxygen and nutrients to survive, or function properly.
Self-assembly
Self-assembly methods have been shown to be promising methods for tissue engineering. Self-assembly methods have the advantage of allowing tissues to develop their own extracellular matrix, resulting in tissue that better recapitulates biochemical and biomechanical properties of native tissue. Self-assembling engineered articular cartilage was introduced by Jerry Hu and Kyriacos A. Athanasiou in 2006 and applications of the process have resulted in engineered cartilage approaching the strength of native tissue. Self-assembly is a prime technology to get cells grown in a lab to assemble into three-dimensional shapes. To break down tissues into cells, researchers first have to dissolve the extracellular matrix that normally binds them together. Once cells are isolated, they must form the complex structures that make up our natural tissues.
Liquid-based template assembly
The air-liquid surface established by Faraday waves is explored as a template to assemble biological entities for bottom-up tissue engineering. This liquid-based template can be dynamically reconfigured in a few seconds, and the assembly on the template can be achieved in a scalable and parallel manner. Assembly of microscale hydrogels, cells, neuron-seeded micro-carrier beads, and cell spheroids into various symmetrical and periodic structures was demonstrated with good cell viability. Formation of a 3-D neural network was achieved after 14 days of tissue culture.
Additive manufacturing
It might be possible to print organs, or possibly entire organisms using additive manufacturing techniques. A recent innovative method of construction uses an ink-jet mechanism to print precise layers of cells in a matrix of thermo-reversible gel. Endothelial cells, the cells that line blood vessels, have been printed in a set of stacked rings. When incubated, these fused into a tube. This technique has been referred to as "bioprinting" within the field as it involves the printing of biological components in a structure resembling the organ of focus.
The field of three-dimensional, highly accurate models of biological systems has been pioneered by multiple projects and technologies, including a rapid method for creating tissues and even whole organs that involves a 3-D printer able to bio-print scaffolding and cells layer by layer into a working tissue sample or organ. The device was presented in a TED talk by Dr. Anthony Atala, M.D., Director of the Wake Forest Institute for Regenerative Medicine and W.H. Boyce Professor and Chair of the Department of Urology at Wake Forest University, in which a kidney was printed on stage during the seminar and then presented to the audience. It is anticipated that this technology will enable the production of livers for transplantation in the future, and theoretically for toxicology and other biological studies as well.
In 2015 Multi-Photon Processing (MPP) was employed for in vivo experiments by engineering artificial cartilage constructs. An ex vivo histological examination showed that certain pore geometry and the pre-growing of chondrocytes (Cho) prior to implantation significantly improves the performance of the created 3-D scaffolds. The achieved biocompatibility was comparable to the commercially available collagen membranes. The successful outcome of this study supports the idea that hexagonal-pore-shaped hybrid organic-inorganic micro-structured scaffolds in combination with Cho seeding may be successfully implemented for cartilage tissue engineering.
Recently, tissue engineering has advanced with a focus on vascularization. Using Two-Photon Polymerization-based additive manufacturing, synthetic 3D microvessel networks are created from tubular hydrogel structures. These networks can perfuse tissues several cubic millimeters in size, enabling long-term viability and cell growth in vitro. This innovation marks a significant step forward in tissue engineering, facilitating the development of complex human tissue models.
Scaffolding
In 2013, using a 3-D scaffolding of Matrigel in various configurations, substantial pancreatic organoids were produced in vitro. Clusters of small numbers of cells proliferated into 40,000 cells within one week. The clusters transformed into cells that make either digestive enzymes or hormones like insulin, self-organizing into branched pancreatic organoids that resemble the pancreas.
The cells are sensitive to the environment, such as gel stiffness and contact with other cells. Individual cells do not thrive; a minimum of four proximate cells was required for subsequent organoid development. Modifications to the medium composition produced either hollow spheres mainly composed of pancreatic progenitors, or complex organoids that spontaneously undergo pancreatic morphogenesis and differentiation. Maintenance and expansion of pancreatic progenitors require active Notch and FGF signaling, recapitulating in vivo niche signaling interactions.
The organoids were seen as potentially offering mini-organs for drug testing and for spare insulin-producing cells.
Aside from Matrigel 3-D scaffolds, other collagen gel systems have been developed. Collagen/hyaluronic acid scaffolds have been used for modeling the mammary gland in vitro while co-culturing epithelial and adipocyte cells. The HyStem kit is another 3-D platform containing ECM components and hyaluronic acid that has been used for cancer research. Additionally, hydrogel constituents can be chemically modified to assist in crosslinking and enhance their mechanical properties.
Tissue culture
In many cases, creation of functional tissues and biological structures in vitro requires extensive culturing to promote survival, growth and inducement of functionality. In general, the basic requirements of cells must be maintained in culture, which include oxygen, pH, humidity, temperature, nutrients and osmotic pressure maintenance.
Tissue engineered cultures also present additional problems in maintaining culture conditions. In standard cell culture, diffusion is often the sole means of nutrient and metabolite transport. However, as a culture becomes larger and more complex, such as the case with engineered organs and whole tissues, other mechanisms must be employed to maintain the culture, such as the creation of capillary networks within the tissue.
Another issue with tissue culture is introducing the proper factors or stimuli required to induce functionality. In many cases, simple maintenance culture is not sufficient. Growth factors, hormones, specific metabolites or nutrients, and chemical and physical stimuli are sometimes required. For example, certain cells respond to changes in oxygen tension as part of their normal development, such as chondrocytes, which must adapt to low oxygen conditions or hypoxia during skeletal development. Others, such as endothelial cells, respond to shear stress from fluid flow, which is encountered in blood vessels. Mechanical stimuli, such as pressure pulses, seem to be beneficial to all kinds of cardiovascular tissue, such as heart valves, blood vessels, or the pericardium.
Bioreactors
In tissue engineering, a bioreactor is a device that attempts to simulate a physiological environment in order to promote cell or tissue growth in vitro. A physiological environment can consist of many different parameters such as temperature, pressure, oxygen or carbon dioxide concentration, or osmolality of fluid environment, and it can extend to all kinds of biological, chemical or mechanical stimuli. Therefore, there are systems that may include the application of forces such as electromagnetic forces, mechanical pressures, or fluid pressures to the tissue. These systems can be two- or three-dimensional setups. Bioreactors can be used in both academic and industry applications. General-use and application-specific bioreactors are also commercially available, which may provide static chemical stimulation or a combination of chemical and mechanical stimulation.
Cell proliferation and differentiation are largely influenced by mechanical and biochemical cues in the surrounding extracellular matrix environment. Bioreactors are typically developed to replicate the specific physiological environment of the tissue being grown (e.g., flex and fluid shearing for heart tissue growth). This can allow specialized cell lines to thrive in cultures replicating their native environments, but it also makes bioreactors attractive tools for culturing stem cells. A successful stem-cell-based bioreactor is effective at expanding stem cells with uniform properties and/or promoting controlled, reproducible differentiation into selected mature cell types.
There are a variety of bioreactors designed for 3D cell cultures. There are small plastic cylindrical chambers, as well as glass chambers, with regulated internal humidity and moisture specifically engineered for the purpose of growing cells in three dimensions. The bioreactor uses bioactive synthetic materials such as polyethylene terephthalate membranes to surround the spheroid cells in an environment that maintains high levels of nutrients. They are easy to open and close, so that cell spheroids can be removed for testing, yet the chamber is able to maintain 100% humidity throughout. This humidity is important to achieve maximum cell growth and function. The bioreactor chamber is part of a larger device that rotates to ensure equal cell growth in each direction across three dimensions.
QuinXell Technologies (now under Quintech Life Sciences) of Singapore has developed a bioreactor known as the TisXell Biaxial Bioreactor, which is specially designed for tissue engineering. It is the first bioreactor in the world to have a spherical glass chamber with biaxial rotation, specifically to mimic the rotation of the fetus in the womb, which provides a conducive environment for the growth of tissues.
Multiple forms of mechanical stimulation have also been combined into a single bioreactor. Using gene expression analysis, one academic study found that applying a combination of cyclic strain and ultrasound stimulation to pre-osteoblast cells in a bioreactor accelerated matrix maturation and differentiation. The technology of this combined stimulation bioreactor could be used to grow bone cells more quickly and effectively in future clinical stem cell therapies.
MC2 Biotek has also developed a bioreactor known as ProtoTissue that uses gas exchange to maintain high oxygen levels within the cell chamber, improving upon previous bioreactors, since the higher oxygen levels help the cells grow and undergo normal cell respiration.
Active areas of research on bioreactors include increasing production scale and refining the physiological environment, both of which could improve the efficiency and efficacy of bioreactors in research or clinical use. Bioreactors are currently used to study, among other things, cell- and tissue-level therapies, cell and tissue responses to specific physiological environment changes, and the development of disease and injury.
Long fiber generation
In 2013, a group from the University of Tokyo developed cell-laden fibers up to a meter in length and on the order of 100 μm in diameter. These fibers were created using a microfluidic device that forms a double coaxial laminar flow, with each 'layer' of the device carrying a different component: cells seeded in ECM, a hydrogel sheath, and finally a calcium chloride solution. The seeded cells are cultured within the hydrogel sheath for several days, and the sheath is then removed, leaving viable cell fibers. Various cell types were inserted into the ECM core, including myocytes, endothelial cells, nerve cell fibers, and epithelial cell fibers. This group then showed that these fibers can be woven together to fabricate tissues or organs in a mechanism similar to textile weaving. Fibrous morphologies are advantageous in that they provide an alternative to traditional scaffold design, and many organs (such as muscle) are composed of fibrous cells.
Bioartificial organs
An artificial organ is an engineered device that can be extracorporeal or implanted to support impaired or failing organ systems. Bioartificial organs are typically created with the intent to restore critical biological functions, such as the replacement of diseased hearts and lungs, or to provide drastic quality-of-life improvements, such as the use of engineered skin on burn victims. While some examples of bioartificial organs are still in the research stage of development due to the limitations involved with creating functional organs, others are currently being used in clinical settings experimentally and commercially.
Lung
Extracorporeal membrane oxygenation (ECMO) machines, otherwise known as heart and lung machines, are an adaptation of cardiopulmonary bypass techniques that provide heart and lung support. ECMO is used primarily to support the lungs for a prolonged but still temporary timeframe (1–30 days) and to allow for recovery from reversible diseases. Robert Bartlett is known as the father of ECMO and performed the first treatment of a newborn using an ECMO machine in 1975.
Skin
Tissue-engineered skin is a type of bioartificial organ that is often used to treat burns, diabetic foot ulcers, or other large wounds that cannot heal well on their own. Artificial skin can be made from autografts, allografts, and xenografts. Autografted skin comes from a patient's own skin, which allows the dermis to have a faster healing rate, and the donor site can be re-harvested a few times. Allograft skin often comes from cadaver skin and is mostly used to treat burn victims. Lastly, xenografted skin comes from animals and provides a temporary healing structure for the skin. Xenografts assist in dermal regeneration, but cannot become part of the host skin. Tissue-engineered skin is now available in commercial products. Integra, originally used only to treat burns, consists of a collagen matrix and chondroitin sulfate that can be used as a skin replacement. The chondroitin sulfate functions as a component of proteoglycans, which helps to form the extracellular matrix. Integra can be repopulated and revascularized while maintaining its dermal collagen architecture, making it a bioartificial organ. Dermagraft, another commercially made tissue-engineered skin product, is made of living fibroblasts. These fibroblasts proliferate and produce growth factors, collagen, and ECM proteins that help build granulation tissue.
Heart
Since the number of patients awaiting a heart transplant is continuously increasing over time, and the number of patients on the waiting list surpasses organ availability, artificial organs used as replacement therapy for terminal heart failure would help alleviate this difficulty. Artificial hearts are usually used to bridge to heart transplantation or can be applied as replacement therapy for terminal heart malfunction. The total artificial heart (TAH), first introduced by Dr. Vladimir P. Demikhov in 1937, emerged as an alternative. Since then it has been developed and improved as a mechanical pump that provides long-term circulatory support and replaces diseased or damaged heart ventricles that cannot properly pump blood, thus restoring pulmonary and systemic flow. Current TAHs include AbioCor, an FDA-approved device that comprises two artificial ventricles and their valves, does not require subcutaneous connections, and is indicated for patients with biventricular heart failure. In 2010 SynCardia released the portable Freedom driver, which allows patients to have a portable device without being confined to the hospital.
Kidney
While kidney transplants are possible, renal failure is more often treated using an artificial kidney. The first artificial kidneys and the majority of those currently in use are extracorporeal, such as with hemodialysis, which filters blood directly, or peritoneal dialysis, which filters via a fluid in the abdomen. In order to contribute to the biological functions of a kidney such as producing metabolic factors or hormones, some artificial kidneys incorporate renal cells. There has been progress in the way of making these devices smaller and more transportable, or even implantable. One challenge still to be faced in these smaller devices is countering the limited volume and therefore limited filtering capabilities.
Bioscaffolds have also been introduced to provide a framework upon which normal kidney tissue can be regenerated. These scaffolds encompass natural scaffolds (e.g., decellularized kidneys, collagen hydrogel, or silk fibroin), synthetic scaffolds (e.g., poly[lactic-co-glycolic acid] or other polymers), or a combination of two or more natural and synthetic scaffolds. These scaffolds can be implanted into the body either without cell treatment or after a period of stem cell seeding and incubation. In vitro and in vivo studies are being conducted to compare and optimize the type of scaffold and to assess whether cell seeding prior to implantation adds to the viability, regeneration and effective function of the kidneys. A recent systematic review and meta-analysis compared the results of published animal studies and identified that improved outcomes are reported with the use of hybrid (mixed) scaffolds and cell seeding; however, the meta-analysis of these results was not in agreement with the evaluation of descriptive results from the review. Therefore, further studies involving larger animals and novel scaffolds, and more transparent reproduction of previous studies, are advisable.
Biomimetics
Biomimetics is a field that aims to produce materials and systems that replicate those present in nature. In the context of tissue engineering, this is a common approach used by engineers to create materials for these applications that are comparable to native tissues in terms of their structure, properties, and biocompatibility. Material properties are largely dependent on physical, structural, and chemical characteristics of that material. Subsequently, a biomimetic approach to system design will become significant in material integration, and a sufficient understanding of biological processes and interactions will be necessary. Replication of biological systems and processes may also be used in the synthesis of bio-inspired materials to achieve conditions that produce the desired biological material. Therefore, if a material is synthesized with the same structural and chemical characteristics as biological tissues, then ideally the synthesized material will have similar properties. This technique has an extensive history originating from the idea of using natural phenomena as design inspiration for solutions to human problems. Many modern advancements in technology have been inspired by nature and natural systems, including aircraft, automobiles, architecture, and even industrial systems. Advancements in nanotechnology initiated the application of this technique to micro- and nano-scale problems, including tissue engineering. This technique has been used to develop synthetic bone tissues, vascular technologies, scaffolding materials and integration techniques, and functionalized nanoparticles.
Constructing neural networks in soft material
In 2018, scientists at Brandeis University reported their research on soft material embedded with chemical networks which can mimic the smooth and coordinated behavior of neural tissue. This research was funded by the U.S. Army Research Laboratory. The researchers presented an experimental system of neural networks, theoretically modeled as reaction-diffusion systems. Within the networks was an array of patterned reactors, each performing the Belousov-Zhabotinsky (BZ) reaction. These reactors could function on a nanoliter scale.
The researchers state that the inspiration for their project was the movement of the blue ribbon eel. The eel's movements are controlled by electrical impulses determined by a class of neural networks called the central pattern generator. Central pattern generators function within the autonomic nervous system to control bodily functions such as respiration, movement, and peristalsis.
The designed qualities of the reactor included the network topology, boundary conditions, initial conditions, reactor volume, coupling strength, and the synaptic polarity of the reactor (whether its behavior is inhibitory or excitatory). A BZ emulsion system with a solid elastomer, polydimethylsiloxane (PDMS), was designed. PDMS that is permeable to both light and bromine has been reported as a viable way to create a pacemaker for neural networks.
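A minimal illustration of how such design choices (topology, coupling strength, excitatory versus inhibitory polarity) shape collective behaviour can be given with a ring of coupled phase oscillators. The sketch below is a generic toy model, not the Brandeis group's BZ reactor network; every parameter value in it is an illustrative assumption.

import math

N = 8                 # number of "reactors" in the ring (illustrative)
K = 0.5               # coupling strength (illustrative)
omega = 1.0           # common natural frequency, rad per time unit (illustrative)
polarity = [+1] * N   # +1 excitatory; set entries to -1 to make a reactor inhibitory
dt = 0.01

phase = [0.3 * i for i in range(N)]   # initial conditions, spread over a small arc

def step(phase):
    """One Euler step; each oscillator is nudged by its two ring neighbours."""
    new = []
    for i in range(N):
        left, right = phase[(i - 1) % N], phase[(i + 1) % N]
        coupling = K * polarity[i] * (math.sin(left - phase[i]) + math.sin(right - phase[i]))
        new.append(phase[i] + dt * (omega + coupling))
    return new

for _ in range(20000):
    phase = step(phase)

# Spread of phases (max - min) after the run; with all-excitatory coupling the
# oscillators pull towards a common phase, while flipping entries of `polarity`
# to -1, or changing the topology or coupling strength, gives other collective patterns.
print(max(phase) - min(phase))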
Market
The history of the tissue engineering market can be divided into three major parts: the time before the crash of the biotech market in the early 2000s, the crash itself, and the period afterward.
Beginning
Most early progress in tissue engineering research was made in the US. This is due to less strict regulations regarding stem cell research and more available funding than in other countries. This led to the creation of academic startups, many of them coming from Harvard or MIT. Examples are BioHybrid Technologies, whose founder, Bill Chick, went to Harvard Medical School and focused on the creation of an artificial pancreas, and Organogenesis Inc., whose founder went to MIT and worked on skin engineering products. Other companies with links to MIT are TEI Biosciences, Therics and Guilford Pharmaceuticals. The renewed interest in biotechnologies in the 1980s led many private investors to invest in these new technologies, even though the business models of these early startups were often not very clear and did not present a path to long-term profitability. Government sponsors were more restrained in their funding, as tissue engineering was considered a high-risk investment.
In the UK the market got off to a slower start even though regulations on stem cell research were likewise not strict. This is mainly due to investors being less willing to invest in these new technologies, which were considered to be high-risk investments. Another problem faced by British companies was getting the NHS to pay for their products, especially because the NHS runs a cost-effectiveness analysis on all supported products. Novel technologies often do not do well in this respect.
In Japan, the regulatory situation was quite different. First, cell cultivation was only allowed in a hospital setting, and second, academic scientists employed by state-owned universities were not allowed outside employment until 1998. Moreover, the Japanese authorities took longer to approve new drugs and treatments than their US and European counterparts.
For these reasons, in the early days of the Japanese market the focus was mainly on getting products that were already approved elsewhere approved in Japan and selling them. In contrast to the US market, the early actors in Japan were mainly big firms or sub-companies of such big firms, such as J-TEC, Menicon and Terumo, rather than small startups. After regulatory changes in 2014, which allowed cell cultivation outside of a hospital setting, the speed of research in Japan increased and Japanese companies also started to develop their own products.
Crash
Soon after the big boom, the first problems started to appear. There were problems getting products approved by the FDA, and if they were approved, there were often difficulties in getting insurance providers to pay for the products and in getting them accepted by health care providers.
For example, Organogenesis ran into problems marketing its product and integrating it into the health system. This was partially due to the difficulties of handling living cells and the increased difficulties faced by physicians in using these products over conventional methods.
Another example was Advanced Tissue Sciences' Dermagraft skin product, which could not generate high enough demand without reimbursements from insurance providers. Reasons for this were its $4,000 price tag and the fact that Advanced Tissue Sciences struggled to get its product known by physicians.
The above examples demonstrate how companies struggled to make a profit. This, in turn, led investors to lose patience and stop further funding. In consequence, several tissue engineering companies, such as Organogenesis and Advanced Tissue Sciences, filed for bankruptcy in the early 2000s. At the time, these were the only companies with commercial skin products on the market.
Reemergence
The technologies of the bankrupt or struggling companies were often bought by other companies which continued the development under more conservative business models. Examples of companies who sold their products after folding were Curis and Intercytex.
Many of the companies abandoned their long-term goals of developing fully functional organs in favor of products and technologies that could turn a profit in the short run. Examples of these kinds of products are products in the cosmetic and testing industry.
In other cases such as in the case of Advanced Tissue Sciences, the founders started new companies.
In the 2010s the regulatory framework also started to facilitate faster time to market, especially in the US, as the FDA created new centres and pathways specifically aimed at products derived from living cells, such as the Center for Biologics Evaluation and Research.
The first tissue engineering products started to get commercially profitable in the 2010s.
Regulation
In Europe, regulation is currently split into three areas of regulation: medical devices, medicinal products, and biologics. Tissue engineering products are often of hybrid nature, as they are often composed of cells and a supporting structure. While some products can be approved as medicinal products, others need to gain approval as medical devices. Derksen explains in her thesis that tissue engineering researchers are sometimes confronted with regulation that does not fit the characteristics of tissue engineering.
New regulatory regimes have been observed in Europe that tackle these issues. An explanation for the difficulties in finding regulatory consensus in this matter is given by a survey conducted in the UK. The authors attribute these problems to the close relatedness of, and overlap with, other technologies such as xenotransplantation; tissue engineering can therefore not be handled separately by regulatory bodies. Regulation is further complicated by the ethical controversies associated with this and related fields of research (e.g. stem cells controversy, ethics of organ transplantation). The same survey, using the example of autologous cartilage transplantation, shows that a specific technology can be regarded as either 'pure' or 'polluted' by the same social actor.
Two regulatory movements are most relevant to tissue engineering in the European Union. These are Directive 2004/23/EC on standards of quality and safety for the sourcing and processing of human tissues which was adopted by the European Parliament in 2004 and a proposed Human Tissue-Engineered Products regulation. The latter was developed under the auspices of the European Commission DG Enterprise and presented in Brussels in 2004.
See also
Biomedical engineering
Biological engineering
Biomolecular engineering
Biochemical engineering
Cell engineering
Chemical engineering
ECM Biomaterial
In vivo bioreactor
Induced stem cells
Molecular processor
Molecular self-assembly
Muscle tissue engineering
National Institutes of Health
National Science Foundation
Quality control in tissue engineering
Regeneration in humans
Soft tissues
Thiomers
Tissue Engineering and Regenerative Medicine International Society
Tissue engineering of heart valves
Xenotransplantation
Notes
References
External links
Cell-Based Bone Tissue Engineering
Clinical Tissue Engineering Center State of Ohio Initiative for Tissue Engineering (National Center for Regenerative Medicine)
Organ Printing Multi-site NSF-funded initiative
LOEX Center Université Laval Initiative for Tissue Engineering
Cell culture techniques
Biomedical engineering | Tissue engineering | [
"Chemistry",
"Engineering",
"Biology"
] | 12,269 | [
"Biochemistry methods",
"Biological engineering",
"Biomedical engineering",
"Cloning",
"Chemical engineering",
"Cell culture techniques",
"Tissue engineering",
"Medical technology"
] |
307,155 | https://en.wikipedia.org/wiki/Primitive%20equations | The primitive equations are a set of nonlinear partial differential equations that are used to approximate global atmospheric flow and are used in most atmospheric models. They consist of three main sets of balance equations:
A continuity equation: Representing the conservation of mass.
Conservation of momentum: Consisting of a form of the Navier–Stokes equations that describe hydrodynamical flow on the surface of a sphere under the assumption that vertical motion is much smaller than horizontal motion (hydrostasis) and that the fluid layer depth is small compared to the radius of the sphere
A thermal energy equation: Relating the overall temperature of the system to heat sources and sinks
The primitive equations may be linearized to yield Laplace's tidal equations, an eigenvalue problem from which the analytical solution to the latitudinal structure of the flow may be determined.
In general, nearly all forms of the primitive equations relate the five variables u, v, ω, T, W, and their evolution over space and time.
The equations were first written down by Vilhelm Bjerknes.
Definitions
u is the zonal velocity (velocity in the east–west direction tangent to the sphere)
v is the meridional velocity (velocity in the north–south direction tangent to the sphere)
ω is the vertical velocity in isobaric coordinates
T is the temperature
Φ is the geopotential
f is the term corresponding to the Coriolis force, and is equal to 2Ω sin(φ), where Ω is the angular rotation rate of the Earth (π/12 radians per sidereal hour) and φ is the latitude
R is the gas constant
p is the pressure
ρ is the density
c_p is the specific heat on a constant pressure surface
J is the heat flow per unit time per unit mass
W is the precipitable water
π is the Exner function
θ is the potential temperature
η is the absolute vorticity
Forces that cause atmospheric motion
Forces that cause atmospheric motion include the pressure gradient force, gravity, and viscous friction. Together, they create the forces that accelerate our atmosphere.
The pressure gradient force causes an acceleration forcing air from regions of high pressure to regions of low pressure. Mathematically, this can be written as:
The gravitational force accelerates objects at approximately 9.8 m/s2 directly towards the center of the Earth.
The force due to viscous friction can be approximated as:
Using Newton's second law, these forces (referenced in the equations above as the accelerations due to these forces) may be summed to produce an equation of motion that describes this system. This equation can be written in the form:
Therefore, to complete the system of equations and obtain 6 equations and 6 variables:
where n is the number density in mol, and T:=RT is the temperature equivalent value in Joule/mol.
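For reference, the per-unit-mass force terms and the resulting equation of motion described in this section can be sketched in standard textbook notation as follows; the symbols v (velocity vector), g (gravitational acceleration) and ν (kinematic viscosity) are assumptions of this sketch rather than necessarily the original notation, and in the rotating frame a Coriolis term is added, as in the momentum equations of the next section.

\frac{D\mathbf{v}}{Dt} \approx \underbrace{-\frac{1}{\rho}\nabla p}_{\text{pressure gradient}} + \underbrace{\mathbf{g}}_{\text{gravity}} + \underbrace{\nu\,\nabla^{2}\mathbf{v}}_{\text{viscous friction}}, \qquad p = n\,\tilde{T}, \quad \tilde{T} := RT

The last relation is the ideal gas law written with the molar density n and the temperature-equivalent value mentioned above.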
Forms of the primitive equations
The precise form of the primitive equations depends on the vertical coordinate system chosen, such as pressure coordinates, log pressure coordinates, or sigma coordinates. Furthermore, the velocity, temperature, and geopotential variables may be decomposed into mean and perturbation components using Reynolds decomposition.
Pressure coordinate in vertical, Cartesian tangential plane
In this form pressure is selected as the vertical coordinate and the horizontal coordinates are written for the Cartesian tangential plane (i.e. a plane tangent to some point on the surface of the Earth). This form does not take the curvature of the Earth into account, but is useful for visualizing some of the physical processes involved in formulating the equations due to its relative simplicity.
Note that the capital D time derivatives are material derivatives. Five equations in five unknowns comprise the system.
the inviscid (frictionless) momentum equations:
the hydrostatic equation, a special case of the vertical momentum equation in which vertical acceleration is considered negligible:
the continuity equation, connecting horizontal divergence/convergence to vertical motion under the hydrostatic approximation:
and the thermodynamic energy equation, a consequence of the first law of thermodynamics
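For orientation, a widely used textbook form of this five-equation set in pressure coordinates is sketched below, written for the five unknowns u, v, ω, Φ and T, with D/Dt the material derivative and the other symbols as in the Definitions section; this is a standard form and may not match the original presentation term for term.

\frac{Du}{Dt} - f v = -\frac{\partial \Phi}{\partial x}, \qquad \frac{Dv}{Dt} + f u = -\frac{\partial \Phi}{\partial y}

\frac{\partial \Phi}{\partial p} = -\frac{RT}{p}, \qquad \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial \omega}{\partial p} = 0, \qquad \frac{DT}{Dt} - \frac{RT}{c_p\,p}\,\omega = \frac{J}{c_p}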
When a statement of the conservation of water vapor substance is included, these six equations form the basis for any numerical weather prediction scheme.
Primitive equations using sigma coordinate system, polar stereographic projection
According to the National Weather Service Handbook No. 1 – Facsimile Products, the primitive equations can be simplified into the following equations:
Zonal wind:
Meridional wind:
Temperature:
The first term is equal to the change in temperature due to incoming solar radiation and outgoing longwave radiation, which changes with time throughout the day. The second, third, and fourth terms are due to advection. Additionally, the variable T with subscript is the change in temperature on that plane. Each T is actually different and related to its respective plane. This is divided by the distance between grid points to get the change in temperature with the change in distance. When multiplied by the wind velocity on that plane, the units kelvins per meter and meters per second give kelvins per second. The sum of all the changes in temperature due to motions in the x, y, and z directions give the total change in temperature with time.
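A minimal numerical sketch of this bookkeeping is given below: the local temperature tendency is approximated as wind speed times the finite-difference temperature gradient between neighbouring grid points, summed over the horizontal directions (the radiative and vertical terms are omitted for brevity). The grid spacing, wind components and temperatures are illustrative values, not output from any real model.

# Finite-difference temperature advection: K/m times m/s gives K/s, as described above.
dx, dy = 50_000.0, 50_000.0          # grid spacing in metres (assumed)
u, v = 10.0, -5.0                    # zonal and meridional wind, m/s (assumed)
T_east, T_west = 284.0, 285.0        # temperatures at neighbouring grid points, K
T_north, T_south = 283.5, 284.5

# Horizontal temperature gradients (kelvins per metre) via centred differences.
dT_dx = (T_east - T_west) / (2 * dx)
dT_dy = (T_north - T_south) / (2 * dy)

# Advective temperature tendency: -(u dT/dx + v dT/dy), in kelvins per second.
tendency = -(u * dT_dx + v * dT_dy)
print(f"{tendency:.2e} K/s, i.e. {tendency * 3600:.3f} K per hour")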
Precipitable water:
This equation and notation works in much the same way as the temperature equation. This equation describes the motion of water from one place to another at a point without taking into account water that changes form. Inside a given system, the total change in water with time is zero. However, concentrations are allowed to move with the wind.
Pressure thickness:
These simplifications make it much easier to understand what is happening in the model. Things like the temperature (potential temperature), precipitable water, and to an extent the pressure thickness simply move from one spot on the grid to another with the wind. The wind is forecast slightly differently. It uses geopotential, specific heat, the Exner function π, and change in sigma coordinate.
Solution to the linearized primitive equations
The analytic solution to the linearized primitive equations involves a sinusoidal oscillation in time and longitude, modulated by coefficients related to height and latitude.
where s and σ are the zonal wavenumber and angular frequency, respectively. The solution represents atmospheric waves and tides.
When the coefficients are separated into their height and latitude components, the height dependence takes the form of propagating or evanescent waves (depending on conditions), while the latitude dependence is given by the Hough functions.
This analytic solution is only possible when the primitive equations are linearized and simplified. Unfortunately many of these simplifications (i.e. no dissipation, isothermal atmosphere) do not correspond to conditions in the actual atmosphere. As a result, a numerical solution which takes these factors into account is often calculated using general circulation models and climate models.
See also
Barometric formula
Climate model
Euler equations
Fluid dynamics
General circulation model
Numerical weather prediction
References
Beniston, Martin. From Turbulence to Climate: Numerical Investigations of the Atmosphere with a Hierarchy of Models. Berlin: Springer, 1998.
Firth, Robert. Mesoscale and Microscale Meteorological Model Grid Construction and Accuracy. LSMSA, 2006.
Thompson, Philip. Numerical Weather Analysis and Prediction. New York: The Macmillan Company, 1961.
Pielke, Roger A. Mesoscale Meteorological Modeling. Orlando: Academic Press, Inc., 1984.
U.S. Department of Commerce, National Oceanic and Atmospheric Administration, National Weather Service. National Weather Service Handbook No. 1 – Facsimile Products. Washington, DC: Department of Commerce, 1979.
External links
National Weather Service – NCSU
Collaborative Research and Training Site, Review of the Primitive Equations.
Partial differential equations
Equations of fluid dynamics
Numerical climate and weather models
Atmospheric dynamics | Primitive equations | [
"Physics",
"Chemistry",
"Environmental_science"
] | 1,550 | [
"Equations of fluid dynamics",
"Atmospheric models",
"Atmospheric dynamics",
"Equations of physics",
"Environmental modelling",
"Fluid dynamics"
] |
307,552 | https://en.wikipedia.org/wiki/Bio-based%20material | A bio-based material is a material intentionally made, either wholly or partially, from substances derived from living (or once-living) organisms, such as plants, animals, enzymes, and microorganisms, including bacteria, fungi and yeast.
Because they are renewable and store carbon as they grow, bio-based materials have seen an upsurge in recent years as a valid alternative to more traditional materials in the context of climate mitigation.
More specifically, in the European context, the European Union, which has set 2050 as a target date for reaching climate neutrality, is trying to promote, among other measures, the production and utilization of bio-based materials in many diverse sectors. Indeed, several European regulations, such as the European Industrial Strategy, the EU Biotechnology and Biomanufacturing Initiative and the Circular Action Plan, emphasize bio-materials. These regulations aim to support innovation, investment, and market adoption of bio-materials while enhancing the transition towards a circular economy where resources are used more efficiently. In this regard, the application of bio-based materials has already been tested in several market segments, ranging from the production of chemicals, to packaging and textiles, to the fabrication of full construction components.
Bio-based materials can differ depending on the origin of the biomass from which they are mostly constituted. Moreover, they can be manufactured differently, resulting in either simple or more complex engineered bio-products, which can be used for many applications. Among processed materials, it is possible to distinguish between bio-based polymers, bio-based plastics, bio-based chemical fibres, bio-based leather, bio-based rubber, bio-based coatings, bio-based material additives, and bio-based composites. Unprocessed materials, instead, may be called biotic material.
Bio-based, organic, and bio-degradable materials
Bio-based materials vs. biodegradable materials
Bio-based materials are often biodegradable, but this is not always the case.
By definition, biodegradable materials are formed of organic compounds which can be broken down by living organisms, such as bacteria, fungi, or water molds, and reabsorbed by the natural environment.
Whether a material is biodegradable is determined by its chemical structure, not the origin of the material from which it is made. Indeed, the sustainability benefits of drop-in biobased plastics occur at the beginning of the material life cycle, but still, when manufactured, their structure is identical to that of their fossil-based counterparts. Therefore, these plastics, known as 'drop-ins', are not biodegradable, and should be recycled in existing recycling systems.
In this regard, biodegradability does not support circularity unless biodegradable materials are recovered and processed by a system that can either recapture or upgrade their value. Ensuring a proper infrastructure for these materials to remain in the material management system, for instance through industrial composting or anaerobic digestion, is thus considered to be essential.
Bio-based materials vs. organic materials
Similarly, bio-based materials are not necessarily organic, as the term "bio-based" simply indicates the material origin. The term "organic" instead refers to the cultivation of plants or the keeping of the animals in compliance with the requirements of the European organic farming standard. Consequently, a bio-product can be both "bio-based" and "organic," but it is not necessarily so.
Bio-based materials vs. fossil-based materials
Bio-based materials do not necessarily perform better than fossil-based materials.
Their environmental performance depends on a series of factors related to the sourced material and to the number and type of manufacturing processes the raw natural material needs to undergo to become a bio-product.
Among the main factors influencing the sustainability of bio-materials are land consumption, competition with food production for land, and soil depletion. In this regard, in the European context many studies have been conducted to analyze the actual availability of land for the production of bio-materials, while bio-residues and wastes coming from the agro-industrial and forestry sectors are gaining interest.
Moreover, manufacturing processes needed for the production of competitive bio-alternatives to fossil-based products might lead to higher energy consumption or to "linear", non-circular products. Therefore, it is recommended to maintain a critical mindset based on life cycle assessment, as some bio-products could require extra material or processing to ensure the same quality, necessarily resulting in more energy consumption.
See also
Bio-based building materials
References
Biomaterials
Green chemistry | Bio-based material | [
"Physics",
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 948 | [
"Biomaterials",
"Green chemistry",
"Chemical engineering",
"Environmental chemistry",
"Materials",
"nan",
"Matter",
"Medical technology"
] |
307,809 | https://en.wikipedia.org/wiki/C-reactive%20protein | C-reactive protein (CRP) is an annular (ring-shaped) pentameric protein found in blood plasma, whose circulating concentrations rise in response to inflammation. It is an acute-phase protein of hepatic origin that increases following interleukin-6 secretion by macrophages and T cells. Its physiological role is to bind to lysophosphatidylcholine expressed on the surface of dead or dying cells (and some types of bacteria) in order to activate the complement system via C1q.
CRP is synthesized by the liver in response to factors released by macrophages, T cells and fat cells (adipocytes). It is a member of the pentraxin family of proteins. It is not related to C-peptide (insulin) or protein C (blood coagulation). C-reactive protein was the first pattern recognition receptor (PRR) to be identified.
History and etymology
Discovered by Tillett and Francis in 1930, it was initially thought that CRP might be a pathogenic secretion since it was elevated in a variety of illnesses, including cancer. The later discovery of hepatic synthesis (made in the liver) demonstrated that it is a native protein. Initially, CRP was measured using the quellung reaction which gave a positive or a negative result. More precise methods nowadays use dynamic light scattering after reaction with CRP-specific antibodies.
CRP was so named because it was first identified as a substance in the serum of patients with acute inflammation that reacted with the cell wall polysaccharide (C-polysaccharide) of pneumococcus.
Genetics and structure
It is a member of the small pentraxins family (also known as short pentraxins). The polypeptide encoded by this gene has 224 amino acids. The full-length polypeptide is not present in the body in significant quantities due to the signal peptide, which is removed by signal peptidase before translation is completed. The complete protein, composed of five monomers, has a total mass of approximately 120,000 Da. In serum, it assembles into a stable pentameric structure with a discoid shape.
Function
CRP binds to the phosphocholine expressed on the surface of bacterial cells such as pneumococcus bacteria. This activates the complement system, promoting phagocytosis by macrophages, which clears necrotic and apoptotic cells and bacteria. With this mechanism, CRP also binds to ischemic/hypoxic cells, which could regenerate with more time. However, the binding of CRP causes them to be disposed of prematurely. CRP binds to the Fc-gamma receptor IIa, to which IgG isotype antibodies also bind. In addition, CRP activates the classical complement pathway via C1q binding. CRP thus forms immune complexes in the same way as IgG antibodies.
This so-called acute phase response occurs as a result of increasing concentrations of interleukin-6 (IL-6), which is produced by macrophages as well as adipocytes in response to a wide range of acute and chronic inflammatory conditions such as bacterial, viral, or fungal infections; rheumatic and other inflammatory diseases; malignancy; and tissue injury and necrosis. These conditions cause release of IL-6 and other cytokines that trigger the synthesis of CRP and fibrinogen by the liver.
CRP binds to phosphocholine on micro-organisms. It is thought to assist in complement binding to foreign and damaged cells and enhances phagocytosis by macrophages (opsonin-mediated phagocytosis), which express a receptor for CRP. It plays a role in innate immunity as an early defense system against infections.
Serum levels
Measurement methods
Traditional CRP measurement only detected CRP in the range of 10 to 1,000 mg/L, whereas high sensitivity CRP (hs-CRP) detects CRP in the range of 0.5 to 10 mg/L. hs-CRP can detect cardiovascular disease risk when in excess of 3 mg/L, whereas below 1 mg/L would be low risk. Traditional CRP measurement is faster and less costly than hs-CRP, and can be adequate for some applications, such as monitoring hemodialysis patients. Current immunoassay methods for CRP have similar precision to hsCRP performed by nephelometry and could probably replace hsCRP for cardiovascular risk assessment; however, in the United States this would represent off-label use, making it a laboratory-developed test under FDA regulations.
Normal
In healthy adults, the normal concentration of CRP varies between 0.8 mg/L and 3.0 mg/L. However, some healthy adults show elevated CRP up to 10 mg/L. CRP concentrations also increase with age, possibly due to subclinical conditions. There are no seasonal variations in CRP concentrations. Gene polymorphisms of the interleukin-1 family and interleukin-6, and a polymorphic GT repeat of the CRP gene, affect the usual CRP concentrations even when a person does not have any medical illness.
Acute inflammation
When there is a stimulus, the CRP level can increase 10,000-fold from less than 50 μg/L to more than 500 mg/L. Its concentration can increase to 5 mg/L by 6 hours and peak at 48 hours. The plasma half-life of CRP is 19 hours, and is constant in all medical conditions. Therefore, the only factor that affects the blood CRP concentration is its production rate, which increases with inflammation, infection, trauma, necrosis, malignancy, and allergic reactions. Other inflammatory mediators that can increase CRP are TGF beta 1, and tumor necrosis factor alpha. In acute inflammation, CRP can increase as much as 50 to 100 mg/L within 4 to 6 hours in mild to moderate inflammation or an insult such as skin infection, cystitis, or bronchitis. It can double every 8 hours and reaches its peak at 36 to 50 hours following injury or inflammation. CRP between 100 and 500 mg/L is considered highly predictive of inflammation due to bacterial infection. Once inflammation subsides, CRP level falls quickly because of its relatively short half-life.
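A minimal sketch of the kinetics quoted above is given below, treating the acute rise as a doubling roughly every 8 hours and the fall as first-order decay with a 19-hour plasma half-life. The starting concentrations are illustrative values only, and this is not a clinical model.

def rising(c0, hours, doubling_time=8.0):
    """CRP concentration (mg/L) after `hours` of a sustained acute stimulus."""
    return c0 * 2 ** (hours / doubling_time)

def falling(c0, hours, half_life=19.0):
    """CRP concentration (mg/L) `hours` after production has stopped."""
    return c0 * 0.5 ** (hours / half_life)

print(round(rising(5.0, 36), 1))     # a 5 mg/L baseline after 36 h of inflammation -> ~113 mg/L
print(round(falling(200.0, 57), 1))  # 200 mg/L after three half-lives (57 h) -> 25.0 mg/L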
Metabolic inflammation
CRP concentrations between 2 and 10 mg/L are considered a sign of metabolic inflammation: activation of metabolic pathways that cause arteriosclerosis and type II diabetes mellitus.
Clinical significance
Diagnostic use
CRP is used mainly as an inflammation marker. Apart from liver failure, there are few known factors that interfere with CRP production. Interferon alpha inhibits CRP production from liver cells, which may explain the relatively low levels of CRP found during viral infections compared to bacterial infections.
Measuring and charting CRP values can prove useful in determining disease progress or the effectiveness of treatments. ELISA and radial immunodiffusion methods are available for research use, while immunoturbidimetry is used clinically for CRP and nephelometry is typically used for hsCRP. Cutoffs for cardiovascular risk assessment have included:
low: hs-CRP level under 1.0 mg/L
average: between 1.0 and 3.0 mg/L
high: above 3.0 mg/L
Normal levels increase with aging. Higher levels are found in late pregnancy, mild inflammation and viral infections (10–40 mg/L), active inflammation and bacterial infection (40–200 mg/L), and severe bacterial infections and burns (>200 mg/L).
CRP cut-off levels indicating bacterial from non-bacterial illness can vary due to co-morbidities such as malaria, HIV and malnutrition and the stage of disease presentation. In patients presenting to the emergency department with suspected sepsis, a CRP/albumin ratio of less than 32 has a negative predictive value of 89% for ruling out sepsis.
CRP is a more sensitive and accurate reflection of the acute phase response than the ESR (erythrocyte sedimentation rate). ESR may be normal while CRP is elevated. CRP returns to normal more quickly than ESR in response to therapy.
Cardiovascular disease
Recent research suggests that patients with elevated basal levels of CRP are at an increased risk of diabetes, hypertension and cardiovascular disease. A study of over 700 nurses showed that those in the highest quartile of trans fat consumption had blood levels of CRP that were 73% higher than those in the lowest quartile. Although one group of researchers indicated that CRP may be only a moderate risk factor for cardiovascular disease, this study (known as the Reykjavik Study) was found to have some problems for this type of analysis related to the characteristics of the population studied, and there was an extremely long follow-up time, which may have attenuated the association between CRP and future outcomes. Others have shown that CRP can exacerbate ischemic necrosis in a complement-dependent fashion and that CRP inhibition can be a safe and effective therapy for myocardial and cerebral infarcts; this has been demonstrated in animal models and humans.
It has been hypothesized that patients with high CRP levels might benefit from use of statins. This is based on the JUPITER trial, which found that patients with elevated CRP levels but without hyperlipidemia benefited from statins. Statins were selected because they have been proven to reduce levels of CRP. Studies comparing the effects of various statins on hs-CRP revealed similar effects of different statins. A subsequent trial, however, failed to find that CRP was useful for determining statin benefit.
In a meta-analysis of 20 studies involving 1,466 patients with coronary artery disease, CRP levels were found to be reduced after exercise interventions. Among those studies, higher CRP concentrations or poorer lipid profiles before beginning exercise were associated with greater reductions in CRP.
To clarify whether CRP is a bystander or active participant in atherogenesis, a 2008 study compared people with various genetic CRP variants. Those with a high CRP due to genetic variation had no increased risk of cardiovascular disease compared to those with a normal or low CRP. A study published in 2011 shows that CRP is associated with lipid responses to low-fat and high-polyunsaturated fat diets.
Coronary heart disease risk
Arterial damage results from white blood cell invasion and inflammation within the wall. CRP is a general marker for inflammation and infection, so it can be used as a very rough proxy for heart disease risk. Since many things can cause elevated CRP, this is not a very specific prognostic indicator. Nevertheless, a level above 2.4 mg/L has been associated with a doubled risk of a coronary event compared to levels below 1 mg/L; however, the study group in this case consisted of patients who had been diagnosed with unstable angina pectoris; whether elevated CRP has any predictive value of acute coronary events in the general population of all age ranges remains unclear. Currently, C-reactive protein is not recommended as a cardiovascular disease screening test for average-risk adults without symptoms.
The American Heart Association and U.S. Centers for Disease Control and Prevention have defined risk groups as follows:
Low Risk: less than 1.0 mg/L
Average risk: 1.0 to 3.0 mg/L
High risk: above 3.0 mg/L
But hs-CRP is not to be used alone and should be combined with elevated levels of cholesterol, LDL-C, triglycerides, and glucose level. Smoking, hypertension and diabetes also increase the risk level of cardiovascular disease.
Fibrosis and inflammation
Scleroderma, polymyositis, and dermatomyositis elicit little or no CRP response. CRP levels also tend not to be elevated in systemic lupus erythematosus (SLE) unless serositis or synovitis is present. Elevations of CRP in the absence of clinically significant inflammation can occur in kidney failure. CRP level is an independent risk factor for atherosclerotic disease. Patients with high CRP concentrations are more likely to develop stroke, myocardial infarction, and severe peripheral vascular disease. Elevated level of CRP can also be observed in inflammatory bowel disease (IBD), including Crohn's disease and ulcerative colitis.
High levels of CRP have been associated with the point mutation Cys130Arg in the APOE gene, coding for apolipoprotein E, establishing a link between lipid values and the modulation of inflammatory markers.
Cancer
The role of inflammation in cancer is not well understood. Some organs of the body show greater risk of cancer when they are chronically inflamed. While there is an association between increased levels of C-reactive protein and risk of developing cancer, there is no association between genetic polymorphisms influencing circulating levels of CRP and cancer risk.
In a 2004 prospective cohort study on colon cancer risk associated with CRP levels, people with colon cancer had higher average CRP concentrations than people without colon cancer. It can be noted that the average CRP levels in both groups were well within the range of CRP levels usually found in healthy people. However, these findings may suggest that low inflammation level can be associated with a lower risk of colon cancer, concurring with previous studies that indicate anti-inflammatory drugs could lower colon cancer risk.
Obstructive sleep apnea
C-reactive protein (CRP), a marker of systemic inflammation, is also increased in obstructive sleep apnea (OSA). CRP and interleukin-6 (IL-6) levels were significantly higher in patients with OSA compared to obese control subjects. Patients with OSA have higher plasma CRP concentrations that increased corresponding to the severity of their apnea-hypopnea index score. Treatment of OSA with CPAP (continuous positive airway pressure) significantly alleviated the effect of OSA on CRP and IL-6 levels.
Rheumatoid arthritis
In the context of rheumatoid arthritis (RA), CRP is one of the acute phase reactants, whose assessment is defined as part of the joint 2010 ACR/EULAR classification criteria for RA with abnormal levels accounting for a single point within the criteria. Higher levels of CRP are associated with more severe disease and a higher likelihood of radiographic progression. Rheumatoid arthritis associated antibodies together with 14-3-3η YWHAH have been reported to complement CRP in predicting clinical and radiographic outcomes in patients with recent onset inflammatory polyarthritis. Elevated levels of CRP appear to be associated with common comorbidities including cardiovascular disease, metabolic syndrome, diabetes and interstitial lung (pulmonary) disease. Mechanistically, CRP also appears to influence osteoclast activity leading to bone resorption and also stimulates RANKL expression in peripheral blood monocytes.
It has previously been speculated that single-nucleotide polymorphisms in the CRP gene may affect clinical decision-making based on CRP in rheumatoid arthritis, e.g. DAS28 (Disease Activity Score 28 joints). A recent study showed that CRP genotype and haplotype were only marginally associated with serum CRP levels and not associated with the DAS28 score. Thus, DAS28, which is the core parameter for inflammatory activity in RA, can be used for clinical decision-making without adjustment for CRP gene variants.
Viral infections
Blood CRP levels were higher in people with avian flu H7N9 than in those with the more common H1N1 influenza, with a review reporting that severe H1N1 influenza was associated with elevated CRP. In 2020, people infected with COVID-19 in Wuhan, China, had elevated CRP.
Additional images
References
External links
Inflammation, Heart Disease and Stroke: The Role of C-Reactive Protein (American Heart Association)
CRP: analyte monograph - The Association for Clinical Biochemistry and Laboratory Medicine
George Vrousgos, N.D. - Southern Cross University
Biomarkers
Acute-phase proteins
Blood tests
Chemical pathology
Diagnostic cardiology
Diagnostic intensive care medicine
Immunologic tests | C-reactive protein | [
"Chemistry",
"Biology"
] | 3,383 | [
"Blood tests",
"Biomarkers",
"Immunologic tests",
"Biochemistry",
"Chemical pathology"
] |
308,058 | https://en.wikipedia.org/wiki/Magnetic%20declination | Magnetic declination (also called magnetic variation) is the angle between magnetic north and true north at a particular location on the Earth's surface. The angle can change over time due to polar wandering.
Magnetic north is the direction that the north end of a magnetized compass needle points, which corresponds to the direction of the Earth's magnetic field lines. True north is the direction along a meridian towards the geographic North Pole.
Somewhat more formally, Bowditch defines variation as "the angle between the magnetic and geographic meridians at any place, expressed in degrees and minutes east or west to indicate the direction of magnetic north from true north. The angle between magnetic and grid meridians is called grid magnetic angle, grid variation, or grivation."
By convention, declination is positive when magnetic north is east of true north, and negative when it is to the west. Isogonic lines are lines on the Earth's surface along which the declination has the same constant value, and lines along which the declination is zero are called agonic lines. The lowercase Greek letter δ (delta) is frequently used as the symbol for magnetic declination.
The term magnetic deviation is sometimes used loosely to mean the same as magnetic declination, but more correctly it refers to the error in a compass reading induced by nearby metallic objects, such as iron on board a ship or aircraft.
Magnetic declination should not be confused with magnetic inclination, also known as magnetic dip, which is the angle that the Earth's magnetic field lines make with the downward side of the horizontal plane.
Declination change over time and location
Magnetic declination varies both from place to place and with the passage of time. As a traveller cruises the east coast of the United States, for example, the declination varies from 16 degrees west in Maine, to 6 in Florida, to 0 degrees in Louisiana, to 4 degrees east in Texas. The declination at London, UK was one degree west (2014), reducing to zero as of early 2020. Reports of measured magnetic declination for distant locations became commonplace in the 17th century, and Edmund Halley made a map of declination for the Atlantic Ocean in 1700.
In most areas, the spatial variation reflects the irregularities of the flows deep in the Earth; in some areas, deposits of iron ore or magnetite in the Earth's crust may contribute strongly to the declination. Similarly, secular changes to these flows result in slow changes to the field strength and direction at the same point on the Earth.
The magnetic declination in a given area may (most likely will) change slowly over time, possibly as little as 2–2.5 degrees every hundred years or so, depending on where it is measured. For a location close to the pole like Ivujivik, the declination may change by 1 degree every three years. This may be insignificant to most travellers, but can be important if using magnetic bearings from old charts or metes (directions) in old deeds for locating places with any precision.
As an example of how variation changes over time, see the two charts of the same area (western end of Long Island Sound), below, surveyed 124 years apart. The 1884 chart shows a variation of 8 degrees, 20 minutes West. The 2008 chart shows 13 degrees, 15 minutes West.
Determination
Field measurement
The magnetic declination at any particular place can be measured directly by reference to the celestial poles—the points in the heavens around which the stars appear to revolve, which mark the direction of true north and true south. The instrument used to perform this measurement is known as a declinometer.
The approximate position of the north celestial pole is indicated by Polaris (the North Star). In the northern hemisphere, declination can therefore be approximately determined as the difference between the magnetic bearing and a visual bearing on Polaris. Polaris currently traces a circle 0.73° in radius around the north celestial pole, so this technique is accurate to within a degree. At high latitudes a plumb-bob is helpful to sight Polaris against a reference object close to the horizon, from which its bearing can be taken.
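A minimal sketch of this estimate, under the simplifying assumption that Polaris marks true north exactly (so the result is good only to within about a degree), is given below; the compass readings and function name are illustrative.

def declination_from_polaris(compass_bearing_of_polaris):
    """Return declination in degrees, positive east, from a magnetic bearing taken on Polaris."""
    # Polaris marks (approximately) true north, i.e. a true bearing of 0 degrees.
    # If the compass says Polaris is at 352 degrees, true north lies 8 degrees west
    # of magnetic north, so magnetic north is 8 degrees east of true north: +8.
    delta = (0 - compass_bearing_of_polaris) % 360
    return delta - 360 if delta > 180 else delta

print(declination_from_polaris(352))  # +8 (east), accurate only to within ~1 degree
print(declination_from_polaris(10))   # -10 (west)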
Determination from maps
A rough estimate of the local declination (within a few degrees) can be determined from a general isogonic chart of the world or a continent, such as those illustrated above. Isogonic lines are also shown on aeronautical and nautical charts.
Larger-scale local maps may indicate current local declination, often with the aid of a schematic diagram. Unless the area depicted is very small, declination may vary measurably over the extent of the map, so the data may be referred to a specific location on the map. The current rate and direction of change may also be shown, for example in arcminutes per year. The same diagram may show the angle of grid north (the direction of the map's north–south grid lines), which may differ from true north.
On the topographic maps of the U.S. Geological Survey (USGS), for example, a diagram shows the relationship between magnetic north in the area concerned (with an arrow marked "MN") and true north (a vertical line with a five-pointed star at its top), with a label near the angle between the MN arrow and the vertical line, stating the size of the declination and of that angle, in degrees, mils, or both. However, the diagram itself is not an accurate depiction of the stated numerical declination angle, but is intentionally exaggerated by the cartographer for purposes of legibility.
Models and software
Worldwide empirical models of the deep flows described above are available for describing and predicting features of the Earth's magnetic field, including the magnetic declination for any given location at any time in a given timespan. One such model is the World Magnetic Model (WMM) of the US and UK. It is built with all the information available to the map-makers at the start of the five-year period it is prepared for. It reflects a highly predictable rate of change, and is usually more accurate than a map, which is likely months or years out of date. For historical data, the IGRF and GUFM models may be used. Tools for using such models include:
Web apps hosted by the National Geophysical Data Center, a division of the National Oceanic and Atmospheric Administration of the United States.
A C demo program for the WMM by the National Geospatial-Intelligence Agency, along with various other third-party implementations.
The WMM, IGRF, and GUFM models only describe the magnetic field as emitted at the core-mantle boundary. In practice, the magnetic field is also distorted by the Earth's crust; this distortion is known as a magnetic anomaly. For more precise estimates, a larger crust-aware model such as the Enhanced Magnetic Model may be used. (See the cited page for a comparison of declination contours.)
Compass Declination Adjustment
Rotating dial compasses
A magnetic compass points to magnetic north, not geographic (true) north. Compasses of the style commonly used for hiking (i.e., baseplate or protractor compass) utilize a dial or bezel which rotates 360 degrees and is independent of the magnetic needle. To manually establish a declination for true north, the bezel is rotated until the desired number of degrees lie between the bezel's designation N (for North) and the direction (east or west) of magnetic north indicated by the polarized tip of the needle (usually painted red). The entire compass is then rotated until the magnetic needle lies within the outlined orienting arrow or box on the bottom of the capsule, and the course heading (in degrees) is displayed at the base of the direction-of-travel arrow on the baseplate. A compass thus adjusted provides a course bearing in relation to true north instead of magnetic north as long as it remains within an area on the same isogonic line.
In the image at the right, the bezel's N has been aligned with the direction indicated by the magnetic end of the compass needle, adjusted for local declination (10 degrees west of magnetic north). The direction-of-travel arrow on the baseplate thus reflects a true north heading.
After determining local declination, a rotating dial compass may be altered to give true north readings by taping or painting a small delta-point or arrowhead on the compass baseplate west or east of magnetic north pointing to true north on the compass bezel. Other compasses of this design utilize an adjustable declination mechanism integrated with the compass bezel, resulting in true north readings each time the needle is aligned with the orienting arrow.
Floating magnetic card compasses
Compasses that utilize a floating magnetized dial or card are commonly found in marine compasses and in certain models used for land navigation that feature a lensatic or prismatic sighting system. A floating card compass always gives bearings in relation to magnetic north and cannot be adjusted for declination. True north must be computed by adding or subtracting local magnetic declination. The example on the left demonstrates a typical conversion of a magnetic bearing from a floating card compass to a true bearing by adding the magnetic declination. The declination in the example is 14°E (+14°). If, instead, the declination was 14°W (−14°), you would still “add” it to the magnetic bearing to obtain the true bearing: 40°+ (−14°) = 26°.
Conversely, local declination is subtracted from a true bearing to obtain a magnetic bearing. With a local declination of 14°E, a true bearing (i.e. obtained from a map) of 54° is converted to a magnetic bearing (for use in the field) by subtracting declination: 54° – 14° = 40°. If the local declination was 14°W (−14°), it is again subtracted from the true bearing to obtain a magnetic bearing: 54°- (−14°) = 68°.
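A minimal sketch of these two conversions, treating easterly declination as positive and westerly as negative, is given below; the function names and bearings are illustrative and reproduce the worked examples above.

def true_from_magnetic(magnetic_bearing, declination):
    """Magnetic bearing + declination = true bearing (degrees)."""
    return (magnetic_bearing + declination) % 360

def magnetic_from_true(true_bearing, declination):
    """True bearing - declination = magnetic bearing (degrees)."""
    return (true_bearing - declination) % 360

print(true_from_magnetic(40, +14))   # 54  (declination 14 degrees East)
print(true_from_magnetic(40, -14))   # 26  (declination 14 degrees West)
print(magnetic_from_true(54, +14))   # 40
print(magnetic_from_true(54, -14))   # 68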
Navigation
On aircraft or vessels there are three types of bearing: true, magnetic, and compass bearing. Compass error is divided into two parts, namely magnetic variation and magnetic deviation, the latter originating from magnetic properties of the vessel or aircraft. Variation and deviation are signed quantities. As discussed above, positive (easterly) variation indicates that magnetic north is east of geographic north. Likewise, positive (easterly) deviation indicates that the compass needle is east of magnetic north.
Compass, magnetic and true bearings are related by M = C + D and T = M + V.
The general equation relating compass and true bearings is therefore T = C + D + V
Where:
C is the compass bearing
M is the magnetic bearing
T is the true bearing
V is the magnetic variation
D is the compass deviation
V < 0 and D < 0 for westerly variation and deviation
V > 0 and D > 0 for easterly variation and deviation
For example, if the compass reads 32°, the local magnetic variation is −5.5° (i.e. West) and the deviation is 0.5° (i.e. East), the true bearing will be 32° + 0.5° + (−5.5°) = 27°.
To calculate true bearing from compass bearing (and known deviation and variation):
Compass bearing + deviation = magnetic bearing
Magnetic bearing + variation = true bearing
To calculate compass bearing from true bearing (and known deviation and variation):
True bearing - variation = Magnetic bearing
Magnetic bearing - deviation = Compass bearing
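A minimal sketch of these two-step conversions, again treating easterly variation and deviation as positive and westerly as negative, is given below; the function names are illustrative, and the worked example above (compass 32°, deviation +0.5°, variation −5.5°) is used as a check.

def true_from_compass(compass, deviation, variation):
    """Compass bearing + deviation = magnetic bearing; magnetic bearing + variation = true bearing."""
    magnetic = (compass + deviation) % 360
    return (magnetic + variation) % 360

def compass_from_true(true_bearing, deviation, variation):
    """True bearing - variation = magnetic bearing; magnetic bearing - deviation = compass bearing."""
    magnetic = (true_bearing - variation) % 360
    return (magnetic - deviation) % 360

print(true_from_compass(32, +0.5, -5.5))   # 27.0, the worked example above
print(compass_from_true(27, +0.5, -5.5))   # 32.0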
These rules are often combined with the mnemonic "West is best, East is least"; that is to say, add W declinations when going from True bearings to Magnetic bearings, and subtract E ones.
Another simple way to remember which way to apply the correction for continental USA is:
For locations east of the agonic line (zero declination), roughly east of the Mississippi: the magnetic bearing is always bigger.
For locations west of the agonic line (zero declination), roughly west of the Mississippi: the magnetic bearing is always smaller.
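The rules above can be collected into a short script. The following Python sketch is illustrative only; the function names and the signed, east-positive convention for variation and deviation are assumptions of this example, not part of any navigation standard.

```python
# Sketch of the conversion rules above: deviation and variation are signed,
# easterly positive and westerly negative, as in the text.
def compass_to_true(compass_bearing, variation, deviation):
    magnetic = (compass_bearing + deviation) % 360   # compass + deviation = magnetic
    return (magnetic + variation) % 360              # magnetic + variation = true

def true_to_compass(true_bearing, variation, deviation):
    magnetic = (true_bearing - variation) % 360      # true - variation = magnetic
    return (magnetic - deviation) % 360              # magnetic - deviation = compass

# Worked examples from the text:
print(compass_to_true(32.0, -5.5, 0.5))   # 27.0 (compass 32°, variation 5.5°W, deviation 0.5°E)
print(compass_to_true(40.0, 14.0, 0.0))   # 54.0 (magnetic 40°, declination 14°E, no deviation)
```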
Common abbreviations are:
TC = true course;
V = variation (of the Earth's magnetic field);
MC = magnetic course (what the course would be in the absence of local deviation);
D = deviation caused by magnetic material (mostly iron and steel) on the vessel;
CC = compass course.
Deviation
Magnetic deviation is the angle from a given magnetic bearing to the related bearing mark of the compass. Deviation is positive if a compass bearing mark (e.g., compass north) is right of the related magnetic bearing (e.g., magnetic north) and vice versa. For example, if the boat is aligned to magnetic north and the compass' north mark points 3° more east, deviation is +3°. Deviation varies for every compass in the same location and depends on such factors as the magnetic field of the vessel, wristwatches, etc. The value also varies depending on the orientation of the boat. Magnets and/or iron masses can correct for deviation, so that a particular compass accurately displays magnetic bearings. More commonly, however, a correction card lists errors for the compass, which can then be compensated for arithmetically. Deviation must be added to compass bearing to obtain magnetic bearing.
Air navigation
Air navigation is based on magnetic directions, so navigational aids must be revised periodically to reflect the drift in magnetic declination over time. This requirement applies to VOR beacons, runway numbering, airway labeling, and aircraft vectoring directions given by air traffic control, all of which are based on magnetic direction.
Runways are designated by a number between 01 and 36, which is generally one tenth of the magnetic azimuth of the runway's heading: a runway numbered 09 points east (90°), runway 18 is south (180°), runway 27 points west (270°) and runway 36 points to the north (360° rather than 0°). However, because of magnetic declination, runway designators have to be changed at times to keep them in line with the runway's magnetic heading. An exception is made for runways within the Northern Domestic Airspace of Canada; these are numbered relative to true north because proximity to the magnetic North Pole makes the magnetic declination large and causes it to change rapidly.
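As a rough illustration of this numbering rule, the following sketch converts a magnetic heading to a runway designator; the rounding convention and function name are assumptions of this example.

```python
def runway_designator(magnetic_heading_deg):
    # One tenth of the magnetic heading, rounded; a result of 0 is written as 36.
    number = round((magnetic_heading_deg % 360) / 10) or 36
    return f"{number:02d}"

for heading in (87, 180, 272, 358):
    print(heading, runway_designator(heading))   # 09, 18, 27, 36
```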
Radionavigation aids located on the ground, such as VORs, are also checked and updated to keep them aligned with magnetic north to allow pilots to use their magnetic compasses for accurate and reliable in-plane navigation.
For simplicity, aviation sectional charts are drawn using true north so the entire chart need not be rotated as magnetic declination changes. Instead, individual printed elements on the chart (such as VOR compass roses) are updated with each revision of the chart to reflect changes in magnetic declination. For example, on the sectional chart slightly west of Winston-Salem, North Carolina, in March 2021, magnetic north was 8 degrees west of true north (note the dashed line marked 8°W).
When plotting a course, some small-aircraft pilots plot the trip using true north on a sectional chart (map), then convert the true bearings to magnetic bearings on the pre-flight plan by adding or subtracting the local variation shown on the chart, for in-plane navigation with the magnetic compass.
GPS systems used for aircraft navigation also display directions in terms of magnetic north even though their intrinsic coordinate system is based on true north. This is accomplished by means of lookup tables inside the GPS which account for magnetic declination. When flying under visual flight rules it is acceptable to fly with an outdated GPS declination database; when flying IFR, the database must be updated every 28 days per FAA regulation.
As a fail-safe, even the most advanced airliner still carries a magnetic compass in the cockpit. When onboard electronics fail, pilots can fall back on paper charts and this simple, highly reliable instrument.
References
External links
USGS Geomagnetism Program
Looks up your IP address location and tells you your declination.
Online declination calculator at the National Geophysical Data Center (NGDC)
Online declination and field strength calculator at the NGDC
Mobile web-app for magnetic declination at the NGDC
Historical magnetic declination viewer at the NGDC
Magnetic declination calculator at Natural Resources Canada
A Google spreadsheet application to bulk calculate magnetic declination
World Magnetic Model source code download site
Orientation (geometry)
Geomagnetism
Angle | Magnetic declination | [
"Physics",
"Mathematics"
] | 3,379 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Wikipedia categories named after physical quantities",
"Angle",
"Orientation (geometry)"
] |
308,411 | https://en.wikipedia.org/wiki/Tuned%20mass%20damper | A tuned mass damper (TMD), also known as a harmonic absorber or seismic damper, is a device mounted in structures to reduce mechanical vibrations, consisting of a mass mounted on one or more damped springs. Its oscillation frequency is tuned to be similar to the resonant frequency of the object it is mounted to, and reduces the object's maximum amplitude while weighing much less than it.
TMDs can prevent discomfort, damage, or outright structural failure. They are frequently used in power transmission, automobiles and buildings.
Principle
Tuned mass dampers stabilize against violent motion caused by harmonic vibration. They use a comparatively lightweight component to reduce the vibration of a system so that its worst-case vibrations are less intense. Roughly speaking, practical systems are tuned to either move the main mode away from a troubling excitation frequency, or to add damping to a resonance that is difficult or expensive to damp directly. An example of the latter is a crankshaft torsional damper. Mass dampers are frequently implemented with a frictional or hydraulic component that turns mechanical kinetic energy into heat, like an automotive shock absorber.
Given a motor with mass m1 attached via motor mounts to the ground, the motor vibrates as it operates and the soft motor mounts act as a parallel spring and damper, k1 and c1. The force on the motor mounts is F0. In order to reduce the maximum force on the motor mounts as the motor operates over a range of speeds, a smaller mass, m2, is connected to m1 by a spring and a damper, k2 and c2. F1 is the effective force on the motor due to its operation.
The graph shows the effect of a tuned mass damper on a simple spring–mass–damper system, excited by vibrations with an amplitude of one unit of force applied to the main mass, m1. An important measure of performance is the ratio of the force on the motor mounts to the force vibrating the motor, F0/F1. This assumes that the system is linear, so if the force on the motor were to double, so would the force on the motor mounts. The blue line represents the baseline system, with a maximum response of 9 units of force at around 9 units of frequency. The red line shows the effect of adding a tuned mass of 10% of the baseline mass. It has a maximum response of 5.5, at a frequency of 7. As a side effect, it also has a second normal mode and will vibrate somewhat more than the baseline system at frequencies below about 6 and above about 10.
The heights of the two peaks can be adjusted by changing the stiffness of the spring in the tuned mass damper. Changing the damping also changes the height of the peaks, in a complex fashion. The split between the two peaks can be changed by altering the mass of the damper (m2).
The Bode plot is more complex, showing the phase and magnitude of the motion of each mass, for the two cases, relative to F1.
In the plots at right, the black line shows the baseline response (m2 = 0). Now considering m2 = m1/10, the blue line shows the motion of the damping mass and the red line shows the motion of the primary mass. The amplitude plot shows that at low frequencies, the damping mass resonates much more than the primary mass. The phase plot shows that at low frequencies, the two masses are in phase. As the frequency increases, m2 moves out of phase with m1 until at around 9.5 Hz it is 180° out of phase with m1, maximizing the damping effect by maximizing the amplitude of x2 − x1; this maximizes the energy dissipated into c2 and simultaneously pulls on the primary mass in the same direction as the motor mounts.
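The behaviour described above can be reproduced numerically. The sketch below solves the two-degree-of-freedom equations of motion in the frequency domain and reports the ratio of mount force to excitation force with and without the damper; the parameter values and damper tuning are illustrative assumptions, not data taken from the figures.

```python
# Illustrative two-degree-of-freedom model of the motor-and-damper system described above.
import numpy as np

def mount_force_ratio(omega, m1, k1, c1, m2=0.0, k2=0.0, c2=0.0):
    """|force on the motor mounts| / |excitation force| at angular frequency omega."""
    if m2 == 0.0:  # baseline: single mass on its mounts
        x1 = 1.0 / (-m1 * omega**2 + 1j * omega * c1 + k1)
    else:          # with tuned mass damper: solve the coupled 2x2 complex system
        a = np.array([
            [-m1 * omega**2 + 1j * omega * (c1 + c2) + k1 + k2, -(1j * omega * c2 + k2)],
            [-(1j * omega * c2 + k2), -m2 * omega**2 + 1j * omega * c2 + k2],
        ])
        x1 = np.linalg.solve(a, np.array([1.0 + 0j, 0.0]))[0]
    return abs((k1 + 1j * omega * c1) * x1)

m1, k1, c1 = 1.0, 81.0, 1.0   # assumed values; baseline resonance at sqrt(k1/m1) = 9 rad/s
m2 = 0.1 * m1                 # 10% auxiliary mass, as in the example above
k2, c2 = 6.7, 0.27            # damper tuned slightly below the main mode (assumed tuning)

for w in np.linspace(5.0, 13.0, 9):
    base = mount_force_ratio(w, m1, k1, c1)
    damped = mount_force_ratio(w, m1, k1, c1, m2, k2, c2)
    print(f"omega = {w:5.1f}  baseline = {base:6.2f}  with TMD = {damped:6.2f}")
```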
Mass dampers in automobiles
Motorsport
The tuned mass damper was introduced as part of the suspension system by Renault on its 2005 F1 car (the Renault R25), at the 2005 Brazilian Grand Prix. The system reportedly reduced lap times by 0.3 seconds: a phenomenal gain for a relatively simple device. The stewards of the meeting deemed it legal, but the FIA appealed against that decision.
Two weeks later, the FIA International Court of Appeal deemed the mass damper illegal. It was deemed to be illegal because the mass was not rigidly attached to the chassis; the influence the damper had on the pitch attitude of the car in turn affected the gap under the car and the ground effects of the car. As such, the damper was considered to be a movable aerodynamic device and hence an illegal influence on the performance of the aerodynamics.
Production cars
Tuned mass dampers are widely used in production cars, typically on the crankshaft pulley to control torsional vibration and, more rarely, the bending modes of the crankshaft. They are also used on the driveline for gearwhine, and elsewhere for other noises or vibrations on the exhaust, body, suspension or anywhere else. Almost all modern cars will have one mass damper, and some may have ten or more.
The usual design of damper on the crankshaft consists of a thin band of rubber between the hub of the pulley and the outer rim. This device, often called a harmonic damper, is located at the end of the crankshaft opposite the flywheel and the transmission. An alternative design is the centrifugal pendulum absorber, which is used to reduce the internal combustion engine's torsional vibrations.
From the start of production in 1949, all four wheels of the Citroën 2CV incorporated a tuned mass damper (referred to as a in the original French) of very similar design to that used in the Renault F1 car; it was removed from the rear wheels and eventually the front wheels in the mid-1970s.
Mass dampers in bridges
The tuned mass damper is widely used as a method to add damping to bridges. One use case for tuned mass dampers in bridges is to prevent large vibrations caused by resonance with pedestrian loads. Adding a tuned mass damper increases the damping of the structure, which reduces the vibration, since the steady-state vibration amplitude is inversely proportional to the damping of the structure.
Mass dampers in spacecraft
One proposal to reduce vibration on NASA's Ares solid fuel booster was to use 16 tuned mass dampers as part of a design strategy to reduce peak loads from 6g to 0.25g, with the TMDs being responsible for the reduction from 1g to 0.25g, the rest being done by conventional vibration isolators between the upper stages and the booster.
Dampers in power transmission lines
High-tension lines often have small barbell-shaped Stockbridge dampers hanging from the wires to reduce the high-frequency, low-amplitude oscillation termed flutter.
Dampers in wind turbines
A standard tuned mass damper for wind turbines consists of an auxiliary mass which is attached to the main structure by means of springs and dashpot elements. The natural frequency of the tuned mass damper is essentially defined by its spring constant, and the damping ratio is determined by the dashpot. The tuning of the tuned mass damper enables the auxiliary mass to oscillate with a phase shift with respect to the motion of the structure. In a typical configuration, an auxiliary mass is hung below the nacelle of a wind turbine, supported by dampers or friction plates.
Dampers in buildings and related structures
When installed in buildings, dampers are typically huge concrete blocks or steel bodies mounted in skyscrapers or other structures, which move in opposition to the resonance frequency oscillations of the structure by means of springs, fluid, or pendulums.
Sources of vibration and resonance
Unwanted vibration may be caused by environmental forces acting on a structure, such as wind or earthquake, or by a seemingly innocuous vibration source causing resonance that may be destructive, unpleasant or simply inconvenient.
Earthquakes
The seismic waves caused by an earthquake will make buildings sway and oscillate in various ways depending on the frequency and direction of ground motion, and the height and construction of the building. Seismic activity can cause excessive oscillations of the building which may lead to structural failure. To enhance the building's seismic performance, a proper building design is performed engaging various seismic vibration control technologies.
As mentioned above, damping devices had been used in the aeronautics and automobile industries long before they were standard in mitigating seismic damage to buildings. In fact, the first specialized damping devices for earthquakes were not developed until the late 1950s.
Mechanical human sources
Masses of people walking up and down stairs at once, or great numbers of people stomping in unison, can cause serious problems in large structures like stadiums if those structures lack damping measures.
Wind
The force of wind against tall buildings can cause the top of skyscrapers to move more than a meter. This motion can be in the form of swaying or twisting, and can cause the upper floors of such buildings to move. Certain angles of wind and aerodynamic properties of a building can accentuate the movement and cause motion sickness in people. A TMD is usually tuned to its building's resonant frequency to work efficiently. However, during their lifetimes, high-rise and slender buildings may experience natural resonant frequency changes under wind speed, ambient temperature and relative humidity variations, among other factors, which requires a robust TMD design.
Examples of buildings and structures with tuned mass dampers
Australia
Sydney Tower in Sydney has a water tank used to dampen oscillations from high winds and potentially from earthquakes.
Brazil
Senna Tower in Balneário Camboriú
Canada
One Wall Centre in Vancouver employs tuned liquid column dampers, a unique form of tuned mass damper at the time of their installation.
CN Tower in Toronto
China
Shanghai Tower in Shanghai, the third tallest building in the world
Shanghai World Financial Center in Shanghai
Czech Republic
Ještěd Tower, Ještěd (1973)
Germany
Berlin Television Tower () – tuned mass damper located in the spire
VLF transmitter DHO38 – cylindrical containers filled with granulate in the mast structure
India
ATC Tower Delhi Airport in New Delhi – a 50-ton tuned mass damper installed just beneath the ATC floor at 90 m
Statue of Unity near Kevadia, Gujarat – two tuned mass dampers of 250 tons each located at the chest level of Sardar Patel statue
Iran
Tehran International Tower
Ireland
Dublin Spire in Dublin – designed with a tuned mass damper to ensure aerodynamic stability during a wind storm.
Japan
Akashi Kaikyō Bridge, between Honshu and Shikoku, formerly the world's longest suspension bridge, uses pendulums within its suspension towers as tuned mass dampers
Ribbon Chapel in Hiroshima uses a TMD to damp vibrations in two intertwined helical stairways
Tokyo Skytree, Tokyo
Yokohama Landmark Tower, Yokohama
Chiba Port Tower, Chiba
Kazakhstan
Almaty Tower, Almaty
"Kazakh Eli" monument at Independence Square, Nur-Sultan
Russia
Victory Monument at Poklonnaya Hill in Moscow
Olympic torch at Sochi Olympic Park, Sochi
Steel chimneys in Moscow (Thermal Power Plant 27), Ryazan Power Station, Sochi Thermal Power Plant etc.
Sakhalin-I – an offshore drilling platform
Taiwan
Taipei 101 skyscraper – damper, formerly the world's heaviest, located on 87th to 92nd floors
United Arab Emirates
Burj al-Arab in Dubai – 11 tuned mass dampers
United Kingdom
Millennium Bridge, London – nicknamed 'The Wobbly Bridge' due to swaying under heavy foot traffic. Dampers were fitted in response.
One Canada Square, London – prior to the topping out of the Shard in 2012, this was the tallest building in the UK.
United States
111 West 57th Street in New York City contains the heaviest solid damper in the world, at .
432 Park Avenue in New York City
Bally's-to-Bellagio, Bally's-to-Caesars Palace, and Treasure Island-to-The Venetian pedestrian bridges in Las Vegas
Bloomberg Tower/731 Lexington in New York City
Citigroup Center in New York City – designed by William LeMessurier and completed in 1977, it was one of the first skyscrapers to use a tuned mass damper to reduce sway. Its damper is concrete.
Comcast Center in Philadelphia contains the largest tuned liquid column damper (TLCD) in the world at .
Comcast Technology Center in Philadelphia – a set of five tuned dampers containing 125,000 gallons of water – about 500 tons – are located on the 57th floor between the hotel's rooms and lobby.
Grand Canyon Skywalk, Arizona
John Hancock Tower in Boston (1976) – the first building to use a tuned mass damper, which was added after the building was completed
One Madison in New York City
One Rincon Hill South Tower, San Francisco – first building in California to have a liquid tuned mass damper
Park Tower in Chicago – the first building in the United States to be designed with a tuned mass damper from the outset
Random House Tower, New York City, uses two liquid filled dampers
Theme Building at Los Angeles International Airport, Los Angeles
Trump World Tower in New York City
See also
Antiresonance
References
External links
Structures Incorporating Tuned Mass Dampers
Shock absorbers
Resonance
Weights
Earthquake and seismic risk mitigation | Tuned mass damper | [
"Physics",
"Chemistry",
"Engineering"
] | 2,720 | [
"Resonance",
"Structural engineering",
"Physical phenomena",
"Waves",
"Scattering",
"Weights",
"Physical objects",
"Earthquake and seismic risk mitigation",
"Matter"
] |
308,682 | https://en.wikipedia.org/wiki/Living%20polymerization | In polymer chemistry, living polymerization is a form of chain growth polymerization where the ability of a growing polymer chain to terminate has been removed. This can be accomplished in a variety of ways. Chain termination and chain transfer reactions are absent and the rate of chain initiation is also much larger than the rate of chain propagation. The result is that the polymer chains grow at a more constant rate than seen in traditional chain polymerization and their lengths remain very similar (i.e. they have a very low polydispersity index). Living polymerization is a popular method for synthesizing block copolymers since the polymer can be synthesized in stages, each stage containing a different monomer. Additional advantages are predetermined molar mass and control over end-groups.
Living polymerization is desirable because it offers precision and control in macromolecular synthesis. This is important since many of the novel/useful properties of polymers result from their microstructure and molecular weight. Since molecular weight and dispersity are less controlled in non-living polymerizations, living polymerization is more desirable for materials design.
In many cases, living polymerization reactions are confused or thought to be synonymous with controlled polymerizations. While these polymerization reactions are very similar, there is a distinction between the definitions of these two reactions. While living polymerizations are defined as polymerization reactions where termination or chain transfer is eliminated, controlled polymerization reactions are reactions where termination is suppressed, but not eliminated, through the introduction of a dormant state of the polymer. However, this distinction is still up for debate in the literature.
The main living polymerization techniques are:
Living anionic polymerization
Living cationic polymerization
Living ring-opening metathesis polymerization
Living free radical polymerization
Living chain-growth polycondensations
History
Living polymerization was demonstrated by Michael Szwarc in 1956 in the anionic polymerization of styrene with an alkali metal / naphthalene system in tetrahydrofuran (THF). Szwarc showed that electron transfer occurred from the radical anion of naphthalene to styrene. The initial radical anion of styrene converts to a dianion (or equivalently disodio-) species, which rapidly adds styrene to form a "two-ended living polymer." In an important aspect of his work, Szwarc employed the aprotic solvent tetrahydrofuran, which dissolves but is otherwise unreactive toward the organometallic intermediates. After the initial addition of monomer to the initiator system, the viscosity increased (due to polymer chain growth) but eventually stopped rising once the monomer was depleted. However, he found that addition of more monomer caused a further increase in viscosity, indicating continued growth of the polymer chains, and he thus concluded that the polymer chains had never been terminated. This was a major step in polymer chemistry, since control over when the polymer is quenched, or terminated, was generally not possible. With this discovery, the list of potential applications expanded dramatically.
Today, living polymerizations are used widely in the production of many types of polymers or plastics. For instance, poly(phthalaldehyde), first developed in 1967, can be synthesized via both living cationic and living anionic polymerization reactions, producing the cyclic or the linear form of the polymer, respectively. The approach offers control of the chemical makeup of the polymer and, thus, the structural and electronic properties of the material. This level of control rarely exists in non-living polymerization reactions.
Fast rate of initiation: low polydispersity
One of the key characteristics of a living polymerization is that the chain termination and transfer reactions are essentially eliminated from the four elementary reactions of chain-growth polymerization leaving only initiation and (chain) propagation reactions.
A key characteristic of living polymerization is that the rate of initiation (meaning the dormant chemical species generates the active chain propagating species) is much faster than the rate of chain propagation. Thus all of the chains grow at the same rate (the rate of propagation).
The high rate of initiation (together with the absence of termination) results in a low (or narrow) polydispersity index (PDI), an indication of the breadth of the distribution of polymer chains. The extended lifetime of the propagating chain allows co-block polymer formation and end-group functionalization to be performed on the living chain. These factors also allow predictable molecular weights, expressed as the number-average molecular weight (Mn). For an ideal living system, assuming the efficiency of generating active species is 100% and each initiator generates only one active species, the kinetic chain length (the average number of monomers the active species reacts with during its lifetime) at a given time can be estimated from the concentration of monomer remaining. The number-average molecular weight, Mn, increases linearly with percent conversion during a living polymerization.
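These relationships can be illustrated with a short calculation. In the sketch below, the concentrations and the choice of monomer are assumptions made for illustration, and the expression Đ = 1 + 1/DPn is the approximate Poisson-distribution result that applies only to an ideal living system.

```python
# Ideal living polymerization: every initiator starts exactly one chain and all
# chains grow at the same rate, so DPn and Mn rise linearly with conversion.
monomer_0 = 2.0              # initial monomer concentration, mol/L (assumed)
initiator_0 = 0.02           # initiator concentration, mol/L (assumed)
monomer_molar_mass = 104.15  # g/mol (styrene, chosen for illustration)

for conversion in (0.25, 0.50, 0.75, 1.00):
    dp_n = monomer_0 * conversion / initiator_0      # kinetic chain length / DPn
    m_n = dp_n * monomer_molar_mass                  # number-average molar mass
    dispersity = 1.0 + 1.0 / dp_n                    # approximate Poisson result
    print(f"conversion {conversion:.0%}: DPn = {dp_n:.0f}, "
          f"Mn = {m_n:.0f} g/mol, Đ ≈ {dispersity:.3f}")
```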
Techniques
Living anionic polymerization
As early as 1936, Karl Ziegler proposed that anionic polymerization of styrene and butadiene by consecutive addition of monomer to an alkyl lithium initiator occurred without chain transfer or termination. Twenty years later, living polymerization was demonstrated by Szwarc through the anionic polymerization of styrene in THF using sodium naphthalene as an initiator.
The naphthalene anion initiates polymerization by reducing styrene to its radical anion, which dimerizes to the dilithiodiphenylbutane, which then initiates the polymerization. These experiments relied on Szwarc's ability to control the levels of impurities which would destroy the highly reactive organometallic intermediates.
Living α-olefin polymerization
α-olefins can be polymerized through an anionic coordination polymerization in which the metal center of the catalyst is considered the counter cation for the anionic end of the alkyl chain (through a M-R coordination). Ziegler-Natta initiators were developed in the mid-1950s and are heterogeneous initiators used in the polymerization of alpha-olefins. Not only were these initiators the first to achieve relatively high-molecular-weight poly(1-alkenes) (currently the most widely produced thermoplastics in the world, polyethylene (PE) and polypropylene (PP)), but the initiators were also capable of stereoselective polymerizations, which is attributed to the chiral crystal structure of the heterogeneous initiator. Because of the importance of this discovery, Ziegler and Natta were awarded the 1963 Nobel Prize in Chemistry. Although the active species formed from a Ziegler-Natta initiator generally have long lifetimes (on the scale of hours or longer), the lifetimes of the propagating chains are shortened by several chain-transfer pathways (beta-hydride elimination and transfer to the co-initiator), and as a result the polymerization is not considered living.
Metallocene initiators are considered a type of Ziegler-Natta initiator due to the use of a two-component system consisting of a transition metal and a group I-III metal co-initiator (for example methylalumoxane (MAO) or other alkyl aluminum compounds). The metallocene initiators form homogeneous single-site catalysts that were initially developed to study the impact that catalyst structure has on the structure and properties of the resulting polymers, which was difficult to do with multi-site heterogeneous Ziegler-Natta initiators. Owing to the discrete single site on the metallocene catalyst, researchers were able to tune and relate how the structure of the ancillary ligands (those not directly involved in the chemical transformations) and the symmetry about the chiral metal center affect the microstructure of the polymer. However, due to chain-breaking reactions (mainly beta-hydride elimination), very few living polymerizations based on metallocenes are known.
By tuning the steric bulk and electronic properties of the ancillary ligands and their substituents, a class of initiators known as chelate initiators (or post-metallocene initiators) has been successfully used for stereospecific living polymerizations of alpha-olefins. The chelate initiators have a high potential for living polymerizations because the ancillary ligands can be designed to discourage or inhibit chain-termination pathways. Chelate initiators can be further broken down based on the ancillary ligands: ansa-cyclopentadienyl-amido initiators, alpha-diimine chelates and phenoxy-imine chelates.
Ansa-cyclopentadienyl-amido (CpA) initiators
CpA initiators have one cyclopentadienyl substituent and one or more nitrogen substituents coordinated to the metal center (generally Zr or Ti) (Odian). A dimethyl(pentamethylcyclopentyl)zirconium acetamidinate of this type has been used for a stereospecific living polymerization of 1-hexene at −10 °C. The resulting poly(1-hexene) was isotactic (the stereochemistry is the same between adjacent repeat units), as confirmed by 13C-NMR. The multiple trials demonstrated a controllable and predictable (from the catalyst-to-monomer ratio) Mn with low Đ. The polymerization was further confirmed to be living by sequentially adding two portions of the monomer, the second portion added after the first had already polymerized, and monitoring the Đ and Mn of the chains. The resulting polymer chains complied with the predicted Mn (with the total monomer concentration = portion 1 + 2) and showed low Đ, suggesting the chains were still active, or living, when the second portion of monomer was added (5).
α-diimine chelate initiators
α-diimine chelate initiators are characterized by having a diimine chelating ancillary ligand structure and which is generally coordinated to a late transition (i.e. Ni and Pd) metal center.
Brookhart et al. did extensive work with this class of catalysts and reported living polymerization for α-olefins and demonstrated living α-olefin carbon monoxide alternating copolymers.
Living cationic polymerization
Monomers for living cationic polymerization are electron-rich alkenes such as vinyl ethers, isobutylene, styrene, and N-vinylcarbazole. The initiators are binary systems consisting of an electrophile and a Lewis acid. The method was developed around 1980 with contributions from Higashimura, Sawamoto and Kennedy. Typically, generating a stable carbocation for a prolonged period of time is difficult, because the cation can be quenched by β-protons attached to another monomer in the backbone, or in a free monomer. Therefore, a different approach is taken.
In this example, the carbocation is generated by the addition of a Lewis acid (co-initiator, along with the halogen "X" already on the polymer – see figure), which ultimately generates the carbocation in a weak equilibrium. This equilibrium heavily favors the dormant state, thus leaving little time for permanent quenching or termination by other pathways. In addition, a weak nucleophile (Nu:) can also be added to reduce the concentration of active species even further, thus keeping the polymer "living". However, it is important to note that, because a dormant state has been introduced, termination has only been decreased, not eliminated, so such systems are strictly controlled rather than truly living (though this distinction is still up for debate). They do, however, operate similarly, and are used in applications similar to those of true living polymerizations.
Living ring-opening metathesis polymerization
Given the right reaction conditions, ring-opening metathesis polymerization (ROMP) can be rendered living. The first such systems were described by Robert H. Grubbs in 1986, based on norbornene and Tebbe's reagent, and in 1978 by Grubbs together with Richard R. Schrock, describing living polymerization with a tungsten carbene complex.
Generally, ROMP reactions involve the conversion of a cyclic olefin with significant ring-strain (>5 kcal/mol), such as cyclobutene, norbornene, cyclopentene, etc., to a polymer that also contains double bonds. The important thing to note about ring-opening metathesis polymerizations is that the double bond is usually maintained in the backbone, which can allow it to be considered "living" under the right conditions.
For a ROMP reaction to be considered "living", several guidelines must be met:
Fast and complete initiation of the monomer. This means that the initiating agent must activate the monomer for polymerization very quickly.
The number of monomers making up each polymer (the degree of polymerization) must be linearly related to the amount of monomer initially present.
The dispersity of the polymer must be < 1.5. In other words, the distribution of polymer chain lengths in the reaction must be narrow.
Following these guidelines allows the creation of a polymer that is well controlled both in content (which monomer is used) and in properties (which can be largely attributed to polymer chain length). It is important to note that living ring-opening polymerizations can be anionic or cationic.
Because living polymers have had their termination ability removed, once the monomer has been consumed the addition of more monomer will cause the polymer chains to continue growing until all of the additional monomer is consumed. This continues until the metal catalyst at the end of the chain is intentionally removed by the addition of a quenching agent. As a result, block or gradient copolymers can be created fairly easily and accurately. This gives a high ability to tune the properties of the polymer to a desired application (electrical/ionic conduction, etc.)
"Living" free radical polymerization
Starting in the 1970s several new methods were discovered which allowed the development of living polymerization using free radical chemistry. These techniques involved catalytic chain transfer polymerization, iniferter mediated polymerization, stable free radical mediated polymerization (SFRP), atom transfer radical polymerization (ATRP), reversible addition-fragmentation chain transfer (RAFT) polymerization, and iodine-transfer polymerization.
In "living" radical polymerization (or controlled radical polymerization (CRP)) the chain breaking pathways are severely depressed when compared to conventional radical polymerization (RP) and CRP can display characteristics of a living polymerization. However, since chain termination is not absent, but only minimized, CRP technically does not meet the requirements imposed by IUPAC for a living polymerization (see introduction for IUPAC definition). This issue has been up for debate the view points of different researchers can be found in a special issue of the Journal of Polymer Science titled Living or Controlled ?. The issue has not yet been resolved in the literature so it is often denoted as a "living" polymerization, quasi-living polymerization, pseudo-living and other terms to denote this issue.
There are two general strategies employed in CRP to suppress chain breaking reactions and promote fast initiation relative to propagation. Both strategies are based on developing a dynamic equilibrium amongst an active propagating radical and a dormant species.
The first strategy involves a reversible trapping mechanism in which the propagating radical undergoes an activation/deactivation (i.e. Atom-transfer radical-polymerization) process with a species X. The species X is a persistent radical, or a species that can generate a stable radical, that cannot terminate with itself or propagate but can only reversibly "terminate" with the propagating radical (from the propagating polymer chain) P*. P* is a radical species that can propagate (kp) and irreversibly terminate (kt) with another P*. X is normally a nitroxide (i.e. TEMPO used in Nitroxide Mediated Radical Polymerization) or an organometallic species. The dormant species (Pn-X) can be activated to regenerate the active propagating species (P*) spontaneously, thermally, using a catalyst and optically.
The second strategy is based on a degenerative transfer (DT) of the propagating radical between transfer agent that acts as a dormant species (i.e. Reversible addition−fragmentation chain-transfer polymerization). The DT based CRP's follow the conventional kinetics of radical polymerization, that is slow initiation and fast termination, but the transfer agent (Pm-X or Pn-X) is present in a much higher concentration compared to the radical initiator. The propagating radical species undergoes a thermally neutral exchange with the dormant transfer agent through atom transfer, group transfer or addition fragment chemistry.
Living chain-growth polycondensations
Chain-growth polycondensation polymerizations were initially developed under the premise that a change in substituent effects of the polymer, relative to the monomer, causes the polymer's end group to be more reactive; this has been referred to as "reactive intermediate polycondensation". The essential result is that monomers preferentially react with the activated polymer end groups over reactions with other monomers. This preferred reactivity is the fundamental difference when categorizing a polymerization mechanism as chain-growth as opposed to step-growth, in which the monomer and polymer chain end group have equal reactivity (the reactivity is uncontrolled). Several strategies were employed to minimize monomer-monomer reactions (or self-condensation), and polymerizations with low Đ and controllable Mn have been attained by this mechanism for small-molecular-weight polymers. However, for high-molecular-weight polymer chains (i.e., a small initiator-to-monomer ratio), the Mn is not easily controlled for some monomers, since self-condensation between monomers occurs more frequently due to the low concentration of propagating species.
Catalyst-transfer polycondensation
Catalyst transfer polycondensation (CTP) is a chain-growth polycondensation mechanism in which the monomers do not directly react with one another and instead the monomer will only react with the polymer end group through a catalyst-mediated mechanism. The general process consists of the catalyst activating the polymer end group followed by a reaction of the end group with a 2nd incoming monomer. The catalyst is then transferred to the elongated chain while activating the end group (as shown below).
Catalyst transfer polycondensation allows for the living polymerization of π-conjugated polymers and was discovered by Tsutomu Yokozawa in 2004 and Richard McCullough. In CTP the propagation step is based on organic cross-coupling reactions (i.e. Kumada coupling, Sonogashira coupling, Negishi coupling) to form carbon-carbon bonds between difunctional monomers. Yokozawa and McCullough independently discovered the polymerization using a metal catalyst to couple a Grignard reagent with an organohalide, making a new carbon-carbon bond. The mechanism below shows the formation of poly(3-alkylthiophene) using a Ni initiator (Ln can be 1,3-Bis(diphenylphosphino)propane (dppp)) and is similar to the conventional mechanism for Kumada coupling involving an oxidative addition, a transmetalation and a reductive elimination step. However, there is a key difference: following reductive elimination in CTP, an associative complex is formed (which has been supported by intra-/intermolecular oxidative addition competition experiments) and the subsequent oxidative addition occurs between the metal center and the associated chain (an intramolecular pathway). In contrast, in a simple coupling reaction the newly formed alkyl/aryl compound diffuses away and the subsequent oxidative addition occurs between an incoming Ar–Br bond and the metal center.
The associative complex is essential for polymerization to occur in a living fashion, since it allows the metal to undergo a preferred intramolecular oxidative addition and remain with a single propagating chain (consistent with a chain-growth mechanism), as opposed to an intermolecular oxidative addition with other monomers present in the solution (consistent with a step-growth, non-living, mechanism). The monomer scope of CTP has been increasing since its discovery and now includes poly(phenylene)s, poly(fluorene)s, poly(selenophene)s, and poly(pyrrole)s.
Living group-transfer polymerization
Group-transfer polymerization also has characteristics of living polymerization. It is applied to alkylated methacrylate monomers and the initiator is a silyl ketene acetal. New monomer adds to the initiator and to the active growing chain in a Michael reaction. With each addition of a monomer group the trimethylsilyl group is transferred to the end of the chain. The active chain-end is not ionic as in anionic or cationic polymerization but is covalent. The reaction can be catalysed by bifluorides and bioxyanions such as tris(dialkylamino)sulfonium bifluoride or tetrabutyl ammonium bibenzoate. The method was discovered in 1983 by Owen Webster and the name first suggested by Barry Trost.
Applications
Living polymerizations are used in the commercial synthesis of many polymers.
Copolymer synthesis and applications
Copolymers are polymers consisting of multiple different monomer species, and can be arranged in various orders, three of which are seen in the figure below.
While there exist others (alternating copolymers, graft copolymers, and stereoblock copolymers), these three are more common in the scientific literature. In addition, block copolymers can exist as many types, including triblock (A-B-A), alternating block (A-B-A-B-A-B), etc.
Of these three types, block and gradient copolymers are commonly synthesized through living polymerizations, due to the ease of control living polymerization provides. Copolymers are highly desired due to the increased flexibility of properties a polymer can have compared to their homopolymer counterparts. The synthetic techniques used range from ROMP to generic anionic or cationic living polymerizations.
Copolymers, due to their unique tunability of properties, can have a wide range of applications. One example (of many) is nano-scale lithography using block copolymers. One used frequently is a block copolymer made of polystyrene and poly(methyl methacrylate) (abbreviated PS-b-PMMA). This copolymer, upon proper thermal and processing conditions, can form cylinders on the order of a few tens of nanometers in diameter of PMMA, surrounded by a PS matrix. These cylinders can then be etched away under high exposure to UV light and acetic acid, leaving a porous PS matrix.
The unique property of this material is that the size of the pores (or the size of the PMMA cylinders) can be easily tuned by the ratio of PS to PMMA in the synthesis of the copolymer. This can be easily tuned due to the easy control given by living polymerization reactions, thus making this technique highly desired for various nanoscale patterning of different materials for applications to catalysis, electronics, etc.
References
External links
IUPAC Gold Book Definition
Living Ziegler-Natta Polymerization Article
Living polymers 50 years of evolution Article
Polymerization reactions | Living polymerization | [
"Chemistry",
"Materials_science"
] | 5,059 | [
"Polymerization reactions",
"Polymer chemistry"
] |
308,803 | https://en.wikipedia.org/wiki/Carnot%27s%20theorem%20%28thermodynamics%29 | Carnot's theorem, also called Carnot's rule or Carnot's law, is a principle of thermodynamics developed by Nicolas Léonard Sadi Carnot in 1824 that specifies limits on the maximum efficiency that any heat engine can obtain.
Carnot's theorem states that all heat engines operating between the same two thermal or heat reservoirs cannot have efficiencies greater than a reversible heat engine operating between the same reservoirs. A corollary of this theorem is that every reversible heat engine operating between a pair of heat reservoirs is equally efficient, regardless of the working substance employed or the operation details. Since a Carnot heat engine is also a reversible engine, the efficiency of all the reversible heat engines is determined as the efficiency of the Carnot heat engine that depends solely on the temperatures of its hot and cold reservoirs.
The maximum efficiency (i.e., the Carnot heat engine efficiency) of a heat engine operating between hot and cold reservoirs, with temperatures denoted as TH and TC respectively, is the ratio of the temperature difference between the reservoirs to the hot reservoir temperature, expressed in the equation
ηmax = (TH − TC)/TH
where TH and TC are the absolute temperatures of the hot and cold reservoirs, respectively, and the efficiency is the ratio of the work done by the engine (to the surroundings) to the heat drawn out of the hot reservoir (to the engine).
ηmax is greater than zero if and only if there is a temperature difference between the two thermal reservoirs. Since ηmax is the upper limit of all reversible and irreversible heat engine efficiencies, it is concluded that work from a heat engine can be produced if and only if there is a temperature difference between two thermal reservoirs connected to the engine.
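A minimal numeric sketch of this bound follows; the reservoir temperatures are illustrative values.

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency (T_H - T_C) / T_H for absolute temperatures T_H > T_C > 0."""
    if not (t_hot > t_cold > 0):
        raise ValueError("require T_hot > T_cold > 0, in kelvin")
    return (t_hot - t_cold) / t_hot

# Example: a hot reservoir at 500 K and a cold reservoir at 300 K.
print(carnot_efficiency(500.0, 300.0))   # 0.4: at most 40% of the heat drawn can become work
```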
Carnot's theorem is a consequence of the second law of thermodynamics. Historically, it was based on contemporary caloric theory, and preceded the establishment of the second law.
Proof
The proof of the Carnot theorem is a proof by contradiction or reductio ad absurdum (a method to prove a statement by assuming its falsity and logically deriving a false or contradictory statement from this assumption), based on a situation like the right figure where two heat engines with different efficiencies are operating between two thermal reservoirs at different temperatures. The relatively hotter reservoir is called the hot reservoir and the other reservoir is called the cold reservoir. A (not necessarily reversible) heat engine with a greater efficiency is driving a reversible heat engine with a lesser efficiency, causing the latter to act as a heat pump. The requirement for the driven engine to be reversible is necessary to explain the work and heat associated with it by using its known efficiency. However, since the driving engine is more efficient, the net heat flow would be backwards, i.e., into the hot reservoir:
where Q represents heat, the label in denotes input to an object, out denotes output from an object, and h denotes the hot thermal reservoir. If heat flows from the hot reservoir it has the sign +, while if it flows to the hot reservoir it has the sign −. This expression can be easily derived by using the definition of the efficiency of a heat engine, η = W/Qh, where work and heat in this expression are net quantities per engine cycle, and the conservation of energy for each engine as shown below. The sign convention of work W, with the sign + for work done by an engine on its surroundings, is employed.
The above expression means that heat into the hot reservoir from the engine pair (can be considered as a single engine) is greater than heat into the engine pair from the hot reservoir (i.e., the hot reservoir continuously gets energy). A reversible heat engine with a low efficiency delivers more heat (energy) to the hot reservoir for a given amount of work (energy) to this engine when it is being driven as a heat pump. All these mean that heat can transfer from cold to hot places without external work, and such a heat transfer is impossible by the second law of thermodynamics.
It may seem odd that a hypothetical reversible heat pump with a low efficiency is used to violate the second law of thermodynamics, but the figure of merit for refrigerator units is not the efficiency but the coefficient of performance (COP), whose defining work term has the sign opposite to the above (+ for work done on the engine).
Let's find the values of work and heat depicted in the right figure, in which a reversible heat engine with a lesser efficiency is driven as a heat pump by a heat engine with a greater efficiency.
The definition of the efficiency is for each engine and the following expressions can be made:
The denominator of the second expression, , is made to make the expression to be consistent, and it helps to fill the values of work and heat for the engine .
For each engine, the absolute value of the energy entering the engine, , must be equal to the absolute value of the energy leaving from the engine, . Otherwise, energy is continuously accumulated in an engine or the conservation of energy is violated by taking more energy from an engine than input energy to the engine:
In the second expression, is used to find the term describing the amount of heat taken from the cold reservoir, completing the absolute value expressions of work and heat in the right figure.
Having established that the right figure values are correct, Carnot's theorem may be proven for irreversible and the reversible heat engines as shown below.
Reversible engines
To see that every reversible engine operating between the same pair of reservoirs must have the same efficiency, assume that two reversible heat engines have different efficiencies, and let the relatively more efficient engine drive the relatively less efficient engine as a heat pump. As the right figure shows, this will cause heat to flow from the cold to the hot reservoir without external work, which violates the second law of thermodynamics. Therefore, both (reversible) heat engines have the same efficiency, and we conclude that:
All reversible heat engines that operate between the same two thermal (heat) reservoirs have the same efficiency.
The reversible heat engine efficiency can be determined by analyzing a Carnot heat engine as one of reversible heat engine.
This conclusion is an important result because it helps establish the Clausius theorem, which implies that the change in entropy is unique for all reversible processes:
as the entropy change, that is made during a transition from a thermodynamic equilibrium state to a state in a V-T (Volume-Temperature) space, is the same over all reversible process paths between these two states. If this integral were not path independent, then entropy would not be a state variable.
Irreversible engines
Consider two engines, one irreversible and one reversible. We construct the machine shown in the right figure, with the irreversible engine driving the reversible one as a heat pump. Then if the irreversible engine were more efficient than the reversible one, the machine would violate the second law of thermodynamics. Since a Carnot heat engine is a reversible heat engine, and all reversible heat engines operate with the same efficiency between the same reservoirs, we have the first part of Carnot's theorem:
No irreversible heat engine is more efficient than a Carnot heat engine operating between the same two thermal reservoirs.
Definition of thermodynamic temperature
The efficiency of a heat engine is the work done by the engine divided by the heat introduced to the engine per engine cycle, or
η = W/QH = (QH − QC)/QH
where W is the work done by the engine, QC is the heat to the cold reservoir from the engine, and QH is the heat to the engine from the hot reservoir, per cycle. Thus, the efficiency depends only on the ratio QC/QH.
Because all reversible heat engines operating between the same two reservoir temperatures must have the same efficiency, the efficiency of a reversible heat engine is a function of only the two reservoir temperatures:
QC/QH = 1 − η = f(TH, TC)
In addition, a reversible heat engine operating between temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3 (T1 < T2 < T3). This can only be the case if
f(T1, T3) = f(T1, T2) f(T2, T3)
Specializing to the case that T1 is a fixed reference temperature: the temperature of the triple point of water, taken as 273.16. (Of course any reference temperature and any positive numerical value could be used; the choice here corresponds to the Kelvin scale.) Then for any T2 and T3,
Therefore, if thermodynamic temperature is defined by
then the function viewed as a function of thermodynamic temperature, is
It follows immediately that
Substituting this equation back into the above equation gives a relationship for the efficiency in terms of thermodynamic temperatures:
Applicability to fuel cells
Since fuel cells can generate useful power when all components of the system are at the same temperature (TH = TC), they are clearly not limited by Carnot's theorem, which states that no power can be generated when TH = TC. This is because Carnot's theorem applies to engines converting thermal energy to work, whereas fuel cells instead convert chemical energy to work. Nevertheless, the second law of thermodynamics still provides restrictions on fuel cell energy conversion.
A Carnot battery is a type of energy storage system that stores electricity in thermal energy storage and converts the stored heat back to electricity through thermodynamic cycles.
See also
Chambadal–Novikov efficiency
Heating and cooling efficiency bounds
References
Eponymous theorems of physics
Laws of thermodynamics
Thought experiments in physics | Carnot's theorem (thermodynamics) | [
"Physics",
"Chemistry"
] | 1,936 | [
"Equations of physics",
"Eponymous theorems of physics",
"Thermodynamics",
"Laws of thermodynamics",
"Physics theorems"
] |
308,870 | https://en.wikipedia.org/wiki/Ring-opening%20polymerization | In polymer chemistry, ring-opening polymerization (ROP) is a form of chain-growth polymerization in which the terminus of a polymer chain attacks cyclic monomers to form a longer polymer (see figure). The reactive center can be radical, anionic or cationic.
Ring-opening of cyclic monomers is often driven by the relief of bond-angle strain. Thus, as is the case for other types of polymerization, the enthalpy change in ring-opening is negative. Many rings undergo ROP.
Monomers
Many cyclic monomers are amenable to ROP. These include epoxides, cyclic trisiloxanes, some lactones and lactides, cyclic anhydrides, cyclic carbonates, and amino acid N-carboxyanhydrides. Many strained cycloalkenes, e.g. norbornene, are suitable monomers via ring-opening metathesis polymerization. Even highly strained cycloalkane rings, such as cyclopropane and cyclobutane derivatives, can undergo ROP.
History
Ring-opening polymerization has been used since the beginning of the 1900s to produce polymers. Synthesis of polypeptides, which has the oldest history of ROP, dates back to work in 1906 by Leuchs. Subsequently, the ROP of anhydro sugars provided polysaccharides, including synthetic dextran, xanthan gum, welan gum, gellan gum, diutan gum, and pullulan. Mechanisms and thermodynamics of ring-opening polymerization were established in the 1950s. The first high-molecular-weight polymers (Mn up to 10^5) with a repeating unit were prepared by ROP as early as 1976.
An industrial application is the production of nylon-6 from caprolactam.
Mechanisms
Ring-opening polymerization can proceed via radical, anionic, or cationic polymerization as described below. Additionally, radical ROP is useful in producing polymers with functional groups incorporated in the backbone chain that cannot otherwise be synthesized via conventional chain-growth polymerization of vinyl monomers. For instance, radical ROP can produce polymers with ethers, esters, amides, and carbonates as functional groups along the main chain.
Anionic ring-opening polymerization (AROP)
Anionic ring-opening polymerizations (AROP) involve nucleophilic reagents as initiators. Monomers with a three-member ring structure - such as epoxides, aziridines, and episulfides - undergo anionic ROP.
A typical example of anionic ROP is that of ε-caprolactone, initiated by an alkoxide.
Cationic ring-opening polymerization
Cationic initiators and intermediates characterize cationic ring-opening polymerization (CROP). Examples of cyclic monomers that polymerize through this mechanism include lactones, lactams, amines, and ethers. CROP proceeds through an SN1 or SN2 propagation, chain-growth process. The mechanism is affected by the stability of the resulting cationic species. For example, if the atom bearing the positive charge is stabilized by electron-donating groups, polymerization will proceed by the SN1 mechanism. The cationic species is a heteroatom and the chain grows by the addition of cyclic monomers thereby opening the ring system.
The monomers can be activated by Bronsted acids, carbenium ions, onium ions, and metal cations.
CROP can be a living polymerization and can be terminated by nucleophilic reagents such as phenoxy anions, phosphines, or polyanions. When the amount of monomers becomes depleted, termination can occur intra or intermolecularly. The active end can "backbite" the chain, forming a macrocycle. Alkyl chain transfer is also possible, where the active end is quenched by transferring an alkyl chain to another polymer.
Ring-opening metathesis polymerization
Ring-opening metathesis polymerisation (ROMP) produces unsaturated polymers from cycloalkenes or bicycloalkenes. It requires organometallic catalysts.
The mechanism for ROMP follows similar pathways as olefin metathesis. The initiation process involves the coordination of the cycloalkene monomer to the metal alkylidene complex, followed by a [2+2] type cycloaddition to form the metallacyclobutane intermediate that cycloreverts to form a new alkylidene species.
Commercially relevant unsaturated polymers synthesized by ROMP include polynorbornene, polycyclooctene, and polycyclopentadiene.
Thermodynamics
The formal thermodynamic criterion of a given monomer's polymerizability is related to the sign of the free enthalpy (Gibbs free energy) of polymerization:
ΔGp(xy) = ΔHp(xy) − TΔSp(xy)
where:
x and y indicate monomer and polymer states, respectively (x and/or y = l (liquid), g (gaseous), c (amorphous solid), c' (crystalline solid), s (solution));
ΔHp is the enthalpy of polymerization (SI unit: joule);
ΔSp is the entropy of polymerization (SI unit: joule per kelvin);
T is the absolute temperature (SI unit: kelvin).
The free enthalpy of polymerization (ΔGp) may be expressed as the sum of the standard free enthalpy of polymerization (ΔGp°) and a term related to the instantaneous concentrations of monomer molecules and growing macromolecules:
where:
R is the gas constant;
is the monomer;
is the monomer in an initial state;
is the active monomer.
Following the Flory–Huggins assumption that the reactivity of an active center located at a macromolecule of a sufficiently long chain does not depend on its degree of polymerization (so that [\cdots-(m)_{i+1}-m^{*}] \approx [\cdots-(m)_{i}-m^{*}]), and taking into account that \Delta G_{\rm p}^{\circ} = \Delta H_{\rm p}^{\circ} - T\Delta S_{\rm p}^{\circ} (where \Delta H_{\rm p}^{\circ} and \Delta S_{\rm p}^{\circ} indicate the standard polymerization enthalpy and entropy, respectively), we obtain:
\Delta G_{\rm p} = \Delta H_{\rm p}^{\circ} - T\left(\Delta S_{\rm p}^{\circ} + R\ln[{\rm M}]\right)
At equilibrium (\Delta G_{\rm p} = 0), when polymerization is complete, the monomer concentration ([{\rm M}]_{\rm eq}) assumes a value determined by the standard polymerization parameters (\Delta H_{\rm p}^{\circ} and \Delta S_{\rm p}^{\circ}) and the polymerization temperature:
[{\rm M}]_{\rm eq} = \exp\left(\frac{\Delta H_{\rm p}^{\circ}}{RT} - \frac{\Delta S_{\rm p}^{\circ}}{R}\right)
Polymerization is possible only when the actual monomer concentration exceeds this value, i.e. [{\rm M}]_{0} > [{\rm M}]_{\rm eq}. Eventually, at or above the so-called ceiling temperature (T_{\rm c}), at which [{\rm M}]_{\rm eq} = [{\rm M}]_{0}, formation of high polymer does not occur.
For example, tetrahydrofuran (THF) cannot be polymerized above T_{\rm c} = 84 °C, nor cyclo-octasulfur (S8) below its floor temperature T_{\rm f} = 159 °C. However, for many monomers, T_{\rm c} and T_{\rm f} for polymerization in the bulk are well above or below the operable polymerization temperatures, respectively.
The polymerization of a majority of monomers is accompanied by an entropy decrease, due mostly to the loss of translational degrees of freedom. In this situation, polymerization is thermodynamically allowed only when the enthalpic contribution to \Delta G_{\rm p} prevails (thus, when \Delta H_{\rm p}^{\circ} < 0 and \Delta S_{\rm p}^{\circ} < 0, the inequality |\Delta H_{\rm p}^{\circ}| > T|\Delta S_{\rm p}^{\circ}| is required). Therefore, the higher the ring strain, the lower the resulting monomer concentration at equilibrium.
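As a rough numerical illustration of these relations, the short sketch below evaluates [M]eq and the ceiling temperature Tc = ΔHp°/(ΔSp° + R ln[M]0); the enthalpy and entropy values used are purely illustrative assumptions and do not correspond to any particular monomer.

    import math

    R = 8.314  # gas constant, J/(mol*K)

    def equilibrium_monomer_concentration(dH, dS, T):
        # [M]eq = exp(dH/(R*T) - dS/R), with dH in J/mol and dS in J/(mol*K)
        return math.exp(dH / (R * T) - dS / R)

    def ceiling_temperature(dH, dS, M0):
        # temperature at which [M]eq equals the initial monomer concentration M0 (mol/L)
        return dH / (dS + R * math.log(M0))

    dH = -20e3   # assumed standard polymerization enthalpy, J/mol
    dS = -60.0   # assumed standard polymerization entropy, J/(mol*K)
    print(equilibrium_monomer_concentration(dH, dS, 298.15))  # [M]eq at 25 degC, ~0.43 mol/L
    print(ceiling_temperature(dH, dS, M0=5.0))                # Tc for [M]0 = 5 mol/L, ~429 K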
Additional reading
References
Polymerization reactions | Ring-opening polymerization | [
"Chemistry",
"Materials_science"
] | 1,507 | [
"Polymerization reactions",
"Polymer chemistry"
] |
308,884 | https://en.wikipedia.org/wiki/Dispersity | In chemistry, the dispersity is a measure of the heterogeneity of sizes of molecules or particles in a mixture. A collection of objects is called uniform if the objects have the same size, shape, or mass. A sample of objects that have an inconsistent size, shape and mass distribution is called non-uniform. The objects can be in any form of chemical dispersion, such as particles in a colloid, droplets in a cloud, crystals in a rock,
or polymer macromolecules in a solution or a solid polymer mass. Polymers can be described by molecular mass distribution; a population of particles can be described by size, surface area, and/or mass distribution; and thin films can be described by film thickness distribution.
IUPAC has deprecated the use of the term polydispersity index, having replaced it with the term dispersity, represented by the symbol Đ (pronounced D-stroke) which can refer to either molecular mass or degree of polymerization. It can be calculated using the equation ĐM = Mw/Mn, where Mw is the weight-average molar mass and Mn is the number-average molar mass. It can also be calculated according to degree of polymerization, where ĐX = Xw/Xn, where Xw is the weight-average degree of polymerization and Xn is the number-average degree of polymerization. In certain limiting cases where ĐM = ĐX, it is simply referred to as Đ. IUPAC has also deprecated the terms monodisperse, which is considered to be self-contradictory, and polydisperse, which is considered redundant, preferring the terms uniform and non-uniform instead. The terms monodisperse and polydisperse are however still preferentially used to describe particles in an aerosol.
Overview
A uniform polymer (often referred to as a monodisperse polymer) is composed of molecules of the same mass. Nearly all natural polymers are uniform. Synthetic near-uniform polymer chains can be made by processes such as anionic polymerization, a method using an anionic catalyst to produce chains that are similar in length. This technique is also known as living polymerization. It is used commercially for the production of block copolymers. Uniform collections can be easily created through the use of template-based synthesis, a common method of synthesis in nanotechnology.
A polymer material is denoted by the term disperse, or non-uniform, if its chain lengths vary over a wide range of molecular masses. This is characteristic of man-made polymers. Natural organic matter produced by the decomposition of plants and wood debris in soils (humic substances) also has a pronounced polydispersed character. It is the case of humic acids and fulvic acids, natural polyelectrolyte substances having respectively higher and lower molecular weights. Another interpretation of dispersity is explained in the article Dynamic light scattering (cumulant method subheading). In this sense, the dispersity values are in the range from 0 to 1.
The dispersity (Đ), also known as the polydispersity index (PDI) or heterogeneity index, is a measure of the distribution of molecular mass in a given polymer sample. Đ (PDI) of a polymer is calculated:
Đ = Mw/Mn,
where Mw is the weight-average molecular weight and Mn is the number-average molecular weight. Mn is more sensitive to molecules of low molecular mass, while Mw is more sensitive to molecules of high molecular mass. The dispersity indicates the distribution of individual molecular masses in a batch of polymers. Đ has a value equal to or greater than 1, but as the polymer chains approach uniform chain length, Đ approaches unity (1). For some natural polymers, Đ is taken to be almost unity.
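As an illustration of these definitions, the sketch below computes Mn, Mw and Đ for a small hypothetical population of chains; the masses and chain counts are made up for the example.

    def dispersity(chains):
        # chains: list of (molar_mass, number_of_chains) pairs
        n_total = sum(n for m, n in chains)
        mass_total = sum(n * m for m, n in chains)
        Mn = mass_total / n_total                            # number-average molar mass
        Mw = sum(n * m * m for m, n in chains) / mass_total  # weight-average molar mass
        return Mn, Mw, Mw / Mn

    # hypothetical sample: equal numbers of 10 kg/mol and 30 kg/mol chains
    Mn, Mw, D = dispersity([(10_000, 50), (30_000, 50)])
    print(Mn, Mw, D)  # 20000.0 25000.0 1.25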
Effect of polymerization mechanism
Typical dispersities vary based on the mechanism of polymerization and can be affected by a variety of reaction conditions. In synthetic polymers, it can vary greatly due to reactant ratio, how close the polymerization went to completion, etc. For typical addition polymerization, Đ can range around 5 to 20. For typical step polymerization, most probable values of Đ are around 2; Carothers' equation limits Đ to values of 2 and below.
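One common way to see this limit of 2, assuming the most probable (Flory) distribution for a step-growth polymer at fractional conversion p, is:

\bar{X}_n = \frac{1}{1-p}, \qquad \bar{X}_w = \frac{1+p}{1-p}, \qquad \text{Đ} = \frac{\bar{X}_w}{\bar{X}_n} = 1 + p \leq 2

so Đ only approaches 2 as the conversion p approaches 1.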
Living polymerization, a special case of addition polymerization, leads to values very close to 1. Such is the case also in biological polymers, where the dispersity can be very close or equal to 1, indicating only one length of polymer is present.
Effect of reactor type
The type of reactor in which polymerization takes place can also affect the dispersity of the resulting polymer. For bulk radical polymerization with low (<10%) conversion, anionic polymerization, and step-growth polymerization taken to high conversion (>99%), typical dispersities are summarized below.
With respect to batch and plug flow reactors (PFRs), the dispersities for the different polymerization methods are the same. This is largely because while batch reactors depend entirely on time of reaction, plug flow reactors depend on distance traveled in the reactor and its length. Since time and distance are related by velocity, plug flow reactors can be designed to mirror batch reactors by controlling the velocity and length of the reactor. Continuously stirred-tank reactors (CSTRs) however have a residence time distribution and cannot mirror batch or plug flow reactors, which can cause a difference in the dispersity of final polymer.
The effects of reactor type on dispersity depend largely on the relative timescales associated with the reactor, and with the polymerization type. In conventional bulk free radical polymerization, the dispersity is often controlled by the proportion of chains that terminate via combination or disproportionation. The rate of reaction for free radical polymerization is exceedingly quick, due to the reactivity of the radical intermediates. When these radicals react in any reactor, their lifetimes, and as a result, the time needed for reaction are much shorter than any reactor residence time. For FRPs that have a constant monomer and initiator concentration, such that the DPn is constant, the dispersity of the resulting polymer is between 1.5 and 2.0. As a result, reactor type does not affect dispersity for free radical polymerization reactions in any noticeable amount as long as conversion is low.
For anionic polymerization, a form of living polymerization, the reactive anion intermediates have the ability to remain reactive for a very long time. In batch reactors or PFRs, well-controlled anionic polymerization can result in almost uniform polymer. When introduced into a CSTR however, the residence time distribution for reactants in the CSTR affects the dispersity of the anionic polymer due to the anion lifetime. For a homogeneous CSTR, the residence time distribution is the most probable distribution. Since the anionic polymerization dispersity for a batch reactor or PFR is basically uniform, the molecular weight distribution takes on the distribution of the CSTR residence times, resulting in a dispersity of 2. Heterogeneous CSTRs are similar to homogeneous CSTRs, but the mixing within the reactor is not as good as in a homogeneous CSTR. As a result, there are small sections within the reactor that act as smaller batch reactors within the CSTR and end up with different concentrations of reactants. As a result, the dispersity of the reactor lies between that of a batch and that of a homogeneous CSTR.
Step growth polymerization is most affected by reactor type. To achieve any high molecular weight polymer, the fractional conversion must exceed 0.99, and the dispersity of this reaction mechanism in a batch or PFR is 2.0. Running a step-growth polymerization in a CSTR will allow some polymer chains out of the reactor before achieving high molecular weight, while others stay in the reactor for a long time and continue to react. The result is a much more broad molecular weight distribution, which leads to much larger dispersities. For a homogeneous CSTR, the dispersity is proportional to the square root of the Damköhler number, but for a heterogeneous CSTR, dispersity is proportional to the natural log of the Damköhler number. Thus, for the similar reasons as anionic polymerization, the dispersity for heterogeneous CSTRs lies between that of a batch and a homogeneous CSTR.
Determination methods
Gel permeation chromatography (also known as size-exclusion chromatography)
Light scattering measurements such as dynamic light scattering
Direct measurement via mass spectrometry, using matrix-assisted laser desorption/ionization (MALDI) or electrospray ionization with tandem mass spectrometry (ESI-MS/MS)
See also
References
External links
Introduction to Polymers
Copolymers
Polymer chemistry
Colloidal chemistry
Colloids | Dispersity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,843 | [
"Colloidal chemistry",
"Materials science",
"Colloids",
"Surface science",
"Chemical mixtures",
"Condensed matter physics",
"Polymer chemistry"
] |
308,955 | https://en.wikipedia.org/wiki/Grand%20unification%20energy | The grand unification energy, or the GUT scale, is the energy level above which, it is believed, the electromagnetic force, weak force, and strong force become equal in strength and unify to one force governed by a simple Lie group. The exact value of the grand unification energy (if grand unification is indeed realized in nature) depends on the precise physics present at shorter distance scales not yet explored by experiments. If one assumes the Desert and supersymmetry, it is at around 10^25 eV or 10^16 GeV (≈ 1.6 megajoules).
Some Grand Unified Theories (GUTs) can predict the grand unification energy but, usually, with large uncertainties due to model dependent details such as the choice of the gauge group, the Higgs sector, the matter content or further free parameters. Furthermore, at the moment it seems fair to state that there is no agreed minimal GUT.
The unification of the electroweak force and the strong force with the gravitational force in a so-called "Theory of Everything" requires an even higher energy level which is generally assumed to be close to the Planck scale of 10^19 GeV. In theory, at such short distances, gravity becomes comparable in strength to the other three forces of nature known to date. This statement is modified if there exist additional dimensions of space at intermediate scales. In this case, the strength of gravitational interactions increases faster at smaller distances and the energy scale at which all known forces of nature unify can be considerably lower. This effect is exploited in models of large extra dimensions.
The most powerful collider to date, the Large Hadron Collider (LHC), is designed to reach about 10^4 GeV in proton–proton collisions. The scale 10^16 GeV is only a few orders of magnitude below the Planck energy of 10^19 GeV, and thus not within reach of man-made Earth-bound colliders.
See also
Desert (particle physics)
Standard Model
Timeline of the Big Bang
References
Grand Unified Theory
Physics beyond the Standard Model | Grand unification energy | [
"Physics"
] | 405 | [
"Unsolved problems in physics",
"Particle physics",
"Grand Unified Theory",
"Particle physics stubs",
"Physics beyond the Standard Model"
] |
309,075 | https://en.wikipedia.org/wiki/Antinuclear%20antibody | Antinuclear antibodies (ANAs, also known as antinuclear factor or ANF) are autoantibodies that bind to contents of the cell nucleus. In normal individuals, the immune system produces antibodies to foreign proteins (antigens) but not to human proteins (autoantigens). In some cases, antibodies to human antigens are produced; these are known as autoantibodies.
There are many subtypes of ANAs such as anti-Ro antibodies, anti-La antibodies, anti-Sm antibodies, anti-nRNP antibodies, anti-Scl-70 antibodies, anti-dsDNA antibodies, anti-histone antibodies, antibodies to nuclear pore complexes, anti-centromere antibodies and anti-sp100 antibodies. Each of these antibody subtypes binds to different proteins or protein complexes within the nucleus. They are found in many disorders including autoimmunity, cancer and infection, with different prevalences of antibodies depending on the condition. This allows the use of ANAs in the diagnosis of some autoimmune disorders, including systemic lupus erythematosus, Sjögren syndrome, scleroderma, mixed connective tissue disease, polymyositis, dermatomyositis, autoimmune hepatitis and drug-induced lupus.
The ANA test detects the autoantibodies present in an individual's blood serum. The common tests used for detecting and quantifying ANAs are indirect immunofluorescence and enzyme-linked immunosorbent assay (ELISA). In immunofluorescence, the level of autoantibodies is reported as a titre. This is the highest dilution of the serum at which autoantibodies are still detectable. Positive autoantibody titres at a dilution equal to or greater than 1:160 are usually considered as clinically significant. Positive titres of less than 1:160 are present in up to 20% of the healthy population, especially the elderly. Although positive titres of 1:160 or higher are strongly associated with autoimmune disorders, they are also found in 5% of healthy individuals. Autoantibody screening is useful in the diagnosis of autoimmune disorders and monitoring levels helps to predict the progression of disease. A positive ANA test is seldom useful if other clinical or laboratory data supporting a diagnosis are not present.
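As a simple illustration of how a titre is read from a doubling-dilution series, the sketch below picks the highest dilution that still gives a detectable signal; the dilution values and readings are hypothetical, and ana_titre is an illustrative helper rather than a standard laboratory routine.

    def ana_titre(dilution_results):
        # dilution_results maps a dilution factor (e.g. 40 for 1:40) to whether
        # fluorescence is still detectable at that dilution
        positive = [d for d, detected in sorted(dilution_results.items()) if detected]
        return positive[-1] if positive else None

    # hypothetical doubling-dilution series read by a technician
    series = {40: True, 80: True, 160: True, 320: False, 640: False}
    titre = ana_titre(series)
    print(f"titre 1:{titre}")  # titre 1:160
    print("clinically significant" if titre and titre >= 160 else "low positive or negative")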
Immunity and autoimmunity
The human body has many defense mechanisms against pathogens, one of which is humoral immunity. This defence mechanism produces antibodies (large glycoproteins) in response to an immune stimulus. Many cells of the immune system are required for this process, including lymphocytes (T-cells and B-cells) and antigen presenting cells. These cells coordinate an immune response upon the detection of foreign proteins (antigens), producing antibodies that bind to these antigens. In normal physiology, lymphocytes that recognise human proteins (autoantigens) either undergo programmed cell death (apoptosis) or become non-functional. This self-tolerance means that lymphocytes should not incite an immune response against human cellular antigens. Sometimes, however, this process malfunctions and antibodies are produced against human antigens, which may lead to autoimmune disease.
ANA subtypes
ANAs are found in many disorders, as well as some healthy individuals. These disorders include: systemic lupus erythematosus (SLE), rheumatoid arthritis, Sjögren syndrome, scleroderma, polymyositis, dermatomyositis, primary biliary cirrhosis, drug induced lupus, autoimmune hepatitis, multiple sclerosis, discoid lupus, thyroid disease, antiphospholipid syndrome, juvenile idiopathic arthritis, psoriatic arthritis, juvenile dermatomyositis, idiopathic thrombocytopaenic purpura, infection and cancer. These antibodies can be subdivided according to their specificity, and each subset has different propensities for specific disorders.
Extractable nuclear antigens
Extractable nuclear antigens (ENA) are a group of autoantigens that were originally identified as antibody targets in people with autoimmune disorders. They are termed ENA because they can be extracted from the cell nucleus with saline. The ENAs consist of ribonucleoproteins and non-histone proteins, named by either the name of the donor who provided the prototype serum (Sm, Ro, La, Jo), or the name of the disease setting in which the antibodies were found (SS-A, SS-B, Scl-70).
Anti-Ro/SS-A and anti-La/SS-B
Anti-Ro and anti-La antibodies, also known as SS-A and SS-B, respectively, are commonly found in primary Sjögren's syndrome, an autoimmune disorder that affects the exocrine glands. The presence of both antibodies is found in 30–60% of Sjögren's syndrome, anti-Ro antibodies alone are found in 50–70% of Sjögren's syndrome and 30% of SLE with cutaneous involvement, and anti-La antibodies are rarely found in isolation. Anti-La antibodies are also found in SLE; however, Sjögren's syndrome is normally also present. Anti-Ro antibodies are also found less frequently in other disorders including autoimmune liver diseases, coeliac disease, autoimmune rheumatic diseases, cardiac neonatal lupus erythematosus and polymyositis. During pregnancy, anti-Ro antibodies can cross the placenta and cause heart block and neonatal lupus in babies. In Sjögren's syndrome, anti-Ro and anti-La antibodies correlate with early onset, increased disease duration, parotid gland enlargement, disease outside the glands and infiltration of glands by lymphocytes. Anti-Ro antibodies are specific to components of the Ro-RNP complex, comprising 45kDa, 52kDa, 54kDa and 60kDa proteins and RNA. The 60kDa DNA/RNA binding protein and 52kDa T-cell regulatory protein are the best characterised antigens of anti-Ro antibodies. Collectively, these proteins are part of a ribonucleoprotein (RNP) complex that associate with the human Y RNAs, hY1-hY5. The La antigen is a 48kDa transcription termination factor of RNA polymerase III, which associates with the Ro-RNP complex.
The mechanism of antibody production in Sjögren's syndrome is not fully understood, but apoptosis (programmed cell death) and molecular mimicry may play a role. The Ro and La antigens are expressed on the surface of cells undergoing apoptosis and may cause the inflammation within the salivary gland by interaction with cells of the immune system. The antibodies may also be produced through molecular mimicry, where cross reactive antibodies bind to both virus and human proteins. This may occur with one of the antigens, Ro or La, and may subsequently produce antibodies to other proteins through a process known as epitope spreading. The retroviral gag protein shows similarity to the La protein and is proposed as a possible example for molecular mimicry in Sjögren's syndrome.
Anti-Sm
Anti-Smith (Anti-Sm) antibodies are a very specific marker for SLE. Approximately 99% of individuals without SLE lack anti-Sm antibodies, but only 20% of people with SLE have the antibodies. They are associated with central nervous system involvement, kidney disease, lung fibrosis and pericarditis in SLE, but they are not associated with disease activity. The antigens of the anti-Sm antibodies are the core units of the small nuclear ribonucleoproteins (snRNPs), termed A to G, and will bind to the U1, U2, U4, U5 and U6 snRNPs. Most commonly, the antibodies are specific for the B, B' and D units. Molecular and epidemiological studies suggest that anti-Sm antibodies may be induced by molecular mimicry because the protein shows some similarity to Epstein-Barr virus proteins.
Anti-nRNP/anti-U1-RNP
Anti-nuclear ribonucleoprotein (anti-nRNP) antibodies, also known as anti-U1-RNP antibodies, are found in 30–40% of SLE. They are often found with anti-Sm antibodies, but they may be associated with different clinical associations. In addition to SLE, these antibodies are highly associated with mixed connective tissue disease. Anti-nRNP antibodies recognise the A and C core units of the snRNPs and because of this they primarily bind to the U1-snRNP. The immune response to RNP may be caused by the presentation of the nuclear components on the cell membrane in apoptotic blebs. Molecular mimicry has also been suggested as a possible mechanism for the production of antibodies to these proteins because of similarity between U1-RNP polypeptides and Epstein-Barr virus polypeptides.
Anti-Scl-70/anti-topoisomerase I
Anti-Scl-70 antibodies are linked to scleroderma. The sensitivity of the antibodies for scleroderma is approximately 34%, but is higher for cases with diffuse cutaneous involvement (40%), and lower for limited cutaneous involvement (10%). The specificity of the antibodies is 98% and 99.6% in other rheumatic diseases and normal individuals, respectively. In addition to scleroderma, these antibodies are found in approximately 5% of individuals with SLE. The antigenic target of anti-Scl-70 antibodies is topoisomerase I.
Anti-Jo-1
Although anti-Jo-1 antibodies are often included with ANAs, they are actually antibodies to the cytoplasmic protein, Histidyl-tRNA synthetase – an aminoacyl-tRNA synthetase essential for the synthesis of histidine loaded tRNA. They are highly associated with polymyositis and dermatomyositis, and are rarely found in other connective tissue diseases. Around 20–40% of polymyositis is positive for Jo-1 antibodies and most will have interstitial lung disease, HLA-DR3 and HLA-DRw52 human leukocyte antigen (HLA) markers; collectively known as Jo-1 syndrome.
Anti-dsDNA
Anti-double stranded DNA (anti-dsDNA) antibodies are highly associated with SLE. They are a very specific marker for the disease, with some studies quoting nearly 100%. Data on sensitivity ranges from 25 to 85%. Anti-dsDNA antibody levels, known as titres, correlate with disease activity in SLE; high levels indicate more active lupus. The presence of anti-dsDNA antibodies is also linked with lupus nephritis and there is evidence they are the cause. Some anti-dsDNA antibodies are cross reactive with other antigens found on the glomerular basement membrane (GBM) of the kidney, such as heparan sulphate, collagen IV, fibronectin and laminin. Binding to these antigens within the kidney could cause inflammation and complement fixation, resulting in kidney damage. Presence of high DNA-binding and low C3 levels have been shown to have extremely high predictive value (94%) for the diagnosis of SLE. It is also possible that the anti-dsDNA antibodies are internalised by cells when they bind membrane antigens and then are displayed on the cell surface. This could promote inflammatory responses by T-cells within the kidney. It is important to note that not all anti-dsDNA antibodies are associated with lupus nephritis and that other factors can cause this symptom in their absence. The antigen of anti-dsDNA antibodies is double stranded DNA.
Anti-histone antibodies
Anti-histone antibodies are found in the serum of up to 75–95% of people with drug-induced lupus and 75% of idiopathic SLE. Unlike anti-dsDNA antibodies in SLE, these antibodies do not fix complement. Although they are most commonly found in drug induced lupus, they are also found in some cases of SLE, scleroderma, rheumatoid arthritis and undifferentiated connective tissue disease. Many drugs are known to cause drug induced lupus and they produce various antigenic targets within the nucleosome that are often cross reactive with several histone proteins and DNA. Procainamide causes a form of drug-induced lupus that produces antibodies to the histone H2A and H2B complex.
Anti-gp210 and anti-p62
Both anti-glycoprotein-210 (anti-gp210) and anti-nucleoporin 62 (anti-p62) antibodies are antibodies to components of the nuclear membrane and are found in primary biliary cirrhosis (PBC). Each antibody is present in approximately 25–30% of PBC. The antigens of both antibodies are constituents of the nuclear membrane. gp210 is a 200kDa protein involved in anchoring components of the nuclear pore to the nuclear membrane. The p62 antigen is a 60kDa nuclear pore complex.
Anti-centromere antibodies
Anti-centromere antibodies are associated with limited cutaneous systemic sclerosis, also known as CREST syndrome, primary biliary cirrhosis and proximal scleroderma. There are six known antigens, which are all associated with the centromere; CENP-A to CENP-F. CENP-A is a 17kDa histone H3-like protein. CENP-B is an 80kDa DNA binding protein involved in the folding of heterochromatin. CENP-C is a 140kDa protein involved in kinetochore assembly. CENP-D is a 50kDa protein of unknown function, but may be homologous to another protein involved in chromatin condensation, RCC1. CENP-E is a 312kDa protein from the kinesin motor protein family. CENP-F is a 367kDa protein from the nuclear matrix that associates with the kinetochore in late G2 phase during mitosis. CENP-A, B and C antibodies are most commonly found (16–42% of systemic sclerosis) and are associated with Raynaud's phenomenon, telangiectasias, lung involvement and early onset in systemic sclerosis.
Anti-sp100
Anti-sp100 antibodies are found in approximately 20–30% of primary biliary cirrhosis (PBC). They are found in few individuals without PBC, and therefore are a very specific marker of the disease. The sp100 antigen is found within nuclear bodies; large protein complexes in the nucleus that may have a role in cell growth and differentiation.
Anti-PM-Scl
Anti-PM-Scl antibodies are found in up to 50% of polymyositis/systemic sclerosis (PM/SSc) overlap syndrome. Around 80% of individuals with antibodies present in their blood serum will have the disorder. The presence of the antibodies is linked to limited cutaneous involvement of PM/SSc overlap syndrome. The antigenic targets of the antibodies are components of the RNA-processing exosome complex in the nucleolus. There are ten proteins in this complex and antibodies to eight of them are found at varying frequencies; PM/Scl-100 (70–80%), PM/Scl-75 (46–80%), hRrp4 (50%), hRrp42 (21%), hRrp46 (18%), hCs14 (14%), hRrp41 (10%) and hRrp40 (7%).
Anti-DFS70 antibodies
Anti-DFS70 antibodies generate a dense fine speckled pattern in indirect immunofluorescence and are found in normal individuals and in various conditions, but are not associated with a systemic autoimmune pathology. Therefore, they can be used to help rule out such conditions in ANA-positive individuals. A significant number of patients are diagnosed with systemic lupus erythematosus or undifferentiated connective tissue disease largely based on a positive ANA. In cases where no defined autoantibody can be detected (e.g. anti-ENA antibodies), testing for anti-DFS70 antibodies is recommended to verify the diagnosis. Anti-DFS70 antibody tests are available as CE-marked tests. To date, no FDA-cleared assay is available.
ANA test
The presence of ANAs in blood can be confirmed by a screening test. Although there are many tests for the detection of ANAs, the most common tests used for screening are indirect immunofluorescence and enzyme-linked immunosorbent assay (ELISA). Following detection of ANAs, various subtypes are determined.
Indirect immunofluorescence
Indirect immunofluorescence is one of the most commonly used tests for ANAs. Typically, HEp-2 cells are used as a substrate to detect the antibodies in human serum. Microscope slides are coated with HEp-2 cells and the serum is incubated with the cells. If the targeted antibodies are present, they will bind to the antigens on the cells; in the case of ANAs, the antibodies will bind to the nucleus. These can be visualised by adding a fluorescently tagged (usually FITC or rhodamine B) anti-human antibody that binds to the antibodies. The molecule will fluoresce when a specific wavelength of light shines on it, which can be seen under the microscope. Depending on the antibody present in the human serum and the localisation of the antigen in the cell, distinct patterns of fluorescence will be seen on the HEp-2 cells. Levels of antibodies are analysed by performing dilutions on blood serum. An ANA test is considered positive if fluorescence is seen at a titre of 1:40/1:80. Higher titres are more clinically significant as low positives (≤1:160) are found in up to 20% of healthy individuals, especially the elderly. Only around 5% of the healthy population have ANA titres of 1:160 or higher.
HEp-2
Until around 1975, when HEp-2 cells were introduced, animal tissue was used as the standard substrate for immunofluorescence. HEp-2 cells are currently one of the most common substrates for ANA detection by immunofluorescence.
Originally derived from a laryngeal carcinoma, the cell line was contaminated and displaced by HeLa cells, and has now been identified as actually being HeLa cells.
They are superior to the previously used animal tissues because of their large size and the high rate of mitosis (cell division) in the cell line. This allows the detection of antibodies to mitosis-specific antigens, such as centromere antibodies. They also allow identification of anti-Ro antibodies, because acetone is used for fixation of the cells (other fixatives can wash the antigen away).
There are many nuclear staining patterns seen on HEp-2 cells: homogeneous, speckled, nucleolar, nuclear membranous, centromeric, nuclear dot and pleomorphic. The homogeneous pattern is seen when the condensed chromosomes and interphase chromatin stain. This pattern is associated with anti-dsDNA antibodies, antibodies to nucleosomal components, and anti-histone antibodies. There are two speckled patterns: fine and coarse. The fine speckled pattern has fine nuclear staining with unstained metaphase chromatin, which is associated with anti-Ro and anti-La antibodies. The coarse staining pattern has coarse granular nuclear staining, caused by anti-U1-RNP and anti-Sm antibodies. The nucleolar staining pattern is associated with many antibodies including anti-Scl-70, anti-PM-Scl, anti-fibrillarin and anti-Th/To. Nuclear membrane staining appears as a fluorescent ring around the cell nucleus and is produced by anti-gp210 and anti-p62 antibodies. The centromere pattern shows multiple nuclear dots in interphase and mitotic cells, corresponding to the number of chromosomes in the cell. Nuclear dot patterns show between 13 and 25 nuclear dots in interphase cells and are produced by anti-sp100 antibodies. Pleomorphic pattern is caused by antibodies to the proliferating cell nuclear antigen. Indirect immunofluorescence has been shown to be slightly superior compared to ELISA in detection of ANA from HEp-2 cells.
Crithidia luciliae
Crithidia luciliae are haemoflagellate single-celled protists. They are used as a substrate in immunofluorescence for the detection of anti-dsDNA antibodies. They possess an organelle known as the kinetoplast, which is a large mitochondrion with a network of interlocking circular dsDNA molecules. After incubation with serum containing anti-dsDNA antibodies and fluorescent-labelled anti-human antibodies, the kinetoplast will fluoresce. The lack of other nuclear antigens in this organelle means that using C. luciliae as a substrate allows for the specific detection of anti-dsDNA antibodies.
ELISA
Enzyme-linked immunosorbent assay (ELISA) uses antigen-coated microtitre plates for the detection of ANAs. Each well of a microtitre plate is coated with either a single antigen or multiple antigens to detect specific antibodies or to screen for ANAs, respectively. The antigens are either from cell extracts or recombinant. Blood serum is incubated in the wells of the plate and is washed out. If antibodies that bind to antigen are present then they will remain after washing. A secondary anti-human antibody conjugated to an enzyme such as horseradish peroxidase is added. The enzyme reaction will produce a change in colour of the solution that is proportional to the amount of antibody bound to the antigen. There are significant differences in the detection of ANA by immunofluorescence and different ELISA kits and there is only a marginal agreement between these. A clinician must be familiar with the differences in order to evaluate the outcomes of the various assays.
Sensitivity
The following table lists the sensitivity of different types of ANAs for different diseases.
Some ANAs appear in several types of disease, resulting in lower specificity of the test. For example, IgM rheumatoid factor (IgM-RF) has been shown to cross-react with ANA, giving falsely positive immunofluorescence. Positive ANA as well as anti-DNA antibodies have been reported in patients with autoimmune thyroid disease. ANA can have a positive test result in up to 45% of people with autoimmune thyroid conditions or rheumatoid arthritis and up to 15% of people with HIV or hepatitis C. As per the Lupus Foundation of America, "about 5% of the general population will have a positive ANA. However, at least 95% of the people who have a positive ANA do not have lupus. A positive ANA test can sometimes run in families, even if family members have no evidence of lupus." On the other hand, they say, although 95% of the patients who actually have lupus test positive for ANA, "Only a small percentage have a negative ANA, and many of those have other antibodies (such as anti-phospholipid antibodies, anti-Ro, anti-SSA) or their ANA converted from positive to negative from steroids, cytotoxic medications, or uremia (kidney failure)."
History
The LE cell was discovered in bone marrow in 1948 by Hargraves et al. In 1957 Holborow et al. first demonstrated ANA using indirect immunofluorescence. This was the first indication that processes affecting the cell nucleus were responsible for SLE. In 1959 it was discovered that serum from individuals with SLE contained antibodies that precipitated with saline extracts of nuclei, known as extractable nuclear antigens (ENAs). This led to the characterisation of ENA antigens and their respective antibodies. Thus, anti-Sm and anti-RNP antibodies were discovered in 1966 and 1971, respectively. In the 1970s, the anti-Ro/anti-SS-A and anti-La/anti-SS-B antibodies were discovered. The Scl-70 antibody was known to be a specific antibody to scleroderma in 1979, however the antigen (topoisomerase-I) was not characterised until 1986. The Jo-1 antigen and antibody were characterised in 1980.
See also
Anti-neutrophil cytoplasmic antibody (ANCA)
Rheumatoid factor
References
External links
Autoimmunityblog – HEp-2 ANA summary
Chemical pathology
Autoantibodies
Antibodies
Immunologic tests | Antinuclear antibody | [
"Chemistry",
"Biology"
] | 5,268 | [
"Biochemistry",
"Chemical pathology",
"Immunologic tests"
] |
309,221 | https://en.wikipedia.org/wiki/Lubrication | Lubrication is the process or technique of using a lubricant to reduce friction and wear and tear in a contact between two surfaces. The study of lubrication is a discipline in the field of tribology.
Lubrication mechanisms such as fluid-lubricated systems are designed so that the applied load is partially or completely carried by hydrodynamic or hydrostatic pressure, which reduces solid body interactions (and consequently friction and wear). Depending on the degree of surface separation, different lubrication regimes can be distinguished.
Adequate lubrication allows smooth, continuous operation of machine elements, reduces the rate of wear, and prevents excessive stresses or seizures at bearings. By repelling water and other substances, it also reduces corrosion. When lubrication breaks down, components can rub destructively against each other, causing heat, local welding, destructive damage and failure.
Lubrication mechanisms
Fluid-lubricated systems
As the load increases on the contacting surfaces, distinct situations can be observed with respect to the mode of lubrication, which are called lubrication regimes:
Fluid film lubrication is the lubrication regime in which, through viscous forces, the load is fully supported by the lubricant within the space or gap between the parts in motion relative to one another, and solid-to-solid contact is avoided.
In hydrostatic lubrication, external pressure is applied to the lubricant in the bearing to maintain the fluid lubricant film where it would otherwise be squeezed out.
In hydrodynamic lubrication, the motion of the contacting surfaces, as well as the design of the bearing, pumps lubricant around the bearing to maintain the lubricating film. This design of bearing may wear when started, stopped or reversed, as the lubricant film breaks down. The basis of the hydrodynamic theory of lubrication is the Reynolds equation; a common form of this equation is sketched after this list of regimes. The governing equations of the hydrodynamic theory of lubrication and some analytical solutions can be found in the reference.
Elastohydrodynamic lubrication: Mostly for nonconforming surfaces or higher load conditions, the bodies suffer elastic strains at the contact. Such strain creates a load-bearing area, which provides an almost parallel gap for the fluid to flow through. Much as in hydrodynamic lubrication, the motion of the contacting bodies generates a flow induced pressure, which acts as the bearing force over the contact area. In such high pressure regimes, the viscosity of the fluid may rise considerably. At full film elastohydrodynamic lubrication, the generated lubricant film completely separates the surfaces. Due to the strong coupling between lubricant hydrodynamic action and the elastic deformation in contacting solids, this regime of lubrication is an example of Fluid-structure interaction. The classical elastohydrodynamic theory considers Reynolds equation and the elastic deflection equation to solve for the pressure and deformation in this lubrication regime. Contact between raised solid features, or asperities, can also occur, leading to a mixed-lubrication or boundary lubrication regime.
Boundary lubrication is defined as the regime in which the load is carried by the surface asperities (high points) rather than by the lubricant. This is the effect that makes Ultra-high-molecular-weight polyethylene "self-lubricating".
Boundary film lubrication: The hydrodynamic effects are negligible. The bodies come into closer contact at their asperities (high points); the heat developed by the local pressures causes a condition which is called stick-slip, and some asperities break off. At the elevated temperature and pressure conditions, chemically reactive constituents of the lubricant react with the contact surface, forming a highly resistant tenacious layer or film on the moving solid surfaces (boundary film) which is capable of supporting the load and major wear or breakdown is avoided.
Mixed lubrication: This regime is in between the full film elastohydrodynamic and boundary lubrication regimes. The generated lubricant film is not enough to separate the bodies completely, but hydrodynamic effects are considerable.
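For reference, a commonly quoted textbook form of the Reynolds equation for a thin, incompressible film of constant viscosity μ and thickness h(x, y, t), with one surface sliding at speed U in the x direction, is (stated here as a standard result rather than taken from the reference cited above):

\frac{\partial}{\partial x}\left(\frac{h^{3}}{\mu}\frac{\partial p}{\partial x}\right) + \frac{\partial}{\partial y}\left(\frac{h^{3}}{\mu}\frac{\partial p}{\partial y}\right) = 6U\frac{\partial h}{\partial x} + 12\frac{\partial h}{\partial t}

Solving this equation for the film pressure p over the bearing geometry gives the hydrodynamic load capacity of the lubricant film.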
Besides supporting the load the lubricant may have to perform other functions as well, for instance it may cool the contact areas and remove wear products. While carrying out these functions the lubricant is constantly replaced from the contact areas either by the relative movement (hydrodynamics) or by externally induced forces.
Lubrication is required for correct operation of mechanical systems such as pistons, pumps, cams, bearings, turbines, gears, roller chains, cutting tools etc. where without lubrication the pressure between the surfaces in proximity would generate enough heat for rapid surface damage which in a coarsened condition may literally weld the surfaces together, causing seizure.
In some applications, such as piston engines, the film between the piston and the cylinder wall also seals the combustion chamber, preventing combustion gases from escaping into the crankcase.
If an engine required pressurised lubrication of, say, plain bearings, there would be an oil pump and an oil filter. On early engines (such as a Sabb marine diesel), where a pressurised feed was not required, splash lubrication would suffice.
See also
Automatic lubrication system - A system that delivers controlled amounts of lubricant to multiple locations on a machine while the machine is operating.
References
External links
Machinery Lubrication magazine
International Council for Machinery Lubrication
Tribology
Lubricants | Lubrication | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,160 | [
"Tribology",
"Mechanical engineering",
"Surface science",
"Materials science"
] |
309,249 | https://en.wikipedia.org/wiki/Payload%20fraction | In aerospace engineering, payload fraction is a common term used to characterize the efficiency of a particular design. The payload fraction is the quotient of the payload mass and the total vehicle mass at the start of its journey. It is a function of specific impulse, propellant mass fraction and the structural coefficient. In aircraft, loading less than full fuel for shorter trips is standard practice to reduce weight and fuel consumption. For this reason, the useful load fraction calculates a similar number, but it is based on the combined weight of the payload and fuel together in relation to the total weight.
Propeller-driven airliners had useful load fractions on the order of 25–35%. Modern jet airliners have considerably higher useful load fractions, on the order of 45–55%.
For orbital rockets the payload fraction is between 1% and 5%, while the useful load fraction is perhaps 90%.
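A minimal sketch of the two ratios as defined above; the masses are made-up illustrative figures, not data for any specific vehicle.

    def payload_fraction(payload_mass, total_mass):
        # payload mass divided by total vehicle mass at the start of the journey
        return payload_mass / total_mass

    def useful_load_fraction(payload_mass, fuel_mass, total_mass):
        # payload plus fuel, divided by total vehicle mass
        return (payload_mass + fuel_mass) / total_mass

    # illustrative airliner-like figures, masses in kg
    print(payload_fraction(20_000, 180_000))              # ~0.11
    print(useful_load_fraction(20_000, 70_000, 180_000))  # 0.5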
Examples
For payload fractions and fuel fractions in aviation, see Fuel Fraction.
See also
Tsiolkovsky rocket equation
References
Astrodynamics
Aerospace engineering | Payload fraction | [
"Engineering"
] | 214 | [
"Astrodynamics",
"Aerospace engineering"
] |
27,080,011 | https://en.wikipedia.org/wiki/Poul%20S.%20Jessen | Poul S. Jessen holds the position of Professor of Optical Sciences with a joint appointment in Physics at the University of Arizona. He is a founding member of the Center for Quantum Information and Control. He has done experimental research in the areas of optical lattices, quantum information, quantum chaos, and quantum optics.
Education
Jessen received a BSc in physics and chemistry from University of Aarhus, Denmark in 1987, and a PhD from Aarhus in 1993. While studying at Aarhus, Jessen travelled to the United States and worked with William Daniel Phillips at the National Institute of Standards and Technology. When his original doctoral thesis adviser at Aarhus retired, Phillips took over as his thesis adviser.
Career
In 1990 and 1992, he was a guest researcher at NIST; in 1993, he was a postdoctoral fellow at University of Maryland; from 1993 to 1998, he was an assistant professor at the University of Arizona; from 1998 to 2002, he was associate professor at the University of Arizona; and from 2002 he has been a full professor at the University of Arizona.
He has co-authored more than twenty papers.
References
External links
Poul Jessen's profile at the University of Arizona
Living people
Year of birth missing (living people)
21st-century Danish physicists
Quantum physicists
Optical physicists
Aarhus University alumni
University of Arizona faculty | Poul S. Jessen | [
"Physics"
] | 265 | [
"Quantum physicists",
"Quantum mechanics"
] |
27,082,137 | https://en.wikipedia.org/wiki/Method%20of%20steepest%20descent | In mathematics, the method of steepest descent or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point (saddle point), in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace’s method is used with real integrals.
The integral to be estimated is often of the form
\int_C f(z) e^{\lambda g(z)} \, dz,
where C is a contour, and λ is large. One version of the method of steepest descent deforms the contour of integration C into a new path integration C′ so that the following conditions hold:
C′ passes through one or more zeros of the derivative g′(z),
the imaginary part of g(z) is constant on C′.
The method of steepest descent was first published by , who used it to estimate Bessel functions and pointed out that it occurred in the unpublished note by about hypergeometric functions. The contour of steepest descent has a minimax property, see . described some other unpublished notes of Riemann, where he used this method to derive the Riemann–Siegel formula.
Basic idea
The method of steepest descent is a method to approximate a complex integral of the form
I(\lambda) = \int_C f(z) e^{\lambda g(z)} \, dz
for large \lambda, where f(z) and g(z) are analytic functions of z. Because the integrand is analytic, the contour C can be deformed into a new contour C' without changing the integral. In particular, one seeks a new contour on which the imaginary part of g(z), denoted \operatorname{Im}(g(z)), is constant (\operatorname{Re}(g(z)) denotes the real part). Then
I(\lambda) = e^{i\lambda \operatorname{Im}(g(z))} \int_{C'} f(z) e^{\lambda \operatorname{Re}(g(z))} \, dz,
and the remaining integral can be approximated with other methods like Laplace's method.
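A minimal numerical sketch of the leading-order estimate on a simple example where the steepest descent contour is the real axis itself, so the method reduces to Laplace's method; the integral, the choice g(s) = ln s − s and the helper names are assumptions made for illustration only.

    import math
    from scipy.integrate import quad

    # I(lam) = integral_0^inf exp(lam*(ln s - s)) ds has a single saddle point at s0 = 1,
    # where g(1) = -1 and g''(1) = -1, so the leading saddle-point estimate is
    # I(lam) ~ sqrt(2*pi/lam) * exp(-lam) as lam -> infinity.

    def numeric(lam):
        # factor out exp(lam*g(1)) = exp(-lam) so the integrand peaks at 1.0 (numerically stable)
        val, _ = quad(lambda s: math.exp(lam * (math.log(s) - s + 1.0)), 0.0, math.inf)
        return val * math.exp(-lam)

    def saddle_estimate(lam):
        return math.sqrt(2.0 * math.pi / lam) * math.exp(-lam)

    for lam in (5.0, 20.0, 80.0):
        print(lam, numeric(lam), saddle_estimate(lam))  # the two values agree to O(1/lam)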
Etymology
The method is called the method of steepest descent because for analytic g(z), constant phase contours are equivalent to steepest descent contours.
If g(z) = X(x, y) + iY(x, y) is an analytic function of z = x + iy, it satisfies the Cauchy–Riemann equations
\frac{\partial X}{\partial x} = \frac{\partial Y}{\partial y}, \qquad \frac{\partial X}{\partial y} = -\frac{\partial Y}{\partial x}.
Then
\nabla X \cdot \nabla Y = \frac{\partial X}{\partial x}\frac{\partial Y}{\partial x} + \frac{\partial X}{\partial y}\frac{\partial Y}{\partial y} = 0,
so contours of constant phase are also contours of steepest descent.
A simple estimate
Let f, S : \mathbb{C}^n \to \mathbb{C} and C \subset \mathbb{C}^n. If
M = \sup_{x \in C} \operatorname{Re}(S(x)) < \infty,
where \operatorname{Re}(\cdot) denotes the real part, and there exists a positive real number \lambda_0 such that
\int_C \left| f(x) e^{\lambda_0 S(x)} \right| \, |dx| < \infty,
then the following estimate holds:
\left| \int_C f(x) e^{\lambda S(x)} \, dx \right| \leq \text{const} \cdot e^{\lambda M}, \qquad \forall \lambda \in \mathbb{R}, \quad \lambda \geq \lambda_0.
Proof of the simple estimate:
The case of a single non-degenerate saddle point
Basic notions and notation
Let x be a complex n-dimensional vector, and
S''_{xx}(x) \equiv \left( \frac{\partial^2 S(x)}{\partial x_i \, \partial x_j} \right), \qquad 1 \leq i, j \leq n,
denote the Hessian matrix for a function S(x). If
\boldsymbol{\varphi}(x) = (\varphi_1(x), \varphi_2(x), \ldots, \varphi_k(x))
is a vector function, then its Jacobian matrix is defined as
\boldsymbol{\varphi}_x'(x) \equiv \left( \frac{\partial \varphi_i(x)}{\partial x_j} \right), \qquad 1 \leq i \leq k, \quad 1 \leq j \leq n.
A non-degenerate saddle point, z^0 \in \mathbb{C}^n, of a holomorphic function S(z) is a critical point of the function (i.e., \nabla S(z^0) = 0) where the function's Hessian matrix has a non-vanishing determinant (i.e., \det S''_{zz}(z^0) \neq 0).
The following is the main tool for constructing the asymptotics of integrals in the case of a non-degenerate saddle point:
Complex Morse lemma
The Morse lemma for real-valued functions generalizes as follows for holomorphic functions: near a non-degenerate saddle point z^0 of a holomorphic function S(z), there exist coordinates in terms of which S(z) - S(z^0) is exactly quadratic. To make this precise, let S be a holomorphic function with domain W \subset \mathbb{C}^n, and let z^0 in W be a non-degenerate saddle point of S, that is, \nabla S(z^0) = 0 and \det S''_{zz}(z^0) \neq 0. Then there exist neighborhoods U \subset W of z^0 and V \subset \mathbb{C}^n of w = 0, and a bijective holomorphic function \boldsymbol{\varphi} : V \to U with \boldsymbol{\varphi}(0) = z^0 such that
S(\boldsymbol{\varphi}(w)) = S(z^0) + \frac{1}{2} \sum_{j=1}^n \mu_j w_j^2, \qquad \forall w \in V.
Here, the \mu_j are the eigenvalues of the matrix S''_{zz}(z^0).
The asymptotic expansion in the case of a single non-degenerate saddle point
Assume
f(x) and S(x) are holomorphic functions in an open, bounded, and simply connected set \Omega_x \subset \mathbb{C}^n such that the set I_x = \Omega_x \cap \mathbb{R}^n is connected;
\operatorname{Re}(S(x)) has a single maximum: \max_{x \in I_x} \operatorname{Re}(S(x)) = \operatorname{Re}(S(x^0)) for exactly one point x^0 \in I_x;
x^0 is a non-degenerate saddle point (i.e., \nabla S(x^0) = 0 and \det S''_{xx}(x^0) \neq 0).
Then, the following asymptotic holds
I(\lambda) \equiv \int_{I_x} f(x) e^{\lambda S(x)} \, dx = \left( \frac{2\pi}{\lambda} \right)^{n/2} e^{\lambda S(x^0)} \left( f(x^0) + O\!\left(\lambda^{-1}\right) \right) \prod_{j=1}^n \left( -\mu_j \right)^{-1/2}, \qquad \lambda \to \infty,
where \mu_j are the eigenvalues of the Hessian S''_{xx}(x^0) and \left( -\mu_j \right)^{-1/2} are defined with arguments
\left| \arg \sqrt{-\mu_j} \right| < \frac{\pi}{4}.
This statement is a special case of more general results presented in Fedoryuk (1987).
Equation (8) can also be written as
where the branch of
is selected as follows
Consider important special cases:
If is real valued for real and in (aka, the multidimensional Laplace method), then
If is purely imaginary for real (i.e., for all in ) and in (aka, the multidimensional stationary phase method), then where denotes the signature of matrix , which equals to the number of negative eigenvalues minus the number of positive ones. It is noteworthy that in applications of the stationary phase method to the multidimensional WKB approximation in quantum mechanics (as well as in optics), is related to the Maslov index see, e.g., and .
The case of multiple non-degenerate saddle points
If the function has multiple isolated non-degenerate saddle points, i.e.,
where
is an open cover of , then the calculation of the integral asymptotic is reduced to the case of a single saddle point by employing the partition of unity. The partition of unity allows us to construct a set of continuous functions such that
Whence,
Therefore as we have:
where equation (13) was utilized at the last stage, and the pre-exponential function at least must be continuous.
The other cases
When and , the point is called a degenerate saddle point of a function .
Calculating the asymptotic of
when is continuous, and has a degenerate saddle point, is a very rich problem, whose solution heavily relies on the catastrophe theory. Here, the catastrophe theory replaces the Morse lemma, valid only in the non-degenerate case, to transform the function into one of the multitude of canonical representations. For further details see, e.g., and .
Integrals with degenerate saddle points naturally appear in many applications including optical caustics and the multidimensional WKB approximation in quantum mechanics.
The other cases such as, e.g., and/or are discontinuous or when an extremum of lies at the integration region's boundary, require special care (see, e.g., and ).
Extensions and generalizations
An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems.
Given a contour C in the complex sphere, a function f defined on that contour and a special point, say infinity, one seeks a function M holomorphic away from the contour C, with prescribed jump across C, and with a given normalization at infinity. If f and hence M are matrices rather than scalars this is a problem that in general does not admit an explicit solution.
An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.
The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of the Russian mathematician Alexander Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, steepest descent contours solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context back in the 80s by Stahl, Gonchar and Rakhmanov).
The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics.
Another extension is the Method of Chester–Friedman–Ursell for coalescing saddle points and uniform asymptotic extensions.
See also
Pearcey integral
Stationary phase approximation
Laplace's method
Notes
References
(Unpublished note, reproduced in Riemann's collected papers.)
Reprinted in Gesammelte Abhandlungen, Vol. 1. Berlin: Springer-Verlag, 1966.
Asymptotic analysis
Perturbation theory | Method of steepest descent | [
"Physics",
"Mathematics"
] | 1,746 | [
"Mathematical analysis",
"Asymptotic analysis",
"Quantum mechanics",
"Perturbation theory"
] |
27,087,481 | https://en.wikipedia.org/wiki/Jeitinho | Jeitinho (, literally "little way") is a Portuguese word to describe a method of finding a way to accomplish something by circumventing or bending the rules or transgressing social conventions. The concept is a deeply ingrained part of Brazilian culture.
Overview
The word "jeitinho" is the diminutive form of jeito, meaning 'way', which comes from the Latin 'jactum'. The usage of 'jeitinho' is derived from the expression dar um jeito, meaning "to find a way". It implies the use of resources at hand, as well as personal connections, and creativity. Como é que ele conseguiu os bilhetes? How did he get the tickets? Ele deu um jeito. He found a way.
Most of the time jeitinho is harmless, used to find creative solutions to nonsensical problems and/or excessive bureaucracy, such as gatecrashing a party to obtain free food and beverages, or making extraneous handshake deals that do not follow exactly what is in the written contracts. Although it is sometimes seen as dishonest or cunning, in reality it comes from the necessity associated with a lack of resources and official help. Many Brazilians have to be creative and invent new, simpler ways to do the things they need in daily life. An associated concept is "gambiarra", an improvised solution to technical emergencies with whatever means are at hand, or to 'jerry-rig', e.g. attaching less-than-ideal materials to something that broke and making it functional again. The difference between "jeitinho" and "gambiarra" is that the former is a deal between individuals, while the latter is about fixing objects and systems.
One way to understand jeitinho is as a recurso de esperteza, which means a resource used by espertos—savvy, cunning, or sly individuals who use common sense and prior knowledge, as well as naturally gifted intelligence in their thought processes. It implies that a person is "street-smart", but not necessarily "book-smart." It typically also connotes opportunism, pragmatism, and using one's networks, with little regard for the law, the state or for persons outside of one's own circle or family.
Scholarly discussion
Brazilian scholar and historian Sérgio Buarque de Holanda connects the concept of jeitinho to Brazil's mixed heritage and Iberian ancestry in his book "Roots of Brazil" (Raízes do Brasil). In this work, jeitinho is tied to the idea that a typical Brazilian is a friendly, cordial man, prone to making initial decisions based on his emotions rather than his reason, and that this feature can be found everywhere in the country, from the highest offices of government to the most common situations of everyday life. Jeitinho is also observed in Rio de Janeiro's carnival industry by the scholar Roberto DaMatta in his book "Carnavais, Malandros e Heróis" (Carnival, Rogues and Heroes. Notre Dame Press). Da Matta sees jeitinho in the creative culture of carnival.
Similarity to other terms
The terms "malandro" and "malandragem", which can be roughly translated as "rogue" and "roguishness", are very similar to the "jeitinho", but these terms imply a greater degree of breaking the rules, as opposed to bending the rules.
Elsewhere in Latin America, similar concepts include viveza criolla in Argentina and Uruguay, juega vivo in Panama and malicia indígena in Colombia.
Similar slang terms are used in Europe. One example is the Hungarian term 'megoldani okosba', which translates literally to 'to solve it the smart way'. In Polish, there is the verb 'kombinować', which carries a similar meaning.
See also
Opportunism
Gérson's law
Corruption in Brazil
Jugaad
References
External links
Jeitinho Land, excerpts from Brazilian Legacies by Robert M. Levine," M. E. Sharpe Publishers", 1997, 212 pp
Brazilian cultural conventions
Human behavior
Philosophy of life
Corruption in Brazil
Portuguese words and phrases | Jeitinho | [
"Biology"
] | 898 | [
"Behavior",
"Human behavior"
] |
21,194,674 | https://en.wikipedia.org/wiki/Polite%20architecture | Polite architecture, or "the Polite" in architectural theory, comprises buildings designed by professional architects to include non-local styles for aesthetically pleasing decorative effect. The term groups most named current architectural styles and can be used to describe many non-vernacular architectural styles. Irreconcilable architectural practices include Functionalism and Brutalism. Common styles often associated with polite architecture include Victorian, Georgian, Gothic and Classical.
Description
Polite architecture is characterised by stylistic and romantic features which have been intentionally incorporated by an architect for affectation. A building of polite design is conceived to make a stylistic statement which goes beyond its functional requirements. Its design is deferential to national or international architectural fashions, styles, and conventions; paying little or no regard to the conventional building practices and materials particular to a locality.
'The polite' is also a concept of architectural theory used to differentiate from 'the vernacular'. Polite architecture acts as a subcategory of architecture that focuses on the aspects of architecture that reinforce the idea of architecture as an advanced, specialized field. Polite architecture places more emphasis on structures designed by formally trained architects, whereas vernacular architecture is typically constructed through direct experience and expresses local ideals and needs. In simpler terms, polite architecture refers to architectural designs whose style was thought out in advance.
Architectural theory
The term is used by architectural historians to contrast with vernacular architecture, which refers to buildings which are constructed from materials and building conventions particular to their locality.
The architectural historian Ronald Brunskill has offered the following definition:
The ultimate in polite architecture will have been designed by a professional architect or one who has acted as such through some other title, such as surveyor or master mason; it will have been designed to follow a national or even an international fashion, style, or set of conventions, towards an aesthetically satisfying result; and aesthetic considerations will have dominated the designer's thought rather than functional demands.
As a theoretical term, the differences between "the polite" and "the vernacular" can be a matter of degree and subjective analysis. Between the extremes of the wholly vernacular and the completely polite, there are buildings which illustrate vernacular and polite content.
The growth of polite architecture
Although originally only accessible to wealthy individuals and institutions, since the industrialisation of the developed world buildings characterised by elements of 'the polite' have become prevalent throughout the building stock of developed countries. The rise in the number of buildings reflecting polite architectural features has been influenced by the expansion of the profession of architecture; by the availability of more artistically amenable and often more resilient man-made building materials for most structural and decorative purposes, such as cement render, decorative bricks, plastics, glass and metals; and by the availability of transport networks capable of delivering materials produced outside of a building's immediate locality.
In 16th-century England, towers and castles included both decorative and symbolic components. Such features were not only used to convey a grand, majestic appearance, but also served as lookout points. Additionally, courtyards were constructed to express the concept of community, being positioned in the centre where activities could be held. Other polite architectural features, such as height and lighting, were employed to create an aesthetic that reflected social class. Specific features that further expressed differences in social groupings included moats, gatehouses, emblems, and crenellations.
The growth of these elements in the late 18th and 19th centuries led to an expansion in the proportion of buildings of polite design, whether because aesthetically minded architects were demanded by choice or retained out of economic convenience. This growth has continued in the 20th and 21st centuries, although it has been nuanced by local policy and by aesthetic demands to incorporate facets of architectural revivalism in many styles of architecture. With advancements in industrialisation and materials, more emphasis could be placed on aesthetics and style.
References
Sources and further reading
Architectural styles
Architectural theory | Polite architecture | [
"Engineering"
] | 789 | [
"Architectural theory",
"Architecture"
] |
21,195,116 | https://en.wikipedia.org/wiki/Darwin%20Core | Darwin Core (often abbreviated to DwC) is an extension of Dublin Core for biodiversity informatics. It is meant to provide a stable standard reference for sharing information on biological diversity (biodiversity). The terms described in this standard are a part of a larger set of vocabularies and technical specifications under development and maintained by Biodiversity Information Standards (TDWG) (formerly the Taxonomic Databases Working Group).
Description
The Darwin Core is a body of standards intended to facilitate the sharing of information about biological diversity. The DwC includes a glossary of terms, and documentation providing reference definitions, examples, and commentary. An overview of the currently adopted terms and concepts can be found in the Darwin Core quick reference guide maintained by TDWG.
The DwC operational unit is primarily based on taxa, their occurrence in nature as documented by observations, specimens, and samples, and related information. Included in the standard are documents describing how these terms are managed, how the set of terms can be extended for new purposes, and how the terms can be used.
Each DwC term includes a definition and discussions meant to promote the consistent use of the terms across applications and disciplines. In other contexts, such terms might be called properties, elements, fields, columns, attributes, or concepts. Though the data types and constraints are not provided in the term definitions, recommendations are made about how to restrict the values where appropriate, for instance by suggesting the use of controlled vocabularies.
DwC standards are versioned and are constantly evolving, and working groups frequently add to the documentation practical examples that discuss, refine, and expand the normative definitions of each term. This approach to documentation allows the standard to adapt to new purposes without disrupting existing applications.
In practice, Darwin Core decouples the definition and semantics of individual terms from application of these terms in different technologies. Darwin Core provides separate guidelines on how to encode the terms as RDF, XML or text files.
The Simple Darwin Core is a specification for one particular way to use the terms and to share data about taxa and their occurrences in a simply-structured way. It is likely what is meant if someone were to suggest "formatting your data according to the Darwin Core".
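As an illustration of such a simply structured dataset, the following sketch writes a single occurrence as one CSV row whose column names are Darwin Core terms. The record values, identifier and file name are invented for this example and are not part of the standard.

```python
# A minimal sketch of one Simple Darwin Core occurrence record written as a
# CSV row. The column names are standard Darwin Core terms; the values, the
# identifier and the file name are invented for this illustration.
import csv

record = {
    "occurrenceID": "urn:example:occurrence:0001",   # hypothetical identifier
    "basisOfRecord": "HumanObservation",
    "scientificName": "Puma concolor",
    "eventDate": "2021-05-04",
    "decimalLatitude": -12.3456,
    "decimalLongitude": -45.6789,
    "country": "Brazil",
}

with open("occurrences.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(record))
    writer.writeheader()      # one column per Darwin Core term
    writer.writerow(record)   # one row per occurrence
```

A spreadsheet or CSV file with one Darwin Core term per column, as above, is the usual shape of a Simple Darwin Core dataset.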
History
Darwin Core was originally created as a Z39.50 profile by the Z39.50 Biology Implementers Group (ZBIG), supported by funding from a USA National Science Foundation award. The name "Darwin Core" was first coined by Allen Allison at the first meeting of the ZBIG held at the University of Kansas in 1998 while commenting on the profile's conceptual similarity with Dublin Core. The Darwin Core profile was later expressed as an XML Schema document for use by the Distributed Generic Information Retrieval (DiGIR) protocol. A TDWG task group was created to revise the Darwin Core, and a ratified metadata standard was officially released on 9 October 2009.
Though ratified as a standard by Biodiversity Information Standards (TDWG) since then, Darwin Core has had numerous previous versions in production usage. The published standard contains a normative term list with the complete history of the versions of terms leading to the current standard.
Key projects using Darwin Core
The Global Biodiversity Information Facility (GBIF)
The Ocean Biogeographic Information System (OBIS)
The Atlas of Living Australia (ALA)
Online Zoological Collections of Australian Museums (OZCAM)
Mammal Networked Information System (MaNIS)
Ornithological Information System (ORNIS)
FishNet 2
VertNet
Canadensys
Sistema Nature 3.0
Encyclopedia of Life
Integrated Digitized Biocollections (iDigBio)
See also
Darwin Core Archive
Data Curation Network Simple Darwin Core for Non-Biologists Primer
References
External links
Darwin Core Quick Reference Guide
Darwin Core Development Site
Official Darwin Core Website
Executive Summary of Darwin Core
Darwin Core Standard Specifications - GitHub repository where DwC is actively maintained
Computational biology
Bioinformatics
Knowledge representation
Interoperability
Metadata standards | Darwin Core | [
"Engineering",
"Biology"
] | 797 | [
"Biological engineering",
"Telecommunications engineering",
"Bioinformatics",
"Interoperability",
"Computational biology"
] |
21,205,195 | https://en.wikipedia.org/wiki/Hardy%20Cross%20method | The Hardy Cross method is an iterative method for determining the flow in pipe network systems where the inputs and outputs are known, but the flow inside the network is unknown.
The method was first published in November 1936 by its namesake, Hardy Cross, a structural engineering professor at the University of Illinois at Urbana–Champaign. The Hardy Cross method is an adaptation of the Moment distribution method, which was also developed by Hardy Cross as a way to determine the forces in statically indeterminate structures.
The introduction of the Hardy Cross method for analyzing pipe flow networks revolutionized municipal water supply design. Before the method was introduced, solving complex pipe systems for distribution was extremely difficult due to the nonlinear relationship between head loss and flow. The method was later made obsolete by computer solving algorithms employing the Newton–Raphson method or other numerical methods that eliminate the need to solve nonlinear systems of equations by hand.
History
In 1930, Hardy Cross published a paper called "Analysis of Continuous Frames by Distributing Fixed-End Moments" in which he described the moment distribution method, which would change the way engineers in the field performed structural analysis. The moment distribution method was used to determine the forces in statically indeterminate structures and allowed for engineers to safely design structures from the 1930s through the 1960s, until the development of computer oriented methods. In November 1936, Cross applied the same geometric method to solving pipe network flow distribution problems, and published a paper called "Analysis of flow in networks of conduits or conductors."
Derivation
The Hardy Cross method is an application of continuity of flow and continuity of potential to iteratively solve for flows in a pipe network. In the case of pipe flow, conservation of flow means that the flow in is equal to the flow out at each junction in the pipe. Conservation of potential means that the total directional head loss along any loop in the system is zero (assuming that a head loss counted against the flow is actually a head gain).
Hardy Cross developed two methods for solving flow networks. Each method starts by maintaining either continuity of flow or potential, and then iteratively solves for the other.
Assumptions
The Hardy Cross method assumes that the flow going in and out of the system is known and that the pipe length, diameter, roughness and other key characteristics are also known or can be assumed. The method also assumes that the relation between flow rate and head loss is known, but the method does not require any particular relation to be used.
In the case of water flow through pipes, a number of methods have been developed to determine the relationship between head loss and flow. The Hardy Cross method allows for any of these relationships to be used.
The general relationship between head loss and flow is:
$h = kQ^n$
where k is the head loss per unit flow and n is the flow exponent. In most design situations the values that make up k, such as pipe length, diameter, and roughness, are taken to be known or assumed and thus the value of k can be determined for each pipe in the network. The values that make up k and the value of n change depending on the relation used to determine head loss. However, all relations are compatible with the Hardy Cross method.
It is also worth noting that the Hardy Cross method can be used to solve simple circuits and other flow-like situations. In the case of simple circuits,
$V = IR$
is equivalent to
$h = kQ^n$.
By setting the head loss coefficient k to the resistance R, the flow rate Q to the current I, and the exponent n to 1, the Hardy Cross method can be used to solve a simple circuit. However, because the relation between the voltage drop and current is linear, the Hardy Cross method is not necessary and the circuit can be solved using non-iterative methods.
Method of balancing heads
The method of balancing heads uses an initial guess that satisfies continuity of flow at each junction and then balances the flows until continuity of potential is also achieved over each loop in the system.
Proof (r denotes k)
The following proof is taken from Hardy Cross's paper, “Analysis of flow in networks of conduits or conductors.”, and can be verified by National Programme on Technology Enhanced Learning Water and Wastewater Engineering page, and Fundamentals of Hydraulic Engineering Systems by Robert J. Houghtalen.
If the initial guess of flow rates in each pipe is correct, the change in head over a loop in the system, $\sum h = \sum rQ^n$, would be equal to zero. However, if the initial guess is not correct, then the change in head will be non-zero and a change in flow, $\Delta Q$, must be applied. The new flow rate, $Q' = Q + \Delta Q$, is the sum of the old flow rate and some change in flow rate such that the change in head over the loop is zero. The sum of the change in head over the new loop will then be $\sum r(Q + \Delta Q)^n$.
The value of $\sum r(Q + \Delta Q)^n$ can be approximated using the Taylor expansion:
$\sum r(Q + \Delta Q)^n = \sum r\left(Q^n + nQ^{n-1}\Delta Q + \tfrac{n(n-1)}{2}Q^{n-2}\Delta Q^2 + \cdots\right)$
For a $\Delta Q$ small compared to $Q$ the additional terms vanish, leaving:
$\sum rQ^n + \sum rnQ^{n-1}\Delta Q = 0$
And solving for $\Delta Q$:
$\Delta Q = -\frac{\sum rQ^n}{\sum rnQ^{n-1}} = -\frac{\sum h}{n\sum \frac{h}{Q}}$
The change in flow that will balance the head over the loop is approximated by this $\Delta Q$. However, this is only an approximation due to the terms that were ignored from the Taylor expansion. The change in head over the loop may not be zero, but it will be smaller than the initial guess. Multiple iterations of finding a new $\Delta Q$ will approximate to the correct solution.
Process
The method is as follows:
Guess the flows in each pipe, making sure that the total in flow is equal to the total out flow at each junction. (The guess doesn't have to be good, but a good guess will reduce the time it takes to find the solution.)
Determine each closed loop in the system.
For each loop, determine the clockwise head losses and counter-clockwise head losses. Head loss in each pipe is calculated using $h = rQ^n$. Clockwise head losses are from flows in the clockwise direction and likewise for counter-clockwise.
Determine the total head loss in each loop, $\sum h$, by subtracting the counter-clockwise head loss from the clockwise head loss.
For each loop, find $\sum \left|\frac{h}{Q}\right|$ without reference to direction (all values should be positive).
The change in flow is equal to $\Delta Q = \frac{\sum h}{n \sum\left|\frac{h}{Q}\right|}$.
If the change in flow is positive, apply it to all pipes of the loop in the counter-clockwise direction. If the change in flow is negative, apply it to all pipes of the loop in the clockwise direction.
Continue from step 3 until the change in flow is within a satisfactory range.
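The following is a minimal sketch, in Python, of the balancing-heads steps just listed. The small two-loop network, its r values and the initial flow guesses are hypothetical illustrative data (they are not the worked example from the figure below, whose table is not reproduced here); the function itself follows steps 3 to 8 above.

```python
# A minimal sketch of the Hardy Cross "balancing heads" iteration.
# The network data at the bottom is hypothetical and purely illustrative.

def hardy_cross(flows, loops, r, n=2.0, tol=1e-6, max_iter=100):
    """flows: dict pipe -> signed flow in the pipe's assumed direction.
    loops: each loop is a list of (pipe, sign) pairs; sign is +1 if the pipe's
           assumed direction runs clockwise around that loop, -1 otherwise.
    r:     dict pipe -> head-loss coefficient, with h = r * Q**n."""
    for _ in range(max_iter):
        worst = 0.0
        for loop in loops:
            # Step 4: total (clockwise minus counter-clockwise) head loss.
            sum_h = sum(s * r[p] * flows[p] * abs(flows[p]) ** (n - 1) for p, s in loop)
            # Step 5: sum of |h/Q| over the loop, without reference to direction.
            sum_hq = sum(r[p] * abs(flows[p]) ** (n - 1) for p, _ in loop)
            dq = sum_h / (n * sum_hq)             # Step 6: loop correction
            for p, s in loop:                     # Step 7: apply the correction
                flows[p] -= s * dq
            worst = max(worst, abs(dq))
        if worst < tol:                           # Step 8: stop when corrections are tiny
            return flows
    return flows

# Hypothetical square network, nodes 1..4: 10 L/s enters at node 1 and leaves
# at node 4; the initial guesses satisfy continuity at every junction.
r = {'1-2': 1.0, '2-4': 5.0, '1-3': 5.0, '3-4': 1.0, '2-3': 3.0}
flows = {'1-2': 5.0, '2-4': 5.0, '1-3': 5.0, '3-4': 5.0, '2-3': 0.0}
loops = [[('1-2', +1), ('2-3', +1), ('1-3', -1)],
         [('2-4', +1), ('3-4', -1), ('2-3', -1)]]
print(hardy_cross(flows, loops, r))
```

Running the sketch drives every loop correction below the tolerance, at which point the head losses around each loop balance.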
Method of balancing flows (section incomplete)
The method of balancing flows uses an initial guess that satisfies continuity of potential over each loop and then balances the flows until continuity of flow is also achieved at each junction.
Advantages of the Hardy Cross method
Simple mathematics
The Hardy Cross method is useful because it relies on only simple mathematics, circumventing the need to solve a system of equations. Without the Hardy Cross methods, engineers would have to solve complex systems of equations with variable exponents that cannot easily be solved by hand.
Self correcting
The Hardy Cross method iteratively corrects for the mistakes in the initial guess used to solve the problem. Subsequent mistakes in calculation are also iteratively corrected. If the method is followed correctly, the proper flow in each pipe can still be found if small mathematical errors are consistently made in the process. As long as the last few iterations are done with attention to detail, the solution will still be correct. In fact, it is possible to intentionally leave off decimals in the early iterations of the method to run the calculations faster.
Example
The Hardy Cross method can be used to calculate the flow distribution in a pipe network. Consider the example of a simple pipe flow network shown at the right. For this example, the in and out flows will be 10 liters per second. We will take n to be 2; the head loss per unit flow, r, and the initial flow guess for each pipe are as follows:
We solve the network by method of balancing heads, following the steps outlined in method process above.
1. The initial guesses are set up so that continuity of flow is maintained at each junction in the network.
2. The loops of the system are identified as loop 1-2-3 and loop 2-3-4.
3. The head losses in each pipe are determined.
For loop 1-2-3, the sum of the clockwise head losses is 25 and the sum of the counter-clockwise head losses is 125.
For loop 2-3-4, the sum of the clockwise head losses is 125 and the sum of the counter-clockwise head losses is 25.
4. The total head loss in loop 1-2-3 is $\sum h = 25 - 125 = -100$. The total head loss in loop 2-3-4 is $\sum h = 125 - 25 = 100$.
5. The value of $\sum\left|\frac{h}{Q}\right|$ is determined for each loop. It is found to be 60 in both loops (due to symmetry), as shown in the figure.
6. The change in flow is found for each loop using the equation $\Delta Q = \frac{\sum h}{n\sum\left|\frac{h}{Q}\right|}$. For loop 1-2-3, the change in flow is equal to $\frac{-100}{2 \times 60} \approx -0.83$ and for loop 2-3-4 the change in flow is equal to $\frac{100}{2 \times 60} \approx 0.83$.
7. The change in flow is applied across the loops. For loop 1-2-3, the change in flow is negative so its absolute value is applied in the clockwise direction. For loop 2-3-4, the change in flow is positive so its absolute value is applied in the counter-clockwise direction. For pipe 2-3, which is in both loops, the changes in flow are cumulative.
The process then repeats from step 3 until the change in flow becomes sufficiently small or goes to zero.
3. The total head loss in loop 1-2-3 is now $\sum h = 0$.
Notice that the clockwise head loss is equal to the counter-clockwise head loss. This means that the flow in this loop is balanced and the flow rates are correct. The total head loss in loop 2-3-4 will also be balanced (again due to symmetry).
In this case, the method found the correct solution in one iteration. For other networks, it may take multiple iterations until the flows in the pipes are correct or approximately correct.
See also
Pipe network analysis
Moment distribution method
References
Hydraulic engineering | Hardy Cross method | [
"Physics",
"Engineering",
"Environmental_science"
] | 2,035 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
6,606,771 | https://en.wikipedia.org/wiki/Bis%28trimethylsilyl%29amine | Bis(trimethylsilyl)amine (also known as hexamethyldisilazane and HMDS) is an organosilicon compound with the molecular formula [(CH3)3Si]2NH. The molecule is a derivative of ammonia with trimethylsilyl groups in place of two hydrogen atoms. An electron diffraction study shows the silicon-nitrogen bond length (173.5 pm) and the Si-N-Si bond angle (125.5°) to be similar to those of disilazane (in which methyl groups are replaced by hydrogen atoms), suggesting that steric factors play little role in regulating the angle in this case. This colorless liquid is a reagent and a precursor to bases that are popular in organic synthesis and organometallic chemistry. Additionally, HMDS is also increasingly used as a molecular precursor in chemical vapor deposition techniques to deposit silicon carbonitride thin films or coatings.
Synthesis and derivatives
Bis(trimethylsilyl)amine is synthesized by treatment of trimethylsilyl chloride with ammonia:
2 (CH3)3SiCl + 3 NH3 → [(CH3)3Si]2NH + 2 NH4Cl
Ammonium nitrate together with triethylamine can be used instead. This method is also useful for 15N isotopic enrichment of HMDS.
Alkali metal bis(trimethylsilyl)amides result from the deprotonation of bis(trimethylsilyl)amine. For example, lithium bis(trimethylsilyl)amide (LiHMDS) is prepared using n-butyllithium:
[(CH3)3Si]2NH + BuLi → [(CH3)3Si]2NLi + BuH
LiHMDS and other similar derivatives: sodium bis(trimethylsilyl)amide (NaHMDS) and potassium bis(trimethylsilyl)amide (KHMDS) are used as a non-nucleophilic bases in synthetic organic chemistry.
Use as reagent
Hexamethyldisilazane is employed as a reagent in many organic reactions:
1) HMDS is used as a reagent in condensation reactions of heterocyclic compounds such as in the microwave synthesis of a derivative of xanthine:
2) HMDS-mediated trimethylsilylation of alcohols, thiols, amines and amino acids, either to install protective groups or to prepare intermediary organosilicon compounds, has been found to be very efficient and has replaced the TMSCl reagent.
Silylation of glutamic acid with excess hexamethyldisilazane and catalytic TMSCl in either refluxing xylene or acetonitrile followed by dilution with alcohol (methanol or ethanol) yields the derived lactam pyroglutamic acid in good yield.
HMDS in the presence of catalytic iodine facilitates the silylation of alcohols in excellent yields.
3) HMDS can be used to silylate laboratory glassware and make it hydrophobic, or automobile glass, just as Rain-X does.
4) In gas chromatography, HMDS can be used to silylate OH groups of organic compounds to increase volatility, this way enabling GC-analysis of chemicals that are otherwise non-volatile.
Other uses
In photolithography, HMDS is often used as an adhesion promoter for photoresists. Best results are obtained by applying HMDS from the gas phase on heated substrates.
In electron microscopy, HMDS can be used as an alternative to critical point drying during sample preparation.
In pyrolysis-gas chromatography-mass spectrometry, HMDS is added to the analyte to create silylated diagnostic products during pyrolysis, in order to enhance detectability of compounds with polar functional groups.
In plasma-enhanced chemical vapor deposition (PECVD), HMDS is used as a molecular precursor as a replacement to highly flammable and corrosive gasses like SiH4, CH4, NH3 as it can be easily handled. HMDS is used in conjunction with a plasma of various gases such as argon, helium and nitrogen to deposit SiCN thin films/coatings with excellent mechanical, optical and electronic properties.
See also
Hexamethyldisiloxane
Metal bis(trimethylsilyl)amides
References
Amines
Trimethylsilyl compounds
Reagents for organic chemistry | Bis(trimethylsilyl)amine | [
"Chemistry"
] | 942 | [
"Functional groups",
"Trimethylsilyl compounds",
"Reagents for organic chemistry",
"Amines",
"Bases (chemistry)"
] |
6,612,076 | https://en.wikipedia.org/wiki/Vincamine | Vincamine is a monoterpenoid indole alkaloid found in the leaves of Vinca minor (lesser periwinkle), comprising about 25–65% of its indole alkaloids by weight. It can also be synthesized from related alkaloids.
Uses
Vincamine is sold in Europe as a prescription medicine for the treatment of primary degenerative and vascular dementia. In the United States, it is permitted to be sold as a dietary supplement when labeled for use in adults for six months or less. Most common preparations are in the sustained release tablet forms.
Chemistry
Synthesis
Tabersonine can be used for semi-synthesis of vincamine.
Derivatives
Vinpocetine is a synthetic derivative of vincamine used for cerebrovascular diseases and as a dietary supplement. Vincamine derivatives have also been studied as anti-addictive and antidiabetic agents.
Research
It may have nootropic effects. It has also been investigated as a novel anticancer drug.
Concerns over long-term use have been documented by the US National Toxicology Program.
See also
Apparicine
Conophylline
References
External links
Tertiary alcohols
Tryptamine alkaloids
Methyl esters
Quinolizidine alkaloids
Vinca alkaloids
Heterocyclic compounds with 5 rings | Vincamine | [
"Chemistry"
] | 267 | [
"Quinolizidine alkaloids",
"Alkaloids by chemical classification",
"Tryptamine alkaloids"
] |
6,612,077 | https://en.wikipedia.org/wiki/Lindel%C3%B6f%27s%20lemma | In mathematics, Lindelöf's lemma is a simple but useful lemma in topology on the real line, named for the Finnish mathematician Ernst Leonard Lindelöf.
Statement of the lemma
Let the real line have its standard topology. Then every open subset of the real line is a countable union of open intervals.
Generalized Statement
Lindelöf's lemma is also known as the statement that every open cover in a second-countable space has a countable subcover (Kelley 1955:49). This means that every second-countable space is also a Lindelöf space.
Proof of the generalized statement
Let $\mathcal{B}$ be a countable basis of $X$. Consider an open cover and let $\mathcal{F} = \bigcup_{\alpha} U_{\alpha}$ be its union. To get prepared for the following deduction, we define two sets for convenience: $\mathcal{B}_{\alpha} := \{\beta \in \mathcal{B} : \beta \subseteq U_{\alpha}\}$ and $\mathcal{B}' := \bigcup_{\alpha} \mathcal{B}_{\alpha}$.
A straightforward but essential observation is that $U_{\alpha} = \bigcup_{\beta \in \mathcal{B}_{\alpha}} \beta$, which follows from the definition of a base. Therefore, we can get that
$\mathcal{F} = \bigcup_{\alpha} U_{\alpha} = \bigcup_{\alpha} \bigcup_{\beta \in \mathcal{B}_{\alpha}} \beta = \bigcup_{\beta \in \mathcal{B}'} \beta$
where $\mathcal{B}' \subseteq \mathcal{B}$, and is therefore at most countable. Next, by construction, for each $\beta \in \mathcal{B}'$ there is some member $U_{\beta}$ of the cover such that $\beta \subseteq U_{\beta}$. We can therefore write
$\mathcal{F} = \bigcup_{\beta \in \mathcal{B}'} U_{\beta}$
completing the proof.
References
J.L. Kelley (1955), General Topology, van Nostrand.
M.A. Armstrong (1983), Basic Topology, Springer.
Covering lemmas
Lemmas
Topology | Lindelöf's lemma | [
"Physics",
"Mathematics"
] | 258 | [
"Mathematical theorems",
"Covering lemmas",
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Mathematical problems",
"Lemmas"
] |
6,612,581 | https://en.wikipedia.org/wiki/Hilbert%20scheme | In algebraic geometry, a branch of mathematics, a Hilbert scheme is a scheme that is the parameter space for the closed subschemes of some projective space (or a more general projective scheme), refining the Chow variety. The Hilbert scheme is a disjoint union of projective subschemes corresponding to Hilbert polynomials. The basic theory of Hilbert schemes was developed by Alexander Grothendieck. Hironaka's example shows that non-projective varieties need not have Hilbert schemes.
Hilbert scheme of projective space
The Hilbert scheme of classifies closed subschemes of projective space in the following sense: For any locally Noetherian scheme , the set of -valued points
of the Hilbert scheme is naturally isomorphic to the set of closed subschemes of that are flat over . The closed subschemes of that are flat over can informally be thought of as the families of subschemes of projective space parameterized by . The Hilbert scheme breaks up as a disjoint union of pieces corresponding to the Hilbert scheme of the subschemes of projective space with Hilbert polynomial . Each of these pieces is projective over .
Construction as a determinantal variety
Grothendieck constructed the Hilbert scheme of -dimensional projective space as a subscheme of a Grassmannian defined by the vanishing of various determinants. Its fundamental property is that for a scheme , it represents the functor whose -valued points are the closed subschemes of that are flat over .
If is a subscheme of -dimensional projective space, then corresponds to a graded ideal of the polynomial ring in variables, with graded pieces . For sufficiently large all higher cohomology groups of with coefficients in vanish. Using the exact sequencewe have has dimension , where is the Hilbert polynomial of projective space. This can be shown by tensoring the exact sequence above by the locally flat sheaves , giving an exact sequence where the latter two terms have trivial cohomology, implying the triviality of the higher cohomology of . Note that we are using the equality of the Hilbert polynomial of a coherent sheaf with the Euler-characteristic of its sheaf cohomology groups.
Pick a sufficiently large value of . The -dimensional space is a subspace of the -dimensional space , so represents a point of the Grassmannian . This will give an embedding of the piece of the Hilbert scheme corresponding to the Hilbert polynomial into this Grassmannian.
It remains to describe the scheme structure on this image, in other words to describe enough elements for the ideal corresponding to it. Enough such elements are given by the conditions that the map has rank at most for all positive , which is equivalent to the vanishing of various determinants. (A more careful analysis shows that it is enough just to take .)
Universality
Given a closed subscheme over a field with Hilbert polynomial , the Hilbert scheme has a universal subscheme flat over such that
The fibers over closed points are closed subschemes of . For denote this point as .
is universal with respect to all flat families of subschemes of having Hilbert polynomial . That is, given a scheme and a flat family , there is a unique morphism such that .
Tangent space
The tangent space of the point is given by the global sections of the normal bundle ; that is,
Unobstructedness of complete intersections
For local complete intersections such that , the point is smooth. This implies every deformation of in is unobstructed.
Dimension of tangent space
In the case , the dimension of at is greater than or equal to .
In addition to these properties, Macaulay determined for which polynomials the Hilbert scheme is non-empty, and Hartshorne showed that if it is non-empty then it is linearly connected. So two subschemes of projective space are in the same connected component of the Hilbert scheme if and only if they have the same Hilbert polynomial.
Hilbert schemes can have bad singularities, such as irreducible components that are non-reduced at all points. They can also have irreducible components of unexpectedly high dimension. For example, one might expect the Hilbert scheme of points (more precisely dimension 0, length subschemes) of a scheme of dimension to have dimension , but if its irreducible components can have much larger dimension.
Functorial interpretation
There is an alternative interpretation of the Hilbert scheme which leads to a generalization of relative Hilbert schemes parameterizing subschemes of a relative scheme. For a fixed base scheme , let and letbe the functor sending a relative scheme to the set of isomorphism classes of the setwhere the equivalence relation is given by the isomorphism classes of . This construction is functorial by taking pullbacks of families. Given , there is a family over .
Representability for projective maps
If the structure map is projective, then this functor is represented by the Hilbert scheme constructed above. Generalizing this to the case of maps of finite type requires the technology of algebraic spaces developed by Artin.
Relative Hilbert scheme for maps of algebraic spaces
In its greatest generality, the Hilbert functor is defined for a finite type map of algebraic spaces defined over a scheme . Then, the Hilbert functor is defined as
sending T to
.
This functor is not representable by a scheme, but by an algebraic space. Also, if , and is a finite type map of schemes, their Hilbert functor is represented by an algebraic space.
Examples of Hilbert schemes
Fano schemes of hypersurfaces
One of the motivating examples for the investigation of the Hilbert scheme in general was the Fano scheme of a projective scheme. Given a subscheme of degree , there is a scheme in parameterizing where is a -plane in , meaning it is a degree one embedding of . For smooth surfaces in of degree , the non-empty Fano schemes are smooth and zero-dimensional. This is because lines on smooth surfaces have negative self-intersection.
Hilbert scheme of points
Another common set of examples are the Hilbert schemes of -points of a scheme , typically denoted . For a Riemann surface X, . For there is a nice geometric interpretation where the boundary loci describing the intersection of points can be thought of parametrizing points along with their tangent vectors. For example, is the blowup of the diagonal modulo the symmetric action.
Degree d hypersurfaces
The Hilbert scheme of degree k hypersurfaces in is given by the projectivization . For example, the Hilbert scheme of degree 2 hypersurfaces in is with the universal hypersurface given by
where the underlying ring is bigraded.
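As a concrete illustration of this projectivization (an added worked example; the notation is chosen here rather than fixed by the article), consider plane conics, i.e. degree 2 hypersurfaces in $\mathbb{P}^2$:
\[
\operatorname{Hilb}^{2t+1}(\mathbb{P}^2) \;\cong\; \mathbb{P}\bigl(\Gamma(\mathcal{O}_{\mathbb{P}^2}(2))\bigr) \;\cong\; \mathbb{P}^5,
\]
with universal hypersurface the incidence variety
\[
\mathcal{U} \;=\; \bigl\{\, ([a_0:\cdots:a_5],\,[x:y:z]) \;:\; a_0x^2+a_1xy+a_2xz+a_3y^2+a_4yz+a_5z^2=0 \,\bigr\} \;\subset\; \mathbb{P}^5\times\mathbb{P}^2,
\]
which is flat over $\mathbb{P}^5$; every plane conic, smooth or degenerate, has Hilbert polynomial $2t+1$.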
Hilbert scheme of curves and moduli of curves
For a fixed genus $g$ algebraic curve $C$, the tri-tensored dualizing sheaf $\omega_C^{\otimes 3}$ is globally generated, meaning its Euler characteristic is determined by the dimension of the global sections, so
$\dim \Gamma(\omega_C^{\otimes 3}) = \deg(\omega_C^{\otimes 3}) - g + 1 = 6(g-1) - (g-1) = 5g - 5$.
The dimension of this vector space is $5g-5$, hence the global sections of $\omega_C^{\otimes 3}$ determine an embedding into $\mathbb{P}^{5g-6}$ for every genus $g$ curve. Using the Riemann-Roch formula, the associated Hilbert polynomial can be computed as
$H_g(\lambda) = \deg(\omega_C^{\otimes 3\lambda}) - g + 1 = 6(g-1)\lambda - g + 1 = (6\lambda - 1)(g - 1)$.
Then, the Hilbert scheme
parameterizes all genus g curves. Constructing this scheme is the first step in the construction of the moduli stack of algebraic curves. The other main technical tool are GIT quotients, since this moduli space is constructed as the quotient
,
where is the sublocus of smooth curves in the Hilbert scheme.
Hilbert scheme of points on a manifold
"Hilbert scheme" sometimes refers to the punctual Hilbert scheme of 0-dimensional subschemes on a scheme. Informally this can be thought of as something like finite collections of points on a scheme, though this picture can be very misleading when several points coincide.
There is a Hilbert–Chow morphism from the reduced Hilbert scheme of points to the Chow variety of cycles taking any 0-dimensional scheme to its associated 0-cycle. .
The Hilbert scheme of points on is equipped with a natural morphism to an -th symmetric product of . This morphism is birational for of dimension at most 2. For of dimension at least 3 the morphism is not birational for large : the Hilbert scheme is in general reducible and has components of dimension much larger than that of the symmetric product.
The Hilbert scheme of points on a curve (a dimension-1 complex manifold) is isomorphic to a symmetric power of . It is smooth.
The Hilbert scheme of points on a surface is also smooth (Grothendieck). If , it is obtained from by blowing up the diagonal and then dividing by the action induced by . This was used by Mark Haiman in his proof of the positivity of the coefficients of some Macdonald polynomials.
The Hilbert scheme of a smooth manifold of dimension 3 or more is usually not smooth.
Hilbert schemes and hyperkähler geometry
Let be a complex Kähler surface with (K3 surface or a torus). The canonical bundle of is trivial, as follows from the Kodaira classification of surfaces. Hence admits a holomorphic symplectic form. It was observed by Akira Fujiki (for ) and Arnaud Beauville that is also holomorphically symplectic. This is not very difficult to see, e.g., for . Indeed, is a blow-up of a symmetric square of . Singularities of are locally isomorphic to . The blow-up of is , and this space is symplectic. This is used to show that the symplectic form is naturally extended to the smooth part of the exceptional divisors of . It is extended to the rest of by Hartogs' principle.
A holomorphically symplectic, Kähler manifold is hyperkähler, as follows from the Calabi–Yau theorem. Hilbert schemes of points on the K3 surface and on a 4-dimensional torus give two series of examples of hyperkähler manifolds: a Hilbert scheme of points on K3 and a generalized Kummer surface.
See also
Quot scheme
Castelnuovo–Mumford regularity
Matsusaka's big theorem
Moduli of algebraic curves
Moduli space
Hilbert modular surface
Siegel modular variety
References
Examples and applications
Bott's formula and enumerative geometry
The Number of Twisted Cubics on a Quintic Threefold
Rational curves on Calabi–Yau threefolds: Verifying mirror symmetry predictions
External links
Scheme theory
Algebraic geometry
Differential geometry
Moduli theory | Hilbert scheme | [
"Mathematics"
] | 2,108 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
6,612,596 | https://en.wikipedia.org/wiki/Hilbert%20series%20and%20Hilbert%20polynomial | In commutative algebra, the Hilbert function, the Hilbert polynomial, and the Hilbert series of a graded commutative algebra finitely generated over a field are three strongly related notions which measure the growth of the dimension of the homogeneous components of the algebra.
These notions have been extended to filtered algebras, and graded or filtered modules over these algebras, as well as to coherent sheaves over projective schemes.
The typical situations where these notions are used are the following:
The quotient by a homogeneous ideal of a multivariate polynomial ring, graded by the total degree.
The quotient by an ideal of a multivariate polynomial ring, filtered by the total degree.
The filtration of a local ring by the powers of its maximal ideal. In this case the Hilbert polynomial is called the Hilbert–Samuel polynomial.
The Hilbert series of an algebra or a module is a special case of the Hilbert–Poincaré series of a graded vector space.
The Hilbert polynomial and Hilbert series are important in computational algebraic geometry, as they are the easiest known way for computing the dimension and the degree of an algebraic variety defined by explicit polynomial equations. In addition, they provide useful invariants for families of algebraic varieties because a flat family has the same Hilbert polynomial over any closed point . This is used in the construction of the Hilbert scheme and Quot scheme.
Definitions and main properties
Consider a finitely generated graded commutative algebra $R$ over a field $K$, which is finitely generated by elements of positive degree. This means that
$R = \bigoplus_{i \ge 0} R_i$ and that $R_0 = K$.
The Hilbert function
$HF_R : n \mapsto \dim_K R_n$
maps the integer $n$ to the dimension of the $K$-vector space $R_n$. The Hilbert series, which is called Hilbert–Poincaré series in the more general setting of graded vector spaces, is the formal series
$HS_R(t) = \sum_{n \ge 0} \dim_K(R_n)\, t^n.$
If $R$ is generated by $h$ homogeneous elements of positive degrees $d_1, \ldots, d_h$, then the sum of the Hilbert series is a rational fraction
$HS_R(t) = \frac{Q(t)}{\prod_{i=1}^{h} (1 - t^{d_i})},$
where $Q$ is a polynomial with integer coefficients.
If $R$ is generated by elements of degree 1 then the sum of the Hilbert series may be rewritten as
$HS_R(t) = \frac{P(t)}{(1 - t)^{\delta}},$
where $P$ is a polynomial with integer coefficients, and $\delta$ is the Krull dimension of $R$.
In this case the series expansion of this rational fraction is
$HS_R(t) = P(t)\,(1-t)^{-\delta} = P(t) \sum_{n \ge 0} \binom{n + \delta - 1}{\delta - 1} t^n,$
where
$\binom{n + \delta - 1}{\delta - 1}$
is the binomial coefficient for $n > -\delta$, and is 0 otherwise.
If
$P(t) = \sum_{i} a_i t^i,$
the coefficient of $t^n$ in $HS_R(t)$ is thus
$HF_R(n) = \sum_{i} a_i \binom{n - i + \delta - 1}{\delta - 1}.$
For $n \ge \deg P - \delta + 1$, the term of index $i$ in this sum is a polynomial in $n$ of degree $\delta - 1$ with leading coefficient $a_i/(\delta - 1)!$. This shows that there exists a unique polynomial with rational coefficients which is equal to $HF_R(n)$ for $n$ large enough. This polynomial is the Hilbert polynomial, and has the form
$HP_R(n) = \frac{P(1)}{(\delta - 1)!}\, n^{\delta - 1} + \text{terms of lower degree in } n.$
The least $n_0$ such that $HP_R(n) = HF_R(n)$ for $n \ge n_0$ is called the Hilbert regularity. It may be lower than $\deg P - \delta + 1$.
The Hilbert polynomial is a numerical polynomial, since the dimensions are integers, but the polynomial almost never has integer coefficients .
All these definitions may be extended to finitely generated graded modules over , with the only difference that a factor appears in the Hilbert series, where is the minimal degree of the generators of the module, which may be negative.
The Hilbert function, the Hilbert series and the Hilbert polynomial of a filtered algebra are those of the associated graded algebra.
The Hilbert polynomial of a projective variety in is defined as the Hilbert polynomial of the homogeneous coordinate ring of .
Graded algebra and polynomial rings
Polynomial rings and their quotients by homogeneous ideals are typical graded algebras. Conversely, if is a graded algebra generated over the field by homogeneous elements of degree 1, then the map which sends onto defines an homomorphism of graded rings from onto . Its kernel is a homogeneous ideal and this defines an isomorphism of graded algebra between and .
Thus, the graded algebras generated by elements of degree 1 are exactly, up to an isomorphism, the quotients of polynomial rings by homogeneous ideals. Therefore, the remainder of this article will be restricted to the quotients of polynomial rings by ideals.
Properties of Hilbert series
Additivity
Hilbert series and Hilbert polynomial are additive relative to exact sequences. More precisely, if
$0 \longrightarrow A \longrightarrow B \longrightarrow C \longrightarrow 0$
is an exact sequence of graded or filtered modules, then we have
$HS_B = HS_A + HS_C$
and
$HP_B = HP_A + HP_C.$
This follows immediately from the same property for the dimension of vector spaces.
Quotient by a non-zero divisor
Let $R$ be a graded algebra and $f$ a homogeneous element of degree $d$ in $R$ which is not a zero divisor. Then we have
$HS_{R/(f)}(t) = (1 - t^d)\, HS_R(t).$
It follows from the additivity on the exact sequence
$0 \longrightarrow R^{[d]} \stackrel{f}{\longrightarrow} R \longrightarrow R/(f) \longrightarrow 0,$
where the arrow labeled $f$ is the multiplication by $f$, and $R^{[d]}$ is the graded module which is obtained from $R$ by shifting the degrees by $d$, in order that the multiplication by $f$ has degree 0. This implies that
$HS_{R^{[d]}}(t) = t^d\, HS_R(t).$
Hilbert series and Hilbert polynomial of a polynomial ring
The Hilbert series of the polynomial ring $R_n = K[x_1, \ldots, x_n]$ in $n$ indeterminates is
$HS_{R_n}(t) = \frac{1}{(1 - t)^n}.$
It follows that the Hilbert polynomial is
$HP_{R_n}(k) = \binom{k + n - 1}{n - 1} = \frac{(k+1)\cdots(k+n-1)}{(n-1)!}.$
The proof that the Hilbert series has this simple form is obtained by applying recursively the previous formula for the quotient by a non zero divisor (here $x_n$) and remarking that $K[x_1, \ldots, x_n]/(x_n) = K[x_1, \ldots, x_{n-1}]$ and $HS_K(t) = 1$.
Shape of the Hilbert series and dimension
A graded algebra generated by homogeneous elements of degree 1 has Krull dimension zero if the maximal homogeneous ideal, that is the ideal generated by the homogeneous elements of degree 1, is nilpotent. This implies that the dimension of as a -vector space is finite and the Hilbert series of is a polynomial such that is equal to the dimension of as a -vector space.
If the Krull dimension of is positive, there is a homogeneous element of degree one which is not a zero divisor (in fact almost all elements of degree one have this property). The Krull dimension of is the Krull dimension of minus one.
The additivity of Hilbert series shows that . Iterating this a number of times equal to the Krull dimension of , we get eventually an algebra of dimension 0 whose Hilbert series is a polynomial . This show that the Hilbert series of is
where the polynomial is such that and is the Krull dimension of .
This formula for the Hilbert series implies that the degree of the Hilbert polynomial is , and that its leading coefficient is .
Degree of a projective variety and Bézout's theorem
The Hilbert series allows us to compute the degree of an algebraic variety as the value at 1 of the numerator of the Hilbert series. This provides also a rather simple proof of Bézout's theorem.
For showing the relationship between the degree of a projective algebraic set and the Hilbert series, consider a projective algebraic set , defined as the set of the zeros of a homogeneous ideal , where is a field, and let be the ring of the regular functions on the algebraic set.
In this section, one does not need irreducibility of algebraic sets nor primality of ideals. Also, as Hilbert series are not changed by extending the field of coefficients, the field is supposed, without loss of generality, to be algebraically closed.
The dimension of is equal to the Krull dimension minus one of , and the degree of is the number of points of intersection, counted with multiplicities, of with the intersection of hyperplanes in general position. This implies the existence, in , of a regular sequence of homogeneous polynomials of degree one. The definition of a regular sequence implies the existence of exact sequences
for This implies that
where is the numerator of the Hilbert series of .
The ring has Krull dimension one, and is the ring of regular functions of a projective algebraic set of dimension 0 consisting of a finite number of points, which may be multiple points. As belongs to a regular sequence, none of these points belong to the hyperplane of equation The complement of this hyperplane is an affine space that contains This makes an affine algebraic set, which has as its ring of regular functions. The linear polynomial is not a zero divisor in and one has thus an exact sequence
which implies that
Here we are using Hilbert series of filtered algebras, and the fact that the Hilbert series of a graded algebra is also its Hilbert series as filtered algebra.
Thus is an Artinian ring, which is a -vector space of dimension , and Jordan–Hölder theorem may be used for proving that is the degree of the algebraic set . In fact, the multiplicity of a point is the number of occurrences of the corresponding maximal ideal in a composition series.
For proving Bézout's theorem, one may proceed similarly. If is a homogeneous polynomial of degree , which is not a zero divisor in , the exact sequence
shows that
Looking on the numerators this proves the following generalization of Bézout's theorem:
Theorem - If is a homogeneous polynomial of degree , which is not a zero divisor in , then the degree of the intersection of with the hypersurface defined by is the product of the degree of by
In a more geometrical form, this may restated as:
Theorem - If a projective hypersurface of degree does not contain any irreducible component of an algebraic set of degree , then the degree of their intersection is .
The usual Bézout's theorem is easily deduced by starting from a hypersurface, and intersecting it with other hypersurfaces, one after the other.
Complete intersection
A projective algebraic set is a complete intersection if its defining ideal is generated by a regular sequence. In this case, there is a simple explicit formula for the Hilbert series.
Let $f_1, \ldots, f_k$ be homogeneous polynomials in $R = K[x_1, \ldots, x_n]$, of respective degrees $d_1, \ldots, d_k$, forming a regular sequence. Setting $R_i = R/(f_1, \ldots, f_i)$, one has the following exact sequences
$0 \longrightarrow R_{i-1}^{[d_i]} \stackrel{f_i}{\longrightarrow} R_{i-1} \longrightarrow R_i \longrightarrow 0.$
The additivity of Hilbert series implies thus
$HS_{R_i}(t) = (1 - t^{d_i})\, HS_{R_{i-1}}(t).$
A simple recursion gives
$HS_{R_k}(t) = \frac{\prod_{i=1}^{k} (1 - t^{d_i})}{(1 - t)^n} = \frac{\prod_{i=1}^{k} (1 + t + \cdots + t^{d_i - 1})}{(1 - t)^{n - k}}.$
This shows that the complete intersection defined by a regular sequence of $k$ polynomials has a codimension of $k$, and that its degree is the product of the degrees of the polynomials in the sequence.
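As an added numerical illustration of this formula (the specific case of two quadrics is chosen here for concreteness): for a complete intersection of two quadric hypersurfaces in $\mathbb{P}^3$, so $n = 4$ variables and $d_1 = d_2 = 2$,
\[
HS(t) \;=\; \frac{(1-t^2)^2}{(1-t)^4} \;=\; \frac{(1+t)^2}{(1-t)^2},
\]
whose numerator $(1+t)^2$ evaluates to $4 = 2 \cdot 2$ at $t = 1$: the intersection has codimension 2 (a curve in $\mathbb{P}^3$) and degree 4.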
Relation with free resolutions
Every graded module over a graded regular ring has a graded free resolution because of the Hilbert syzygy theorem, meaning there exists an exact sequence
where the are graded free modules, and the arrows are graded linear maps of degree zero.
The additivity of Hilbert series implies that
If is a polynomial ring, and if one knows the degrees of the basis elements of the then the formulas of the preceding sections allow deducing from In fact, these formulas imply that, if a graded free module has a basis of homogeneous elements of degrees then its Hilbert series is
These formulas may be viewed as a way for computing Hilbert series. This is rarely the case, as, with the known algorithms, the computation of the Hilbert series and the computation of a free resolution start from the same Gröbner basis, from which the Hilbert series may be directly computed with a computational complexity which is not higher than that the complexity of the computation of the free resolution.
Computation of Hilbert series and Hilbert polynomial
The Hilbert polynomial is easily deducible from the Hilbert series (see above). This section describes how the Hilbert series may be computed in the case of a quotient of a polynomial ring, filtered or graded by the total degree.
Thus let K a field, be a polynomial ring and I be an ideal in R. Let H be the homogeneous ideal generated by the homogeneous parts of highest degree of the elements of I. If I is homogeneous, then H=I. Finally let B be a Gröbner basis of I for a monomial ordering refining the total degree partial ordering and G the (homogeneous) ideal generated by the leading monomials of the elements of B.
The computation of the Hilbert series is based on the fact that the filtered algebra R/I and the graded algebras R/H and R/G have the same Hilbert series.
Thus the computation of the Hilbert series is reduced, through the computation of a Gröbner basis, to the same problem for an ideal generated by monomials, which is usually much easier than the computation of the Gröbner basis. The computational complexity of the whole computation depends mainly on the regularity, which is the degree of the numerator of the Hilbert series. In fact the Gröbner basis may be computed by linear algebra over the polynomials of degree bounded by the regularity.
The computation of Hilbert series and Hilbert polynomials are available in most computer algebra systems. For example in both Maple and Magma these functions are named HilbertSeries and HilbertPolynomial.
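As a sketch of how such a computation can be organised once a Gröbner basis has reduced the problem to a monomial ideal (this is an added illustration, not one of the named built-in functions): for an ideal generated by monomials, the numerator of the Hilbert series of the quotient can be obtained by inclusion–exclusion over the generators. The three generators below are an arbitrary example in three variables.

```python
# Hilbert series of K[x_1,...,x_n]/I for a *monomial* ideal I, computed by
# inclusion-exclusion over the generators. The generators below
# (x^2, x*y, y^3 in K[x, y, z]) are an arbitrary example.
from itertools import combinations
import sympy as sp

t = sp.symbols('t')
n = 3                                        # number of variables
gens = [(2, 0, 0), (1, 1, 0), (0, 3, 0)]     # exponent vectors of the generators

numerator = sp.Integer(0)
for r in range(len(gens) + 1):
    for subset in combinations(gens, r):
        # total degree of the lcm of the monomials in the subset (0 for the empty set)
        d = sum(max((m[i] for m in subset), default=0) for i in range(n))
        numerator += (-1) ** len(subset) * t ** d

hilbert_series = sp.cancel(numerator / (1 - t) ** n)
print(sp.factor(numerator))    # numerator N(t), with HS(t) = N(t) / (1 - t)^n
print(hilbert_series)          # equals (1 + t)^2 / (1 - t) for this example
```

For this example the numerator factors as $(1-t^2)^2$, so the Hilbert series is $(1+t)^2/(1-t)$; accordingly the Hilbert function equals 4 in every degree at least 2.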
Generalization to coherent sheaves
In algebraic geometry, graded rings generated by elements of degree 1 produce projective schemes by Proj construction while finitely generated graded modules correspond to coherent sheaves. If is a coherent sheaf over a projective scheme X, we define the Hilbert polynomial of as a function , where χ is the Euler characteristic of coherent sheaf, and a Serre twist. The Euler characteristic in this case is a well-defined number by Grothendieck's finiteness theorem.
This function is indeed a polynomial. For large m it agrees with dim by Serre's vanishing theorem. If M is a finitely generated graded module and the associated coherent sheaf the two definitions of Hilbert polynomial agree.
Graded free resolutions
Since the category of coherent sheaves on a projective variety is equivalent to the category of graded-modules modulo a finite number of graded-pieces, we can use the results in the previous section to construct Hilbert polynomials of coherent sheaves. For example, a complete intersection of multi-degree has the resolution
See also
Castelnuovo–Mumford regularity
Hilbert scheme
Quot scheme
Citations
References
Commutative algebra
Algebraic geometry | Hilbert series and Hilbert polynomial | [
"Mathematics"
] | 2,665 | [
"Fields of abstract algebra",
"Commutative algebra",
"Algebraic geometry"
] |
6,613,227 | https://en.wikipedia.org/wiki/Friedrichs%27s%20inequality | In mathematics, Friedrichs's inequality is a theorem of functional analysis, due to Kurt Friedrichs. It places a bound on the Lp norm of a function using Lp bounds on the weak derivatives of the function and the geometry of the domain, and can be used to show that certain norms on Sobolev spaces are equivalent. Friedrichs's inequality generalizes the Poincaré–Wirtinger inequality, which deals with the case k = 1.
Statement of the inequality
Let $\Omega$ be a bounded subset of Euclidean space $\mathbb{R}^n$ with diameter $d$. Suppose that $u : \Omega \to \mathbb{R}$ lies in the Sobolev space $W_0^{k,p}(\Omega)$, i.e., $u \in W^{k,p}(\Omega)$ and the trace of $u$ on the boundary $\partial\Omega$ is zero. Then
$\| u \|_{L^p(\Omega)} \leq d^k \left( \sum_{|\alpha| = k} \| \mathrm{D}^{\alpha} u \|_{L^p(\Omega)}^p \right)^{1/p}.$
In the above
$\| \cdot \|_{L^p(\Omega)}$ denotes the Lp norm;
$\alpha = (\alpha_1, \ldots, \alpha_n)$ is a multi-index with norm $|\alpha| = \alpha_1 + \cdots + \alpha_n$;
$\mathrm{D}^{\alpha} u$ is the mixed partial derivative
$\mathrm{D}^{\alpha} u = \frac{\partial^{|\alpha|} u}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}.$
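For orientation, an added illustration of the special case $k = 1$ (using the statement above): the inequality then reads
\[
\| u \|_{L^p(\Omega)} \;\leq\; d \left( \sum_{i=1}^{n} \left\| \frac{\partial u}{\partial x_i} \right\|_{L^p(\Omega)}^p \right)^{1/p},
\]
which is the Poincaré-type bound referred to in the introduction.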
See also
Poincaré inequality
References
Sobolev spaces
Inequalities
Linear functionals | Friedrichs's inequality | [
"Mathematics"
] | 203 | [
"Mathematical analysis",
"Mathematical theorems",
"Mathematical analysis stubs",
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems"
] |
25,567,675 | https://en.wikipedia.org/wiki/MacMahon%27s%20master%20theorem | In mathematics, MacMahon's master theorem (MMT) is a result in enumerative combinatorics and linear algebra. It was discovered by Percy MacMahon and proved in his monograph Combinatory analysis (1916). It is often used to derive binomial identities, most notably Dixon's identity.
Background
In the monograph, MacMahon found so many applications of his result, he called it "a master theorem in the Theory of Permutations." He explained the title as follows: "a Master Theorem from the masterly and rapid fashion in which it deals with various questions otherwise troublesome to solve."
The result was re-derived (with attribution) a number of times, most notably by I. J. Good who derived it from his multilinear generalization of the Lagrange inversion theorem. MMT was also popularized by Carlitz who found an exponential power series version. In 1962, Good found a short proof of Dixon's identity from MMT. In 1969, Cartier and Foata found a new proof of MMT by combining algebraic and bijective ideas (built on Foata's thesis) and further applications to combinatorics on words, introducing the concept of traces. Since then, MMT has become a standard tool in enumerative combinatorics.
Although various q-Dixon identities have been known for decades, except for a Krattenthaler–Schlosser extension (1999), the proper q-analog of MMT remained elusive. After Garoufalidis–Lê–Zeilberger's quantum extension (2006), a number of noncommutative extensions were developed by Foata–Han, Konvalinka–Pak, and Etingof–Pak. Further connections to Koszul algebra and quasideterminants were also found by Hai–Lorentz, Hai–Kriegk–Lorenz, Konvalinka–Pak, and others.
Finally, according to J. D. Louck, the theoretical physicist Julian Schwinger re-discovered the MMT in the context of his generating function approach to the angular momentum theory of many-particle systems. Louck writes:
Precise statement
Let $A = (a_{ij})$ be a complex $m \times m$ matrix, and let $x_1, \ldots, x_m$ be formal variables. Consider a coefficient
$G(k_1, \ldots, k_m) = \left[x_1^{k_1} \cdots x_m^{k_m}\right] \, \prod_{i=1}^{m} \left(a_{i1} x_1 + \cdots + a_{im} x_m\right)^{k_i}.$
(Here the notation $[x_1^{k_1} \cdots x_m^{k_m}]\, f$ means "the coefficient of monomial $x_1^{k_1} \cdots x_m^{k_m}$ in $f$".) Let $t_1, \ldots, t_m$ be another set of formal variables, and let $T = \operatorname{diag}(t_1, \ldots, t_m)$ be a diagonal matrix. Then
$\sum_{(k_1, \ldots, k_m)} G(k_1, \ldots, k_m) \, t_1^{k_1} \cdots t_m^{k_m} = \frac{1}{\det(I_m - T A)},$
where the sum runs over all nonnegative integer vectors $(k_1, \ldots, k_m)$,
and $I_m$ denotes the identity matrix of size $m$.
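The statement can be spot-checked symbolically; the following added sketch verifies it for one small matrix using sympy. The 2x2 matrix and the exponent vector are arbitrary illustrative choices, not part of the theorem.

```python
# A numerical check of MacMahon's master theorem for one small matrix, with sympy.
# The matrix A and the exponents (k1, k2) below are arbitrary illustrative choices.
import sympy as sp

x1, x2, t1, t2 = sp.symbols('x1 x2 t1 t2')
A = sp.Matrix([[1, 2], [3, 5]])
k1, k2 = 2, 3

# Left-hand side: coefficient of x1^k1 * x2^k2 in
# (a11*x1 + a12*x2)^k1 * (a21*x1 + a22*x2)^k2.
product = sp.expand((A[0, 0]*x1 + A[0, 1]*x2)**k1 * (A[1, 0]*x1 + A[1, 1]*x2)**k2)
lhs = product.coeff(x1, k1).coeff(x2, k2)

# Right-hand side: coefficient of t1^k1 * t2^k2 in 1/det(I - T*A), T = diag(t1, t2),
# extracted as a Taylor coefficient by differentiation at t1 = t2 = 0.
f = 1 / (sp.eye(2) - sp.diag(t1, t2) * A).det()
rhs = sp.diff(f, t1, k1, t2, k2).subs({t1: 0, t2: 0}) / (sp.factorial(k1) * sp.factorial(k2))

print(lhs, sp.simplify(rhs))   # the two coefficients agree, as the theorem predicts
```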
Derivation of Dixon's identity
Consider a matrix
$A = \begin{pmatrix} 0 & 1 & -1 \\ -1 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix}.$
Compute the coefficients G(2n, 2n, 2n) directly from the definition:
where the last equality follows from the fact that on the right-hand side we have the product of the following coefficients:
which are computed from the binomial theorem. On the other hand, we can compute the determinant explicitly:
$\det(I - TA) = \det\begin{pmatrix} 1 & -t_1 & t_1 \\ t_2 & 1 & -t_2 \\ -t_3 & t_3 & 1 \end{pmatrix} = 1 + (t_1 t_2 + t_1 t_3 + t_2 t_3).$
Therefore, by the MMT, we have a new formula for the same coefficients:
where the last equality follows from the fact that we need to use an equal number of times all three terms in the power. Now equating the two formulas for coefficients G(2n, 2n, 2n) we obtain an equivalent version of Dixon's identity:
$\sum_{k=0}^{2n} (-1)^k \binom{2n}{k}^3 = (-1)^n \, \frac{(3n)!}{(n!)^3}.$
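The identity can also be spot-checked numerically; the following added snippet verifies the displayed version for the first several values of n (the range is an arbitrary choice).

```python
# Numerical check of the identity sum_k (-1)^k * C(2n, k)^3 = (-1)^n * (3n)!/(n!)^3.
from math import comb, factorial

for n in range(8):
    lhs = sum((-1) ** k * comb(2 * n, k) ** 3 for k in range(2 * n + 1))
    rhs = (-1) ** n * (factorial(3 * n) // factorial(n) ** 3)
    assert lhs == rhs
print("verified for n = 0, ..., 7")
```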
See also
Permanent
References
P.A. MacMahon, Combinatory analysis, vols 1 and 2, Cambridge University Press, 1915–16.
P. Cartier and D. Foata, Problèmes combinatoires de commutation et réarrangements, Lecture Notes in Mathematics, no. 85, Springer, Berlin, 1969.
L. Carlitz, An Application of MacMahon's Master Theorem, SIAM Journal on Applied Mathematics 26 (1974), 431–436.
I.P. Goulden and D. M. Jackson, Combinatorial Enumeration, John Wiley, New York, 1983.
C. Krattenthaler and M. Schlosser, A new multidimensional matrix inverse with applications to multiple q-series, Discrete Mathematics 204 (1999), 249–279.
S. Garoufalidis, T. T. Q. Lê and D. Zeilberger, The Quantum MacMahon Master Theorem, Proceedings of the National Academy of Sciences of the United States of America 103 (2006), no. 38, 13928–13931 (eprint).
M. Konvalinka and I. Pak, Non-commutative extensions of the MacMahon Master Theorem, Advances in Mathematics 216 (2007), no. 1. (eprint).
D. Foata and G.-N. Han, A new proof of the Garoufalidis-Lê-Zeilberger Quantum MacMahon Master Theorem, Journal of Algebra 307 (2007), no. 1, 424–431 (eprint).
D. Foata and G.-N. Han, Specializations and extensions of the quantum MacMahon Master Theorem, Linear Algebra and its Applications 423 (2007), no. 2–3, 445–455 (eprint).
P.H. Hai and M. Lorenz, Koszul algebras and the quantum MacMahon master theorem, Bull. Lond. Math. Soc. 39 (2007), no. 4, 667–676. (eprint).
P. Etingof and I. Pak, An algebraic extension of the MacMahon master theorem, Proceedings of the American Mathematical Society 136 (2008), no. 7, 2279–2288 ( eprint).
P.H. Hai, B. Kriegk and M. Lorenz, N-homogeneous superalgebras, J. Noncommut. Geom. 2 (2008) 1–51 (eprint).
J.D. Louck, Unitary symmetry and combinatorics, World Sci., Hackensack, NJ, 2008.
Enumerative combinatorics
Factorial and binomial topics
Articles containing proofs
Theorems in combinatorics
Theorems in linear algebra | MacMahon's master theorem | [
"Mathematics"
] | 1,277 | [
"Theorems in linear algebra",
"Theorems in combinatorics",
"Factorial and binomial topics",
"Theorems in algebra",
"Enumerative combinatorics",
"Combinatorics",
"Theorems in discrete mathematics",
"Articles containing proofs"
] |
25,568,388 | https://en.wikipedia.org/wiki/Simons%20Center%20for%20Geometry%20and%20Physics | The Simons Center for Geometry and Physics is a center for theoretical physics and mathematics at Stony Brook University in New York. The focus of the center is mathematical physics and the interface of geometry and physics. It was founded in 2007 by a gift from the James and Marilyn Simons Foundation. The center's current director is physicist Luis Álvarez-Gaumé.
History
Background
James H. Simons was the chair of the mathematics department at Stony Brook from 1968 to 1976. After deciding to leave academia, he then went on to make billions with his investment firm Renaissance Technologies. On February 27, 2008 he announced a donation totaling $60 million (including a $25 million gift two years prior) to the mathematics and physics departments. This was the largest single gift ever given to any of the SUNY schools. The gift came during Stony Brook's 50th anniversary and shortly after Gov. Spitzer announced his commitment to make Stony Brook a “flagship” of the SUNY system that would rival the nation’s most prestigious state research universities. During his announcement speech, Jim Simons said, "From Archimedes to Newton to Einstein, much of the most profound work in physics has been deeply intertwined with the geometric side of mathematics. Since then, in particular with the advent of such areas as quantum field theory and string theory, developments in geometry and physics have become if anything more interrelated. The new Center will give many of the world's best mathematicians and physicists the opportunity to work and interact in an environment and an architecture carefully designed to enhance progress. We believe there is a chance that work accomplished at the Center will significantly change and deepen our understanding of the physical universe and of its basic mathematical structure." The Center results from extensive thought and planning between faculty, department chairs, and others, including Cumrun Vafa of Harvard, who directs the Simons Foundation-supported summer institutes on string theory at Stony Brook, and Isadore Singer of MIT.
Establishment
John Morgan served as the founding director from 2009 to 2016. Luis Álvarez-Gaumé has been the director since 2016.
Building
The Simons Center's building was completed in September 2010. The building is adjacent to the physics and mathematics departments to allow for close collaboration with the mathematics department and the C. N. Yang Institute for Theoretical Physics. Its floor space, spread over six stories, includes a 236-seat auditorium, a 90-seat lecture hall, offices, seminar rooms, and a cafe. The building is LEED Gold certified and is connected to the Math Tower via an elevated walkway.
Faculty
The Center's permanent faculty currently consists of mathematicians Simon Donaldson, Kenji Fukaya, and John Pardon, and physicists Nikita Nekrasov and Zohar Komargodski. The Center's academic staff also includes roughly 10 research assistant professors and 20 visiting researchers at any given time. Other former faculty members include physicists Michael R. Douglas and Anton Kapustin.
References
See also
Institute for Theoretical Physics (disambiguation)
Center for Theoretical Physics (disambiguation)
External links
Official website
Physics research institutes
Mathematical institutes
Stony Brook University
Brookhaven, New York
2010 establishments in New York (state)
Theoretical physics institutes | Simons Center for Geometry and Physics | [
"Physics"
] | 653 | [
"Theoretical physics",
"Theoretical physics institutes"
] |
25,571,913 | https://en.wikipedia.org/wiki/Reverse%20cholesterol%20transport | Reverse cholesterol transport is a multi-step process resulting in the net movement of cholesterol from peripheral tissues back to the liver first via entering the lymphatic system, then the bloodstream.
Cholesterol from non-hepatic peripheral tissues is transferred to HDL by the ABCA1 (ATP-binding cassette transporter). Apolipoprotein A1 (ApoA-1), the major protein component of HDL, acts as an acceptor, and the phospholipid component of HDL acts as a sink for the mobilised cholesterol.
The cholesterol is converted to cholesteryl esters by the enzyme LCAT (lecithin-cholesterol acyltransferase).
The cholesteryl esters can be transferred, with the help of CETP (cholesterylester transfer protein) and in exchange for triglycerides, to other lipoproteins (such as LDL and VLDL); these lipoproteins can then be taken up by the liver, which disposes of the cholesterol by secreting unesterified cholesterol into the bile or by converting it to bile acids.
Adiponectin induces ABCA1-mediated reverse cholesterol transport from macrophages by activation of PPAR-γ and LXRα/β.
Uptake of HDL2 is mediated by hepatic lipase, a special form of lipoprotein lipase found only in the liver. Hepatic lipase activity is increased by androgens and decreased by estrogens, which may account for higher concentrations of HDL2 in women.
Discoidal (Nascent) HDL:
Initially, HDL is discoidal in shape because it lacks esterified cholesterol; as it accumulates free cholesterol, the enzyme LCAT progressively esterifies it.
Once the HDL particle is cholesterol-rich, its shape becomes more spherical and it becomes less dense (HDL2). This particle is carried to the liver, where it releases its esterified cholesterol.
References
Biochemistry
Lipids
Lipoproteins
Metabolism | Reverse cholesterol transport | [
"Chemistry",
"Biology"
] | 451 | [
"Biomolecules by chemical classification",
"Lipid biochemistry",
"Organic compounds",
"Cellular processes",
"nan",
"Metabolism",
"Biochemistry",
"Lipids",
"Lipoproteins"
] |
25,572,054 | https://en.wikipedia.org/wiki/Imaging%20Lung%20Sound%20Behavior%20with%20Vibration%20Response%20Imaging | In medicine, Imaging Lung Sound Behavior with Vibration Response Imaging (VRI) is a novelty computer-based technology that takes the concept of the stethoscope to a more progressive level. Since the invention of the stethoscope by René-Théophile-Hyacinthe Laennec France in 1816, physicians have been utilizing lung sounds to diagnose various chest conditions. Today auscultation provides physicians with extensive information on the examination of the patient. The skills of the examiner however, vary, as seen in a clinical study that was conducted on the diagnosis of pneumonia in 2004.
The technology is based on the physiologic vibration generated during breathing: airflow distributing through the bronchial tree sets the bronchial walls and the lung parenchyma itself into vibration. The emitted vibration energy propagates through the lung parenchyma and the chest wall and reaches the body surface, where it is captured and recorded by a set of acoustic sensors. The sensors are positioned over the lung areas on the back, which allows simultaneous reception of these signals from both lungs. The signals are then transformed by a complex algorithm to display the spatial changes in energy intensity during the breathing cycle. The intensity changes follow the changes of airflow through the breathing cycle, i.e., flow increases and decreases during inspiration and expiration. The VRI technology represents these changes as a grey-scale dynamic image: the darker a region, the higher the vibration intensity; the lighter, the lower.
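The signal-processing chain itself is proprietary, but its general shape — band-limit each sensor channel, compute short-time vibration energy, map the result to grey levels — can be illustrated with a small sketch. Everything below (the 2 kHz sampling rate, the 100–250 Hz band, the 0.2 s frame length, the 5 × 8 sensor grid and the synthetic input) is an assumption made for illustration, not a description of the actual device.

```python
# Illustrative sketch only: the commercial VRI algorithm is proprietary, so the
# sampling rate, band limits, frame length and sensor layout below are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 2000          # assumed sampling rate (Hz)
BAND = (100, 250)  # assumed vibration band of interest (Hz)
FRAME = 0.2        # frame length in seconds for the dynamic image

def energy_frames(signals, fs=FS, band=BAND, frame=FRAME):
    """signals: array (n_sensors, n_samples) -> grey-level frames (n_frames, n_sensors).

    Each frame holds the band-limited vibration energy of every sensor,
    mapped so that 0 = darkest (highest energy) and 1 = lightest (lowest).
    """
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signals, axis=1)
    hop = int(frame * fs)
    n_frames = filtered.shape[1] // hop
    energy = np.array([
        (filtered[:, i * hop:(i + 1) * hop] ** 2).sum(axis=1)
        for i in range(n_frames)
    ])
    # Normalise over the whole recording, then invert: high energy -> dark.
    energy /= energy.max()
    return 1.0 - energy

# 12 seconds of synthetic data from 40 sensors, just to exercise the pipeline.
rng = np.random.default_rng(0)
demo = rng.normal(size=(40, 12 * FS))
frames = energy_frames(demo)
print(frames.shape)   # (60, 40): 60 frames of 0.2 s covering 12 s
```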
VRI and Lung Sound Behavior
The foremost information that the VRI provides on vibration energy is how lung sounds behave during inspiration and expiration; it also includes individual breathing-intensity (vibration energy) graphs for each lung over a 12-second period. In healthy individuals, the distribution pattern of normal lung vibration energy evolves centrally (presumably reflecting early airflow distribution in the central large airways) and develops centrifugally, simultaneously for the left and right lungs. Following peak inspiration, there is centripetal regression of vibration energy toward the end of inspiration. The same pattern is repeated during the expiration phase. The peak of inspiratory vibration energy is higher than the expiratory peak because inspiration is a more active process than expiration. At the Maximum Energy Frame (MEF), a frame of the dynamic image representing the maximum distribution of vibration energy at the peak of inspiration, the right and left zones have a similar shape, area, and image intensity, with a tendency, however, to greater intensity on the left. The vibration energy graph is a graphical representation of the behavioral pattern for both lungs together and for each lung individually. For a healthy individual with normal lungs, the graph has a consistent pattern that is repeated throughout the 12-second breathing period: it rises to a peak at the MEF frame during inspiration and then decreases through expiration. When comparing right and left intensity graphs, the graphs are synchronized, peak at the same time, and are at almost the same intensity level.
Lung ailments such as chronic obstructive pulmonary disease (COPD) cause narrowing of the airways in the lungs, limiting airflow and causing shortness of breath. Because of this airflow limitation, the VRI breathing pattern differs from that of a healthy individual. The patterns show asynchrony between the lungs, with differences in the timing and height of the vibration-energy peaks. Because of this asynchrony, the contours of the lung periphery are not smooth, but have a "bumpy-lumpy" or "disco" appearance. The vibration energy graph displays an inconsistent pattern, and it is difficult to delineate inspiration from expiration. When comparing the right to the left lung, the energy graphs peak at different times and differ in intensity.
Conclusion
Studies have shown that normal lung sounds have distinctive characteristics that can be differentiated from abnormal lung sounds, supporting the potential clinical value of acoustic lung imaging. Because the VRI simultaneously records the vibration energy from 40 points over 12 seconds and presents all of the derived information in a single image, the physician can be less dependent on memory. Another advantage of the method is the ability to store the data and later compare it with subsequent recordings. Finally, the VRI examination is harmless: it does not emit any energy and is non-invasive and radiation-free, unlike potentially harmful radiologic studies. It is important to note that even though considerable literature has been published on the VRI method, it is still fairly new and has its limitations. Its demonstrated clinical value is limited to the aforementioned studies, and crucial elements such as a complete patient work-up, including extensive patient history, medications, and the present symptoms, remain invaluable to the physician's decisions about how to proceed with the patient's treatment.
See also
Imaging instruments
Medical imaging
References
Respiratory system imaging
Sound measurements | Imaging Lung Sound Behavior with Vibration Response Imaging | [
"Physics",
"Mathematics"
] | 1,014 | [
"Quantity",
"Sound measurements",
"Physical quantities"
] |
3,710,123 | https://en.wikipedia.org/wiki/Theoretical%20motivation%20for%20general%20relativity | A theoretical motivation for general relativity, including the motivation for the geodesic equation and the Einstein field equation, can be obtained from special relativity by examining the dynamics of particles in circular orbits about the Earth. A key advantage in examining circular orbits is that it is possible to know the solution of the Einstein Field Equation a priori. This provides a means to inform and verify the formalism.
General relativity addresses two questions:
How does the curvature of spacetime affect the motion of matter?
How does the presence of matter affect the curvature of spacetime?
The former question is answered with the geodesic equation. The second question is answered with the Einstein field equation. The geodesic equation and the field equation are related through a principle of least action. The motivation for the geodesic equation is provided in the section Geodesic equation for circular orbits. The motivation for the Einstein field equation is provided in the section Stress–energy tensor.
Geodesic equation for circular orbits
Kinetics of circular orbits
For definiteness, consider a circular Earth orbit (helical world line) of a particle. The particle travels with speed v. An observer on Earth sees that length is contracted in the frame of the particle: a measuring stick traveling with the particle appears shorter to the Earth observer. Therefore, the circumference of the orbit, which is in the direction of motion, appears longer than π times the diameter of the orbit.
In special relativity the 4-proper-velocity of the particle in the inertial (non-accelerating) frame of the earth is
where c is the speed of light, v is the 3-velocity, and γ is the Lorentz factor
γ = 1/√(1 − v²/c²).
The magnitude of the 4-velocity vector is always constant
where we are using a Minkowski metric
.
The magnitude of the 4-velocity is therefore a Lorentz scalar.
The 4-acceleration in the Earth (non-accelerating) frame is
where is c times the proper time interval measured in the frame of the particle. This is related to the time interval in the Earth's frame by
.
Here, the 3-acceleration for a circular orbit is
where is the angular velocity of the rotating particle and is the 3-position of the particle.
The magnitude of the 4-velocity is constant. This implies that the 4-acceleration must be perpendicular to the 4-velocity. The inner product of the 4-acceleration and the 4-velocity is therefore always zero. The inner product is a Lorentz scalar.
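A short numerical check of these kinematic statements is sketched below. It uses units with c = 1, an arbitrary orbital radius and speed, and a (+,−,−,−) Minkowski signature (an assumption, since the sign conventions of the omitted equations are not fixed here); it verifies that u·u stays constant and that a·u = 0 for a circular orbit.

```python
# Numerical check of the statements above for a circular orbit.  The orbital
# radius, the speed and the (+,-,-,-) signature are illustrative choices.
import numpy as np

c = 1.0                     # work in units with c = 1
r, v = 1.0, 0.5             # orbit radius and speed (arbitrary demo values)
omega = v / r               # angular velocity
gamma = 1.0 / np.sqrt(1.0 - v**2 / c**2)
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric

def four_velocity(t):
    # u^mu = gamma * (c, v_x, v_y, v_z) for motion on a circle in the x-y plane
    return gamma * np.array([c, -v * np.sin(omega * t), v * np.cos(omega * t), 0.0])

def four_acceleration(t):
    # a^mu = du^mu/dtau = gamma * du^mu/dt; gamma is constant for circular motion
    vx_dot = -v * omega * np.cos(omega * t)
    vy_dot = -v * omega * np.sin(omega * t)
    return gamma**2 * np.array([0.0, vx_dot, vy_dot, 0.0])

for t in (0.0, 0.7, 2.3):
    u, a = four_velocity(t), four_acceleration(t)
    print(u @ eta @ u,   # should equal c^2 at every instant
          a @ eta @ u)   # should vanish: 4-acceleration is orthogonal to 4-velocity
```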
Curvature of spacetime: Geodesic equation
The equation for the acceleration can be generalized, yielding the geodesic equation
where is the 4-position of the particle and is the curvature tensor given by
where is the Kronecker delta function, and we have the constraints
and
.
It is easily verified that circular orbits satisfy the geodesic equation. The geodesic equation is actually more general. Circular orbits are a particular solution of the equation. Solutions other than circular orbits are permissible and valid.
Ricci curvature tensor and trace
The Ricci curvature tensor is a special curvature tensor given by the contraction
.
The trace of the Ricci tensor, called the scalar curvature, is
.
The geodesic equation in a local coordinate system
Consider the situation in which there are now two particles in nearby circular polar orbits of the Earth at radius and speed .
The particles execute simple harmonic motion about the Earth and with respect to each other. They are at their maximum distance from each other as they cross the equator. Their trajectories intersect at the poles.
Imagine a spacecraft co-moving with one of the particles. The ceiling of the craft, the direction, coincides with the direction. The front of the craft is in the direction, and the direction is to the left of the craft. The spacecraft is small compared with the size of the orbit so that the local frame is a local Lorentz frame. The 4-separation of the two particles is given by . In the local frame of the spacecraft the geodesic equation is given by
where
and
is the curvature tensor in the local frame.
Geodesic equation as a covariant derivative
The equation of motion for a particle in flat spacetime and in the absence of forces is
.
If we require a particle to travel along a geodesic in curved spacetime, then the analogous expression in curved spacetime is
where the derivative on the left is the covariant derivative, which is the generalization of the normal derivative to a derivative in curved spacetime. Here
is a Christoffel symbol.
The curvature is related to the Christoffel symbol by
.
Metric tensor in the local frame
The interval in the local frame is
where
is the angle with the axis (longitude) and
is the angle with the axis (latitude).
This gives a metric of
in the local frame.
The inverse of the metric tensor is defined such that
where the term on the right is the Kronecker delta.
The transformation of the infinitesimal 4-volume is
where g is the determinant of the metric tensor.
The differential of the determinant of the metric tensor is
.
The relationship between the Christoffel symbols and the metric tensor is
.
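The chain metric → Christoffel symbols → curvature → Ricci tensor and scalar described above can be checked symbolically. The sketch below uses the standard textbook formulas (which may differ in sign convention from the omitted equations) on a simple test case, the unit 2-sphere, whose scalar curvature is known to be 2.

```python
# Symbolic check of the metric -> Christoffel -> curvature chain for a simple
# example, the unit 2-sphere with coordinates (theta, phi).  Only standard
# textbook formulas are used; sympy is assumed to be available.
import sympy as sp

theta, phi = sp.symbols("theta phi")
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta) ** 2]])   # metric tensor
ginv = g.inv()
n = 2

# Christoffel symbols Gamma^a_{bc} = 1/2 g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sum(sp.Rational(1, 2) * ginv[a, d] *
               (sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c]) - sp.diff(g[b, c], x[d]))
               for d in range(n))
           for c in range(n)]
          for b in range(n)]
         for a in range(n)]

# Riemann tensor R^a_{bcd}, then Ricci R_{bd} = R^a_{bad} and scalar R = g^{bd} R_{bd}
def riemann(a, b, c, d):
    term = sp.diff(Gamma[a][b][d], x[c]) - sp.diff(Gamma[a][b][c], x[d])
    term += sum(Gamma[a][c][e] * Gamma[e][b][d] - Gamma[a][d][e] * Gamma[e][b][c]
                for e in range(n))
    return sp.simplify(term)

ricci = sp.Matrix(n, n, lambda b, d: sum(riemann(a, b, a, d) for a in range(n)))
scalar = sp.simplify(sum(ginv[b, d] * ricci[b, d] for b in range(n) for d in range(n)))
print(sp.simplify(ricci))   # diag(1, sin(theta)^2) for the unit sphere
print(scalar)               # 2
```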
Principle of least action in general relativity
The principle of least action states that the world line between two events in spacetime is that world line that minimizes the action between the two events. In classical mechanics the principle of least action is used to derive Newton's laws of motion and is the basis for Lagrangian dynamics. In relativity it is expressed as
between events 1 and 2 is a minimum. Here S is a scalar and
is known as the Lagrangian density. The Lagrangian density is divided into two parts, the density for the orbiting particle and the density of the gravitational field generated by all other particles including those comprising the Earth,
.
In curved spacetime, the "shortest" world line is that geodesic that minimizes the curvature along the geodesic. The action then is proportional to the curvature of the world line. Since S is a scalar, the scalar curvature is the appropriate measure of curvature. The action for the particle is therefore
where is an unknown constant. This constant will be determined by requiring the theory to reduce to Newton's law of gravitation in the nonrelativistic limit.
The Lagrangian density for the particle is therefore
.
The action for the particle and the Earth is
.
The action is minimized for the world line that lies on the surface of the sphere of radius r by varying the metric tensor. Minimization, together with neglect of terms that disappear on the boundaries (including terms second order in the derivative of g), yields
where
is the Hilbert stress–energy tensor of the field generated by the Earth.
The relationship, to within an unknown constant factor, between the stress-energy and the curvature is
.
Stress–energy tensor
Newton's law of gravitation
Newton's Law of Gravitation in non-relativistic mechanics states that the acceleration on an object of mass due to another object of mass is equal to
where is the gravitational constant, is a vector from mass to mass and is the magnitude of that vector. The time t is scaled with the speed of light c
.
The acceleration is independent of the mass of the orbiting particle.
For definiteness, consider a particle of mass orbiting in the gravitational field of the Earth with mass . The law of gravitation can be written
where is the average mass density inside a sphere of radius .
Gravitational force in terms of the 00 component of the stress–energy tensor
Newton's law can be written
.
where is the volume of a sphere of radius . The quantity will be recognized from special relativity as the rest energy of the large body, the Earth. This is the sum of the rest energies of all the particles that compose Earth. The quantity in the parentheses is then the average rest energy density of a sphere of radius about the Earth. The gravitational field is proportional to the average energy density within a radius r. This is the 00 component of the stress–energy tensor in relativity for the special case in which all the energy is rest energy. More generally
where
and is the velocity of particle i making up the Earth, and is the rest mass of particle i. There are N particles altogether making up the Earth.
Relativistic generalization of the energy density
There are two simple relativistic entities that reduce to the 00 component of the stress–energy tensor in the nonrelativistic limit
and the trace
where is the 4-velocity.
The 00 component of the stress–energy tensor can be generalized to the relativistic case as a linear combination of the two terms
where
4-acceleration due to gravity
The 4-acceleration due to gravity can be written
.
Unfortunately, this acceleration is nonzero for as is required for circular orbits. Since the magnitude of the 4-velocity is constant, it is only the component of the force perpendicular to the 4-velocity that contributes to the acceleration. We must therefore subtract off the component of force parallel to the 4-velocity. This is known as Fermi–Walker transport. In other words,
.
This yields
.
The force in the local frame is
.
Einstein field equation
The Einstein field equation is obtained by equating the acceleration required for circular orbits with the acceleration due to gravity
.
This is the relationship between curvature of spacetime and the stress–energy tensor.
The Ricci tensor becomes
.
The trace of the Ricci tensor is
.
Comparison of this Ricci tensor with the Ricci tensor calculated from the principle of least action (see the section Principle of least action in general relativity above), identifying the stress–energy tensor with the Hilbert stress–energy, and remembering that A + B = 1, removes the ambiguity in A, B, and C.
and
.
This gives
.
The field equation can be written
where
.
This is the Einstein field equation that describes curvature of spacetime that results from stress-energy density. This equation, along with the geodesic equation have been motivated by the kinetics and dynamics of a particle orbiting the Earth in a circular orbit. They are true in general.
Solving the Einstein field equation
Solving the Einstein field equation requires an iterative process. The solution is represented in the metric tensor
.
Typically there is an initial guess for the tensor. The guess is used to calculate Christoffel symbols, which are used to calculate the curvature. If the Einstein field equation is not satisfied, the process is repeated.
Solutions occur in two forms, vacuum solutions and non-vacuum solutions. A vacuum solution is one in which the stress–energy tensor is zero. The relevant vacuum solution for circular orbits is the Schwarzschild metric. There are also a number of exact solutions that are non-vacuum solutions, solutions in which the stress tensor is non-zero.
Solving the geodesic equation
Solving the geodesic equations requires knowledge of the metric tensor obtained through the solution of the Einstein field equation. Either the Christoffel symbols or the curvature are calculated from the metric tensor. The geodesic equation is then integrated with the appropriate boundary conditions.
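As a toy illustration of this procedure, the sketch below integrates the geodesic equation with known Christoffel symbols and initial conditions. Flat two-dimensional space in polar coordinates is used (nonzero symbols Γ^r_φφ = −r and Γ^φ_rφ = 1/r) rather than a relativistic metric, so the result can be checked by hand: the geodesics must come out as straight lines.

```python
# Toy illustration of the procedure: integrate the geodesic equation with known
# Christoffel symbols and initial conditions.  Flat 2-D space in polar
# coordinates is used so the result can be checked (geodesics are straight lines).
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_rhs(s, y):
    # y = (r, phi, dr/ds, dphi/ds); nonzero Christoffels: G^r_pp = -r, G^p_rp = 1/r
    r, phi, dr, dphi = y
    d2r = r * dphi**2              # -G^r_pp * dphi^2
    d2phi = -2.0 / r * dr * dphi   # -2 G^p_rp * dr * dphi
    return [dr, dphi, d2r, d2phi]

# Start at r = 1, phi = 0, moving in the +y direction with unit speed.
y0 = [1.0, 0.0, 0.0, 1.0]
sol = solve_ivp(geodesic_rhs, (0.0, 2.0), y0, dense_output=True, rtol=1e-9, atol=1e-9)

s = np.linspace(0.0, 2.0, 5)
r, phi = sol.sol(s)[0], sol.sol(s)[1]
x, y = r * np.cos(phi), r * np.sin(phi)
print(np.round(x, 6))   # stays at 1: the geodesic is the vertical line x = 1
print(np.round(y, 6))   # increases linearly with the affine parameter s
```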
Electrodynamics in curved spacetime
Maxwell's equations, the equations of electrodynamics, in curved spacetime are a generalization of Maxwell's equations in flat spacetime (see Formulation of Maxwell's equations in special relativity). Curvature of spacetime affects electrodynamics. Maxwell's equations in curved spacetime can be obtained by replacing the derivatives in the equations in flat spacetime with covariant derivatives. The sourced and source-free equations become (cgs units):
,
and
where is the 4-current, is the field strength tensor, is the Levi-Civita symbol, and
is the 4-gradient. Repeated indices are summed over according to Einstein summation convention. We have displayed the results in several common notations.
The first tensor equation is an expression of the two inhomogeneous Maxwell's equations, Gauss' law and the Ampère's law with Maxwell's correction. The second equation is an expression of the homogeneous equations, Faraday's law of induction and Gauss's law for magnetism.
The electromagnetic wave equation is modified from the equation in flat spacetime in two ways, the derivative is replaced with the covariant derivative and a new term that depends on the curvature appears.
where the 4-potential is defined such that
.
We have assumed the generalization of the Lorenz gauge in curved spacetime
.
See also
Newtonian motivations for general relativity
References
General relativity | Theoretical motivation for general relativity | [
"Physics"
] | 2,537 | [
"General relativity",
"Theory of relativity"
] |
3,710,263 | https://en.wikipedia.org/wiki/List%20of%20uniform%20polyhedra%20by%20vertex%20figure | There are many relations among the uniform polyhedra.
Some are obtained by truncating the vertices of the regular or quasi-regular polyhedron.
Others share the same vertices and edges as other polyhedron.
The grouping below exhibit some of these relations.
The vertex figure of a polyhedron
The relations can be made apparent by examining the vertex figures obtained by listing the faces adjacent to each vertex (remember that for uniform polyhedra all vertices are the same, that is, vertex-transitive). For example, the cube has vertex figure 4.4.4, which is to say, three adjacent square faces.
The possible faces are
3 - equilateral triangle
4 - square
5 - regular pentagon
6 - regular hexagon
8 - regular octagon
10 - regular decagon
5/2 - pentagram
8/3 - octagram
10/3 - decagram
Some faces will appear with reverse orientation which is written here as
-3 - a triangle with reverse orientation (often written as 3/2)
Others pass through the origin which we write as
6* - hexagon passing through the origin
The Wythoff symbol relates the polyhedron to spherical triangles. Wythoff symbols are written p|q r, p q|r, p q r|, where the spherical triangle has angles π/p, π/q, π/r; the bar indicates the position of the vertices in relation to the triangle.
Johnson (2000) classified uniform polyhedra according to the following:
Regular (regular polygonal vertex figures): pq, Wythoff symbol q|p 2
Quasi-regular (rectangular or ditrigonal vertex figures): p.q.p.q 2|p q, or p.q.p.q.p.q, Wythoff symbol 3|p q
Versi-regular (orthodiagonal vertex figures), p.q*.-p.q*, Wythoff symbol q q|p
Truncated regular (isosceles triangular vertex figures): p.p.q, Wythoff symbol q 2|p
Versi-quasi-regular (dipteroidal vertex figures), p.q.p.r Wythoff symbol q r|p
Quasi-quasi-regular (trapezoidal vertex figures): p*.q.p*.-r q.r|p or p.q*.-p.q* p q r|
Truncated quasi-regular (scalene triangular vertex figures), p.q.r Wythoff symbol p q r|
Snub quasi-regular (pentagonal, hexagonal, or octagonal vertex figures), Wythoff symbol p q r|
Prisms (truncated hosohedra),
Antiprisms and crossed antiprisms (snub dihedra)
The format of each figure follows the same basic pattern
image of polyhedron
name of polyhedron
alternate names (in brackets)
Wythoff symbol
Numbering systems: W - number used by Wenninger in polyhedra models, U - uniform indexing, K - Kaleido indexing, C - numbering used in Coxeter et al. 'Uniform Polyhedra'.
Number of vertices V, edges E, Faces F and number of faces by type.
Euler characteristic χ = V - E + F (a quick numerical check of this identity is sketched after this list)
The vertex figures are on the left, followed by the point group in three dimensions, either tetrahedral Td, octahedral Oh, or icosahedral Ih.
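As a sanity check of the Euler characteristic entry in the list above, the sketch below verifies χ = V − E + F = 2 for a few convex uniform polyhedra; the counts are taken from standard tables, and non-convex (star) uniform polyhedra can have other values of χ.

```python
# Quick check of the Euler characteristic chi = V - E + F for a few convex
# uniform polyhedra (vertex, edge and face counts from standard tables).
polyhedra = {
    "tetrahedron":       (4, 6, 4),
    "cube":              (8, 12, 6),
    "octahedron":        (6, 12, 8),
    "cuboctahedron":     (12, 24, 14),
    "icosidodecahedron": (30, 60, 32),
    "truncated cube":    (24, 36, 14),
}

for name, (V, E, F) in polyhedra.items():
    chi = V - E + F
    print(f"{name:20s} V={V:3d} E={E:3d} F={F:3d} chi={chi}")
    assert chi == 2   # every convex polyhedron has chi = 2
```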
Truncated forms
Regular polyhedra and their truncated forms
Column A lists all the regular polyhedra,
column B list their truncated forms.
Regular polyhedra all have vertex figures of the form p.p.p etc. and Wythoff symbol p|q r. The truncated forms have vertex figure q.q.r (where q = 2p) and Wythoff symbol p q|r.
In addition there are three quasi-truncated forms. These also class as truncated-regular polyhedra.
Truncated forms of quasi-regular polyhedra
Column A lists some quasi-regular polyhedra,
column B lists normal truncated forms,
column C shows quasi-truncated forms,
column D shows a different method of truncation.
These truncated forms all have a vertex figure p.q.r and a Wythoff symbol p q r|.
Polyhedra sharing edges and vertices
Regular
These are all mentioned elsewhere, but this table shows some relations.
They are all regular apart from the tetrahemihexahedron which is versi-regular.
Quasi-regular and versi-regular
Rectangular vertex figures, or crossed rectangles
The first column contains quasi-regular polyhedra; the second and third columns contain hemihedra with faces passing through the origin, called versi-regular by some authors.
Ditrigonal regular and versi-regular
Ditrigonal (that is, di(2)-tri(3)-gonal) vertex figures are the 3-fold analog of a rectangle. These are all quasi-regular, as all edges are isomorphic.
The compound of 5-cubes shares the same set of edges and vertices.
The cross forms have a non-orientable vertex figure so the "-" notation has not been used and the "*" faces pass near rather than through the origin.
Versi-quasi-regular and quasi-quasi-regular
Group III: trapezoid or crossed trapezoid vertex figures.
The first column includes the convex rhombic polyhedra, created by inserting two squares into the vertex figures of the cuboctahedron and icosidodecahedron.
References
Uniform polyhedra | List of uniform polyhedra by vertex figure | [
"Physics"
] | 1,171 | [
"Uniform polytopes",
"Uniform polyhedra",
"Symmetry"
] |
3,710,682 | https://en.wikipedia.org/wiki/Super%20vector%20space | In mathematics, a super vector space is a -graded vector space, that is, a vector space over a field with a given decomposition of subspaces of grade and grade . The study of super vector spaces and their generalizations is sometimes called super linear algebra. These objects find their principal application in theoretical physics where they are used to describe the various algebraic aspects of supersymmetry.
Definitions
A super vector space is a -graded vector space with decomposition
Vectors that are elements of either or are said to be homogeneous. The parity of a nonzero homogeneous element, denoted by , is or according to whether it is in or ,
Vectors of parity are called even and those of parity are called odd. In theoretical physics, the even elements are sometimes called Bose elements or bosonic, and the odd elements Fermi elements or fermionic. Definitions for super vector spaces are often given only in terms of homogeneous elements and then extended to nonhomogeneous elements by linearity.
If is finite-dimensional and the dimensions of and are and respectively, then is said to have dimension . The standard super coordinate space, denoted , is the ordinary coordinate space where the even subspace is spanned by the first coordinate basis vectors and the odd space is spanned by the last .
A homogeneous subspace of a super vector space is a linear subspace that is spanned by homogeneous elements. Homogeneous subspaces are super vector spaces in their own right (with the obvious grading).
For any super vector space , one can define the parity reversed space to be the super vector space with the even and odd subspaces interchanged. That is,
Linear transformations
A homomorphism, a morphism in the category of super vector spaces, from one super vector space to another is a grade-preserving linear transformation. A linear transformation between super vector spaces is grade preserving if
That is, it maps the even elements of to even elements of and odd elements of to odd elements of . An isomorphism of super vector spaces is a bijective homomorphism. The set of all homomorphisms is denoted .
Every linear transformation, not necessarily grade-preserving, from one super vector space to another can be written uniquely as the sum of a grade-preserving transformation and a grade-reversing one—that is, a transformation such that
Declaring the grade-preserving transformations to be even and the grade-reversing ones to be odd gives the space of all linear transformations from to , denoted and called internal , the structure of a super vector space. In particular,
A grade-reversing transformation from to can be regarded as a homomorphism from to the parity reversed space , so that
Operations on super vector spaces
The usual algebraic constructions for ordinary vector spaces have their counterpart in the super vector space setting.
Dual space
The dual space of a super vector space can be regarded as a super vector space by taking the even functionals to be those that vanish on and the odd functionals to be those that vanish on . Equivalently, one can define to be the space of linear maps from to (the base field thought of as a purely even super vector space) with the gradation given in the previous section.
Direct sum
Direct sums of super vector spaces are constructed as in the ungraded case with the grading given by
Tensor product
One can also construct tensor products of super vector spaces. Here the additive structure of comes into play. The underlying space is as in the ungraded case with the grading given by
where the indices are in . Specifically, one has
Supermodules
Just as one may generalize vector spaces over a field to modules over a commutative ring, one may generalize super vector spaces over a field to supermodules over a supercommutative algebra (or ring).
A common construction when working with super vector spaces is to enlarge the field of scalars to a supercommutative Grassmann algebra. Given a field let
denote the Grassmann algebra generated by anticommuting odd elements . Any super vector space over can be embedded in a module over by considering the (graded) tensor product
The category of super vector spaces
The category of super vector spaces, denoted by , is the category whose objects are super vector spaces (over a fixed field ) and whose morphisms are even linear transformations (i.e. the grade preserving ones).
The categorical approach to super linear algebra is to first formulate definitions and theorems regarding ordinary (ungraded) algebraic objects in the language of category theory and then transfer these directly to the category of super vector spaces. This leads to a treatment of "superobjects" such as superalgebras, Lie superalgebras, supergroups, etc. that is completely analogous to their ungraded counterparts.
The category is a monoidal category with the super tensor product as the monoidal product and the purely even super vector space as the unit object. The involutive braiding operator
given by
on homogeneous elements, turns into a symmetric monoidal category. This commutativity isomorphism encodes the "rule of signs" that is essential to super linear algebra. It effectively says that a minus sign is picked up whenever two odd elements are interchanged. One need not worry about signs in the categorical setting as long as the above operator is used wherever appropriate.
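The rule of signs can be made concrete with a small sketch that applies the braiding to simple tensors of homogeneous elements, picking up the factor (−1)^{|x||y|} exactly when both factors are odd. The classes and names below are ad hoc, purely for illustration.

```python
# A very small illustration of the braiding / "rule of signs": on a homogeneous
# simple tensor x (x) y it returns y (x) x times (-1)^{|x||y|}.
from dataclasses import dataclass

@dataclass(frozen=True)
class Homogeneous:
    name: str
    parity: int        # 0 = even (bosonic), 1 = odd (fermionic)

@dataclass(frozen=True)
class SimpleTensor:
    left: Homogeneous
    right: Homogeneous
    sign: int = 1
    def __str__(self):
        prefix = "" if self.sign == 1 else "-"
        return f"{prefix}{self.left.name} (x) {self.right.name}"

def braid(t: SimpleTensor) -> SimpleTensor:
    """Commutativity isomorphism on a simple tensor of homogeneous elements."""
    sign = t.sign * (-1) ** (t.left.parity * t.right.parity)
    return SimpleTensor(t.right, t.left, sign)

b = Homogeneous("b", 0)      # even element
f1 = Homogeneous("f1", 1)    # odd element
f2 = Homogeneous("f2", 1)    # odd element

print(braid(SimpleTensor(b, f1)))          # f1 (x) b   (no sign: one factor even)
print(braid(SimpleTensor(f1, f2)))         # -f2 (x) f1 (both odd: sign -1)
print(braid(braid(SimpleTensor(f1, f2))))  # f1 (x) f2  (the braiding is involutive)
```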
is also a closed monoidal category with the internal Hom object, , given by the super vector space of all linear maps from to . The ordinary set is the even subspace therein:
The fact that is closed means that the functor is left adjoint to the functor , given a natural bijection
Superalgebra
A superalgebra over can be described as a super vector space with a multiplication map
that is a super vector space homomorphism. This is equivalent to demanding
Associativity and the existence of an identity can be expressed with the usual commutative diagrams, so that a unital associative superalgebra over is a monoid in the category .
Notes
References
Categories in category theory | Super vector space | [
"Physics",
"Mathematics"
] | 1,239 | [
"Mathematical structures",
"Super linear algebra",
"Category theory",
"Categories in category theory",
"Supersymmetry",
"Symmetry"
] |
20,112,889 | https://en.wikipedia.org/wiki/Corner%20transfer%20matrix | In statistical mechanics, the corner transfer matrix describes the effect of adding a quadrant to a lattice. Introduced by Rodney Baxter in 1968 as an extension of the Kramers-Wannier row-to-row transfer matrix, it provides a powerful method of studying lattice models. Calculations with corner transfer matrices led Baxter to the exact solution of the hard hexagon model in 1980.
Definition
Consider an IRF (interaction-round-a-face) model, i.e. a square lattice model with a spin σi assigned to each site i and interactions limited to spins around a common face. Let the total energy be given by
where for each face the surrounding sites i, j, k and l are arranged as follows:
For a lattice with N sites, the partition function is
where the sum is over all possible spin configurations and w is the Boltzmann weight
To simplify the notation, we use a ferromagnetic Ising-type lattice where each spin has the value +1 or −1, and the ground state is given by all spins up (i.e. the total energy is minimised when all spins on the lattice have the value +1). We also assume the lattice has 4-fold rotational symmetry (up to boundary conditions) and is reflection-invariant. These simplifying assumptions are not crucial, and extending the definition to the general case is relatively straightforward.
Now consider the lattice quadrant shown below:
The outer boundary sites, marked by triangles, are assigned their ground state spins (+1 in this case). The sites marked by open circles form the inner boundaries of the quadrant; their associated spin sets are labelled {σ1,...,σm} and {σ'1,...,σ'm}, where σ1 = σ'1. There are 2m possible configurations for each inner boundary, so we define a 2m×2m matrix entry-wise by
The matrix A, then, is the corner transfer matrix for the given lattice quadrant. Since the outer boundary spins are fixed and the sum is over all interior spins, each entry of A is a function of the inner boundary spins. The Kronecker delta in the expression ensures that σ1 = σ'1, so by ordering the configurations appropriately we may cast A as a block diagonal matrix:
Corner transfer matrices are related to the partition function in a simple way. In our simplified example, we construct the full lattice from four rotated copies of the lattice quadrant, where the inner boundary spin sets σ, σ', σ" and σ'" are allowed to differ:
The partition function is then written in terms of the corner transfer matrix A as
Discussion
Recursion relation
A corner transfer matrix A2m (defined for an m×m quadrant) may be expressed in terms of smaller corner transfer matrices A2m-1 and A2m-2 (defined for reduced (m-1)×(m-1) and (m-2)×(m-2) quadrants respectively). This recursion relation allows, in principle, the iterative calculation of the corner transfer matrix for any lattice quadrant of finite size.
Like their row-to-row counterparts, corner transfer matrices may be factored into face transfer matrices, which correspond to adding a single face to the lattice. For the lattice quadrant given earlier, the face transfer matrices are of size 2m×2m and defined entry-wise by
where 2 ≤ i ≤ m+1. Near the outer boundary, specifically, we have
So the corner transfer matrix A factorises as
where
Graphically, this corresponds to:
We also require the 2m×2m matrices A* and A**, defined entry-wise by
where the A matrices whose entries appear on the RHS are of size 2m-1×2m-1 and 2m-2×2m-2 respectively. This is more clearly written as
Now from the definitions of A, A*, A**, Ui and Fj, we have
which gives the recursion relation for A2m in terms of A2m-1 and A2m-2.
Diagonal form
When using corner transfer matrices to perform calculations, it is both analytically and numerically convenient to work with their diagonal forms instead. To facilitate this, the recursion relation may be rewritten directly in terms of the diagonal forms and eigenvector matrices of A, A* and A**.
Recalling that the lattice in our example is reflection-invariant, in the sense that
we see that A is a symmetric matrix (i.e. it is diagonalisable by an orthogonal matrix). So we write
where Ad is a diagonal matrix (normalised such that its numerically largest entry is 1), αm is the largest eigenvalue of A, and PTP = I. Likewise for A* and A**, we have
where Ad*, Ad**, P* and P** are defined in an analogous fashion to A* and A**, i.e. in terms of the smaller (normalised) diagonal forms and (orthogonal) eigenvector matrices of A2m-1 and A2m-2.
By substituting these diagonalisations into the recursion relation, we obtain
where
Now At is also symmetric, and may be calculated if Ad*, Ad** and R* are known; diagonalising At then yields its normalised diagonal form Ad, its largest eigenvalue κ, and its orthogonal eigenvector matrix R.
Applications
Spin expectation value
Corner transfer matrices (or their diagonal forms) may be used to calculate quantities such as the spin expectation value at a particular site deep inside the lattice. For the full lattice given earlier, the spin expectation value at the central site is given by
With the configurations ordered such that A is block diagonal as before, we may define a 2m×2m diagonal matrix
such that
Partition function per site
Another important quantity for lattice models is the partition function per site, evaluated in the thermodynamic limit and written as
In our example, this reduces to
since tr Ad4 is a convergent sum as m → ∞ and Ad becomes infinite-dimensional. Furthermore, the number of faces 2m(m+1) approaches the number of sites N in the thermodynamic limit, so we have
which is consistent with the earlier equation giving κ as the largest eigenvalue for At. In other words, the partition function per site is given exactly by the diagonalised recursion relation for corner transfer matrices in the thermodynamic limit; this allows κ to be approximated via the iterative process of calculating Ad for a large lattice.
The matrices involved grow exponentially in size, however, and in actual numerical calculations they must be truncated at each step. One way of doing this is to keep the n largest eigenvalues at each step, for some fixed n. In most cases, the sequence of approximations obtained by taking n = 1,2,3,... converges rapidly, and to the exact value (for an exactly solvable model).
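The truncation step just described can be sketched as follows. A random symmetric positive-definite matrix stands in for a real corner transfer matrix (in an actual calculation it would come from the recursion relation for A2m); the routine diagonalises it, keeps the n numerically largest eigenvalues, and normalises so that the largest kept eigenvalue is 1.

```python
# Sketch of the truncation step described above.  A random symmetric
# positive-definite matrix stands in for a real corner transfer matrix.
import numpy as np

def truncate_ctm(A, n):
    """Diagonalise A, keep the n numerically largest eigenvalues, normalise."""
    eigvals, eigvecs = np.linalg.eigh(A)            # A assumed symmetric
    order = np.argsort(np.abs(eigvals))[::-1][:n]   # n largest in magnitude
    kept_vals, kept_vecs = eigvals[order], eigvecs[:, order]
    kappa = kept_vals[0]                            # normalisation constant
    Ad = np.diag(kept_vals / kappa)                 # truncated, normalised diagonal form
    return Ad, kept_vecs, kappa

rng = np.random.default_rng(1)
M = rng.normal(size=(64, 64))
A = M @ M.T                                         # symmetric positive-definite stand-in

for n in (1, 2, 4, 8, 16):
    Ad, P, kappa = truncate_ctm(A, n)
    # tr(Ad^4) is the normalised quantity entering the partition function Z = tr A^4
    print(n, kappa, np.trace(np.linalg.matrix_power(Ad, 4)))
```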
See also
Transfer-matrix method
References
Exactly solvable models
Lattice models
Matrices
Statistical mechanics | Corner transfer matrix | [
"Physics",
"Materials_science",
"Mathematics"
] | 1,471 | [
"Mathematical objects",
"Lattice models",
"Computational physics",
"Matrices (mathematics)",
"Condensed matter physics",
"Statistical mechanics"
] |
20,114,039 | https://en.wikipedia.org/wiki/Langmuir%20adsorption%20model | The Langmuir adsorption model explains adsorption by assuming an adsorbate behaves as an ideal gas at isothermal conditions. According to the model, adsorption and desorption are reversible processes. This model even explains the effect of pressure; i.e., at these conditions the adsorbate's partial pressure is related to its volume adsorbed onto a solid adsorbent. The adsorbent, as indicated in the figure, is assumed to be an ideal solid surface composed of a series of distinct sites capable of binding the adsorbate. The adsorbate binding is treated as a chemical reaction between the adsorbate gaseous molecule and an empty sorption site . This reaction yields an adsorbed species with an associated equilibrium constant :
A(g) + S ⇌ A(ad).
From these basic hypotheses the mathematical formulation of the Langmuir adsorption isotherm can be derived in various independent and complementary ways: by the kinetics, the thermodynamics, and the statistical mechanics approaches respectively (see below for the different demonstrations).
The Langmuir adsorption equation is
θ = K·p / (1 + K·p),
where θ is the fractional occupancy of the adsorption sites, i.e., the ratio of the volume of gas adsorbed onto the solid to the volume of a gas-molecule monolayer covering the whole surface of the solid and completely occupied by the adsorbate, K is the equilibrium constant of adsorption, and p is the partial pressure of the adsorbate. A continuous monolayer of adsorbate molecules covering a homogeneous flat solid surface is the conceptual basis for this adsorption model.
Background and experiments
In 1916, Irving Langmuir presented his model for the adsorption of species onto simple surfaces. Langmuir was awarded the Nobel Prize in 1932 for his work concerning surface chemistry. He hypothesized that a given surface has a certain number of equivalent sites to which a species can "stick", either by physisorption or chemisorption. His theory began when he postulated that gaseous molecules do not rebound elastically from a surface, but are held by it in a similar way to groups of molecules in solid bodies.
Langmuir published two papers that confirmed the assumption that adsorbed films do not exceed one molecule in thickness. The first experiment involved observing electron emission from heated filaments in gases. The second, a more direct evidence, examined and measured the films of liquid onto an adsorbent surface layer. He also noted that generally the attractive strength between the surface and the first layer of adsorbed substance is much greater than the strength between the first and second layer. However, there are instances where the subsequent layers may condense given the right combination of temperature and pressure.
Basic assumptions of the model
Inherent within this model, the following assumptions are valid specifically for the simplest case: the adsorption of a single adsorbate onto a series of equivalent sites onto the surface of the solid.
The surface containing the adsorbing sites is a perfectly flat plane with no corrugations (assume the surface is homogeneous). However, chemically heterogeneous surfaces can be considered to be homogeneous if the adsorbate is bound to only one type of functional groups on the surface.
The adsorbing gas adsorbs into an immobile state.
All sites are energetically equivalent, and the energy of adsorption is equal for all sites.
Each site can hold at most one molecule (mono-layer coverage only).
No (or ideal) interactions between adsorbate molecules on adjacent sites. When the interactions are ideal, the energy of side-to-side interactions is equal for all sites regardless of the surface occupancy.
Derivations of the Langmuir adsorption isotherm
The mathematical expression of the Langmuir adsorption isotherm involving only one sorbing species can be demonstrated in different ways: the kinetics approach, the thermodynamics approach, and the statistical mechanics approach respectively. In case of two competing adsorbed species, the competitive adsorption model is required, while when a sorbed species dissociates into two distinct entities, the dissociative adsorption model need to be used.
Kinetic derivation
This section provides a kinetic derivation for a single-adsorbate case. The kinetic derivation applies to gas-phase adsorption. However, it has been mistakenly applied to solutions. The multiple-adsorbate case is covered in the competitive adsorption sub-section.
The model assumes adsorption and desorption as being elementary processes, where the rate of adsorption rad and the rate of desorption rd are given by
where pA is the partial pressure of A over the surface, [S] is the concentration of free sites in number/m2, [Aad] is the surface concentration of A in molecules/m2 (concentration of occupied sites), and kad and kd are constants of forward adsorption reaction and backward desorption reaction in the above reactions.
At equilibrium, the rate of adsorption equals the rate of desorption. Setting rad = rd and rearranging, we obtain
The concentration of sites is given by dividing the total number of sites (S0) covering the whole surface by the area of the adsorbent (a):
We can then calculate the concentration of all sites by summing the concentration of free sites [S] and occupied sites:
Combining this with the equilibrium equation, we get
We define now the fraction of the surface sites covered with A as
This, applied to the previous equation that combined site balance and equilibrium, yields the Langmuir adsorption isotherm:
Thermodynamic derivation
In condensed phases (solutions), adsorption to a solid surface is a competitive process between the solvent (A) and the solute (B) to occupy the binding site. The thermodynamic equilibrium is described as
Solvent (bound) + Solute (free) ↔ Solvent (free) + Solute (bound).
If we designate the solvent by the subscript "1" and the solute by "2", and the bound state by the superscript "s" (surface/bound) and the free state by the "b" (bulk solution / free), then the equilibrium constant can be written as a ratio between the activities of products over reactants:
For dilute solutions, the activity of the solvent in bulk solution and the activity coefficients are also assumed to be ideal on the surface. Thus, , and , where are mole fractions.
Re-writing the equilibrium constant and solving for yields
Note that the concentration of the solute adsorbate can be used instead of the activity coefficient. However, the equilibrium constant will no longer be dimensionless and will have units of reciprocal concentration instead. The difference between the kinetic and thermodynamic derivations of the Langmuir model is that the thermodynamic uses activities as a starting point while the kinetic derivation uses rates of reaction. The thermodynamic derivation allows for the activity coefficients of adsorbates in their bound and free states to be included. The thermodynamic derivation is usually referred to as the "Langmuir-like equation".
Statistical mechanical derivation
This derivation based on statistical mechanics was originally provided by Volmer and Mahnert in 1925. The partition function of the finite number of adsorbate molecules adsorbed on a surface, in a canonical ensemble, is given by
where is the partition function of a single adsorbed molecule, is the number of adsorption sites (both occupied and unoccupied), and is the number of adsorbed molecules which should be less than or equal to . The terms in the bracket give the total partition function of the adsorbed molecules by taking a product of the individual partition functions (refer to Partition function of subsystems). The factor accounts for the overcounting arising due to the indistinguishable nature of the adsorbates. The grand canonical partition function is given by
is the chemical potential of an adsorbed molecule. As it has the form of binomial series, the summation is reduced to
where
The grand canonical potential is
based on which the average number of occupied sites is calculated
which gives the coverage
Now, invoking the condition that the system is in equilibrium, that is, the chemical potential of the adsorbed molecules is equal to that of the molecules in gas phase, we have
The chemical potential of an ideal gas is
where is the Helmholtz free energy of an ideal gas with its partition function
is the partition function of a single particle in the volume of (only consider the translational freedom here).
We thus have , where we use Stirling's approximation.
Plugging to the expression of , we have
which gives the coverage
By defining
and using the identity , finally, we have
It is plotted in the figure alongside, demonstrating that the surface coverage increases quite rapidly with the partial pressure of the adsorbate, but levels off after P reaches P0.
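The behaviour described for the figure — a steep initial rise followed by saturation — is easy to reproduce numerically from the coverage expression θ = K·p / (1 + K·p); the equilibrium constant below is an arbitrary demonstration value.

```python
# Numerical illustration of the Langmuir coverage theta = K p / (1 + K p).
# The equilibrium constant is an arbitrary demonstration value.
K = 2.0   # equilibrium constant in reciprocal pressure units

def langmuir_coverage(p, K=K):
    return K * p / (1.0 + K * p)

for p in (0.01, 0.1, 0.5, 1.0, 5.0, 50.0):
    print(f"p = {p:6.2f}   theta = {langmuir_coverage(p):.3f}")
# Coverage climbs steeply at low pressure and levels off towards 1 (monolayer).
```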
Competitive adsorption
The previous derivations assumed that there is only one species, A, adsorbing onto the surface. This section considers the case when there are two distinct adsorbates present in the system. Consider two species A and B that compete for the same adsorption sites. The following hypotheses are made here:
All the sites are equivalent.
Each site can hold at most one molecule of A, or one molecule of B, but not both simultaneously.
There are no interactions between adsorbate molecules on adjacent sites.
As derived using kinetic considerations, the equilibrium constants for both A and B are given by
and
The site balance states that the concentration of total sites [S0] is equal to the sum of free sites, sites occupied by A and sites occupied by B:
Inserting the equilibrium equations and rearranging in the same way we did for the single-species adsorption, we get similar expressions for both θA and θB:
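The resulting expressions are the standard competitive-Langmuir coverages, θA = KA·pA / (1 + KA·pA + KB·pB) and the analogous formula for B. A small numerical sketch (with made-up parameter values) shows how the two species share the surface:

```python
# Standard competitive-Langmuir expressions evaluated for illustrative values:
#   theta_A = K_A p_A / (1 + K_A p_A + K_B p_B), and symmetrically for B.
def competitive_coverage(pA, pB, KA, KB):
    denom = 1.0 + KA * pA + KB * pB
    return KA * pA / denom, KB * pB / denom

for pB in (0.0, 0.5, 2.0):
    thetaA, thetaB = competitive_coverage(pA=0.5, pB=pB, KA=4.0, KB=1.0)
    free = 1.0 - thetaA - thetaB
    print(f"pB = {pB:3.1f}   thetaA = {thetaA:.3f}   thetaB = {thetaB:.3f}   free = {free:.3f}")
# Raising the partial pressure of B displaces A from the surface.
```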
Dissociative adsorption
The other case of special importance is when a molecule D2 dissociates into two atoms upon adsorption. Here, the following assumptions would be held to be valid:
D2 completely dissociates to two molecules of D upon adsorption.
The D atoms adsorb onto distinct sites on the surface of the solid and then move around and equilibrate.
All sites are equivalent.
Each site can hold at most one atom of D.
There are no interactions between adsorbate molecules on adjacent sites.
Using similar kinetic considerations, we get
The 1/2 exponent on pD2 arises because one gas phase molecule produces two adsorbed species. Applying the site balance as done above,
Entropic considerations
The formation of Langmuir monolayers by adsorption onto a surface dramatically reduces the entropy of the molecular system.
To find the entropy decrease, we find the entropy of the molecule when in the adsorbed condition.
Using Stirling's approximation, we have
On the other hand, the entropy of a molecule of an ideal gas is
where is the thermal de Broglie wavelength of the gas molecule.
Limitations of the model
The Langmuir adsorption model deviates significantly in many cases, primarily because it fails to account for the surface roughness of the adsorbent. Rough inhomogeneous surfaces have multiple site types available for adsorption, with some parameters varying from site to site, such as the heat of adsorption. Moreover, specific surface area is a scale-dependent quantity, and no single true value exists for this parameter. Thus, the use of alternative probe molecules can often result in different obtained numerical values for surface area, rendering comparison problematic.
The model also ignores adsorbate–adsorbate interactions. Experimentally, there is clear evidence for adsorbate–adsorbate interactions in heat of adsorption data. There are two kinds of adsorbate–adsorbate interactions: direct interaction and indirect interaction. Direct interactions are between adjacent adsorbed molecules, which could make adsorbing near another adsorbate molecule more or less favorable and greatly affects high-coverage behavior. In indirect interactions, the adsorbate changes the surface around the adsorbed site, which in turn affects the adsorption of other adsorbate molecules nearby.
Modifications
The modifications try to account for the points mentioned in above section like surface roughness, inhomogeneity, and adsorbate–adsorbate interactions.
Two-mechanism Langmuir-like equation (TMLLE)
Also known as the two-site Langmuir equation. This equation describes the adsorption of one adsorbate to two or more distinct types of adsorption sites. Each binding site can be described with its own Langmuir expression, as long as the adsorption at each binding site type is independent from the rest.
where
– total amount adsorbed at a given adsorbate concentration,
– maximum capacity of site type 1,
– maximum capacity of site type 2,
– equilibrium (affinity) constant of site type 1,
– equilibrium (affinity) constant of site type 2,
– adsorbate activity in solution at equilibrium
This equation works well for adsorption of some drug molecules to activated carbon in which some adsorbate molecules interact with hydrogen bonding while others interact with a different part of the surface by hydrophobic interactions (hydrophobic effect). The equation was modified to account for the hydrophobic effect (also known as entropy-driven adsorption):
The hydrophobic effect is independent of concentration. Therefore, the capacity of the adsorbent for hydrophobic interactions can be obtained from fitting to experimental data. The entropy-driven adsorption originates from the restriction of the translational motion of bulk water molecules by the adsorbate, which is alleviated upon adsorption.
Freundlich adsorption isotherm
The Freundlich isotherm is the most important multi-site adsorption isotherm for rough surfaces.
where αF and CF are fitting parameters. This equation implies that if one makes a log–log plot of adsorption data, the data will fit a straight line. The Freundlich isotherm has two parameters, while the Langmuir equation has only one; as a result, it often fits the data on rough surfaces better than the Langmuir isotherm does. However, the Freundlich equation is not unique; consequently, a good fit of the data points does not offer sufficient proof that the surface is heterogeneous. The heterogeneity of the surface can be confirmed with calorimetry. Homogeneous surfaces (or heterogeneous surfaces that exhibit homogeneous, single-site adsorption) have a constant of adsorption as a function of the occupied-sites fraction. On the other hand, heterogeneous (multi-site) adsorbents have a variable of adsorption depending on the site occupation. When the adsorbate pressure (or concentration) is low, the fractional occupation is small, and as a result only low-energy sites are occupied, since these are the most stable. As the pressure increases, the higher-energy sites become occupied, resulting in a smaller of adsorption, given that adsorption is an exothermic process.
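In practice, both forms are usually confronted with data by least-squares fitting. The sketch below fits the Langmuir and Freundlich expressions to synthetic data with SciPy's curve_fit; since the "measurements" are generated from a Langmuir curve plus noise, it only demonstrates the fitting procedure, not a real surface comparison.

```python
# Fitting sketch: compare Langmuir and Freundlich forms on synthetic adsorption data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, qmax, K):
    return qmax * K * p / (1.0 + K * p)

def freundlich(p, CF, alphaF):
    return CF * p ** alphaF

rng = np.random.default_rng(42)
p = np.linspace(0.05, 10.0, 30)
q_obs = langmuir(p, qmax=3.0, K=1.5) * (1.0 + 0.03 * rng.normal(size=p.size))

popt_L, _ = curve_fit(langmuir, p, q_obs, p0=[1.0, 1.0])
popt_F, _ = curve_fit(freundlich, p, q_obs, p0=[1.0, 0.5])
print("Langmuir   qmax, K    =", popt_L)
print("Freundlich CF, alphaF =", popt_F)
print("RSS Langmuir  :", np.sum((q_obs - langmuir(p, *popt_L)) ** 2))
print("RSS Freundlich:", np.sum((q_obs - freundlich(p, *popt_F)) ** 2))
```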
A related equation is the Toth equation. Rearranging the Langmuir equation, one can obtain
J. Toth modified this equation by adding two parameters αT0 and CT0 to formulate the Toth equation:
Temkin adsorption isotherm
This isotherm takes into account indirect adsorbate–adsorbate interactions on adsorption isotherms. Temkin noted experimentally that heats of adsorption would more often decrease than increase with increasing coverage.
The heat of adsorption ΔHad is defined as
He derived a model assuming that as the surface is loaded up with adsorbate, the heat of adsorption of all the molecules in the layer would decrease linearly with coverage due to adsorbate–adsorbate interactions:
where αT is a fitting parameter. Assuming the Langmuir adsorption isotherm still applies to the adsorbed layer, is expected to vary with coverage as follows:
Langmuir's isotherm can be rearranged to
Substituting the expression of the equilibrium constant and taking the natural logarithm:
BET equation
Brunauer, Emmett and Teller (BET) derived the first isotherm for multilayer adsorption. It assumes a random distribution of sites that are empty or that are covered by one monolayer, two layers and so on, as illustrated alongside. The main equation of this model is
where
and [A] is the total concentration of molecules on the surface, given by
where
in which [A]0 is the number of bare sites, and [A]i is the number of surface sites covered by i molecules.
Adsorption of a binary liquid on a solid
This section describes the surface coverage when the adsorbate is in liquid phase and is a binary mixture.
For ideal phases in both bulk and surface (no lateral interactions, homogeneous surface), the composition of the surface phase for a binary liquid system in contact with a solid surface is given by the classic Everett isotherm equation (a simple analogue of the Langmuir equation), in which the components are interchangeable (i.e., "1" may be exchanged for "2") without changing the form of the equation:
where the normal definition of multi-component system is valid as follows:
By simple rearrangement, we get
This equation describes competition of components "1" and "2".
See also
Hill equation (biochemistry)
Michaelis–Menten kinetics (equation with the same mathematical form)
Monod equation (equation with the same mathematical form)
Reactions on surfaces
References
Irving Langmuir, "The Constitution and Fundamental Properties of Solids and Liquids. Part I. Solids", J. Am. Chem. Soc. 38 (1916), 2221–2295.
External links
Langmuir isotherm from Queen Mary, University of London
LMMpro, Langmuir equation-fitting software
Surface science
Materials science | Langmuir adsorption model | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,758 | [
"Applied and interdisciplinary physics",
"Materials science",
"Surface science",
"Condensed matter physics",
"nan"
] |
20,115,268 | https://en.wikipedia.org/wiki/Wildfire%20modeling | Wildfire modeling is concerned with numerical simulation of wildfires to comprehend and predict fire behavior. Wildfire modeling aims to aid wildfire suppression, increase the safety of firefighters and the public, and minimize damage. Wildfire modeling can also aid in protecting ecosystems, watersheds, and air quality.
Using computational science, wildfire modeling involves the statistical analysis of past fire events to predict spotting risks and front behavior. Various wildfire propagation models have been proposed in the past, including simple ellipses and egg- and fan-shaped models. Early attempts to determine wildfire behavior assumed terrain and vegetation uniformity. However, the exact behavior of a wildfire's front is dependent on a variety of factors, including wind speed and slope steepness. Modern growth models utilize a combination of past ellipsoidal descriptions and Huygens' Principle to simulate fire growth as a continuously expanding polygon. Extreme value theory may also be used to predict the size of large wildfires. However, large fires that exceed suppression capabilities are often regarded as statistical outliers in standard analyses, even though fire policies are more influenced by large wildfires than by small fires.
Objectives
Wildfire modeling attempts to reproduce fire behavior, such as how quickly the fire spreads, in which direction, how much heat it generates. A key input to behavior modeling is the Fuel Model, or type of fuel, through which the fire is burning. Behavior modeling can also include whether the fire transitions from the surface (a "surface fire") to the tree crowns (a "crown fire"), as well as extreme fire behavior including rapid rates of spread, fire whirls, and tall well-developed convection columns. Fire modeling also attempts to estimate fire effects, such as the ecological and hydrological effects of the fire, fuel consumption, tree mortality, and amount and rate of smoke produced.
Environmental factors
Wildland fire behavior is affected by weather, fuel characteristics, and topography.
Weather influences fire through wind and moisture. Wind increases the fire spread in the wind direction, higher temperature makes the fire burn faster, while higher relative humidity, and precipitation (rain or snow) may slow it down or extinguish it altogether. Weather involving fast wind changes can be particularly dangerous, since they can suddenly change the fire direction and behavior. Such weather includes cold fronts, foehn winds, thunderstorm downdrafts, sea and land breeze, and diurnal slope winds.
Wildfire fuel includes grass, wood, and anything else that can burn. Small dry twigs burn faster while large logs burn slower; dry fuel ignites more easily and burns faster than wet fuel.
Topography factors that influence wildfires include the orientation toward the sun, which influences the amount of energy received from the sun, and the slope (fire spreads faster uphill). Fire can accelerate in narrow canyons and it can be slowed down or stopped by barriers such as creeks and roads.
These factors act in combination. Rain or snow increases the fuel moisture, high relative humidity slows the drying of the fuel, while winds can make fuel dry faster. Wind can change the fire-accelerating effect of slopes to effects such as downslope windstorms (called Santa Anas, foehn winds, East winds, depending on the geographic location). Fuel properties may vary with topography as plant density varies with elevation or aspect with respect to the sun.
It has long been recognized that "fires create their own weather." That is, the heat and moisture created by the fire feed back into the atmosphere, creating intense winds that drive the fire behavior. The heat produced by the wildfire changes the temperature of the atmosphere and creates strong updrafts, which can change the direction of surface winds. The water vapor released by the fire changes the moisture balance of the atmosphere. The water vapor can be carried away, where the latent heat stored in the vapor is released through condensation.
Approaches
Like all models in computational science, fire models need to strike a balance between fidelity, availability of data, and fast execution. Wildland fire models span a vast range of complexity, from simple cause and effect principles to the most physically complex presenting a difficult supercomputing challenge that cannot hope to be solved faster than real time.
Forest-fire models have been developed from 1940 to the present, but many chemical and thermodynamic questions related to fire behaviour remain unresolved. Scientists and their forest fire models from 1940 to 2003 are listed in a review article. Models can be divided into three groups: empirical, semi-empirical, and physically based.
Empirical models
Conceptual models based on experience and intuition from past fires can be used to anticipate the future. Many semi-empirical fire spread equations, such as those published by the USDA Forest Service, Forestry Canada, Noble, Bary, and Gill, and Cheney, Gould, and Catchpole for Australasian fuel complexes, have been developed for quick estimation of fundamental parameters of interest such as fire spread rate, flame length, and fireline intensity of surface fires at a point for specific fuel complexes, assuming a representative point-location wind and terrain slope. Based on the work of Fons in 1946 and Emmons in 1963, the quasi-steady equilibrium spread rate calculated for a surface fire on flat ground in no-wind conditions was calibrated using data from piles of sticks burned in a flame chamber/wind tunnel, and then adjusted to represent other wind and slope conditions for the fuel complexes tested.
Two-dimensional fire growth models such as FARSITE and Prometheus, the Canadian wildland fire growth model designed to work in Canadian fuel complexes, have been developed that apply such semi-empirical relationships and others regarding ground-to-crown transitions to calculate fire spread and other parameters along the surface. Certain assumptions must be made in models such as FARSITE and Prometheus to shape the fire growth. For example, Prometheus and FARSITE use the Huygens principle of wave propagation. A set of equations that can be used to propagate (shape and direction) a fire front using an elliptical shape was developed by Richards in 1990. Although more sophisticated applications use a three-dimensional numerical weather prediction system to provide inputs such as wind velocity to one of the fire growth models listed above, the input was passive and the feedback of the fire upon the atmospheric wind and humidity are not accounted for.
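As a rough illustration of the Huygens-style propagation described above (and not a reimplementation of FARSITE, Prometheus, or the Richards 1990 equations), the toy sketch below advances marker points on a fire perimeter along their outward normals at a rate drawn from an elliptical rate-of-spread template oriented by the wind. The head-fire spread rate, eccentricity, wind direction, and time step are all illustrative assumptions.

```python
# Toy sketch of Huygens-style fire-front expansion (not FARSITE or Prometheus):
# perimeter marker points advance along their outward normals at a rate taken from an
# elliptical rate-of-spread template oriented by the wind. All values are illustrative.
import numpy as np

def outward_normals(pts):
    """Unit outward normals of a closed polygon given as an (N, 2) array."""
    centroid = pts.mean(axis=0)
    tangents = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
    normals = np.column_stack([tangents[:, 1], -tangents[:, 0]])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    flip = np.einsum("ij,ij->i", normals, pts - centroid) < 0
    normals[flip] *= -1
    return normals

def spread_rate(normals, wind_dir, r_head=2.0, ecc=0.7):
    """Elliptical rate of spread: fastest downwind, slowest upwind (focus-based form)."""
    cos_phi = normals @ wind_dir
    return r_head * (1.0 - ecc) / (1.0 - ecc * cos_phi)

# Initial perimeter: a small circle around the ignition point
angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
perimeter = np.column_stack([np.cos(angles), np.sin(angles)])

wind_dir = np.array([1.0, 0.0])        # wind blowing toward +x
dt = 0.5
for _ in range(10):                    # ten time steps of outward expansion
    n = outward_normals(perimeter)
    perimeter = perimeter + dt * spread_rate(n, wind_dir)[:, None] * n

print("downwind extent:", perimeter[:, 0].max(), " upwind extent:", perimeter[:, 0].min())
```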
Physically based models and coupling with the atmosphere
Simplified physically based two-dimensional fire spread models, based upon conservation laws and using radiation as the dominant heat transfer mechanism and convection to represent the effect of wind and slope, lead to reaction–diffusion systems of partial differential equations.
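A minimal toy sketch of such a reaction–diffusion formulation is given below; it is not any specific published model. A temperature field diffuses, is advected by a constant wind (standing in for the convective effect of wind), cools toward ambient, and is heated where fuel burns; fuel is consumed above a simple ignition threshold. All parameter values are arbitrary.

```python
# Minimal toy reaction-diffusion sketch of surface fire spread (illustrative only; this
# is not any specific published model). A temperature field T diffuses, is advected by a
# constant wind, cools toward ambient, and is heated where fuel burns; fuel is consumed
# where T exceeds an ignition threshold. All parameter values are arbitrary.
import numpy as np

nx = ny = 100
dx, dt = 1.0, 0.02
D, wind_x, cool, heat, burn = 4.0, 6.0, 0.4, 60.0, 0.5

T = np.zeros((ny, nx))          # excess temperature above ambient
fuel = np.ones((ny, nx))        # fuel fraction remaining
T[45:55, 5:10] = 10.0           # ignition patch on the left edge

def laplacian(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

for step in range(400):
    burning = (T > 2.0) & (fuel > 0.0)                 # simple ignition threshold
    reaction = heat * fuel * burning
    # upwind advection in +x, plus diffusion, cooling toward ambient and heat release
    dTdx = (T - np.roll(T, 1, 1)) / dx
    T = T + dt * (D * laplacian(T) - wind_x * dTdx - cool * T + reaction)
    fuel = np.clip(fuel - dt * burn * burning, 0.0, 1.0)

print("burned cells:", int((fuel < 0.99).sum()), "of", nx * ny)
```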
More complex physical models join computational fluid dynamics models with a wildland fire component and allow the fire to feed back upon the atmosphere. These models include NCAR's Coupled Atmosphere-Wildland Fire-Environment (CAWFE) model developed in 2005, WRF-Fire at NCAR and the University of Colorado Denver, which combines the Weather Research and Forecasting Model with a spread model by the level-set method, the University of Utah's Coupled Atmosphere-Wildland Fire Large Eddy Simulation developed in 2009, Los Alamos National Laboratory's FIRETEC, the WUI (wildland–urban interface) Fire Dynamics Simulator (WFDS) developed in 2007, and, to some degree, the two-dimensional model FIRESTAR. These tools have different emphases and have been applied to better understand fundamental aspects of fire behavior, such as the effect of fuel inhomogeneities on fire behavior and feedbacks between the fire and the atmospheric environment as the basis for the universal fire shape, and are beginning to be applied to wildland–urban interface house-to-house fire spread at the community scale.
The cost of added physical complexity is a corresponding increase in computational cost, so much so that a full three-dimensional explicit treatment of combustion in wildland fuels by direct numerical simulation (DNS) at scales relevant for atmospheric modeling does not exist, is beyond current supercomputers, and does not currently make sense to do because of the limited skill of weather models at spatial resolution under 1 km. Consequently, even these more complex models parameterize the fire in some way, for example, papers by Clark use equations developed by Rothermel for the USDA forest service to calculate local fire spread rates using fire-modified local winds. And, although FIRETEC and WFDS carry prognostic conservation equations for the reacting fuel and oxygen concentrations, the computational grid cannot be fine enough to resolve the reaction rate-limiting mixing of fuel and oxygen, so approximations must be made concerning the subgrid-scale temperature distribution or the combustion reaction rates themselves. These models also are too small-scale to interact with a weather model, so the fluid motions use a computational fluid dynamics model confined in a box much smaller than the typical wildfire.
Attempts to create the most complete theoretical model were made by F. A. Albini in the USA and A. M. Grishin in Russia. Grishin's work is based on the fundamental laws of physics and conservation, and theoretical justifications are provided. A simplified two-dimensional model of a running crown forest fire was developed at Belarusian State University by D. V. Barovik and V. B. Taranchuk.
Data assimilation
Data assimilation periodically adjusts the model state to incorporate new data using statistical methods. Because fire is highly nonlinear and irreversible, data assimilation for fire models poses special challenges, and standard methods, such as the ensemble Kalman filter (EnKF) do not work well. Statistical variability of corrections and especially large corrections may result in nonphysical states, which tend to be preceded or accompanied by large spatial gradients. In order to ease this problem, the regularized EnKF penalizes large changes of spatial gradients in the Bayesian update in EnKF. The regularization technique has a stabilizing effect on the simulations in the ensemble but it does not improve much the ability of the EnKF to track the data: The posterior ensemble is made out of linear combinations of the prior ensemble, and if a reasonably close location and shape of the fire cannot be found between the linear combinations, the data assimilation is simply out of luck, and the ensemble cannot approach the data. From that point on, the ensemble evolves essentially without regard to the data. This is called filter divergence. So, there is clearly a need to adjust the simulation state by a position change rather than an additive correction only. The morphing EnKF combines the ideas of data assimilation with image registration and morphing to provide both additive and position correction in a natural manner, and can be used to change a model state reliably in response to data.
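For context, the sketch below shows one analysis step of the standard (perturbed-observation) EnKF, the baseline method the paragraph contrasts with the regularized and morphing variants; it does not implement either of those. The state vector, observation operator, and noise levels are purely illustrative.

```python
# Hedged sketch of one standard (perturbed-observation) EnKF analysis step -- the
# baseline method contrasted above with the regularized and morphing variants.
# The state vector, observation operator H and noise levels are purely illustrative.
import numpy as np

rng = np.random.default_rng(1)

n_state, n_obs, n_ens = 50, 5, 20
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, n_state // n_obs)] = 1.0   # observe 5 points
R = 0.1 * np.eye(n_obs)                                              # observation error covariance

truth = np.sin(np.linspace(0, 2 * np.pi, n_state))                   # "true" state
obs = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)

# Prior (forecast) ensemble: truth plus noise
ensemble = truth[None, :] + rng.normal(0.0, 0.5, (n_ens, n_state))

# EnKF analysis: Kalman gain built from the ensemble covariance
X = ensemble - ensemble.mean(axis=0)
P = X.T @ X / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
perturbed_obs = obs + rng.multivariate_normal(np.zeros(n_obs), R, n_ens)
analysis = ensemble + (perturbed_obs - ensemble @ H.T) @ K.T

print("prior RMSE:   ", np.sqrt(((ensemble.mean(0) - truth) ** 2).mean()))
print("analysis RMSE:", np.sqrt(((analysis.mean(0) - truth) ** 2).mean()))
```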
Limitations and practical use
The limitations on fire modeling are not entirely computational. At this level, the models encounter limits in knowledge about the composition of pyrolysis products and reaction pathways, in addition to gaps in basic understanding about some aspects of fire behavior such as fire spread in live fuels and surface-to-crown fire transition.
Thus, while more complex models have value in studying fire behavior and testing fire spread in a range of scenarios, from the application point of view, FARSITE and Palm-based applications of BEHAVE have shown great utility as practical in-the-field tools because of their ability to provide estimates of fire behavior in real time. While the coupled fire-atmosphere models have the ability to incorporate the ability of the fire to affect its own local weather, and model many aspects of the explosive, unsteady nature of fires that cannot be incorporated in current tools, it remains a challenge to apply these more complex models in a faster-than-real-time operational environment. Also, although they have reached a certain degree of realism when simulating specific natural fires, they must yet address issues such as identifying what specific, relevant operational information they could provide beyond current tools, how the simulation time could fit the operational time frame for decisions (therefore, the simulation must run substantially faster than real time), what temporal and spatial resolution must be used by the model, and how they estimate the inherent uncertainty in numerical weather prediction in their forecast. These operational constraints must be used to steer model development.
See also
Catastrophe modeling
Extreme value theory
Fuel model
References
External links
PROMETHEUS fire growth simulator
WRF-Fire
Wildfire Visualizations collected links
Wildfire simulations on Youtube
Wildfire visualizations at NCAR
Coupled Weather-Wildfire Modeling - Basic aspects of wildfire behavior
Coupled Weather-Wildfire Modeling - Wildfire Case Studies
Fire research links
Why are wildfires defying long-standing computer models? September 2012
Wildfire prevention
Wildfire suppression
Computational physics
Firefighting
Sustainable forest management
Mathematical modeling
Numerical climate and weather models
Wildfire ecology | Wildfire modeling | [
"Physics",
"Mathematics"
] | 2,588 | [
"Applied mathematics",
"Mathematical modeling",
"Computational physics"
] |
20,120,811 | https://en.wikipedia.org/wiki/Ibutamoren | Ibutamoren () (developmental code names MK-677, MK-0677, LUM-201, L-163,191; former tentative brand name Oratrope) is a potent, long-acting, orally-active, selective, and non-peptide agonist of the ghrelin receptor and a growth hormone secretagogue, mimicking the growth hormone (GH)-stimulating action of the endogenous hormone ghrelin. It has been shown to increase the secretion of several hormones including GH and insulin-like growth factor 1 (IGF-1) and produces sustained increases in the plasma levels of these hormones while also raising cortisol levels.
Effect on lean mass
Ibutamoren has been shown to sustain activation of the GH–IGF-1 axis, increasing growth hormone secretion by up to 97%, and to increase lean body mass with no change in total fat mass or visceral fat. It is under investigation as a potential treatment for reduced levels of these hormones, such as in children or elderly adults with growth hormone deficiency, and human studies have shown it to increase both muscle mass and bone mineral density, making it a promising potential therapy for the treatment of frailty in the elderly. As of June 2017, ibutamoren is in the preclinical stage of development for growth hormone deficiency.
Effect on sleep architecture
In a small study of 14 subjects, MK-677 dosed at 25 mg/day at bedtime was shown to increase rapid eye movement sleep by 20% and 50% in young and older subjects, respectively. Treatment with MK-677 also resulted in an approximately 50% increase in slow-wave sleep in young subjects.
Growth hormone deficiency
In a study of children with growth hormone deficiency, MK-677 performed better than other growth hormone secretagogues at improving growth hormone levels. An ongoing study compares MK-677 directly to injectable hGH in terms of height velocity in this population.
Non-research use
Since MK-677 is still an Investigational New Drug, it has not yet been approved to be marketed for consumption by humans in the United States. However, it has been used experimentally by some in the bodybuilding community. The use of MK-677 is banned in most sports.
See also
List of growth hormone secretagogues
Ghrelin
References
External links
Ibutamoren - AdisInsight
MK-677 and A Potential Dementia Link - Loungecity.org
Ghrelin Over-Stimulation, Molecular Psychiatry Study - Nature.org
4-Phenylpiperidines
Anti-aging substances
Experimental drugs
Ghrelin receptor agonists
Growth hormone secretagogues
Indolines
Spiro compounds
Orphan drugs | Ibutamoren | [
"Chemistry",
"Biology"
] | 559 | [
"Organic compounds",
"Anti-aging substances",
"Senescence",
"Spiro compounds"
] |
20,120,864 | https://en.wikipedia.org/wiki/Cranked%20eye%20bolt | A cranked eye bolt is an eye bolt typically used as a structural tie down in building construction where the eye of the bolt must be fastened to a point that cannot be directly below where the shaft would otherwise be fastened. This often occurs where a bearer must be tied down to a post or column but the bearer cannot be directly fastened to the post of column.
It has a shaft which is cranked, or bent twice: once off center, and a second time to bring the shaft back parallel to the original shaft.
Uses
The requirement for an offset tie down occurs when vermin proofing must be placed between the column or post and a wooden bearer, for example to stop termites travelling up through a concrete or wooden post or column directly into the bearer and the rest of the building. The "ant capping", typically a 0.5 mm to 0.8 mm thick galvanised steel sheet, must be placed between the post and the bearer, overlapping the perimeter of the post by approximately 20 mm to 40 mm or more. The cranked eye bolt is fastened to the post using a bolt through the eye, the crank in the shaft allowing the shaft to be positioned so that, as it passes up through the bearer, it does not impede the overlap of the "ant capping".
Should termite attack occur, the post or column can be replaced with no structural effect on the building.
Cranked eye bolts can also be used to tie the top plate of a house frame directly to house supports, using rod couplers and steel extension rods.
Engineering
Cranked eye bolts used to be made by bending an "eye" into the end of a rod that was threaded at the other end.
Today, cranked eye bolts are now typically made by welding a cranked and threaded rod to a heavy gauge steel washer.
Cranked eye bolts are made with different degrees of crank and lengths of shaft for flexibility.
See also
Eye bolt
Tie down
Notes and references
Structural system
Threaded fasteners | Cranked eye bolt | [
"Technology",
"Engineering"
] | 401 | [
"Structural system",
"Structural engineering",
"Building engineering"
] |
1,344,439 | https://en.wikipedia.org/wiki/Green%20building | Green building (also known as green construction, sustainable building, or eco-friendly building) refers to both a structure and the application of processes that are environmentally responsible and resource-efficient throughout a building's life-cycle: from planning to design, construction, operation, maintenance, renovation, and demolition. This requires close cooperation of the contractor, the architects, the engineers, and the client at all project stages. The Green Building practice expands and complements the classical building design concerns of economy, utility, durability, and comfort. Green building also refers to saving resources to the maximum extent, including energy saving, land saving, water saving, material saving, etc., during the whole life cycle of the building, protecting the environment and reducing pollution, providing people with healthy, comfortable and efficient use of space, and being in harmony with nature. Buildings that live in harmony; green building technology focuses on low consumption, high efficiency, economy, environmental protection, integration and optimization.’
Leadership in Energy and Environmental Design (LEED) is a set of rating systems for the design, construction, operation, and maintenance of green buildings which was developed by the U.S. Green Building Council. Other certificate systems that confirm the sustainability of buildings are the British BREEAM (Building Research Establishment Environmental Assessment Method) for buildings and large-scale developments or the DGNB System (Deutsche Gesellschaft für Nachhaltiges Bauen e.V.) which benchmarks the sustainability performance of buildings, indoor environments and districts. Currently, the World Green Building Council is conducting research on the effects of green buildings on the health and productivity of their users and is working with the World Bank to promote Green Buildings in Emerging Markets through EDGE (Excellence in Design for Greater Efficiencies) Market Transformation Program and certification. There are also other tools such as NABERS or Green Star in Australia, Global Sustainability Assessment System (GSAS) used in the Middle East and the Green Building Index (GBI) predominantly used in Malaysia.
Building information modeling (BIM) is a process involving the generation and management of digital representations of physical and functional characteristics of places. Building information models (BIMs) are files (often but not always in proprietary formats and containing proprietary data) which can be extracted, exchanged, or networked to support decision-making regarding a building or other built asset. Current BIM software is used by individuals, businesses, and government agencies who plan, design, construct, operate and maintain diverse physical infrastructures, such as water, refuse, electricity, gas, communication utilities, roads, railways, bridges, ports, and tunnels.
Although new technologies are constantly being developed to complement current practices in creating greener structures, the common objective of green buildings is to reduce the overall impact of the built environment on human health and the natural environment by:
Efficiently using energy, water, and other resources
Protecting occupant health and improving employee productivity (see healthy building)
Reducing waste, pollution, and environmental degradation
Natural building is a similar concept, usually on a smaller scale and focusing on the use of locally available natural materials. Other related topics include sustainable design and green architecture. Sustainability may be defined as meeting the needs of present generations without compromising the ability of future generations to meet their needs. Although some green building programs don't address the issue of retrofitting existing homes, others do, especially through public schemes for energy efficient refurbishment. Green construction principles can easily be applied to retrofit work as well as new construction.
A 2009 report by the U.S. General Services Administration found 12 sustainably-designed buildings that cost less to operate and have excellent energy performance. In addition, occupants were overall more satisfied with the building than those in typical commercial buildings. These are eco-friendly buildings.
Reducing environmental impact
Buildings represent a large part of energy, electricity, water and materials consumption. As of 2020, they account for 37% of global energy use and energy-related emissions, which the United Nations estimate contributed to 33% of overall worldwide emissions. Including the manufacturing of building materials, the global emissions were 39%. If new technologies in construction are not adopted during this time of rapid growth, emissions could double by 2050, according to the United Nations Environment Program.
Glass buildings, especially all-glass skyscrapers, contribute significantly to climate change due to their energy inefficiency. While these structures are visually appealing and allow abundant natural light, they also trap heat, necessitating increased use of air conditioning systems, which contribute to higher carbon emissions. Experts advocate for design modifications and potential restrictions on all-glass edifices to mitigate their detrimental environmental impact.
Buildings account for a large amount of land. According to the National Resources Inventory, approximately of land in the United States are developed. The International Energy Agency released a publication that estimated that existing buildings are responsible for more than 40% of the world's total primary energy consumption and for 24% of global carbon dioxide emissions.
According to the Global Status Report from 2016, buildings consume more than 30% of all produced energy. The report states that "Under a below 2°C trajectory, effective action to improve building energy efficiency could limit building final energy demand to just above current levels, meaning that the average energy intensity of the global building stock would decrease by more than 80% by 2050".
Green building practices aim to reduce the environmental impact of building, as the building sector has the greatest potential to deliver significant cuts in emissions at little or no cost. General guidelines can be summarized as follows: every building should be as small as possible, and should avoid contributing to sprawl, even if the most energy-efficient, environmentally sound methods are used in design and construction. Bioclimatic design principles are able to reduce energy expenditure and, by extension, carbon emissions. Bioclimatic design is a method of building design that takes the local climate into account to create comfortable conditions within the structure. This could be as simple as constructing a different shape for the building envelope or facing the building towards the south to maximize solar exposure for energy or lighting purposes. Given the limitations of city-planned construction, bioclimatic principles may be employed on a lesser scale; however, they remain an effective passive method to reduce environmental impact.
Goals of green building
The concept of sustainable development can be traced to the energy (especially fossil oil) crisis and environmental pollution concerns of the 1960s and 1970s. The Rachel Carson book, "Silent Spring", published in 1962, is considered to be one of the first efforts to describe sustainable development as related to green building. The green building movement in the U.S. originated from the need and desire for more energy efficient and environmentally friendly construction practices. There are a number of motives for building green, including environmental, economic, and social benefits. However, modern sustainability initiatives call for an integrated and synergistic design approach to both new construction and the retrofitting of existing structures. Also known as sustainable design, this approach integrates the building life-cycle with each green practice employed, with the design purpose of creating synergy among the practices used.
Green building brings together a vast array of practices, techniques, and skills to reduce and ultimately eliminate the impacts of buildings on the environment and human health. It often emphasizes taking advantage of renewable resources, e.g., using sunlight through passive solar, active solar, and photovoltaic equipment, and using plants and trees through green roofs, rain gardens, and reduction of rainwater run-off. Many other techniques are used, such as using low-impact building materials or using packed gravel or permeable concrete instead of conventional concrete or asphalt to enhance replenishment of groundwater.
While the practices or technologies employed in green building are constantly evolving and may differ from region to region, fundamental principles persist from which the method is derived: siting and structure design efficiency, energy efficiency, water efficiency, materials efficiency, indoor environmental quality enhancement, operations and maintenance optimization and waste and toxics reduction. The essence of green building is an optimization of one or more of these principles. Also, with the proper synergistic design, individual green building technologies may work together to produce a greater cumulative effect.
On the aesthetic side of green architecture or sustainable design is the philosophy of designing a building that is in harmony with the natural features and resources surrounding the site. There are several key steps in designing sustainable buildings: specify 'green' building materials from local sources, reduce loads, optimize systems, and generate on-site renewable energy.
Life cycle assessment
A life cycle assessment (LCA) can help avoid a narrow outlook on environmental, social and economic concerns by assessing a full range of impacts associated with all cradle-to-grave stages of a process: from extraction of raw materials through materials processing, manufacture, distribution, use, repair and maintenance, and disposal or recycling. Impacts taken into account include (among others) embodied energy, global warming potential, resource use, air pollution, water pollution, and waste.
In terms of green building, the last few years have seen a shift away from a prescriptive approach, which assumes that certain prescribed practices are better for the environment, toward the scientific evaluation of actual performance through LCA.
Although LCA is widely recognized as the best way to evaluate the environmental impacts of buildings (ISO 14040 provides a recognized LCA methodology), it is not yet a consistent requirement of green building rating systems and codes, despite the fact that embodied energy and other life cycle impacts are critical to the design of environmentally responsible buildings.
In North America, LCA is rewarded to some extent in the Green Globes rating system, and is part of the new American National Standard based on Green Globes, ANSI/GBI 01-2010: Green Building Protocol for Commercial Buildings. LCA is also included as a pilot credit in the LEED system, though a decision has not been made as to whether it will be incorporated fully into the next major revision. The state of California also included LCA as a voluntary measure in its 2010 draft Green Building Standards Code.
Although LCA is often perceived as overly complex and time-consuming for regular use by design professionals, research organizations such as BRE in the UK and the Athena Sustainable Materials Institute in North America are working to make it more accessible.
In the UK, the BRE Green Guide to Specifications offers ratings for 1,500 building materials based on LCA.
Siting and structure design efficiency
The foundation of any construction project is rooted in the concept and design stages. The concept stage, in fact, is one of the major steps in a project life cycle, as it has the largest impact on cost and performance. In designing environmentally optimal buildings, the objective is to minimize the total environmental impact associated with all life-cycle stages of the building project. However, building as a process is not as streamlined as an industrial process, and varies from one building to the other, never repeating itself identically. In addition, buildings are much more complex products, composed of a multitude of materials and components each constituting various design variables to be decided at the design stage. A variation of every design variable may affect the environment during all the building's relevant life-cycle stages.
Energy efficiency
Green buildings often include measures to reduce energy consumption – both the embodied energy required to extract, process, transport and install building materials and operating energy to provide services such as heating and power for equipment.
As high-performance buildings use less operating energy, embodied energy has assumed much greater importance – and may make up as much as 30% of the overall life cycle energy consumption. Studies such as the U.S. LCI Database Project show buildings built primarily with wood will have a lower embodied energy than those built primarily with brick, concrete, or steel.
To reduce operating energy use, designers use details that reduce air leakage through the building envelope (the barrier between conditioned and unconditioned space). They also specify high-performance windows and extra insulation in walls, ceilings, and floors. Another strategy, passive solar building design, is often implemented in low-energy homes. Designers orient windows and walls and place awnings, porches, and trees to shade windows and roofs during the summer while maximizing solar gain in the winter. In addition, effective window placement (daylighting) can provide more natural light and lessen the need for electric lighting during the day. Solar water heating further reduces energy costs.
Onsite generation of renewable energy through solar power, wind power, hydro power, or biomass can significantly reduce the environmental impact of the building. Power generation is generally the most expensive feature to add to a building.
Energy efficiency for green buildings can be evaluated from either numerical or non-numerical methods. These include use of simulation modelling, analytical or statistical tools.
In a report published in April 2024, the International Energy Agency (IEA) highlighted that buildings are responsible for about 30% of global final energy consumption and over 50% of electricity demand. It noted the tripling of heat pump sales from 2015 to 2022, electric cars accounting for 20% of 2023 vehicle sales, and a potential doubling of China's peak electricity demand by mid-century. India's air conditioner ownership could see a tenfold rise by 2050, causing a sixfold increase in peak electricity demand, which could be halved with efficient practices. By 2050, demand response measures might lower household electricity bills by 7% to 12% in advanced economies and nearly 20% in developing ones, with smart device installations nearly doubling by 2030. The US could see a 116 GW reduction in peak demand, 80 million tonnes less CO2 per year by 2030, and save between USD 100 billion and USD 200 billion over twenty years with grid-interactive buildings. In Alabama, a smart neighborhood demonstrated 35% to 45% energy savings compared to traditional homes.
Water efficiency
Reducing water consumption and protecting water quality are key objectives in sustainable building. One critical issue of water consumption is that in many areas, the demands on the supplying aquifer exceed its ability to replenish itself. To the maximum extent feasible, facilities should increase their dependence on water that is collected, used, purified, and reused on-site. The protection and conservation of water throughout the life of a building may be accomplished by designing for dual plumbing that recycles water in toilet flushing or by using water for washing of the cars. Waste-water may be minimized by utilizing water conserving fixtures such as ultra-low flush toilets and low-flow shower heads. Bidets help eliminate the use of toilet paper, reducing sewer traffic and increasing possibilities of re-using water on-site. Point of use water treatment and heating improves both water quality and energy efficiency while reducing the amount of water in circulation. The use of non-sewage and greywater for on-site use such as site-irrigation will minimize demands on the local aquifer.
Large commercial buildings with water and energy efficiency measures can qualify for LEED certification. Philadelphia's Comcast Center is the tallest building in Philadelphia and one of the tallest LEED-certified buildings in the USA. Its environmental engineering includes a hybrid central chilled-water system which cools floor-by-floor with steam instead of water. Burns Mechanical carried out the entire renovation of the 58-story, 1.4 million square foot skyscraper.
Materials efficiency
Building materials typically considered 'green' include lumber (that has been certified to a third-party standard), rapidly renewable plant materials (like bamboo and straw), dimension stone, recycled stone, hempcrete, recycled metal (see: copper sustainability and recyclability), and other non-toxic, reusable, renewable, and/or recyclable products. Materials with lower embodied energy can be substituted for common building materials with high energy consumption and carbon or other harmful emissions. For concrete, a high-performance self-healing version is available, while lower-waste options draw on upcycling and aggregate supplementation, replacing part of traditional concrete mixes with slag, production waste, and recycled aggregates. Insulation also offers several avenues for substitution: commonly used fiberglass faces competition from other eco-friendly, low-embodied-energy insulators with similar or higher R-values (per inch of thickness) at a competitive price. Sheep wool, cellulose, and ThermaCork perform more efficiently; however, their use may be limited by transportation or installation costs.
Furthermore, embodied energy comparisons can inform the selection of building materials and their efficiency. Wood production emits less carbon dioxide than concrete and steel if the wood is produced in a sustainable way, just as steel can be produced more sustainably through improvements in technology (e.g. EAF) and energy recycling/carbon capture (an underutilized potential for systematically storing carbon in the built environment).
The EPA (Environmental Protection Agency) also suggests using recycled industrial goods, such as coal combustion products, foundry sand, and demolition debris in construction projects. Energy efficient building materials and appliances are promoted in the United States through energy rebate programs.
A 2022 report from the Boston Consulting Group found that investments in developing greener forms of cement, iron, and steel lead to bigger greenhouse gas reductions compared with investments in electricity and aviation. In addition, producing carbon dioxide is unavoidable in the process of making cement; however, partially substituting pozzolans for clinker can reduce emissions during cement production.
Indoor environmental quality enhancement
The Indoor Environmental Quality (IEQ) category in LEED standards, one of the five environmental categories, was created to provide comfort, well-being, and productivity of occupants. The LEED IEQ category addresses design and construction guidelines especially: indoor air quality (IAQ), thermal quality, and lighting quality.
Indoor Air Quality seeks to reduce volatile organic compounds, or VOCs, and other air impurities such as microbial contaminants. Buildings rely on a properly designed ventilation system (passively/naturally or mechanically powered) to provide adequate ventilation of cleaner air from outdoors or recirculated, filtered air as well as isolated operations (kitchens, dry cleaners, etc.) from other occupancies. During the design and construction process choosing construction materials and interior finish products with zero or low VOC emissions will improve IAQ. Most building materials and cleaning/maintenance products emit gases, some of them toxic, such as many VOCs including formaldehyde. These gases can have a detrimental impact on occupants' health, comfort, and productivity. Avoiding these products will increase a building's IEQ. LEED, HQE and Green Star contain specifications on use of low-emitting interior. Draft LEED 2012 is about to expand the scope of the involved products. BREEAM limits formaldehyde emissions, no other VOCs. MAS Certified Green is a registered trademark to delineate low VOC-emitting products in the marketplace. The MAS Certified Green Program ensures that any potentially hazardous chemicals released from manufactured products have been thoroughly tested and meet rigorous standards established by independent toxicologists to address recognized long-term health concerns. These IAQ standards have been adopted by and incorporated into the following programs:
The United States Green Building Council (USGBC) in their LEED rating system
The California Department of Public Health (CDPH) in their section 01350 standards
The Collaborative for High Performance Schools (CHPS) in their Best Practices Manual
The Business and Institutional Furniture Manufacturers Association (BIFMA) in their level® sustainability standard.
Also important to indoor air quality is the control of moisture accumulation (dampness) leading to mold growth and the presence of bacteria and viruses as well as dust mites and other organisms and microbiological concerns. Water intrusion through a building's envelope or water condensing on cold surfaces on the building's interior can enhance and sustain microbial growth. A well-insulated and tightly sealed envelope will reduce moisture problems but adequate ventilation is also necessary to eliminate moisture from sources indoors including human metabolic processes, cooking, bathing, cleaning, and other activities.
Personal temperature and airflow control over the HVAC system coupled with a properly designed building envelope will also aid in increasing a building's thermal quality. Creating a high performance luminous environment through the careful integration of daylight and electrical light sources will improve on the lighting quality and energy performance of a structure.
Solid wood products, particularly flooring, are often specified in environments where occupants are known to have allergies to dust or other particulates. Wood itself is considered to be hypo-allergenic and its smooth surfaces prevent the buildup of particles common in soft finishes like carpet. The Asthma and Allergy Foundation of America recommends hardwood, vinyl, linoleum tile or slate flooring instead of carpet. The use of wood products can also improve air quality by absorbing or releasing moisture in the air to moderate humidity.
Interactions among all the indoor components and the occupants together form the processes that determine the indoor air quality. Extensive investigation of such processes is the subject of indoor air scientific research and is well documented in the journal Indoor Air.
Operations and maintenance optimization
No matter how sustainable a building may have been in its design and construction, it can only remain so if it is operated responsibly and maintained properly. Ensuring operations and maintenance(O&M) personnel are part of the project's planning and development process will help retain the green criteria designed at the onset of the project. Every aspect of green building is integrated into the O&M phase of a building's life. The addition of new green technologies also falls on the O&M staff. Although the goal of waste reduction may be applied during the design, construction and demolition phases of a building's life-cycle, it is in the O&M phase that green practices such as recycling and air quality enhancement take place. O&M staff should aim to establish best practices in energy efficiency, resource conservation, ecologically sensitive products and other sustainable practices. Education of building operators and occupants is key to effective implementation of sustainable strategies in O&M services.
Waste reduction
Green architecture also seeks to reduce the waste of energy, water and materials used during construction. For example, in California nearly 60% of the state's waste comes from commercial buildings. During the construction phase, one goal should be to reduce the amount of material going to landfills. Well-designed buildings also help reduce the amount of waste generated by the occupants by providing on-site solutions such as compost bins to reduce matter going to landfills.
To reduce the amount of wood that goes to landfill, Neutral Alliance (a coalition of government, NGOs and the forest industry) created the website dontwastewood.com. The site includes a variety of resources for regulators, municipalities, developers, contractors, owner/operators and individuals/homeowners looking for information on wood recycling.
When buildings reach the end of their useful life, they are typically demolished and hauled to landfills. Deconstruction is a method of harvesting what is commonly considered "waste" and reclaiming it into useful building material. Extending the useful life of a structure also reduces waste – building materials such as wood that are light and easy to work with make renovations easier.
To reduce the impact on wells or water treatment plants, several options exist. "Greywater", wastewater from sources such as dishwashing or washing machines, can be used for subsurface irrigation, or if treated, for non-potable purposes, e.g., to flush toilets and wash cars. Rainwater collectors are used for similar purposes.
Centralized wastewater treatment systems can be costly and use a lot of energy. An alternative to this process is converting waste and wastewater into fertilizer, which avoids these costs and shows other benefits. By collecting human waste at the source and running it to a semi-centralized biogas plant with other biological waste, liquid fertilizer can be produced. This concept was demonstrated by a settlement in Lübeck Germany in the late 1990s. Practices like these provide soil with organic nutrients and create carbon sinks that remove carbon dioxide from the atmosphere, offsetting greenhouse gas emission. Producing artificial fertilizer is also more costly in energy than this process.
Reduce impact onto electricity network
Electricity networks are built based on peak demand (another name is peak load). Peak demand is measured in the units of watts (W). It shows how fast electrical energy is consumed. Residential electricity is often charged on electrical energy (kilowatt hour, kWh). Green buildings or sustainable buildings are often capable of saving electrical energy but not necessarily reducing peak demand.
When sustainable building features are designed, constructed and operated efficiently, peak demand can be reduced so that there is less desire for electricity network expansion and there is less impact onto carbon emission and climate change. These sustainable features can be good orientation, sufficient indoor thermal mass, good insulation, photovoltaic panels, thermal or electrical energy storage systems, smart building (home) energy management systems.
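A small worked example of the distinction drawn above (with illustrative numbers only): two homes can consume the same daily energy in kilowatt-hours, which is what a residential bill typically charges for, yet place very different peak demands in kilowatts on the network, which is what the network must be sized for.

```python
# Small worked example (illustrative numbers): two homes use the same daily energy
# (kWh, what a residential bill typically charges) yet place very different peak
# demands (kW) on the network, which is sized for the peak.
flat_profile = [1.0] * 24                              # kW drawn in each hour of the day
peaky_profile = [0.5] * 17 + [6.5] * 2 + [0.5] * 5     # same daily energy, sharp evening peak

for name, profile in [("flat", flat_profile), ("peaky", peaky_profile)]:
    energy_kwh = sum(profile)          # 1-hour steps, so kW * 1 h sums directly to kWh
    peak_kw = max(profile)
    print(f"{name:5s}: energy = {energy_kwh:.1f} kWh/day, peak demand = {peak_kw:.1f} kW")
```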
Cost and payoff
The most criticized issue about constructing environmentally friendly buildings is the price. Photovoltaics, new appliances, and modern technologies tend to cost more money. Most green buildings cost a premium of less than 2%, but yield 10 times as much over the entire life of the building. In regard to the financial benefits of green building, "Over 20 years, the financial payback typically exceeds the additional cost of greening by a factor of 4-6 times. And broader benefits, such as reductions in greenhouse gases (GHGs) and other pollutants have large positive impacts on surrounding communities and on the planet." The perception gap lies between up-front cost and life-cycle cost. The monetary savings come from more efficient use of utilities, which results in decreased energy bills. It is projected that different sectors could save $130 billion on energy bills. Also, higher worker or student productivity can be factored into savings and cost deductions.
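A small worked example of the up-front versus life-cycle comparison described above; the construction cost, green premium, annual savings, and horizon are illustrative assumptions rather than figures from the cited studies.

```python
# Small worked example of the up-front vs. life-cycle comparison described above.
# The construction cost, green premium, annual utility savings and 20-year horizon
# are illustrative assumptions, not figures from the cited studies.
construction_cost = 10_000_000                  # conventional construction cost ($)
green_premium = 0.02 * construction_cost        # ~2% extra up-front cost
annual_savings = 60_000                         # assumed yearly utility/operations savings ($)
years = 20

total_savings = annual_savings * years
print(f"extra up-front cost : ${green_premium:,.0f}")
print(f"20-year savings     : ${total_savings:,.0f}")
print(f"payback multiple    : {total_savings / green_premium:.1f}x")
print(f"simple payback time : {green_premium / annual_savings:.1f} years")
```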
Numerous studies have shown the measurable benefit of green building initiatives on worker productivity. In general it has been found that, "there is a direct correlation between increased productivity and employees who love being in their work space." Specifically, worker productivity can be significantly impacted by certain aspects of green building design such as improved lighting, reduction of pollutants, advanced ventilation systems and the use of non-toxic building materials. In "The Business Case for Green Building", the U.S. Green Building Council gives another specific example of how commercial energy retrofits increase worker health and thus productivity, "People in the U.S. spend about 90% of their time indoors. EPA studies indicate indoor levels of pollutants may be up to ten times higher than outdoor levels. LEED-certified buildings are designed to have healthier, cleaner indoor environmental quality, which means health benefits for occupants."
Studies have shown over a 20-year life period, some green buildings have yielded $53 to $71 per square foot back on investment. Confirming the rentability of green building investments, further studies of the commercial real estate market have found that LEED and Energy Star certified buildings achieve significantly higher rents, sale prices and occupancy rates as well as lower capitalization rates potentially reflecting lower investment risk.
Regulation and operation
As a result of the increased interest in green building concepts and practices, a number of organizations have developed standards, codes and rating systems for use by government regulators, building professionals and consumers. In some cases, codes are written so local governments can adopt them as bylaws to reduce the local environmental impact of buildings.
Green building rating systems such as BREEAM (United Kingdom), LEED (United States and Canada), DGNB (Germany), CASBEE (Japan), VERDE GBCe (Spain), and GRIHA (India) help consumers determine a structure's level of environmental performance. They award credits for optional building features that support green design in categories such as the location and maintenance of the building site, conservation of water, energy, and building materials, and occupant comfort and health. The number of credits generally determines the level of achievement.
Green building codes and standards, such as the International Code Council's draft International Green Construction Code, are sets of rules created by standards development organizations that establish minimum requirements for elements of green building such as materials or heating and cooling.
Some of the major building environmental assessment tools currently in use include:
United States: International Green Construction Code (IGCC)
Green neighborhoods and villages
At the beginning of the 21st century, efforts were made to implement the principles of green building not only for individual buildings but also for neighborhoods and villages. The intent is to create zero-energy neighborhoods and villages, meaning that they generate all of their own energy, reuse waste, implement sustainable transportation, and produce their own food. Green villages have been identified as a way to decentralize sustainable climate practices, which may prove key in areas with high rural or scattered village populations, such as India, where 74% of the population lives in over 600,000 different villages.
International frameworks and assessment tools
IPCC Fourth Assessment Report
Climate Change 2007, the Fourth Assessment Report (AR4) of the United Nations Intergovernmental Panel on Climate Change (IPCC), is the fourth in a series of such reports. The IPCC was established by the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP) to assess scientific, technical and socio-economic information concerning climate change, its potential effects and options for adaptation and mitigation.
UNEP and Climate change
United Nations Environment Program UNEP works to facilitate the transition to low-carbon societies, support climate proofing efforts, improve understanding of climate change science, and raise public awareness about this global challenge.
GHG Indicator
The Greenhouse Gas Indicator: UNEP Guidelines for Calculating Greenhouse Gas Emissions for Businesses and Non-Commercial Organizations
Agenda 21
Agenda 21 is a programme run by the United Nations (UN) related to sustainable development. It is a comprehensive blueprint of action to be taken globally, nationally and locally by organizations of the UN, governments, and major groups in every area in which humans impact on the environment. The number 21 refers to the 21st century.
FIDIC's PSM
The International Federation of Consulting Engineers (FIDIC) Project Sustainability Management Guidelines were created to assist project engineers and other stakeholders in setting sustainable development goals for their projects that are recognized and accepted as being in the interests of society. The process is also intended to align project goals with local conditions and priorities and assist those involved in managing projects to measure and verify their progress.
The Project Sustainability Management Guidelines are structured with Themes and Sub-Themes under the three main sustainability headings of Social, Environmental and Economic. For each individual Sub-Theme a core project indicator is defined along with guidance as to the relevance of that issue in the context of an individual
project.
The Sustainability Reporting Framework provides guidance for organizations to use as the basis for disclosure about their sustainability performance, and also provides stakeholders a universally applicable, comparable framework in which to understand disclosed information.
The Reporting Framework contains the core product of the Sustainability Reporting Guidelines, as well as Protocols and Sector Supplements.
The Guidelines are used as the basis for all reporting. They are the foundation upon which all other reporting guidance is based, and outline core content for reporting that is broadly relevant to all organizations regardless of size, sector, or location. The Guidelines contain principles and guidance as well as standard disclosures – including indicators – to outline a disclosure framework that organizations can voluntarily, flexibly, and incrementally, adopt.
Protocols underpin each indicator in the Guidelines and include definitions for key terms in the indicator, compilation methodologies, intended scope of the indicator, and other technical references.
Sector Supplements respond to the limits of a one-size-fits-all approach. Sector Supplements complement the use of the core Guidelines by capturing the unique set of sustainability issues faced by different sectors such as mining, automotive, banking, public agencies and others.
IPD Environment Code
The IPD Environment Code was launched in February 2008. The Code is intended as a good practice global standard for measuring the environmental performance of corporate buildings. Its aim is to accurately measure and manage the environmental impacts of corporate buildings and enable property executives to generate high quality, comparable performance information about their
buildings anywhere in the world. The Code covers a wide range of building types (from offices to airports) and aims to inform and support
the following;
Creating an environmental strategy
Inputting to real estate strategy
Communicating a commitment to environmental improvement
Creating performance targets
Environmental improvement plans
Performance assessment and measurement
Life cycle assessments
Acquisition and disposal of buildings
Supplier management
Information systems and data population
Compliance with regulations
Team and personal objectives
IPD estimate that it will take approximately three years to gather significant data to develop a robust set of baseline data that could be used across a typical corporate estate.
ISO 21931
ISO/TS 21931:2006, Sustainability in building construction—Framework for methods of assessment for environmental performance of construction works—Part 1: Buildings, is intended to provide a general framework for improving the quality and comparability of methods for assessing the environmental performance of buildings. It identifies and describes issues to be taken into account when using methods for the assessment of environmental performance for new or existing building properties in the design, construction, operation, refurbishment and deconstruction stages. It is not an assessment system in itself but is intended to be used in conjunction with, and following the principles set out in, the ISO 14000 series of standards.
Development history
In the 1930s, geothermal hot water district heating of houses started in Iceland.
In the 1960s, American architect Paolo Soleri proposed a new concept of ecological architecture.
In 1969, American architect Ian McHarg wrote the book "Design with Nature", which marked the official birth of ecological architecture.
In the 1970s, the energy crisis caused various building energy-saving technologies such as solar energy, geothermal energy, and wind energy to emerge, and energy-saving buildings became the forerunner of building development.
In 1975, the Swiss PLENAR-group published the concept of an energy efficient house in "PLENAR: Planning-Energy-Architecture".
In 1980, the World Conservation Strategy put forward the term "sustainable development" for the first time. At the same time, energy-saving building systems were gradually improved and became widely used in developed countries such as Germany, Britain, France and Canada.
In 1982, Per and Maria Krusche et al. published an ecological approach to architecture in "Ökologisches Bauen" (ecological buildings) for the German Federal Environment Agency.
In 1987, the United Nations World Commission on Environment and Development published the report "Our Common Future", which established the idea of sustainable development.
In 1990, the world's first green building standard, BREEAM, was released in the UK.
In 1992, the United Nations Conference on Environment and Development promoted sustainable development, and green building gradually became the direction of development in the industry.
In 1993, the U.S. Green Building Council was founded in the United States.
In 1996, Hong Kong introduced green building standards.
In 1999, Taiwan introduced green building standards.
In 2000, Canada introduced green building standards.
In 2005, Singapore initiated the "BCA Green Building Mark".
In 2015, according to the Berkeley National Laboratory, China implemented the "Green Building Evaluation Standards".
In 2021, the first low-cost, sustainable 3D-printed house made from a clay mixture was completed.
Green building by country
Green building in Australia
Green building in Bangladesh
Green building in Germany
Green building in Israel
Green building in South Africa
Green building in the United Kingdom
Green building in India
Green building in the United States
The Model home 2020 project: Denmark, Austria, Germany, France, UK
See also
Alternative natural materials
Arcology — high density ecological structures
Autonomous building
Biophilic design
Building
Building insulation
Centre for Interactive Research on Sustainability
Deconstruction (building)
Eco hotel
Environmental planning
Geo-exchange
Green architecture
Green building and wood
Green Building Council
Green home
Green technology
Glass in green buildings
Healthy building
Leadership in Energy and Environmental Design
List of low-energy building techniques
Living Building Challenge
Low-energy house
National Green Building Standard
Natural building
Sustainable city
Sustainable habitat
Tropical green building
World Green Building Council
Yakhchāl
Zero-energy building
Zero heating building
References
External links
Sustainable Architecture at the Open Directory Project
Prochorskaite A, Couch C, Malys N, Maliene V (2016) Housing Stakeholder Preferences for the "Soft" Features of Sustainable and Healthy Housing Design in the UK.
The Sustainable house handbook : how to plan and build an affordable, energy-efficient and waterwise home for the future / Josh Byrne. - ISBN 9781743795828 . - Richmond, Vic. : Hardie Grant Books, 2020.
Sustainable house / Michael Mobbs. - 2nd ed. - Sydney, NSW : UNSW Press, 2010. - ISBN 978-1-920705-52-7
Nationwide House Energy Rating Scheme (NatHERS)
Renew : leading in sustainability
Housing Industry Association. GreenSmart Awards.
National Australian Built Environment Rating System (NaBERS)
Sustainable building
Building engineering
Sustainable architecture
Low-energy building
Buildings and structures by type
Sustainable urban planning
Sustainability
Sustainable development
Sustainable design
Building | Green building | [
"Engineering",
"Environmental_science"
] | 7,571 | [
"Sustainable building",
"Sustainable architecture",
"Buildings and structures by type",
"Building",
"Building engineering",
"Construction",
"Civil engineering",
"Environmental social science",
"Architecture"
] |
1,344,480 | https://en.wikipedia.org/wiki/Uniform%20isomorphism | In the mathematical field of topology a uniform isomorphism or uniform homeomorphism is a special isomorphism between uniform spaces that respects uniform properties. Uniform spaces with uniform maps form a category. An isomorphism between uniform spaces is called a uniform isomorphism.
Definition
A function f between two uniform spaces X and Y is called a uniform isomorphism if it satisfies the following properties:
f is a bijection
f is uniformly continuous
the inverse function f⁻¹ is uniformly continuous
In other words, a uniform isomorphism is a uniformly continuous bijection between uniform spaces whose inverse is also uniformly continuous.
If a uniform isomorphism exists between two uniform spaces they are called uniformly isomorphic or uniformly homeomorphic.
Uniform embeddings
A uniform embedding is an injective uniformly continuous map f : X → Y between uniform spaces whose inverse f⁻¹ : f(X) → X is also uniformly continuous, where the image f(X) has the subspace uniformity inherited from Y.
Examples
The uniform structures induced by equivalent norms on a vector space are uniformly isomorphic.
See also
Homeomorphism — an isomorphism between topological spaces
Isometry — an isomorphism between metric spaces
References
Homeomorphisms
Uniform spaces | Uniform isomorphism | [
"Mathematics"
] | 206 | [
"Uniform spaces",
"Homeomorphisms",
"Space (mathematics)",
"Topology stubs",
"Topological spaces",
"Topology"
] |
1,344,979 | https://en.wikipedia.org/wiki/Visbreaker | A visbreaker is a processing unit in an oil refinery whose purpose is to minimize the quantity of residual oil produced in the distillation of crude oil and to increase the yield of more valuable middle distillates (heating oil and diesel) by the refinery. A visbreaker thermally cracks large hydrocarbon molecules in the oil by heating in a furnace to lower its viscosity and to produce small quantities of light hydrocarbons (LPG and gasoline). The process name of "visbreaker" refers to the fact that the process lowers (i.e., breaks) the viscosity of the residual oil. The process is non-catalytic.
Process objectives
The objectives of visbreaking are:
Lower the viscosity of the feed stream: Typically this is the residue from vacuum distillation of crude oil but can also be the residue from hydroskimming operations, natural bitumen from seeps in the ground or tar sands, and even certain high viscosity crude oils.
Lower the amount of residual fuel oil produced by a refinery: Residual fuel oil is generally regarded as a low value product. Demand for residual fuel continues to decrease as it is replaced in its traditional markets, such as fuel needed to generate steam in power stations, by cleaner burning alternative fuels such as natural gas.
Increase the proportion of middle distillates in the refinery output: Middle distillate is used as a diluent with residual oils to bring their viscosity down to a marketable level. By lowering the viscosity of the residual stream in a visbreaker, a fuel oil can be made using less diluent and the middle distillate saved can be diverted to higher value diesel or heating oil manufacture.
Technology
Coil visbreaking
The term coil (or furnace) visbreaking is applied to units where the cracking process occurs in the furnace tubes (or "coils"). Material exiting the furnace is quenched to halt the cracking reactions: frequently this is achieved by heat exchange with the virgin material being fed to the furnace, which in turn is a good energy efficiency step, but sometimes a stream of cold oil (usually gas oil) is used to the same effect. The gas oil is recovered and re-used. The extent of the cracking reaction is controlled by regulation of the speed of flow of the oil through the furnace tubes. The quenched oil then passes to a fractionator where the products of the cracking (gas, LPG, gasoline, gas oil and tar) are separated and recovered.
Soaker visbreaking
In soaker visbreaking, the bulk of the cracking reaction occurs not in the furnace but in a drum located after the furnace called the soaker. Here the oil is held at an elevated temperature for a pre-determined period of time to allow cracking to occur before being quenched. The oil then passes to a fractionator. In soaker visbreaking, lower temperatures are used than in coil visbreaking. The comparatively long duration of the cracking reaction is used instead.
Process options
Visbreaker tar can be further refined by feeding it to a vacuum fractionator. Here additional heavy gas oil may be recovered and routed either to catalytic cracking, hydrocracking or thermal cracking units on the refinery. The vacuum-flashed tar (sometimes referred to as pitch) is then routed to fuel oil blending. In a few refinery locations, visbreaker tar is routed to a delayed coker for the production of certain specialist cokes such as anode coke or needle coke.
Soaker visbreaking versus coil visbreaking
From the standpoint of yield, there is little or nothing to choose between the two approaches. However, each offers significant advantages in particular situations:
De-coking: The cracking reaction forms petroleum coke as a byproduct. In coil visbreaking, this deposits in the tubes of the furnace and will eventually lead to fouling or blocking of the tubes. The same will occur in the drum of a soaker visbreaker, though the lower temperatures used in the soaker drum lead to fouling at a much slower rate. Coil visbreakers therefore require frequent de-coking. This is quite labour-intensive, but can be developed into a routine where tubes are de-coked sequentially without the need to shut down the visbreaking operation. Soaker drums require far less frequent attention but their being taken out of service normally requires a complete halt to the operation. Which is the more disruptive activity will vary from refinery to refinery.
Fuel Economy: The lower temperatures used in the soaker approach mean that these units use less fuel. In cases where a refinery buys fuel to support process operations, any savings in fuel consumption could be extremely valuable. In such cases, soaker visbreaking may be advantageous.
Quality and yields
Feed quality and product quality
The quality of the feed going into a visbreaker will vary considerably with the type of crude oil that the refinery is processing. The following is a typical quality for the vacuum distillation residue of Arabian light (a crude oil from Saudi Arabia and widely refined around the world):
Once this material has been run through a visbreaker (and, again, there will be considerable variation from visbreaker to visbreaker as no two will operate under exactly the same conditions) the lowering in viscosity is dramatic:
Yields
The yields of the various hydrocarbon products will depend on the "severity" of the cracking operation as determined by the temperature the oil is heated to in the visbreaker furnace. At the low end of the scale, a furnace heating to 425 °C would crack only mildly, while operations at 500 °C would be considered as very severe. Arabian light crude residue when visbroken at 450 °C would yield around 76% (by weight) of tar, 15% middle distillates, 6% gasolines and 3% gas and LPG.
Fuel oil stability
The severity of visbreaker operation is normally limited by the need to produce a visbreaker tar that can be blended to make a stable fuel oil.
Stability in this case is taken to mean the tendency of a fuel oil to produce sediments when stored. These sediments are undesirable as they can quickly foul the filters of pumps used to move the oil necessitating time-consuming maintenance.
Vacuum residue fed to a visbreaker can be considered to be composed of the following:
Asphaltenes: large polycyclic molecules that are suspended in the oil in a colloidal form
Resins: also polycyclic but of a lower molecular weight than asphaltenes
Aromatic hydrocarbons: derivatives of benzene, toluene and xylenes
Paraffinic hydrocarbons: alkanes
Visbreaking preferentially cracks aliphatic compounds which have relatively low sulphur contents, low density and high viscosity and the effect of their removal can be clearly seen in the change in quality between feed and product. A too severe cracking in a visbreaker will lead to the asphaltene colloid becoming metastable. Subsequent addition of a diluent to manufacture a finished fuel oil can cause the colloid to break down, precipitating asphaltenes as a sludge. It has been observed that a paraffinic diluent is more likely to cause precipitation than an aromatic one. Stability of fuel oil is assessed using a number of proprietary tests (for example "P" value and SHF tests).
Economics
Viscosity blending
The viscosity blending of two or more liquids having different viscosities is a three-step procedure. The first step is to calculate the Viscosity Blending Index (VBI) of each component of the blend using the following equation (known as a Refutas equation):
(1) VBN = 14.534 × ln[ln(v + 0.8)] + 10.975
where v is the viscosity in square millimeters per second (mm²/s) or centistokes (cSt) and ln is the natural logarithm (log base e). It is important that the viscosity of each component of the blend be obtained at the same temperature.
The next step is to calculate the VBN of the blend, using this equation:
(2) VBN_Blend = [w_A × VBN_A] + [w_B × VBN_B] + ... + [w_X × VBN_X]
where w is the weight fraction (i.e., % ÷ 100) of each component of the blend.
Once the viscosity blending number of a blend has been calculated using equation (2), the final step is to determine the viscosity of the blend by using the invert of equation (1):
(3) v = e^(e^((VBN − 10.975) ÷ 14.534)) − 0.8
where VBN is the viscosity blending number of the blend and e is the transcendental number 2.71828, also known as Euler's number.
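The three-step procedure can be illustrated with a short script (a minimal sketch in Python; the 300 cSt residue viscosity and the 80/20 weight split below are illustrative assumptions, not figures taken from this article):

```python
import math

def vbn(viscosity_cst):
    """Viscosity Blending Number (Refutas equation 1) for a viscosity in cSt."""
    return 14.534 * math.log(math.log(viscosity_cst + 0.8)) + 10.975

def blend_viscosity(components):
    """components: list of (weight_fraction, viscosity_cst) pairs measured at a
    common temperature. Returns the blend viscosity in cSt via equations (2) and (3)."""
    vbn_blend = sum(w * vbn(v) for w, v in components)               # equation (2)
    return math.exp(math.exp((vbn_blend - 10.975) / 14.534)) - 0.8   # equation (3)

# Illustrative two-component blend: 80% of a 300 cSt residue with 20% of a 1.3 cSt cutter stock
print(round(blend_viscosity([(0.80, 300.0), (0.20, 1.3)]), 1))   # ~44 cSt
```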
Example economics for a two-component blend
A marketable fuel oil, such as for fueling a power station, might be required to have a viscosity of 40 centistokes at 100 °C. It might be prepared using either the virgin or visbroken residue described above combined with a distillate diluent ("cutter stock"). Such a cutter stock could typically have a viscosity at 100 °C of 1.3 centistokes. Rearranging equation (2) above for a simple two-component blend shows that the percentage of cutter stock required in the blend is found by:
(4) % cutter stock = [VBN_40 − VBN_residue] ÷ [VBN_cutter stock − VBN_residue]
Using the viscosities quoted in the tables above for the residues from Arab Light crude oil and calculating VBNs according to equation (1) gives:
For virgin residue (i.e., the unconverted feed to the visbreaker): 27.5% cutter stock in the blend
For visbroken residue: 13.3% cutter stock in the blend.
As middle distillates have a far higher value in the market place than fuel oils, it can be seen that the use of a visbreaker will considerably improve the economics of fuel oil manufacture. For example, if the cutter stock is taken to have a value of $300 per tonne and fuel oil $150 per tonne (oil prices naturally change quickly, but these prices, and more importantly the differences between them, are not unrealistic), it is a simple matter to calculate the value of the different residues in this example as being:
Virgin residue: $93.1 per tonne
Visbroken residue: $127.0 per tonne
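The same arithmetic can be sketched in code (a minimal illustration; the cutter-stock fractions and the $300/$150 per tonne prices are those quoted above, while the residue viscosities from the tables are not reproduced here, so cutter_fraction is shown only to mirror equation (4)):

```python
import math

def vbn(v_cst):
    """Viscosity Blending Number, equation (1)."""
    return 14.534 * math.log(math.log(v_cst + 0.8)) + 10.975

def cutter_fraction(target_cst, residue_cst, cutter_cst):
    """Equation (4): weight fraction of cutter stock needed to reach the target viscosity."""
    return (vbn(target_cst) - vbn(residue_cst)) / (vbn(cutter_cst) - vbn(residue_cst))

def residue_value(cutter_frac, fuel_oil_price=150.0, cutter_price=300.0):
    """Implied value of the residue ($/tonne) when the finished blend sells at the
    fuel oil price and the cutter stock blended into it is worth the cutter price."""
    return (fuel_oil_price - cutter_frac * cutter_price) / (1.0 - cutter_frac)

# Cutter-stock fractions quoted above for the Arab Light residues
for label, frac in [("virgin residue", 0.275), ("visbroken residue", 0.133)]:
    print(label, round(residue_value(frac), 1))   # ~93.1 and ~127.0 $/tonne
```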
References
External links
Shell Thermal Conversion
Fuel Oil Stability Testing
Chemical processes
Oil refineries
Distillation | Visbreaker | [
"Chemistry"
] | 2,220 | [
"Separation processes",
"Oil refineries",
"Chemical processes",
"Petroleum",
"Distillation",
"Oil refining",
"nan",
"Chemical process engineering"
] |
1,345,201 | https://en.wikipedia.org/wiki/Clusterin | In humans, clusterin (CLU) is encoded by the CLU gene on chromosome 8. CLU is an extracellular molecular chaperone which binds to misfolded proteins in body fluids to neutralise their toxicity and mediate their cellular uptake by receptor-mediated endocytosis. Once internalised by cells, complexes between CLU and misfolded proteins are trafficked to lysosomes where they are degraded. CLU is involved in many diseases including neurodegenerative diseases, cancers, inflammatory diseases, and aging.
Structure
The CLU gene contains nine exons and a variety of mRNA isoforms can be detected, although most of these are only ever expressed at very low levels (< 0.3% of the total). The full-length mRNA encoding the secreted isoform is by far the dominant species transcribed. Secreted CLU (apolipoprotein J) is an approximately 60 kDa disulfide-linked heterodimeric glycoprotein which migrates in SDS-PAGE with an apparent molecular mass of 75-80 kDa. Mature CLU is composed of disulfide-linked α- and β-chains. Although multiple previous publications proposed the existence of N-terminally truncated CLU protein isoforms in different cell compartments, recent work has highlighted the lack of direct evidence for this and shown that the full-length CLU polypeptide, with variable levels of glycosylation (and hence variable apparent mass), can translocate from the ER/Golgi to the cytosol and nucleus during stress.
Function
Clusterin was first identified in ram rete testis fluid where it was shown to elicit in vitro clustering of rat Sertoli cells and erythrocytes, hence its name.
CLU has functional similarities to members of the small heat shock protein family and is thus a molecular chaperone. Unlike most other chaperone proteins, which aid intracellular proteins, CLU is trafficked through the ER/Golgi before normally being secreted. Within the secretory system, CLU has been suggested to facilitate the folding of secreted proteins in an ATP-independent way. The gene is highly conserved across species, and the protein is widely distributed in many tissues and organs, where it has been implicated in a number of biological processes, including lipid transport, membrane recycling, cell adhesion, programmed cell death, and complement-mediated cell lysis. Overexpression of secretory CLU can protect cells from apoptosis induced by cellular stress, such as chemotherapy, radiotherapy, or androgen/estrogen depletion. CLU has been suggested to promote cell survival by a number of means, including inhibition of BAX on the mitochondrial membrane, activation of the phosphatidylinositol 3-kinase/protein kinase B pathway, modulation of extracellular signal-regulated kinase (ERK) 1/2 signaling and matrix metallopeptidase-9 expression, promotion of angiogenesis, and mediation of the nuclear factor kappa B (NF-κB) pathway. Meanwhile, its downregulation allows for p53 activation, which then skews the proapoptotic:antiapoptotic ratio of present Bcl-2 family members, resulting in mitochondrial dysfunction and cell death. p53 may also transcriptionally repress secretory CLU to further promote the proapoptotic cascade.
Clinical associations
Two independent genome-wide association studies found a statistical association between a SNP within the clusterin gene and the risk of having Alzheimer's disease. Further studies have suggested that people who already have Alzheimer's disease have more clusterin in their blood, and that clusterin levels in blood correlate with faster cognitive decline in individuals with Alzheimer's disease, but have not found that clusterin levels predicted the onset of Alzheimer's disease. In addition to Alzheimer's disease, CLU may be involved in other neurodegenerative diseases such as Huntington disease.
CLU may promote tumorigenesis by facilitating BAX-KLU70 binding and, consequently, preventing BAX from localizing to the outer mitochondrial membrane to stimulate cell death. In clear cell renal cell carcinoma, CLU functions to regulate ERK 1/2 signaling and matrix metallopeptidase-9 expression to promote tumor cell migration, invasion, and metastasis. In epithelial ovarian cancer, CLU has been observed to promote angiogenesis and chemoresistance. Other pathways CLU participates in to downplay apoptosis in tumor cells include the PI3K/AKT/mTOR pathway and NF-κB pathway. Unlike most other cancers, which feature upregulated CLU levels to enhance tumor cell survival, testicular seminoma features downregulated CLU levels, allowing for increased sensitivity to chemotherapy treatments. Other cancers CLU has been implicated in include breast cancer, pancreatic cancer, hepatocellular carcinoma, and melanoma.
As evident by its key roles in cancer development, CLU can serve as a therapeutic target for fighting tumor growth and chemoresistance. Studies revealed that inhibition of CLU resulted in increased effectiveness of chemotherapeutic agents to kill tumor cells. In particular, custirsen, an antisense oligonucleotide that blocks the CLU mRNA transcript, enhanced heat-shock protein 90 (HSP90) inhibitor activity by suppressing the heat-shock response in castrate-resistant prostate cancer, and was tested in phase III trials.
CLU activity is also involved in infectious diseases, such as hepatitis C. CLU is induced by the stress of hepatitis C viral infection, which disrupts glucose regulation. The chaperone protein then aids hepatitis C viral assembly by stabilizing its core and NS5A units. In addition to the above diseases, CLU has been linked to other conditions resulting from oxidative damage, including aging, glomerulonephritis, atherosclerosis, and myocardial infarction.
Interactions
CLU has been shown to interact with many different protein ligands and several cell receptors.
References
Further reading
External links
Apolipoproteins and Applied Research
Proteins | Clusterin | [
"Chemistry"
] | 1,296 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
1,345,771 | https://en.wikipedia.org/wiki/Position%20%28geometry%29 | In geometry, a position or position vector, also known as location vector or radius vector, is a Euclidean vector that represents a point P in space. Its length represents the distance in relation to an arbitrary reference origin O, and its direction represents the angular orientation with respect to given reference axes. Usually denoted x, r, or s, it corresponds to the straight line segment from O to P.
In other words, it is the displacement or translation that maps the origin to P:
The term position vector is used mostly in the fields of differential geometry, mechanics and occasionally vector calculus.
Frequently this is used in two-dimensional or three-dimensional space, but can be easily generalized to Euclidean spaces and affine spaces of any dimension.
Relative position
The relative position of a point Q with respect to point P is the Euclidean vector resulting from the subtraction of the two absolute position vectors (each with respect to the origin):
Δr = r_Q − r_P, where r_Q and r_P are the position vectors of Q and P with respect to the origin.
The relative direction between two points is their relative position normalized as a unit vector.
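As a small numerical illustration (a minimal sketch using NumPy; the coordinates are arbitrary):

```python
import numpy as np

r_P = np.array([1.0, 2.0, 3.0])   # position vector of P (relative to the origin O)
r_Q = np.array([4.0, 6.0, 3.0])   # position vector of Q

delta_r = r_Q - r_P                            # relative position of Q with respect to P
direction = delta_r / np.linalg.norm(delta_r)  # relative direction as a unit vector

print(delta_r)    # [3. 4. 0.]
print(direction)  # [0.6 0.8 0. ]
```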
Definition and representation
Three dimensions
In three dimensions, any set of three-dimensional coordinates and their corresponding basis vectors can be used to define the location of a point in space—whichever is the simplest for the task at hand may be used.
Commonly, one uses the familiar Cartesian coordinate system, or sometimes spherical polar coordinates, or cylindrical coordinates:
where t is a parameter, owing to their rectangular or circular symmetry. These different coordinates and corresponding basis vectors represent the same position vector. More general curvilinear coordinates could be used instead and are in contexts like continuum mechanics and general relativity (in the latter case one needs an additional time coordinate).
n dimensions
Linear algebra allows for the abstraction of an n-dimensional position vector. A position vector can be expressed as a linear combination of basis vectors:
The set of all position vectors forms position space (a vector space whose elements are the position vectors), since positions can be added (vector addition) and scaled in length (scalar multiplication) to obtain another position vector in the space. The notion of "space" is intuitive, since each xi (i = 1, 2, …, n) can have any value; the collection of values defines a point in space.
The dimension of the position space is n (also denoted dim(R) = n). The coordinates of the vector r with respect to the basis vectors ei are xi. The vector of coordinates forms the coordinate vector or n-tuple (x1, x2, …, xn).
Each coordinate xi may be parameterized by a number of parameters t. One parameter xi(t) would describe a curved 1D path, two parameters xi(t1, t2) describe a curved 2D surface, three xi(t1, t2, t3) describe a curved 3D volume of space, and so on.
The linear span of a basis set B = {e1, e2, …, en} equals the position space R, denoted span(B) = R.
Applications
Differential geometry
Position vector fields are used to describe continuous and differentiable space curves, in which case the independent parameter need not be time, but can be (e.g.) arc length of the curve.
Mechanics
In any equation of motion, the position vector r(t) is usually the most sought-after quantity because this function defines the motion of a particle (i.e. a point mass) – its location relative to a given coordinate system at some time t.
To define motion in terms of position, each coordinate may be parametrized by time; since each successive value of time corresponds to a sequence of successive spatial locations given by the coordinates, the continuum limit of many successive locations is a path the particle traces.
In the case of one dimension, the position has only one component, so it effectively degenerates to a scalar coordinate. It could be, say, a vector in the x direction, or the radial r direction. Equivalent notations include
Derivatives
For a position vector r that is a function of time t, the time derivatives can be computed with respect to t. These derivatives have common utility in the study of kinematics, control theory, engineering and other sciences.
Velocity
where dr is an infinitesimally small displacement (vector).
Acceleration
Jerk
These names for the first, second and third derivative of position are commonly used in basic kinematics. By extension, the higher-order derivatives can be computed in a similar fashion. Study of these higher-order derivatives can improve approximations of the original displacement function. Such higher-order terms are required in order to accurately represent the displacement function as a sum of an infinite sequence, enabling several analytical techniques in engineering and physics.
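For instance, the first three derivatives of a parametrized position vector can be obtained symbolically (a minimal sketch using SymPy; the helical path chosen for r(t) is just an illustrative example):

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Matrix([sp.cos(t), sp.sin(t), t])   # assumed position vector r(t): a helix

v = r.diff(t)   # velocity:     first derivative of position
a = v.diff(t)   # acceleration: second derivative
j = a.diff(t)   # jerk:         third derivative

print(v.T)  # Matrix([[-sin(t), cos(t), 1]])
print(a.T)  # Matrix([[-cos(t), -sin(t), 0]])
print(j.T)  # Matrix([[sin(t), -cos(t), 0]])
```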
See also
Affine space
Coordinate system
Horizontal position
Line element
Parametric surface
Position fixing
Position four-vector
Six degrees of freedom
Vertical position
Notes
References
Keller, F. J., Gettys, W. E. et al. (1993). "Physics: Classical and modern" 2nd ed. McGraw Hill Publishing.
External links
Kinematic properties | Position (geometry) | [
"Physics",
"Mathematics"
] | 1,031 | [
"Geometric measurement",
"Point (geometry)",
"Mechanical quantities",
"Physical quantities",
"Quantity",
"Position",
"Kinematic properties",
"Space",
"Vector physical quantities",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
1,346,096 | https://en.wikipedia.org/wiki/Transfer%20operator | In mathematics, the transfer operator encodes information about an iterated map and is frequently used to study the behavior of dynamical systems, statistical mechanics, quantum chaos and fractals. In all usual cases, the largest eigenvalue is 1, and the corresponding eigenvector is the invariant measure of the system.
The transfer operator is sometimes called the Ruelle operator, after David Ruelle, or the Perron–Frobenius operator or Ruelle–Perron–Frobenius operator, in reference to the applicability of the Perron–Frobenius theorem to the determination of the eigenvalues of the operator.
Definition
The iterated function to be studied is a map f : X → X for an arbitrary set X.
The transfer operator is defined as an operator L acting on the space of functions {Φ : X → C} as
(LΦ)(x) = Σ_{y ∈ f⁻¹(x)} g(y) Φ(y)
where g : X → C is an auxiliary valuation function. When f has a Jacobian determinant J, then g is usually taken to be g = 1/|J|.
The above definition of the transfer operator can be shown to be the point-set limit of the measure-theoretic pushforward of g: in essence, the transfer operator is the direct image functor in the category of measurable spaces. The left-adjoint of the Perron–Frobenius operator is the Koopman operator or composition operator. The general setting is provided by the Borel functional calculus.
As a general rule, the transfer operator can usually be interpreted as a (left-)shift operator acting on a shift space. The most commonly studied shifts are the subshifts of finite type. The adjoint to the transfer operator can likewise usually be interpreted as a right-shift. Particularly well studied right-shifts include the Jacobi operator and the Hessenberg matrix, both of which generate systems of orthogonal polynomials via a right-shift.
Applications
Whereas the iteration of a function naturally leads to a study of the orbits of points of X under iteration (the study of point dynamics), the transfer operator defines how (smooth) maps evolve under iteration. Thus, transfer operators typically appear in physics problems, such as quantum chaos and statistical mechanics, where attention is focused on the time evolution of smooth functions. In turn, this has medical applications to rational drug design, through the field of molecular dynamics.
It is often the case that the transfer operator is positive, has discrete positive real-valued eigenvalues, with the largest eigenvalue being equal to one. For this reason, the transfer operator is sometimes called the Frobenius–Perron operator.
The eigenfunctions of the transfer operator are usually fractals. When the logarithm of the transfer operator corresponds to a quantum Hamiltonian, the eigenvalues will typically be very closely spaced, and thus even a very narrow and carefully selected ensemble of quantum states will encompass a large number of very different fractal eigenstates with non-zero support over the entire volume. This can be used to explain many results from classical statistical mechanics, including the irreversibility of time and the increase of entropy.
The transfer operator of the Bernoulli map is exactly solvable and is a classic example of deterministic chaos; the discrete eigenvalues correspond to the Bernoulli polynomials. This operator also has a continuous spectrum consisting of the Hurwitz zeta function.
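As a concrete illustration, the transfer operator of the Bernoulli (doubling) map b(x) = 2x mod 1 sums over the two preimages of each point with weight g = 1/|b′| = 1/2; iterating it numerically on an arbitrary starting density relaxes to the constant invariant density (a minimal sketch; the grid size and the starting density are arbitrary choices):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
f = 1.0 + 0.5 * np.cos(2 * np.pi * x)    # arbitrary starting density sampled on a grid

def transfer_bernoulli(f_vals, x):
    """Transfer operator of the doubling map b(x) = 2x mod 1:
    (Lf)(x) = 0.5 * (f(x/2) + f((x+1)/2)), a sum over the two preimages of x,
    each weighted by g = 1/|b'(y)| = 1/2."""
    return 0.5 * (np.interp(x / 2, x, f_vals) + np.interp((x + 1) / 2, x, f_vals))

for _ in range(10):
    f = transfer_bernoulli(f, x)

# The density flattens to ~1 everywhere: the largest eigenvalue is 1 and its
# eigenfunction (the invariant measure of the Bernoulli map) is constant.
print(f.min(), f.max())
```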
The transfer operator of the Gauss map is called the Gauss–Kuzmin–Wirsing (GKW) operator. The theory of the GKW dates back to a hypothesis by Gauss on continued fractions and is closely related to the Riemann zeta function.
See also
Bernoulli scheme
Shift of finite type
Krein–Rutman theorem
Transfer-matrix method
References
(Provides an introductory survey).
Chaos theory
Dynamical systems
Operator theory
Spectral theory | Transfer operator | [
"Physics",
"Mathematics"
] | 771 | [
"Mechanics",
"Dynamical systems"
] |
1,346,182 | https://en.wikipedia.org/wiki/Isochron%20dating | Isochron dating is a common technique of radiometric dating and is applied to date certain events, such as crystallization, metamorphism, shock events, and differentiation of precursor melts, in the history of rocks. Isochron dating can be further separated into mineral isochron dating and whole rock isochron dating; both techniques are applied frequently to date terrestrial and extraterrestrial rocks (meteorites and Moon rocks). The advantage of isochron dating as compared to simple radiometric dating techniques is that no assumptions are needed about the initial amount of the daughter nuclide in the radioactive decay sequence. Indeed, the initial amount of the daughter product can be determined using isochron dating. This technique can be applied if the daughter element has at least one stable isotope other than the daughter isotope into which the parent nuclide decays.
Basis for method
All forms of isochron dating assume that the source of the rock or rocks contained unknown amounts of both radiogenic and non-radiogenic isotopes of the daughter element, along with some amount of the parent nuclide. Thus, at the moment of crystallization, the ratio of the concentration of the radiogenic isotope of the daughter element to that of the non-radiogenic isotope is some value independent of the concentration of the parent. As time goes on, some amount of the parent decays into the radiogenic isotope of the daughter, increasing the ratio of the concentration of the radiogenic isotope to that of the non-radiogenic isotope of the daughter element. The greater the initial concentration of the parent, the greater the concentration of the radiogenic daughter isotope will be at some particular time. Thus, the ratio of the radiogenic to non-radiogenic isotopes of the daughter element will become larger with time, while the ratio of parent to daughter will become smaller. For rocks that start out with a small concentration of the parent, the radiogenic/non-radiogenic ratio of the daughter element will not change as quickly as it will with rocks that start out with a large concentration of the parent.
Assumptions
An isochron diagram will only give a valid age if all samples are cogenetic, which means they have the same initial isotopic composition (that is, the rocks are from the same unit, the minerals are from the same rock, etc.), all samples have the same initial isotopic composition (at t0), and the system has remained closed.
Isochron plots
The mathematical expression from which the isochron is derived is
D* = D0 + n(e^(λt) − 1)
where
t is age of the sample,
D* is number of atoms of the radiogenic daughter isotope in the sample,
D0 is number of atoms of the daughter isotope in the original or initial composition,
n is number of atoms of the parent isotope in the sample at the present,
λ is the decay constant of the parent isotope, equal to the inverse of the radioactive half-life of the parent isotope times the natural logarithm of 2, and
(e^(λt) − 1) is the slope of the isochron which defines the age of the system.
Because the isotopes are measured by mass spectrometry, ratios are used instead of absolute concentrations since mass spectrometers usually measure the former rather than the latter. (See the section on isotope ratio mass spectrometry.) As such, isochrons are typically defined by the following equation, which normalizes the concentration of parent and radiogenic daughter isotopes to the concentration of a non-radiogenic isotope of the daughter element that is assumed to be constant:
D/Di = D0/Di + (P/Di)(e^(λt) − 1)
where
Di is the concentration of the non-radiogenic isotope of the daughter element (assumed constant),
D is the present concentration of the radiogenic daughter isotope,
D0 is the initial concentration of the radiogenic daughter isotope, and
P is the present concentration of the parent isotope that has decayed over time t.
To perform dating, a rock is crushed to a fine powder, and minerals are separated by various physical and magnetic means. Each mineral has different ratios between parent and daughter concentrations. For each mineral, the ratios are related by the following equation:
(1)
where
is the initial concentration of the parent isotope, and
is the total amount of the parent isotope which has decayed by time .
The proof of (1) amounts to simple algebraic manipulation. It is useful in this form because it exhibits the relationship between quantities that actually exist at present. To wit, , and respectively correspond to the concentrations of parent, daughter and non-radiogenic isotopes found in the rock at the time of measurement.
The ratios D/Di (relative concentration of present daughter and non-radiogenic isotopes) and P/Di (relative concentration of present parent and non-radiogenic isotope) are measured by mass spectrometry and plotted against each other in a three-isotope plot known as an isochron plot.
If all data points lie on a straight line, this line is called an isochron. The better the fit of the data points to a line, the more reliable the resulting age estimate. Since the ratio of the daughter and non-radiogenic isotopes is proportional to the ratio of the parent and non-radiogenic isotopes, the slope of the isochron gets steeper with time. The change in slope from initial conditions—assuming an initial isochron slope of zero (a horizontal isochron) at the point of intersection (intercept) of the isochron with the y-axis—to the current computed slope gives the age of the rock. The slope of the isochron, or (e^(λt) − 1), represents the ratio of daughter to parent as used in standard radiometric dating and can be derived to calculate the age of the sample at time t. The y-intercept of the isochron line yields the initial radiogenic daughter ratio, D0/Di.
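As a numerical illustration of how the slope is converted into an age (a minimal sketch; the rubidium–strontium decay constant is approximate, and the isotope ratios are synthetic values invented for the example, not measurements):

```python
import numpy as np

LAMBDA_RB87 = 1.42e-11   # decay constant of 87Rb in 1/yr (approximate)

# Synthetic isochron data: x = 87Rb/86Sr and y = 87Sr/86Sr for several minerals,
# generated from an assumed age of 1.0 Gyr and an initial ratio of 0.705.
t_true, initial_ratio = 1.0e9, 0.705
x = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
y = initial_ratio + x * (np.exp(LAMBDA_RB87 * t_true) - 1.0)

# Fit the isochron: its slope gives the age, its intercept the initial daughter ratio.
slope, intercept = np.polyfit(x, y, 1)
age = np.log(slope + 1.0) / LAMBDA_RB87

print(round(age / 1e9, 3), round(intercept, 4))   # ~1.0 Gyr and ~0.705
```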
Whole rock isochron dating uses the same ideas but instead of different minerals obtained from one rock uses different types of rocks that are derived from a common reservoir; e.g. the same precursor melt. It is possible to date the differentiation of the precursor melt which then cooled and crystallized into the different types of rocks.
One of the best known isotopic systems for isochron dating is the rubidium–strontium system. Other systems that are used for isochron dating include samarium–neodymium, and uranium–lead. Some isotopic systems based on short-living extinct radionuclides such as 53Mn, 26Al, 129I, 60Fe and others are used for isochron dating of events in the early history of the Solar System. However, methods using extinct radionuclides give only relative ages and have to be calibrated with radiometric dating techniques based on long-living radionuclides like Pb-Pb dating to give absolute ages.
Application
Isochron dating is useful in the determination of the age of igneous rocks, which have their initial origin in the cooling of liquid magma. It is also useful to determine the time of metamorphism, shock events (such as the consequence of an asteroid impact) and other events depending on the behaviour of the particular isotopic systems under such events. It can be used to determine the age of grains in sedimentary rocks and understand their origin by a method known as a provenance study.
See also
Radiometric dating
References
External links
Basics of radioactive isotope geochemistry from Cornell
Isochron Dating at the TalkOrigins Archive
Radiometric dating | Isochron dating | [
"Chemistry"
] | 1,514 | [
"Radiometric dating",
"Radioactivity"
] |
1,346,837 | https://en.wikipedia.org/wiki/Flare%20gun | A flare gun, also known as a Very pistol or signal pistol, is a large-bore handgun that discharges flares, blanks and smoke. The flare gun is typically used to produce a distress signal.
Types
The most common type of flare gun is a Very (sometimes spelled Verey), which was named after Edward Wilson Very (1847–1910), an American naval officer who developed and popularized a single-shot breech-loading snub-nosed pistol that fired flares (Very lights). They have a single action trigger mechanism, hammer action, and a center fire pin. Modern varieties are frequently made out of durable plastic of a bright colour that makes them more conspicuous and easier to retrieve in an emergency and assists in distinguishing them from conventional firearms.
Very pistols, typical of the type used in the Second World War, are of one-inch bore (26.5mm), now known as "Calibre 4" for signal pistols. These are still available and more recent longer-barrel models can also fire parachute flares. Many newer models fire smaller 12-gauge flares. In countries where possession of firearms is strictly controlled, such as the United Kingdom, the use of Very pistols as emergency equipment on boats is less common than in, for example, the United States. In such locations, distress flares are more commonly fired from single-shot tube devices which are then disposed of after use. These devices are fired by twisting or striking a pad on one end, but the contents are otherwise similar to a round from a flare gun, although the flares themselves are much larger and can burn brighter for longer. In the Russian Federation, which also has strict controls on firearms, a special tube-shaped flare launching device called a "Hunter's Signal" (Сигнал Охотника) is available. This is reusable but is deliberately designed in a way to avoid resemblance to a gun.
Flare guns may be used whenever someone needs to send a distress signal. The flares must be shot directly above, making the signal visible for a longer period of time and revealing the position of whoever is in need of assistance. There are four distinct flare calibers: 12-gauge (18.53mm), 25mm, 26.5mm, and 37mm, the first three being the most popular for boaters.
Use as weapons
Flare guns may be used for the destruction of flammable material, or in an anti-personnel role.
Pocket mortars
In World War II, Germany manufactured grenades designed to be fired from adapted flare guns known as the Sturmpistole in its final form. Fragmentation and anti-tank grenades were produced, but the latter would likely have been of limited use against late-war armoured vehicles.
The Soviets developed the Baranov pocket mortar during 1943, which fired a 175g round with an 8g explosive charge out to a range of 200-350m (it was also proposed to increase this to 600-700m). A later development was the PSA/PSA-1/ASP, a copy of the US issue M8 flare pistol. This fired an experimental grenade which was 40% more powerful than that used with the Kampfpistole.
Conversion kits
Conversion kits are available intended to convert flare guns to accept conventional ammunition by use of barrel inserts. There are also 12 gauge inserts intended to allow use of rifle or pistol ammunition in conventional 12 gauge shotguns. Use of any of these devices in the Orion plastic 12 gauge flare gun is not recommended by the manufacturer and ATF tests have demonstrated that sometimes a single use results in a catastrophic failure. In the United States, if these conversion kits are used in a metal flare gun, the converted gun is considered to be a firearm by the ATF. If a rifled barrel insert is used, the converted firearm is classified as a pistol; if a smoothbore barrel insert is used, the converted firearm is classified as an AOW subject to the additional requirements of the NFA. Flare cartridges are low pressure compared to conventional ammunition and even metal flare guns are not designed or intended to be used with conventional ammunition. Conversion of a flare gun to fire conventional ammunition may also be restricted by local improvised firearm laws.
See also
37 mm flare
References
Further reading
External links
History of the Very pistol with many examples
WW German signal-pistol grenades and their use by tank crews.
Rescue equipment
Optical communications | Flare gun | [
"Engineering"
] | 903 | [
"Optical communications",
"Telecommunications engineering"
] |
1,347,035 | https://en.wikipedia.org/wiki/GE%20Automation%20%26%20Controls | General Electric Automation and Controls division combines what was formerly known as GE Intelligent Platforms and Alstom's Power Automation and Controls. In 2019, GE Intelligent Platforms was acquired by Emerson Electric and is now part of Emerson's Discrete Automation business unit.
GE Automation and Controls produces Programmable Logic Controller (PLC) and Programmable Automation Controller (PAC) based control systems, I/O, and field devices, along with support to design, commission and operate industrial assets and operations. Industries served include manufacturing, food and beverage, life sciences, power, oil and gas, mining and metals, water and wastewater, and specialty machinery.
History
In 1986, GE Fanuc Automation Corporation was jointly established in the US by FANUC and General Electric (GE). Under the joint venture company, three operating companies, GE Fanuc Automation North America, Inc., in the U.S., GE Fanuc Automation Europe S.A. in Luxembourg, and Fanuc GE Automation Asia Ltd. in Japan were established (the Asian company was established in 1987).
In 2007, the company was renamed to GE Fanuc Intelligent Platforms (and GE Fanuc Automation Solutions Europe SA became GE Fanuc Intelligent Platforms Europe SA). GE Fanuc Automation CNC Europe changed its name to Fanuc GE CNC Europe.
In 2009, GE and Fanuc agreed to dissolve joint venture and the software, controls and embedded business became part of GE, under the new name GE Intelligent Platforms.
In 2015, GE Intelligent Platforms, Inc. changed its name to Automation & Controls upon acquisition of Alstom's Power Automation & Controls business.
In 2018, amidst restructuring plans for the whole General Electric group, it was announced that Emerson Electric was to acquire Intelligent Platforms, and the deal was completed on February 1, 2019.
Acquisitions
1998: completed acquisition of AFE Technologies (first purchased 70% in late 1996)
1998: acquired Total Control Products
2000: acquired DataViews Corp
2001: acquired VMIC
2002: acquired Intellution, Inc.
2003: acquired RAMiX
2003: acquired Mountain Systems, Inc.
2006: acquired (technology assets of) Condor Engineering
2006: acquired SBS Technologies
2006: acquired Radstone Technology PLC
2008: acquired process technology assets from MTL Instruments Group
2011: acquired SmartSignal, Inc
2011: acquired technology assets of CSense Systems (Pty) Ltd.
2015: acquired Alstom Power Automation & Controls
2015: sold its embedded systems division to Veritas Capital, now known as Abaco Systems
2019: acquired by Emerson Electric
References
External links
1987 establishments in Virginia
2019 disestablishments in Virginia
2019 mergers and acquisitions
Albemarle County, Virginia
American companies disestablished in 2019
American companies established in 1987
Computer companies disestablished in 2019
Computer companies established in 1987
Defunct computer companies of the United States
Defunct computer hardware companies
Defunct software companies of the United States
Former General Electric subsidiaries
Industrial automation
MES software
Software companies based in Virginia
Software companies established in 1987
Software companies disestablished in 2019 | GE Automation & Controls | [
"Engineering"
] | 606 | [
"Industrial automation",
"Automation",
"Industrial engineering"
] |
1,347,871 | https://en.wikipedia.org/wiki/Halocline | In oceanography, a halocline (from Greek hals, halos 'salt' and klinein 'to slope') is a cline, a subtype of chemocline caused by a strong, vertical salinity gradient within a body of water. Because salinity (in concert with temperature) affects the density of seawater, it can play a role in its vertical stratification. Increasing salinity by one kg/m3 results in an increase of seawater density of around 0.7 kg/m3.
Description
In the midlatitudes, an excess of evaporation over precipitation leads to surface waters being saltier than deep waters. In such regions, the vertical stratification is due to surface waters being warmer than deep waters and the halocline is destabilizing. Such regions may be prone to salt fingering, a process which results in the preferential mixing of salinity.
In certain high latitude regions (such as the Arctic Ocean, Bering Sea, and the Southern Ocean) the surface waters are actually colder than the deep waters and the halocline is responsible for maintaining water column stability, isolating the surface waters from the deep waters. In these regions, the halocline is important in allowing for the formation of sea ice, and limiting the escape of carbon dioxide to the atmosphere.
Haloclines are also found in fjords, and poorly mixed estuaries where fresh water is deposited at the ocean surface.
A halocline can be easily created and observed in a drinking glass or other clear vessel. If fresh water is slowly poured over a quantity of salt water, using a spoon held horizontally at water-level to prevent mixing, a hazy interface layer, the halocline, will soon be visible due to the varying index of refraction across the boundary.
A halocline is most commonly confused with a thermocline – a thermocline is an area within a body of water that marks a drastic change in temperature. A halocline can coincide with a thermocline and form a pycnocline.
Haloclines are common in water-filled limestone caves near the ocean. Less dense fresh water from the land forms a layer over salt water from the ocean. For underwater cave explorers, this can cause the optical illusion of air space in caverns. Passing through the halocline tends to stir up the layers.
Graph
In the graphical representation, three layers can be discerned:
About of low salinity water "swimming" on top of the ocean. The temperature is , which is very near to the freezing point. This layer blocks heat transfer from the warmer, deeper levels into the sea ice, which has considerable effect on its thickness.
About of steeply rising salinity and increasing temperature. This is the actual halocline.
The deep layer with nearly constant salinity and slowly decreasing temperature.
Other types of clines
Thermocline – A cline based on difference in water temperature,
Chemocline – A cline based on difference in water chemistry,
Pycnocline – A cline based on difference in water density.
See also
Thin layers (oceanography)
References
Aquatic ecology
Physical oceanography
Saline water | Halocline | [
"Physics",
"Chemistry",
"Biology"
] | 665 | [
"Saline water",
"Applied and interdisciplinary physics",
"Salts",
"Ecosystems",
"Physical oceanography",
"Aquatic ecology"
] |
1,347,945 | https://en.wikipedia.org/wiki/Great%20Filter | The Great Filter is the idea that, in the development of life from the earliest stages of abiogenesis to reaching the highest levels of development on the Kardashev scale, there is a barrier to development that makes detectable extraterrestrial life exceedingly rare. The Great Filter is one possible resolution of the Fermi paradox.
The concept originates in Robin Hanson's argument that the failure to find any extraterrestrial civilizations in the observable universe implies that something is wrong with one or more of the arguments (from various scientific disciplines) that the appearance of advanced intelligent life is probable; this observation is conceptualized in terms of a "Great Filter" which acts to reduce the great number of sites where intelligent life might arise to the tiny number of intelligent species with advanced civilizations actually observed (currently just one: human). This probability threshold, which could lie in the past or following human extinction, might work as a barrier to the evolution of intelligent life, or as a high probability of self-destruction. The main conclusion of this argument is that the easier it was for life to evolve to the present stage, the bleaker the future chances of humanity probably are.
The idea was first proposed in an online essay titled "The Great Filter – Are We Almost Past It?", the first version of which was written in August 1996. Hanson's formulation has received recognition in several published sources discussing the Fermi paradox and its implications.
Main argument
Fermi paradox
There is no reliable evidence that aliens have visited Earth; we have observed no intelligent extraterrestrial life with current technology, nor has SETI found any transmissions from other civilizations. The Universe, apart from the Earth, seems "dead"; Hanson states:
Our planet and solar system, however, don't look substantially colonized by advanced competitive life from the stars, and neither does anything else we see. To the contrary, we have had great success at explaining the behavior of our planet and solar system, nearby stars, our galaxy, and even other galaxies, via simple "dead" physical processes, rather than the complex purposeful processes of advanced life. Life is expected to expand to fill all available niches. With technology such as self-replicating spacecraft, these niches would include neighboring star systems and even, on longer time scales which are still small compared to the age of the universe, other galaxies. Hanson notes, "If such advanced life had substantially colonized our planet, we would know it by now."
The Great Filter
With no evidence of intelligent life in places other than Earth, it appears that the process of starting with a star and ending with "advanced explosive lasting life" must be unlikely. This implies that at least one step in this process must be improbable. Hanson's list, while incomplete, describes the following nine steps in an "evolutionary path" that results in the colonization of the observable universe:
The right star system (including organics and potentially habitable planets)
Reproductive molecules (e.g. RNA)
Simple (prokaryotic) single-cell life
Complex (eukaryotic) single-cell life
Sexual reproduction
Multi-cell life
Tool-using animals with intelligence
A civilization advancing toward the potential for a colonization explosion (where we are now)
Colonization explosion
According to the Great Filter hypothesis, at least one of these steps—if the list were complete—must be improbable. If it is not an early step (i.e., in the past), then the implication is that the improbable step lies in the future and humanity's prospects of reaching step 9 (interstellar colonization) are still bleak. If the past steps are likely, then many civilizations would have developed to the current level of the human species. However, none appear to have made it to step 9, or the Milky Way would be full of colonies. So perhaps step 9 is the unlikely one, and the only things that appear likely to keep us from step 9 are some sort of catastrophe, an underestimation of the impact of procrastination as technology increasingly unburdens existence, or resource exhaustion leading to the impossibility of making the step due to consumption of the available resources (for example highly constrained energy resources). So by this argument, finding multicellular life on Mars (provided it evolved independently) would be bad news, since it would imply steps 2–6 are easy, and hence only 1, 7, 8 or 9 (or some unknown step) could be the big problem.
Although steps 1–8 have occurred on Earth, any one of these may be unlikely. If the first seven steps are necessary preconditions to calculating the likelihood (using the local environment) then an anthropically biased observer can infer nothing about the general probabilities from its (pre-determined) surroundings.
In a 2020 paper, Jacob Haqq-Misra, Ravi Kumar Kopparapu, and Edward Schwieterman argued that current and future telescopes searching for biosignatures in the ultraviolet to near-infrared wavelengths could place upper bounds on the fraction of planets in the galaxy that host life. Meanwhile, the evolution of telescopes that can detect technosignatures at mid-infrared wavelengths could provide insights into the Great Filter. They say that if planets with technosignatures are abundant, then this can increase confidence that the Great Filter is in the past. On the other hand, if finding that life is commonplace while technosignatures are absent, then this would increase the likelihood that the Great Filter lies in the future.
Recently, paleobiologist Olev Vinn has suggested that the great filter may exist between steps 8 and 9 due to inherited behavior patterns (IBP) that initially occur in all intelligent biological organisms. These IBPs are incompatible with conditions prevailing in technological civilizations and could inevitably lead to the self-destruction of civilization in multiple ways.
In a specific formulation named the "Berserker hypothesis", a filter exists between steps 8 and 9 in which each civilization is destroyed by a lethal Von Neumann probe created by a more advanced civilization.
Responses
There are many alternative scenarios that might allow for the evolution of intelligent life to occur multiple times without either catastrophic self-destruction or glaringly visible evidence. These are possible resolutions to the Fermi paradox: "They do exist, but we see no evidence". Other ideas include: it is too expensive to spread physically throughout the galaxy; Earth is purposely isolated; it is dangerous to communicate and hence civilizations actively hide, among others.
Astrobiologists Dirk Schulze-Makuch and William Bains, reviewing the history of life on Earth, including convergent evolution, concluded that transitions such as oxygenic photosynthesis, the eukaryotic cell, multicellularity, and tool-using intelligence are likely to occur on any Earth-like planet given enough time. They argue that the Great Filter may be abiogenesis, the rise of technological human-level intelligence, or an inability to settle other worlds because of self-destruction or a lack of resources.
Astronomer Seth Shostak of the SETI Institute argues that one can postulate a galaxy filled with intelligent extraterrestrial civilizations that have failed to colonize Earth. Perhaps the aliens lacked the intent and purpose to colonize or depleted their resources, or maybe the galaxy is colonized but in a heterogeneous manner, or the Earth could be located in a "galactic backwater". Although absence of evidence generally is only weak evidence of absence, the absence of extraterrestrial megascale engineering projects, for example, might point to the Great Filter at work. Does this mean that one of the steps leading to intelligent life is unlikely? According to Shostak:
This is, of course, a variant on the Fermi paradox: We don't see clues to widespread, large-scale engineering, and consequently we must conclude that we're alone. But the possibly flawed assumption here is when we say that highly visible construction projects are an inevitable outcome of intelligence. It could be that it's the engineering of the small, rather than the large, that is inevitable. This follows from the laws of inertia (smaller machines are faster, and require less energy to function) as well as the speed of light (small computers have faster internal communication). It may be—and this is, of course, speculation—that advanced societies are building small technology and have little incentive or need to rearrange the stars in their neighborhoods, for instance. They may prefer to build nanobots instead. It should also be kept in mind that, as Arthur C. Clarke said, truly advanced engineering would look like magic to us—or be unrecognizable altogether. By the way, we've only just begun to search for things like Dyson spheres, so we can't really rule them out.

Joseph Voros, in "Macro-Perspectives Beyond the World System" (2007), points out that some researchers have attempted to search for energy signatures that could be traced to Dyson-like structures (shells, swarms, or spheres). So far, none have been found. See, for example, Tilgner & Heinrichsen, "A Program to Search for Dyson Spheres with the Infrared Space Observatory", Acta Astronautica Vol. 42 (May–June 1998), pp. 607–612; and Timofeev et al., "A search of the IRAS database for evidence of Dyson Spheres", Acta Astronautica Vol. 46 (June 2000), pp. 655–659.
See also
References
Further reading
External links
The Great Filter – Are We Almost Past It? (1998), Robin Hanson
Why Alien Life Would be our Doom - The Great Filter (2018), Kurzgesagt – In a Nutshell
Extraterrestrial life
Open problems
Fermi paradox
1996 introductions
1996 in science | Great Filter | [
"Astronomy",
"Biology"
] | 2,031 | [
"Astronomical hypotheses",
"Hypothetical life forms",
"Extraterrestrial life",
"Astronomical controversies",
"Fermi paradox",
"Biological hypotheses"
] |
2,746,861 | https://en.wikipedia.org/wiki/Turbo%20Dispatch | Turbo Dispatch is a public domain standard for the electronic transfer of job details, initially using packet radio, but now also using the internet. It is used throughout the United Kingdom to pass the details of stranded motorists between all the major UK motoring organisations and their 400 plus vehicle recovery agents. In many cases it is also used by the vehicle recovery agent to pass the details to the attending recovery vehicle.
History
On 30 June 1994, a group of representatives from the seven major UK motoring organisations and the Institute of Vehicle Recovery were invited to a meeting at Brooklands Museum. Brooklands Museum was chosen as the venue because the meeting's chairman, Andy Lambert, was involved with the museum, having transported the vast majority of the exhibits there, and could therefore show people items they would not normally get to see. He clearly hoped that this would be enough incentive to get ‘the clubs’ to sit down in the same room together. It soon emerged that it was a shared dream of all those present that ‘common standards’ for all aspects of vehicle recovery could be introduced to the industry. Amongst other things, this group laid the foundations of the Turbo Dispatch project.
Because of the reliability of delivery needed, it was decided that Mobitex should be used. In the UK, there was only one provider of Mobitex, RAM Data, which later became a subsidiary of BT called Transcomm; this is why many users still refer to ‘RAMing’ jobs. Ian Lane of Motor Trade Software (MTS) designed and wrote the protocols along with the gateway software. Much pioneering work was carried out during early 1995. In the autumn of 1995, Green Flag and Delta Rescue were the first motoring organisations to start experimenting with transmissions to the garages, with the first genuine job being sent to Southbank Garage at the end of the year.
Most recovery operators first learned about Turbo Dispatch at the Association of Vehicle Recovery Operators' AVRO EX 1996 show. Although the show coincided with Green Flag's 25th anniversary, the motoring organisation set up a mini ‘Control Room’ there and let people see for the first time how the system worked. By the end of that year, the Automobile Association had set up a trial in London, sending jobs to selected recovery operators.
Towards the end of 1997, the AA's Evan Anderson became greatly involved in promoting the concept of Turbo Dispatch within the AA. This was undoubtedly the turning point, because Evan seemed to accept that although MTS and RAM did have a monopoly, it was not an intentional one. Anyone else could develop a system, but quite simply nobody else had successfully done so.
In the following year, the RAC went live, and by the end of that year all the major players in the industry had adopted Turbo Dispatch. By 2005, it was estimated that around 92% of the 4 million ‘garaged’ breakdowns a year were sent to recovery operators using Turbo Dispatch.
The system is clearly popular with motoring organisations because of the time dispatchers save by not having to relay job details to recovery operators over the telephone. What surprised a lot of people was how popular it was with the operators as well, largely for the same reason: a busy controller does not want to take job details over the telephone when they can appear directly on his computer screen. Because he can then use the same system to dispatch the job to his driver, the whole process can be handled in seconds.
Technical review
The following notes describe the method of automatically transferring job details from a motoring organisation's (or club's) computer into their recovery operator's computer using the Mobitex data network. Since each of the motoring organisations is likely to have its own unique computer system and working practices, it is not practical to attempt to define how automatic job transfer would best be achieved for them. Consequently, only a typical recovery operator's computer installation is defined below.
Typically, the recovery operator's job logging and invoicing system is based around a personal computer. This is connected to the Mobitex data network by a MASC radio modem (sometimes called a Mobidem) using one of the PC's RS-232 communication ports. Job transmission entails the motoring organisation's computer using the recovery operator's unique Mobidem number to call his computer.
Once a connection has been made, the job details are transmitted in the data format defined in the Turbo Dispatch Format Manual, and the call is terminated. A dedicated communications computer is usually employed to constantly monitor the radio modem via the RS-232 port, running a program, TD.EXE, under Windows. Data received from the radio modem is transferred to an incoming message queue on the other computer for subsequent automatic processing by the main job-taking software (with a maximum delay of 5 seconds). An audible sound is also generated by the communications computer.
As soon as the job details are received by the recovery operator's communications computer, it transmits an automatic acknowledgement to the origin of the job message and places the job details in a queue awaiting manual acceptance by the recovery operator's controller. Job dispatch is completed when the controller either accepts or rejects the job, with the corresponding transaction being sent back to the motoring organisation.
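The flow just described (monitor the radio modem's serial port, hand each incoming message to the job-taking software via a queue, alert the controller, and return an automatic acknowledgement) can be sketched as follows. This is a minimal illustrative sketch only: the real TD.EXE and the Turbo Dispatch message format are not reproduced here, and the port name, baud rate, one-message-per-line framing, spool directory and ACK payload are all assumptions made for the example.

```python
# Illustrative sketch of the receive/queue/acknowledge loop described above.
# Assumptions (not from the Turbo Dispatch standard): COM port, baud rate,
# one-message-per-line framing, spool-directory hand-off and ACK payload.
import os
import time

import serial  # pyserial

PORT = "COM1"           # assumed serial port of the radio modem
BAUD = 9600             # assumed line speed
SPOOL_DIR = "incoming"  # assumed directory polled by the main job-taking software


def run() -> None:
    os.makedirs(SPOOL_DIR, exist_ok=True)
    modem = serial.Serial(PORT, BAUD, timeout=1)
    while True:
        raw = modem.readline()  # blocks up to 1 s; returns b"" on timeout
        if not raw:
            continue
        # Hand the message to the job-taking software; the description above
        # allows up to 5 seconds before it is picked up and processed.
        path = os.path.join(SPOOL_DIR, f"job_{int(time.time() * 1000)}.msg")
        with open(path, "wb") as f:
            f.write(raw)
        print("\a", end="", flush=True)   # audible alert for the controller
        modem.write(b"ACK\r\n")           # automatic acknowledgement (payload assumed)


if __name__ == "__main__":
    run()
```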
The Turbo Dispatch application was initially restricted to employing the MASC protocol of the Transcomm Mobile Data Network. This protocol was adopted because it eliminates the complexity of retries that would result from employing a session-based protocol on the radio modems, e.g. Hayes or X.28.
Control
The Turbo Dispatch Standards Group used to meet on average two or three times a year.
Its brief was to "Maintain a common standard for message and data formats to be used in electronic data communications between host computers and/or data terminals in fixed/mobile environments".
All major motoring organisations send representatives and all changes/additions to the standard are discussed and must be agreed by all those present at the meeting. As there is little commercial advantage to any organisation (an improvement for one will normally also benefit the others), there is usually a large amount of agreement.
In 2010, BT Transcomm announced the closure of the Mobitex Network in the UK.
Following a competitive tender process, including presentations of solutions from MTT, BT, Vodafone and Apex Networks, the contract was unanimously awarded to Apex Networks Ltd, and the migration from MTT Turbo Dispatch to an Internet-based solution named Automotive Network Services (ANS) took place in 2012. Apex Networks already had a growing number of users for its new Recovery Management System (RMS), which had initially been developed for Ontime Rescue & Recovery.
Since 2012, the ANS Standards Group has met quarterly, and the forum is attended by the vehicle recovery software providers, namely Apex Networks, MTT and Laserbyte, along with the major motoring organisations and two industry advisers (independent recovery operators that are end-users of the systems). These meetings normally last around three hours and are held at the Rhodes Arts Complex in Bishop's Stortford, or at various venues around the country provided by the motoring organisations (clubs). The forum was chaired for many years by David Brinklow (Commercial Director of Apex Networks), but has more recently been chaired by J.P. Dekker (of Apex Networks).
A copy of the ANS standard document is available to download from https://www.apex-networks.com/rms-resources
A copy of the ANS directory list is available to download from http://webservices.autonet-services.com/ansdirectory.aspx
See also
Vehicle recovery
References
Professional Recovery, December 2001, Partnership Publications
Vehicle Recovery Link, June 1997, May 1999 and July 1999, R K Solutions.
The World History of the Towing and Recovery Industry, John Hawkins, TT Publications Inc. 280 pp.
, Piers Brendon, Bloomsbury Publishing Plc, 1997. 432 pp.
Breakdown Doctor: The Real Life Adventures of a 1960s Road Rescue Man, Fred Henderson, Reading Room Publishing, 2005. 192 pp.
External links
Telecommunications-related introductions in 1994
Packet radio
Road transport in the United Kingdom
Standards of the United Kingdom
Emergency road services | Turbo Dispatch | [
"Technology"
] | 1,669 | [
"Wireless networking",
"Packet radio"
] |
2,746,930 | https://en.wikipedia.org/wiki/Equilibrium%20thermodynamics | Equilibrium thermodynamics is the systematic study of transformations of matter and energy in systems in terms of a concept called thermodynamic equilibrium. The word equilibrium implies a state of balance. Equilibrium thermodynamics, in its origins, derives from analysis of the Carnot cycle. Here, typically a system, such as a cylinder of gas, initially in its own state of internal thermodynamic equilibrium, is set out of balance via heat input from a combustion reaction. Then, through a series of steps, as the system settles into its final equilibrium state, work is extracted.
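For orientation, the central quantitative result of that analysis is the Carnot bound on the efficiency of a heat engine operating between a hot reservoir at absolute temperature T_{\rm h} and a cold reservoir at T_{\rm c} (a standard result, stated here only for reference):

\eta_{\rm Carnot} = 1 - \frac{T_{\rm c}}{T_{\rm h}}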
In an equilibrium state the potentials, or driving forces, within the system are in exact balance. A central aim of equilibrium thermodynamics is: given a system in a well-defined initial state of thermodynamic equilibrium, subject to accurately specified constraints, to calculate what the state of the system will be once it has reached a new equilibrium after the constraints are changed by an externally imposed intervention. An equilibrium state is mathematically ascertained by seeking the extrema of a thermodynamic potential function, whose nature depends on the constraints imposed on the system. For example, a chemical reaction at constant temperature and pressure will reach equilibrium at a minimum of its components' Gibbs free energy (equivalently, at a maximum of the total entropy of the system and its surroundings).
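Stated compactly (a standard textbook formulation, not tied to any particular system): with the Gibbs free energy defined as G = H - TS, any spontaneous change at constant temperature and pressure satisfies

(\mathrm{d}G)_{T,P} \le 0 ,

with equality, i.e. G at a minimum, characterizing the equilibrium state; for an isolated system the corresponding condition is (\mathrm{d}S)_{U,V} \ge 0, with the entropy maximized at equilibrium.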
Equilibrium thermodynamics differs from non-equilibrium thermodynamics in that, with the latter, the state of the system under investigation will typically not be uniform but will vary locally, with distributions of quantities such as energy, entropy, and temperature forming gradients imposed by dissipative thermodynamic fluxes. In equilibrium thermodynamics, by contrast, the state of the system is considered uniform throughout, defined macroscopically by such quantities as temperature, pressure, or volume. Systems are studied in terms of change from one equilibrium state to another; such a change is called a thermodynamic process.
Ruppeiner geometry is a type of information geometry used to study thermodynamics. It represents thermodynamic systems in terms of Riemannian geometry, and statistical properties of the system can be derived from the model. The geometrical model is based on the idea that equilibrium states can be represented by points on a surface (two-dimensional in the simplest case) and that the distance between these equilibrium states is related to the fluctuations between them.
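In one common formulation, the Ruppeiner metric is the negative Hessian of the entropy with respect to the extensive variables X^i (such as internal energy and volume),

g^{R}_{ij} = -\frac{\partial^2 S}{\partial X^i\,\partial X^j} ,

and the squared thermodynamic length \Delta\ell^2 = g^{R}_{ij}\,\Delta X^i \Delta X^j between two nearby equilibrium states governs the probability of a fluctuation between them (roughly P \propto e^{-\Delta\ell^2/2} when the entropy is measured in units of Boltzmann's constant).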
See also
Non-equilibrium thermodynamics
Thermodynamics
References
Adkins, C.J. (1983). Equilibrium Thermodynamics, 3rd Ed. Cambridge: Cambridge University Press.
Cengel, Y. & Boles, M. (2002). Thermodynamics – an Engineering Approach, 4th Ed. (textbook). New York: McGraw Hill.
Kondepudi, D. & Prigogine, I. (2004). Modern Thermodynamics – From Heat Engines to Dissipative Structures (textbook). New York: John Wiley & Sons.
Perrot, P. (1998). A to Z of Thermodynamics (dictionary). New York: Oxford University Press.
Branches of thermodynamics | Equilibrium thermodynamics | [
"Physics",
"Chemistry"
] | 662 | [
"Branches of thermodynamics",
"Thermodynamics"
] |
2,747,470 | https://en.wikipedia.org/wiki/Biological%20thermodynamics | Biological thermodynamics (the thermodynamics of biological systems) is the science that describes the nature and general laws of the thermodynamic processes occurring in living organisms, treated as nonequilibrium thermodynamic systems that convert the energy of the Sun and of food into other forms of energy. The nonequilibrium thermodynamic state of living organisms is maintained by the continuous alternation of cycles of controlled biochemical reactions, accompanied by the release and absorption of energy, which provides organisms with the capacity for phenotypic adaptation, among other properties.
History
In 1935, the first scientific work devoted to the thermodynamics of biological systems was published: the book "Theoretical Biology" by the Hungarian-Russian theoretical biologist Erwin S. Bauer (1890–1938). Bauer formulated a "Universal Law of Biology" as follows: "All and only living systems are never in equilibrium and perform constant work at the expense of their free energy against the equilibrium required by the laws of physics and chemistry under existing external conditions". This law can be considered the first law of the thermodynamics of biological systems.
In 1957, the German-British physician and biochemist Hans Krebs and the British-American biochemist Hans Kornberg first described the thermodynamics of biochemical reactions in the book "Energy Transformations in Living Matter". In their works, Krebs and Kornberg showed how, in living cells, adenosine triphosphate (ATP), the main immediate source of energy for living organisms, is synthesized from food as a result of biochemical reactions (the Krebs–Kornberg cycle).
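For orientation, the reaction at the centre of this energy transfer, together with its standard transformed Gibbs energy (a textbook value; the precise figure depends on conditions such as pH and magnesium-ion concentration), is

\mathrm{ATP} + \mathrm{H_2O} \rightarrow \mathrm{ADP} + \mathrm{P_i}, \qquad \Delta G^{\circ\prime} \approx -30.5\ \mathrm{kJ\,mol^{-1}} .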
In 2006, the Israeli-Russian scientist Boris Dobroborsky (b. 1945) published the book "Thermodynamics of Biological Systems", in which the general principles of the functioning of living organisms were formulated for the first time from the perspective of nonequilibrium thermodynamics, and the nature and properties of their basic physiological functions were explained.
The main provisions of the theory of thermodynamics of biological systems
A living organism is an active thermodynamic system (one in which energy transformations occur) that tends toward a stable nonequilibrium thermodynamic state. In plants, the nonequilibrium thermodynamic state is achieved by the continuous alternation of phases of solar-energy consumption, through photosynthesis and subsequent biochemical reactions in which adenosine triphosphate (ATP) is synthesized during the daytime, and of energy release during the splitting of ATP, mainly in the dark. Thus, one of the conditions for the existence of life on Earth is the alternation of light and dark periods of the day.
In animals, the alternating cycles of the biochemical reactions of ATP synthesis and cleavage occur automatically. Moreover, alternating cycles at the level of organs, organ systems and the whole organism, for example respiration and heart contractions, occur with different periods and manifest themselves externally as biorhythms. At the same time, the stability of the nonequilibrium thermodynamic state that is optimal under given conditions of vital activity is maintained by feedback systems regulating the biochemical reactions, in accordance with Lyapunov stability theory. This principle was formulated by Dobroborsky as the second law of the thermodynamics of biological systems, in the following wording:
The stability of the nonequilibrium thermodynamic state of biological systems is ensured by the continuous alternation of phases of energy consumption and release through controlled reactions of synthesis and cleavage of ATP.
The following consequences follow from this law:
1. In living organisms, no process can run continuously; it must alternate with a process in the opposite direction: inhalation with exhalation, work with rest, wakefulness with sleep, synthesis with cleavage, etc.
2. The state of a living organism is never static; all of its physiological and energy parameters continuously fluctuate, in both frequency and amplitude, about their average values.
This principle of functioning of living organisms provides them with the properties of phenotypic adaptation and a number of others.
See also
Bioenergetics
Ecological energetics
Harris-Benedict Equations
Stress (biology)
References
Further reading
Haynie, D. (2001). Biological Thermodynamics (textbook). Cambridge: Cambridge University Press.
Lehninger, A., Nelson, D., & Cox, M. (1993). Principles of Biochemistry, 2nd Ed (textbook). New York: Worth Publishers.
Alberty, Robert, A. (2006). Biochemical Thermodynamics: Applications of Mathematica (Methods of Biochemical Analysis), Wiley-Interscience.
External links
Cellular Thermodynamics - Wolfe, J. (2002), Encyclopedia of Life Sciences.
Bioenergetics
Thermodynamics
Thermodynamics | Biological thermodynamics | [
"Physics",
"Chemistry",
"Mathematics",
"Biology"
] | 1,042 | [
"Applied and interdisciplinary physics",
"Thermodynamics",
"Biophysics",
"Dynamical systems"
] |