| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
44,689,981 | https://en.wikipedia.org/wiki/Frenkel%20line | In thermodynamics, the Frenkel line is a proposed boundary on the phase diagram of a supercritical fluid, separating regions of qualitatively different behavior. Fluids on opposite sides of the line have been described as "liquidlike" or "gaslike", and exhibit different behaviors in terms of oscillation, excitation modes, and diffusion.
Other proposed similar boundary lines include for example the Fisher-Widom line and the Widom line.
Overview
Two types of approaches to the behavior of liquids are present in the literature. The most common one is based on a van der Waals model. It treats the liquids as dense structureless gases. Although this approach allows explanation of many principal features of fluids, in particular the liquid-gas phase transition, it fails to explain other important issues such as, for example, the existence in liquids of transverse collective excitations such as phonons.
Another approach to fluid properties was proposed by Yakov Frenkel. It is based on the assumption that at moderate temperatures, the particles of liquid behave in a manner similar to a crystal, i.e. the particles demonstrate oscillatory motions. However, while in crystals they oscillate around their nodes, in liquids, after several periods, the particles change their nodes. This approach is based on postulation of some similarity between crystals and liquids, providing insight into many important properties of the latter: transverse collective excitations, large heat capacity, and so on.
From the discussion above, one can see that the microscopic behavior of particles of moderate and high temperature fluids is qualitatively different. If one heats a fluid from a temperature close to the melting point to some high temperature, a crossover from the solid-like to the gas-like regime occurs. The line of this crossover was named the Frenkel line, after Yakov Frenkel.
Several methods to locate the Frenkel line have been proposed in the literature. The exact criterion defining the Frenkel line is the one based on a comparison of characteristic times in fluids. One can define a 'jump time' via

$$\tau_{\mathrm{jump}} = \frac{a^2}{D},$$

where $a$ is the size of the particle and $D$ is the diffusion coefficient. This is the time necessary for a particle to move a distance comparable to its own size. The second characteristic time corresponds to the shortest period of transverse oscillations of particles within the fluid, $\tau_*$. When these two time scales are roughly equal, one cannot distinguish between the oscillations of the particles and their jumps to another position. Thus the criterion for the Frenkel line is given by $\tau_{\mathrm{jump}} \approx \tau_*$.
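As a rough numerical illustration of this criterion (a sketch with assumed, order-of-magnitude values for a simple liquid; neither the particle size, the diffusion coefficient, nor the oscillation period below comes from the article):

```python
# Frenkel-line criterion: tau_jump ≈ tau_*  (all numbers are illustrative assumptions)
a = 3e-10         # particle size, m (~0.3 nm for a simple liquid)
D = 2e-9          # self-diffusion coefficient, m^2/s (typical liquid-like value)
tau_star = 1e-13  # shortest period of transverse oscillations, s (assumed)

tau_jump = a**2 / D   # time to diffuse a distance equal to the particle size: ~4.5e-11 s

if tau_jump > tau_star:
    print("tau_jump >> tau_*: oscillations and jumps are distinct (rigid-liquid side)")
else:
    print("tau_jump <= tau_*: jumps merge with oscillations (gas-like side)")
```

Heating at fixed density raises D and hence shrinks tau_jump toward tau_*, which is where the line is crossed.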
There exist several approximate criteria to locate the Frenkel line on the pressure-temperature plane. One of these criteria is based on the velocity autocorrelation function (vacf): below the Frenkel line, the vacf demonstrates oscillatory behaviour, while above it, the vacf monotonically decays to zero. The second criterion is based on the fact that at moderate temperatures, liquids can sustain transverse excitations, which disappear upon heating. One further criterion is based on isochoric heat capacity measurements. The isochoric heat capacity per particle of a monatomic liquid near the melting line is close to $3k_{\mathrm{B}}$ (where $k_{\mathrm{B}}$ is the Boltzmann constant). The contribution to the heat capacity due to the potential part of transverse excitations is $k_{\mathrm{B}}$. Therefore, at the Frenkel line, where the transverse excitations vanish, the isochoric heat capacity per particle should be $c_V = 2k_{\mathrm{B}}$, a direct prediction from the phonon theory of liquid thermodynamics.
Crossing the Frenkel line leads also to some structural crossovers in fluids. Currently Frenkel lines of several idealised liquids, such as Lennard-Jones and soft spheres, as well as realistic models such as liquid iron, hydrogen, water, and carbon dioxide, have been reported in the literature.
See also
Supercritical liquid–gas boundaries
References
External links
Liquids and Supercritical Fluids - University of Salford
Statistical mechanics | Frenkel line | [
"Physics"
] | 818 | [
"Statistical mechanics"
] |
23,348,207 | https://en.wikipedia.org/wiki/Amasa%20Stone%20Bishop | Amasa Stone Bishop (1921 – May 21, 1997) was an American nuclear physicist specializing in fusion physics. He received his B.S. in physics from the California Institute of Technology in 1943. From 1943 to 1946 he was a member of the staff of the Radiation Laboratory at the Massachusetts Institute of Technology, where he was involved with radar research and development. Later, he became a staff member of the University of California at Berkeley from 1946 to 1950; specializing in high-energy particle work, he earned his Ph.D. in physics in 1950.
After attaining his Ph.D., Amasa spent three years in Switzerland, acting as research associate at the Federal Institute of Technology in Zürich, and later at the University of Zürich. In 1953 Amasa joined the research division of the Atomic Energy Commission (AEC) in Washington and became the director of the American program to develop controlled fusion, also known as Project Sherwood. He was later presented the AEC Outstanding Service Award for his work. After leaving this position in 1956, Amasa published a book on behalf of the AEC discussing the various attempts at harnessing fusion under Project Sherwood. The book, "Project Sherwood: The U.S. Program in Controlled Fusion", was published in 1958.
After 1956 Amasa also served as the AEC's European scientific representative, based in Paris. He was also an assistant delegate to the European atomic energy agency, Euratom, in Brussels. Later he spent several years in Princeton, New Jersey, and was in charge of the fusion program in Washington.
In 1970 Amasa joined the United Nations in Europe as director of environment of the United Nations Economic Commission for Europe. During this position, he worked with scientists and diplomats to create solutions for various environmental problems. He left this position to retire in 1980. Amasa died on May 21, 1997, of pneumonia related to Alzheimer's disease at the Clinique de Genolier in Genolier, Switzerland.
Bishop was the great-grandson of the industrialist Amasa Stone.
See also
Timeline of nuclear fusion
References
1921 births
1997 deaths
Sherwood
American nuclear physicists
Deaths from pneumonia in Switzerland
Nuclear fusion
California Institute of Technology alumni
UC Berkeley College of Letters and Science alumni
American expatriates in Switzerland
Scientists from Cleveland | Amasa Stone Bishop | [
"Physics",
"Chemistry",
"Engineering"
] | 456 | [
"Nuclear fusion",
"nan",
"Nuclear physics"
] |
23,355,682 | https://en.wikipedia.org/wiki/Flux%20pumping | Flux pumping is a method for magnetising superconductors to fields in excess of 15 teslas. The method can be applied to any type II superconductor and exploits a fundamental property of superconductors, namely their ability to support and maintain currents on the length scale of the superconductor. Conventional magnetic materials are magnetised on a molecular scale which means that superconductors can maintain a flux density orders of magnitude bigger than conventional materials. Flux pumping is especially significant when one bears in mind that all other methods of magnetising superconductors require application of a magnetic flux density at least as high as the final required field. This is not true of flux pumping.
An electric current flowing in a loop of superconducting wire can persist indefinitely with no power source. In a normal conductor, an electric current may be visualized as a fluid of electrons moving across a heavy ionic lattice. The electrons are constantly colliding with the ions in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat, which is essentially the vibrational kinetic energy of the lattice ions. As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance.
The situation is different in a superconductor. In a conventional superconductor, the electronic fluid cannot be resolved into individual electrons. Instead, it consists of bound pairs of electrons known as Cooper pairs. This pairing is caused by an attractive force between electrons from the exchange of phonons. Due to quantum mechanics, the energy spectrum of this Cooper pair fluid possesses an energy gap, meaning there is a minimum amount of energy ΔE that must be supplied in order to excite the fluid. Therefore, if ΔE is larger than the thermal energy of the lattice, given by kT, where k is the Boltzmann constant and T is the temperature, the fluid will not be scattered by the lattice. The Cooper pair fluid is thus a superfluid, meaning it can flow without energy dissipation.
In a class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely small amount of resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of vortices in the electronic superfluid, which dissipates some of the energy carried by the current. If the current is sufficiently small, then the vortices are stationary, and the resistivity vanishes. The resistance due to this effect is tiny compared with that of non-superconducting materials, but must be taken into account in sensitive experiments.
Introduction
In the method described here a magnetic field is swept across the superconductor in a magnetic wave. This field induces current according to Faraday's law of induction. As long as the direction of motion of the magnetic wave is constant then the current induced will always be in the same sense and successive waves will induce more and more current.
Traditionally, the magnetic wave would be generated either by physically moving a magnet or by an arrangement of coils switched in sequence, as on the stator of a three-phase motor. Flux pumping is a solid-state method in which a material that changes magnetic state at a suitable magnetic ordering temperature is heated at its edge; the resulting thermal wave produces a magnetic wave which then magnetizes the superconductor. A superconducting flux pump should not be confused with a classical flux pump as described in van de Klundert et al.'s review.
The method described here has two unique features:
At no point is the superconductor driven normal; the procedure simply makes modifications to the critical state.
The critical state is not modified by a moving magnet or an array of solenoids, but by a thermal pulse which modifies the magnetization, thus sweeping vortices into the material.
The system, as described, is actually a novel kind of heat engine in which thermal energy is being converted into magnetic energy.
Background
Meissner effect
When a superconductor is placed in a weak external magnetic field H, the field penetrates the superconductor only a small distance λ, called the London penetration depth, decaying exponentially to zero within the interior of the material. This is called the Meissner effect, and is a defining characteristic of superconductivity. For most superconductors, the London penetration depth is on the order of 100 nm.
The Meissner effect is sometimes confused with the kind of diamagnetism one would expect in a perfect electrical conductor: according to Lenz's law, when a changing magnetic field is applied to a conductor, it will induce an electric current in the conductor that creates an opposing magnetic field. In a perfect conductor, an arbitrarily large current can be induced, and the resulting magnetic field exactly cancels the applied field.
The Meissner effect is distinct from this because a superconductor expels all magnetic fields, not just those that are changing. Suppose we have a material in its normal state, containing a constant internal magnetic field. When the material is cooled below the critical temperature, we would observe the abrupt expulsion of the internal magnetic field, which we would not expect based on Lenz's law.
The Meissner effect was explained by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided

$$\nabla^2 \mathbf{H} = \frac{\mathbf{H}}{\lambda^2},$$

where H is the magnetic field and λ is the London penetration depth.
This equation, which is known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface.
In 1962, the first commercial superconducting wire, a niobium-titanium alloy, was developed by researchers at Westinghouse, allowing the construction of the first practical superconducting magnets. In the same year, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum $\Phi_0 = h/2e$, and thus (coupled with the quantum Hall resistivity) for the Planck constant h. Josephson was awarded the Nobel Prize for this work in 1973.
E–J power law
The most popular models used to describe superconductivity include Bean's critical state model and variations such as the Kim–Anderson model. However, the Bean model assumes zero resistivity and that current is always induced at the critical current. A more useful model for engineering applications is the so-called E–J power law, in which the field and the current are linked by the following equations:

$$E = E_0 \left( \frac{J}{J_c} \right)^n, \qquad \rho = \frac{E}{J} = \frac{E_0}{J_c} \left( \frac{J}{J_c} \right)^{n-1}.$$
In these equations, if n = 1 then the conductor has linear resistivity such as is found in copper. The higher the n-value the closer we get to the critical state model. Also the higher the n-value then the "better" the superconductor as the lower the resistivity at a certain current. The E–J power law can be used to describe the phenomenon of flux-creep in which a superconductor gradually loses its magnetisation over time. This process is logarithmic and thus gets slower and slower and ultimately leads to very stable fields.
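A short sketch of the E–J power law showing the role of the n-value (the electric-field criterion E0 and the normalisation are assumptions, not values from the article; E0 = 1 μV/cm is a common laboratory convention):

```python
import numpy as np

def e_field(j, j_c, n, e_0=1e-4):
    """E-J power law E = E0 * (J/Jc)^n, with E0 in V/m (1e-4 V/m = 1 uV/cm)."""
    return e_0 * (j / j_c) ** n

j = np.linspace(0.0, 1.2, 7)   # current density in units of Jc
for n in (1, 5, 25):           # n = 1 is ohmic; large n approaches the critical state model
    print(f"n = {n:2d}:", np.round(e_field(j, 1.0, n), 8))
```

For n = 1 the field grows linearly with current (ordinary resistivity), while for large n the field stays negligible below Jc and rises steeply above it, which is the critical-state-like behaviour described above.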
Theory
The potential of superconducting coils and bulk melt-processed YBCO single domains to maintain significant magnetic fields at cryogenic temperatures makes them particularly attractive for a variety of engineering applications including superconducting magnets, magnetic bearings and motors. It has already been shown that large fields can be obtained in single domain bulk samples at 77 K. A range of possible applications exist in the design of high power density electric motors.
Before such devices can be created, a major problem needs to be overcome. Even though all of these devices use a superconductor in the role of a permanent magnet, and even though the superconductor can trap potentially huge magnetic fields (greater than 10 T), the problem is the induction of the magnetic fields; this applies both to bulk superconductors and to coils operating in persistent mode. There are four possible known methods:
Cooling in field;
Zero field cooling, followed by slowly applied field;
Pulse magnetization;
Flux pumping;
Any of these methods could be used to magnetise the superconductor and this may be done either in situ or ex situ. Ideally the superconductors are magnetised in situ.
There are several reasons for this: first, if the superconductors should become demagnetised through (i) flux creep, (ii) repeatedly applied perpendicular fields or (iii) by loss of cooling then they may be re-magnetized without the need to disassemble the machine. Secondly, there are difficulties with handling very strongly magnetized material at cryogenic temperatures when assembling the machine. Thirdly, ex situ methods would require the machine to be assembled both cold and pre-magnetized and would offer significant design difficulties. Until room temperature superconductors can be prepared, the most efficient design of machine will therefore be one in which an in situ magnetizing fixture is included.
The first three methods all require a solenoid which can be switched on and off. In the first method an applied magnetic field is required equal to the required magnetic field, whilst the second and third approaches require fields at least two times greater. The final method, however, offers significant advantages since it achieves the final required field by repeated applications of a small field and can utilise a permanent magnet.
If we wish to pulse a field using, say, a 10 T magnet to magnetize a 30 mm × 10 mm sample, then we can work out how big the solenoid needs to be. If it were possible to wind an appropriate coil using YBCO tape then, assuming an Ic of 70 A and a thickness of 100 μm, we would have 100 turns and 7,000 ampere-turns. This would produce a B field of approximately 7,000/(20 × 10⁻³) × 4π × 10⁻⁷ ≈ 0.4 T. To produce 10 T would require pulsing to roughly 1,400 A. An alternative calculation would be to assume a Jc of, say, 5 × 10⁸ A m⁻² and a coil of 1 cm² cross-section. The field would then be 5 × 10⁸ × 10⁻² × (2 × 4π × 10⁻⁷) ≈ 10 T. Clearly, if the magnetization fixture is not to occupy more room than the puck itself, then a very high activation current would be required, and either constraint makes in situ magnetization a very difficult proposition. What is required for in situ magnetization is a method in which a relatively small field, of the order of milliteslas, applied repeatedly, is used to magnetize the superconductor.
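The two order-of-magnitude estimates above can be reproduced directly (a sketch; the long-solenoid formula B ≈ μ0 × ampere-turns / length is itself only an approximation):

```python
import math

mu0 = 4 * math.pi * 1e-7        # vacuum permeability, T*m/A

# Estimate 1: 100-turn YBCO tape coil carrying Ic = 70 A over a 20 mm length
B1 = mu0 * 100 * 70 / 20e-3     # ~0.44 T, matching the ~0.4 T quoted above
I_needed = 70 * 10 / B1         # linear scaling to 10 T: ~1.6 kA, the order of the quoted 1,400 A

# Estimate 2: Jc = 5e8 A/m^2 over a 1 cm coil dimension, with the factor 2 used in the text
B2 = 5e8 * 1e-2 * (2 * mu0)     # ~12.6 T, the order of the quoted 10 T

print(f"B1 = {B1:.2f} T, I for 10 T ~ {I_needed:.0f} A, B2 = {B2:.1f} T")
```

Either way the activation current or conductor cross-section is impractically large, which is the argument for a repeatedly applied, millitesla-scale flux-pumping field.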
Applications
Superconducting magnets are some of the most powerful electromagnets known. They are used in MRI and NMR machines, mass spectrometers, Magnetohydrodynamic Power Generation and beam-steering magnets used in particle accelerators. They can also be used for magnetic separation, where weakly magnetic particles are extracted from a background of less or non-magnetic particles, as in the pigment industries.
Other early markets are arising where the relative efficiency, size and weight advantages of devices based on HTS outweigh the additional costs involved.
Promising future applications include high-performance transformers, power storage devices, electric power transmission, electric motors (e.g. for vehicle propulsion, as in vactrains or maglev trains), magnetic levitation devices, and fault current limiters.
References
Sources
Qiuliang Wang et al., "Study of Full-wave Superconducting Rectifier-type Flux-pumps", IEEE Transactions on Magnetics, vol. 32, No. 4, pp. 2699–2702, Jul. 1996.
L.J.M. van de Klundert et al., "On fully conducting rectifiers and fluxpumps. A review. Part 2: Commutation modes, characteristics and switches", Cryogenics, pp. 267–277, May 1981.
L.J.M. van de Klundert et al., "Fully superconducting rectifiers and fluxpumps Part 1: Realized methods for pumping flux", Cryogenics, pp. 195–206, Apr. 1981.
Kleinert, Hagen, Gauge Fields in Condensed Matter, Vol. I, " SUPERFLOW AND VORTEX LINES"; Disorder Fields, Phase Transitions, pp. 1–742, World Scientific (Singapore, 1989); Paperback (also readable online: Vol. I)
Larkin, Anatoly; Varlamov, Andrei, Theory of Fluctuations in Superconductors, Oxford University Press, Oxford, United Kingdom, 2005 ()
External links
Recent publications
Magnetic levitation
Superconductivity
Spintronics | Flux pumping | [
"Physics",
"Materials_science",
"Engineering"
] | 2,706 | [
"Physical quantities",
"Spintronics",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
23,364,086 | https://en.wikipedia.org/wiki/Hertzsprung%E2%80%93Russell%20diagram | The Hertzsprung–Russell diagram (abbreviated as H–R diagram, HR diagram or HRD) is a scatter plot of stars showing the relationship between the stars' absolute magnitudes or luminosities and their stellar classifications or effective temperatures. The diagram was created independently in 1911 by Ejnar Hertzsprung and by Henry Norris Russell in 1913, and represented a major step towards an understanding of stellar evolution.
Historical background
In the nineteenth century large-scale photographic spectroscopic surveys of stars were performed at Harvard College Observatory, producing spectral classifications for tens of thousands of stars, culminating ultimately in the Henry Draper Catalogue. In one segment of this work Antonia Maury included divisions of the stars by the width of their spectral lines. Hertzsprung noted that stars described with narrow lines tended to have smaller proper motions than the others of the same spectral classification. He took this as an indication of greater luminosity for the narrow-line stars, and computed secular parallaxes for several groups of these, allowing him to estimate their absolute magnitude.
In 1910 Hans Oswald Rosenberg published a diagram plotting the apparent magnitude of stars in the Pleiades cluster against the strengths of the calcium K line and two hydrogen Balmer lines. These spectral lines serve as a proxy for the temperature of the star, an early form of spectral classification. The apparent magnitude of stars in the same cluster is equivalent to their absolute magnitude and so this early diagram was effectively a plot of luminosity against temperature. The same type of diagram is still used today as a means of showing the stars in clusters without having to initially know their distance and luminosity. Hertzsprung had already been working with this type of diagram, but his first publications showing it were not until 1911. This was also the form of the diagram using apparent magnitudes of a cluster of stars all at the same distance.
Russell's early (1913) versions of the diagram included Maury's giant stars identified by Hertzsprung, those nearby stars with parallaxes measured at the time, stars from the Hyades (a nearby open cluster), and several moving groups, for which the moving cluster method could be used to derive distances and thereby obtain absolute magnitudes for those stars.
Forms of diagram
There are several forms of the Hertzsprung–Russell diagram, and the nomenclature is not very well defined. All forms share the same general layout: stars of greater luminosity are toward the top of the diagram, and stars with higher surface temperature are toward the left side of the diagram.
The original diagram displayed the spectral type of stars on the horizontal axis and the absolute visual magnitude on the vertical axis. The spectral type is not a numerical quantity, but the sequence of spectral types is a monotonic series that reflects the stellar surface temperature. Modern observational versions of the chart replace spectral type by a color index (in diagrams made in the middle of the 20th Century, most often the B-V color) of the stars. This type of diagram is what is often called an observational Hertzsprung–Russell diagram, or specifically a color–magnitude diagram (CMD), and it is often used by observers. In cases where the stars are known to be at identical distances such as within a star cluster, a color–magnitude diagram is often used to describe the stars of the cluster with a plot in which the vertical axis is the apparent magnitude of the stars. For cluster members, by assumption there is a single additive constant difference between their apparent and absolute magnitudes, called the distance modulus, for all of that cluster of stars. Early studies of nearby open clusters (like the Hyades and Pleiades) by Hertzsprung and Rosenberg produced the first CMDs, a few years before Russell's influential synthesis of the diagram collecting data for all stars for which absolute magnitudes could be determined.
Another form of the diagram plots the effective surface temperature of the star on one axis and the luminosity of the star on the other, almost invariably in a log-log plot. Theoretical calculations of stellar structure and the evolution of stars produce plots that match those from observations. This type of diagram could be called a temperature–luminosity diagram, but this term is hardly ever used; when the distinction is made, this form is called the theoretical Hertzsprung–Russell diagram instead. A peculiar characteristic of this form of the H–R diagram is that the temperatures are plotted from high temperature to low temperature, which aids in comparing this form of the H–R diagram with the observational form.
Although the two types of diagrams are similar, astronomers make a sharp distinction between the two. The reason for this distinction is that the exact transformation from one to the other is not trivial. To go between effective temperature and color requires a color–temperature relation, and constructing that is difficult; it is known to be a function of stellar composition and can be affected by other factors like stellar rotation. When converting luminosity or absolute bolometric magnitude to apparent or absolute visual magnitude, one requires a bolometric correction, which may or may not come from the same source as the color–temperature relation. One also needs to know the distance to the observed objects (i.e., the distance modulus) and the effects of interstellar obscuration, both in the color (reddening) and in the apparent magnitude (where the effect is called "extinction"). Color distortion (including reddening) and extinction (obscuration) are also apparent in stars having significant circumstellar dust. The ideal of direct comparison of theoretical predictions of stellar evolution to observations thus has additional uncertainties incurred in the conversions between theoretical quantities and observations.
Interpretation
Most of the stars occupy the region in the diagram along the line called the main sequence. During the stage of their lives in which stars are found on the main sequence line, they are fusing hydrogen in their cores. The next concentration of stars is on the horizontal branch (helium fusion in the core and hydrogen burning in a shell surrounding the core). Another prominent feature is the Hertzsprung gap located in the region between A5 and G0 spectral type and between +1 and −3 absolute magnitudes (i.e., between the top of the main sequence and the giants in the horizontal branch). RR Lyrae variable stars can be found on the left side of this gap, on a section of the diagram called the instability strip. Cepheid variables also fall on the instability strip, at higher luminosities.
The H-R diagram can be used by scientists to roughly measure how far away a star cluster or galaxy is from Earth. This can be done by comparing the apparent magnitudes of the stars in the cluster to the absolute magnitudes of stars with known distances (or of model stars). The observed group is then shifted in the vertical direction, until the two main sequences overlap. The difference in magnitude that was bridged in order to match the two groups is called the distance modulus and is a direct measure for the distance (ignoring extinction). This technique is known as main-sequence fitting and is a type of spectroscopic parallax. Not only the main-sequence turn-off but also the tip of the red-giant branch can be used.
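A minimal sketch of the distance-modulus step in main-sequence fitting (the 4.0 mag vertical shift is an invented example value, and extinction is ignored as in the text):

```python
def distance_from_modulus(mu):
    """Distance in parsecs from the distance modulus mu = m - M = 5*log10(d/10 pc)."""
    return 10 ** ((mu + 5) / 5)

# Suppose overlapping a cluster's main sequence onto a calibrated one requires
# shifting it vertically by 4.0 magnitudes (hypothetical number):
mu = 4.0
print(f"distance ~ {distance_from_modulus(mu):.0f} pc")   # ~63 pc
```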
The diagram seen by ESA's Gaia mission
ESA's Gaia mission showed several features in the diagram that were previously unknown or only suspected to exist. It found a gap in the main sequence that appears for M-dwarfs, explained by the transition from a partly convective core to a fully convective core. For white dwarfs the diagram shows several features. Two main concentrations appear in this diagram, following the cooling sequence of white dwarfs, that are explained by the atmospheric composition of white dwarfs, especially hydrogen-dominated versus helium-dominated atmospheres. A third concentration is explained by core crystallization in the white dwarf interior, which releases energy and delays the cooling of white dwarfs.
Role in the development of stellar physics
Contemplation of the diagram led astronomers to speculate that it might demonstrate stellar evolution, the main suggestion being that stars collapsed from red giants to dwarf stars, then moving down along the line of the main sequence in the course of their lifetimes. Stars were thought therefore to radiate energy by converting gravitational energy into radiation through the Kelvin–Helmholtz mechanism. This mechanism resulted in an age for the Sun of only tens of millions of years, creating a conflict over the age of the Solar System between astronomers on the one hand, and biologists and geologists, who had evidence that the Earth was far older than that, on the other. This conflict was only resolved in the 1930s when nuclear fusion was identified as the source of stellar energy.
Following Russell's presentation of the diagram to a meeting of the Royal Astronomical Society in 1912, Arthur Eddington was inspired to use it as a basis for developing ideas on stellar physics. In 1926, in his book The Internal Constitution of the Stars, he explained the physics of how stars fit on the diagram. The work anticipated the later discovery of nuclear fusion and correctly proposed that the star's source of power was the combination of hydrogen into helium, liberating enormous energy. This was a particularly remarkable intuitive leap, since at that time the source of a star's energy was still unknown, thermonuclear energy had not been proven to exist, and even the fact that stars are largely composed of hydrogen (see metallicity) had not yet been discovered. Eddington managed to sidestep this problem by concentrating on the thermodynamics of radiative transport of energy in stellar interiors. He predicted that dwarf stars remain in an essentially static position on the main sequence for most of their lives. In the 1930s and 1940s, with an understanding of hydrogen fusion, came an evidence-backed theory of evolution to red giants, followed by speculated cases of explosion and implosion of the remnants into white dwarfs. The term supernova nucleosynthesis is used to describe the creation of elements during the evolution and explosion of a pre-supernova star, a concept put forth by Fred Hoyle in 1954. Quantum-mechanical and classical models of stellar processes enable the Hertzsprung–Russell diagram to be annotated with known conventional paths known as stellar sequences; rarer and more anomalous examples continue to be added as more stars are analysed and mathematical models refined.
See also
References
Bibliography
External links
Omega Cen H-R animation of a Hertzsprung–Russell diagram created from real Hubble data
JavaHRD an interactive Hertzsprung–Russell diagram as a Java applet
BaSTI a Bag of Stellar Tracks and Isochrones, simulations with FRANEC code by Teramo Astronomical Observatory
Leos Ondra: The first Hertzsprung-Russell diagram
Who first published a Hertzsprung-Russell diagram? Hertzsprung or Russell? Answer: neither!
Stellar evolution
Concepts in astronomy
Diagrams
1910 introductions | Hertzsprung–Russell diagram | [
"Physics",
"Astronomy"
] | 2,262 | [
"Concepts in astronomy",
"Astrophysics",
"Stellar evolution"
] |
21,848,654 | https://en.wikipedia.org/wiki/Absorption%20hardening | In the field of nuclear engineering, absorption hardening is the increase in average energy of neutrons in a population by preferential absorption of lower-energy neutrons. This occurs because absorption cross-sections typically increase for lower neutron energies.
References
Weston M. Stacey, Nuclear Reactor Physics, 2nd ed. (Wiley-VCH, 2007)
Nuclear technology | Absorption hardening | [
"Physics"
] | 72 | [
"Nuclear technology",
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |
21,850,245 | https://en.wikipedia.org/wiki/Circular%20ensemble | In the theory of random matrices, the circular ensembles are measures on spaces of unitary matrices introduced by Freeman Dyson as modifications of the Gaussian matrix ensembles. The three main examples are the circular orthogonal ensemble (COE) on symmetric unitary matrices, the circular unitary ensemble (CUE) on unitary matrices, and the circular symplectic ensemble (CSE) on self dual unitary quaternionic matrices.
Probability distributions
The distribution of the circular unitary ensemble CUE(n) is the Haar measure on the unitary group U(n). If U is a random element of CUE(n), then $U^T U$ is a random element of COE(n); if U is a random element of CUE(2n), then $U^R U$ is a random element of CSE(n), where $U^R = Z U^T Z^T$ denotes the quaternion dual of U, with

$$Z = I_n \otimes \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$$

Each element of a circular ensemble is a unitary matrix, so it has eigenvalues on the unit circle: $\lambda_k = e^{i\theta_k}$ with $0 \le \theta_k < 2\pi$ for k = 1, 2, ..., n, where the $\theta_k$ are also known as eigenangles or eigenphases. In the CSE each of these n eigenvalues appears twice. The distributions have densities with respect to the eigenangles, given by

$$p(\theta_1, \ldots, \theta_n) = \frac{1}{Z_{n,\beta}} \prod_{1 \le j < k \le n} \left| e^{i\theta_j} - e^{i\theta_k} \right|^{\beta}$$

on $[0, 2\pi)^n$ (symmetrized version), where β=1 for COE, β=2 for CUE, and β=4 for CSE. The normalisation constant $Z_{n,\beta}$ is given by

$$Z_{n,\beta} = (2\pi)^n \, \frac{\Gamma(\beta n/2 + 1)}{\left( \Gamma(\beta/2 + 1) \right)^n},$$

as can be verified via Selberg's integral formula, or Weyl's integral formula for compact Lie groups.
Generalizations
Generalizations of the circular ensemble restrict the matrix elements of U to real numbers [so that U is in the orthogonal group O(n)] or to real quaternion numbers [so that U is in the symplectic group Sp(2n)]. The Haar measure on the orthogonal group produces the circular real ensemble (CRE) and the Haar measure on the symplectic group produces the circular quaternion ensemble (CQE).
The eigenvalues of orthogonal matrices come in complex conjugate pairs $e^{i\theta_k}$ and $e^{-i\theta_k}$, possibly complemented by eigenvalues fixed at +1 or −1. For n = 2m even and det U = 1, there are no fixed eigenvalues and the phases θk have probability distribution

$$P(\theta_1, \ldots, \theta_m) = C \prod_{j<k} \left( \cos\theta_j - \cos\theta_k \right)^2,$$

with C an unspecified normalization constant. For n = 2m+1 odd there is one fixed eigenvalue σ = det U equal to ±1. The phases have distribution

$$P(\theta_1, \ldots, \theta_m) = C \prod_{k} \left( 1 - \sigma \cos\theta_k \right) \prod_{j<k} \left( \cos\theta_j - \cos\theta_k \right)^2.$$

For n = 2m+2 even and det U = −1 there is a pair of eigenvalues fixed at +1 and −1, while the phases have distribution

$$P(\theta_1, \ldots, \theta_m) = C \prod_{k} \left( 1 - \cos^2\theta_k \right) \prod_{j<k} \left( \cos\theta_j - \cos\theta_k \right)^2.$$

This is also the distribution of the eigenvalues of a matrix in Sp(2m).
These probability density functions are referred to as Jacobi distributions in the theory of random matrices, because correlation functions can be expressed in terms of Jacobi polynomials.
Calculations
Averages of products of matrix elements in the circular ensembles can be calculated using Weingarten functions. For large dimension of the matrix these calculations become impractical, and a numerical method is advantageous. There exist efficient algorithms to generate random matrices in the circular ensembles, for example by performing a QR decomposition on a Ginibre matrix.
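A sketch of that QR-based sampler (the phase correction using R's diagonal follows Mezzadri's well-known recipe; the raw Q from QR alone is not Haar-distributed):

```python
import numpy as np

def sample_cue(n, rng=None):
    """Draw a Haar-random unitary matrix, i.e. an element of CUE(n)."""
    rng = rng or np.random.default_rng()
    # Ginibre matrix: independent complex Gaussian entries
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    d = np.diagonal(r)
    return q * (d / np.abs(d))   # rescale column phases so the law is exactly Haar

u = sample_cue(4)
coe = u.T @ u                    # symmetric unitary: an element of COE(4)
print(np.sort(np.angle(np.linalg.eigvals(u))))   # eigenangles of the CUE sample
```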
References
Software Implementations
External links
Random matrices
Mathematical physics
Freeman Dyson | Circular ensemble | [
"Physics",
"Mathematics"
] | 677 | [
"Random matrices",
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Matrices (mathematics)",
"Statistical mechanics",
"Mathematical physics"
] |
21,851,088 | https://en.wikipedia.org/wiki/R-7A%20Semyorka | The R-7A Semyorka, GRAU index 8K74, was an early Soviet intercontinental ballistic missile derived from the earlier R-7 Semyorka. It was the only member of the R-7 family of rockets to be deployed as an operational missile. The R-7A first flew on 23 December 1959, entered service on 31 December of the same year, and was formally accepted on 20 January 1960. It was declared fully operational on 12 September 1960 and was retired from service in 1968.
Twenty-eight test launches were conducted, with three failures. Most test launches occurred from Sites 1/5 and 31/6 at the Baikonur Cosmodrome. The main operational base for R-7A missiles was Plesetsk Cosmodrome, where four launch pads were used at Sites 41/1, 16/2, 43/3 and 43/4. Baikonur Site 31/6 was also used for operational missiles. Another base near Krasnoyarsk was proposed, but later cancelled.
The R-7A was designed to carry a nuclear warhead; however, there was only one occasion where a live warhead was loaded onto a missile, during the Cuban Missile Crisis. After the missile had been armed, it was rolled out to Site 41/1 at Plesetsk, and would have had a response time of 8-12 hours, should a launch have been ordered. In the event of nuclear war, other missiles would have been armed as required, in accordance with the Soviet policy of storing missiles and warheads separately.
References
R-7 (rocket family)
Nuclear warfare
Plesetsk Cosmodrome
Cuban Missile Crisis | R-7A Semyorka | [
"Chemistry"
] | 339 | [
"Radioactivity",
"Nuclear warfare"
] |
21,851,601 | https://en.wikipedia.org/wiki/Hadwiger%20conjecture%20%28combinatorial%20geometry%29 | In combinatorial geometry, the Hadwiger conjecture states that any convex body in n-dimensional Euclidean space can be covered by 2^n or fewer smaller bodies homothetic with the original body, and that furthermore, the upper bound of 2^n is necessary if and only if the body is a parallelepiped. There also exists an equivalent formulation in terms of the number of floodlights needed to illuminate the body.
The Hadwiger conjecture is named after Hugo Hadwiger, who included it on a list of unsolved problems in 1957; it was, however, previously studied by Levi (1955) and independently by Gohberg and Markus (1960). Additionally, there is a different Hadwiger conjecture concerning graph coloring—and in some sources the geometric Hadwiger conjecture is also called the Levi–Hadwiger conjecture or the Hadwiger–Levi covering problem.
The conjecture remains unsolved even in three dimensions, though the two-dimensional case was resolved by Levi (1955).
Formal statement
Formally, the Hadwiger conjecture is: If K is any bounded convex set in the n-dimensional Euclidean space $\mathbb{R}^n$, then there exists a set of $2^n$ scalars $s_i$ and a set of $2^n$ translation vectors $v_i$ such that all $s_i$ lie in the range $0 < s_i < 1$, and

$$K \subseteq \bigcup_{i=1}^{2^n} \left( s_i K + v_i \right).$$

Furthermore, the upper bound is necessary if and only if K is a parallelepiped, in which case all $2^n$ of the scalars may be chosen to be equal to 1/2.
Alternate formulation with illumination
As shown by Boltyansky, the problem is equivalent to one of illumination: how many floodlights must be placed outside of an opaque convex body in order to completely illuminate its exterior? For the purposes of this problem, a body is only considered to be illuminated if for each point of the boundary of the body, there is at least one floodlight that is separated from the body by all of the tangent planes intersecting the body on this point; thus, although the faces of a cube may be lit by only two floodlights, the planes tangent to its vertices and edges cause it to need many more lights in order for it to be fully illuminated. For any convex body, the number of floodlights needed to completely illuminate it turns out to equal the number of smaller copies of the body that are needed to cover it.
Examples
As shown in the illustration, a triangle may be covered by three smaller copies of itself, and more generally in any dimension a simplex may be covered by n + 1 copies of itself, scaled by a factor of n/(n + 1). However, covering a square by smaller squares (with parallel sides to the original) requires four smaller squares, as each one can cover only one of the larger square's four corners. In higher dimensions, covering a hypercube or more generally a parallelepiped by smaller homothetic copies of the same shape requires a separate copy for each of the vertices of the original hypercube or parallelepiped; because these shapes have 2^n vertices, 2^n smaller copies are necessary. This number is also sufficient: a cube or parallelepiped may be covered by 2^n copies, scaled by a factor of 1/2. Hadwiger's conjecture is that parallelepipeds are the worst case for this problem, and that any other convex body may be covered by fewer than 2^n smaller copies of itself.
Known results
The two-dimensional case was settled by Levi (1955): every two-dimensional bounded convex set may be covered with four smaller copies of itself, with the fourth copy needed only in the case of parallelograms. However, the conjecture remains open in higher dimensions except for some special cases. The best known asymptotic upper bound on the number of smaller copies needed to cover a given body is

$$4^n e^{-c\sqrt{n}},$$

where $c$ is a positive constant. For small $n$ the upper bound of $\binom{2n}{n} n (\ln n + \ln\ln n + 5)$ established by Rogers is better than the asymptotic one. In three dimensions it is known that 16 copies always suffice, but this is still far from the conjectured bound of 8 copies.
The conjecture is known to hold for certain special classes of convex bodies, including, in dimension three, centrally symmetric polyhedra and bodies of constant width. The number of copies needed to cover any zonotope (other than a parallelepiped) is at most $(3/4) 2^n$, while for bodies with a smooth surface (that is, having a single tangent plane per boundary point), at most $n + 1$ smaller copies are needed to cover the body, as Levi already proved.
See also
Borsuk's conjecture on covering convex bodies with sets of smaller diameter
Notes
References
Discrete geometry
Conjectures
Unsolved problems in geometry
Eponyms in geometry | Hadwiger conjecture (combinatorial geometry) | [
"Mathematics"
] | 954 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Discrete mathematics",
"Eponyms in geometry",
"Discrete geometry",
"Unsolved problems in geometry",
"Conjectures",
"Geometry",
"Mathematical problems"
] |
21,858,084 | https://en.wikipedia.org/wiki/German%20Society%20for%20Aeronautics%20and%20Astronautics | German Society for Aeronautics and Astronautics (DGLR; ) is a German aerospace society. It was founded in 1912 under the name of Wissenschaftliche Gesellschaft für Flugtechnik (WGF). It is the second oldest technical and scientific society in aerospace in the world.
The DGLR published the Zeitschrift für Flugtechnik und Motorluftschiffahrt (ZFM) ("Journal of Aviation Engineering and Motorized-Airship Aeronautics") until 1933.
History
In 1993 the Hermann-Oberth-Gesellschaft, the Otto-Lilienthal-Gesellschaft, the Gesellschaft für Raketentechnik und Weltraumfahrt e.V. and the Fachverband für Luftfahrt e.V. were combined to form the Deutsche Gesellschaft für Luft- und Raumfahrt - Lilienthal - Oberth e.V.
Awards
The following awards are given out by DGLR for outstanding contributions:
Ludwig-Prandtl-Ring
Eugen-Sänger-Medaille
Otto-Lilienthal-Medaille
See also
German Aerospace Center
References
Aerospace engineering organizations
Aviation in Germany
Organisations based in Bonn | German Society for Aeronautics and Astronautics | [
"Engineering"
] | 238 | [
"Aeronautics organizations",
"Aerospace engineering organizations",
"Aerospace engineering"
] |
26,240,106 | https://en.wikipedia.org/wiki/List%20of%20channel%20numbers%20assigned%20to%20FM%20frequencies%20in%20North%20America | In the Americas (defined as International Telecommunication Union (ITU) region 2), the FM broadcast band consists of 101 channels, each 200 kHz wide, in the frequency range from 87.8 to 108.0 MHz, with "center frequencies" running from 87.9 MHz to 107.9 MHz. For most purposes an FM station is associated with its center frequency. However, each FM frequency has also been assigned a channel number, which ranges from 200 to 300.
FM channel numbers are most commonly used for internal regulatory purposes. The range originally adopted in 1945 began with channel 201 (88.1 MHz), a value high enough to avoid confusion with television channel numbers, which over the years have had values ranging from 1 to 83. Having a gap between the highest TV channel number and the lowest FM channel number allowed for expansion, which occurred in 1978 when FM channel 200 (87.9 MHz) was added.
FM channel numbers are commonly used for listing FM Station Allotments, which are the FM station assignments designated for individual communities. In the United States they are also used in the callsigns of low-powered FM translators relaying AM or FM station signals. For example, the "237" in the callsign for translator K237FR in Tumwater, Washington indicates that the station is transmitting on channel 237, which corresponds to 95.3 MHz.
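The channel-frequency mapping implied above is linear, with channel 200 at 87.9 MHz in 0.2 MHz steps; a small sketch (the arithmetic is done in tenths of MHz to avoid floating-point drift):

```python
def channel_to_mhz(channel):
    """FM channel number (200-300) to center frequency in MHz."""
    return (879 + 2 * (channel - 200)) / 10

def mhz_to_channel(mhz):
    """Center frequency in MHz to FM channel number."""
    return round((mhz - 87.9) / 0.2) + 200

assert channel_to_mhz(237) == 95.3    # the K237FR translator example above
assert channel_to_mhz(200) == 87.9 and channel_to_mhz(300) == 107.9
assert mhz_to_channel(95.3) == 237
```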
References
Radio spectrum
Broadcast engineering | List of channel numbers assigned to FM frequencies in North America | [
"Physics",
"Engineering"
] | 286 | [
"Broadcast engineering",
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Electronic engineering"
] |
26,244,469 | https://en.wikipedia.org/wiki/Recombination%20%28cosmology%29 | In cosmology, recombination refers to the epoch during which charged electrons and protons first became bound to form electrically neutral hydrogen atoms. Recombination occurred about 378,000 years after the Big Bang (at a redshift of z ≈ 1100). The word "recombination" is misleading, since the Big Bang theory does not posit that protons and electrons had been combined before, but the name exists for historical reasons since it was named before the Big Bang hypothesis became the primary theory of the birth of the universe.
Overview
Immediately after the Big Bang, the universe was a hot, dense plasma of photons, leptons, and quarks: the quark epoch. At 10⁻⁶ seconds, the Universe had expanded and cooled sufficiently to allow for the formation of protons: the hadron epoch. This plasma was effectively opaque to electromagnetic radiation due to Thomson scattering by free electrons, as the mean free path each photon could travel before encountering an electron was very short. This is the current state of the interior of the Sun. As the universe expanded, it also cooled. Eventually, the universe cooled to the point that the radiation field could not immediately ionize neutral hydrogen, and atoms became energetically favored. The fraction of free electrons and protons as compared to neutral hydrogen decreased to a few parts in 10,000.
Recombination involves electrons binding to protons (hydrogen nuclei) to form neutral hydrogen atoms. Because direct recombinations to the ground state (lowest energy) of hydrogen are very inefficient, these hydrogen atoms generally form with the electrons in a high energy state, and the electrons quickly transition to their low energy state by emitting photons. Two main pathways exist: from the 2p state by emitting a Lyman-α photon – these photons will almost always be reabsorbed by another hydrogen atom in its ground state – or from the 2s state by emitting two photons, which is very slow.
This production of photons is known as decoupling, which leads to recombination sometimes being called photon decoupling, but recombination and photon decoupling are distinct events. Once photons decoupled from matter, they traveled freely through the universe without interacting with matter and constitute what is observed today as cosmic microwave background radiation (in that sense, the cosmic background radiation is infrared and some red black-body radiation emitted when the universe was at a temperature of some 3000 K, redshifted by a factor of about 1100 from the visible spectrum to the microwave spectrum).
Recombination time frames
The time frame for recombination can be estimated from the time dependence of the temperature of the cosmic microwave background (CMB). The microwave background is a blackbody spectrum representing the photons present at recombination, shifted in energy by the expansion of the universe. A blackbody is completely characterized by its temperature; the shift is called the redshift, denoted by z:

$$1 + z = \frac{T}{2.7\ \mathrm{K}},$$

where 2.7 K is today's temperature.
The thermal energy at the peak of the blackbody spectrum is the Boltzmann constant, $k_\mathrm{B}$, times the temperature, but simply comparing this to the ionization energy of hydrogen atoms will not consider the spectrum of energies. A better estimate evaluates the thermal equilibrium between matter (atoms) and radiation. The density of photons with energy E sufficient to ionize hydrogen is the total density times a factor from the equilibrium Boltzmann distribution:

$$n_\gamma (E > E_\mathrm{I}) = n_\gamma \, e^{-E_\mathrm{I}/k_\mathrm{B}T}.$$

At equilibrium this will approximately equal the matter (baryon) density. The ratio of photons to baryons is known from several sources, including measurements by the Planck satellite, to be around 10⁹. Solving

$$e^{-E_\mathrm{I}/k_\mathrm{B}T} \approx 10^{-9}$$

for the redshift gives a value around 1100, which converts to a cosmic time value around 400,000 years.
Recombination history of hydrogen
The cosmic ionization history is generally described in terms of the free electron fraction xe as a function of redshift. It is the ratio of the abundance of free electrons to the total abundance of hydrogen (both neutral and ionized). Denoting by ne the number density of free electrons, nH that of atomic hydrogen and np that of ionized hydrogen (i.e. protons), xe is defined as

$$x_e = \frac{n_e}{n_p + n_\mathrm{H}}.$$
Since hydrogen only recombines once helium is fully neutral, charge neutrality implies ne = np, i.e. xe is also the fraction of ionized hydrogen.
Rough estimate from equilibrium theory
It is possible to find a rough estimate of the redshift of the recombination epoch assuming the recombination reaction is fast enough that it proceeds near thermal equilibrium. The relative abundance of free electrons, protons and neutral hydrogen is then given by the Saha equation:

$$\frac{n_e n_p}{n_\mathrm{H}} = \left( \frac{m_e k_\mathrm{B} T}{2\pi\hbar^2} \right)^{3/2} \exp\left( -\frac{E_\mathrm{I}}{k_\mathrm{B} T} \right),$$

where me is the mass of the electron, kB is the Boltzmann constant, T is the temperature, ħ is the reduced Planck constant, and EI = 13.6 eV is the ionization energy of hydrogen. Charge neutrality requires ne = np, and the Saha equation can be rewritten in terms of the free electron fraction xe:

$$\frac{x_e^2}{1 - x_e} = \frac{1}{n_e + n_\mathrm{H}} \left( \frac{m_e k_\mathrm{B} T}{2\pi\hbar^2} \right)^{3/2} \exp\left( -\frac{E_\mathrm{I}}{k_\mathrm{B} T} \right).$$

All quantities in the right-hand side are known functions of z, the redshift: the temperature is given by $T = 2.728\,(1+z)\ \mathrm{K}$, and the total density of hydrogen (neutral and ionized) is given by $n_e + n_\mathrm{H} = 1.6\times10^{-7}\,(1+z)^3\ \mathrm{cm}^{-3}$.
Solving this equation for a 50 percent ionization fraction yields a recombination temperature of roughly 4000 K, corresponding to redshift z ≈ 1500.
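The Saha estimate can be checked numerically with the T(z) and density relations quoted above (a sketch; the constants are standard SI values, the 50% threshold and the root bracket are choices, and the exact result depends somewhat on the adopted density):

```python
import numpy as np
from scipy.optimize import brentq

kB, hbar, me, eV = 1.380649e-23, 1.054572e-34, 9.109384e-31, 1.602177e-19
EI = 13.6 * eV                                   # hydrogen ionization energy, J

def x_e(z):
    T = 2.728 * (1 + z)                          # CMB temperature, K
    n_tot = 1.6e-7 * (1 + z) ** 3 * 1e6          # total hydrogen density, m^-3
    s = (me * kB * T / (2 * np.pi * hbar**2)) ** 1.5 * np.exp(-EI / (kB * T)) / n_tot
    return (-s + np.sqrt(s * s + 4 * s)) / 2     # positive root of x^2/(1 - x) = s

z_rec = brentq(lambda z: x_e(z) - 0.5, 800, 2500)
print(f"z ~ {z_rec:.0f}, T ~ {2.728 * (1 + z_rec):.0f} K")   # of order z ~ 1400, T ~ 3800 K
```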
Effective three-level atom
In 1968, physicists Jim Peebles in the US and Yakov Borisovich Zel'dovich and collaborators in the USSR independently computed the non-equilibrium recombination history of hydrogen. The basic elements of the model are the following.
Direct recombinations to the ground state of hydrogen are very inefficient: each such event leads to a photon with energy greater than 13.6 eV, which almost immediately re-ionizes a neighboring hydrogen atom.
Electrons therefore only efficiently recombine to the excited states of hydrogen, from which they cascade very quickly down to the first excited state, with principal quantum number n = 2.
From the first excited state, electrons can reach the ground state n = 1 through two pathways:
Decay from the 2p state by emitting a Lyman-α photon. This photon will almost always be reabsorbed by another hydrogen atom in its ground state. However, cosmological redshifting systematically decreases the photon frequency, and there is a small chance that it escapes reabsorption if it gets redshifted far enough from the Lyman-α line resonant frequency before encountering another hydrogen atom.
Decay from the 2s state by emitting two photons. This two-photon decay process is very slow, with a rate of 8.22 s⁻¹. It is however competitive with the slow rate of Lyman-α escape in producing ground-state hydrogen.
Atoms in the first excited state may also be re-ionized by the ambient CMB photons before they reach the ground state. When this is the case, it is as if the recombination to the excited state did not happen in the first place. To account for this possibility, Peebles defines the factor C as the probability that an atom in the first excited state reaches the ground state through either of the two pathways described above before being photoionized.
This model is usually described as an "effective three-level atom" as it requires keeping track of hydrogen under three forms: in its ground state, in its first excited state (assuming all the higher excited states are in Boltzmann equilibrium with it), and in its ionized state.
Accounting for these processes, the recombination history is then described by the differential equation

$$\frac{dx_e}{dt} = C \left( -\alpha_\mathrm{B}\, n_\mathrm{H}\, x_e^2 + \beta_\mathrm{B}\,(1 - x_e)\, e^{-E_{21}/k_\mathrm{B}T} \right),$$

where $\alpha_\mathrm{B}$ is the "case B" recombination coefficient to the excited states of hydrogen, $\beta_\mathrm{B}$ is the corresponding photoionization rate and E21 = 10.2 eV is the energy of the first excited state. Note that the second term in the right-hand side of the above equation can be obtained by a detailed balance argument. The equilibrium result given in the previous section would be recovered by setting the left-hand side to zero, i.e. assuming that the net rates of recombination and photoionization are large in comparison to the Hubble expansion rate, which sets the overall evolution timescale for the temperature and density. However, the net recombination rate is only comparable to the Hubble expansion rate, and even gets significantly lower at low redshifts, leading to an evolution of the free electron fraction much slower than what one would obtain from the Saha equilibrium calculation. With modern values of cosmological parameters, one finds that the universe is 90% neutral at z ≈ 1070.
Modern developments
The simple effective three-level atom model described above accounts for the most important physical processes. However it does rely on approximations that lead to errors on the predicted recombination history at the level of 10% or so. Due to the importance of recombination for the precise prediction of cosmic microwave background anisotropies, several research groups have revisited the details of this picture over the last two decades.
The refinements to the theory can be divided into two categories:
Accounting for the non-equilibrium populations of the highly excited states of hydrogen. This effectively amounts to modifying the recombination coefficient αB.
Accurately computing the rate of Lyman-α escape and the effect of these photons on the 2s–1s transition. This requires solving a time-dependent radiative transfer equation. In addition, one needs to account for higher-order Lyman transitions. These refinements effectively amount to a modification of Peebles' C factor.
Modern recombination theory is believed to be accurate at the level of 0.1%, and is implemented in publicly available fast recombination codes.
Primordial helium recombination
Helium nuclei are produced during Big Bang nucleosynthesis, and make up about 24% of the total mass of baryonic matter. The ionization energy of helium is larger than that of hydrogen and it therefore recombines earlier. Because neutral helium carries two electrons, its recombination proceeds in two steps. The first recombination, $\mathrm{He}^{2+} + e^- \rightarrow \mathrm{He}^+$, proceeds near Saha equilibrium and takes place around redshift z ≈ 6000. The second recombination, $\mathrm{He}^+ + e^- \rightarrow \mathrm{He}$, is slower than what would be predicted from Saha equilibrium and takes place around redshift z ≈ 2000. The details of helium recombination are less critical than those of hydrogen recombination for the prediction of cosmic microwave background anisotropies, since the universe is still very optically thick after helium has recombined and before hydrogen has started its recombination.
Primordial light barrier
Prior to recombination, photons were not able to freely travel through the universe, as they constantly scattered off the free electrons and protons. This scattering causes a loss of information, and "there is therefore a photon barrier at a redshift" near that of recombination that prevents us from using photons directly to learn about the universe at larger redshifts. Once recombination had occurred, however, the mean free path of photons greatly increased due to the lower number of free electrons. Shortly after recombination, the photon mean free path became larger than the Hubble length, and photons traveled freely without interacting with matter. For this reason, recombination is closely associated with the last scattering surface, which is the name for the last time at which the photons in the cosmic microwave background interacted with matter. However, these two events are distinct, and in a universe with different values for the baryon-to-photon ratio and matter density, recombination and photon decoupling need not have occurred at the same epoch.
See also
Chronology of the universe
Age of the universe
Big Bang
Notes
References
Bibliography
Physical cosmology | Recombination (cosmology) | [
"Physics",
"Astronomy"
] | 2,424 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
26,246,527 | https://en.wikipedia.org/wiki/C5H5N5O2 | {{DISPLAYTITLE:C5H5N5O2}}
The molecular formula C5H5N5O2 (molar mass: 167.13 g/mol, exact mass: 167.0443 u) may refer to:
2,8-Dihydroxyadenine
8-Oxoguanine (8-oxo-Gua)
Molecular formulas | C5H5N5O2 | [
"Physics",
"Chemistry"
] | 82 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
26,250,710 | https://en.wikipedia.org/wiki/Two-dimensional%20correlation%20analysis | Two-dimensional correlation analysis is a mathematical technique that is used to study changes in measured signals. As mostly spectroscopic signals are discussed, the term two-dimensional correlation spectroscopy is sometimes also used to refer to the same technique.
In 2D correlation analysis, a sample is subjected to an external perturbation while all other parameters of the system are kept at the same value. This perturbation can be a systematic and controlled change in temperature, pressure, pH, chemical composition of the system, or even time after a catalyst was added to a chemical mixture. As a result of the controlled change (the perturbation), the system will undergo variations which are measured by a chemical or physical detection method. The measured signals or spectra will show systematic variations that are processed with 2D correlation analysis for interpretation.
When one considers spectra that consist of a few bands, it is straightforward to determine which bands are subject to a changing intensity. Such a changing intensity can be caused, for example, by chemical reactions. However, the interpretation of the measured signal becomes trickier when spectra are complex and bands overlap heavily. Two-dimensional correlation analysis allows one to determine at which positions in such a measured signal there is a systematic change in a peak, either a continuous rise or a drop in intensity. 2D correlation analysis results in two complementary signals, referred to as the 2D synchronous and 2D asynchronous spectrum. These signals allow one, among other things,
to determine the events that are occurring at the same time (in phase) and those events that are occurring at different times (out of phase)
to determine the sequence of spectral changes
to identify various inter- and intramolecular interactions
band assignments of reacting groups
to detect correlations between spectra of different techniques, for example near infrared spectroscopy (NIR) and Raman spectroscopy
History
2D correlation analysis originated from 2D NMR spectroscopy. Isao Noda developed perturbation-based 2D spectroscopy in the 1980s. This technique required sinusoidal perturbations to the chemical system under investigation, and this specific type of applied perturbation severely limited its possible applications. Following research done by several groups of scientists, perturbation-based 2D spectroscopy was extended to a more generalized, broader base. Since the development of generalized 2D correlation analysis in 1993, based on Fourier transformation of the data, 2D correlation analysis has gained widespread use. Alternative techniques that were simpler to calculate, for example the disrelation spectrum, were also developed simultaneously. Because of its computational efficiency and simplicity, the Hilbert transform is nowadays used for the calculation of the 2D spectra. To date, 2D correlation analysis is used for the interpretation of many types of spectroscopic data (including XRF, UV/VIS spectroscopy, fluorescence, infrared, and Raman spectra), although its application is not limited to spectroscopy.
Properties of 2D correlation analysis
2D correlation analysis is frequently used for its main advantage: increasing the spectral resolution by spreading overlapping peaks over two dimensions, thereby simplifying the interpretation of one-dimensional spectra that are otherwise visually indistinguishable from each other. Further advantages are its ease of application and the possibility of distinguishing between band shifts and band overlap. Each type of spectral event (band shifting, overlapping bands whose intensities change in opposite directions, band broadening, baseline change, etc.) has a particular 2D pattern. See also the figure with the original dataset on the right and the corresponding 2D spectrum in the figure below.
Presentation of 2D spectra
2D synchronous and asynchronous spectra are basically 3D datasets and are generally represented by contour plots. The x- and y-axes are identical to the x-axis of the original dataset, whereas the different contours represent the magnitude of correlation between the spectral intensities. The 2D synchronous spectrum is symmetric relative to the main diagonal, and the main diagonal contains only positive peaks. As the peak at (x, y) in the 2D synchronous spectrum is a measure of the correlation between the intensity changes at x and y in the original data, these main-diagonal peaks are also called autopeaks, and the main-diagonal signal is referred to as the autocorrelation signal. The off-diagonal cross-peaks can be either positive or negative. The asynchronous spectrum, on the other hand, is antisymmetric and never has peaks on the main diagonal.
Generally, contour plots of 2D spectra are oriented with axes rising from left to right and from top to bottom. Other orientations are possible, but the interpretation has to be adapted accordingly.
Calculation of 2D spectra
Suppose the original dataset D contains the n spectra as rows. The signals of the original dataset are generally preprocessed: the original spectra are compared to a reference spectrum. By subtracting a reference spectrum, often the average spectrum of the dataset, so-called dynamic spectra are calculated, which form the corresponding dynamic dataset E. The appearance and interpretation of the 2D spectra may depend on the choice of reference spectrum. The equations below are valid for equally spaced measurements of the perturbation.
Calculation of the synchronous spectrum
A 2D synchronous spectrum expresses the similarity between the spectral intensity variations at two channels of the original dataset. In generalized 2D correlation spectroscopy this is mathematically expressed as a covariance (or correlation):

Φ(ν1, ν2) = yν1 · yν2 / (n − 1)
where:
Φ is the 2D synchronous spectrum
ν1 and ν2 are two spectral channels
yν is the vector composed of the signal intensities in E in column ν
n the number of signals in the original dataset
Calculation of the asynchronous spectrum
Orthogonal spectra to the dynamic dataset E are obtained with the Hilbert transform:

Ψ(ν1, ν2) = yν1 · (N yν2) / (n − 1)
where:
Ψ is the 2D asynchronous spectrum
ν1 and ν2 are two spectral channels
yν is the vector composed of the signal intensities in E in column ν
n the number of signals in the original dataset
N the Noda-Hilbert transform matrix
The values Nj,k of N are determined as follows:
Nj,k = 0 if j = k
Nj,k = 1/(π(k − j)) if j ≠ k
where:
j the row number
k the column number
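The calculation above can be condensed into a short numerical sketch; the function name and array layout are illustrative, and the mean spectrum is used as the reference spectrum.

```python
import numpy as np

def two_d_correlation(D):
    """Synchronous and asynchronous 2D correlation spectra of dataset D.

    D has shape (n, m): n perturbation steps (rows), m spectral channels.
    The mean spectrum serves as the reference spectrum.
    """
    n = D.shape[0]
    E = D - D.mean(axis=0)            # dynamic spectra

    # Noda-Hilbert transform matrix: N[j, k] = 1/(pi*(k - j)) for j != k, else 0
    j, k = np.indices((n, n))
    N = np.zeros((n, n))
    off = j != k
    N[off] = 1.0 / (np.pi * (k[off] - j[off]))

    sync = E.T @ E / (n - 1)          # synchronous spectrum, shape (m, m)
    asyn = E.T @ N @ E / (n - 1)      # asynchronous spectrum, shape (m, m)
    return sync, asyn
```

As a quick consistency check, the returned synchronous spectrum should be symmetric and the asynchronous spectrum antisymmetric.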
Interpretation
Interpretation of two-dimensional correlation spectra can be considered to consist of several stages.
Detection of peaks of which the intensity changes in the original dataset
As real measurement signals contain a certain level of noise, the derived 2D spectra are degraded by substantially higher amounts of noise. Hence, interpretation begins with studying the autocorrelation spectrum on the main diagonal of the 2D synchronous spectrum. In the 2D synchronous main-diagonal signal on the right, 4 peaks are visible at 10, 20, 30, and 40 (see also the 4 corresponding positive autopeaks in the 2D synchronous spectrum on the right). This indicates that 4 peaks of changing intensity are present in the original dataset. The intensities of peaks in the autocorrelation spectrum are directly proportional to the relative importance of the intensity change in the original spectra. Hence, if an intense band is present at position x, it is very likely that a true intensity change is occurring and the peak is not due to noise.
Additional techniques help to filter the peaks that can be seen in the 2D synchronous and asynchronous spectra.
Determining the direction of intensity change
It is not always possible to unequivocally determine the direction of intensity change, such as is for example the case for highly overlapping signals next to each other and of which the intensity changes in the opposite direction. This is where the off diagonal peaks in the synchronous 2D spectrum are used for:
if there is a positive cross-peak at (x, y) in the synchronous 2D spectrum, the intensity of the signals at x and y changes in the same direction
if there is a negative cross-peak at (x, y) in the synchronous 2D spectrum, the intensity of the signals at x and y changes in the opposite direction
As can be seen in the 2D synchronous spectrum on the right, the intensity changes of the peaks at 10 and 30 are related and the intensity of the peak at 10 and 30 changes in the opposite direction (negative cross-peak at (10,30)). The same is true for the peaks at 20 and 40.
Determining the sequence of events
Most importantly, with the sequential order rules, also referred to as Noda's rules, the sequence of the intensity changes can be determined. By carefully interpreting the signs of the 2D synchronous and asynchronous cross peaks with the following rules, the sequence of spectral events during the experiment can be determined:
if the intensities of the bands at x and y in the dataset are changing in the same direction, the synchronous 2D cross peak at (x,y) is positive
if the intensities of the bands at x and y in the dataset are changing in the opposite direction, the synchronous 2D cross peak at (x,y) is negative
if the change at x mainly precedes the change in the band at y, the asynchronous 2D cross peak at (x,y) is positive
if the change at x mainly follows the change in the band at y, the asynchronous 2D cross peak at (x,y) is negative
if the synchronous 2D cross peak at (x,y) is negative, the interpretation of rule 3 and 4 for the asynchronous 2D peak at (x,y) has to be reversed
where x and y are the positions on the x-axis of two bands in the original data that are subject to intensity changes.
Following the rules above, it can be derived that the changes at 10 and 30 occur simultaneously, and the changes in intensity at 20 and 40 occur simultaneously as well. Because of the positive asynchronous cross-peak at (10, 20), the changes at 10 and 30 (predominantly) occur before the intensity changes at 20 and 40.
In some cases Noda's rules cannot be so readily applied, predominantly when spectral features are not caused by simple intensity variations. This may occur when bands shift, or when a very erratic intensity variation is present in a given frequency range.
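Noda's rules amount to a small decision procedure on the signs of the cross-peaks; the following sketch (an illustrative helper, not part of any standard package) encodes rules 3 to 5 directly:

```python
def noda_sequence(sync_xy, async_xy):
    """Order of the intensity changes at channels x and y, inferred from the
    signs of the synchronous and asynchronous cross-peaks at (x, y)."""
    if async_xy == 0:
        return "changes at x and y occur simultaneously"
    x_first = async_xy > 0            # rules 3 and 4
    if sync_xy < 0:                   # rule 5: reverse the interpretation
        x_first = not x_first
    return "x before y" if x_first else "y before x"

print(noda_sequence(sync_xy=-0.8, async_xy=0.3))  # 'y before x'
```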
See also
Correlation spectroscopy
Fluorescence correlation spectroscopy
Two-dimensional infrared spectroscopy
References
Spectroscopy
Scientific techniques | Two-dimensional correlation analysis | [
"Physics",
"Chemistry"
] | 2,101 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
43,213,708 | https://en.wikipedia.org/wiki/Two-Higgs-doublet%20model | The two-Higgs-doublet model (2HDM) is an extension of the Standard Model of particle physics. 2HDM models are one of the natural choices for beyond-SM models containing two Higgs doublets instead of just one. There are also models with more than two Higgs doublets, for example three-Higgs-doublet models etc.
The addition of the second Higgs doublet leads to a richer phenomenology, as there are five physical scalar states: the CP-even neutral Higgs bosons h and H (where H is heavier than h by convention), the CP-odd pseudoscalar A, and two charged Higgs bosons H±. The discovered Higgs boson is measured to be CP-even, so one can map either h or H to the observed Higgs. A special case occurs when cos(β − α) = 0, the alignment limit, in which the lighter CP-even Higgs boson h has couplings exactly like the SM Higgs boson. In another such limit, where sin(β − α) = 0, the heavier CP-even boson H is SM-like, leaving h lighter than the discovered Higgs; however, it is important to note that experiments have strongly pointed towards a value for sin(β − α) that is close to 1.
Such a model can be described in terms of six physical parameters: four Higgs masses (mh, mH, mA, mH±), the ratio of the two vacuum expectation values (tan β), and the mixing angle (α) which diagonalizes the mass matrix of the neutral CP-even Higgses. The SM uses only 2 parameters: the mass of the Higgs and its vacuum expectation value.
The masses of the H and A bosons could be below 1 TeV, and CMS has conducted searches in this range, but no significant excess above the Standard Model prediction has been observed.
Classification
Two-Higgs-doublet models can introduce flavor-changing neutral currents which have not been observed so far. The Glashow-Weinberg condition, requiring that each group of fermions (up-type quarks, down-type quarks and charged leptons) couples exactly to one of the two doublets, is sufficient to avoid the prediction of flavor-changing neutral currents.
Depending on which type of fermions couples to which doublet Φi, one can divide two-Higgs-doublet models into the following classes:
By convention, Φ2 is the doublet to which up-type quarks couple.
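For illustration, the four conventional Yukawa assignments that satisfy the Glashow-Weinberg condition can be written out explicitly; the type names below follow common usage in the 2HDM literature rather than this article:

```python
# Which doublet each fermion group couples to, under the Glashow-Weinberg
# condition. Phi2 is the doublet coupling to up-type quarks by convention.
TWO_HDM_TYPES = {
    "Type I":          {"up": "Phi2", "down": "Phi2", "leptons": "Phi2"},
    "Type II":         {"up": "Phi2", "down": "Phi1", "leptons": "Phi1"},
    "Lepton-specific": {"up": "Phi2", "down": "Phi2", "leptons": "Phi1"},
    "Flipped":         {"up": "Phi2", "down": "Phi1", "leptons": "Phi2"},
}

def doublet_for(model_type, fermion_group):
    """Doublet coupling to a fermion group ('up', 'down', or 'leptons')."""
    return TWO_HDM_TYPES[model_type][fermion_group]

print(doublet_for("Type II", "down"))   # Phi1
```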
See also
Alternatives to the Standard Model Higgs
Composite Higgs models
Preon
References
Physics beyond the Standard Model
Hypothetical composite particles | Two-Higgs-doublet model | [
"Physics"
] | 516 | [
"Particle physics stubs",
"Unsolved problems in physics",
"Particle physics",
"Physics beyond the Standard Model"
] |
43,213,872 | https://en.wikipedia.org/wiki/Magnetic%20resonance%20velocimetry | Magnetic resonance velocimetry (MRV) is an experimental method to obtain velocity fields in fluid mechanics. MRV is based on the phenomenon of nuclear magnetic resonance and adapts a medical magnetic resonance imaging system for the analysis of technical flows. The velocities are usually obtained by phase contrast magnetic resonance imaging techniques. This means velocities are calculated from phase differences in the image data that has been produced using special gradient techniques. MRV can be applied using common medical MRI scanners. The term magnetic resonance velocimetry became current due to the increasing use of MR technology for the measurement of technical flows in engineering.
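As a sketch of the phase-contrast reconstruction step mentioned above: in phase-contrast MRI a phase difference of ±π radians corresponds to the encoding velocity ±venc, so the velocity map follows from a linear rescaling of the phase-difference image. The function and parameter names here are illustrative.

```python
import numpy as np

def velocity_map(phase_a, phase_b, venc):
    """Velocity field from two phase images acquired with opposite
    flow-encoding gradients; venc is the velocity that maps to a
    phase difference of pi radians (larger velocities alias)."""
    dphi = np.angle(np.exp(1j * (phase_a - phase_b)))  # wrap to [-pi, pi)
    return venc * dphi / np.pi
```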
Applications
In engineering MRV can be applied to the following areas:
Analysis of technical flows in complex geometries (separation, recirculation zones)
Validation of numerical simulations in computational fluid dynamics using 3D velocity fields
Iterative design of complex inner flow channels (combined with rapid prototyping)
Measurement of concentration distributions in mixing processes
Analysis of flow through porous media
Interaction of immiscible fluids
Advantages and limitations
In contrast to other non-invasive velocimetry methods such as PIV or LDA, no optical access is required, and no particles have to be added to the fluid. Thus, MRV makes it possible to analyze the complete flow field in complex geometries and components.
Because common MR scanners are designed to detect the nuclear magnetic resonance of hydrogen protons, the tested applications are limited to water flows. Common fluid-mechanical scaling concepts compensate for this limitation. To achieve the required spatial resolution, single data acquisition steps have to be repeated a great number of times with slight variations. Thus, MRV technology is limited to steady or periodic flows.
See also
Flow measurement
Particle image velocimetry
Laser Doppler anemometry
References
Further reading
External links
Professor John Eaton's profile (Stanford University)
Group “Magnetic Resonance Imaging for Mechanical Engineering (Technische Universität Darmstadt, Germany)
"Velocity and thermometry in technical flows" (Medical Physics Group, Universitätsklinikum Freiburg, Germany)
"Measurement of cytoplasmic streaming in single plant cells by magnetic resonance velocimetry"
"State of the art MRI with accurate diagnosis and the best patient experience at Queen Square"
Fluid mechanics
Measurement
Magnetic resonance imaging | Magnetic resonance velocimetry | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 471 | [
"Nuclear magnetic resonance",
"Physical quantities",
"Magnetic resonance imaging",
"Quantity",
"Measurement",
"Size",
"Civil engineering",
"Fluid mechanics"
] |
36,092,416 | https://en.wikipedia.org/wiki/Hydrogen%20atom%20abstraction | In chemistry, hydrogen atom abstraction, or hydrogen atom transfer (HAT), refers to a class of chemical reactions where a hydrogen free radical (a neutral hydrogen atom) is removed from a substrate, another molecule. This process follows the general equation:
X• + H−Y → X−H + Y•
HAT reactions are common in various redox reactions, in hydrocarbon combustion, and in the chemistry of cytochrome P450, which contains an Fe(V)O unit. The entity removing the hydrogen atom, known as the abstractor (X•), is often a radical itself, though in some instances it may be a species with a closed electron shell, such as chromyl chloride. Hydrogen atom transfer can occur via a mechanism known as proton-coupled electron transfer. An illustrative synthetic instance of HAT is observed in iron zeolites, which facilitate the stabilization of alpha-oxygen.
References
Chemical reactions
Reaction mechanisms | Hydrogen atom abstraction | [
"Chemistry"
] | 196 | [
"Reaction mechanisms",
"Chemical kinetics",
"nan",
"Physical organic chemistry"
] |
36,092,794 | https://en.wikipedia.org/wiki/InterWorking%20Labs | InterWorking Labs is a privately owned company in Scotts Valley, California, in the business of optimizing application performance for applications and embedded systems. Founded in 1993 by Chris Wellens and Marshall Rose, it was the first company formed specifically to test network protocol compliance. Its products and tests allow computer devices from many different companies to communicate over networks.
Products
InterWorking Labs' Products diagnose, replicate, and re-mediate application performance problems.
The company's first product, SilverCreek, tests a Simple Network Management Protocol (SNMP) agent implementation (switch, server, phone) with hundreds of thousands of individual tests, including conformance, stress, robustness, and negative testing. The tests detect and diagnose implementation errors in private and standard MIBs as well as SNMPv1, v2c, and v3 stacks and implementations.
The Maxwell family products emulate real-world networks, with problems such as delays, rerouting, corruption, impaired packets or protocols, Domain Name System delays or limited bandwidth. New impairments are added to Maxwell using C, C++, or Python extensions. It is controlled via graphical, command line, and script interfaces. It supports a set of protocol impairments for TCP/IP, DHCP, ICMP, TLS, and SIP testing.
The Maxwell products are named after Maxwell's demon, a thought experiment by 19th-century physicist James Clerk Maxwell. Maxwell's demon demonstrated that the second law of thermodynamics—which says that entropy increases—is true only on average. In his thought experiment, Maxwell imagined a double chamber with a uniform mixture of hot and cold gas molecules. A demon (some intelligent being) sits between the two chambers, operating a trap door. Every time a cold (low-energy) molecule comes by, the demon opens the door and lets the molecule through to the other side. Eventually, the cold gas molecules are all on one side of the chamber and the hot ones all on the other. Although the molecules continue to move randomly, the introduction of intelligence into the system reduces entropy instead of increasing it.
The Maxwell product sits in the middle of a network conversation and opens or closes a figurative "door" on the basis of specific criteria. Maxwell intelligently modifies the packet based on pre-selected criteria and sends the packet on its way. InterWorking Labs is advised by the Internet Engineering Task Force (IETF). Maxwell's network emulations reproduce real conditions in the lab before products are deployed.
History
InterWorking Labs was co-founded in 1993 by Chris Wellens and Marshall Rose. The two met in 1992 at the Interop Company, in Mountain View, California, where Wellens was Director of Technology and Rose was on the Interop Program Committee and also Working Group Chair of the IETF for the Simple Network Management Protocol (SNMP).
Wellens—who was overseeing the trade fair's 5000-node InteropNet as well as an array of interoperability demonstrations for network protocols— noticed that engineers from different companies often interpreted network protocols differently and ended up struggling to make their products send and receive data to one another—sometimes just minutes before showcase demonstrations. The engineers asked Interop to create an interoperability lab where these network communication issues could be worked out in a private and less stressful environment. Interop's founder and CEO Dan Lynch concluded that the industry needed an interoperability testing lab.
Lynch asked Wellens to write a business plan for a permanent Interoperability Lab. Rose proposed that the Lab's first task should be to create and try out a set of tests of the SNMP protocol, since SNMP was an area he was familiar with and one where engineers were having particular problems at Interop. Wellens volunteered to organize a group of developers for an interoperability test summit if Rose would create a set of tests and assist in developing the initial plan.
At about the same time, however, Ziff-Davis acquired Interop and chose not to proceed with the Interoperability Lab. Wellens and Lynch agreed that she should pursue the idea on her own. In 1993, Wellens established InterWorking Labs, and, in January 1994, organized the first SNMP interoperability test summit using 50 SNMP tests written by Rose. During that first test summit, a large number of implementations failed Rose's tests. The results persuaded several major corporations that interoperability testing would be a critical component of functioning networks. Participants at the second (1994) SNMP Test Summit included Cabletron Systems, Cisco Systems, Eicon Technology, Empirical Tools and Technologies, Epilogue Technology Corp., Fujitsu OSSI, IBM and IBM Research, Network General Corp., PEER Networks, SNMP Research, SynOptics Communications, TGV, Inc., and Wellfleet Communications.
In 2000, Wellens asked Karl Auerbach to join the InterWorking Labs Board of Directors. In 2002, Wellens hired Auerbach, who was part of Cisco's Advanced Internet Architecture group, to serve as chief technology officer at InterWorking Labs. An advisory board consisted of several members of the IETF who have expertise in networking protocols (Andy Bierman, Jeff Case, Dave Perkins, Randy Presuhn, and Steve Waldbusser).
Markets
Wireless networks used by hospitals, police, and military have turned computer networks into essential lifeline utilities. Computer networks that keep economies, transportation, energy, and food supplies flowing commonly belong to the critical infrastructure of a region. As such, the performance of networks under adverse conditions is a significant concern for militaries, industry, and local and regional governments.
According to Wellens, UCITA has significantly protected software publishers from liability for the failure of their products. But software publishers may become increasingly liable for the consequences of network failures—especially where comprehensive networking testing existed and was not used. Online retailers, for example, can demonstrate multimillion-dollar losses due to network problems such as security flaws or a network collapse after a denial of service attack.
References
External links
Video links:
Slow network speed on cruise ships
Introduction to SilverCreek
Implementing the IPv6 MIBs
1993 establishments in California
American companies established in 1993
Companies based in Santa Cruz County, California
Computer companies established in 1993
Computer companies of the United States
Computer security companies
Networking companies of the United States
Networking hardware companies
Software companies established in 1993
Software companies of the United States
Software testing | InterWorking Labs | [
"Engineering"
] | 1,318 | [
"Software engineering",
"Software testing"
] |
36,095,198 | https://en.wikipedia.org/wiki/Texas%20Commission%20on%20Fire%20Protection | The Texas Commission on Fire Protection (TCFP) is a Texas state governmental agency tasked with overseeing and regulating all paid fire departments and firefighting standards within Texas. The agency provides a variety of services including the writing and publication of curriculum manuals, standard manuals, job postings, and injury reports. Commission members are appointed by the Texas State Governor and subsequently confirmed by the Texas State Senate. Once commissioned, commission members each acquire the title of Commissioner and hold office for a six-year term.
References
External links
State agencies of Texas
Fire protection | Texas Commission on Fire Protection | [
"Engineering"
] | 113 | [
"Building engineering",
"Fire protection"
] |
36,095,331 | https://en.wikipedia.org/wiki/Vietnamese%20units%20of%20measurement | Vietnamese units of measurement () are the largely decimal units of measurement traditionally used in Vietnam until metrication. The base unit of length is the thước (; lit. "ruler") or xích (). Some of the traditional unit names have been repurposed for metric units, such as thước for the metre, while other traditional names remain in translations of imperial units, such as dặm Anh (English "dặm") for the mile.
History
Originally, many thước of varying lengths were in use in Vietnam, each used for different purposes. According to Hoàng Phê (1988), the traditional system of units had at least two thước of different lengths before 1890, the thước ta (lit. "our ruler") or thước mộc ("wooden ruler"), equal to , and the thước đo vải ("ruler for measuring cloth"), equal to . According to historian Nguyễn Đình Đầu, the trường xích and điền xích were both equal to , while according to Phan Thanh Hải, there were three main thước: the thước đo vải, from ; the thước đo đất ("ruler for measuring land"), at ; and the thước mộc, from .
With French colonization, Cochinchina converted to the metric system, the French standard, while Annam and Tonkin continued to use a thước đo đất or điền xích equal to . On June 2, 1897, Indochinese Governor-General Paul Doumer decreed that all the variations of thước (such as thước ta, thước mộc, and điền xích) would be unified at one thước ta to , effective January 1, 1898, in Tonkin. Annam retained the old standard for measuring land, so distance and area (such as sào) in Annam were 4.7/4 and (4.7/4)² times the equivalent units in Tonkin, respectively.
Length
The following table lists common units of length in Vietnam in the early 20th century, according to a United Nations Statistical Commission handbook:
Notes:
The thước is also called thước ta to distinguish it from the metre (thước tây, lit. "Western ruler"). Other than for measuring length, the thước is also used for measuring land area (see below).
According to the UN handbook, some areas unofficially use 1 trượng = . According to Hoàng Phê (1988), the trượng has two definitions: 10 Chinese chi (about 3.33 m) or 4 thước mộc (about 1.70 m).
The tấc is also given as túc. According to the UN handbook, some areas unofficially use 1 tấc = .
Miscellaneous units:
chai vai
1 chai vai =
dặm
According to Hoàng Phê (1988), 1 dặm = . According to Vĩnh Cao and Nguyễn Phố (2001), 1 dặm = xích (Chinese chi) =
lý or lí
According to Vĩnh Cao and Nguyễn Phố (2001), there are two kinds of lý: 1 công lý = 1 km = xích, while thị lý is a traditional unit equal to xích.
sải
Area
The following table lists common units of area in Vietnam in the early 20th century, according to the UN handbook:
Notes:
Annamite units of area were (4.7/4)² times those of other areas, due to units of length (trượng, tấc, etc.) being 4.7/4 times those of other areas, as explained above.
According to the UN handbook, the phân is also written phấn.
The sào is also given as cao. Tonkin and Annam had different definitions of the sào.
Miscellaneous units:
công or công đất
The công, used for surveying forested areas, typically in southwestern Vietnam, was equivalent to .
dặm vuông
The dặm vuông measures 1 dặm × 1 dặm.
Volume
The following table lists common units of volume in Vietnam in the early 20th century, according to the UN handbook and Thiều Chửu:
Additionally:
1 phương of husked rice = 13 thăng or 30 bát (bowls) in 1804
1 vuông of husked rice = 604 gr 50
1 phương or vuông or commonly giạ = , though it is sometimes given as 1 phương = ½ hộc or about 30 L
During French administration, 1 giạ was defined as for husked rice but only for some other goods. It was commonly used for measuring rice and salt.
1 túc =
1 uyên =
The following table lists units of volume in use during French administration in Cochinchina:
Notes:
Unhusked rice was measured in hộc while husked rice was measured in vuông because a hộc of unhusked rice becomes 1 vuông after husking.
1 hộc of unhusked rice weighs 1 tạ.
Miscellaneous units:
thùng
In Cochinchina and Cambodia, 1 thùng (lit. "bucket") = . The thùng is also given as tau.
Weight
The following table lists common units of weight in Vietnam in the early 20th century:
Notes:
The tấn in the context of ship capacity is equal to .
The cân (lit. "scale") is also called cân ta ("our scale") to distinguish it from the kilogram (cân tây, "Western scale").
The nén is also given in one source as , but this value conflicts with the lạng from the same source at . The 375-gram value is consistent with the system of units for measuring precious metals.
The đồng is also called đồng cân, to distinguish it from monetary uses.
The French colonial administration defined some additional units for use in trade: nén = 2 thoi = 10 đính = 10 lượng
Units for measuring precious metals:
The lạng, also called cây or lượng, is equal to 10 chỉ. 1 cây =
1 chỉ =
Miscellaneous units:
binh
The binh was equivalent to in Annam.
Time
canh (更)
The canh or trống canh is equal to .
giờ
The giờ, giờ đồng hồ, or tiếng đồng hồ is equal to .
Currency
Traditionally, the basic units of Vietnamese currency were quan (貫, quán), tiền, and đồng. One quan was 10 tiền, and one tiền was between 50 and 100 đồng, depending on the time period.
From the reign of Emperor Trần Thái Tông onward, 1 tiền was 69 đồng in ordinary commercial transactions but 1 tiền was 70 đồng for official transactions.
From the reign of Emperor Lê Lợi, 1 tiền was decreed to be 50 đồng.
During the Northern and Southern dynasties period, beginning in 1528, coins were reduced from to in diameter and diluted with zinc and iron. The smaller coinage was called tiền gián or sử tiền, in contrast to the larger tiền quý (literally, "valuable cash") or cổ tiền. One quan tiền quý was equivalent to 600 đồng, while 1 quan tiền gián was only 360 đồng.
During the Later Lê dynasty, 1 tiền was 60 đồng; therefore, 600 đồng was 1 quan.
During the Yuan dynasty, Vietnamese traders at the border with China used the rate 1 tiền to 67 đồng.
Zinc coins began to appear in Dai Viet during the 18th century. One copper (đồng) coin was worth 3 zinc (kẽm) coins.
Beginning with the reign of Emperor Gia Long, both copper and zinc coins were in use. Originally the two coins had equal value, but eventually a copper coin rose to double the worth of a zinc coin, then triple, then sixfold, until the reign of Emperor Thành Thái, it was worth ten times a zinc coin.
Under French colonial rule, Vietnam used the units hào, xu, chinh, and cắc. After independence, Vietnam used đồng, hào, and xu, with 1 đồng equaling 10 hào or 100 xu. After the Vietnam War, chronic inflation caused both subdivisions to fall out of use, leaving đồng as the only unit of currency. However, Overseas Vietnamese communities continue to use hào and xu to refer to the tenth and hundredth denominations, respectively, of a foreign currency, such as xu for the American cent.
See also
Heavenly Stems & Earthly Branches
Units, Systems, & History of measurement
Chinese, Taiwanese, Hong Kong, Japanese, Mongolian & Korean units of measurement
References
Customary units of measurement
Science and technology in Vietnam
Systems of units
History of Vietnam
Units of measurement by country | Vietnamese units of measurement | [
"Mathematics"
] | 1,809 | [
"Systems of units",
"Units of measurement by country",
"Quantity",
"Customary units of measurement",
"Units of measurement"
] |
36,096,691 | https://en.wikipedia.org/wiki/GC%20box | In molecular biology, a GC box, also known as a GSG box, is a distinct pattern of nucleotides found in the promoter region of some eukaryotic genes. The GC box is upstream of the TATA box, and approximately 110 bases upstream from the transcription initiation site. It has a consensus sequence GGGCGG which is position-dependent and orientation-independent. The GC elements are bound by transcription factors and have similar functions to enhancers. Some known GC box-binding proteins include Sp1, Krox/Egr, Wilms' tumor, MIGI, and CREA.
The GC box is commonly the binding site for zinc finger proteins. An alpha helix section of the protein corresponds with a major groove in the DNA. Zinc-fingers bind to triplet base pair sequences, with residue 21 binding to the first base pair, residue 18 binding to the second base pair, and residue 15 binding to the third base pair. The triplet base pairs can either be a GGG or a GCG. If residue 18 is a histidine, it will bind to a G, and if residue 18 is a glutamate, it will bind to a C. GC box-binding zinc fingers have between 2 and 4 fingers, making them interact with base pair sequences that are 6 to 8 base pairs in length.
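Because the consensus sequence GGGCGG is orientation-independent, a search must scan both strands. A minimal sketch (the helper name is illustrative, not from any standard bioinformatics package):

```python
import re

GC_BOX = re.compile("GGGCGG")

def find_gc_boxes(seq):
    """0-based start positions of the GC box consensus on either strand."""
    revcomp = seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]
    fwd = [m.start() for m in GC_BOX.finditer(seq)]
    # Map reverse-strand hits back to forward-strand coordinates.
    rev = [len(seq) - m.end() for m in GC_BOX.finditer(revcomp)]
    return sorted(set(fwd + rev))

print(find_gc_boxes("TTCCGCCCAAGGGCGGTT"))  # [2, 10]
```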
References
Molecular biology | GC box | [
"Chemistry",
"Biology"
] | 285 | [
"Biochemistry",
"Molecular biology stubs",
"Molecular biology"
] |
36,099,150 | https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20ideal%20theory%20in%20number%20fields | In number theory, the fundamental theorem of ideal theory in number fields states that every nonzero proper ideal in the ring of integers of a number field admits unique factorization into a product of nonzero prime ideals. In other words, every ring of integers of a number field is a Dedekind domain.
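A standard worked example, added here for illustration, shows what the theorem buys: in the ring of integers Z[√−5] of the number field Q(√−5), the element 6 has two essentially different factorizations into irreducibles, 6 = 2 · 3 = (1 + √−5)(1 − √−5), so unique factorization fails for elements. At the level of ideals it is restored:

(6) = p² q q̄, with p = (2, 1 + √−5), q = (3, 1 + √−5), q̄ = (3, 1 − √−5),

and this factorization into prime ideals is unique.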
References
Keith Conrad, Ideal factorization
Algebraic numbers
Theorems in algebraic number theory
Factorization | Fundamental theorem of ideal theory in number fields | [
"Mathematics"
] | 82 | [
"Number theory stubs",
"Theorems in algebraic number theory",
"Mathematical objects",
"Theorems in number theory",
"Arithmetic",
"Algebraic numbers",
"Factorization",
"Numbers",
"Number theory"
] |
36,099,910 | https://en.wikipedia.org/wiki/Augmented-fourths%20tuning | Among alternative tunings for guitar, each augmented-fourths tuning is a regular tuning in which the musical intervals between successive open-string notes are each augmented fourths. Because augmented fourths are alternatively called "tritones" ("tri-tones") or "diminished fifths", augmented-fourths tuning is also called tritone tuning or diminished-fifths tuning.
The standard guitar-tuning
E-A-d-g-b'-e'
interjects exactly one major third amid four perfect fourths for the intervals between its successive open strings. In contrast, the augmented fourths tunings
C-F♯-c-f♯-c'-f♯' and
B-F-b-f-b'-f'
have only augmented-fourths intervals.
The set of augmented-fourths tunings has three properties that simplify learning by beginners and improvisation by experts: Regular intervals, string repetition, and lefty-righty symmetry. These properties characterize augmented-fourths tunings among non-trivial tunings.
Properties
The set of augmented-fourths tunings has three properties that simplify learning by beginners and improvisation by experts: Regular intervals, string repetition, and lefty-righty symmetry.
Besides the set of augmented-fourths tuning, exactly one other set of tunings has these three properties—the trivial class of one-note tunings, which contains the C-C-C-C-C-C tuning, for example.
Augmented-fourths tunings have extended range. Because each of their tritone intervals between successive strings is wider than the perfect-fourth intervals (and one major third) of standard tuning, augmented-fourths tunings have a greater range than standard tuning: six additional notes, only one note less than Robert Fripp's new standard tuning.
Regular intervals
In each regular tuning, the musical intervals are the same for each pair of consecutive strings. Other regular tunings include major-thirds, all-fourths, and all-fifths tunings. For each regular tuning, chord patterns may be moved around the fretboard, a property that simplifies beginners' learning of chords and that simplifies advanced players' improvisation.
Thrice repeated open-string notes
Two other regular tunings, all-fourths and all-fifths tunings, have strings with five and six distinct open-notes, respectively. Thus, they have no repetition of open-notes, and so they require that the guitarist remember five and six strings, respectively.
In contrast, augmented fourths is a repetitive tuning that begins the next octave after two strings. These tunings' repetition of open-string notes again simplifies the learning of chords and improvisation.
Left-handed involution
For left-handed guitars, the ordering of the strings reverses the ordering of right-handed guitars. Consequently, left-handed tunings have different chords than right-handed tunings. Regular guitar tunings have the property that their left-handed ("lefty") versions are also regular tunings. For example, the left-handed version of all-fourths tuning is all-fifths tuning, and the left-handed version of all-fifths tuning is all-fourths tuning. In general, the left-handed involute of the regular tuning based on the interval with n semitones is the regular tuning based on its involuted interval with 12 − n semitones: all-fourths tuning is based on the perfect fourth (five semitones), and all-fifths tuning is based on the perfect fifth (seven semitones), as mentioned previously.
The left-handed involute of an augmented-fourths tuning is the augmented-fourths tuning with the same open-string notes. "The augmented-fourth interval is the only interval whose inverse is the same as itself. The augmented-fourths tuning is the only tuning (other than the 'trivial' tuning C-C-C-C-C-C) for which all chords-forms remain unchanged when the strings are reversed. Thus the augmented-fourths tuning is its own 'lefty' tuning."
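These interval relations are easy to check programmatically. A small sketch in twelve-tone equal temperament (pitch-class names only, octaves ignored):

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def regular_tuning(open_note, semitones, strings=6):
    """Open-string note names of a regular tuning built by stacking a fixed
    interval; semitones=6 (a tritone) yields an augmented-fourths tuning."""
    start = NOTES.index(open_note)
    return [NOTES[(start + i * semitones) % 12] for i in range(strings)]

print(regular_tuning("C", 6))   # ['C', 'F#', 'C', 'F#', 'C', 'F#']
print(regular_tuning("B", 6))   # ['B', 'F', 'B', 'F', 'B', 'F']
# The lefty involute of an interval of n semitones is 12 - n; since
# 12 - 6 = 6, the augmented-fourths tuning is its own left-handed tuning.
```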
Examples
The "standard tuning" consists of perfect fourths and a single major-third between the G (g) and B (b') strings:
E-A-d-g-b'-e'
C-F♯-C-F♯-C-F♯
Of all the augmented-fourths tunings, the C-F♯-c-f♯-c'-f♯' tuning is the closest approximation to the standard tuning, and its fretboard is displayed next:
Each fret displays the open strings of exactly one augmented-fourths tuning.
B-F-B-F-B-F
Exactly one augmented-fourths tuning has no sharps or flats among its open strings: the tuning with only B and F notes (B-F-b-f-b'-f'). This tuning would appear, for the C-F♯ augmented-fourths tuning displayed above, to the left of the open strings, at the negative-first fret.
This tuning "makes it very easy for playing half-whole scales, diminished 7 licks, and whole tone scales," stated guitarist Ron Jarzombek, who has used it on two albums. This tuning was used in "Tri 7/5" by Shawn Lane (The Tri-Tone Fascination and Powers of Ten; Live!).
See also
Scordatura
Notes
References
Regular guitar-tunings
Repetitive guitar-tunings
Tritones | Augmented-fourths tuning | [
"Physics"
] | 1,149 | [
"Tritones",
"Symmetry",
"Musical symmetry"
] |
36,100,100 | https://en.wikipedia.org/wiki/FOMP | The magnetocrystalline anisotropy energy of a ferromagnetic crystal can be expressed as a power series of the direction cosines of the magnetic moment with respect to the crystal axes. The coefficients of those terms are the anisotropy constants. In general, the expansion is limited to a few terms. Normally the magnetization curve is continuous with respect to the applied field up to saturation but, in certain intervals of the anisotropy constant values, irreversible field-induced rotations of the magnetization are possible, implying first-order magnetization transitions between equivalent magnetization minima, the so-called first-order magnetization process (FOMP).
Theory
The total energy of a uniaxial magnetic crystal in an applied magnetic field can be written as the sum of the anisotropy term up to sixth order, neglecting the sixfold planar contribution,

EA = K1 sin²θ + K2 sin⁴θ + K3 sin⁶θ,

and the field-dependent Zeeman energy term

EH = −Ms H cos(θ − θH),

where:
K1, K2 and K3 are the anisotropy constants up to sixth order
H is the applied magnetic field
Ms is the saturation magnetization
θ is the angle between the magnetization and the easy c-axis
θH is the angle between the field and the easy c-axis
The total energy can then be written:

E = K1 sin²θ + K2 sin⁴θ + K3 sin⁶θ − Ms H cos(θ − θH)
Phase diagram of easy and hard directions
In order to determine the preferred directions of the magnetization vector in the absence of the external magnetic field, we first analyze the case of a uniaxial crystal. The maxima and minima of the energy with respect to θ must satisfy

dE/dθ = 2 sinθ cosθ (K1 + 2K2 sin²θ + 3K3 sin⁴θ) = 0,

while the existence of a minimum additionally requires

d²E/dθ² > 0.

For symmetry reasons the c-axis (θ = 0) and the basal plane (θ = π/2) are always points of extrema and can be easy or hard directions depending on the anisotropy constant values. We can have two additional extrema along conical directions at angles given by:

sin²θ± = [−K2 ± √(K2² − 3K1K3)] / (3K3)

The cones θ+ and θ− are associated with the + and − sign, respectively. It can be verified that only θ+ can be a minimum, and hence an easy direction, while θ− is always a hard direction.
A useful representation of the diagram of the easy directions and the other extrema is in terms of the reduced anisotropy constants K2/|K1| and K3/|K1|. The following figure shows the phase diagram for the two cases K1 > 0 and K1 < 0. All the information concerning the easy directions and the other extrema is contained in a special symbol that marks each distinct region. It mimics a polar plot of the energy, indicating existing extrema by concave (minimum) and convex (maximum) tips. Vertical and horizontal stems refer to the symmetry axis and the basal plane, respectively. The left-hand and right-hand oblique stems indicate the θ+ and θ− cones, respectively. The absolute minimum (easy direction) is indicated by filling of the tip.
Transformation of Anisotropy Constant into Conjugate Quantities
Before going into the details of the calculation of the various types of FOMP, we call the reader's attention to a convenient transformation of the anisotropy constants into conjugate quantities, denoted here by primes. This transformation can be chosen in such a way that all the results obtained for the case of H parallel to the c-axis can be immediately transferred to the case of H perpendicular to the c-axis, and vice versa, according to a symmetrical dual correspondence between the constants and their conjugates.
The use of the correspondence table is very simple. If we have a magnetization curve obtained with H perpendicular to the c-axis for the anisotropy constants Ki, we obtain exactly the same magnetization curve using the conjugate constants K′i according to the table, but applying H parallel to the c-axis, and vice versa.
FOMP examples
The determination of the conditions for the existence of FOMP requires the analysis of the dependence of the magnetization curve on the anisotropy constant values for different directions of the magnetic field. We limit the analysis to the cases of H parallel or perpendicular to the c-axis, hereafter indicated as the A-case and the P-case, where A denotes axial and P stands for planar. The analysis of the equilibrium conditions shows that two types of FOMP are possible, depending on the final state after the transition: if the transition ends in the saturated state we have a type-1 FOMP, otherwise a type-2 FOMP. In the case when an easy cone is present, we add the suffix C to the description of the FOMP type. So all possible cases of FOMP type are: A1, A2, P1, P2, P1C, A1C. In the following figure some examples of FOMP types are represented, i.e. P1, A1C and P2 for different anisotropy constants; reduced variables are given on the axes, the reduced magnetic field on the abscissa and the reduced magnetization on the ordinate.
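As an illustration of how such magnetization curves arise, the sketch below minimizes the total energy E(θ) on a dense angular grid for an increasing applied field; the anisotropy constants are invented for illustration only, and a discontinuity in the resulting curve before saturation marks a FOMP.

```python
import numpy as np

def magnetization_curve(K1, K2, K3, Ms, theta_H, fields):
    """M/Ms along the field direction versus applied field H, obtained by
    brute-force minimization of E(theta) over a dense angular grid."""
    theta = np.linspace(0.0, np.pi, 20001)
    s2 = np.sin(theta) ** 2
    E_aniso = K1 * s2 + K2 * s2 ** 2 + K3 * s2 ** 3
    curve = []
    for H in fields:
        E = E_aniso - Ms * H * np.cos(theta - theta_H)
        curve.append(np.cos(theta[np.argmin(E)] - theta_H))
    return np.array(curve)

# Field perpendicular to the c-axis (P-case); constants chosen arbitrarily.
H = np.linspace(0.0, 3.0, 300)
m = magnetization_curve(K1=1.0, K2=-1.2, K3=0.7, Ms=1.0,
                        theta_H=np.pi / 2, fields=H)
# A jump in m(H) before saturation indicates a first-order process (FOMP).
```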
FOMP diagram
Tedious calculations allow one to determine completely the regions of existence of type-1 and type-2 FOMP. As for the diagram of the easy directions and other extrema, a representation in terms of the reduced anisotropy constants K2/|K1| and K3/|K1| is convenient. In the following figure we summarize all the FOMP types, distinguished by the labels A1, A2, P1, P2, P1C, A1C, which specify the magnetic field direction (A axial; P planar) and the type of FOMP (1 and 2), together with the easy-cone regions with type-1 FOMP (A1C, P1C).
Polycrystalline system
Since the FOMP transition represents a singular point in the magnetization curve of a single crystal, we analyze how this singularity is transformed when a polycrystalline sample is magnetized. The result of the mathematical analysis shows that measurement of the critical field Hcr, at which the FOMP transition takes place, is possible in polycrystalline samples as well.
For determining the characteristics of FOMP when the magnetic field is applied at a variable angle θH with respect to the c-axis, we have to examine the evolution of the total energy of the crystal with increasing field for different values of θH between 0 and π/2. The calculations are complicated, and we report only the conclusions. The sharp FOMP transition, evident in a single crystal, moves to higher fields in a polycrystalline sample as θH departs from the hard direction and then becomes smeared out. For higher values of θH the magnetization curve becomes smooth, as is evident from computed magnetization curves obtained by summation of the curves corresponding to all angles between 0 and π/2.
Origin of high order anisotropy constants
The origin of high-order anisotropy constants can be found in the interaction of two sublattices (A and B), each having a competing, high anisotropy energy, i.e. different easy directions. In particular, we can no longer consider the system as a rigid collinear magnetic structure, but must allow for substantial deviations from the equilibrium configuration present at zero field. Limiting the expansion to fourth order and neglecting the in-plane contribution, the anisotropy energy becomes:

E = −J cos(θA − θB) + K1A sin²θA + K2A sin⁴θA + K1B sin²θB + K2B sin⁴θB − H·(MA + MB)

where:
J is the exchange integral (J > 0 in the case of ferromagnetism)
K1A, K2A are the anisotropy constants of the A sublattice
K1B, K2B are the anisotropy constants of the B sublattice
H is the applied field
MA, MB are the saturation magnetizations of the A and B sublattices
θA, θB are the angles between the magnetizations of the A and B sublattices and the easy c-axis
The equilibrium equation of the anisotropy energy has no complete analytical solution, so computer analysis is useful. The interesting aspect concerns the simulation of the resulting magnetization curves, which may be continuous or discontinuous with FOMP.
By computer it is possible to fit the obtained results with an equivalent anisotropy energy expression:

E = K1eq sin²θ + K2eq sin⁴θ + K3eq sin⁶θ

where:
K1eq, K2eq and K3eq are the equivalent anisotropy constants up to sixth order
θ is the angle between the magnetization and the easy c-axis
So, starting from a fourth-order anisotropy energy expression, we obtain an equivalent expression of sixth order; that is, higher-order anisotropy constants can be derived from the competing anisotropies of different sublattices.
FOMP in other symmetries
The problem for cubic crystal system has been approached by Bozorth, and partial results have been obtained by different authors, but exact complete phase diagrams with anisotropy contributions up to sixth and eighth order have only been determined more recently.
The FOMP in the trigonal crystal system has been analyzed for the case of an anisotropy energy expression up to fourth order:

E = K1 sin²θ + K2 sin⁴θ + K3 sin³θ cosθ cos3φ

where θ and φ are the polar angles of the magnetization vector with respect to the c-axis. The study of the energy derivatives allows the determination of the magnetic phases and the FOMP phases as in the hexagonal case; see the reference for the diagrams.
References
Ferromagnetism | FOMP | [
"Chemistry",
"Materials_science"
] | 1,753 | [
"Magnetic ordering",
"Ferromagnetism"
] |
35,023,447 | https://en.wikipedia.org/wiki/Single-%20and%20double-acting%20cylinders | In mechanical engineering, the cylinders of reciprocating engines are often classified by whether they are single- or double-acting, depending on how the working fluid acts on the piston.
Single-acting
A single-acting cylinder in a reciprocating engine is a cylinder in which the working fluid acts on one side of the piston only. A single-acting cylinder relies on the load, springs, other cylinders, or the momentum of a flywheel, to push the piston back in the other direction. Single-acting cylinders are found in most kinds of reciprocating engine. They are almost universal in internal combustion engines (e.g. petrol and diesel engines) and are also used in many external combustion engines such as Stirling engines and some steam engines. They are also found in pumps and hydraulic rams.
Double-acting
A double-acting cylinder is a cylinder in which the working fluid acts alternately on both sides of the piston. In order to connect the piston in a double-acting cylinder to an external mechanism, such as a crank shaft, a hole must be provided in one end of the cylinder for the piston rod, and this is fitted with a gland or "stuffing box" to prevent escape of the working fluid. Double-acting cylinders are common in steam engines but unusual in other engine types. Many hydraulic and pneumatic cylinders use them where it is needed to produce a force in both directions. A double-acting hydraulic cylinder has a port at each end, supplied with hydraulic fluid for both the retraction and extension of the piston. A double-acting cylinder is used where an external force is not available to retract the piston or it can be used where high force is required in both directions of travel.
Steam engines
Steam engines normally use double-acting cylinders. However, early steam engines, such as atmospheric engines and some beam engines, were single-acting. These often transmitted their force through the beam by means of chains and an "arch head", as only a tension in one direction was needed.
Where these were used for pumping mine shafts and only had to act against a load in one direction, single-acting designs remained in use for many years. The main impetus towards double-acting cylinders came when James Watt was trying to develop a rotative beam engine, that could be used to drive machinery via an output shaft. Compared to a single-cylinder engine, a double-acting cylinder gave a smoother power output. The high-pressure engine, as developed by Richard Trevithick, used double-acting pistons and became the model for most steam engines afterwards.
Some of the later steam engines, the high-speed steam engines, used single-acting pistons of a new design. The crosshead became part of the piston, and there was no longer any piston rod. This was for similar reasons to the internal combustion engine, as avoiding the piston rod and its seals allowed a more effective crankcase lubrication system.
Small models and toys often use single-acting cylinders for the above reason but also to reduce manufacturing costs.
Internal combustion engines
In contrast to steam engines, nearly all internal combustion engines have used single-acting cylinders.
Their pistons are usually trunk pistons, where the gudgeon pin joint of the connecting rod is within the piston itself. This avoids the crosshead, piston rod and its sealing gland, but it also makes a single-acting piston almost essential. This, in turn, has the advantage of allowing easy access to the bottom of the piston for lubricating oil, which also has an important cooling function. This avoids local overheating of the piston and rings.
Crankcase compression two-stroke engines
Small petrol two-stroke engines, such as for motorcycles, use crankcase compression rather than a separate supercharger or scavenge blower. This uses both sides of the piston as working faces, the lower side of the piston acting as a piston compressor to compress the inlet charge ready for the next stroke. The piston is still considered as single-acting, as only one of these faces produces power.
Double-acting internal combustion engines
Some early gas engines, such as Lenoir's original engines, from around 1860, were double-acting and followed steam engines in their design.
Internal combustion engines soon switched to single-acting cylinders. This was for two reasons: as for the high-speed steam engine, the high force on each piston and its connecting rod was so great that it placed large demands upon the bearings. A single-acting piston, where the direction of the forces was consistently compressive along the connecting rod, allowed for tighter bearing clearances. Secondly the need for large valve areas to provide good gas flow, whilst requiring a small volume for the combustion chamber so as to provide good compression, monopolised the space available in the cylinder head. Lenoir's steam engine-derived cylinder was inadequate for the petrol engine and so a new design, based around poppet valves and a single-acting trunk piston appeared instead.
Extremely large gas engines were also built as blowing engines for blast furnaces, with one or two extremely large cylinders and powered by the burning of furnace gas. These, particularly those built by Körting, used double-acting cylinders. Gas engines require little or no compression of their charge, in comparison to petrol or compression-ignition engines, and so the double-acting cylinder designs were still adequate, despite their narrow, convoluted passageways.
Double-acting cylinders have been infrequently used for internal combustion engines since, although Burmeister & Wain made 2-stroke cycle double-acting (2-SCDA) diesels for marine propulsion before 1930. The first, of 7,000 hp, was fitted in the British MV Amerika (United Baltic Co.) in 1929. The two B&W SCDA engines fitted to the in 1937 produced 24,000 hp each.
USS Pompano
In 1935 the US submarine USS Pompano was ordered as part of the Perch class. Six boats were built, with three different diesel engine designs from different makers. Pompano was fitted with H.O.R. (Hooven-Owens-Rentschler) 8-cylinder double-acting engines that were a licence-built version of the MAN auxiliary engines of the cruiser Leipzig. Owing to the limited space available within the submarines, either opposed-piston or, in this case, double-acting engines were favoured for being more compact. Pompano's engines were a complete failure and were wrecked during trials before even leaving the Mare Island Navy Yard. Pompano was laid up for eight months until 1938 while the engines were replaced. Even then the engines were regarded as unsatisfactory and were replaced by Fairbanks-Morse engines in 1942. While Pompano was still being built, the Salmon-class submarines were ordered. Three of these were built by Electric Boat, with a 9-cylinder development of the H.O.R. engine. Although not as great a failure as Pompano's engines, this version was still troublesome and the boats were later re-engined with the same single-acting General Motors 16-248 V16 engines as their sister boats. Other Electric Boat-constructed submarines of the Sargo and Seadragon classes, as well as the first few of the Gato class, were also built with these 9-cylinder H.O.R. engines, but were later re-engined.
Hydraulic cylinders
A hydraulic cylinder is a mechanical actuator that is powered by a pressurised liquid, typically oil. It has many applications, notably in construction equipment (engineering vehicles), manufacturing machinery, and civil engineering.
Footnotes
References
Piston engines
Steam engines
Hydraulics | Single- and double-acting cylinders | [
"Physics",
"Chemistry",
"Technology"
] | 1,565 | [
"Engines",
"Piston engines",
"Physical systems",
"Hydraulics",
"Fluid dynamics"
] |
35,025,857 | https://en.wikipedia.org/wiki/RAMP%20Simulation%20Software%20for%20Modelling%20Reliability%2C%20Availability%20and%20Maintainability | RAMP Simulation Software for Modelling Reliability, Availability and Maintainability (RAM) is a computer software application developed by WS Atkins specifically for the assessment of the reliability, availability, maintainability and productivity characteristics of complex systems that would otherwise prove too difficult, cost too much or take too long to study analytically. The name RAMP is an acronym standing for Reliability, Availability and Maintainability of Process systems.
RAMP models reliability using failure probability distributions for system elements, as well as accounting for common mode failures. RAMP models availability using logistic repair delays caused by shortages of spare parts or manpower, and their associated resource conditions defined for system elements. RAMP models maintainability using repair probability distributions for system elements, as well as preventive maintenance data and fixed logistic delays between failure detection and repair commencement.
RAMP consists of two parts:
RAMP Model Builder. A front-end interactive graphical user interface (GUI).
RAMP Model Processor. A back-end discrete-event simulation that employs the Monte Carlo method.
RAMP Model Builder
The RAMP Model Builder enables the user to create a block diagram describing the dependency of the process being modelled on the state of individual elements in the system.
Elements
Elements are the basic building blocks of a system modelled in RAMP and can have user-specified failure and repair characteristics in the form of probability distributions, typically of Mean Time Between Failure (MTBF) and Mean Time To Repair (MTTR) values respectively, chosen from the following:
Weibull: Defined by scale and shape parameters (or optionally 50th and 95th percentiles for repairs).
Negative exponential: Defined by mean average.
Lognormal: Defined by median average and dispersion (or optionally 50th and 95th percentiles for repairs).
Fixed (Uniform): Defined by a maximum time to failure or repair.
Empirical (user-defined): Defined by a multiplier.
Elements can represent any part of a system from a specific failure mode of a minor component (e.g. isolation valve fails open) to major subsystems (e.g. compressor or power turbine failure) depending on the level and detail of the analysis required.
Deterministic elements
RAMP allows the user to define deterministic elements which are failure free and/or are unrepairable. These elements may be used to represent parameters of the process (e.g. purity of feedstock or production demand at a particular time) or where necessary in the modelling logic (e.g. to provide conversion factors).
Q values
Each element of the model has a user-defined process 'q value' representing a parameter of interest (e.g. mass flow, generation capacity etc.). Each element is considered to be either operating or not operating and has associated performance values q = Q or q = 0 respectively. The interpretation of each 'q value' in the model depends on the parameter of interest being modelled, which is typically chosen during the system analysis stage of model design.
Groups
Elements with interacting functionality can be organised into groups. Groups can be further combined (to any depth) to produce a Process Dependency Diagram (PDD) of the system, which is similar to a normal reliability block diagram (RBD) commonly used in reliability engineering, but also allows complex logical relationships between groups and elements to permit a more accurate representation of the process being modelled. The PDD should not be confused with a flow diagram since it describes dependency, not flow. For example, an element may appear in more than one position in the PDD if this is required to represent the true dependency of the process on that element. Groups may also be shown in full or may be compressed to allow the screen to show other areas to greater resolution.
Group types
Each group can be one of eleven group types, each with its own rule for combining 'q values' of elements and/or other groups within it to produce a 'q value' output. Groups thus define how the behaviour of each element affects the reliability, availability, maintainability and productivity of the system. The eleven group types are divided into two classes:
Five 'Flow' group types:
Minimum (M): qM = min[q1, q2,...qn]
Active Redundant (A): qA = min[Rating, (q1 + q2 + ... + qn)] unless qA < Cut-off, then qA = 0
Standby Redundant (S): qS = as for Active Redundant, but where the first component is always assumed to be duty equipment.
Time (T): qT = 0 if component with 'q value' q1 is in a "down" state when time through mission t < t0, otherwise qT = q1 + ... + qm if component with 'q value' q1 is in an "up" state when time t ≥ t0 + (m-1) x Time Delay, where m = 1 to n.
Buffer (B): if the buffer is not empty, qB = q2; else qB = min[q1, q2]. The level at time 0 = Initial Level. The buffer empties as output while the component with 'q value' q2 is in an "up" state, with level at time t = level at time (t-1) - (q2 - q1); it fills as input while the component with 'q value' q2 is in a "down" state, with level at time t = Capacity if level at time (t-1) + q1 > Capacity, otherwise level at time t = level at time (t-1) + (q2 - q1). Buffer input and output may also be limited by buffer constraints.
Six 'Logic' group types:
Product (P): qP = q1 x q2 x ... x qn
Quotient (Q): qQ = q1 / q2
Conditionally Greater Than (G): if q1 > q2 then qG = q1 else qG = 0
Conditionally Less Than (L): if q1 < q2 then qL = q1 else qL = 0
Difference (D): qD = max[q1 - q2, 0]
Equality (E): qE = q1 if q1 lies outside the range PA to PB, qE = q2 if q1 lies inside the range PA to PB
Three group types (Active Redundant, Standby Redundant and Time) are displayed in parallel configurations (vertically down the screen). All others are displayed in series configurations (horizontally across the screen).
Six group types (Buffer, Quotient, Conditionally Greater Than, Conditionally Less Than, Difference and Equality) contain exactly two components with 'q values' q1 and q2. All others contain two or more components with 'q values' q1, q2 to qn.
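As an illustration of how these combination rules work, the following is a minimal sketch (written for this description, not RAMP's actual code; the Rating and Cut-off handling follows the Active Redundant rule stated above):

import math

# Illustrative sketch of a few RAMP-style 'q value' combination rules.
def q_minimum(qs):
    # Minimum (M): qM = min[q1, q2, ... qn]
    return min(qs)

def q_active_redundant(qs, rating, cut_off=0.0):
    # Active Redundant (A): qA = min[Rating, sum of inputs],
    # unless qA < Cut-off, in which case qA = 0.
    q = min(rating, sum(qs))
    return q if q >= cut_off else 0.0

def q_product(qs):
    # Product (P): qP = q1 x q2 x ... x qn
    return math.prod(qs)

def q_difference(q1, q2):
    # Difference (D): qD = max[q1 - q2, 0]
    return max(q1 - q2, 0.0)

# Two 50-unit pumps feeding a line rated at 80 units:
print(q_active_redundant([50.0, 50.0], rating=80.0))  # 80.0
# With one pump failed (q = 0), output falls to the surviving pump:
print(q_active_redundant([50.0, 0.0], rating=80.0))   # 50.0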
Element states
An element may be in one of five possible states and its 'q value' is determined by its state:
Undergoing preventive maintenance (q = 0).
Being repaired following failure, including queueing for repair (q = 0).
Failed but undetected, dormant failure (q = 0). (e.g. standby equipment unavailable in the event of failure of duty equipment. Thus a problem may not be apparent until a failure of the duty equipment occurs.)
Up but passive, available but not being used (q = 0). (e.g. standby equipment available in the event of failure of duty equipment.)
Up and active, being used (q = Q > 0). (i.e. operating as intended.)
Occurrence of a state transition for an element is determined largely by the user-defined parameters for that element (i.e. its failure and repair distributions and any preventive maintenance cycles).
Element resource and repair conditions
There is often a time delay between an element failing and the commencement of repair of the element. This may be caused by a lack of spare parts, the unavailability of manpower or the element cannot be repaired due to dependencies on other elements (e.g. a pump cannot be repaired because the isolating valve is defective and cannot be closed). In all of these cases, the element must be queued for repair. RAMP allows the user to define multiple resource conditions per element, all of which must be satisfied to allow a repair to be commenced. Each resource condition is one of five types:
Repair Trade: a specified number of a repair trade must be available.
Spare: a specified number of a spare part must be available.
Group Q Value: a specified group must satisfy a condition regarding its 'q value'.
Buffer Level: a specified buffer must satisfy a condition regarding its level.
Element State: a specified element must satisfy a condition regarding its state.
Repair trades repair condition
Repair trades can be specified for the repair of any element, and they represent manpower in the form of a set of skilled maintenance workers with a particular trade. A repair trade can be used for the duration of an element repair (i.e. logistic delay plus a time value drawn from the element repair distribution). On completion of the repair, the repair trade becomes available to repair another element. The number of repairs which can be performed simultaneously for elements requiring a particular repair trade depends on the number of repair trade resources allocated and the number of that repair trade specified as a requirement for the repair.
Spares repair condition
If a spare part is required for an element repair, then the spare part is withdrawn from stock at the instant the repair commences (i.e. as soon as the element leaves the repair queue). The maximum number of spare parts of each type that may be held in stock is user-defined. The stock may either be replenished periodically at a user-defined time interval, or when the stock falls below a user-defined level, in which case RAMP allows a user-defined time delay between reordering and the actual replenishment of the stock.
Group Q value repair condition
RAMP allows the user to specify that an element cannot be repaired until the 'q value' of a nominated group satisfies one of six conditions (>, ≥, <, ≤, =, ≠) relative to a user-defined non-negative real number repair constraint. These conditions may be used to model certain rules in a system (e.g. a pump cannot be repaired until a tank is empty).
Buffer level repair condition
Specifying a buffer level constraint means that preventive maintenance of an element can be restricted until the buffer level of a nominated buffer group satisfies one of six conditions (>, ≥, <, ≤, =, ≠) relative to a user-defined non-negative real number repair constraint. These conditions may be used to model certain rules in a system (e.g. it may be a requirement for maintenance of a submersible pump that the tank it is in should be empty before repair work commences).
Element state repair condition
RAMP allows the user to specify that an element cannot be repaired until the state of another nominated element satisfies one of six conditions (>, ≥, <, ≤, =, ≠) relative to a user-defined non-negative real number repair constraint.
Repair policy
Each element has user-defined parameters that can affect how it is repaired:
Logistic repair delay: A time period that must elapse before a repair can start on an element. It is a fixed time that is added to the repair time sampled from the user-defined repair probability distribution for the element. Typically, it represents a combination of the time taken for the repair team to reach the site of failure, time to isolate the failed item, and time taken to obtain the required spare part from store.
Repair 'good-as-new' or 'bad-as-old': Refers to the failure rate of an element rather than its 'q-value'. By default an element is restored to 'good-as-new' following repair, but there is an option to toggle a 'bad-as-old' state that simulates a quick-fix equivalent to restoring the element to the beginning of the wear-out phase of a Weibull bathtub curve, should a Weibull probability distribution with shape greater than one be used for repairs.
Repair priority: Used only if element resource and repair conditions are specified (i.e. it is only used if an element has to queue for repair rather than going directly for repair). The purpose of this field is to help determine the sequence in which elements are drawn from the repair queue as resources become available for element repair. Elements are repaired according to their repair priority, where 1 is highest priority, 2 is next highest, and so on. Elements with the same priority are repaired on a 'first come first served' basis.
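The queue discipline described above can be pictured with a small sketch (an assumed implementation for illustration only, not RAMP's internals):

import heapq

# Repair queue: lower priority number first; ties broken
# 'first come first served' by the time an element joined the queue.
repair_queue = []  # heap of (priority, queue_entry_time, element_name)

def queue_for_repair(priority, entry_time, element):
    heapq.heappush(repair_queue, (priority, entry_time, element))

def next_element_to_repair():
    return heapq.heappop(repair_queue)[2]

queue_for_repair(2, 10.0, "pump A")
queue_for_repair(1, 12.0, "compressor")  # higher priority, arrives later
queue_for_repair(2, 8.0, "pump B")
print(next_element_to_repair())  # compressor (priority 1 beats priority 2)
print(next_element_to_repair())  # pump B (equal priority, queued earlier)
print(next_element_to_repair())  # pump A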
In addition, each element in a Standby Redundant group has more parameters that can affect how it is repaired:
Passive failure rate factor: Factor by which the element failure rate is multiplied when operating in the passive state as opposed to the active state. By default this factor is one; typically it lies between zero and one, indicating a lower passive failure rate than active failure rate.
Probability of switching failure: Percentage probability that the element will fail when switched from the passive state into the active state. If such a switching failure occurs, the element must be repaired in the normal way before it can be used again.
Startup delay: Startup of the element going from a passive state to an active state is delayed by a specified time.
Preventive maintenance
RAMP allows the user to model preventive maintenance for each system element by cycles expressed using the three parameters 'up-time', 'down-time' and 'down-time' start time. RAMP also has an option to toggle 'intelligent preventive maintenance' on each system element, which attempts to improve system performance by doing preventive maintenance when the element is already in 'down-time' for other reasons.
Common mode failures
Common mode failures (CMFs) are failures that cause a number of elements to fail at the same time (e.g. due to the occurrence of a fire or some other catastrophic event, or the failure of a power supply that provides power to several separately defined elements). RAMP allows the user to define CMFs by stating the set of affected elements and the frequency distribution for occurrences of the CMF. When a CMF occurs, any elements which are affected by that particular CMF are placed in the failed state and must be repaired, being queued for repair if necessary. Any elements failed by a CMF will be repaired according to the repair distribution defined for that element. Elements which are already being repaired, are in the repair queue, or are undergoing preventive maintenance remain unaffected by the occurrence of an associated CMF.
Criticalities
The criticality of an element is a measure of how much the element has affected the 'q value' (i.e. performance) of the group to which it belongs. Elements with a high criticality cause more 'down-time' or unavailability on average and are thus critical to the performance of the group. The criticality of an element may vary according to the level of the group (e.g. a motor failure may have a very high criticality for a group that contains failure modes for one pump, but a very low criticality for a group that contains several redundant pumps).
Time units
RAMP allows the user to set the time unit of interest, according to scale and fidelity considerations. The only requirement is that time units should be used consistently across a model to avoid misleading results. Time units are expressed in the following input data:
Element failure probability distributions.
Element repair probability distributions.
Element logistic delay times (before repair).
Element preventive maintenance 'up-times', 'down-times' and start points.
Common mode failure probability distributions.
Percentile times in empirical probability distributions (for failure or repair).
Delay times in Time groups.
Spare part replenishment intervals or re-order delay times.
Rolling average span and increment.
Histogram 'down-times'.
Simulated time period of interest.
Element types
Elements that are assumed to have the same failure and repair characteristics and share a common pool of spare parts can be assigned the same user-defined element type (i.e. pump, motor, tank etc.). This allows for faster construction of complex systems containing many elements that are similar in function since the entry of element data does not need to be repeated for such elements.
Import functionality
Previously built systems can be imported as subsystems of the system currently displayed. This allows for faster construction of complex systems containing many subsystems since they can be constructed in parallel by multiple users before being imported into a common system.
RAMP Model Processor
The RAMP Model Processor mimics the system operating over the time period of interest - known in RAMP as a mission - by sampling failure and repair times from probability distributions (with probabilities drawn from a pseudo-random number generator) and combining with other data defined in the RAMP Model Builder to determine state transition events for each element in the model. The simulation uses discrete events that are queued in chronological order with each event being processed in turn to determine the states and thus the 'q values' of every element in the model at that discrete point in time. Group combination rules are used to determine the 'q values' at successively higher levels of groups, culminating in 'q values' of the outermost groups that when averaged over the events of the simulation typically provide performance measures of the system, which are output in model results in terms of the chosen parameters of interest.
By running enough missions over the same time period of interest (different possible histories from the same starting point), RAMP can be used to generate statistically significant results that establish the likely distribution of the user-defined parameters of interest and thus objectively assess the system, with the confidence bands on the results dependent on the number of missions simulated. On the other hand, by running a mission length that is long in comparison with the failure frequencies and repair times, and simulating only one mission, RAMP can be used to establish the steady-state performance of the system.
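The following minimal sketch (an illustration under simplifying assumptions: a single element, a negative exponential failure distribution and a lognormal repair distribution, as in the distribution list above) shows the Monte Carlo idea of simulating many missions and collecting the resulting availability distribution:

import math
import random

def simulate_mission(mtbf, repair_median, repair_sigma, mission_length):
    # Return the fraction of one mission for which the element is up.
    t, up_time = 0.0, 0.0
    while t < mission_length:
        time_to_failure = random.expovariate(1.0 / mtbf)  # negative exponential
        up_time += min(time_to_failure, mission_length - t)
        t += time_to_failure
        if t >= mission_length:
            break
        # Lognormal repair defined by its median and dispersion.
        t += random.lognormvariate(math.log(repair_median), repair_sigma)
    return up_time / mission_length

# Many missions give a distribution of availability rather than a point value,
# from which confidence bands can be estimated.
missions = [simulate_mission(mtbf=500.0, repair_median=24.0,
                             repair_sigma=0.5, mission_length=8760.0)
            for _ in range(1000)]
print(sum(missions) / len(missions))  # mean availability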
History of RAMP
RAMP was originally developed by Rex Thompson & Partners Ltd. in the mid-1980s as an availability simulation program, primarily used for plant and process modelling. The ownership of RAMP was transferred to T.A. Group upon its founding in January 1990, and then to Fluor Corporation when it acquired T.A. Group in April 1996, before passing to the Advantage Technical Consulting business of parent company Advantage Business Group Ltd., formed in February 2001 by a management buy-out of the consulting and information technology businesses of Fluor Corporation, operating in the transport, defence, energy and manufacturing sectors. RAMP is currently owned by Atkins following its acquisition of Advantage Business Group Ltd. in March 2007. Extensive redevelopment by Atkins of the original RAMP application for DOS has produced a series of RAMP applications for the Microsoft Windows platform, with the RAMP Model Builder written in Visual Basic and the RAMP Model Processor written in FORTRAN.
Uses of RAMP
Due to its inherent flexibility, RAMP is now used to optimise system design and support critical decision making in many sectors. RAMP provides the capability to model many factors that may affect a system, such as changes in specification or procurement contracts, 'what if' studies, sensitivity analysis, equipment redundancy, equipment criticality, and delayed failures, as well as allowing the generation of results that can be exported for failure mode, effects and criticality analysis (FMECA) and cost-benefit analysis.
References
Reliability analysis
Reliability engineering
Simulation software
Monte Carlo software | RAMP Simulation Software for Modelling Reliability, Availability and Maintainability | [
"Engineering"
] | 4,028 | [
"Systems engineering",
"Reliability analysis",
"Reliability engineering"
] |
35,026,040 | https://en.wikipedia.org/wiki/Abscopal%20effect | The abscopal effect is a hypothesis in the treatment of metastatic cancer whereby shrinkage of untreated tumors occurs concurrently with shrinkage of tumors within the scope of the localized treatment. R.H. Mole proposed the term "abscopal" ('ab' - away from, 'scopus' - target) in 1953 to refer to effects of ionizing radiation "at a distance from the irradiated volume but within the same organism".
Initially associated with single-tumor, localized radiation therapy, the term "abscopal effect" has also come to encompass other types of localized treatments such as electroporation and intra-tumoral injection of therapeutics. However, the term should only be used when truly local treatments result in systemic effects. For instance, chemotherapeutics commonly circulate through the blood stream and therefore exclude the possibility of any abscopal response.
The mediators of the abscopal effect of radiotherapy were unknown for decades. In 2004, it was postulated for the first time that the immune system might be responsible for these "off-target" anti-tumor effects. Various studies in animal models of melanoma, mammary, and colorectal tumors have substantiated this hypothesis. Abscopal effects of Targeted intraoperative radiotherapy have been seen in clinical studies, including in randomized trials where women treated with lumpectomy for breast cancer combined with whole breast radiotherapy showed reduced mortality from non-breast-cancer causes when compared with whole breast radiotherapy. Furthermore, immune-mediated abscopal effects were also described in patients with metastatic cancer. Whereas these reports were extremely rare throughout the 20th century, the clinical use of immune checkpoint blocking antibodies such as ipilimumab or pembrolizumab has greatly increased the number of abscopally responding patients in selected groups of patients such as those with metastatic melanoma or lymphoma.
Mechanisms
Similar to immune reactions against antigens from bacteria or viruses, the abscopal effect requires priming of immune cells against tumor antigens. Local irradiation of a tumor nodule may lead to immunogenic forms of tumor cell death and liberation of tumor cell-derived antigens. These antigens can be recognized and processed by antigen-presenting cells within the tumor (dendritic cells and macrophages). Cytotoxic T cells which recognize these tumor antigens may in turn be primed by the tumor antigen-presenting cells. In contrast to the local effect of irradiation on the tumor cells, these cytotoxic T cells circulate through the blood stream and are thus able to destroy remaining tumor cells in distant parts of the body which were not irradiated. Accordingly, increases in tumor-specific cytotoxic T cells were shown to correlate with abscopal anti-tumor responses in patients. Vice versa, the abscopal effect is abolished after experimental depletion of T cells in various animal models.
Abscopal effects of ionizing radiation are often blocked by the immunosuppressive microenvironment inside the irradiated tumor which prevents effective T cell priming. This explains why the effect is so rarely seen in patients receiving radiotherapy alone. In contrast, the combination of immunomodulatory drugs such as ipilimumab and pembrolizumab can partially reconstitute systemic anti-tumor immune reactions induced after local tumor radiotherapy. The optimal combination of radiation dose and fractionation with immunomodulatory drugs is currently under intensive investigation. In this context, it was proposed that radiation doses above 10 to 12 Gray might be ineffective in inducing immunogenic forms of cell death. However, there is so far no consensus on the optimal radiation regimen needed to increase the chance of abscopal tumor regression.
References
Cancer treatments
Radiation therapy
Immune system
Medical treatments | Abscopal effect | [
"Biology"
] | 802 | [
"Immune system",
"Organ systems"
] |
35,026,180 | https://en.wikipedia.org/wiki/Holomorphic%20Lefschetz%20fixed-point%20formula | In mathematics, the holomorphic Lefschetz formula is an analogue for complex manifolds of the Lefschetz fixed-point formula that relates a sum over the fixed points of a holomorphic mapping of a compact complex manifold to a sum over its Dolbeault cohomology groups.
Statement
If f is an automorphism of a compact complex manifold M with isolated fixed points, then

\sum_{f(p)=p} \frac{1}{\det(1 - A_p)} = \sum_{q} (-1)^q \operatorname{tr} \big( f^* \mid H^{0,q}(M) \big)

where
The sum is over the fixed points p of f
The linear transformation Ap is the action induced by f on the holomorphic tangent space at p
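As a quick illustration (a worked example added here, not part of the original statement), take M = CP^1 and the automorphism f([z_0 : z_1]) = [z_0 : λz_1] with λ ≠ 0, 1. The fixed points are p = [1 : 0], where A_p = λ, and p = [0 : 1], where A_p = λ^{-1}. The fixed-point side gives

\frac{1}{1-\lambda} + \frac{1}{1-\lambda^{-1}} = \frac{1}{1-\lambda} - \frac{\lambda}{1-\lambda} = 1,

which matches the cohomology side, since H^{0,0}(CP^1) = C with f^* acting as the identity, and H^{0,q}(CP^1) = 0 for q > 0.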
See also
Bott residue formula
References
Complex manifolds
Theorems in algebraic geometry | Holomorphic Lefschetz fixed-point formula | [
"Mathematics"
] | 132 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
35,026,878 | https://en.wikipedia.org/wiki/Midwestern%20Universities%20Research%20Association | The Midwestern Universities Research Association (MURA) was a collaboration between 15 universities with the goal of designing and building a particle accelerator for the Midwestern United States. It existed from 1953 to 1967, but could not achieve its goal in this time and lost funding. It was thought that President John F. Kennedy would have supported the MURA machine, while one of President Lyndon B. Johnson's first actions was the shutdown of the MURA machine and laboratory.
In its formative years, Donald Kerst was the director of MURA. At this institution, Keith Symon invented the FFAG accelerator, independently of Tihiro Ohkawa, which combines several concepts of cyclotrons and synchrotrons. FFAG concepts were extensively developed at MURA. The proposed MURA accelerators were scaling FFAG synchrotrons, meaning that orbits of any momentum are photographic enlargements of those of any other momentum.
The concept of FFAG acceleration was revived in the early 1980s, and gained interest up to the present day, see e.g. EMMA (accelerator).
References
Further reading
F. Cole. O Camelot. Supplement to Proc. 16th Intl. Conf. on Cyclotrons and their Applications (Cyclotrons 2001)
This paper was published posthumously.
A book by MURA veterans that was designed to augment Cole's posthumous manuscript.
Daniel S. Greenberg, Chapters X and XI of The Politics of Pure Science, Plume Books, 1967, University of Chicago Press, 1999.
Deals with the politics around MURA, particularly the battle over funding higher-energy machines being studied at Berkeley and at Brookhaven (New York) and funding MURA, a machine that would produce the same energy as Argonne's Zero Gradient Synchrotron but at 100-fold the intensity.
College and university associations and consortia in the United States
Research projects
Defunct organizations based in the United States
1953 establishments in the United States
1967 disestablishments in the United States
Physics organizations
Collaborative projects
Accelerator physics | Midwestern Universities Research Association | [
"Physics"
] | 415 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
35,030,397 | https://en.wikipedia.org/wiki/C10H13N5O3 |
The molecular formula C10H13N5O3 (molar mass: 251.246 g/mol, exact mass: 251.1018 u) may refer to:
Cordycepin
Deoxyadenosine
Molecular formulas | C10H13N5O3 | [
"Physics",
"Chemistry"
] | 66 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
35,030,417 | https://en.wikipedia.org/wiki/C9H12N2O5 |
The molecular formula C9H12N2O5 (molar mass: 228.20 g/mol, exact mass: 228.0746 u) may refer to:
Deoxyuridine (dU)
Zebularine
Molecular formulas | C9H12N2O5 | [
"Physics",
"Chemistry"
] | 68 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
35,031,328 | https://en.wikipedia.org/wiki/C47H80O19P3 | The molecular formula C47H80O19P3 may refer to:
Phosphatidylinositol 3,4-bisphosphate
Phosphatidylinositol 3,5-bisphosphate
Phosphatidylinositol 4,5-bisphosphate
Molecular formulas | C47H80O19P3 | [
"Physics",
"Chemistry"
] | 67 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
35,031,587 | https://en.wikipedia.org/wiki/C20H32O4 | The molecular formula C20H32O4 may refer to:
Arachidonic acid 5-hydroperoxide
Hepoxilin
Leukotriene B4 (LTB4)
the main (acid resin) constituent of frankincense resin
Molecular formulas | C20H32O4 | [
"Physics",
"Chemistry"
] | 56 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
46,339,949 | https://en.wikipedia.org/wiki/Bogdanov%20map | In dynamical systems theory, the Bogdanov map is a chaotic 2D map related to the Bogdanov–Takens bifurcation. It is given by the transformation:

x_{n+1} = x_n + y_{n+1}
y_{n+1} = y_n + ε y_n + k x_n (x_n − 1) + μ x_n y_n

where ε, k and μ are the parameters of the map.
The Bogdanov map is named after Rifkat Bogdanov.
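A minimal sketch of iterating the map (the parameter values below are arbitrary illustration choices; ε = μ = 0 gives the area-preserving limit of the map):

# Iterate the Bogdanov map.
def bogdanov(x, y, eps=0.0, k=1.2, mu=0.0):
    y_next = y + eps * y + k * x * (x - 1.0) + mu * x * y
    x_next = x + y_next
    return x_next, y_next

x, y = 0.1, 0.1
orbit = []
for _ in range(10000):
    x, y = bogdanov(x, y)
    orbit.append((x, y))
print(orbit[-1])  # last point of the computed orbit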
See also
List of chaotic maps
References
DK Arrowsmith, CM Place, An introduction to dynamical systems, Cambridge University Press, 1990.
Arrowsmith, D. K.; Cartwright, J. H. E.; Lansbury, A. N.; and Place, C. M. "The Bogdanov Map: Bifurcations, Mode Locking, and Chaos in a Dissipative System." Int. J. Bifurcation Chaos 3, 803–842, 1993.
Bogdanov, R. "Bifurcations of a Limit Cycle for a Family of Vector Fields on the Plane." Selecta Math. Soviet 1, 373–388, 1981.
External links
Bogdanov map at MathWorld
Chaotic maps
Exactly solvable models
Dynamical systems | Bogdanov map | [
"Physics",
"Mathematics"
] | 216 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical analysis stubs",
"Mathematical objects",
"Mechanics",
"Mathematical relations",
"Chaotic maps",
"Dynamical systems"
] |
38,973,439 | https://en.wikipedia.org/wiki/Higgs%20field%20%28classical%29 | Spontaneous symmetry breaking, a vacuum Higgs field, and its associated fundamental particle the Higgs boson are quantum phenomena. A vacuum Higgs field is responsible for spontaneous symmetry breaking the gauge symmetries of fundamental interactions and provides the Higgs mechanism of generating mass of elementary particles.
At the same time, classical gauge theory admits a comprehensive geometric formulation where gauge fields are represented by connections on principal bundles. In this framework, spontaneous symmetry breaking is characterized as a reduction of the structure group G of a principal bundle P → X to its closed subgroup H. By the well-known theorem, such a reduction takes place if and only if there exists a global section h of the quotient bundle P/H → X. This section is treated as a classical Higgs field.
A key point is that there exists a composite bundle P → P/H → X, where P → P/H is a principal bundle with the structure group H. Then matter fields, possessing an exact symmetry group H, in the presence of classical Higgs fields are described by sections of some composite bundle Y → P/H → X, where Y → P/H is some associated bundle to P → P/H. Herewith, a Lagrangian of these matter fields is gauge invariant only if it factorizes through the vertical covariant differential of some connection on the principal bundle P → P/H, but not P → X.
An example of a classical Higgs field is a classical gravitational field identified with a pseudo-Riemannian metric g on a world manifold X. In the framework of gauge gravitation theory, it is described as a global section of the quotient bundle FX/O(1,3) → X, where FX is a principal bundle of the tangent frames to X with the structure group GL(4,R).
See also
Gauge gravitation theory
Reduction of the structure group
Spontaneous symmetry breaking
Bibliography
External links
G. Sardanashvily, Geometry of classical Higgs fields, Int. J. Geom. Methods Mod. Phys. 3 (2006) 139; .
Theoretical physics
Gauge theories
Symmetry | Higgs field (classical) | [
"Physics",
"Mathematics"
] | 361 | [
"Symmetry",
"Theoretical physics",
"Quantum mechanics",
"Geometry",
"Theoretical physics stubs",
"Quantum physics stubs"
] |
38,974,895 | https://en.wikipedia.org/wiki/Slip%20factor | In turbomachinery, the slip factor is a measure of the fluid slip in the impeller of a compressor or a turbine, mostly a centrifugal machine. Fluid slip is the deviation in the angle at which the fluid leaves the impeller from the impeller's blade/vane angle. Being quite small in axial impellers (inlet and outlet flow in the same direction), slip is a very important phenomenon in radial impellers and is useful for the accurate estimation of the work input or the energy transfer between the impeller and the fluid, the rise in pressure, and the velocity triangles at the impeller exit.
A simple explanation for the fluid slip can be given as: Consider an impeller with z number of blades rotating at angular velocity ω. A difference in pressure and velocity during the course of clockwise flow through the impeller passage can be observed between the trailing and leading faces of the impeller blades. High pressure and low velocity are observed at the leading face of the impeller's blade as compared to lower pressure with high velocity at the trailing face of the blade. This results in circulation in the direction of ω around the impeller blade which prevents the air from acquiring the whirl velocity equivalent to impeller speed with non-uniform velocity distribution at any radius.
This phenomenon reduces the output whirl velocity, which is a measure of the net power output from a turbine or a compressor. Hence, the slip factor accounts for a slip loss that affects the net power developed, a loss which increases with increasing flow rate.
Factors accounting for slip factor
Relative eddy.
Back eddy.
Impeller design or geometry
Mean blade loading.
Thickness of blade.
Finite number of blades.
Fluid entry conditions.
Working fluid's viscosity.
Effect of boundary layer growth.
Flow separation.
Friction forces on the walls of flow packages.
Boundary layer blockage.
Mathematical Formulae for Slip factor
Mathematically, the slip factor, denoted by σ, is defined as the ratio of the actual and ideal values of the whirl velocity components at the exit of the impeller. The ideal value can be calculated using an analytical approach, while the actual value should be observed experimentally:

σ = V'w2 / Vw2

where,
V'w2 : Actual Whirl Velocity Component,
Vw2 : Ideal Whirl Velocity Component
Usually, σ varies from 0 to 1, with an average ranging from 0.8 to 0.9.
The Slip Velocity is given as:
VS = Vw2 - V'w2 = Vw2(1-σ)
The Whirl Velocity is given as:
V'w2 = σ Vw2
Slip Factor correlations
Stodola's Equation: According to Stodola, it is the relative eddy that fills the entire exit section of the impeller passage. For a given flow geometry, the slip factor increases with the increase in the number of impeller blades, and is thus an important parameter in accounting for losses:

σ = 1 − (π sin β2)/z

where z = number of blades and β2 = blade angle at the impeller exit.
For a radial tip, β2 = 90°, ∴ σ = 1 − π/z
Theoretically, in order to achieve perfect flow guidance, one could increase the number of thin vanes indefinitely so that the flow leaves the impeller at exactly the vane angle.
However, later experiments proved that beyond a particular value, a further increase in the number of blades results in a reduction of the slip factor due to the increase in blockage area.
Stanitz's Equation: Stanitz found that the slip velocity does not depend upon the blade exit angle and hence gave the following equation:

σ = 1 − 0.63π/z

where z = number of blades, and β2 varies from 45° to 90°.
For a radial tip, β2 = 90°: σ = 1 − 0.63π/z ≈ 1 − 1.98/z
Balje's formula: An approximate formula given by Balje for radial-tipped (β2 = 90°) blade impellers:

σ = 1/(1 + 6.2/(z n^(2/3)))

where z = number of blades, n = ratio of the impeller tip radius to the eye radius.
The above models clearly state that the slip factor is solely a function of the impeller geometry. However, later studies proved that the slip factor also depends on other factors, namely mass flow rate, viscosity, etc.
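A short numerical illustration of the Stodola and Stanitz correlations above (the blade count is chosen arbitrarily for the example):

import math

def stodola(z, beta2_deg):
    # Stodola: sigma = 1 - pi*sin(beta2)/z
    return 1.0 - math.pi * math.sin(math.radians(beta2_deg)) / z

def stanitz(z):
    # Stanitz: sigma = 1 - 0.63*pi/z (independent of blade exit angle)
    return 1.0 - 0.63 * math.pi / z

# Radial-tipped impeller (beta2 = 90 degrees) with 12 blades:
print(round(stodola(12, 90.0), 3))  # 0.738
print(round(stanitz(12), 3))        # 0.835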
See also
Notes
It is found that a reduction in blade angle towards the impeller exit results in an increase of the slip factor with increasing flow rate, and vice versa.
The slip factor is a function of mass flow rate because of the back eddy.
References
Flow simulation in radial pump impellers and evaluation of slip factor (July, 2015), http://pia.sagepub.com/content/early/2015/07/08/0957650915594953.full.pdf?ijkey=pW8QmRIKoDzyXzO&keytype=finite.
Seppo A. Korpela (2011), Principles of Turbomachinery. John Wiley & Sons, Inc. .
S.L. Dixon (1998), Fluid Mechanics And Thermodynamics of Turbomachinery. Elsevier Butterworth-Heinemann, Inc. .
Rama Gorla, Aijaz Khan, Turbomachinery:Design and Theory. Marcel Dekker, Inc. .
Fluid Machine - FKM
Analysis and Validation of a Unified Slip Factor Model for Impellers at Design and off-Design Conditions
Numerical Study of Slip Factor in Centrifugal Pumps and Study Factors Affecting its Performance
Fluid Machinery - NPTEL
Experimental and Analytical Investigations of Slip Factor In Radial Tipped Centrifugal Fan.
Gas compressors
Pumps | Slip factor | [
"Physics",
"Chemistry"
] | 1,101 | [
"Pumps",
"Turbomachinery",
"Gas compressors",
"Physical systems",
"Hydraulics"
] |
38,975,888 | https://en.wikipedia.org/wiki/Oncolytic%20herpes%20virus | Many variants of herpes simplex virus have been considered for viral therapy of cancer; the early development of these was thoroughly reviewed in the journal Cancer Gene Therapy in 2002. This page describes (in the order of development) the most notable variants—those tested in clinical trials: G207, HSV1716, NV1020 and Talimogene laherparepvec (previously Oncovex-GMCSF). These attenuated versions are constructed by deleting viral genes required for infecting or replicating inside normal cells but not cancer cells, such as ICP34.5, ICP6/UL39, and ICP47.
HSV1716
HSV1716 is a first generation oncolytic virus developed by the Glasgow Institute of Virology, and subsequently by Virttu Biologics (formerly Crusade Laboratories, a spin-out from The Institute of Virology), to selectively destroy cancer cells. The virus has the trade name SEPREHVIR. It is based on the herpes simplex virus (HSV-1). The HSV1716 strain has a deletion of the gene ICP34.5. ICP34.5 is a neurovirulence gene (enabling the virus to replicate in neurons of the brain and spinal cord). Deletion of this gene provides the property of tumor-selective replication to the virus (i.e. largely prevents replication in normal cells, while still allowing replication in tumor cells), although it also reduces replication in tumor cells as compared to wild type HSV.
A vital part of the normal mechanism of HSV-1, the ICP34.5 protein has been proposed to condition post-mitotic cells for viral replication. With no ICP34.5 gene, the HSV-1716 variant is unable to overcome normal defences of healthy differentiated cells (mediated by PKR) to replicate efficiently. However, tumour cells have much weaker PKR-linked defences, which may be the reason why HSV1716 effectively kills a wide range of tumour cell lines in tissue culture.
An HSV1716 variant, HSV1716NTR is an oncolytic virus generated by inserting the enzyme NTR into the virus HSV1716 as a GDEPT strategy. In-vivo, administration of the prodrug CB1954 to athymic mice bearing either A431 or A2780 tumour xenografts, 48 hours after intra-tumoral injection of HSV1790, resulted in a marked reduction in tumour volumes and significantly improved survival compared to administration of virus alone. A similar approach has been taken with a variant of HSV1716 that expresses the noradrenaline transporter to deliver radioactive iodine into individual infected cancer cells, by tagging a protein that cancer cells transport. The nor-adrenaline transporter specifically transports a compound containing radioactive iodine across the cell membrane, using genes from the virus. The only cells in the body that receive a significant radiation dose are those infected and their immediate neighbours.
Clinical trials
High grade glioma: Three phase I trials have been completed and two phase II trials are in preparation.
Squamous cell carcinoma of head and neck: A phase I trial has been completed.
Malignant melanoma: A phase I trial has been completed.
Hepatocellular carcinoma: A phase I/II trial is in preparation.
Malignant pleural mesothelioma: A phase I/II trial is in progress.
Non-CNS pediatric cancer: A phase I trial is in progress.
G207
G207 was constructed as a second-generation vector from HSV-1 laboratory strain F, with ICP34.5 deleted and the ICP6 gene inactivated by insertion of the E. coli LacZ gene.
Two phase I clinical trials in glioma were completed. The results of the first trial were published simultaneously with the first trial of HSV1716 in 2000, with commentators praising the demonstration of safety of these viruses when injected into brain tumours but also expressing disappointment that viral replication could not be demonstrated due to the difficulty of taking biopsies from brain tumours.
NV1020
NV1020 is an oncolytic herpes virus initially developed by Medigene Inc. and licensed for development by Catherex Inc. in 2010. NV1020 has a deletion of just one copy of the ICP34.5 gene and ICP6 is intact. A direct comparison of NV1020 and G207 in a mouse model of peritoneal cancer showed that NV1020 is more effective at lower doses.
Clinical trials
A Phase I/II study, completed in 2008, evaluated NV1020 for treatment of metastatic colorectal cancer in the liver. The study assessed tumour response by CT scan and FDG-PET scans, showing 67% of patients had an initial increase in tumour size followed by a decrease in 64% of patients.
Talimogene laherparepvec
Talimogene laherparepvec is the USAN name for the oncolytic virus also known as 'OncoVEX GM-CSF'. It was developed by BioVex Inc. (Woburn, MA, USA & Oxford, UK) until BioVex was purchased by Amgen in January 2011.
It is a second-generation herpes simplex virus based on the JS1 strain and expressing the immune stimulatory factor GM-CSF. Like other oncolytic versions of HSV it has a deletion of the gene encoding ICP34.5, which provides tumor selectivity. It also has a deletion of the gene encoding ICP47, a protein that inhibits antigen presentation, and an insertion of a gene encoding GM-CSF, an immune stimulatory cytokine. Deletion of the gene encoding ICP47 also puts the US11 gene (a late gene) under control of the immediate early ICP47 promoter. The earlier and greater expression of US11 (also involved in overcoming PKR-mediated responses) largely overcomes the reduction in replication in tumor cells of ICP34.5-deleted HSV as compared to wild type virus, but without reducing tumor selectivity.
Clinical trials
Including phase III : See Talimogene laherparepvec
See also
Virotherapy
Oncolytic virus
Oncolytic adenovirus
Herpes simplex virus
References
External links
Virttu Biologics
Amgen – Pipeline
Experimental cancer treatments
Simplexviruses
Virotherapy
Biotechnology | Oncolytic herpes virus | [
"Biology"
] | 1,382 | [
"Biotechnology",
"nan"
] |
38,980,437 | https://en.wikipedia.org/wiki/Marklund%20convection | Marklund convection, named after Swedish physicist Göran Marklund, is a convection process that takes place in filamentary currents of plasma. It occurs within a plasma with an associated electric field, that causes convection of ions and electrons inward towards a central twisting filamentary axis. A temperature gradient within the plasma will also cause chemical separation based on different ionization potentials.
Mechanism
In Marklund's paper, the plasma convects radially inwards towards the center of a cylindrical flux tube. During this convection, the different chemical constituents of the plasma, each having its specific ionization potential, enter progressively cooler regions. The plasma constituents recombine and become neutral, and are thus no longer under the influence of the electromagnetic forcing. The ionization potentials thus determine where the different chemicals are deposited.
This provides an efficient means to accumulate matter within a plasma. In a partially ionized plasma, electromagnetic forces act on the non-ionized material indirectly through the viscosity between the ionized and non-ionized material.
Hannes Alfvén showed that elements with the lowest ionization potential are brought closest to the axis, and form concentric hollow cylinders whose radii increase with ionization potential. The drift of ionized matter from the surroundings into the rope means that the rope acts as an ion pump, which evacuates surrounding regions, producing areas of extremely low density.
See also
QCD string, sometimes called a flux tube
Flux transfer event
Birkeland current
Magnetohydrodynamics (MHD)
References
Plasma phenomena
Convection | Marklund convection | [
"Physics",
"Chemistry"
] | 319 | [
"Transport phenomena",
"Physical phenomena",
"Plasma physics",
"Plasma phenomena",
"Convection",
"Thermodynamics"
] |
38,983,016 | https://en.wikipedia.org/wiki/Proton-gated%20urea%20channel | The proton-gated urea channel is an inner-membrane protein essential for the survival of Helicobacter pylori. It enables the rapid influx of urea into the bacterium. It is closed at pH 7.0 and fully open at pH 5.0. Urease activity buffers the periplasm to pH 6.1. Using multiwavelength anomalous dispersion (MAD), its structure, a compact hexameric ring about 95 Å in diameter and 45 Å in height, was determined. The centre is filled with a lipid plug that forms an asymmetric bilayer. Amino acid residues that can be protonated in the periplasmic domain are important for proton sensing or gating.
References
Bacterial proteins
Membrane proteins | Proton-gated urea channel | [
"Chemistry",
"Biology"
] | 160 | [
"Biochemistry stubs",
"Protein stubs",
"Protein classification",
"Membrane proteins"
] |
38,984,221 | https://en.wikipedia.org/wiki/Project%201640 | Project 1640 is a high contrast imaging project at Palomar Observatory. It seeks to image brown dwarfs and Jupiter-sized planets around nearby stars. Rebecca Oppenheimer, associate curator and chair of the Astrophysics Department at the American Museum of Natural History, is the principal investigator for the project.
Instruments
The two main instruments behind Project 1640 are an Integral field spectrograph (IFS) and an Apodized-pupil Lyot coronagraph at the Hale Telescope at Palomar. Together they form the basis of a high-contrast imaging long-term observational program. The instrument uses the chromatic nature of the speckle noise to distinguish it from any true astrophysical companions, including software which increases sensitivity by 10-20 times. Such sensitivity can detect planets of several Jupiter masses. The spectrograph obtains 23 images across the J and H bands (1.06-1.78 μm), with a spectral resolution of 45. In approximately 2013, a Post-coronagraph Wave Front Calibration System was added. Its goal is to achieve a wave front irregularity of less than 10 nm.
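As a rough consistency check (an illustration, not from the project documentation), a constant spectral resolution R = λ/Δλ ≈ 45 implies channel widths that grow geometrically with wavelength, so the number of channels across a band is about R·ln(λmax/λmin):

import math

R = 45.0
lam_min, lam_max = 1.06, 1.78  # micrometres, the J and H bands
print(round(R * math.log(lam_max / lam_min)))  # ~23, matching the 23 images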
Preliminary results
On March 10, 2013, Project 1640 made its first remote imaging of another solar system. It imaged four red exoplanets orbiting the star HR8799, 128 light years away from Earth, determining the spectra for all four. One significant result was the detection of a chemical abnormality. At normal temperatures, such as those surrounding the four exoplanets, ammonia and methane should both be present in significant amounts. However, the exoplanets have either ammonia or methane in abundance, while the other chemical is missing. Other chemicals such as acetylene (not previously detected on any exoplanet) may also be present. There is also a significant cloud cover on the planets.
References
Astronomical imaging
Exoplanet search projects
Infrared spectroscopy
Palomar Observatory | Project 1640 | [
"Physics",
"Chemistry",
"Astronomy"
] | 387 | [
"Exoplanet search projects",
"Spectrum (physical sciences)",
"Infrared spectroscopy",
"Astronomy projects",
"Spectroscopy"
] |
47,814,230 | https://en.wikipedia.org/wiki/Delta%20Flume | Delta Flume is a 300 meter long man-made flume with a wave generator that is capable of producing waves as tall as five meters, the world's largest artificial waves. It is located at the Deltares Research Institute outside the city of Delft, Netherlands. It is used to simulate forces generated by natural waves in order to test materials used in the construction of dykes. Full-scale testing is especially essential for assessing the effect of vegetation.
See also
O. H. Hinsdale Wave Research Laboratory
References
External links
Simulation
Research projects
Coastal engineering
Buildings and structures in Delft | Delta Flume | [
"Engineering"
] | 120 | [
"Coastal engineering",
"Civil engineering"
] |
47,815,949 | https://en.wikipedia.org/wiki/Collagen-induced%20arthritis | Collagen-induced arthritis (CIA) is a condition induced in mice (or rats) to study rheumatoid arthritis.
CIA is induced in mice by injecting them with an emulsion of complete Freund's adjuvant and type II collagen.
In rats, only one injection is needed, but mice are normally injected twice.
References
Further reading
External links
Animal testing
Arthritis
Collagens | Collagen-induced arthritis | [
"Chemistry"
] | 88 | [
"Animal testing"
] |
47,826,553 | https://en.wikipedia.org/wiki/Enhanced%20privacy%20ID | Enhanced Privacy ID (EPID) is Intel Corporation's recommended algorithm for attestation of a trusted system while preserving privacy. It has been incorporated in several Intel chipsets since 2008 and Intel processors since 2011. At RSAC 2016 Intel disclosed that it has shipped over 2.4B EPID keys since 2008. EPID complies with international standards ISO/IEC 20008 / 20009, and the Trusted Computing Group (TCG) TPM 2.0 for authentication. Intel contributed EPID intellectual property to ISO/IEC under RAND-Z terms. Intel is recommending that EPID become the standard across the industry for use in authentication of devices in the Internet of Things (IoT) and in December 2014 announced that it was licensing the technology to third-party chip makers to broadly enable its use.
EPID
EPID is an enhancement of the Direct Anonymous Attestation (DAA) algorithm. DAA is a digital signature algorithm supporting anonymity. Unlike traditional digital signature algorithms, in which each entity has a unique public verification key and a unique private signature key, DAA provides a common group public verification key associated with many (typically millions) of unique private signature keys. DAA was created so that a device could prove to an external party what kind of device it is (and optionally what software is running on the device) without needing to provide device identity, i.e., to prove you are an authentic member of a group without revealing which member. EPID enhances DAA by providing an additional utility of being able to revoke a private key given a signature created by that key, even if the key itself is still unknown.
Background
In 1999 the Pentium III added a Processor Serial Number (PSN) as a way to create identity for security of endpoints on the internet. However, privacy advocates were especially concerned and Intel chose to remove the feature in later versions. Building on improving asymmetric cryptography of the time and group keys, Intel Labs researched and then standardized a way to get to the benefits of PSN while preserving privacy.
Roles
There are three roles when using EPID: Issuer, Member and Verifier. The issuer is the entity that issues unique EPID private keys for each member of a group. The member is the entity that is trying to prove its membership in a group. The verifier is the entity who is checking an EPID signature to establish whether it was signed by an entity or device which is an authentic member of the group. Current usage by Intel has the Intel Key Generation Facility as the Issuer, an Intel-based PC with embedded EPID key as a member, and a server (possibly running in the cloud) as the verifier (on behalf of some party that wishes to know that it is communicating with some trusted component in a device).
Key issuing options
The issuing of an EPID key can be done directly by the issuer creating an EPID key and delivering securely to the member, or blinded so that the issuer does not know the EPID private key. Having EPID keys embedded in devices before they ship is an advantage for some usages so that EPID is available inherently in the devices as they arrive in the field. Having the EPID key issued using the blinded protocol is an advantage for some usages, since there is never a question about whether the issuer knew the EPID key in the device. It is an option to have one EPID key in the device at time of shipment, and use that key to prove to another issuer that it is a valid device and then get issued a different EPID key using the blinded issuing protocol.
Uses
In recent years EPID has been used for attestation of applications in the platforms used for protected content streaming and financial transactions. It is also used for attestation in Software Guard Extensions (SGX), released by Intel in 2015. It is anticipated that EPID will become prevalent in IoT, where inherent key distribution with the processor chip, and optional privacy benefits will be especially valued.
Proof that a part is genuine
An example usage for EPID is to prove that a device is a genuine device. A verifier wishing to know that a part was genuine would ask the part to sign a cryptographic nonce with its EPID key. The part would sign the nonce and also provide a proof that the EPID key was not revoked. The verifier after checking the validity of the signature and proof would know that the part was genuine. With EPID, this proof is anonymous and unlinkable.
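The flow can be sketched as follows (a toy stand-in for illustration only: a shared MAC replaces EPID's group-signature and revocation mathematics, and all names are hypothetical, so this shows just the challenge-response shape, not EPID's anonymity properties):

import hashlib
import hmac
import os

GROUP_KEY = os.urandom(32)   # stand-in for the group verification key
MEMBER_KEY = GROUP_KEY       # real EPID: many distinct private keys per group
REVOKED = set()              # stand-in signature revocation list

def verifier_challenge():
    return os.urandom(32)    # fresh cryptographic nonce

def member_respond(nonce):
    # The member signs the nonce; real EPID also attaches a non-revocation proof.
    return hmac.new(MEMBER_KEY, nonce, hashlib.sha256).digest()

def verifier_check(nonce, signature):
    expected = hmac.new(GROUP_KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected) and signature not in REVOKED

nonce = verifier_challenge()
print(verifier_check(nonce, member_respond(nonce)))  # True for a genuine member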
Content protection
EPID can be used to attest that a platform can securely stream digital rights management (DRM)-protected content because it has a minimum level of hardware security. The Intel Insider program uses EPID for platform attestation to the rights-holder.
Securing financial transactions
Data Protection Technology (DPT) for Transactions is a product for doing a 2-way authentication of a point of sale (POS) terminal to a backend server based on EPID keys. Using hardware roots of trust based on EPID authentication, the initial activation and provisioning of a POS terminal can securely be performed with a remote server. In general, EPID can be used as the basis to securely provision any cryptographic key material over the air or down the wire with this method.
Internet of things attestation
For securing the IoT, EPID can be used to provide authentication while also preserving privacy. EPID keys placed in devices during manufacturing are ideal for provisioning other keys for other services in a device. EPID keys can be used in devices for services while not allowing users to be tracked by their IoT devices using these services. Yet if required, a known transaction can be used for when an application and user choose (or require) the transaction to be unambiguously known (e.g., a financial transaction). EPID can be used for both persistent identity and anonymity. Whereas alternative approaches exist for persistent identity, it is difficult to convert persistent identity to anonymous identity. EPID can serve both requirements and can enable anonymous identity in a mode of operation that enables persistence, as well. Thus, EPID is ideal for the broad range of anticipated IoT uses.
Security and privacy are foundational to the IoT. Since IoT security and privacy extend beyond Intel processors to other chipmaker's processors in sensors, Intel announced on December 9, 2014 their intent to license EPID broadly to other chip manufacturers for Internet of things applications. On August 18, 2015, Intel jointly announced the licensing of EPID to Microchip and Atmel, and showed it running on a Microchip microcontroller at the Intel Developers Forum.
Internet of things complexity hiding
The Internet of things has been described as a "network of networks" where the internal workings of one network may not be appropriate to disclose to a peer or foreign network. For example, a use case involving redundant or spare IoT devices facilitates availability and serviceability objectives, but network operations that load-balance or replace different devices need not be reflected to peer or foreign networks that "share" a device across network contexts. The peer expects a particular type of service or data structure but likely doesn't need to know about device failover, replacement or repair. EPID can be used to share a common public key or certificate that describes and attests the group of similar devices used for redundancy and availability, but doesn't allow tracking of specific device movements. In many cases, peer networks do not want to track such movements, as it would require, potentially, maintaining context involving multiple certificates and device lifecycles. Where privacy is also a consideration, the details of device maintenance, failover, load balancing and replacement cannot be inferred by tracking authentication events.
Internet of things secure device onboard
Because of EPID's privacy preserving properties, it is ideal for IoT Device identity to allow a device to securely and automatically onboard itself into an IoT Service immediately at the first power on of the device. Essentially the device performs a secure boot, and then before anything else, reaches out across the internet to find the IoT Service that the new owner has chosen for managing the device. An EPID attestation is integral to this initial communication. As a consequence of the EPID attestation, a secure channel is created between the device and IoT Service. Because of the EPID attestation, the IoT Service knows it is talking to the real IoT Device. (Using the secure channel created, there is reciprocal attestation so the IoT Device knows it is talking to the IoT Service the new owner selected to manage it.) Unlike PKI, where the key is unchanging transaction to transaction, an adversary lurking on the network cannot see and correlate traffic by the key used when EPID is employed. Thus privacy of onboarding is preserved and adversaries can no longer collect data to create attack maps for later use when future IoT Device vulnerabilities are discovered. Moreover, additional keys can be securely provisioned over the air or down the wire, the latest version of software, perhaps specific to the IoT Service, can be downloaded and default logins disabled to secure the IoT Device without operator intervention.
On October 3, 2017, Intel announced Intel Secure Device Onboard, a software solution to help IoT Device Manufacturers and IoT Cloud Services privately, securely and quickly onboard IoT Devices into IoT Services. The objective is to onboard "Any Device to Any IoT Platform" for a "superior Onboarding experience and ecosystem enablement ROI". The use cases and protocols from SDO have been submitted to the FIDO Alliance IoT working group.
See also
Elliptic Curve Digital Signature Algorithm
Elliptical curve cryptography
Loss of Internet anonymity
Privacy enhancing technologies
Proof of knowledge
Public-key cryptography
Trusted platform module
References
External links
Puri, Deepak, "IoT security: Intel EPID simplifies authentication of IoT devices," NetworkWorld , retrieved October 10, 2016.
Xiaoyu Ruan: “Chapter 5 – Privacy at the Next Level: Intel’s Enhanced Privacy Identification (EPID) Technology”, Platform Embedded Security Technology Revealed. Apress Media, LLC, 2014. ()
E. Brickell and Jiangtao Li: “Enhanced Privacy ID from Bilinear Pairing for Hardware Authentication and Attestation”. IEEE International Conference on Social Computing / IEEE International Conference on Privacy, Security, Risk and Trust. 2010. (IACR eprint )
Data Protection Technology for Transactions
Intel & Microsoft Class Video on EPID and "0 Touch" IoT Device Onboarding at IDF'16
Elliptic curve cryptography
Cryptography
Digital signature schemes
Trusted computing
Internet privacy software | Enhanced privacy ID | [
"Mathematics",
"Engineering"
] | 2,200 | [
"Trusted computing",
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
47,827,692 | https://en.wikipedia.org/wiki/Giant%20oscillator%20strength | Giant oscillator strength is inherent in excitons that are weakly bound to impurities or defects in crystals.
The spectrum of fundamental absorption of direct-gap semiconductors such as gallium arsenide (GaAs) and cadmium sulfide (CdS) is continuous and corresponds to band-to-band transitions. It begins with transitions at the center of the Brillouin zone. In a perfect crystal, this spectrum is preceded by a hydrogen-like series of transitions to the s-states of Wannier-Mott excitons. In addition to the exciton lines, there are surprisingly strong additional absorption lines in the same spectral region. They belong to excitons weakly bound to impurities and defects and are termed 'impurity excitons'. The anomalously high intensity of the impurity-exciton lines indicates a giant oscillator strength per impurity center, many orders of magnitude larger than the oscillator strength of free excitons per unit cell. Shallow impurity-exciton states work as antennas, borrowing their giant oscillator strength from vast areas of the crystal around them. They were predicted by Emmanuel Rashba, first for molecular excitons and afterwards for excitons in semiconductors. Giant oscillator strengths of impurity excitons endow them with ultra-short radiative lifetimes on the nanosecond scale or below.
Bound excitons in semiconductors: Theory
Interband optical transitions happen at the scale of the lattice constant, which is small compared to the exciton radius. Therefore, for large excitons in direct-gap crystals, the oscillator strength of exciton absorption is proportional to the square of the wave function of the internal motion inside the exciton, evaluated at coinciding values of the electron and hole coordinates. For large excitons this squared amplitude scales as the inverse cube of the exciton radius; hence the oscillator strength per unit cell is of the order of the ratio of the unit cell volume to the cube of the exciton radius. The oscillator strength for producing a bound exciton is expressed through its two-particle wave function and the free-exciton internal wave function as a ratio whose numerator contains an integral of the bound-exciton wave function taken at coinciding electron and hole coordinates.

The coinciding coordinates in the numerator reflect the fact that the exciton is created at a spatial scale small compared with its radius. The integral in the numerator can only be performed for specific models of impurity excitons. However, if the exciton is weakly bound to the impurity, so that the radius of the bound exciton is comparable to or larger than the free-exciton radius and its wave function of internal motion is only slightly distorted, the integral can be evaluated approximately. This immediately results in an estimate in which the bound-exciton oscillator strength is the free-exciton oscillator strength per unit cell multiplied by the number of unit cells inside the bound-exciton volume.

This simple result reflects the physics of the phenomenon of giant oscillator strength: coherent oscillation of the electron polarization in a volume of about the cube of the bound-exciton radius.
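The estimate above can be written out explicitly. This is a hedged reconstruction under the stated assumptions (coherent polarization over the bound-exciton volume a_b^3, with the free-exciton oscillator strength f_ex quoted per unit cell of volume v), not necessarily the exact expression of the original article:

```latex
% a_b^3/v unit cells radiate in phase, enhancing the oscillator strength
f_b \;\sim\; \frac{a_b^{3}}{v}\, f_{\mathrm{ex}}
```

For a bound-exciton radius of order a hundred lattice constants, for example, the enhancement factor a_b^3/v is about 10^6.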
If the exciton is bound to the defect by a weak short-range potential, a more accurate estimate holds. Because the localization radius of a shallow bound state grows as the binding energy decreases, the oscillator strength grows as the inverse 3/2 power of the binding energy. The estimate involves the exciton effective mass, its reduced mass, the exciton ionization energy, the binding energy of the exciton to the impurity, and the electron and hole effective masses.
The giant oscillator strength of shallow trapped excitons results in short radiative lifetimes, which are related to the oscillator strength through the electron mass in vacuum, the speed of light, the refraction index, and the frequency of the emitted light. Typical radiative lifetimes are about a nanosecond or less, and these short lifetimes favor the radiative recombination of excitons over non-radiative channels. When the quantum yield of radiative emission is high, the process can be considered as resonance fluorescence.
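Up to local-field corrections, the relation between radiative lifetime and oscillator strength for an emitter in a medium of refractive index n takes the standard form below (SI units; prefactor conventions vary between references, so treat this as an assumption rather than the article's exact expression):

```latex
\frac{1}{\tau} \;=\; \frac{n\, e^{2} \omega^{2}}{2\pi \varepsilon_{0}\, m\, c^{3}}\, f
```

For f of order 10 and a photon energy of a few eV, this gives lifetimes in the 0.1-1 ns range, consistent with the experimental values quoted below.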
Similar effects exist for optical transitions between exciton and biexciton states.
An alternative description of the same phenomenon is in terms of polaritons: giant cross-sections of the resonance scattering of electronic polaritons on impurities and lattice defects.
Bound excitons in semiconductors: Experiment
While the specific values of the oscillator strengths and lifetimes are not universal and change within collections of specimens, typical values confirm the above regularities. In CdS, where the exciton-impurity binding energies are in the meV range, impurity-exciton oscillator strengths vastly exceeding the free-exciton value per unit cell were observed. A large value per single impurity center should not be surprising, because the transition is a collective process including many electrons in a region of volume of about the cube of the bound-exciton radius. High oscillator strength results in low-power optical saturation and picosecond-scale radiative lifetimes. Similarly, radiative lifetimes of about 1 ns were reported for impurity excitons in GaAs. The same mechanism is responsible for short radiative times, down to 100 ps, for excitons confined in CuCl microcrystallites.
Bound molecular excitons
Similarly, spectra of weakly trapped molecular excitons are also strongly influenced by adjacent exciton bands. It is an important property of typical molecular crystals with two or more symmetrically equivalent molecules in the elementary cell, such as benzene and naphthalene, that their exciton absorption spectra consist of doublets (or multiplets) of bands strongly polarized along the crystal axes, as was demonstrated by Antonina Prikhot'ko. This splitting of strongly polarized absorption bands originating from the same molecular level is known as the 'Davydov splitting' and is the primary manifestation of molecular excitons. If the low-frequency component of the exciton multiplet is situated at the bottom of the exciton energy spectrum, then the absorption band of an impurity exciton approaching the bottom from below is enhanced in this component of the spectrum and reduced in the two others; in the spectroscopy of molecular excitons this phenomenon is sometimes referred to as the 'Rashba effect'. As a result, the polarization ratio of an impurity-exciton band depends on its spectral position and becomes indicative of the energy spectrum of free excitons. In large organic molecules the energy of impurity excitons can be shifted gradually by changing the isotopic content of guest molecules. Building on this option, Vladimir Broude developed a method of studying the energy spectrum of excitons in the host crystal by changing the isotopic content of guest molecules. Interchanging the host and the guest allows studying the energy spectrum of excitons from the top. The isotopic technique has more recently been applied to study energy transport in biological systems.
See also
Exciton
Polariton
Oscillator strength
Quantum yield
Resonance fluorescence
References
Condensed matter physics | Giant oscillator strength | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,327 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
47,827,978 | https://en.wikipedia.org/wiki/Heat%20transfer%20vinyl | Heat transfer vinyl (HTV) is a speciality polyurethane with a heat-activated adhesive (typically polyester-based) that can be used on certain fabrics and materials to apply designs to promotional products, textiles and apparel, such as T-shirts. It comes laminated together with a clear polyester carrier in a roll or sheet form, with an adhesive tacky backing, so it can be cut, weeded, and placed on a substrate for application via a heat press. The design is cut into the material with a cutting plotter in reverse (adhesive/vinyl side up). The excess material is removed with tools such as hooks or tweezers - a manual and dextrous process referred to as "weeding". The tacky adhesive between the carrier and the vinyl holds together complex designs, although the labour naturally increases the more weeding that is required. The clear polyester carrier keeps the design visible to aid positioning on the substrate. For these and other reasons, it is a popular and more robust alternative to transfer paper (that does not incorporate a carrier sheet). Heat transfer vinyl is made in single colors and also has special options such as patterned, glitter, flocked, holographic, glow-in-the-dark, reflective and 3D puff. Heat transfer vinyl also benefits from a high degree of stretch and rebound, achieved by a memory effect, making it suitable for use on apparel and other flexible items including the garments typically used, such as sports jerseys.
Types of heat transfer vinyl
Heat transfer vinyl comes in single colors, in the specialty options listed above, in full-color pattern options, and in a printable version that must be used with solvent ink and a solvent printer. It is best used for simple designs with few colors, since each individual color or pattern in the design must be cut, weeded, and heat pressed separately. Certain heat transfer vinyls can be layered to form multi-colored designs, but the more layers involved, the harder it is to align them to achieve the end result. Heat transfer vinyl cannot be used for full-color pictures or anything with gradients; other decoration methods are better suited to those applications.
Heat transfer vinyl can be used to create special effects with its glitter, flocked, holographic, glow-in-the-dark, and 3D puff options. The layering of these types of vinyl is dependent on the type of vinyl used.
Heat transfer vinyl is sold in sizes ranging from small sheets to large "master" rolls of up to 60" x 50 yards. Typical sizes are 15" and 19" wide rolls in 1, 5, 10, 25, and 50-yard lengths. In metric territories, widths are typically 200, 300 and 500 millimetres, sold by the metre or in increments (5, 10, 25, 50, 75 and 100 metre "logs"), priced according to economy of scale.
The film is typically 20 to 100 micrometres thick, although specifications vary as to whether this includes the adhesive layer or not. In recent years, very thin vinyls have become available with improvements in the polymer extrusion and calendering process, which varies by manufacturer.
Despite being commonly referred to as vinyl, the material does not contain PVC, meaning it is generally considered non-toxic, and it sometimes (for instance in Europe) carries internationally recognised ecological certifications, such as those from Oeko-Tex (although laboratories including SGS S.A. are also used). A common certification for heat transfer vinyl is Oeko-Tex Standard 100 (Class 1). The use of heat transfer vinyl to decorate apparel is therefore often cited as an ecologically friendly alternative to plastisol screen-printing inks, despite the consumable nature of the plastics involved.
Usage
Heat transfer vinyl is traditionally placed on textile products. Because of the way the vinyl is applied, it must be used on products that can withstand the heat and pressure required to make the transfer adhere properly. For fabrics and clothing, this is typically 250-300 °F (120-150 °C); a quick conversion check is sketched below. The product (also known as a substrate) will also need to hold up under the clamping action and pressure of the heat press.
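The short Python sketch below converts and validates press temperatures against that window. The 250-300 °F range is simply the generic figure from the paragraph above; real settings vary by manufacturer and vinyl type.

```python
# Illustrative only: real press settings come from the vinyl manufacturer.
def f_to_c(deg_f: float) -> float:
    return (deg_f - 32) * 5 / 9

PRESS_RANGE_F = (250, 300)  # generic fabric-application window quoted above

def in_window(temp_f: float) -> bool:
    lo, hi = PRESS_RANGE_F
    return lo <= temp_f <= hi

for t in (240, 265, 305):
    status = "ok" if in_window(t) else "out of range"
    print(f"{t} F = {f_to_c(t):.1f} C -> {status}")
```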
Each heat transfer vinyl manufacturer will list what products can be used for each type of vinyl. Fabrics such as cotton, cotton/polyester blends, polyester, and canvas work well with heat transfer vinyl. There are types of vinyl that can also be used with nylon and leather. However, products such as paper and plastics do not work well because they cannot take the heat required to adhere the vinyl to the substrate. It is very important to make sure the correct vinyl type is used with the correct substrate.
Equipment
Equipment needed to work with heat transfer vinyl includes design software and a vinyl cutter.
Desktop cutters are suitable for low volumes and budgets, while standalone cutters are more appropriate for higher volumes. Print/cut machines can additionally produce printed vinyl.
Weeding tools are used to remove the heat transfer vinyl that is not going to be pressed onto the product from its adhesive carrier sheet. A heat press or iron is used to transfer the vinyl onto the product. Heat presses can be set to a specific temperature and pressure level to suit a specific vinyl and are recommended for professional results.
Durability
Heat transfer vinyl should last the lifetime of the product if it is attached to the substrate according to the manufacturer's instructions.
References
Vinyl polymers
Heat transfer | Heat transfer vinyl | [
"Physics",
"Chemistry"
] | 1,145 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics"
] |
37,525,024 | https://en.wikipedia.org/wiki/Critical%20exponent%20of%20a%20word | In mathematics and computer science, the critical exponent of a finite or infinite sequence of symbols over a finite alphabet describes the largest number of times a contiguous subsequence can be repeated. For example, the critical exponent of "Mississippi" is 7/3, as it contains the string "ississi", which is of length 7 and period 3.
If w is an infinite word over the alphabet A and x is a finite word over A, then x is said to occur in w with exponent α, for positive real α, if there is a factor y of w with y = x^a x0, where x0 is a prefix of x, a is the integer part of α, and the length |y| = α|x|: we say that y is an α-power. The word w is α-power-free if it contains no factors which are β-powers for any β ≥ α.
The critical exponent for w is the supremum of the α for which w has α-powers, or equivalently the infimum of the α for which w is α-power-free.
Definition
If w is a word (possibly infinite), then the critical exponent E(w) of w is defined to be
E(w) = sup { |x|/p(x) : x a finite factor of w },
where p(x) is the smallest period of x.
Examples
The critical exponent of the Fibonacci word is (5 + √5)/2 ≈ 3.618.
The critical exponent of the Thue–Morse sequence is 2. The word contains arbitrarily long squares, but in any factor xxb the letter b is not a prefix of x.
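For a finite word the critical exponent reduces to the maximum of length/period over all factors, so the "Mississippi" example can be checked by brute force. A minimal sketch (quartic in the word length, which is fine for short inputs):

```python
from fractions import Fraction

def critical_exponent(word: str) -> Fraction:
    """Max |x|/p(x) over all factors x, where p(x) is the smallest period of x."""
    best = Fraction(0)
    n = len(word)
    for i in range(n):
        for j in range(i + 1, n + 1):
            x = word[i:j]
            m = len(x)
            for p in range(1, m + 1):  # smallest p with x[k] == x[k+p] for all k
                if all(x[k] == x[k + p] for k in range(m - p)):
                    best = max(best, Fraction(m, p))
                    break
    return best

print(critical_exponent("Mississippi"))  # 7/3, realized by the factor "ississi"
```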
Properties
The critical exponent can take any real value greater than 1.
The critical exponent of a morphic word over a finite alphabet is either infinite or an algebraic number of degree at most the size of the alphabet.
Repetition threshold
The repetition threshold of an alphabet A of n letters is the minimum critical exponent of infinite words over A: clearly this value RT(n) depends only on n. For n=2, any binary word of length four has a factor of exponent 2, and since the critical exponent of the Thue–Morse sequence is 2, the repetition threshold for binary alphabets is RT(2) = 2. It is known that RT(3) = 7/4, RT(4) = 7/5 and that for n≥5 we have RT(n) ≥ n/(n-1). It is conjectured that the latter is the true value, and this has been established for 5 ≤ n ≤ 14 and for n ≥ 33.
See also
Critical exponent of a physical system
Notes
References
Formal languages
Combinatorics on words | Critical exponent of a word | [
"Mathematics"
] | 539 | [
"Formal languages",
"Mathematical logic",
"Combinatorics on words",
"Combinatorics"
] |
37,525,778 | https://en.wikipedia.org/wiki/Mittag-Leffler%20summation | In mathematics, Mittag-Leffler summation is any of several variations of the Borel summation method for summing possibly divergent formal power series, introduced by
Definition
Let
y(z) = Σ_{k=0}^∞ y_k z^k
be a formal power series in z.
Define the transform B_α y of y by
B_α y(t) = Σ_{k=0}^∞ y_k t^k / Γ(1 + αk).
Then the Mittag-Leffler sum of y is given by
lim_{α→0} B_α y(z),
if each sum converges and the limit exists.
A closely related summation method, also called Mittag-Leffler summation, is given as follows.
Suppose that the Borel transform B_1 y(z) converges to an analytic function near 0 that can be analytically continued along the positive real axis to a function growing sufficiently slowly that the following integral is well defined (as an improper integral). Then the Mittag-Leffler sum of y is given by
y(z) = ∫_0^∞ e^{-t} B_α y(t^α z) dt.
When α = 1 this is the same as Borel summation.
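The α = 1 case can be checked numerically. For Euler's divergent series Σ_k k! (−z)^k, the α = 1 transform is Σ_k (−zt)^k = 1/(1 + zt), so the integral definition above reduces to the classical Borel sum. A short mpmath sketch (the closed form of the transform is specific to this example):

```python
from mpmath import mp, quad, exp, inf

mp.dps = 30
z = 1
# Mittag-Leffler sum with alpha = 1 (= Borel sum) of sum_k k! (-z)^k
ml_sum = quad(lambda t: exp(-t) / (1 + z * t), [0, inf])
print(ml_sum)  # ~0.596347..., the classical value assigned to 1 - 1 + 2 - 6 + ...
```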
See also
Mittag-Leffler distribution
Mittag-Leffler function
Nachbin's theorem
References
Summability methods | Mittag-Leffler summation | [
"Mathematics"
] | 199 | [
"Sequences and series",
"Summability methods",
"Mathematical structures"
] |
37,528,165 | https://en.wikipedia.org/wiki/Plebanski%20tensor | The Plebanski tensor is an order 4 tensor in general relativity constructed from the trace-free Ricci tensor. It was first defined by Jerzy Plebański in 1964.
Let S_{ab} be the trace-free Ricci tensor:
S_{ab} = R_{ab} − (1/4) R g_{ab}.
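As a quick sanity check of the trace-free property, one can verify g^{ab}S_{ab} = 0 symbolically. A small sympy sketch, using a generic symmetric Ricci tensor and, purely to keep the illustration short, a flat metric (an assumption; the identity holds for any metric):

```python
import sympy as sp

g = sp.eye(4)  # flat metric, used here only to keep the check short
# Generic symmetric Ricci tensor: R[i,j] == R[j,i] by construction
R = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f"R{min(i, j)}{max(i, j)}"))
R_scalar = sum(g.inv()[i, j] * R[i, j] for i in range(4) for j in range(4))
S = R - sp.Rational(1, 4) * R_scalar * g   # trace-free Ricci tensor
trace_S = sp.simplify(sum(g.inv()[i, j] * S[i, j] for i in range(4) for j in range(4)))
print(trace_S)  # 0
```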
Then the Plebanski tensor is defined as a specific quadratic combination of the trace-free Ricci tensor S_{ab}.
The advantage of the Plebanski tensor is that it shares the same symmetries as the Weyl tensor. It therefore becomes possible to classify different spacetimes based on additional algebraic symmetries of the Plebanski tensor in a manner analogous to the Petrov classification.
References
Tensors
Tensors in general relativity | Plebanski tensor | [
"Physics",
"Engineering"
] | 126 | [
"Tensors",
"Physical quantities",
"Tensor physical quantities",
"Tensors in general relativity",
"Relativity stubs",
"Theory of relativity"
] |
40,271,536 | https://en.wikipedia.org/wiki/SolarWave | SolarWave was a Swedish company producing solar powered water purification systems.
SolarWave was founded in 2009 by Bengt Skörelid and Thomas Larsson. The company had its headquarters in Gävle and was represented internationally by its subsidiaries SolarWave Tanzania Limited in Tanzania and SolarWave Uganda Limited in Uganda, as well as by resellers in several African countries. The company ceased operations in 2019 and was liquidated in 2021.
References
Water supply | SolarWave | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 90 | [
"Hydrology",
"Water supply",
"Environmental engineering"
] |
40,280,433 | https://en.wikipedia.org/wiki/Eat%20Just | Eat Just, Inc. is a private company headquartered in San Francisco, California, US. It develops and markets plant-based alternatives to conventionally produced egg products, as well as cultivated meat products. Eat Just was founded in 2011 by Josh Tetrick and Josh Balk. It raised about $120 million in early venture capital and became a unicorn in 2016 by surpassing a $1 billion valuation. It has been involved in several highly publicized disputes with traditional egg industry interests. In December 2020, its cultivated chicken meat became the first cultured meat to receive regulatory approval in Singapore. Shortly thereafter, Eat Just's cultured meat was sold to diners at the Singapore restaurant 1880, making it the "world's first commercial sale of cell-cultured meat".
History
2011–2014
Eat Just Inc. was founded in 2011 under the name Beyond Eggs and then Hampton Creek Foods by childhood friends Josh Balk and Josh Tetrick. It started in Los Angeles, California, then moved to Tetrick's garage in San Francisco in 2012. At the time, the company had about 30 employees. Initially, it had $500,000 then $2 million in venture capital funding from Khosla Ventures.
Hampton Creek's first two years were spent in research and development. It tested plant varieties in a lab in order to identify plant proteins with properties similar to chicken eggs, such as gelling and emulsifying. Eat Just created an automated process for testing plants that was patented in 2016. Information like each plant's drought tolerance, taste, and any likely allergenic problems were compiled into a database called Orchard.
In September 2013, Whole Foods became the first major grocery chain to sell Hampton Creek products, when it started using JUST Mayo in certain prepared foods. This was followed by deals with Costco and Safeway. By early 2014, the company had raised $30 million in venture capital funding. Later that year, it raised another $90 million.
The American Egg Board responded to the growth of Hampton Creek and other egg substitute companies with an advertising campaign featuring the slogan "Accept No Substitutes."
2014–2016
In October 2014, competitor Unilever sued Hampton Creek Foods alleging the "JUST Mayo" name misled consumers into believing the product contained real eggs. Public sentiment favored Hampton Creek and more than 100,000 people eventually signed a Change.org petition asking Unilever to "stop bullying sustainable food companies." Unilever withdrew its lawsuit six weeks after filing it. However, the Food and Drug Administration sent a warning letter saying the Just Mayo name was misleading to consumers, since the product must contain real eggs to be called "mayonnaise." In December 2015, Hampton Creek reached an agreement with the FDA to make it more clear in the Just Mayo packaging that it does not contain real eggs. Publicity from the lawsuit and an egg shortage from the avian bird flu helped Hampton Creek grow.
In late 2015, several former employees anonymously alleged Hampton Creek was exaggerating the science behind its products, mislabeling the ingredients in pre-production samples, and manipulating employment contracts. Then, emails secured through the Freedom of Information Act showed that the American Egg Board had hired Edelman Public Relations to engage in a campaign targeting Hampton Creek's reputation. Among other things, the emails indicated that the Egg Board discussed interfering with Hampton Creek's contract with Whole Foods, encouraged Unilever in its legal actions against Hampton Creek, and included jokes about hiring a hitman to kill the Hampton Creek CEO. The United States Department of Agriculture opened an investigation and the CEO of the Egg Board resigned.
Then, in 2016, a Bloomberg story reported on evidence suggesting that Hampton Creek bought its own products off store shelves in order to inflate sales numbers during fund-raising. Hampton Creek said this was part of an unorthodox quality control program. The Securities and Exchange Commission and the Department of Justice started an inquiry that was closed in March 2017 after concluding the allegations were insignificant.
2016–present
By 2016, Eat Just had 142 employees. Late that year, it also substantially expanded its laboratory testing of prospective plant proteins, using robots and automation. In August 2016, Hampton Creek raised another round of funding from investors. The funding made the company a unicorn with a valuation of over $1 billion, but the amount of the funding was not disclosed.
In June 2017, Target stopped selling Hampton Creek products after seeing an anonymous letter alleging food safety issues, such as Salmonella and Listeria at Eat Just's manufacturing facility. Target said none of its customers reported getting sick and an FDA investigation found no contaminants in Hampton Creek's products.
Several of Hampton Creek's executives were fired in 2017, after the company alleged they were trying to take away CEO Josh Tetrick's control of the company. By July 2017, the entire board had been fired, resigned, or moved to an advisory role except for the CEO and founder Josh Tetrick, reportedly over disputes with the CEO. Five new board members were appointed.
Hampton Creek started transitioning its website and other branding to focus on the "Just" name in June 2017. The company's legal name was changed the following year. This prompted trademark litigation with a bottled water company run by Jaden Smith that also uses the "Just" brand.
In late 2019, Eat Just Inc. acquired its first manufacturing plant. The 30,000 square foot plant in Appleton, Minnesota, was originally a Del Dee Foods plant. Eat Just sales increased by more than 100% from February to July 2020, due to the COVID-19 pandemic.
In 2020, Eat Just created an Asian subsidiary with Proterra Investment Partners Asia. Through the joint venture, Proterra promised to invest up to $100 million and, with Eat Just, started building a manufacturing facility in Singapore.
Eat Just raised $200 million in funding in March 2021 to fund global expansion. Also in 2021, Eat Just's GOOD Meat subsidiary raised $267 million in venture capital funding.
In May 2022, Eat Just signed a contract with ABEC Inc., which manufactures bioprocess equipment, to build 10 bioreactors for growing meat. Tetrick estimates that the new bioreactors could potentially produce 30 million pounds of cultured meat per year. The location for the bioreactors is pending regulatory approval by the Food and Drug Administration and U.S. Department of Agriculture. In 2023, ABEC filed a lawsuit against Eat Just for breach of contract. The lawsuit alleges that Eat Just has failed to live up to its financial obligations and has failed to pay over $30 million worth of invoices.
Eat Just partnered with C2 Capital Partners in 2022, receiving $25 million from the private equity firm to expand Eat Just's operations in China.
In April 2023, Barnes & Noble launched breakfast sandwich made with Just Egg in 500 B&N Cafés nationwide.
Lawsuits
Eat Just has been the subject of a number of lawsuits, primarily over unpaid bills. In 2023, bioreactor specialist ABEC sued Eat Just in a lawsuit claiming breach of contract due to Eat Just not paying over $61 million worth of invoices. Insiders said to Wired in 2023 that the company had a culture of paying suppliers late or withholding payment entirely. In 2021, the company was sued by the landlord of its San Francisco headquarters, 2000 Folsom Partners LLC, over $2.6 million in overdue rent payments. Eat Just was also sued in 2021 by the Archer-Daniels-Midland Company over failing to pay a $15,000 2015 bill for hemp seeds, and by VWR International for $189,000 in unpaid debt.
Food products
Eat Just develops and markets plant-based substitutes for foods that ordinarily use chicken eggs, such as scrambled eggs and mayonnaise. The company is best known for its plant-based JUST Egg made from mung beans. According to Eat Just, the company has made the equivalent of 100 million eggs worth of food products as of March 2021.
The company's egg substitutes are developed by finding plant proteins that serve a function eggs are normally used for, such as binding or emulsifying. For example, plant proteins are analyzed for molecular weight, amino acid sequences, and performance under heat or pressure. Much of the testing is focused on finding high-protein plants with specific types of proteins.
Eat Just's first product, Beyond Eggs, was intended to replace eggs in baked goods and was released in February 2013. It is made with peas and other ingredients. Later on, Eat Just developed plant-based substitutes for mayonnaise and cookie dough. Initially, the company focused on foods that use eggs as an ingredient, like muffins. In July 2017, it started selling a substitute for scrambled eggs called Just Egg that is made from mung beans. It released a frozen version in January 2020.
In late 2017, Eat Just announced it was developing a cultivated meat product to make chicken nuggets. The meat is grown in a bioreactor in a fluid of amino acids, sugar, and salt. The chicken nuggets are 70% cultivated meat, while the remainder is made from mung bean proteins and other ingredients. The company is also working on cultivated Japanese Wagyu beef. Cultured, also known as cultivated or cell-based meat, cannot be sold commercially until it is allowed by government regulators.
In December 2020, the Government of Singapore approved cultivated meat created by Eat Just, branded as GOOD Meat. A restaurant in Singapore called 1880 became the first place to sell Eat Just's cultured meat. Eat Just subsequently got additional approvals for different types of chicken products, such as shredded and breast chicken. In 2023, the company got approval from the United States Department of Agriculture and Food and Drug Administration to sell its cultured meat in the United States.
References
External links
American companies established in 2011
Food and drink companies established in 2011
2011 establishments in California
Meat substitutes
Condiment companies of the United States
Vegetarian companies and establishments of the United States
Cellular agriculture | Eat Just | [
"Engineering",
"Biology"
] | 2,042 | [
"Biological engineering",
"Cellular agriculture"
] |
31,890,027 | https://en.wikipedia.org/wiki/DM%20domain | In molecular biology the DM domain is a protein domain first discovered in the doublesex proteins of Drosophila melanogaster and is also seen in C. elegans and mammalian proteins. In D. melanogaster the doublesex gene controls somatic sexual differentiation by producing alternatively spliced mRNAs encoding related sex-specific polypeptides. These proteins are believed to function as transcription factors on downstream sex-determination genes, especially on neuroblast differentiation and yolk protein genes transcription.
The DM domain binds DNA as a dimer, allowing the recognition of pseudopalindromic sequences. The NMR analysis of the DSX DM domain revealed a novel zinc module containing 'intertwined' CCHC and HCCC zinc-binding sites. Recognition of the DNA requires the carboxy-terminal basic tail, which contacts the minor groove of the target sequence.
Proteins with this domain
Proteins with the DM domain are found in many model organisms. Many C. elegans Mab proteins contain this domain, the best-known one being mab-3. Human proteins containing this domain include DMRT1, DMRT2, DMRT3, DMRTA1, DMRTA2, DMRTB1, and DMRTC2; each of these have a mouse homolog.
DMRT1 homologs have an additional common domain C-terminal to the DM domain. This domain is only found in bony vertebrates, and neither its structure nor its function is known. Jpred predicts the human version of the section to be mostly coils; BLAST also suggests only weak similarities to other sequences.
DMRTA proteins have an additional motif in their C-termini. This motif, ubiquitous in eukaryotes, has an unknown function. It is similar in sequence to some ubiquitin-associated motifs.
References
Protein families
Protein domains | DM domain | [
"Biology"
] | 392 | [
"Protein families",
"Protein domains",
"Protein classification"
] |
31,890,842 | https://en.wikipedia.org/wiki/Bost%E2%80%93Connes%20system | In mathematics, a Bost–Connes system is a quantum statistical dynamical system related to an algebraic number field, whose partition function is related to the Dedekind zeta function of the number field. introduced Bost–Connes systems by constructing one for the rational numbers. extended the construction to imaginary quadratic fields.
Such systems have been studied for their connection with Hilbert's Twelfth Problem. In the case of a Bost–Connes system over Q, the absolute Galois group acts on the ground states of the system.
References
Number theory
Dynamical systems | Bost–Connes system | [
"Physics",
"Mathematics"
] | 116 | [
"Discrete mathematics",
"Mechanics",
"Number theory",
"Dynamical systems"
] |
31,895,749 | https://en.wikipedia.org/wiki/Bombyx%20hybrid | The Bombyx hybrid is a hybrid between a male Bombyx mandarina moth and a female Bombyx mori moth. They produce larvae called silkworms, like all species of Bombyx. The larvae look a lot like the other variations. They are brown in the first half and gray at the bottom half, but they get larger black spots than other variations. Generally, they look like a normal Bombyx moth, but a bit darker. Hybrids are not used for silk, but for research. Because Bombyx mori males lost their ability to fly, their females are much more likely to mate with a male Bombyx mandarina. The reverse is possible, but both species have to be kept in the same container. Since Bombyx hybrids are much more common than the other variation, more is known about them.
B. mori is a domesticated version of the wild B. mandarina. This domestication occurred over 5,000 years ago.
See also
Bombyx second hybrid
References
Bombycidae
Hybrid animals | Bombyx hybrid | [
"Biology"
] | 204 | [
"Hybrid animals",
"Animals",
"Hybrid organisms"
] |
31,896,081 | https://en.wikipedia.org/wiki/Pickering%20series | The Pickering series (also known as the Pickering–Fowler series) consists of three lines of singly ionised helium found, usually in absorption, in the spectra of hot stars like Wolf–Rayet stars. The name comes from Edward Charles Pickering and Alfred Fowler. The lines are produced by transitions from a higher energy level of an electron to a level with principal quantum number n = 4. The lines have wavelengths:
10124 Å (n = 5 to n = 4) (infrared)
6560 Å (n = 6 to n = 4)
5412 Å (n = 7 to n = 4)
4859 Å (n = 8 to n = 4)
4541 Å (n = 9 to n = 4)
4339 Å (n = 10 to n = 4)
3645.56 Å (n = ∞ to n = 4, theoretical limit, ultraviolet)
The transitions from the even-n states overlap with hydrogen lines and are therefore masked in typical absorption stellar spectra. However, they are seen in emission in the spectra of Wolf-Rayet stars, as these stars have little or no hydrogen.
In 1896, Pickering published observations of previously unknown lines in the spectra of the star Zeta Puppis. Pickering attributed the observation to a new form of hydrogen with half-integer transition levels. Fowler managed to produce similar lines from a hydrogen–helium mixture in 1912, and supported Pickering's conclusion as to their origin. Niels Bohr, however, included an analysis of the series in his 'trilogy' on atomic structure and concluded that Pickering and Fowler were wrong and that the spectral lines arise instead from singly ionised helium, He+. Fowler was initially skeptical but was ultimately convinced that Bohr was correct, and by 1915 "spectroscopists had transferred [the Pickering series] definitively [from hydrogen] to helium." Bohr's theoretical work on the Pickering series had demonstrated the need for "a re-examination of problems that seemed already to have been solved within classical theories" and provided important confirmation for his atomic theory.
Wavelength formula
The energy differences between levels in the Bohr model, and hence the wavelengths of emitted or absorbed photons, are given by the Rydberg formula:
1/λ = RZ²(1/n1² − 1/n2²),
where Z is the atomic number, n1 and n2 are the lower and upper principal quantum numbers, and R is the Rydberg constant for the ion, which depends on its reduced mass.
For helium, Z = 2, and the Pickering–Fowler series corresponds to n1 = 4. The reduced mass entering R is μ = me·M/(me + M), where M is the nuclear mass; it is usually approximated by the electron mass me (in fact, although this number changes for each isotope of helium, it is approximately constant). A more accurate description may be obtained with the Bohr–Sommerfeld model of the atom.
The theoretical limit for the wavelength in the Pickering–Fowler series is given by
λ = n1²/(RZ²) = 16/(4R) = 4/R,
which is approximately 364.556 nm, the same limit as in the Balmer series (the hydrogen spectral series with n1 = 2). Notice how the transitions in the Pickering–Fowler series for n = 6, 8, 10 (6560 Å, 4859 Å and 4339 Å, respectively) are nearly identical to the transitions in the Balmer series for n = 3, 4, 5 (6563 Å, 4861 Å and 4340 Å, respectively). The fact that the Pickering–Fowler series has entries in between those values led scientists to believe it was due to hydrogen with half-integer transitions ("half-hydrogen"). However, Niels Bohr showed, using his model, that it was due to singly ionised helium, a hydrogen-like atom. This also demonstrates the predictive power of the Bohr model.
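The numbers quoted above are easy to reproduce from the Rydberg formula. The short sketch below uses the infinite-nuclear-mass Rydberg constant, so the few-angstrom offsets from the quoted wavelengths reflect the neglected reduced-mass correction.

```python
R = 1.0973731568e7  # Rydberg constant (infinite nuclear mass), in 1/m

def wavelength_angstrom(Z: int, n_lower: int, n_upper: int) -> float:
    inv_lambda = R * Z**2 * (1 / n_lower**2 - 1 / n_upper**2)
    return 1e10 / inv_lambda

for n in range(5, 11):          # Pickering series: He+, transitions down to n = 4
    print(f"n = {n:2d} -> {wavelength_angstrom(2, 4, n):7.0f} A")
print(f"limit -> {1e10 * 4 / R:7.1f} A")   # series limit, ~3645 A
```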
References
External links
PROTO-HYDROGEN
Astronomical spectroscopy
Helium | Pickering series | [
"Physics",
"Chemistry"
] | 716 | [
"Astronomical spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)",
"Astrophysics"
] |
31,899,558 | https://en.wikipedia.org/wiki/Piancatelli%20rearrangement | In 1976, the Italian chemist, Giovanni Piancatelli and coworkers developed a new method to synthesize 4-hydroxycyclopentenone derivatives from 2-furylcarbinols through an acid-catalyzed rearrangement. This discovery occurred when Piancatelli was studying heterocyclic steroids and their reactive abilities in an acidic environment. As this rearrangement has continued to be studied, it has become a commonly used rearrangement in natural product synthesis because of the ability to create 4-hydroxy-5-substitutedcyclopent-2-enones. Piancatelli’s motive for looking into this new rearrangement stemmed from the ever present 3-oxycyclopentene molecule, specifically its 5-hydroxy derivative, found in biologically active natural products.
Reaction mechanism
The mechanism of this reaction is proposed to be a 4π electrocyclization, much like the Nazarov cyclization reaction. To obtain the 2-furylcarbinols, Piancatelli subjected furfural, an inedible biomass, to a Grignard reaction. The carbinols are then submitted to acid-catalyzed hydrolysis, which causes the molecular rearrangement and yields the final products.
While studying the specifics of the mechanism during the synthesis of the 4-hydroxycyclopentenone derivatives, Piancatelli proposed that the reaction is a thermal electrocyclic reaction of a conrotatory 4π electron system. This mechanism was suggested after study of 1H NMR spectra, as it became apparent that the final products were exclusively the trans isomer.
Piancatelli's proposed mechanism
In Piancatelli's proposed mechanism, the formation of the carbocation due to a protonation-dehydration sequence results in the two hydroxy groups being anti allowing for the trans-4hydroxy-5-substituted-cyclopent-2-enone from a 4π electrocylization ring closure.
Alternative mechanisms
D'Auria proposed a possible mechanism that included zwitterionic intermediates as a way to form the cis isomer alongside the abundant trans isomer of the 2-furylcarbinol. D'Auria performed the rearrangement in boiling water without an acid catalyst.
Another proposed mechanism is from Yin and co-workers that was studied while completing the rearrangement of 2-furylcarbinols with a hydroxyalkyl chain at the 5 position. Yin rationalized the mechanism by utilizing an aldol-type intramolecular addition.
Reaction conditions
The harshness of the reaction conditions needed for the rearrangement differed based upon the reactivity of the substrates. Piancatelli observed that more reactive substrates such as 5-methyl-2-furylcarbinols can undergo the rearrangement under much milder conditions, avoiding possible side products. Lewis acids were found to help drive the reaction to completion as long as they were present in an equimolar ratio, whereas alkyl groups on the hydroxy-bearing carbon leave the starting material more stable and cause longer reaction times and lower yields, with side products forming due to the increased reactivity of the corresponding carbocations.
Applications
An important use of the Piancatelli rearrangement that was studied by Piancatelli himself is the synthesis of prostaglandins and their derivatives. Piancatelli was able to synthesize key intermediates for the preparation of prostanoic acid starting from his 2-furylcarbinols bearing a second functional group. This study was able to demonstrate the versatility of the sequence of the rearrangement.
A few of the products synthesized due to utilizing the Piancatelli rearrangement include: 3E,5Z-misoprostol, enisoprost, 4-fluoro-enisoprost, 2-normisoprostol, prostaglandin E1 (PGE1), ent-phytoprostane E1, 16-epi-phytoprostane E1, bimatoprost, and travoprost.
References
Carbon-carbon bond forming reactions
Rearrangement reactions
Name reactions | Piancatelli rearrangement | [
"Chemistry"
] | 890 | [
"Name reactions",
"Carbon-carbon bond forming reactions",
"Rearrangement reactions",
"Organic reactions"
] |
31,899,667 | https://en.wikipedia.org/wiki/MED29 | Mediator of RNA polymerase II transcription subunit 29 (Med29) is a transcription suppressor that in humans is encoded by the MED29 gene. It represents subunit MED29 of the Mediator complex.
Med29, along with Med11 and Med28 in mammals, is part of the core head-region of the Mediator complex. Med29 is the apparent orthologue of the Drosophila melanogaster Intersex protein (IXL), which interacts directly with, and functions as a transcriptional coactivator for, the DNA-binding transcription factor Doublesex, so it is likely that mammalian Med29 serves as a target for one or more DNA-binding transcriptional activators.
See also
Mediator
References
Protein families | MED29 | [
"Biology"
] | 156 | [
"Protein families",
"Protein classification"
] |
31,900,653 | https://en.wikipedia.org/wiki/Demineralization%20%28physiology%29 | Demineralization is the opposite process of mineralization; it is a process of reduction in the mineral content in tissue or an organism. Examples include bone demineralization or tooth demineralization. Demineralization can lead to serious diseases such as osteoporosis, rickets, or tooth decay.
Usually, treatment involves administration of appropriate dietary supplements to help restore the remineralization of human tissues and their physiological state.
See also
Bone resorption
Bone remodeling
References
Physiology | Demineralization (physiology) | [
"Biology"
] | 106 | [
"Physiology"
] |
31,903,706 | https://en.wikipedia.org/wiki/European%20Solar%20Telescope | The European Solar Telescope (EST) is a pan-European project to build a next-generation 4-metre class solar telescope, to be located at the Roque de los Muchachos Observatory in the Canary Islands, Spain. It will use state-of-the-art instruments with high spatial and temporal resolution that can efficiently produce two-dimensional spectral information in order to study the Sun's magnetic coupling between its deep photosphere and upper chromosphere. This will require diagnostics of the thermal, dynamic and magnetic properties of the plasma over many scale heights, by using multiple wavelength imaging, spectroscopy and spectropolarimetry.
The EST design will strongly emphasise the use of a large number of visible and near-infrared instruments simultaneously, thereby improving photon efficiency and diagnostic capabilities relative to other existing or proposed ground-based or space-borne solar telescopes. In May 2011 EST was at the end of its conceptual design study.
The EST is being developed by the European Association for Solar Telescopes (EAST), which was set up to ensure the continuation of solar physics within the European community. Its main goal is to develop, construct and operate the EST. The European Solar Telescope is often regarded as the counterpart of the American Daniel K. Inouye Solar Telescope, which finished construction in November 2021.
Conceptual design study
The conceptual design study conducted by research institutions and industrial companies was finalized in May 2011. The study took three years, cost €7 million, and was co-financed by the European Commission under the EU's Seventh Framework Programme for Research (FP7). The study estimated a cost of €150 million to design and construct the EST and projected about €6.5 million annually for its operation.
Partners
The European Association for Solar Telescopes (EAST) is a consortium of 7 research institutions and 29 industrial partners from 15 European countries. Among its aims is the development of EST, to keep Europe at the frontier of solar physics; to that end, EAST intends to develop, construct and operate a next-generation large-aperture European Solar Telescope (EST) at the Roque de los Muchachos Observatory, Canary Islands, Spain.
See also
Daniel K. Inouye Solar Telescope
List of solar telescopes
References
Notes
External links
Official EST homepage
European Association for Solar Telescopes website
European Solar Telescope in the Canary Island
Solar telescopes
Proposed telescopes
European Union
Astrophysics
Plasma physics facilities
Astronomical observatories in the Canary Islands | European Solar Telescope | [
"Physics",
"Astronomy"
] | 494 | [
"Astrophysics",
"Astronomical sub-disciplines",
"Plasma physics facilities",
"Plasma physics"
] |
33,462,592 | https://en.wikipedia.org/wiki/Outline%20of%20geophysics | The following outline is provided as an overview of and topical guide to geophysics:
Geophysics – the physics of the Earth and its environment in space; also the study of the Earth using quantitative physical methods. The term geophysics sometimes refers only to the geological applications: Earth's shape; its gravitational and magnetic fields; its internal structure and composition; its dynamics and their surface expression in plate tectonics, the generation of magmas, volcanism and rock formation. However, modern geophysics organizations have a broader definition that includes the hydrological cycle including snow and ice; fluid dynamics of the oceans and the atmosphere; electricity and magnetism in the ionosphere and magnetosphere and solar-terrestrial relations; and analogous problems associated with the Moon and other planets.
Nature of geophysics
Geophysics can be described as all of the following:
An academic discipline – branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong.
A scientific field (a branch of science) – widely recognized category of specialized expertise within science, and typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published. There are several geophysics-related scientific journals.
A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods.
A physical science – one that studies non-living systems.
An earth science – one that studies the planet Earth and its surroundings.
A biological science – one that studies the effect of organisms on their physical environment.
An interdisciplinary field – one that overlaps atmospheric sciences, geology, glaciology, hydrology, oceanography and physics.
Branches of geophysics
Biogeophysics – study of how plants, microbial activity and other organisms alter geologic materials and affect geophysical signatures.
Exploration geophysics – the use of surface methods to detect concentrations of ore minerals and hydrocarbons.
Geophysical fluid dynamics – study of naturally occurring, large-scale flows on Earth and other planets.
Geodesy – measurement and representation of the Earth, including its gravitational field.
Geodynamics – study of modes of transport deformation within the Earth: rock deformation, mantle convection, heat flow, and lithosphere dynamics.
Geomagnetism – study of the Earth's magnetic field, including its origin, telluric currents driven by the magnetic field, the Van Allen belts, and the interaction between the magnetosphere and the solar wind.
Mathematical geophysics – development and applications of mathematical methods and techniques for the solution of geophysical problems.
Mineral physics – science of materials that compose the interior of planets, particularly the Earth.
Near-surface geophysics – the use of geophysical methods to investigate small-scale features in the shallow (tens of meters) subsurface.
Paleomagnetism – measurement of the orientation of the Earth's magnetic field over the geologic past.
Planetary Science – science of studying planets, celestial bodies, and planetary systems and their properties and processes.
Seismology – study of the structure and composition of the Earth through seismic waves, and of surface deformations during earthquakes and seismic hazards.
Tectonophysics – study of the physical processes that cause and result from plate tectonics.
History of geophysics
History of geophysics
History of geomagnetism
Timeline of the development of tectonophysics
Vine–Matthews–Morley hypothesis
General geophysics concepts
Gravity
Gravity of Earth
Bouguer anomaly
Isostatic gravity anomaly
Geoid
Geopotential
Gravity anomaly
Undulation of the geoid
Heat flow
Geothermal gradient
Internal heating
Electricity
Atmospheric electricity
Atmospheric electricity
Lightning
Sprite (lightning)
Electricity in Earth
Electrical resistivity tomography
Induced polarization
Seismoelectrical method
Spectral induced polarisation
Spontaneous potential
Telluric current
Electromagnetic waves
Alfvén wave
Dawn chorus (electromagnetic)
Hiss (electromagnetic)
Magnetotellurics
Seismo-electromagnetics
Transient electromagnetics
Whistler (radio)
Fluid dynamics
Geophysical fluid dynamics
Isostasy
Post-glacial rebound
Mantle convection
Geodynamo
Magnetism
Geomagnetism subfields
Environmental magnetism
Magnetostratigraphy
Paleomagnetism
Rock magnetism
Earth's magnetic field
Description
Geomagnetic pole
Magnetic declination
Magnetic inclination
North Magnetic Pole
South Magnetic Pole
Sources
Geodynamo
Magnetic anomaly
Magnetosphere
Short-term changes
Secular variation
Geomagnetic secular variation
Geomagnetic jerk
Long term behavior
Apparent polar wander
Geomagnetic excursion
Geomagnetic pole
Geomagnetic reversal
Geomagnetic secular variation
Polar wander
True polar wander
Magnetostratigraphy
Archaeomagnetic dating
Polarity chron
Magnetostratigraphy
Superchron (currently redirected to Geomagnetic reversal#Moyero Reversed Superchron)
Rock magnetism
Rock magnetism
Magnetic mineralogy
Natural remanent magnetization
Saturation isothermal remanence
Thermoremanent magnetization
Viscous remanent magnetization
Tectonic applications
Plate reconstruction
Magnetic survey
Aeromagnetic survey
Geophysical survey
Magnetic survey (archaeology)
Magnetometer
Radioactivity
Age of the Earth
Geochronology
Radiometric dating
Mineral physics
Mineral physics
Creep
Elasticity
Melting
Rheology
Thermal expansion
Viscosity
Vibration
Seismology
Earthquake – a motion that causes seismic waves.
Aftershock – follows larger earthquake.
Blind thrust – along a thrust fault that does not show on the Earth's surface.
Foreshock – precedes larger earthquake.
Harmonic tremor – long-duration, with distinct frequencies, associated with a volcanic eruption.
Interplate – at the boundary between two tectonic plates.
Intraplate – in the interior of a tectonic plate.
Megathrust – at subduction zones
Remotely triggered earthquakes – after main shock but outside the aftershock zone.
Slow – over a period of hours to months.
Submarine – under a body of water.
Supershear – rupture propagates faster than seismic shear wave velocity.
Tsunami – triggers a tsunami.
Seismic waves
P
S
Surface
Love
Rayleigh
Reflection seismology
Seismic refraction
Seismic tomography
Structure of the Earth
Closely allied sciences
Atmospheric sciences
Atmospheric sciences
Aeronomy – the study of the physical structure and chemistry of the atmosphere.
Meteorology – the study of weather processes and forecasting.
Climatology – the study of weather conditions averaged over a period of time.
Geology
Geology
Mineralogy – the study of chemistry, crystal structure, and physical (including optical) properties of minerals.
Petrophysics – The study of the origin, structure, and composition of rocks.
Volcanology – the study of volcanoes, volcanic features (hot springs, geysers, fumaroles), volcanic rock, and heat flow related to volcanoes.
Engineering
Geophysical engineering – the application of geophysics to the engineering design of facilities including roads, tunnels, and mines.
Water on the Earth
Glaciology – the study of ice and natural phenomena that involve ice, particularly glaciers.
Hydrology – the study of the movement, distribution, and quality of water on Earth and other planets.
Physical oceanography – the study of physical conditions and physical processes within the ocean, especially the motions and physical properties of ocean waters.
Society
Influential persons
List of geophysicists
Organizations
American Geophysical Union
Canadian Geophysical Union
Environmental and Engineering Geophysical Society
European Association of Geoscientists and Engineers
European Geosciences Union
International Union of Geodesy and Geophysics
Royal Astronomical Society
Society of Exploration Geophysicists
Seismological Society of America
Publications
Geophysics journals
Important publications in geophysics (physics)
Geophysics lists
See also
Outline of geology
Outline of physics
External links
geophysics
geophysics | Outline of geophysics | [
"Physics"
] | 1,582 | [
"Applied and interdisciplinary physics",
"Geophysics"
] |
33,465,080 | https://en.wikipedia.org/wiki/Fiber%20pushout%20test | The fiber pushout test is a mechanical test performed on composite materials where a fiber is mechanically pushed out of the material. This test is carried out with the purpose of measuring the matrix/fiber interface de-bonding energy and the effects of frictional sliding between the matrix and the fiber.
To perform this test, flat indentation tips, usually made of diamond or tungsten, are used. These tips are mechanically lowered onto the location of the fiber on the composite under the guidance of a CCD camera.
This test is not to be confused with fiber pull-out, which is a composite crack propagation phenomenon.
The mechanics behind the test are as follows:
1. Elastic loading
The flat tip indenter is lowered onto the fiber using a CCD camera to guide the indenter downwards
2. Progressive de-bonding
The indenter touches the fiber and begins applying load, bonds begin to break between the matrix and the fiber
3. Fiber push through
The bonds between the matrix and fiber are totally broken and the fiber begins to slide out of the matrix
4. Interfacial sliding
The indenter continues to push the fiber through the matrix; the only force resisting this movement is friction.
5. Indenter matrix collision
The fiber has been totally pushed out of the matrix and the indenter collides with the matrix surface. This gives the total displacement of the fiber.
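The five stages can be caricatured as a force-displacement curve. The sketch below is a toy illustration only: every number (stiffness, debond force, friction stress, geometry) is a made-up placeholder, and real push-out curves are analyzed with shear-lag models fit to measured data.

```python
# Toy force-displacement curve for the five stages above (placeholder numbers).
K_ELASTIC = 50.0   # N per um: stages 1-2, elastic loading up to debonding
F_DEBOND = 2.0     # N: peak force at which the interface fully debonds
TAU_FRIC = 0.05    # N per um of embedded length: frictional resistance
L_EMBED = 30.0     # um: initial embedded fiber length

def force(u: float) -> float:
    """Indenter force (N) versus fiber displacement u (um)."""
    if u < F_DEBOND / K_ELASTIC:         # stages 1-2: elastic loading + debonding
        return K_ELASTIC * u
    if u < L_EMBED:                      # stages 3-4: push-through, then sliding;
        return TAU_FRIC * (L_EMBED - u)  # friction drops as contact area shrinks
    return float("nan")                  # stage 5: indenter meets the matrix face

for u in (0.01, 0.03, 5.0, 15.0, 29.0):
    print(f"u = {u:5.2f} um -> F = {force(u):6.3f} N")
```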
References
M.A. Meyers and K.K. Chawla, Mechanical Behavior of Materials, 2nd ed., Cambridge University Press, 2009 (1st ed., Prentice-Hall, Upper Saddle River, NJ, 1999).
External links
http://www.seas.harvard.edu/hutchinson/papers/424.pdf
Materials testing | Fiber pushout test | [
"Materials_science",
"Engineering"
] | 337 | [
"Materials testing",
"Materials science"
] |
33,469,486 | https://en.wikipedia.org/wiki/BCS%3A%2050%20Years | BCS: 50 Years is a review volume on the topic of superconductivity edited by Leon Cooper, a 1972 Nobel Laureate in Physics, and Dmitri Feldman of Brown University, first published in 2010.
The book consists of 23 articles written by outstanding physicists, including many Nobel prize-winners, and presents the complete theory of superconductivity - a phenomenon in which the electrical resistance of some metallic materials suddenly vanishes at temperatures near absolute zero.
Background
In 1957, John Bardeen, Leon Cooper and John Robert Schrieffer finally pieced together the puzzle of superconductivity, explaining in detail its mechanism and the associated effects. The BCS theory, named after the three scientists, won Professor Cooper the Nobel Prize in Physics in 1972, which he shared with John Robert Schrieffer and his teacher, John Bardeen.
Contents
Section 1: Historical Perspectives
The first section of the book describes important discoveries which led to the development of BCS theory.
Chapter 1: "Remembrance of Superconductivity Past" by Leon N Cooper
Chapter 2: "The Road to BCS" by John Robert Schrieffer
Chapter 3: "Development of Concepts in Superconductivity" by John Bardeen
Chapter 4: "Failed Theories of Superconductivity" by Jörg Schmalian
Chapter 5: "Nuclear Magnetic Resonance and the BCS Theory" by Charles Pence Slichter
Chapter 6: "Superconductivity: From Electron Interaction to Nuclear Superfluidity" by David Pines
Chapter 7: "Developing BCS Ideas in the Former Soviet Union" by Lev P. Gor'kov
Chapter 8: "BCS: The Scientific "Love of my Life"" by Philip Warren Anderson
Section 2: Fluctuations, Tunneling and Disorder
The second section focuses on quantum phenomena which occur in superconductors.
Chapter 9: "SQUIDs: Then and Now" by John Clarke
Chapter 10: "Resistance in Superconductors" by Bertrand I. Halperin, Gil Refael and Eugene Demler
Chapter 11: "Cooper Pair Breaking" by Peter Fulde
Chapter 12: "Superconductor-Insulator Transitions" by Allen M. Goldman
Chapter 13: "Novel Phases of Vortices in Superconductors" by Pierre Le Doussal
Chapter 14: "Breaking Translational Invariance by Population Imbalance: The Fulde-Ferrell-Larkin-Ovchinnikov States" by Gertrud Zwicknagl and Jochen Wosnitza
Section 3: New Superconductors
Section three of the book is on various experimental and theoretical methods used to identify new superconducting materials.
Chapter 15: "Predicting and Explaining and Other Properties of BCS Superconductor" by Marvin L. Cohen
Chapter 16: "The Evolution of HTS: -Experiment Perspectives" by Paul Chu
Chapter 17: "The Evolution of High-Temperature Superconductivity: Theory Perspective" by Elihu Abrahams
Section 4: BCS Beyond Superconductivity
The final section of the book is on the application of BCS theory beyond the field of superconductivity.
Chapter 18: "The Superfluid Phases of Liquid 3He: BCS Theory" by Anthony James Leggett
Chapter 19: "Superfluidity in a Gas of Strongly Interacting Fermions" by Wolfgang Ketterle, Y. Shin, André Schirotzek and C. H. Schunk
Chapter 20: "BCS from Nuclei and Neutron Stars to Quark Matter and Cold Atoms" by Gordon Baym
Chapter 21: "Energy Gap, Mass Gap, and Spontaneous Symmetry Breaking" by Yoichiro Nambu
Chapter 22: "BCS as Foundation and Inspiration: The Transmutation of Symmetry" by Frank Wilczek
Chapter 23: "From BCS to the LHC" by Steven Weinberg
Reception
John Swain, writing for CERN Courier, describes the book as a wonderful review of a powerful unifying concept which covers an enormous range of phenomena. Malcolm Beasley for Physics Today adds that the book will provide any person curious about superconductivity with something to enjoy. In addition, Jeremy Matthews, the book editor of Physics Today, chose BCS: 50 Years as one of the five books to put on your 2011 holiday wish list.
Additional information
13 papers from the book have been published concurrently as a special issue of the International Journal of Modern Physics B.
See also
Solid-state physics
Charles Kittel
David Mermin
References
Superconductivity
Physics books
2010 non-fiction books | BCS: 50 Years | [
"Physics",
"Materials_science",
"Engineering"
] | 935 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
21,862,537 | https://en.wikipedia.org/wiki/Isomorphism%20%28crystallography%29 | In chemistry, isomorphism has meanings both at the level of crystallography and at a molecular level. In crystallography, crystals are isomorphous if they have identical symmetry and if the atomic positions can be described with a set of parameters (unit cell dimensions and fractional coordinates) whose numerical values differ only slightly.
Molecules are isomorphous if they have similar shapes. The coordination complexes tris(acetylacetonato)iron (Fe(acac)3) and tris(acetylacetonato)aluminium (Al(acac)3) are isomorphous. These compounds, both of D3 symmetry have very similar shapes, as determined by bond lengths and bond angles. Isomorphous compounds give rise to isomorphous crystals and form solid solutions. Historically, crystal shape was defined by measuring the angles between crystal faces with a goniometer. Whereas crystals of Fe(acac)3 are deep red and crystals of Al(acac)3 are colorless, a solid solution of the two, i.e. Fe1−xAlx(acac)3 will be deep or pale pink depending on the Fe/Al ratio, x.
Double sulfates, such as Tutton's salt, with the generic formula MI2MII(SO4)2·6H2O, where MI is an alkali metal and MII is a divalent ion of Mg, Mn, Fe, Co, Ni, Cu or Zn, form a series of isomorphous compounds which were important in the nineteenth century in establishing the correct atomic weights of the transition elements. Alums, such as KAl(SO4)2·12H2O, are another series of isomorphous compounds, though there are three series of alums with similar external structures, but slightly different internal structures. Many spinels are also isomorphous.
In order to form isomorphous crystals two substances must have the same chemical formulation (i.e., atoms in the same ratio), they must contain atoms which have corresponding chemical properties and the sizes of corresponding atoms should be similar. These requirements ensure that the forces within and between molecules and ions are approximately similar and result in crystals that have the same internal structure. Even though the space group is the same, the unit cell dimensions will be slightly different because of the different sizes of the atoms involved.
Mitscherlich's law
Mitscherlich's law of isomorphism, or the law of isomorphism, is an approximate law suggesting that crystals composed of the same number of similar elements tend to demonstrate isomorphism.
Mitscherlich's law is named for German chemist Eilhard Mitscherlich, who formulated the law and published it between 1819 and 1823.
According to Ferenc Szabadváry, one of the clues that helped Berzelius determine the atomic weights of the elements was "the discovery of Mitscherlich that compounds which contain the same number of atoms and have similar structures, exhibit similar crystal forms (isomorphism)."
See also
Asterism (gemology)
Polymorphism (materials science)
Goldschmidt tolerance factor
Solid solution
Vegard's law
References
Crystallography
Mineralogy concepts | Isomorphism (crystallography) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 678 | [
"Crystallography",
"Condensed matter physics",
"Materials science"
] |
21,863,927 | https://en.wikipedia.org/wiki/Effective%20mass%20%28spring%E2%80%93mass%20system%29 | In a real spring–mass system, the spring has a non-negligible mass . Since not all of the spring's length moves at the same velocity as the suspended mass (for example the point completely opposed to the mass , at the other end of the spring, is not moving at all), its kinetic energy is not equal to . As such, cannot be simply added to to determine the frequency of oscillation, and the effective mass of the spring, , is defined as the mass that needs to be added to to correctly predict the behavior of the system.
Uniform spring (homogeneous)
The effective mass of the spring in a spring-mass system when using a heavy spring (non-ideal) of uniform linear density is $\tfrac{1}{3}$ of the mass of the spring and is independent of the direction of the spring-mass system (i.e., horizontal, vertical, and oblique systems all have the same effective mass). This is because external acceleration does not affect the period of motion around the equilibrium point.
The effective mass of the spring can be determined by finding its kinetic energy. For a differential mass element $dm$ of the spring at a position $y$ (dummy variable) moving with a speed $u$, its kinetic energy is:

$dK = \tfrac{1}{2}\,u^2\,dm$

In order to find the spring's total kinetic energy, one must add up all the mass elements' kinetic energy, which requires the following integral:

$K = \int_{\text{spring}} \tfrac{1}{2}\,u^2\,dm$

If one assumes a homogeneous stretching, the spring's mass distribution is uniform, $dm = \tfrac{m}{L}\,dy$, where $L$ is the length of the spring at the time of measuring the speed. Hence,

$K = \tfrac{1}{2}\,\frac{m}{L}\int_0^L u^2\,dy$

The velocity of each mass element of the spring is directly proportional to its distance from the position where it is attached (if near to the block then more velocity, and if near to the ceiling then less velocity), i.e. $u = \tfrac{y}{L}\,v$, from which it follows:

$K = \tfrac{1}{2}\,\frac{m}{L}\int_0^L \frac{y^2}{L^2}\,v^2\,dy = \tfrac{1}{2}\left(\frac{m}{3}\right)v^2$

Comparing to the expected original kinetic energy formula $\tfrac{1}{2}\,m_{\mathrm{eff}}\,v^2$, the effective mass of the spring in this case is $m_{\mathrm{eff}} = \tfrac{m}{3}$. This result is known as Rayleigh's value, after Lord Rayleigh.
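This result is straightforward to verify numerically. A minimal sketch (the parameter values are hypothetical) discretizes a uniform spring into N elements with speeds u = (y/L)v and sums their kinetic energies:

```python
import numpy as np

# Discretize a uniform spring of mass m and length L into N elements.
# The element at position y moves with speed u = (y/L) * v, where v is
# the speed of the suspended mass; total KE should be (1/2)*(m/3)*v**2.
m, L, v, N = 0.30, 1.0, 2.0, 100_000
y = (np.arange(N) + 0.5) * L / N      # element midpoints
dm = m / N                            # uniform mass distribution
ke = np.sum(0.5 * dm * ((y / L) * v) ** 2)

m_eff = 2 * ke / v**2
print(m_eff, m / 3)   # -> 0.0999999... vs 0.1, Rayleigh's value m/3
```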
To find the gravitational potential energy of the spring, one follows a similar procedure. A mass element $dm$ at position $y$ is displaced by $\tfrac{y}{L}x$ when the free end is displaced by $x$, so (up to a constant) the spring's centre of mass moves by $\tfrac{x}{2}$ and

$U_{\text{spring}} = \int_{\text{spring}} g\,\frac{y}{L}\,x\,dm = \frac{m}{L^2}\,g\,x \int_0^L y\,dy = \tfrac{1}{2}\,m\,g\,x$

Using this result, the total energy of the system can be written in terms of the displacement $x$ from the spring's unstretched position (taking the upwards direction as positive, ignoring constant potential terms and setting the origin of potential energy at $x = 0$):

$E = \tfrac{1}{2}\left(M + \tfrac{m}{3}\right)\dot{x}^2 + \tfrac{1}{2}\,k\,x^2 + \left(M + \tfrac{m}{2}\right)g\,x$

Note that $g$ here is the acceleration of gravity along the spring. By differentiation of the equation with respect to time, the equation of motion is:

$\left(M + \tfrac{m}{3}\right)\ddot{x} = -k\,x - \left(M + \tfrac{m}{2}\right)g$

The equilibrium point can be found by letting the acceleration be zero:

$x_{\mathrm{eq}} = -\frac{\left(M + \tfrac{m}{2}\right)g}{k}$

Defining $\bar{x} = x - x_{\mathrm{eq}}$, the equation of motion becomes:

$\left(M + \tfrac{m}{3}\right)\ddot{\bar{x}} = -k\,\bar{x}$

This is the equation for a simple harmonic oscillator with angular frequency:

$\omega = \sqrt{\frac{k}{M + \tfrac{m}{3}}}$

Thus, it has a smaller angular frequency than the ideal massless spring, for which $\omega = \sqrt{k/M}$. Also, its period is given by:

$T = 2\pi\sqrt{\frac{M + \tfrac{m}{3}}{k}}$

which is bigger than that of the ideal spring. Both formulae reduce to the ideal case in the limit $m/M \to 0$.
So the effective mass of the spring added to the mass of the load gives us the "effective total mass" of the system that must be used in the standard formula to determine the period of oscillation.
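As a minimal illustration of the standard formula with the effective total mass (hypothetical values for k, M and the spring mass):

```python
import math

def period(k, M, m_spring):
    """Period of a heavy-spring system using the effective total mass M + m/3."""
    return 2 * math.pi * math.sqrt((M + m_spring / 3) / k)

k, M, m_spring = 50.0, 1.0, 0.30   # hypothetical values, SI units
print(period(k, M, 0.0))           # ideal massless spring: ~0.889 s
print(period(k, M, m_spring))      # heavy spring: slightly longer, ~0.932 s
```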
Finally, the solution to the initial value problem $\bar{x}(0) = \bar{x}_0$, $\dot{\bar{x}}(0) = v_0$ is given by:

$\bar{x}(t) = \bar{x}_0\,\cos(\omega t) + \frac{v_0}{\omega}\,\sin(\omega t)$

which is a simple harmonic motion.
General case
As seen above, the effective mass of a spring does not depend upon "external" factors such as the acceleration of gravity along it. In fact, for a non-uniform heavy spring, the effective mass solely depends on its linear density $\lambda(y)$ along its length:

$K = \int_{\text{spring}} \tfrac{1}{2}\,u^2\,dm = \int_0^L \tfrac{1}{2}\left(\frac{y}{L}\,v\right)^2 \lambda(y)\,dy = \tfrac{1}{2}\left[\int_0^L \frac{y^2}{L^2}\,\lambda(y)\,dy\right] v^2$

So the effective mass of a spring is:

$m_{\mathrm{eff}} = \int_0^L \frac{y^2}{L^2}\,\lambda(y)\,dy$

This result also shows that $m_{\mathrm{eff}} \le m$, with $m_{\mathrm{eff}} = m$ occurring in the case of an unphysical spring whose mass is located purely at the end farthest from the support.
Three special cases can be considered (a numerical check follows the list):
$\lambda(y) = 0$ is the idealised case where the spring has no mass, and $m_{\mathrm{eff}} = 0$.
$\lambda(y) = \tfrac{m}{L}$ is the homogeneous case (uniform spring) where Rayleigh's value appears in the equation, i.e., $m_{\mathrm{eff}} = \tfrac{m}{3}$.
$\lambda(y) = m\,\delta(y - L)$, where $\delta$ is the Dirac delta function, is the extreme case when all the mass is located at $y = L$, resulting in $m_{\mathrm{eff}} = m$.
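A numerical check of the general formula for the first two cases (the delta-function case is evaluated analytically); the values below are hypothetical:

```python
import numpy as np

def m_eff(density, L=1.0, N=200_000):
    """Numerically evaluate m_eff = integral over [0, L] of (y/L)^2 * lambda(y) dy."""
    y = (np.arange(N) + 0.5) * L / N          # element midpoints
    return np.sum((y / L) ** 2 * density(y)) * (L / N)

m, L = 0.30, 1.0                                # hypothetical spring mass and length
print(m_eff(lambda y: np.zeros_like(y)))        # massless spring -> 0
print(m_eff(lambda y: np.full_like(y, m / L)))  # uniform spring  -> m/3 = 0.1
# The delta-density case lambda(y) = m*delta(y - L) gives m_eff = m exactly;
# a point mass is handled analytically rather than by quadrature.
```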
To find the corresponding Lagrangian, one must find beforehand the gravitational potential energy of the spring, which involves the first moment of the density:

$m_{\mathrm{grav}} = \int_0^L \frac{y}{L}\,\lambda(y)\,dy, \qquad U_{\text{spring}} = m_{\mathrm{grav}}\,g\,x$

Due to the monotonicity of the integral (since $\tfrac{y^2}{L^2} \le \tfrac{y}{L} \le 1$ on $[0, L]$), it follows that:

$m_{\mathrm{eff}} \le m_{\mathrm{grav}} \le m$

with the Lagrangian being:

$\mathcal{L} = \tfrac{1}{2}\left(M + m_{\mathrm{eff}}\right)\dot{x}^2 - \tfrac{1}{2}\,k\,x^2 - \left(M + m_{\mathrm{grav}}\right)g\,x$
Real spring
The above calculations assume that the stiffness coefficient of the spring does not depend on its length. However, this is not the case for real springs. For small values of the mass ratio $m/M$, the displacement is not so large as to cause elastic deformation, and the effective mass takes Rayleigh's value $\tfrac{m}{3}$. Jun-ichi Ueda and Yoshiro Sadamoto have found that as $m/M$ increases beyond approximately 7, the effective mass of a spring in a vertical spring-mass system becomes smaller than Rayleigh's value and eventually reaches negative values at about $m/M \approx 38$. This unexpected behavior of the effective mass can be explained in terms of the elastic after-effect (which is the spring's not returning to its original length after the load is removed).
Comparison with pendulum
Consider the pendulum differential equation:

$\ddot{\theta} + \omega_0^2\,\sin\theta = 0$

where $\omega_0 = \sqrt{g/\ell_{\mathrm{eff}}}$ is the natural frequency of oscillations (and the angular frequency for small oscillations). The parameter $\ell_{\mathrm{eff}}$ stands for $\ell$ in an ideal pendulum, and $I/(m_{\mathrm{T}}\,d)$ in a compound pendulum, where $\ell$ is the length of the pendulum, $m_{\mathrm{T}}$ is the total mass of the system, $d$ is the distance from the pivot point (the point the pendulum is suspended from) to the pendulum's centre-of-mass, and $I$ is the moment of inertia of the system with respect to an axis that goes through the pivot.

Consider a system made of a homogeneous rod swinging from one end, and having an attached bob at the other end. Let $\ell$ be the length of the rod, $m$ the mass of the rod, and $M$ the mass of the bob; thus the linear density is given by $\lambda(y) = \tfrac{m}{\ell} + M\,\delta(y - \ell)$, with $\delta$ Dirac's delta function. The total mass of the system is $m_{\mathrm{T}} = M + m$. To find $d$ one must solve the definition of centre-of-mass, $d = \tfrac{1}{m_{\mathrm{T}}}\int_0^\ell y\,\lambda(y)\,dy$ (this would be an integral equation in the general case, but it simplifies to this in the homogeneous case), whose solution is given by $d = \ell\,\tfrac{M + m/2}{M + m}$. The moment of inertia of the system is the sum of the two moments of inertia, $I = M\ell^2 + \tfrac{1}{3}m\ell^2$ (once again, in the general case one would evaluate $I = \int_0^\ell y^2\,\lambda(y)\,dy$). Thus the expression can be simplified:

$\omega_0^2 = \frac{m_{\mathrm{T}}\,g\,d}{I} = \frac{g}{\ell}\cdot\frac{M + \tfrac{m}{2}}{M + \tfrac{m}{3}}$

Notice how the final expression is not a function of both the mass of the bob, $M$, and the mass of the rod, $m$, separately, but only of their ratio, $m/M$. Also notice that initially it has the same structure as the spring-mass system: the product of the ideal case and a correction (with Rayleigh's value $\tfrac{m}{3}$). Notice that for $m/M \ll 1$, the last correction term can be approximated by:

$\frac{M + \tfrac{m}{2}}{M + \tfrac{m}{3}} \approx 1 + \frac{m}{6M}$

Let's compare both results:
For the spring-mass system: $\omega^2 = \dfrac{k}{M + \tfrac{m}{3}} \approx \dfrac{k}{M}\left(1 - \dfrac{m}{3M}\right)$
For the pendulum: $\omega_0^2 = \dfrac{g}{\ell}\cdot\dfrac{M + \tfrac{m}{2}}{M + \tfrac{m}{3}} \approx \dfrac{g}{\ell}\left(1 + \dfrac{m}{6M}\right)$
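A short numerical sketch (hypothetical masses) evaluating the small-oscillation frequency through the total mass, centre of mass and moment of inertia of the rod-plus-bob pendulum:

```python
import math

def pendulum_omega(M, m, ell, g=9.81):
    """Small-oscillation angular frequency of a uniform rod (mass m,
    length ell) pivoted at one end, with a bob of mass M at the other.
    Uses omega^2 = m_T * g * d / I."""
    m_T = M + m
    d = ell * (M + m / 2) / m_T          # centre-of-mass distance from pivot
    I = M * ell**2 + m * ell**2 / 3      # bob + rod moments of inertia
    return math.sqrt(m_T * g * d / I)

ell = 1.0
print(pendulum_omega(1.0, 0.0, ell))     # ideal pendulum: sqrt(g/ell) ~ 3.13 rad/s
print(pendulum_omega(1.0, 0.3, ell))     # heavy rod raises omega slightly, ~3.20 rad/s
```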
See also
Simple harmonic motion (SHM) examples.
Reduced mass
References
External links
http://tw.knowledge.yahoo.com/question/question?qid=1405121418180
http://tw.knowledge.yahoo.com/question/question?qid=1509031308350
https://web.archive.org/web/20110929231207/http://hk.knowledge.yahoo.com/question/article?qid=6908120700201
https://web.archive.org/web/20080201235717/http://www.goiit.com/posts/list/mechanics-effective-mass-of-spring-40942.htm
http://www.juen.ac.jp/scien/sadamoto_base/spring.html
"The Effective Mass of an Oscillating Spring" Am. J. Phys., 38, 98 (1970)
"Effective Mass of an Oscillating Spring" The Physics Teacher, 45, 100 (2007)
Mechanical vibrations
Mass | Effective mass (spring–mass system) | [
"Physics",
"Mathematics",
"Engineering"
] | 1,593 | [
"Structural engineering",
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Mass",
"Size",
"Mechanics",
"Mechanical vibrations",
"Wikipedia categories named after physical quantities",
"Matter"
] |
21,867,246 | https://en.wikipedia.org/wiki/Mahler%20volume | In convex geometry, the Mahler volume of a centrally symmetric convex body is a dimensionless quantity that is associated with the body and is invariant under linear transformations. It is named after German-English mathematician Kurt Mahler. It is known that the shapes with the largest possible Mahler volume are the balls and solid ellipsoids; this is now known as the Blaschke–Santaló inequality. The still-unsolved Mahler conjecture states that the minimum possible Mahler volume is attained by a hypercube.
Definition
A convex body in Euclidean space is defined as a compact convex set with non-empty interior. If $B$ is a centrally symmetric convex body in $n$-dimensional Euclidean space, the polar body $B^\circ$ is another centrally symmetric body in the same space, defined as the set

$B^\circ = \left\{ y : \langle x, y \rangle \le 1 \text{ for all } x \in B \right\}.$

The Mahler volume of $B$ is the product of the volumes of $B$ and $B^\circ$.
If $T$ is an invertible linear transformation, then $(TB)^\circ = (T^{-1})^{\mathsf{T}} B^\circ$. Applying $T$ to $B$ multiplies its volume by $|\det T|$ and multiplies the volume of $B^\circ$ by $|\det T^{-1}|$. As these determinants are multiplicative inverses, the overall Mahler volume of $B$ is preserved by linear transformations.
Examples
The polar body of an $n$-dimensional unit sphere is itself another unit sphere. Thus, its Mahler volume is just the square of its volume,

$\left(\frac{\pi^{n/2}}{\Gamma\!\left(\tfrac{n}{2} + 1\right)}\right)^{2}$

where $\Gamma$ is the Gamma function.
By affine invariance, any ellipsoid has the same Mahler volume.
The polar body of a polyhedron or polytope is its dual polyhedron or dual polytope. In particular, the polar body of a cube or hypercube is an octahedron or cross polytope. Its Mahler volume can be calculated as

$\frac{4^n}{n!}$

The Mahler volume of the sphere is larger than the Mahler volume of the hypercube by a factor of approximately $\left(\tfrac{\pi}{2}\right)^{n}$.
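The two formulas above are easy to compare numerically; the sketch below prints the ratio of the two Mahler volumes, which grows like $(\pi/2)^n$ up to a factor polynomial in $n$:

```python
from math import gamma, pi, factorial

def mahler_ball(n):
    """Mahler volume of the n-dimensional unit ball: vol(B)^2,
    with vol(B) = pi^(n/2) / Gamma(n/2 + 1)."""
    v = pi ** (n / 2) / gamma(n / 2 + 1)
    return v * v

def mahler_cube(n):
    """Mahler volume of the cube [-1, 1]^n: 2^n * (2^n / n!) = 4^n / n!."""
    return 4.0 ** n / factorial(n)

for n in (2, 3, 10):
    ratio = mahler_ball(n) / mahler_cube(n)
    # The ratio tracks (pi/2)^n up to a polynomial factor in n.
    print(n, ratio, (pi / 2) ** n)
```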
Extreme shapes
The Blaschke–Santaló inequality states that the shapes with maximum Mahler volume are the spheres and ellipsoids. The three-dimensional case of this result was proven by Blaschke; the full result was proven much later by Santaló using a technique known as Steiner symmetrization, by which any centrally symmetric convex body can be replaced with a more sphere-like body without decreasing its Mahler volume.
The shapes with the minimum known Mahler volume are hypercubes, cross polytopes, and more generally the Hanner polytopes which include these two types of shapes, as well as their affine transformations. The Mahler conjecture states that the Mahler volume of these shapes is the smallest of any n-dimensional symmetric convex body; it remains unsolved when $n \ge 4$. As Terry Tao observes, a main source of difficulty is that, unlike the upper bound, whose extremiser is essentially unique up to affine transformation, the conjectured lower bound has many distinct extremisers, which defeats arguments that gradually deform an arbitrary body towards a single optimum.
Bourgain and Milman proved that the Mahler volume is bounded below by $c^n$ times the volume of a sphere for some absolute constant $c > 0$, matching the scaling behavior of the hypercube volume but with a smaller constant. Kuperberg proved that, more concretely, one can take $c = \tfrac{\pi}{4}$ in this bound. A result of this type is known as a reverse Santaló inequality.
Partial results
The 2-dimensional case of the Mahler conjecture has been solved by Mahler and the 3-dimensional case by Iriyeh and Shibata.
It is known that each of the Hanner polytopes is a strict local minimizer for the Mahler volume in the class of origin-symmetric convex bodies endowed with the Banach–Mazur distance. This was first proven by Nazarov, Petrov, Ryabogin, and Zvavitch for the unit cube, and later generalized to all Hanner polytopes by Jaegil Kim.
The Mahler conjecture holds for zonotopes.
The Mahler conjecture holds in the class of unconditional bodies, that is, convex bodies invariant under reflection on each coordinate hyperplane {xi = 0}. This was first proven by Saint-Raymond in 1980. Later, a much shorter proof was found by Meyer. This was further generalized to convex bodies with symmetry groups that are more general reflection groups. The minimizers are then not necessarily Hanner polytopes, but were found to be regular polytopes corresponding to the reflection groups.
Reisner et al. (2010) showed that a minimizer of the Mahler volume must have Gaussian curvature equal to zero almost everywhere on its boundary, suggesting strongly that a minimal body is a polytope.
For asymmetric bodies
The Mahler volume can be defined in the same way, as the product of the volume and the polar volume, for convex bodies whose interior contains the origin regardless of symmetry. Mahler conjectured that, for this generalization, the minimum volume is obtained by a simplex, with its centroid at the origin. As with the symmetric Mahler conjecture, reverse Santaló inequalities are known showing that the minimum volume is at least within an exponential factor of the simplex.
Notes
References
Revised and reprinted in
Convex geometry
Geometric inequalities
Volume | Mahler volume | [
"Physics",
"Mathematics"
] | 980 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Size",
"Extensive quantities",
"Geometric inequalities",
"Inequalities (mathematics)",
"Theorems in geometry",
"Volume",
"Wikipedia categories named after physical quantities"
] |
21,868,302 | https://en.wikipedia.org/wiki/Tert-Butyldiphenylsilyl | tert-Butyldiphenylsilyl, also known as TBDPS, is a protecting group for alcohols. Its formula is C16H19Si-.
Development
The tert-butyldiphenylsilyl group was first suggested as a protecting group by Hanessian and Lavallée in 1975. It was designed to supersede the use of Corey's tert-butyldimethylsilyl as a protecting group for alcohols:
The novel features that they highlight are the increased resistance to acidic hydrolysis and increased selectivity towards protection of primary hydroxyl groups. The group is unaffected by treatment with 80% acetic acid, which catalyses the deprotection of O-tetrahydropyranyl, O-trityl and O-tert-butyldimethylsilyl ethers. It is also unaffected by 50% trifluoroacetic acid (TFA), and survives the harsh acidic conditions used to install and remove isopropylidene or benzylidene acetals.
Applications in chemical synthesis
The TBDPS group is prized for its increased stability towards acidic conditions and nucleophilic species over the other silyl ether protecting groups. This can be thought of as arising from the extra steric bulk of the groups surrounding the silicon atom. The protecting group is easily introduced by using the latent nucleophilicity of the hydroxyl group and an electrophilic source of TBDPS. This might involve using the triflate or the less reactive chloride of TBDPS along with a mild base such as 2,6-lutidine or pyridine and potentially a catalyst such as DMAP or imidazole.
The ease of installation of the protecting group follows the order: 1° > 2° > 3°, allowing the least hindered hydroxyl group to be protected in the presence of more hindered hydroxyls.
Protection of equatorial hydroxyl groups can be achieved over axial hydroxyl groups by the use of a cationic silyl species generated by tert-butyldiphenylsilyl chloride and a halogen abstractor, silver nitrate.
The increased stability towards acidic hydrolysis and nucleophilic species allows for the TBDPS groups in a substrate to be retained while other silyl ethers are removed.
The TMS group may easily be removed in the presence of a TBDPS group by reaction with TsOH. The group is even more resistant to acid hydrolysis than the bulky TIPS. However, in the presence of a fluoride source such as TBAF or TAS-F, TIPS groups are more stable than TBDPS groups. The TBDPS group is of similar stability to the TBDMS group and is more stable in the presence of fluoride than all other simple alkyl silyl ethers. It is possible to remove the TBDPS group selectively, leaving a TBDMS group intact, using NaH in HMPA at 0 °C for five minutes.
Stability
The TBDPS group is stable under a wide variety of conditions:
References
Protecting groups
Tert-butyl compounds
Organosilicon compounds | Tert-Butyldiphenylsilyl | [
"Chemistry"
] | 669 | [
"Protecting groups",
"Functional groups",
"Reagents for organic chemistry"
] |
41,714,108 | https://en.wikipedia.org/wiki/CDP-choline%20pathway | The CDP-choline pathway, first identified by Eugene P. Kennedy in 1956, is the predominant mechanism by which mammalian cells synthesize phosphatidylcholine (PC) for incorporation into membranes or lipid-derived signalling molecules. The CDP-choline pathway represents one half of what is known as the Kennedy pathway. The other half is the CDP-ethanolamine pathway which is responsible for the biosynthesis of the phospholipid phosphatidylethanolamine (PE).
The CDP-choline pathway begins with the uptake of exogenous choline into the cell. The first enzymatic reaction is catalyzed by choline kinase (CK) and involves the phosphorylation of choline to form phosphocholine. Phosphocholine is then activated by the addition of CTP catalyzed by the rate-limiting enzyme, CTP:phosphocholine cytidylyltransferase to form CDP-choline. The final step of the pathway involves the addition of the choline headgroup onto a diacylglycerol (DAG) backbone to form PC, catalyzed by choline/ethanolamine phosphotransferase (CEPT).
Phosphatidylcholine can be acted upon by phospholipases to form different metabolites.
Choline transport
Mammalian cells are unable to synthesize sufficient quantities of choline de novo to meet physiologic requirements, and therefore must rely on exogenous sources from the diet. The uptake of choline is accomplished predominantly by the high-affinity, sodium-dependent choline transporter (CHT) and requires ATP as an energy source. On the other hand, choline may enter the cell through the activation of low-affinity, sodium-independent organic cation transport proteins (OCTs) and/or carnitine/organic cation transporters (OCTNs), which do not require ATP. Lastly, choline may enter the cell through intermediate-affinity transporters, which include the choline transporter-like protein 1 (CTL1).
The fate of internalized choline depends on the cell type. In pre-synaptic neurons the majority of choline will be acetylated by the enzyme choline acetyltransferase to form the neurotransmitter acetylcholine. Most other cells will phosphorylate choline by the enzyme choline kinase, the first committed step of CDP-choline pathway.
Choline kinase (CK)
Choline kinase (CK) is a cytosolic protein that catalyzes the following reaction:
choline + ATP ⇌ phosphocholine + ADP
In addition to the phosphorylation of choline, CK has also been shown to phosphorylate ethanolamine, a precursor to another important glycerophospholipid, phosphatidylethanolamine. CK functions as a dimer consisting of either α1, α2 or β subunits. Each CK isoform is ubiquitously expressed throughout tissues, however CKα is enriched in the testis and liver, whereas CKβ is enriched in the liver and the heart. Homozygous deletion of CKα is embryonic lethal after about 5 days, whereas deletion of CKβ is not.
Under normal circumstances, choline kinase is not the rate-limiting step of the CDP-choline pathway. However in rapidly dividing cells, there is increased CK expression and activity as a result of increased demand for PC synthesis.
CTP:phosphocholine cytidylyltransferase (CCT)
CTP:phosphocholine cytidylyltransferase (CCT), the rate-limiting enzyme of the pathway, is a nuclear/cytosolic enzyme and catalyzes the following reaction:
phosphocholine + CTP ⇌ CDP-choline + PPi
CCT functions as a dimer of either α or β subunits, encoded by Pcyt1a and Pcyt1b, respectively. CCTα has four domains: a nuclear localization signal (NLS), an α-helical membrane binding domain, a catalytic domain, and a phosphorylation domain. The major difference between the α and β isoforms is that CCTβ lacks the NLS, resulting in a predominantly cytosolic pool of CCTβ. On the other hand, the presence of an NLS results in a predominantly nuclear pool of CCTα. CCTα shuttles between the nucleus (where it is considered inactive) and the cytoplasm, where it associates with membranes and is activated in response to lipid activators or during progression through the cell cycle when PC demand is high.
CCTα is an amphitropic enzyme, meaning that it exists as either an inactive soluble form, or an active, membrane bound form. Whether or not CCTα is membrane bound is largely dictated by the relative composition of membranes. If membranes are low in PC, and relatively enriched in anionic lipids, diacylglycerol, or phosphatidylethanolamine, CCT inserts into the membrane bilayer via its membrane binding domain. This binding event relieves an autoinhibitory constraint on the catalytic domain, resulting in a decrease in the Km for phosphocholine.
Choline/ethanolamine phosphotransferase (CEPT)
Choline/ethanolamine phosphotransferase (CEPT), or Choline Phosphotransferase (CPT) the last enzymatic reaction in the CDP-choline pathway, catalyzes the following reaction:
CDP-choline + 1,2-diacylglycerol ⇌ phosphatidylcholine + CMP
The last step in the CDP-choline pathway is catalyzed by either CPT or CEPT, which are localized to the Golgi and the endoplasmic reticulum, respectively. CPT and CEPT are encoded by separate genes that share 60% sequence similarity. Both isoforms contain 7 transmembrane segments, and an α-helix near the catalytic domain that is required for CDP-alcohol binding.
CPT recognizes only CDP-choline, whereas CEPT recognizes both CDP-choline and CDP-ethanolamine. The reason for this dual specificity is not exclusively known. CEPT is largely considered to be the enzyme responsible for the bulk of PC synthesis, with CPT having an exclusive role in the Golgi, where it may control the levels of the precursor DAG, an important second messenger.
Neither CPT nor CEPT is considered to be rate-limiting, but they can become so if DAG is restricted.
References
Biosynthesis | CDP-choline pathway | [
"Chemistry"
] | 1,447 | [
"Biosynthesis",
"Metabolism",
"Chemical synthesis"
] |
30,903,129 | https://en.wikipedia.org/wiki/Federation%20of%20Analytical%20Chemistry%20and%20Spectroscopy%20Societies | The Federation of Analytical Chemistry and Spectroscopy Societies or FACSS is a scientific society incorporated on June 28, 1972, with the goal of promoting research and education in analytical chemistry. The organization combined the many smaller meetings of individual societies into an annual meeting that includes all of analytical chemistry. The meetings are intended to provide a forum for scientists to address the development of analytical chemistry, chromatography, and spectroscopy.
The society's main activity is its annual conference held every fall. These conferences offer plenary sessions, workshops, job fairs, oral presentations, poster presentations, and conference networking events. The conference was held internationally for the first time in 1999 when it was hosted in Vancouver, BC. The annual conference is often discussed in the journal Applied Spectroscopy, Spectroscopy Magazine, and American Pharmaceutical Reviews.
At the 2011 FACSS Conference in Reno, NV, the FACSS organization changed the name of the annual conference to SciX. The first SciX Conference presented by FACSS was held in Kansas City, MO in 2012. The name change was discussed in Spectroscopy in fall 2011; more information about the new name can be found at scixconference.org.
Awards
FACSS presents several awards to both students and professionals. These awards honor scientists who have made significant contributions to the field of Analytical Chemistry.
FACSS Student Award and Tomas A. Hirschfeld Award
SAS Student Poster Awards and FACSS Student Poster Awards
FACSS Distinguished Service Award
FACSS Innovation Award
Charles Mann Award for Applied Raman Spectroscopy
Anachem Award
Lester W. Strock Award
Applied Spectroscopy William F. Meggers Award
Ellis R. Lippincott Award
William G. Fateley Student Award
Coblentz Society Craver Award
ACS Div of Analytical Chem Arthur F. Findeis Award for Achievements by a Young Analytical Scientist
The FACSS Innovation Award was started in 2011 at the Reno meeting.
Sponsoring societies
American Chemical Society Analytical Division
AES Electrophoresis Society
American Society for Mass Spectrometry
Anachem
Coblentz Society
Council for Near-Infrared Spectroscopy
Infrared Raman and Discussion Group
International Society of Automation Analysis Division
The North American Society for Laser-Induced Breakdown Spectroscopy
Royal Society of Chemistry Analytical Division
Society for Applied Spectroscopy
The Spectroscopical Society of Japan
Conferences
2015 (forthcoming) Providence, RI, September 27-October 2, 2015
2014 (forthcoming) Reno, NV, September 28-October 3, 2014
2013 (forthcoming) Milwaukee, WI, September 29-October 3, 2013, which will be the 40th annual meeting of the FACSS organization
2012 - Kansas City, MO
2011 - Reno, NV
2010 - Raleigh, NC
2009 - Louisville, KY
2008 - Reno, NV
2007 - Memphis, TN
2006 - Lake Buena Vista, FL
2005 - Quebec City, Canada
2004 - Portland, OR
2003 - Ft. Lauderdale, FL
2002 - Providence, RI
2001 - Detroit, Michigan
2000 - Nashville, Tennessee
1999 - Vancouver, BC
1998 - Austin, TX
Accompanying each conference, attendees receive a final program book of abstracts which includes the schedule of talks, profiles of award winners, a list of exhibitors, and much more. Copies of these final programs for all forty of the conferences held by FACSS are available for download as .pdf files from the FACSS website, under Past Events.
References
External links
American Chemical Society- Analytical Division
AES Electrophoresis Society
American Society for Mass Spectrometry
Anachem
Coblentz Society
Council for Near-Infrared Spectroscopy
Infrared Raman and Discussion Group
International Society of Automation- Analysis Division
The North American Society for Laser-Induced Breakdown Spectroscopy
Royal Society of Chemistry - Analytical Division
Society for Applied Spectroscopy
The Spectroscopical Society of Japan
Scientific societies based in the United States
Annual events in the United States
Spectroscopy
Analytical chemistry | Federation of Analytical Chemistry and Spectroscopy Societies | [
"Physics",
"Chemistry"
] | 805 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"nan",
"Spectroscopy"
] |
30,914,107 | https://en.wikipedia.org/wiki/Schr%C3%B6dinger%20group | The Schrödinger group is the symmetry group of the free particle Schrödinger equation. Mathematically, the group SL(2,R) acts on the Heisenberg group by outer automorphisms, and the Schrödinger group is the corresponding semidirect product.
Schrödinger algebra
The Schrödinger algebra is the Lie algebra of the Schrödinger group. It is not semi-simple. In one space dimension, it can be obtained as a semi-direct sum of the Lie algebra sl(2,R) and the Heisenberg algebra; similar constructions apply to higher spatial dimensions.
It contains a Galilei algebra with central extension,

$[J_i, J_j] = i\varepsilon_{ijk}J_k, \quad [J_i, P_j] = i\varepsilon_{ijk}P_k, \quad [J_i, K_j] = i\varepsilon_{ijk}K_k, \quad [H, K_i] = -iP_i, \quad [P_i, K_j] = -i\delta_{ij}M,$

where $J_i$, $P_i$, $K_i$ and $H$ are generators of rotations (angular momentum operator), spatial translations (momentum operator), Galilean boosts and time translation (Hamiltonian) respectively. (Notes: $i$ is the imaginary unit, $\varepsilon_{ijk}$ is the totally antisymmetric Levi-Civita symbol. The specific form of the commutators of the generators of rotation is the one of three-dimensional space, then $i, j, k = 1, 2, 3$.) The central extension $M$ has an interpretation as non-relativistic mass and corresponds to the symmetry of the Schrödinger equation under phase transformation (and to the conservation of probability).
There are two more generators which we shall denote by D and C. They have the following commutation relations (in one common normalization, realized on the time axis by the vector fields $H = \partial_t$, $D = t\partial_t$, $C = t^2\partial_t$):

$[D, H] = -H, \qquad [D, C] = C, \qquad [H, C] = 2D.$

The generators H, C and D form the sl(2,R) algebra.
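As a minimal check of the normalization chosen above (an editorial choice; conventions differ across the literature), the three commutators can be verified symbolically with sympy in the one-dimensional vector-field realization:

```python
import sympy as sp

t, f = sp.symbols('t'), sp.Function('f')

# Vector fields on the time axis: H = d/dt, D = t d/dt, C = t^2 d/dt.
def H(g): return sp.diff(g, t)
def D(g): return t * sp.diff(g, t)
def C(g): return t**2 * sp.diff(g, t)

def commutator(A, B, g):
    """[A, B] applied to a test function g."""
    return sp.simplify(A(B(g)) - B(A(g)))

g = f(t)
print(commutator(D, H, g))   # -> -Derivative(f(t), t)      =  -H g
print(commutator(D, C, g))   # -> t**2*Derivative(f(t), t)  =   C g
print(commutator(H, C, g))   # -> 2*t*Derivative(f(t), t)   = 2 D g
```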
A more systematic notation allows one to cast these generators into the four (infinite) families $X_n$, $Y^j_m$, $M_n$ and $R^{(jk)}_n$, where n ∈ ℤ is an integer and m ∈ ℤ+1/2 is a half-integer and j,k = 1,...,d label the spatial direction, in d spatial dimensions. The non-vanishing commutators of the Schrödinger algebra become (euclidean form; commutators involving two rotation generators follow the usual so(d) pattern):

$[X_n, X_{n'}] = (n - n')\,X_{n+n'}, \qquad [X_n, Y^j_m] = \left(\tfrac{n}{2} - m\right)Y^j_{n+m}, \qquad [X_n, M_{n'}] = -n'\,M_{n+n'},$
$[X_n, R^{(jk)}_{n'}] = -n'\,R^{(jk)}_{n+n'}, \qquad [Y^j_m, Y^k_{m'}] = \delta^{jk}\,(m - m')\,M_{m+m'}.$

The Schrödinger algebra is finite-dimensional and contains the generators $X_{-1}, X_0, X_1$, $Y^j_{\pm 1/2}$, $M_0$ and $R^{(jk)}_0$.
In particular, the three generators $X_{-1}, X_0, X_1$ span the sl(2,R) sub-algebra. Space-translations are generated by $Y^j_{-1/2}$
and the Galilei-transformations by $Y^j_{+1/2}$.
In the chosen notation, one clearly sees that an infinite-dimensional extension exists, which is called the Schrödinger–Virasoro algebra.
Then, the generators $X_n$ and $M_n$ with n integer span a loop-Virasoro algebra. An explicit representation as time-space transformations is given by, with n ∈ ℤ and m ∈ ℤ+1/2,

$X_n = -t^{n+1}\partial_t - \tfrac{n+1}{2}\,t^n\,r_j\partial_{r_j} - \tfrac{n(n+1)}{4}\,\mathcal{M}\,t^{n-1}\vec{r}^{\,2} - \tfrac{n+1}{2}\,x\,t^n$
$Y^j_m = -t^{m+1/2}\,\partial_{r_j} - \left(m + \tfrac{1}{2}\right)\mathcal{M}\,t^{m-1/2}\,r_j$
$M_n = -\mathcal{M}\,t^n$
$R^{(jk)}_n = -t^n\left(r_j\partial_{r_k} - r_k\partial_{r_j}\right)$

where $\mathcal{M}$ denotes the mass and $x$ the scaling dimension of the field on which the generators act.
This shows how the central extension of the non-semi-simple and finite-dimensional Schrödinger algebra becomes a component of an infinite family in the Schrödinger–Virasoro algebra. In addition, and in analogy with either the Virasoro algebra or the Kac–Moody algebra, further central extensions are possible. However, a non-vanishing result only exists for the commutator $[X_n, X_{n'}]$, where it must be of the familiar Virasoro form, namely

$[X_n, X_{n'}] = (n - n')\,X_{n+n'} + \frac{c}{12}\,\delta_{n+n',\,0}\,n\left(n^2 - 1\right),$

or for the commutator between the rotations $R^{(jk)}_n$, where it must have a Kac–Moody form. Any other possible central extension can be absorbed into the Lie algebra generators.
The role of the Schrödinger group in mathematical physics
Though the Schrödinger group is defined as symmetry group of the free particle Schrödinger equation, it is realized in some interacting non-relativistic systems (for example cold atoms at criticality).
The Schrödinger group in $d$ spatial dimensions can be embedded into the relativistic conformal group in $d+2$ dimensions. This embedding is connected with the fact that one can get the Schrödinger equation from the massless Klein–Gordon equation through Kaluza–Klein compactification along null-like dimensions and the Bargmann lift of Newton–Cartan theory. This embedding can also be viewed as the extension of the Schrödinger algebra to the maximal parabolic sub-algebra of the conformal algebra.
The Schrödinger group symmetry can give rise to exotic properties in interacting bosonic and fermionic systems, such as superfluids in bosons, and Fermi liquids and non-Fermi liquids in fermions. They have applications in condensed matter and cold atoms.
The Schrödinger group also arises as dynamical symmetry in condensed-matter applications: it is the dynamical symmetry of the
Edwards–Wilkinson model of kinetic interface growth. It also describes the kinetics of phase-ordering, after a temperature quench from the disordered to the ordered phase, in magnetic systems.
References
C. R. Hagen, "Scale and Conformal Transformations in Galilean-Covariant Field Theory", Phys. Rev. D5, 377–388 (1972)
U. Niederer, "The maximal kinematical invariance group of the free Schroedinger equation", Helv. Phys. Acta 45, 802 (1972)
G. Burdet, M. Perrin, P. Sorba, "About the non-relativistic structure of the conformal algebra", Comm. Math. Phys. 34, 85 (1973)
M. Henkel, "Schrödinger-invariance and strongly anisotropic critical systems", J. Stat. Phys. 75, 1023 (1994)
M. Henkel, J. Unterberger, "Schrödinger-invariance and space-time symmetries", Nucl. Phys. B660, 407 (2003)
A. Röthlein, F. Baumann, M. Pleimling, "Symmetry-based determination of space-time functions in nonequilibrium growth processes", Phys. Rev. E74, 061604 (2006) -- erratum E76, 019901 (2007)
D.T. Son, "Towards an AdS/cold atoms correspondence: A geometric realization of the Schrödinger symmetry", Phys. Rev. D78, 046003 (2008)
A. Bagchi, R. Gopakumar, "Galilean Conformal Algebras and AdS/CFT", JHEP 0907:037 (2009)
M. Henkel, M. Pleimling, Non-equilibrium phase transitions, vol 2: ageing and dynamical scaling far from equilibrium, (Springer, Heidelberg 2010)
J. Unterberger, C. Roger, The Schrödinger-Virasoro algebra, (Springer, Heidelberg 2012)
See also
Schrödinger equation
Galilean transformation
Poincaré group
Group
Theoretical physics
Lie groups | Schrödinger group | [
"Physics",
"Mathematics"
] | 1,343 | [
"Lie groups",
"Mathematical structures",
"Equations of physics",
"Theoretical physics",
"Eponymous equations of physics",
"Quantum mechanics",
"Algebraic structures",
"Schrödinger equation"
] |
30,917,217 | https://en.wikipedia.org/wiki/C9H8N2O2 | {{DISPLAYTITLE:C9H8N2O2}}
The molecular formula C9H8N2O2 (molar mass: 176.17 g/mol, exact mass: 176.0586 u) may refer to:
Methyl phenyldiazoacetate
Pemoline
Molecular formulas | C9H8N2O2 | [
"Physics",
"Chemistry"
] | 68 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
30,917,280 | https://en.wikipedia.org/wiki/C8H7N3O5 | {{DISPLAYTITLE:C8H7N3O5}}
The molecular formula C8H7N3O5 (molar mass: 225.16 g/mol, exact mass: 225.0386 u) may refer to:
Dinitolmide, also known as zoalene
Furazolidone
Molecular formulas | C8H7N3O5 | [
"Physics",
"Chemistry"
] | 73 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
26,251,183 | https://en.wikipedia.org/wiki/Alureon | Alureon (also known as TDSS or TDL-4) is a trojan and rootkit created to steal data by intercepting a system's network traffic and searching for banking usernames and passwords, credit card data, PayPal information, social security numbers, and other sensitive user data. Following a series of customer complaints, Microsoft determined that Alureon caused a wave of BSoDs on some 32-bit Microsoft Windows systems. The update, MS10-015, triggered these crashes by breaking assumptions made by the malware author(s).
According to research conducted by Microsoft, Alureon was the second most active botnet in the second quarter of 2010.
Description
The Alureon bootkit was first identified around 2007. Personal computers are usually infected when users manually download and install Trojan software. Alureon is known to have been bundled with the rogue security software, "Security Essentials 2010". When the dropper is executed, it first hijacks the print spooler service (spoolsv.exe) to update the master boot record and execute a modified bootstrap routine. Then it infects low-level system drivers such as those responsible for PATA operations (atapi.sys) to install its rootkit.
Once installed, Alureon manipulates the Windows Registry to block access to Windows Task Manager, Windows Update, and the desktop. It also attempts to disable anti-virus software. Alureon has also been known to redirect search engines to commit click fraud. Google has taken steps to mitigate this for their users by scanning for malicious activity and warning users in the case of a positive detection.
The malware drew considerable public attention when a software bug in its code caused some 32-bit Windows systems to crash upon installation of security update MS10-015. The malware was using a hard-coded memory address in the kernel that changed after the installation of the hotfix. Microsoft subsequently modified the hotfix to prevent installation if an Alureon infection is present; the malware author(s) also fixed the bug in the code.
In November 2010, the press reported that the rootkit had evolved to the point that it was bypassing the mandatory kernel-mode driver signing requirement of 64-bit editions of Windows 7. It did this by subverting the master boot record, which made it particularly resistant on all systems to detection and removal by anti-virus software.
TDL-4
TDL-4 is sometimes used synonymously with Alureon and is also the name of the rootkit that runs the botnet.
It first appeared in 2008 as TDL-1 being detected by Kaspersky Lab in April 2008. Later version two appeared known as TDL-2 in early 2009. Some time after TDL-2 became known, emerged version three which was titled TDL-3. This led eventually to TDL-4.
It was often noted by journalists as "indestructible" in 2011, although it is removable with tools such as Kaspersky's TDSSKiller. It infects the master boot record of the target machine, making it harder to detect and remove. Major advancements include encrypting communications, decentralized controls using the Kad network, as well as deleting other malware.
Removal
While the rootkit is generally able to avoid detection, circumstantial evidence of the infection may be found through examination of network traffic with a packet analyzer or inspection of outbound connections with a tool such as netstat. Although existing security software on a computer will occasionally report the rootkit, it often goes undetected. It may be useful to perform an offline scan of the infected system after booting an alternative operating system, such as WinPE, as the malware will attempt to prevent security software from updating. The "FixMbr" command of the Windows Recovery Console and manual replacement of "atapi.sys" could possibly be required to disable the rootkit functionality before anti-virus tools are able to find and clean an infection.
Various companies have created standalone tools which attempt to remove Alureon. Two popular tools are Microsoft Windows Defender Offline and Kaspersky TDSSKiller.
Arrests
On November 9, 2011, the United States Attorney for the Southern District of New York announced charges against six Estonian nationals who were arrested by Estonian authorities and one Russian national, in conjunction with Operation Ghost Click. As of February 6, 2012, two of these individuals were extradited to New York for running a sophisticated operation that used Alureon to infect millions of computers.
See also
Bagle (computer worm)
Botnet
Conficker
Gameover ZeuS
Regin (malware)
Rustock botnet
Srizbi botnet
Storm botnet
Trojan.Win32.DNSChanger
ZeroAccess botnet
Zeus (malware)
Zombie (computing)
References
External links
TDSSKiller tool for detecting and removing rootkits and bootkits, Kaspersky Lab
TDSS Removal, June 6, 2011, TrishTech.com
Virus:Win32/Alureon.A at Microsoft Security Intelligence
Backdoor.Tidserv at Symantec
Botnets
Internet security
Distributed computing projects
Rootkits
Spamming
Trojan horses
Windows malware
Hacking in the 2010s | Alureon | [
"Engineering"
] | 1,100 | [
"Distributed computing projects",
"Information technology projects"
] |
26,251,306 | https://en.wikipedia.org/wiki/Plasma%20transferred%20wire%20arc%20thermal%20spraying | Plasma transferred wire arc (PTWA) thermal spraying is a thermal spraying process that deposits a coating on the internal surface of a cylindrical surface, or external surface of any geometry. It is predominantly known for its use in coating the cylinder bores of an internal combustion engine, enabling the construction of aluminium engine blocks without cast iron cylinder sleeves.
The inventors of PTWA received the 2009 IPO National Inventor of the Year award. This technology was initially patented and developed by Flame-Spray Industries, and subsequently improved upon by Flame-Spray and Ford.
Process
A single conductive wire is used as feedstock for the system. A supersonic plasma jet—formed by a transferred arc between a non-consumable cathode and the wire—melts and atomizes the wire. A stream of air transports the atomized metal onto the substrate. The particles flatten upon striking the surface of the substrate due to their high kinetic energy. The particles rapidly solidify upon contact and can assume both crystalline and amorphous phases. There is also the possibility of producing multi-layer coatings via stacked layers of particles, increasing wear resistance. All conductive wires up to and including can be used as feedstock material, including "cored" wires. Refractory metals, as well as low melt materials, are easily deposited.
Applications
PTWA can be used to apply a coating to wear surfaces of engine or transmission components, serving as a plain bearing. For the cylinder bores of hypoeutectic aluminum-silicon alloy blocks, PTWA's main advantages over cast iron liners are reduced weight and cost. The thinner bore surface also allows for more compact bore spacing, and can potentially provide better heat transfer.
Automotive engines that use PTWA include Nissan VR38DETT, and Ford Coyote. Caterpillar and Ford also use PTWA to remanufacture engines.
References
External links
PTWA internal coating system
http://www.sae.org/mags/aei/manuf/7624
Metallurgical processes
Coatings
spraying | Plasma transferred wire arc thermal spraying | [
"Chemistry",
"Materials_science"
] | 421 | [
"Metallurgical processes",
"Coatings",
"Metallurgy"
] |
26,252,375 | https://en.wikipedia.org/wiki/Infragravity%20wave | Infragravity waves are surface gravity waves with frequencies lower than the wind waves – consisting of both wind sea and swell – thus corresponding with the part of the wave spectrum lower than the frequencies directly generated by forcing through the wind.
Infragravity waves are ocean surface gravity waves generated by ocean waves of shorter periods. The amplitude of infragravity waves is most relevant in shallow water, in particular along coastlines hit by high amplitude and long period wind waves and ocean swells. Wind waves and ocean swells are shorter, with typical dominant periods of 1 to 25 s. In contrast, the dominant period of infragravity waves is typically 80 to 300 s, which is close to the typical periods of tsunamis, with which they share similar propagation properties including very fast celerities in deep water. This distinguishes infragravity waves from normal oceanic gravity waves, which are created by wind acting on the surface of the sea, and are slower than the generating wind.
Whatever the details of their generation mechanism, discussed below, infragravity waves are subharmonics of the impinging gravity waves.
Technically infragravity waves are simply a subcategory of gravity waves and refer to all gravity waves with periods greater than 30 s. This could include phenomena such as tides and oceanic Rossby waves, but the common scientific usage is limited to gravity waves that are generated by groups of wind waves.
The term "infragravity wave" appears to have been coined by Walter Munk in 1950.
Generation
Two main processes can explain the transfer of energy from the short wind waves to the long infragravity waves, and both are important in shallow water and for steep wind waves. The most common process is the subharmonic interaction of trains of wind waves which was first observed by Munk and Tucker and explained by Longuet-Higgins and Stewart. Because wind waves are not monochromatic they form groups. The Stokes drift induced by these groupy waves transports more water where the waves are highest. The waves also push the water around in a way that can be interpreted as a force: the divergence of the radiation stresses. Combining mass and momentum conservation, Longuet-Higgins and Stewart give, with three different methods, the now well-known result. Namely, the mean sea level oscillates with a wavelength that is equal to the length of the group, with a low level where the wind waves are highest and a high level where these waves are lowest. This oscillation of the sea surface is proportional to the square of the short wave amplitude and becomes very large when the group speed approaches the speed of shallow water waves. The details of this process are modified when the bottom is sloping, which is generally the case near the shore, but the theory captures the important effect, observed in most conditions, that the high water of this 'surf beat' arrives with the waves of lowest amplitude.
Another process was proposed later by Graham Symonds and his collaborators. To explain some cases in which this phase of long and short waves were not opposed, they proposed that the position of the breaker line in the surf, moving towards deep water when waves are higher, could act like a wave maker. It appears that this is probably a good explanation for infragravity wave generation on a reef.
In the case of coral reefs, the infragravity periods are established by resonances with the reef itself.
Impact
Infragravity waves are thought to be a generating mechanism behind sneaker waves, unusually large and long-duration waves that cause water to surge far onshore and that have killed a number of people in the US Pacific Northwest.
Infragravity waves generated along the Pacific coast of North America have been observed to propagate transoceanically to Antarctica and there to impinge on the Ross Ice Shelf. Their frequencies more closely couple with the ice shelf natural frequencies and they produce a larger amplitude ice shelf movement than the normal ocean swell of gravity waves. Further, they are not damped by sea ice as normal ocean swell is. As a result, they flex floating ice shelves such as the Ross Ice Shelf; this flexure contributes significantly to the breakup on the ice shelf.
References
External links
Fluid dynamics
Water waves | Infragravity wave | [
"Physics",
"Chemistry",
"Engineering"
] | 863 | [
"Physical phenomena",
"Water waves",
"Chemical engineering",
"Waves",
"Piping",
"Fluid dynamics"
] |
26,253,227 | https://en.wikipedia.org/wiki/S-PULSE | S-PULSE is the acronym of Shrink-Path of Ultra-Low Power Superconducting Electronics. S-PULSE is a support action of the European Seventh Framework Programme (FP7) that stimulates joint efforts of European academic and industrial groups in the field of superconducting technologies. The general goal is to prepare Superconductor Electronics (SE) technologies for the technology generation beyond the CMOS scaling limits (called often “beyond CMOS”). S-PULSE supports the Superconducting Electronics community to strengthen the vital link between research and development and industry. It also strengthens the exchange of knowledge and ideas and take charge of education.
The challenge in SE is to achieve superconducting electronic circuit performance beyond the possibilities of semiconductor circuit technologies, and to make SE technologies ready to benefit other technologies in the world markets. This support action, developed in the 2008-2010 period, is focused on preparing a Technology Roadmap and a Strategic Research Agenda (SRA) to enable the transition from the present scientifically oriented network for SE towards an industrially guided European Technology Platform (ETP).
References
External links
S-PULSE
FLUXONICS
FLUXONICS Foundry
RSFQ
Superconductivity | S-PULSE | [
"Physics",
"Materials_science",
"Engineering"
] | 250 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
26,254,718 | https://en.wikipedia.org/wiki/Group%20field%20theory | Group field theory (GFT) is a quantum field theory in which the base manifold is taken to be a Lie group. It is closely related to background independent quantum gravity approaches such as loop quantum gravity, the spin foam formalism and causal dynamical triangulation. Its perturbative expansion can be interpreted as spin foams and simplicial pseudo-manifolds (depending on the representation of the fields). Thus, its partition function defines a non-perturbative sum over all simplicial topologies and geometries, giving a path integral formulation of quantum spacetime.
See also
Shape dynamics
Causal Sets
Fractal cosmology
Loop quantum gravity
Planck scale
Quantum gravity
Regge calculus
Simplex
Simplicial manifold
Spin foam
References
Wayback Machine see Sec 6.8 Dynamics: III. Group field theory
Quantum gravity | Group field theory | [
"Physics"
] | 174 | [
"Unsolved problems in physics",
"Quantum mechanics",
"Quantum gravity",
"Physics beyond the Standard Model",
"Quantum physics stubs"
] |
26,255,087 | https://en.wikipedia.org/wiki/Reliability%20block%20diagram | A reliability block diagram (RBD) is a diagrammatic method for showing how component reliability contributes to the success or failure of a redundant system. RBD is also known as a dependence diagram (DD).
An RBD is drawn as a series of blocks connected in parallel or series configuration. Parallel blocks indicate redundant subsystems or components that contribute to a lower failure rate. Each block represents a component of the system with a failure rate. RBDs will indicate the type of redundancy in the parallel path. For example, a group of parallel blocks could require two out of three components to succeed for the system to succeed. By contrast, any failure along a series path causes the entire series path to fail.
An RBD may be drawn using switches in place of blocks, where a closed switch represents a working component and an open switch represents a failed component. If a path may be found through the network of switches from beginning to end, the system still works.
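The switch picture translates directly into a structure function: the system works exactly when the working components contain a start-to-end path. A minimal sketch for a hypothetical diagram A → (B or C) → D, computing exact system reliability by enumerating component states:

```python
import itertools

# Hypothetical diagram: A -> (B or C) -> D.  A closed "switch" (True) is a
# working component; the system works if some start-to-end path uses only
# working components, which here reduces to a boolean structure function.
def system_works(state):
    return state["A"] and (state["B"] or state["C"]) and state["D"]

R = {"A": 0.99, "B": 0.90, "C": 0.90, "D": 0.95}   # block reliabilities

# Exact system reliability by enumerating all 2^4 component states.
p_success = 0.0
for bits in itertools.product([True, False], repeat=len(R)):
    state = dict(zip(R, bits))
    p_state = 1.0
    for name, up in state.items():
        p_state *= R[name] if up else 1.0 - R[name]
    if system_works(state):
        p_success += p_state

print(p_success)   # 0.99 * (1 - 0.10 * 0.10) * 0.95 = 0.931095
```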
An RBD may be converted to a success tree or a fault tree depending on how the RBD is defined. A success tree may then be converted to a fault tree or vice versa by applying de Morgan's theorem.
To evaluate an RBD, closed form solutions are available when blocks or components have statistical independence.
When statistical independence is not satisfied, specific formalisms and solution tools such as dynamic RBD have to be considered.
Calculating an RBD
The first thing one must determine when calculating an RBD is whether to use probability or rate. Failure rates are often used in RBDs to determine system failure rates. Use probabilities or rates in an RBD but not both.
Series probabilities are calculated by multiplying the reliability (a probability) of the series components:

$R_{\text{series}} = \prod_{i=1}^{n} R_i$
Parallel probabilities are calculated by multiplying together the unreliabilities (Q, where Q = 1 − R) of the parallel components, if only one unit needs to function for system success:

$Q_{\text{parallel}} = \prod_{i=1}^{n} \left(1 - R_i\right), \qquad R_{\text{parallel}} = 1 - \prod_{i=1}^{n} \left(1 - R_i\right)$
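A minimal sketch of the two probability formulas (hypothetical block reliabilities); note that it reproduces the state-enumeration result above for the same diagram:

```python
def series_reliability(reliabilities):
    """All blocks along a series path must work: R_s = product of the R_i."""
    r = 1.0
    for ri in reliabilities:
        r *= ri
    return r

def parallel_reliability(reliabilities):
    """Only one of the parallel blocks must work: R_p = 1 - product of (1 - R_i)."""
    q = 1.0
    for ri in reliabilities:
        q *= 1.0 - ri
    return 1.0 - q

# Same hypothetical diagram as above: A -> (B or C) -> D.
print(series_reliability([0.99, parallel_reliability([0.90, 0.90]), 0.95]))
# -> 0.931095, matching the enumeration result
```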
For constant failure rates, series rates are calculated by superimposing the Poisson point processes of the series components:

$\lambda_{\text{series}} = \sum_{i=1}^{n} \lambda_i$
Parallel rates can be evaluated using a number of formulas, including the following formula for the case where all units are active with equal component failure rate $\lambda$ and repair rate $\mu$, where n − q out of n redundant units are required for success and $\mu \gg \lambda$:

$\lambda_{(n-q)\text{-out-of-}n} = \frac{n!\;\lambda^{q+1}}{(n-q-1)!\;\mu^{q}}$
If the components in a parallel system have n different failure rates, a more general formula can be used. For the repairable model, $Q = \lambda/\mu$ as long as $\mu \gg \lambda$.
See also
ARP4761
Block diagram
Reliability engineering
System safety
Fault tree analysis
References
External links
http://www.reliabilityeducation.com/rbd.pdf (commercial website)
Institut pour la Maîtrise des Risques, method sheets, english version
Reliability engineering
Engineering statistics | Reliability block diagram | [
"Engineering"
] | 554 | [
"Systems engineering",
"Reliability engineering",
"Engineering statistics"
] |
26,259,476 | https://en.wikipedia.org/wiki/Scallop%20theorem | In physics, the scallop theorem states that a swimmer that performs a reciprocal motion cannot achieve net displacement in a low-Reynolds number Newtonian fluid environment, i.e. a fluid that is highly viscous. Such a swimmer deforms its body into a particular shape through a sequence of motions and then reverts to the original shape by going through the sequence in reverse. At low Reynolds number, time or inertia does not come into play, and the swimming motion is purely determined by the sequence of shapes that the swimmer assumes.
Edward Mills Purcell stated this theorem in his 1977 paper Life at Low Reynolds Number explaining physical principles of aquatic locomotion. The theorem is named for the motion of a scallop, which opens and closes a simple hinge during one period. Such motion is not sufficient to create migration at low Reynolds numbers. The scallop is an example of a body with one degree of freedom to use for motion. Bodies with a single degree of freedom deform in a reciprocal manner, and consequently bodies with one degree of freedom do not achieve locomotion in a highly viscous environment.
Background
The scallop theorem is a consequence of the subsequent forces applied to the organism as it swims from the surrounding fluid. For an incompressible Newtonian fluid with density and dynamic viscosity , the flow satisfies the Navier–Stokes equations:
where denotes the velocity of the fluid. However, at the low Reynolds number limit, the inertial terms of the Navier-Stokes equations on the left-hand side tend to zero. This is made more apparent by nondimensionalizing the Navier–Stokes equations. By defining a characteristic velocity and length, and , we can cast our variables to dimensionless form:
where the dimensionless pressure is appropriately scaled for flow with significant viscous effects. Plugging these quantities into the Navier-Stokes equations gives us:
And by rearranging terms, we arrive at a dimensionless form:
where Re is the Reynolds number. In the low Reynolds number limit (as Re → 0), the left-hand side tends to zero and we arrive at a dimensionless form of the Stokes equations; redimensionalizing yields the Stokes equations, as sketched below.
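A reconstruction of the equations this passage describes, in the conventional notation (the symbols ρ, μ, U, L and Re are assumed here, since the original markup was not preserved):

$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu\nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0.$$

With $\mathbf{u}^{*} = \mathbf{u}/U$, $\mathbf{x}^{*} = \mathbf{x}/L$, $t^{*} = tU/L$ and the viscous pressure scale $p^{*} = pL/(\mu U)$, the momentum equation becomes

$$\mathrm{Re}\left(\frac{\partial \mathbf{u}^{*}}{\partial t^{*}} + \mathbf{u}^{*}\cdot\nabla^{*}\mathbf{u}^{*}\right) = -\nabla^{*} p^{*} + \nabla^{*2}\mathbf{u}^{*}, \qquad \mathrm{Re} = \frac{\rho U L}{\mu},$$

and in the limit $\mathrm{Re}\to 0$, redimensionalizing gives the Stokes equations $\nabla p = \mu\nabla^{2}\mathbf{u}$ with $\nabla\cdot\mathbf{u} = 0$.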
Statement
The consequences of having no inertial terms at low Reynolds number are:
The swimmer experiences virtually no net force or torque.
The velocity is linearly proportional to the force (and likewise the angular velocity to the torque).
The Stokes equations are linear and independent of time.
In particular, for a swimmer moving in the low Reynolds number regime, its motion satisfies:
Independent of time: The same motion may be sped up or slowed down, and it would still satisfy the Stokes equations. More geometrically, this means that the motion of a swimmer in the low Reynolds number regime is purely determined by the shape of its trajectory in configuration space.
Kinematic reversibility: The same motion may be reversed. Any instantaneous reversal of the forces acting on the body will not change the nature of the fluid flow around it, simply the direction of the flow. These forces are responsible for producing motion. When a body has only one degree of freedom, reversal of forces will cause the body to deform in a reciprocal fashion. For instance, a scallop opening its hinge will simply close it to try to achieve propulsion. Since the reversal of forces does not change the nature of the flow, the body will move in the reverse direction in the exact same manner, leading to no net displacement. This is how we arrive at the consequences of the scallop theorem.
Proof by scaling
This is closer in spirit to the proof sketch given by Purcell. The key result is that the motion of a swimmer in a Stokes fluid does not depend on time: one cannot detect whether a movie of a swimming motion has been slowed down, sped up, or reversed. The other results then follow as simple corollaries.
The stress tensor of a Newtonian fluid is σ = −pI + μ(∇u + (∇u)ᵀ).
Let c be a nonzero real constant. Given a swimming motion, we can scale the hydrostatic pressure, flow velocity, and stress tensor all by c and obtain another solution to the Stokes equations.
Since the motion is in the low Reynolds number regime, inertial forces are negligible, and the instantaneous total force and torque on the swimmer must both balance to zero. Since the instantaneous total force and torque are computed by integrating the stress tensor over the swimmer's surface, they are likewise scaled by c, and so remain zero.
Thus, scaling both the swimmer's motion and the motion of the surrounding fluid by the same factor, we still obtain a motion that respects the Stokes equations, as sketched below.
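A sketch of this scaling argument, under the assumption of the standard Stokes system (the scaling constant is written here as c):

Since the Stokes equations $-\nabla p + \mu\nabla^{2}\mathbf{u} = 0$, $\nabla\cdot\mathbf{u} = 0$ are linear, if $(\mathbf{u}, p, \boldsymbol{\sigma})$ is a solution then so is $(c\mathbf{u}, cp, c\boldsymbol{\sigma})$ for any nonzero real $c$. The total force on the swimmer scales the same way,

$$\mathbf{F} = \oint_{S} \boldsymbol{\sigma}\cdot\mathbf{n}\, dS \;\longmapsto\; c\,\mathbf{F} = \mathbf{0},$$

so force-free and torque-free motion remains force-free and torque-free after speeding up ($c > 1$), slowing down ($0 < c < 1$), or reversing ($c < 0$) the motion.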
Proof by vector calculus
The proof of the scallop theorem can be represented in a mathematically elegant way. To do this, we must first understand the mathematical consequences of the linearity of Stokes equations. To summarize, the linearity of Stokes equations allows us to use the reciprocal theorem to relate the swimming velocity of the swimmer to the velocity field of the fluid around its surface (known as the swimming gait), which changes according to the periodic motion it exhibits. This relation allows us to conclude that locomotion is independent of swimming rate. Subsequently, this leads to the discovery that reversal of periodic motion is identical to the forward motion due to symmetry, allowing us to conclude that there can be no net displacement.
Rate-independence
The reciprocal theorem describes the relationship between two Stokes flows in the same geometry where inertial effects are insignificant compared to viscous effects. Consider a fluid-filled region bounded by a surface S with unit normal n. Suppose we have solutions to the Stokes equations in this domain in the form of the velocity fields u and u′, with corresponding stress fields σ and σ′ respectively. Then the following equality holds:
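In the standard notation (assumed here, as the original formula was not preserved), this is the Lorentz reciprocal identity:

$$\int_{S} \mathbf{u}\cdot\left(\boldsymbol{\sigma}'\cdot\mathbf{n}\right) dS = \int_{S} \mathbf{u}'\cdot\left(\boldsymbol{\sigma}\cdot\mathbf{n}\right) dS.$$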
The reciprocal theorem allows us to obtain information about a certain flow by using information from another flow. This is preferable to solving the Stokes equations directly, which is difficult when the boundary conditions are not fully known. It is particularly useful if one wants to understand the flow in a complicated problem by studying the flow of a simpler problem in the same geometry.
One can use the reciprocal theorem to relate the swimming velocity U of a swimmer subject to a force F to its swimming gait u^S.
Having established that the relationship between the instantaneous swimming velocity in the direction of the force acting on the body and its swimming gait follows a general form, in which the gait is integrated over the positions of points on the surface of the swimmer, we can establish that locomotion is independent of rate. Consider a swimmer that deforms in a periodic fashion through a sequence of motions between the times t₁ and t₂. The net displacement of the swimmer is the integral of its velocity over this interval.
Now consider the swimmer deforming in the same manner but at a different rate, described by a mapping that reparametrizes time.
Using this mapping, we see that the net distance traveled by the swimmer does not depend on the rate at which it is being deformed, but only on the geometrical sequence of shapes. This is the first key result, sketched below.
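A sketch of this computation, writing the period as [t₁, t₂] and the time reparametrization as t′ = f(t) with f′ > 0 (the symbols are assumed, since the original formulas were not preserved):

$$\Delta X = \int_{t_{1}}^{t_{2}} U(t)\, dt.$$

Because Stokes flow is instantaneous and linear in the boundary velocity, traversing the same shape sequence at a different rate rescales the velocity as $U'(f(t)) = U(t)/f'(t)$, so

$$\Delta X' = \int_{f(t_{1})}^{f(t_{2})} U'(t')\, dt' = \int_{t_{1}}^{t_{2}} \frac{U(t)}{f'(t)}\, f'(t)\, dt = \Delta X.$$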
Symmetry of forward and backward motion
If a swimmer moves in a periodic fashion that is time invariant, the average displacement during one period must be zero. To illustrate the proof, consider a swimmer deforming during one period that starts and ends at times t₁ and t₂, so that its shape at the start and at the end is the same. Next, consider the motion obtained by time-reversal symmetry of the first motion. Using a mapping similar to that of the previous section, we define the reversed times and require the shape in the reverse motion at each reversed time to be the same as the shape in the forward motion at the corresponding time. Comparing the net displacements in the two cases shows that they are equal and opposite: a swimmer that reverses its motion by reversing its sequence of shape changes travels the opposite distance. This is the second key result. In addition, since the swimmer exhibits reciprocal body deformation, the sequence of motions is the same in the forward and the reversed stroke. Thus, the distance traveled should be the same independently of the direction of time, meaning that reciprocal motion cannot be used for net motion in low Reynolds number environments, as summarized below.
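A one-line summary of how the two key results combine for a reciprocal stroke:

$$\Delta X_{\text{rev}} = -\,\Delta X_{\text{fwd}} \;\;\text{(kinematic reversibility)}, \qquad \Delta X_{\text{rev}} = \Delta X_{\text{fwd}} \;\;\text{(same shape sequence, rate independence)},$$

which together force $\Delta X_{\text{fwd}} = 0$: a reciprocal swimmer achieves no net displacement.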
Exceptions
The scallop theorem holds if we assume that a swimmer undergoes reciprocal motion in an infinite quiescent Newtonian fluid in the absence of inertia and external body forces. However, there are instances where the assumptions for the scallop theorem are violated. In one case, successful swimmers in viscous environments must display non-reciprocal body kinematics. In another case, if a swimmer is in a non-Newtonian fluid, locomotion can be achieved as well.
Types of non-reciprocal motion
In his original paper, Purcell proposed a simple example of non-reciprocal body deformation, now commonly known as the Purcell swimmer. This simple swimmer possesses two degrees of freedom for motion: a two-hinged body composed of three rigid links rotating out of phase with each other. However, any body with more than one degree of freedom of motion can achieve locomotion as well.
In general, microscopic organisms like bacteria have evolved different mechanisms to perform non-reciprocal motion:
Use of a flagellum, which rotates, pushing the medium backwards — and the cell forwards — in much the same way that a ship's screw moves a ship. This is how some bacteria move; the flagellum is attached at one end to a complex rotating motor held rigidly in the bacterial cell surface.
Use of a flexible arm: this could be done in many different ways. For example, mammalian sperm have a flagellum which, whip-like, wriggles at the end of the cell, pushing the cell forward. Cilia are quite similar structures to mammalian flagella; they can advance a cell like a paramecium by a complex motion not dissimilar to the breaststroke.
Geometrically, the rotating flagellum is a one-dimensional swimmer, and it works because its motion is going around a circle-shaped configuration space, and a circle is not a reciprocating motion. The flexible arm is a multi-dimensional swimmer, and it works because its motion is going around a circle in a square-shaped configuration space. Notice that the first kind of motion has nontrivial homotopy, but the second kind has trivial homotopy.
Non-Newtonian fluids
The assumption of a Newtonian fluid is essential, since the Stokes equations will not remain linear and time-independent in an environment with complex mechanical and rheological properties. Many living microorganisms indeed inhabit complex non-Newtonian fluids in biologically relevant environments; for instance, crawling cells often migrate in elastic polymeric fluids.
Non-Newtonian fluids have several properties that can be manipulated to produce small-scale locomotion. First, one such exploitable property is normal stress differences, which arise from the stretching of the fluid by the flow of the swimmer. Another exploitable property is stress relaxation: the time evolution of such stresses contains a memory term, though the extent to which this can be utilized is largely unexplored. Last, non-Newtonian fluids possess viscosities that depend on the shear rate; in other words, a swimmer would experience a different Reynolds number environment by altering its rate of motion. Many biologically relevant fluids exhibit shear-thinning, meaning that viscosity decreases with shear rate. In such an environment, the rate at which a swimmer executes its reciprocal motion becomes significant, as the dynamics are no longer time invariant. This is in stark contrast to the result established above, where the rate at which a swimmer moves is irrelevant for establishing locomotion. Thus, a reciprocal swimmer can be designed for a non-Newtonian fluid; Qiu et al. (2014) were able to design a micro-scallop that swims in such a fluid.
See also
Bacterial motility
Microswimmer
Protist locomotion
References
External links
E. M. Purcell, "Life at Low Reynolds Number", American Journal of Physics, vol. 45, pp. 3–11 (1977)
Kinematic Reversibility and the Scallop Theorem
Video of a swimming robot unable to propel in viscous fluid due to the Scallop theorem
Physics theorems
Fluid dynamics
Mathematical and theoretical biology | Scallop theorem | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 2,557 | [
"Equations of physics",
"Mathematical and theoretical biology",
"Applied mathematics",
"Chemical engineering",
"Piping",
"Physics theorems",
"Fluid dynamics"
] |
29,469,931 | https://en.wikipedia.org/wiki/Bernard%20H.%20Lavenda | Bernard Howard Lavenda (born September 18, 1945) is a retired professor of chemical physics at the University of Camerino and an expert on irreversible thermodynamics. He has contributed to many areas of physics, including Brownian motion, the establishment of the statistical basis of thermodynamics, and non-Euclidean geometrical theories of relativity. He was the scientific coordinator of the "European Thermodynamics Network" in the European Commission Program of Human Capital and Mobility. He was also a proponent for the establishment of, and scientific director of, a National (Italian) Centre for Thermodynamics, and has acted as scientific consultant to companies such as the ENI Group, where he helped to found TEMA, a consulting firm for SNAM Progetti, ENEA (Italian National Agency for New Technologies, Energy and the Environment), and the Solar Energy Research Institute in Golden, Colorado. He has had over 130 scientific papers published in international journals, some critical of the new fashions and modes in theoretical physics.
Professor Lavenda currently lives in Trevignano Romano near Rome, and is married with two adult children and two grandchildren, to whom his textbook "A New Perspective on Thermodynamics" is dedicated.
Biography
Early years
Bernard H. Lavenda was born in New York City. After completing secondary school in North Adams, Massachusetts, he attended Clark University, where he graduated cum laude in 1966 with a B.Sc in chemistry. Having passed the entrance examination for the doctoral program at the Weizmann Institute of Science, he began experimental work on enzymes under the direction of Ephraim Katzir, who was later to become the President of Israel. Realizing that he was not cut out for experimental work, he came under the influence of Ephraim's brother, Aaron, after reading his book Nonequilibrium Thermodynamics in Biophysics, coauthored with Peter Curran.
After the Six Days War, Aaron Katchalsky helped him secure a studentship for a doctoral degree in Ilya Prigogine's group in Brussels.
Doctoral thesis
His doctoral thesis, "Kinetic analysis and thermodynamic interpretation of nonequilibrium unstable transitions in open systems", showed that when homogeneous nonlinear chemical reactions far from equilibrium on the thermodynamic branch, which is an extension of the law of mass action at equilibrium, become unstable, they make transitions to kinetic branches with lower entropy production than the thermodynamic branch.
This result was initially contested by Prigogine, who reasoned from hydrodynamic instabilities, like the Rayleigh-Bénard instability, which show a larger entropy production beyond the critical point in order to maintain spatial structures. Prigogine later considered these spatial structures to be produced by unstable chemically diffusing systems, based on Alan Turing's morphological models, calling them 'dissipative structures'; for this work Prigogine received the Nobel Prize in Chemistry in 1977.
Prigogine later acknowledged that such transitions to states of lower entropy production were possible, since no spatial structural changes were involved, and incorporated Lavenda's work into a chapter of his new book Thermodynamic Theory of Structure, Stability, and Fluctuations, co-authored with Paul Glansdorff.
After receiving his doctorate from the Universite Libre de Bruxelles, with la plus grande distinction, he returned to Israel in 1970 to work as a post-doctoral student in the Physical Chemistry Department of the Hebrew University.
During that period he published a short note in the Italian physics journal Lettere Al Nuovo Cimento [3 (1972) 385-390] criticizing the Glansdorff-Prigogine universal criterion of evolution, which attributes an inequality to a potential that is a function only of the intensive variables, the forces. He pointed out that no such thermodynamic potential could exist, for it would be devoid of all information regarding how large the system is or how many particles it contains. The inequality would be a criterion of stability, but, on account of the assumption of local (stable) equilibrium of the components into which the system is broken up, the sum of stable components can hardly become unstable. The note would probably have gone unnoticed were it not for Peter Landsberg's citation of it in his Nature review of the Glansdorff-Prigogine book [P. T. Landsberg, "The fourth law of thermodynamics"], where he predicted "the occasional lack of lucidity in the book which may give rise to some discussion within the next few years".
Career
Consultancies
After the murder of Aharon Katzir in the Lod Airport massacre in May 1972, Lavenda accepted a position as consultant at Nuovo Pignone in Florence, Italy, together with a teaching position at the University of Pisa.
Through the vice-president of Nuovo Pignone, he came into contact with Vicenzo Gervasio, who was later to become President of ENI Data, and the idea crystallized of setting up a company dedicated to the analysis and dynamic modeling of fouling processes in refineries and reactors. He established relations between ENI and Northwest Research, Boeing, and SERI (Solar Energy Research Institute).
He helped form a new company within the ENI group, TEMA, which was supported by SNAM Progetti. While retaining an unpaid lectureship in Thermodynamics at the University of Naples, Lavenda published his critical appraisal of the then current theories of irreversible thermodynamics, Thermodynamics of Irreversible Processes, in 1978. It was originally published by the Macmillan Press and later became a Dover Classic of Science and Mathematics.
Camerino years
In 1980 he won a chair in Physical Chemistry. Transferring to Camerino, he was to spend more than three decades there.
His first book during this period, "Nonequilibrium Statistical Thermodynamics", published by Wiley in 1985, developed the nonlinear generalization of the Onsager-Machlup formulation of nonequilibrium fluctuations which was restricted to linear (Gaussian) processes.
Just as equilibrium is characterized by the state of maximum entropy, corresponding to maximum probability, nonequilibrium states are characterized by the principle of least dissipation of energy, corresponding to the maximum probability of a transition between nonequilibrium states that are not well-separated in time.
This principle can be generalized to non-Gaussian fluctuations in the limit of small thermal noise and constitutes a kinetic analog to Boltzmann's principle.
During a sabbatical year in 1986 in Porto Alegre, Lavenda had ample time to browse through the well-furnished library at the Universidade Federal do Rio Grande do Sul.
He was impressed by the parallelism between statistical inference and statistical thermodynamics: two distinct and separate branches that are essentially working on the same problems but with no apparent connection.
His work, summarized in Statistical Physics: A Probabilistic Approach, published by Wiley-Interscience in 1991, completes Boltzmann's principle, which expresses the entropy as the logarithm of a combinatorial factor, by showing that the entropy is the potential that determines Gauss’ law of error for which the average value is the most probable value.
Just as there are frequency and degree-of-belief (Bayes' theorem) interpretations of statistical inference, the same should apply to statistical thermodynamics. The frequency interpretation applies to extensive variables, like energy and volume, which can be sampled, while the degree-of-belief interpretation applies to the intensive variables, like temperature and pressure, for which sampling has no meaning. The connection between the two branches translates the Cramér-Rao inequality into thermodynamic uncertainty relations, analogous to quantum mechanical uncertainty relations, whereby the more knowledge we have about a thermodynamic variable, the less we know about its conjugate.
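One commonly cited form of such a thermodynamic uncertainty relation, given here as an illustration rather than as a quotation from Lavenda's book, pairs the energy with the inverse temperature:

$$\Delta U \,\Delta\!\left(\frac{1}{T}\right) \geq k_{B},$$

so that the sharper a system's energy is known, the less well-defined its temperature becomes.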
Since the lack of a probability distribution means the absence of its statistics, the possibility of an intermediate statistics, or what is referred to as parastatistics, between Bose–Einstein statistics and Fermi–Dirac statistics is nonexistent.
Statistical thermodynamics is usually concerned with most probable behavior which becomes almost certainty if large enough samples are taken.
But sometimes surprises are in store where extreme behavior becomes the prevalent one.
Turning his attention to such rare events Lavenda published Thermodynamics of Extremes in 1995, whose real interest lies in the formulation of a thermodynamics of earthquakes that was subsequently published in Annali di Geofisica (Extreme value statistics and thermodynamics of earthquakes: "Large earthquakes"; "aftershock sequences"), and which is gaining increasing attention.
By properly defining entropy and energy, a temperature can be associated to an aftershock sequence giving it an additional means of characterization.
A new magnitude-frequency relation is predicted which applies to clustered aftershocks, in contrast to the Gutenberg-Richter law, which treats them as independent and identically distributed random events.
Attempts at forming a centre for thermodynamics
In the nineties, Lavenda saw thermodynamics as a cultural heritage that could have a place in Italian society, and would be pertinent to both industrial research and to the preservation of its artistic patrimony.
He was a proponent of the establishment of a National Centre of Thermodynamics, for which financial funding was unavailable. Capturing the interest of ENEA, the Italian agency for alternative energy resources, he applied for funding in the European Commission's Human Capital and Mobility Programme.
His project, "Thermodynamics of Complex Systems", came in sixth place in Chemistry section with maximum funding in 1992.
This led to the formation of a European Thermodynamics Network consisting of 16 partners in the EU and Switzerland.
It was later extended to the Eastern European countries in the European Commission PECO Programme. This eventually led to the establishment of a National Centre for Thermodynamics, brought into existence by the ENEA, but it lasted only several months because the European funds were absorbed by other projects.
Later years
Often critical of new fashions and modes in thermodynamics, Lavenda wrote A New Perspective on Thermodynamics, published in 2009, by returning to Carnot's original conception that work can only be done when there is a difference in temperature, and the necessity of closing the cycle before that work can be assessed.
More recently Lavenda has directed his interests to relativity by providing it with a new foundation based on non-Euclidean geometries.
Rather than measuring distances in terms of the usual Euclidean metric, distances are defined in terms of what is known as a cross-ratio, a perspective invariant of four points, which, for the space of velocities, just happens to be the compounding of longitudinal Doppler shifts. Doppler shifts are fundamental to relativity: oblique Doppler shifts describe aberration, while second order ones describe length contraction, which, rather than being in the direction of the motion, is perpendicular to it.
The uniformly rotating disc, which is considered by some to be the missing link in Einstein's formulation of general relativity, is exactly described by the hyperbolic metric in polar coordinates, named after the nineteenth century Italian geometer Eugenio Beltrami, which predicts the circumference of the disc to be greater when in motion than when it is at rest.
Thus a uniformly rotating disc belongs to hyperbolic, and not Euclidean, space and so, too, does relativity.
Monographs and textbooks
A New Perspective on Relativity: An Odyssey in Noneuclidean Spaces: World Scientific (2011)
A New Perspective on Thermodynamics: Springer, New York (2009)
Thermodynamics of Extremes: Horwood (1995)
Statistical Physics: A Probabilistic Approach (1991), Dover Reprint 2016 (pbk)
Nonequilibrium statistical thermodynamics - Wiley (1985) , Dover Reprint 2019 (pbk)
Thermodynamics of Irreversible Processes - Macmillan Press, London, 1978 , Dover Reprint in Dover Classics of Science and Mathematics 1993 (pbk)
Russian translation Mir, Moscow, 1999
Introduzione alla Fisica Atomica e Statistica- (E. Santamato coauthor) Liguori Editore, Naples, 1989
Thermodynamics of Irreversible Processes-Italian translation, Liguori Editore, Naples, 1980
Introduction to Atomic Physics and Statistics-In Italian
Where Physics Went Wrong:World Scientific, Singapore (2015)
Seeing Gravity: (2019)
The Physics of Gravitation: (2021)
Dialogue Concerning Two Chief World Systems: Gravity & Quantum Theory: (2024)
Academic positions
1980-2014 — Full Professor of Chemical Physics, University of Camerino, Camerino, Italy
1997 — Scientific Director of the National Centre for Thermodynamics, ENEA Rome
1992–1996 — Scientific Coordinator of the European Thermodynamics Network
1993–1996 — Scientific Coordinator of the Eastern European Thermodynamics Network of the European Commission (PECO)
1986 — Visiting Professor, Department of Physics Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre (RS), Brasil
1978–1984 — Scientific Consultant at TEMA (ENI Group)Bologna
1975–1980 — Professore Incaricato of Thermodynamics University of Naples, Faculty of Natural Sciences
1975–1976 — Professore Incaricato of Electronics, University of Naples, Engineering Faculty
1974–1975 — Incarico di Ricerca at the International Institute of Genetics and Biophysics, Naples
1972–1973 — Professore Incaricato of Chemical Statistics, University of Pisa, Faculty of Natural Sciences
Academic history
1966 — Bachelor of Arts in Chemistry, Clark University, Worcester, Massachusetts, USA
1968 — Masters of Science in Physical Chemistry, Weizmann Institute of Science, Rehovot, Israel
1970 — Doctor of Philosophy in Chemistry, Chemie-Physique II, Université libre de Bruxelles, Belgium
1970–2 Post Doctoral, Department of Physical Chemistry, Hebrew University of Jerusalem, Israel
Awards and prizes
Telesio-Galilei Academy Award 2009, given for his work on irreversible thermodynamics and contributions to many areas of physics, including those of Brownian motion and the establishment of the statistical basis of thermodynamics, and his contributions to astrophysics/cosmology. http://www.telesio-galilei.com/tg/index.php/academy-award-2009
References
Living people
Thermodynamicists
1945 births | Bernard H. Lavenda | [
"Physics",
"Chemistry"
] | 3,048 | [
"Thermodynamics",
"Thermodynamicists"
] |
29,470,091 | https://en.wikipedia.org/wiki/Polavaram%20Project | The Polavaram Project is an under-construction multi-purpose irrigation project on the Godavari River in the Eluru and East Godavari districts of Andhra Pradesh. The project has been accorded national project status by the Central Government of India. Its reservoir backwater spreads up to the Dummugudem Anicut (approximately 150 km upstream of the Polavaram dam on the main river) and about 115 km up the Sabari River, and thus extends into parts of the Chhattisgarh and Odisha states. The project gives a major boost to the tourism sector in the Godavari districts, as the reservoir covers the famous Papikonda National Park; the Polavaram hydroelectric project (HEP) and National Waterway 4 are under construction on the left side of the river. It is located 40 km upstream of the Sir Arthur Cotton Barrage in Rajamahendravaram City and 25 km from Rajahmundry Airport.
History
In July 1941, the first conceptual proposal for the project was mooted by the erstwhile Madras Presidency. Diwan Bahadur L. Venkatakrishna Iyer, then Chief Engineer in the Presidency's irrigation department, made the first survey of the project site and made a definitive proposal for a reservoir at Polavaram. Sri Iyer not only envisaged cultivation of 350,000 acres (140,000 ha) over two crop seasons through this project, but also planned for a 40 MW hydroelectric plant within the project. The project, when it was conceived in 1946–47, was estimated to cost Rs 129 crore.
In 1980, then Chief Minister of Andhra Pradesh Tanguturi Anjaiah laid the foundation stone for the prestigious Polavaram irrigation project. In 2004, Y. S. Rajasekhara Reddy performed bhoomi pooja for the project, then estimated to cost ₹8,261 crore, and administrative sanction was accorded for construction of the right and left canals at a cost of ₹1,320 crore and ₹1,353 crore respectively.
The Y. S. Rajasekhara Reddy Congress government completed 33% of the project before 2014. The Polavaram Project Authority was constituted by the Union Cabinet in May 2014, and construction of the project head works was taken up in earnest. The Naidu government acquired all the land required along the right canal, resolving court petitions from farmers in the West Godavari and Krishna districts who had lost agricultural land, and launched the Pattiseema Lift Irrigation Project to pump Godavari river water into the Krishna river. In June 2014, the state was bifurcated under the Andhra Pradesh Reorganisation Act. In December 2017, it was reported that the project contractor, Transstroy, was seeking a deadline extension and a budget escalation; Transstroy's ₹4,300 crore loan had reportedly turned into a non-performing asset in July 2015. In January 2018, the state government signed a new contract for the project spillway, spill channel and stilling basin concrete work with Navayuga Engineering. By June 2018, 1,10,355 acres of the required 1,68,213 acres had been acquired. On June 11, 2018, the Polavaram diaphragm wall was completed, marking a significant milestone in the project's construction. On January 7, 2019, the Polavaram project entered the Guinness Book of World Records when Navayuga Engineering poured 32,100 cubic metres of concrete in 24 hours, beating the existing record of 21,580 cubic metres achieved by Abdul Wahid Bin Shabib, RALS Contracting LLC and Alfa Eng. Consultant (all UAE) in Dubai between May 18 and 20, 2017. The Chief Minister of Andhra Pradesh, N. Chandrababu Naidu, unveiled the first crest gate of the Polavaram project on 24 December 2018.
Progress
Purpose
The National River-Linking Project, which works under the aegis of the Indian Ministry of Water Resources, was designed to overcome the deficit of water in the country. As a part of this plan, surplus water from the Himalayan rivers is to be transferred to the peninsular rivers of India. This exercise, a combined network of 30 river-links with an estimated cost of US$120 billion (in 1999), would be the largest infrastructure project ever undertaken. In this project's case, the Godavari River basin is considered a surplus basin, while the Krishna River basin is considered a deficit one. As of 2008, 644 tmcft of underutilised water from the Godavari River flowed into the Bay of Bengal; as of 2017, over 3,000 tmcft were drained unutilised into the Bay. Based on the estimated water requirements in 2025, the study recommended that sizeable surplus water be transferred from the Godavari River basin to the Krishna River basin.
The capacity of the right and left canals is 17,500 cusecs each. During the monsoon months (July to October), nearly 360 tmcft of Godavari flood flows, at the rate of 3 tmcft per day, can be diverted into the canals. At least another 190 tmcft from the water stored in the Polavaram reservoir, along with lean season inflows and excluding the downstream Godavari Delta water requirements, can be diverted into these canals. Thus the total annual water utilization capacity of the Polavaram project is 550 tmcft. The water storage available in the Sileru river basin is used as usual for the full water requirements of the Godavari Delta.
The hydropower plant (960 MW) will generate 2.29 billion kWh of green, renewable electricity annually. The Polavaram reservoir will also create the potential to install nearly 1,58,000 MW of high-head pumped-storage hydroelectric plants in the future.
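A back-of-envelope check of these figures (a minimal Python sketch; it assumes 1 tmcft = 10⁹ cubic feet and the roughly 120-day monsoon window implied by the 360 tmcft total):

```python
# Hedged arithmetic check of the canal and power figures quoted above.
SECONDS_PER_DAY = 86_400

def cusecs_to_tmcft_per_day(cusecs: float) -> float:
    """Convert a flow in cusecs (cubic feet per second) to tmcft per day."""
    return cusecs * SECONDS_PER_DAY / 1e9

one_canal = cusecs_to_tmcft_per_day(17_500)
print(f"per canal:       {one_canal:.2f} tmcft/day")        # ~1.5
print(f"both canals:     {2 * one_canal:.2f} tmcft/day")    # ~3.0, as stated
print(f"monsoon (120 d): {2 * one_canal * 120:.0f} tmcft")  # ~363, near 360

# Implied capacity factor of a 960 MW plant generating 2.29 billion kWh/yr.
annual_kwh_at_full_power = 960_000 * 8_760  # kW times hours per year
print(f"capacity factor: {2.29e9 / annual_kwh_at_full_power:.0%}")  # ~27%
```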
Alternate site
The dam could not be taken up for construction during the last century on techno-economic grounds. The proposed dam site at Polavaram is located where the river emerges from the last range of the Eastern Ghats into plains covered with deep alluvial sandy strata. At Polavaram, the river is about 1500 m wide, and in view of the large depth of excavation, more than 30 m, needed to reach hard rock, a dam at this site was not found economical. However, an attractive alternative site is feasible upstream of Polavaram, where the river passes through the deep gorges of the Papi hill range and is only about 300 m wide in the rocky gorge stretch. Thirty years ago, this alternative was considered a technologically challenging task, as it would require connecting the reservoir to the irrigation canals via tunnels across the ghat area, and a costly underground hydroelectric station would be needed in place of a river-bed station.
When the project was actually taken up in 2004, the old finalised designs at the Polavaram site were adopted without re-examining the cost of the upstream alternative site in light of state-of-the-art construction technology for tunnels and underground hydroelectric stations. Progress up to 2012 in the construction of the dam structures and the hydroelectric station was almost nil. The alternative site in the gorge stretch is still worthy of re-examination to reduce the ever-increasing cost of the Polavaram dam.
The spillway and non-overflow dam of the Polavaram Project are founded on Khondalite bedrock. Khondalites, which are feldspar-rich, often contain soft graphite, hard garnet and other minerals; they are highly weathered and hence unsuitable at a dam site.
Technical details
The project reservoir has a live storage of 75.2 tmcft at the canals' full supply level of 41.15 m MSL and a gross storage of 194 tmcft, enabling irrigation of 23,20,000 acres (including stabilisation of existing irrigated lands) in the Krishna, West Godavari, Eluru, East Godavari, Visakhapatnam, Vizianagaram and Srikakulam districts of Andhra Pradesh. The silt-free dead storage of nearly 100 tmcft above the spillway crest level can also be used in downstream lift irrigation projects (Pattiseema lift, Tadipudi lift, Chintalapudi lift, Thorrigedda lift, Pushkara lift, Purushothapatnam lift, Venkatanagaram lift, Chagalnadu lift, etc.) and at Dowleswaram Barrage during the summer months. The Chintalapudi lift / Jalleru project will supply water to irrigate most of the highlands in the Eluru and NTR districts, including the existing command area under the Nagarjunasagar left canal in AP, freeing 40 tmcft of saved Krishna river water for diversion to Rayalaseema from Srisailam reservoir. GoAP announced the decision to construct the Purushothapatnam Lift Irrigation Scheme to transfer water at the rate of 3,500 cusecs to the Polavaram left bank canal, with further pumping at the rate of 1,400 cusecs to Yeleru reservoir to feed the Yeleru canal, which supplies water to Vizag city. The Uttarandhra Sujala Sravanthi lift irrigation scheme will also use Godavari water; a sanction of ₹2,114 crore was made in 2017 for its first phase. All the irrigated lands under these lift schemes can be supplied from the Polavaram right and left canals by gravity flow when the Polavaram reservoir level is above the canals' full supply level of 41.15 m MSL. However, these lift stations have to be operated every year during the dry season to draw water from the substantial dead storage available behind the flood gates of the Polavaram dam; they are thus not stop-gap measures pending completion of the dam, but are meant for regular operation for at least four months every year. Nearly 80 tmcft of live storage capacity available to Andhra Pradesh in the Sileru river basin can additionally augment the water available to the Polavaram project during the dry season.
The dam construction involves the building of a 1.5-m-thick concrete diaphragm wall up to depths from 40 to 120 m below the river bed under the earth dam which is the first of its kind in India. The purpose of diaphragm wall is to secure the river bed stability for withstanding the water pressure across the dam. The project would constitute an earth-cum-rock fill dam of length, spillway of with 48 vents to enable discharge of of water. The spillway is located on the right bank of the river for which nearly 5.5 km long and 1.0 km wide approach and spill channels up to river bed level is envisaged involving nearly 70 million cubic meters earth/rock excavation which is nearly 2/3rd of the project's total earthwork. The maximum flood level at Polavaram is MSL and lowest water level is MSL. Two cofferdams are planned, one up to MSL, to facilitate faster pace of work on earth-cum-rock fill dam to complete the first phase of the project by June 2018. With coffer dams inclusion and the bed level of the approach and discharge canals of the spillway increased to MSL, the spillway related rock excavation is reduced by 70% leading to substantial cost reduction in the project's head works cost. Ultimately, the material of cofferdams would be excavated and reused for the peripheral portions of the main earth-cum-rock fill dam. On the left side of the river, 12 water turbines, each having 80 Megawatt capacity, were to be installed. Without removing the upper and lower coffer dams after the construction of the ECRF dam, the 960 MW hydropower plant can not be commissioned as they are blocking the water passages of the powerhouse. The right canal connecting to Krishna River in upstream of Prakasam Barrage ( long) discharges at head works and left canal ( long) discharges of water.
Indira Dummugudem lift irrigation scheme starting at is under construction to supply irrigation water for 200,000 acres in Khammam, Krishna, Eluru and West Godavari districts drawing Godavari River water from the backwaters of Polavaram reservoir. This is a joint project between Andhra Pradesh and Telangana states. This project was shelved and merged with another project by the Telangana state.
Financing
The revised cost of the total project, including the 960 MW power station, is ₹47,726 crore at 2017-18 prices.
In December 2016, NABARD handed over ₹1,981 crores, as part of its loan from the Long Term Irrigation Fund (LTIF) under the Pradhan Mantri Krishi Sinchayee Yojana (PMKSY). NABARD provided a loan of ₹2,981.54 crores during 2016-17 and ₹979.36 crores during 2017-18 under the LTIF to the National Water Development Authority (NWDA) for the project.
In its 2018 budget, Andhra Pradesh allocated ₹9000 crores to the project. In June 2018, the Central Government approved ₹1,400 crores which had been sanctioned in January, but not released, through Extra Budgetary Resources raised by NABARD. These funds were from outside the LTIF.
In January 2018, it was reported that the project cost had escalated to ₹58,319 crore. In June 2018, The Water Ministry sanctioned ₹417.44 crore as grant-in-aid under the Accelerated Irrigation Benefit Programme under the Pradhan Mantri Krishi Sinchayi Yojana (PMKSY) towards the project. By June 2018, ₹13,000 crore had been spent on the project.
An expenditure of ₹16,035.88 crores has been incurred on the project from April 2014 to December 2022. A sum of ₹13,226.04 crores has been released by the centre for the execution of the project since April 2014. Bills amounting to ₹2,390.27 crores were rejected for reimbursement by the Polavaram Project Authority (PPA). Bills amounting to ₹548 crores have been received by the PPA for examination.
Interstate water sharing
As per the inter-state agreements dated 4 August 1978 (page 89) and 29 January 1979 (page 101) with Andhra Pradesh, the states of Karnataka and Maharashtra are entitled to use 21 tmcft and 14 tmcft respectively out of the unallocated waters of the Krishna river when the Godavari water transferred in a year by the Polavaram right bank canal from Polavaram reservoir to Prakasam barrage across the Krishna river does not exceed 80 tmcft at 75% dependability. When additional Godavari water exceeding 80 tmcft is transferred from Polavaram reservoir, Karnataka and Maharashtra are entitled to additional water from the unallocated Krishna river waters in the same proportion (i.e. 21:14:45), provided all the following conditions are satisfied.
The additional Godavari water from Polavaram project should be transferred to the Krishna river in the upstream of Prakasam barrage. Such additional Godavari water quantity to be shared is decided based on 75% dependability.
The transferred water shall also displace the water discharges from Nagarjuna Sagar Dam for the Krishna delta requirements. Krishna delta is the area located down stream of Prakasam barrage which is part of Krishna basin. It does not include adjacent coastal river basins which are being irrigated by the Krishna waters from the Prakasam barrage.
Thus Andhra Pradesh is entitled to transfer Godavari water in excess of 80 tmcft in three out of four years (below 75% dependability), reducing the water releases from Nagarjunasagar dam for the Krishna delta requirements, and need not share water with the other states beyond 80 tmcft in those years.
The above interstate water sharing agreement does not cover Godavari water transferred to the Krishna river that does not displace water releases from the Nagarjunasagar dam for the requirements of the Krishna delta. Thus Andhra Pradesh need not share with the other states water transferred via the Krishna river, or via any reservoir located on the Krishna river, for the water needs of any of its areas (including the Krishna basin) other than the Krishna delta.
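A hedged illustration of the 21:14:45 rule described above (a minimal Python sketch; it assumes the proportion applies uniformly to each tmcft transferred beyond 80 tmcft at 75% dependability, with all the stated conditions met):

```python
# Split of Godavari water transferred beyond 80 tmcft among the states,
# in the agreed proportion Karnataka : Maharashtra : Andhra Pradesh = 21:14:45.
def shares(transferred_tmcft: float) -> dict:
    surplus = max(0.0, transferred_tmcft - 80.0)
    ratio = {"Karnataka": 21, "Maharashtra": 14, "Andhra Pradesh": 45}
    total = sum(ratio.values())  # 21 + 14 + 45 = 80
    return {state: surplus * r / total for state, r in ratio.items()}

print(shares(120.0))
# 40 tmcft surplus -> Karnataka 10.5, Maharashtra 7.0, Andhra Pradesh 22.5
```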
Future potential uses
Godavari Penna River linking
To stabilize the existing irrigated ayacut under the Nagarjuna Sagar right canal, construction of a new lift project on a greenfield alignment was started in the first phase of the Godavari Penna River linking project, with five step-ladder pumping stages and gravity canals to transfer 7,000 cusecs of Godavari water from the Prakasam Barrage backwaters into the Nagarjuna Sagar right canal near Nekarikallu, utilizing 73 tmcft of Godavari water. With FRL 25 m, the newly created Vykuntapuram Barrage pond will have backwaters beyond Pokkunuru up to the toe of the Pulichinthala Project. It would be more economical to build the first-stage pump house of this lift project to lift water from the Prakasam Barrage backwaters into the Vykuntapuram Barrage pond, the second stage from the Vykuntapuram pond to the existing K. L. Rao Sagar / Pulichintala Project, and later stages from K. L. Rao Sagar to the Nagarjuna Sagar right canal. This would shorten the lift project's canal and pressure main, require fewer lift stages, and also enable lifting water up to the Srisailam Project via the existing reversible-turbine pump houses at Nagarjuna Sagar Dam and the Srisailam project. It would be more economical still to construct a new gravity canal from Ibrahimpatnam to the Vykuntapuram Barrage pond to deliver the Polavaram right main canal / Budameru diversion canal waters directly into the barrage pond, since the canal level is 33 m MSL at Ambapuram hill near Vijayawada. As the water supply from the Nagarjuna Sagar Left Bank Canal is highly erratic, the Muktyala Lift Irrigation scheme is proposed, drawing water from the backwaters of the Vykuntapuram barrage on the left bank of the Krishna river.
There is a proposal to link Nagarjuna Sagar Dam across the Krishna River and Somasila Dam across Penna River with 400 km canal as part of national river linking program. With help from the Indian Government, AP Govt can construct a new canal up to Somasila Dam as per DPR of Indian Rivers Inter-link program specifications. Thus the Godavari River water will travel up to Somasila Dam and then Swarnamukhi in Chittoor district via existing Somasila Swarnamukhi link canal. GoAP can also provide water to Tamil Nadu with this Godavari water and retained water in Krishna River (15 TMC allocation of Krishna water to Telugu Ganga) will be used for other projects in Rayalaseema region.
In the future, a massive new dam named Palnadu Sagar, across the hill range near Bollapalle, with a 700 TMC capacity reservoir, is possible using the flood waters of the Krishna and Godavari rivers diverted by this lift project. It would submerge nearly 300 sq km of land at FRL 260 m MSL. The Palnadu Sagar spillway, fitted with Francis turbines, would take in and release water into the Nagarjuna Sagar right canal along with hydroelectric power generation. Flood water of the Krishna River would be pumped up to Palnadu Sagar, and the water stored there would be used for irrigation and drinking in drought years.
Godavari Krishna River linking
The Vykuntapuram barrage would be constructed on the Krishna river upstream of Prakasam barrage, with FRL at 25 m MSL, to receive Godavari water diverted from the Polavaram dam.
A low-level lift canal from the Krishna river at 20 m MSL downstream of Pulichintala dam would be executed to feed Godavari water diverted from the Polavaram Dam to some of the existing command area (situated below 60 m MSL) under the Nagarjuna Sagar right bank canal, facilitating extension of the right bank canal to the Kandaleru feeder canal / Somasila reservoir to serve irrigation needs in the Prakasam, Potti Sriramulu Nellore and Chittoor districts, including Chennai's drinking water supply. A branch of this lift canal would also be extended up to Pulichinthala dam (FRL 53.34 m MSL) to store Godavari water in Pulichintala reservoir during drought years and to irrigate low lands along the Krishna river up to the dam.
Another high level lift canal from above Krishna river location up to 90 m MSL would be constructed to join Nagarjuna Sagar tail pond (FRL 75 m MSL) irrigating lands en route along Krishna river in Guntur district. During drought years, the water transferred by this canal to Nagarjuna Sagar tail pond is further lifted to Nagarjuna Sagar and Srisailam reservoirs with the existing pumped storage hydro units for use in all the projects receiving water from these reservoirs. This high level lift canal is an alternative to Dummugudem to Sagar lift canal planned in Telangana region which would transfer Godavari river water from Dummugudem to Nagarjuna Sagar tail pond. Ultimately the Polavaram right bank canal would be remodelled to enhance its capacity to 50,000 cusecs by raising its embankments for augmenting water transfer to meet shortages in the Krishna river basin and the needed environmental flows downstream of Prakasam barrage.
25 MW power plant
A 25 MW hydropower station can be established utilising Polavaram right bank canal water near Vijayawada city by transferring water via the Budameru river and Eluru canal to the Prakasam barrage pond. The last portion of the Polavaram right bank canal is in fact the Budameru/Velagaleru flood diversion canal, which has a flow limitation of 10,000 cusecs. The water flow from the Polavaram right bank canal to the Krishna River can thus be enhanced by constructing a 25 MW power plant.
Fresh water coastal reservoir
A fresh water coastal reservoir with a storage capacity of 1,000 tmcft (thousand million cubic feet) could be constructed along the sea coast to store Krishna and Godavari flood waters, creating additional irrigated area in the Prakasam, Potti Sriramulu Nellore, Cudapah and Chittoor districts and allowing further transfer of Godavari water to the Kavery river in Tamil Nadu under the interstate rivers linking project. This project is similar to the Kalpasar Project to store Narmada River water in the Gulf of Khambhat sea.
Fresh water coastal reservoirs can be established in the shallow sea by constructing sea dikes/bunds/causeways down to a depth of 20 metres from the coast line. Water can be pumped from this artificial freshwater lagoon throughout the year to meet agricultural and other needs, and the top surface of the dike can double as a coastal road and rail route. The proposed dikes would be similar to the land reclamation of the North Sea area called the Delta Works in the Netherlands, or the Saemangeum Seawall in South Korea. Earth bunds/dikes on a sea bed 20 metres below sea level are technically less challenging than the Saemangeum Seawall project, which has an average water depth of 36 metres.
The sea area up to 20 metres depth adjacent to the coast line, between the location where the Vashista Godavari, the right-side branch of the Godavari river, joins the sea and the mouth of the Gundlakamma River, is highly suitable for creating the freshwater coastal reservoir. The average width of the sea up to 20 m depth is nearly 16 km, and the length of the sea dikes would be nearly 200 km; the area of the coastal reservoir would be nearly 2,900 km². A barrage would be constructed across the Vashista Godavari river near Antervedi Pallipalem town, and a 1.5 km long flood canal from this barrage would feed Godavari river water to the coastal reservoir. With 70 tmcft of live storage above the full supply level of its canals and another 100 tmcft above the spillway sill level, the Polavaram reservoir will assist in moderating Godavari flood water to ensure adequate flow to the coastal reservoir.
The offshore earth dam, extending up to 8 m MSL, takes the form of two parallel dikes separated by a 1,000-metre gap. The main purpose of the twin dikes is to prevent any sea water seepage into the coastal reservoir, since its water level is below sea level. The water level between the dikes is always maintained at least 2 m above sea level by pumping fresh water from the coastal reservoir into the 1,000 m gap; this higher-level freshwater barrier eliminates seawater seepage into the coastal reservoir by establishing freshwater seepage toward the sea. The rainwater falling on the coastal reservoir area and the runoff from its catchment are adequate to cater to the seepage and evaporation losses from the reservoir. The 180 km long, 1,000 m gap between the two dikes would also be used as a deep-water mega harbour for shipping, ship breaking, ship building, etc. For shipping purposes, the outer breakwater dike facing the sea is envisaged with a few locks fitted with twin gates for access to the open sea, while the top surface of the inner dike would provide rail and road access between the mainland and the mega harbour. The coastal reservoir, whose full reservoir water level (FRL) is at 0.0 m MSL, would also drastically reduce cyclone damage and flooding in the coastal areas of the West Godavari, Krishna, Guntur and Prakasam districts, and would greatly improve the drainage of irrigated coastal land in these districts. The coastal reservoir area can also host floating solar power plants to generate the power needed for pumping. The dikes would be built by dredging sand and clay from the nearby shallow sea bed to reduce construction cost. Nearly 1,850 tmcft of Godavari and Krishna flood waters can be utilized for irrigation and other requirements with this freshwater coastal reservoir.
Vast lands in the districts of Prakasam, Nellore, Cudapah, Ananthapur and Chittoor are drought-prone and lack adequate water sources for irrigating dry lands to the extent of 10,000,000 acres. Water from this coastal reservoir would be pumped up to the Ramathirtham water tank, which is at 85 m MSL. From this tank, dry lands in the Prakasam and Nellore districts up to the Tamil Nadu border can be brought under irrigation by gravity canals. From this canal, water would be further pumped to uplands up to 600 m MSL across the Seshachalam mountains to irrigate a vast area in the Chittoor, Cudapah and Ananthapur districts. This gravity canal would also be extended to transfer 350 tmcft of water to the Kavery river in Tamil Nadu during the south-west monsoon period. The total cost to Andhra Pradesh would be less than ₹1,00,000 crore, which is nearly ₹1,00,000 per acre of newly irrigated land.
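A hedged check of the closing cost figure (taking 1 crore = 10⁷ rupees and the acreage stated above):

```python
# ~Rs 1,00,000 crore spread over ~10,000,000 acres of newly irrigated
# land works out to ~Rs 1,00,000 per acre, as stated in the text.
CRORE = 10_000_000                    # 1 crore = 1e7

total_cost_rupees = 100_000 * CRORE   # Rs 1,00,000 crore
acres = 10_000_000

print(f"cost per acre: Rs {total_cost_rupees / acres:,.0f}")  # Rs 100,000
```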
Controversies
The proposed project would displace 276 villages and 44,574 families, mainly in Andhra Pradesh; tribals constitute 50% of the displaced population. Human rights activists have come out against the project for these reasons, and one activist pointed out that the interlinking of the rivers will harm the interests of the region in the state. Environmental activist Medha Patkar said that the project will not only displace several thousand families, but will also submerge several archaeological sites, coal deposits, the wildlife sanctuary in Papikonda National Park, and several hectares of farm land.
Sixty-four years after the initial conception of the project, the Government of Andhra Pradesh secured environmental clearance from the central agency in 2005. This clearance was obtained after the state government prepared a ₹4,500 crore forest management plan and a rehabilitation and resettlement proposal covering the 59,756 hectares being lost to the project. In addition, ₹40,000 was to be allotted for each dwelling constructed for the displaced, as against the ₹25,000 provided by other states. Despite this clearance, the project faced political roadblocks: the Communist Party of India (M) and the Bharat Rashtra Samithi were troubled by the submergence of agricultural lands under the project.
Meanwhile, work on the project began in April 2006 and was expected to be completed by February 2007. After 30% of the excavation work on the canals and 15% of the spillway works had been completed, work was halted in May 2006 to seek clearance from the Ministry of Forests and Environment.
The neighbouring state of Odisha also expressed concern about the submergence of its land and decided to study the matter together with officials from Andhra Pradesh. In response, the late Y. S. Rajasekhara Reddy, then Chief Minister of Andhra Pradesh, clarified that neither Odisha nor Chhattisgarh would be affected by the construction. The problem continued into 2010, when Chief Minister of Odisha Naveen Patnaik remained steadfast in his demand for compensation and rehabilitation of the tribals of his state who would be displaced by the submergence of their land.
Odisha and Chhattisgarh have filed a petition in the Supreme Court against the project, which could temporarily submerge large areas of their states, alleging that the union government is going ahead with the project without the necessary permissions from the Environment Ministry. Under section 90 of the Andhra Pradesh Reorganisation Act, 2014, the union government has taken responsibility for obtaining all clearances and approvals for the project's execution and has declared it a national project. The states also allege that public hearings were not held in the affected areas. Under section 90(3) of the Act, the Telangana state has already given its approval to the Polavaram project in all respects.
In June 2018, it was reported that Naveen Patnaik, Chief Minister of Odisha had written to the Central Government to halt the Polavaram Project.
Interstate river water disputes
Odisha, Chhattisgarh and Andhra Pradesh entered into an agreement (clause vi of the final order, page 80 of the original GWDT) which was made part of the Godavari Water Disputes Tribunal (GWDT) award. The agreement allows Andhra Pradesh to construct the Polavaram reservoir with full reservoir level (FRL) at 150 feet above mean sea level (MSL). Odisha approached the Supreme Court over the design discharge capacity of the Polavaram dam spillway, contending that it should be designed for five million cusecs (cubic feet per second), the estimated probable maximum flood (PMF) for a 1,000-year return period; otherwise, Odisha argues, there would be additional submergence above 150 ft MSL in its territory during peak floods. The maximum flood recorded in the last 115 years is 3.0 million cusecs, in 1986.
The projected backwater level build-up at Konta due to the PMF in the Godavari river after construction of the Polavaram project, with the designed maximum water level measured at the dam, should be cross-checked against the level that can occur at Konta from the PMF generated in the Sabari basin upstream of Konta when the main Godavari downstream is not in spate. Only then can the enhanced submergence in the Odisha and Chhattisgarh states during the PMF of the Godavari river be attributed to the Polavaram dam's construction.
The Polavaram dam site is in the plain area roughly 10 km downstream of the 50-km-long narrow gorge in the Papi Hills. It also remains to be ascertained whether the higher backwater level during the PMF in the Godavari river is solely due to this long narrow gorge, which acts as a natural barrier, or is further enhanced by the presence of the downstream Polavaram barrage.
It is of purely academic interest to show concern for the few thousand hectares of farm/forest land submerged by backwater build-up once in five hundred or a thousand years (against the permitted norm of once in 25 years), while ignoring the thousands of square kilometres of land submerged downstream of the dam by a river flood of five million cusecs.
In the thirty-two years since the GWDT award in 1980, Maharashtra, Odisha and Chhattisgarh have not made serious efforts to harness the major Godavari tributaries, such as the Sabari, Indravati and Pranahita rivers, so as to utilize their allocated shares of Godavari waters. This under-utilization of water is the main reason for the very high flood flows at the Polavaram dam site. A vast area in excess of 10,000 square km up to the sea is frequently flooded (at least once in a decade) in Andhra Pradesh by Godavari flood waters originating in the Madhya Pradesh, Maharashtra, Odisha and Chhattisgarh states. The land submerged by the Polavaram dam in Odisha and Chhattisgarh is a fraction of the Andhra Pradesh area affected by Godavari floods. From 1953 to 2011, Andhra Pradesh suffered nearly ₹55,800 crore of flood damage, 26% of the total flood damage in India; it would be justified to raise the FRL of the Polavaram dam further on this ground alone. A single criterion should be applied by tribunals and courts to all submerged lands, whether the submergence is caused by reservoir construction or by river floods (i.e. by non-utilization of river water). Upstream states should not take for granted that downstream areas may be flooded by river flood water without offering agreeable relief.
Odisha and Chhattisgarh entered into an agreement (clause 3e, Annexure F, page 159 of the original GWDT) to construct a hydroelectric project at Konta/Motu, just upstream of the confluence of the Sileru tributary with the Sabari River (the tri-junction of the Andhra Pradesh, Odisha and Chhattisgarh borders). If this project were constructed, the land submergence would exceed that caused by the Polavaram backwaters. It would be better for Odisha and Chhattisgarh to enter into an agreement with Andhra Pradesh to shift this hydroelectric project further downstream into Andhra Pradesh territory, so as to harness the Sileru river water for hydroelectricity generation as well. Such a joint project of the three states would eliminate the backwater issue of the Polavaram dam.
The 200 km long stretch of the Sabari river forming the boundary between Chhattisgarh and Odisha falls by an average of 2.25 metres per km. This stretch of the river has substantial hydroelectric potential, which could be developed by building medium-head (< 20 m) barrages in series to minimize land submergence. The surplus water of the Indravati River in Odisha can also be diverted to the Sabari river, for power generation, via the Jouranala, through which Indravati flood waters naturally overflow into the Sabari basin.
Dispute with Odisha and Telangana
In July 2018, a two-member bench of the Supreme Court asked the Andhra Pradesh, Telangana, Odisha and Chhattisgarh governments to frame the issues for arguments.
Telangana told the court that it agreed in principle to the project, but that the Centre should commission a study by a neutral central institute such as CWPRS, Pune, on the impact of backwater from the increase in spillway design discharge from 36 lakh to 50 lakh cusecs, in order to ascertain the safety of the temple town of Bhadrachalam, mining areas and the heavy water plant. Odisha also insisted on backwater studies. The matter is before the Supreme Court and proceedings are ongoing; the court has identified 13 issues to settle the dispute.
See also
Musunuri Nayaks
Polavaram, Eluru district
Interstate River Water Disputes Act
Water security
Steel dam
Sriram Sagar Project
Nizam Sagar
Icchampally Project
Balimela Reservoir
Jalaput Dam
Palar River
Nagavali River
Vamsadhara River
References
Dams on the Godavari River
Dams in Andhra Pradesh
Hydroelectric power stations in Andhra Pradesh
Inter-state disputes in India
Irrigation projects
Irrigation in India
Buildings and structures in West Godavari district
Buildings and structures in East Godavari district
Godavari basin
Proposed infrastructure in Andhra Pradesh | Polavaram Project | [
"Engineering"
] | 7,539 | [
"Irrigation projects"
] |
29,472,173 | https://en.wikipedia.org/wiki/Sludge%20incineration | Sludge incineration is a sewage sludge treatment process using incineration. It generates thermal energy from sewage sludge produced in sewage treatment plants. The process is in operation in Germany, where Klärschlammverbrennung GmbH in Hamburg incinerates 1.3 million tonnes of sludge annually. The process has also been trialed in China, where it has been qualified as an environmental investment project. However, the energy balance of the process is not high, as the sludge needs drying before incineration.
Sewage sludge can be incinerated in mono-incineration or co-incineration plants. In co-incineration, sewage sludge is not the only fuel: it can be processed at coal-fired power plants, cement plants and in some waste incineration facilities.
Examples
Germany currently has around 27 mono-incineration facilities, where only sewage sludge is processed.
References
External links
Sewer Cleaning
Renewable energy
Sewerage | Sludge incineration | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 197 | [
"Sewerage",
"Water pollution",
"Environmental engineering"
] |
29,482,459 | https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20proton%20pump%20inhibitors | Proton pump inhibitors (PPIs) block the gastric hydrogen potassium ATPase (H+/K+ ATPase) and inhibit gastric acid secretion. These drugs have emerged as the treatment of choice for acid-related diseases, including gastroesophageal reflux disease (GERD) and peptic ulcer disease.
PPIs can also bind to other types of proton pumps, such as those that occur in cancer cells, and are finding applications in reducing cancer cell acid efflux and reducing chemotherapy drug resistance.
History
Evidence emerged by the end of the 1970s that the newly discovered proton pump (H+/K+ ATPase) in the secretory membrane of the parietal cell was the final step in acid secretion. Literature from anaesthetic screening drew attention to the potential antiviral compound pyridylthioacetamide, which on further examination pointed the focus to an anti-secretory compound with an unknown mechanism of action, called timoprazole. Timoprazole is a pyridylmethylsulfinyl benzimidazole and appealed due to its simple chemical structure and its surprisingly high level of anti-secretory activity.
Optimization of substituted benzimidazoles and their antisecretory effects were studied on the newly discovered proton pump to obtain higher pKa values of the pyridine, thereby facilitating accumulation within the parietal cell and increasing the rate of acid-mediated conversion to the active intermediate. As a result of such optimization the first proton pump inhibiting drug, omeprazole, was released on the market.
Other PPIs like lansoprazole and pantoprazole would follow in its footsteps, claiming their share of a flourishing market, after their own course of development.
Basic structure
PPIs can be divided into two groups based on their basic structure. Although all members have a substituted pyridine part, in one group it is linked to various benzimidazoles, whereas in the other it is linked to a substituted imidazopyridine. All marketed PPIs (omeprazole, lansoprazole, pantoprazole) are in the benzimidazole group.
Proton pump inhibitors are prodrugs, and their actual inhibitory form is somewhat controversial. It is uncertain whether, in acidic solution, the sulfenic acid reacts directly with one or more cysteines accessible from the luminal surface of the enzyme, or first cyclizes to a tetracyclic sulfenamide. The activated species is a planar molecule; thus any enantiomer of a PPI loses stereospecificity upon activation.
The effectiveness of these drugs derives from two factors. First, their target, the H+/K+ ATPase, is responsible for the last step in acid secretion; therefore, their action on acid secretion is independent of the stimulus to acid secretion, whether histamine, acetylcholine, or other yet to be discovered stimulants. Second, their mechanism of action involves covalent binding of the activated drug to the enzyme, resulting in a duration of action that exceeds their plasma half-life.
The gastric ATPase
Acid secretion by the human stomach results in a median diurnal pH of 1.4. This very large (>10⁶-fold) H+ gradient is generated by the gastric H+/K+ ATPase, which is an ATP-driven proton pump. Hydrolysis of one ATP molecule is used to catalyse the electroneutral exchange of two luminal potassium ions for two cytoplasmic protons through the gastric membrane.
Structure
The proton pump, H+/K+ ATPase, is an α,β-heterodimeric enzyme. The catalytic α subunit has ten transmembrane segments with a cluster of intramembranal carboxylic amino acids located in the middle of the transmembrane segments TM4, TM5, TM6 and TM8. The β subunit has one transmembrane segment with its N terminus in the cytoplasmic region. The extracellular domain of the β subunit contains six or seven N-linked glycosylation sites, which are important for enzyme assembly, maturation and sorting.
Function
The ion transport is accomplished by cyclical conformational changes of the enzyme between its two main reaction states, E1 and E2. The cytoplasmic-open E1 and luminal-open E2 states have high affinity for H+ and K+, respectively. The expulsion of the proton at 160 mM (pH 0.8) concentration results from movement of lysine 791 into the ion binding site in the E2P configuration.
Discovery
In the year 1975, timoprazole was found to inhibit acid secretion irrespective of stimulus, extracellular or intracellular. Studies on timoprazole revealed enlargement of the thyroid gland due to inhibition of iodine uptake, as well as atrophy of the thymus gland. A literature search showed that some substituted mercapto-benzimidazoles had no effect on iodine uptake, and introduction of such substituents into timoprazole eliminated the toxic effects without reducing the antisecretory effect. A derivative of timoprazole, omeprazole, was discovered in 1979 and was the first of a new class of drugs that control acid secretion in the stomach, the proton pump inhibitors (PPIs). A 5-methoxy substituent added to the benzimidazole moiety of omeprazole also gave the compound much greater stability at neutral pH. In 1980, an Investigational New Drug (IND) application was filed and omeprazole was taken into Phase III human trials in 1982. A new approach for the treatment of acid-related diseases was introduced, and omeprazole was quickly shown to be clinically superior to the histamine H2 receptor antagonists; it was launched in 1988 as Losec in Europe, and in 1990 as Prilosec in the United States. In 1996, Losec became the world's biggest ever selling pharmaceutical, and by 2004 over 800 million patients had been treated with the drug worldwide. During the 1980s, about 40 other companies entered the PPI area, but few achieved market success: Takeda with lansoprazole, Byk Gulden (now Nycomed) with pantoprazole, and Eisai with rabeprazole, all of which were analogues of omeprazole.
Development
Pantoprazole
The story of pantoprazole's discovery is a good example of the stepwise development of PPIs. The main focus of modification of timoprazole was the benzimidazole part of its structure. Addition of a trifluoromethyl group to the benzimidazole moiety led to a series of very active compounds with varying solution stability. In general, fluoro substituents were found to block metabolism at the point where they were attached. Later, the more balanced fluoroalkoxy substituent, instead of the highly lipophilic and strongly electron-withdrawing trifluoromethyl substituent, led to highly active compounds with supposedly longer half-lives and higher solution stability.
It was realized that activity was somehow linked to instability in solution, which led to the conclusion that the cyclic sulfenamides, formed under acidic conditions, were the active principle of the PPIs. Finally, it was understood that seemingly small alterations in the backbone of timoprazole led nowhere, and focus had to be centered on the substituents on the backbone. However, the necessary intramolecular rearrangement of the benzimidazole into the sulfenamide posed severe geometric constraints. Optimal compounds would be those that were stable at neutral pH but quickly activated at low pH.
A clear-cut design of active inhibitors was still not possible, because in the complex multi-step chemistry the influence of a substituent on each step in the cascade could be different, and therefore not predictable for the overall rate of the prerequisite acid activation. Smith Kline and French, which entered into collaboration with Byk Gulden in mid-1984, greatly assisted in determining criteria for further development. From 1985, the aim was to identify a compound with good stability at neutral pH, sustaining this higher level of stability down to pH 5 but being rapidly activatable at lower pH, combined with a high level of H+/K+ ATPase inhibition. From the numerous already synthesized and tested compounds that fulfilled these criteria, the most promising candidates were pantoprazole and its salt, pantoprazole sodium.
In 1986, pantoprazole sodium sesquihydrate was synthesized, and from 1987 onwards the development of pantoprazole switched to the sodium salt, which is more stable and has better compatibility with other excipients used in the drug formulation.
Pantoprazole was identified after nearly seven years of research and registered for clinical use after a further seven years of development, finally reaching its first market in 1994 in Germany. During the course of the studies on pantoprazole, more than 650 PPIs had been synthesized and evaluated. Pantoprazole met demanding selection criteria during its development, especially concerning its favourably low potential for interaction with other drugs. The good solubility of pantoprazole and its very high solution stability allowed it to become the first marketed PPI for intravenous use in critical care patients.
Esomeprazole
Omeprazole showed inter-individual variability, and therefore a significant number of patients with acid-related disorders required higher or multiple doses to achieve symptom relief and healing. Astra started a new research program in 1987 to identify an analogue of omeprazole with less interpatient variability. Only one compound proved superior to omeprazole: the (S)-(−)-isomer, esomeprazole, which was developed as the magnesium salt. Esomeprazole magnesium (brand name Nexium) received its first approval in 2000 and provided more pronounced inhibition of acid secretion and less inter-patient variation compared to omeprazole. In 2004, Nexium had already been used to treat over 200 million patients.
Benzimidazoles
Omeprazole (brand names Losec, Prilosec, Zegerid, Ocid, Lomac, Omepral, Omez, Ultop, Ortanol, Gastrozol)
Omeprazole was the first PPI on the market, in 1988. It is a 1:1 racemate drug with the backbone structure of timoprazole, but substituted with two methoxy and two methyl groups. One of the methoxy groups is at position 6 of the benzimidazole and the other at position 4 of the pyridine; the methyl groups are at positions 3 and 5 of the pyridine.
Omeprazole is available as enteric-coated tablets, capsules, chewable tablets, powder for oral suspensions and powder for intravenous injection.
Lansoprazole (brand names: Prevacid, Zoton, Inhibitol, Levant, Lupizole, Lancid, Lansoptol, Epicur)
Lansoprazole was the second of the PPI drugs to reach the market, being launched in Europe in 1991 and the US in 1995.
It has no substitutions on the benzimidazole but two substituents on the pyridine: a methyl group at position 3 and a trifluoroethoxy group at position 4. The drug is a 1:1 racemate of the enantiomers dexlansoprazole and levolansoprazole. It is available in gastroresistant capsules and tablets as well as chewable tablets.
Pantoprazole (brand names: Protonix, Somac, Pantoloc, Pantozol, Zurcal, Zentro, Pan, Nolpaza, Controloc, Sunpras)
Pantoprazole was the third PPI and was introduced to the German market in 1994.
It has a difluoroalkoxy sidegroup on the benzimidazole part and two methoxy groups in position 3 and 4 on the pyridine.
Pantoprazole was first prepared in April 1985 by a small group of scale-up chemists. It is a dimethoxy-substituted pyridine bound to a fluoroalkoxy substituted benzimidazole.
Pantoprazole sodium is available as gastroresistant or delayed release tablets and as lyophilized powder for intravenous use.
Rabeprazole (brand names: Zechin, Rabecid, Nzole-D, AcipHex, Pariet, Rabeloc, Zulbex, Ontime, Noflux)
Rabeprazole is a benzimidazole compound, on the market in the USA since 1999. It is similar to lansoprazole in having no substituents on its benzimidazole part and a methyl group at position 3 on the pyridine; the only difference is the methoxypropoxy substituent at position 4 instead of the trifluoroethoxy group of lansoprazole.
Rabeprazole is marketed as rabeprazole sodium salt. It is available as enteric-coated tablets.
Esomeprazole (brand names: Nexium, Esotrex, Emanera, Neo-Zext)
In 2001, esomeprazole was launched in the USA as a follow-up to omeprazole, whose patent was expiring.
Esomeprazole is the (S)-(−)-enantiomer of omeprazole and provides higher bioavailability and improved efficacy, in terms of stomach acid control, over the (R)-(+)-enantiomer of omeprazole. In theory, by using pure esomeprazole the effects on the proton pump will be equal in all patients, eliminating the "poor metabolizer effect" of the racemate omeprazole. It is available as delayed-release capsules or tablets and as esomeprazole sodium for intravenous injection/infusion. Oral esomeprazole preparations are enteric-coated, due to the rapid degradation of the drug in the acidic condition of the stomach. This is achieved by formulating capsules using the multiple-unit pellet system.
Although the (S)-(−)-isomer is more potent in humans, the (R)-(+)-isomer is more potent in tests on rats, while the enantiomers are equipotent in dogs.
Dexlansoprazole (brand names: Kapidex, Dexilant)
Dexlansoprazole was launched as a follow up of lansoprazole in 2009.
Dexlansoprazole is the (R)-(+)-enantiomer of lansoprazole, marketed as Dexilant. After oral administration of racemic lansoprazole, 80% of the circulating drug is dexlansoprazole. Moreover, both enantiomers have similar effects on the proton pump. Consequently, the main advantage of Dexilant is not the fact that it is an enantiopure substance, but the pharmaceutical formulation of the drug, which is based on a dual-release technology: the first, quick release produces a blood plasma peak concentration about one hour after application, and the second, retarded release produces another peak about four hours later.
Imidazopyridines
Tenatoprazole
Tenatoprazole (TU-199), an imidazopyridine proton pump inhibitor, is a novel compound that has been designed as a new chemical entity with a substantially prolonged plasma half-life (7 hours), but otherwise has similar activity as other PPIs.
The difference in the structural backbone of tenatoprazole compared to benzimidazole PPIs is its imidazo[4,5-b]pyridine moiety, which reduces the rate of metabolism, allowing a longer plasma residence time, but also decreases the pKa of the fused imidazole N compared to the current PPIs. Tenatoprazole has the same substituents as omeprazole: methoxy groups at position 6 on the imidazopyridine and at position 4 on the pyridine, as well as two methyl groups at positions 3 and 5 on the pyridine.
The bioavailability of tenatoprazole in dogs is double for the (S)-(−)-tenatoprazole sodium salt hydrate form compared to the free form. This increased bioavailability is due to differences in the crystal structure and hydrophobic nature of the two forms, and therefore it is more likely to be marketed as the pure (S)-(−)-enantiomer.
PPIs binding mode
The disulfide binding of the inhibitor takes place in the luminal sector of the H+/K+ ATPase, where 2 mol of inhibitor are bound per 1 mol of active-site H+/K+ ATPase.
All PPIs react with cysteine 813 in the loop between TM5 and TM6 on the H+/K+ ATPase, fixing the enzyme in the E2 configuration. Omeprazole reacts with cysteine 813 and 892. Rabeprazole binds to cysteine 813 and both 892 and 321. Lansoprazole reacts with cysteine 813 and cysteine 321, whereas pantoprazole and tenatoprazole react with cysteine 813 and 822.
Reaction with cysteine 822 confers a rather special property on the covalently inhibited enzyme, namely irreversibility to reducing agents. The likely first step is binding of the prodrug, protonated on its pyridine, to cysteine 813. Then the second proton is added with acid transport by the H+/K+ ATPase, and the compound is activated. Recent data suggest the hydrated sulfenic acid to be the reactive species, forming directly from the mono-protonated benzimidazole bound on the surface of the pump.
Saturation of the gastric ATPase
Even though consumption of food stimulates acid secretion and acid secretion activates PPIs, PPIs cannot inhibit all pumps. About 70% of pump enzyme is inhibited, as PPIs have a short half-life and not all pump enzymes are activated. It takes about 3 days to reach steady-state inhibition of acid secretion, as a balance is struck between covalent inhibition of active pumps, subsequent stimulation of inactive pumps after the drug has been eliminated from the blood, and de novo synthesis of new pumps.
Clinical pharmacology
Although the drugs omeprazole, lansoprazole, pantoprazole, and rabeprazole share a common structure and mode of action, each differs somewhat in its clinical pharmacology.
Differing pyridine and benzimidazole substituents result in small, but potentially significant different physical and chemical properties.
Direct comparison of pantoprazole sodium with other anti-secretory drugs showed that it was significantly more effective than H2-receptor antagonists and either equivalent or better than other clinically used PPIs. Another study states rabeprazole undergoes activation over a greater pH range than omeprazole, lansoprazole, and pantoprazole, and converts to the sulphenamide form more rapidly than any of these three drugs.
Most oral PPI preparations are enteric-coated, due to the rapid degradation of the drugs in the acidic conditions of the stomach. For example, omeprazole is unstable in acid with a half-life of 2 min at pH 1–3, but is significantly more stable at pH 7 (half-life ca. 20 h).
The acid-protective coating prevents conversion to the active principle in the lumen of the stomach, where it would react with any available sulfhydryl group in food and fail to penetrate to the lumen of the secretory canaliculus.
The oral bioavailability of PPIs is high: 77% for pantoprazole, 80–90% for lansoprazole and 89% for esomeprazole. All the PPIs except tenatoprazole are rapidly metabolized in the liver by CYP enzymes, mostly by CYP2C19 and CYP3A4. PPIs are sensitive to CYP enzymes and have different pharmacokinetic profiles. Studies comparing the efficacy of PPIs indicate that esomeprazole and tenatoprazole provide stronger acid suppression, maintaining intragastric pH > 4 for a longer period.
Studies of the effect of tenatoprazole on acid secretion in in vivo animal models, such as pylorus-ligated rats and acute gastric fistula rats, demonstrated a 2- to 4-fold more potent inhibitory activity compared with omeprazole. A more potent inhibitory activity was also shown in several models of induced gastric lesions. In Asian as well as Caucasian healthy subjects, tenatoprazole exhibited a seven-fold longer half-life than the existing H+/K+ ATPase inhibitors. It is thus hypothesized that a longer half-life results in a more prolonged inhibition of gastric acid secretion, especially during the night.
A strong relationship has been stated between the degree and duration of gastric acid inhibition, as measured by monitoring of the 24-hour intragastric pH in pharmacodynamic studies, and the rate of healing and symptom relief reported.
A clinical study showed that the nocturnal acid breakthrough duration was significantly shorter for 40 mg of tenatoprazole than for 40 mg of esomeprazole, with the conclusion that tenatoprazole was significantly more potent than esomeprazole during the night. However, the therapeutic relevance of this pharmacological advantage deserves further study.
PPIs have been used successfully in triple-therapy regimens with clarithromycin and amoxicillin for the eradication of Helicobacter pylori, with no significant difference between different PPI-based regimens.
Future research and new generations of PPIs
Potassium-competitive acid blockers or acid pump antagonists
Although PPIs have revolutionized the treatment of GERD, there is still room for improvement in the speed of onset of acid suppression, in a mode of action independent of an acidic environment, and in better inhibition of the proton pump. Therefore, a new class of agents, potassium-competitive acid blockers (P-CABs) or acid pump antagonists (APAs), has been under development in recent years and will most likely be the next generation of drugs that suppress gastric acid. These new agents inhibit, in a reversible and competitive fashion, the final step in gastric acid secretion with respect to K+ binding to the parietal cell gastric H+/K+ ATPase. That is, they block the action of the H+/K+ ATPase by binding to or near the site of the K+ channel. Since the binding is competitive and reversible, these agents have the potential to achieve faster inhibition of acid secretion and a longer duration of action compared to PPIs, resulting in quicker symptom relief and healing. The imidazopyridine-based compound SCH28080 was the prototype of this class, but turned out to be hepatotoxic. Newer agents currently in development include CS-526, linaprazan, soraprazan and revaprazan, of which the last has reached clinical trials. Further studies will determine whether these or other related compounds can become useful. In June 2006, Yuhan obtained approval from the Korean FDA for the use of revaprazan (brand name Revanex) in the treatment of gastritis.
Vonoprazan is a newer agent with a faster and longer lasting action, first marketed in Japan, then in Russia, and in 2023 was approved for use in the US. It is still being trialed in the UK.
See also
Digestion
Stomach
Gastric acid
Gastroesophageal reflux disease
Hydrogen potassium ATPase
Proton pump inhibitor
References
Proton-pump inhibitors
Gastroenterology
Proton-Pump Inhibitors, Discovery And Development Of | Discovery and development of proton pump inhibitors | [
"Chemistry",
"Biology"
] | 5,147 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
44,700,904 | https://en.wikipedia.org/wiki/Anton%20blood%20group%20antigen | The Anton blood group antigen is a cell surface receptor found on some human red blood cells. It has been observed to play a role in Haemophilus influenzae infections: studies have shown that the bacterium can adhere to this receptor and cause human red blood cells to agglutinate.
In 1985, this antigen was found to be the same as another called Wj, so it is usually referred to as AnWj. In 1991, a study of a family with the trait showed that the phenotype indeed had an inherited character independent of other human blood group systems, and in 2024 it was found that the inherited AnWj-negative blood group phenotype is caused by homozygosity for a deletion in the MAL gene, which encodes the myelin and lymphocyte (MAL) protein.
References
Transmembrane receptors
Blood cells
Microbiology | Anton blood group antigen | [
"Chemistry",
"Biology"
] | 181 | [
"Signal transduction",
"Transmembrane receptors",
"Microbiology",
"Microscopy"
] |
28,111,101 | https://en.wikipedia.org/wiki/Lov%C3%A1sz%20number | In graph theory, the Lovász number of a graph is a real number that is an upper bound on the Shannon capacity of the graph. It is also known as the Lovász theta function and is commonly denoted by ϑ(G), using a script form of the Greek letter theta to contrast with the upright theta used for Shannon capacity. This quantity was first introduced by László Lovász in his 1979 paper On the Shannon Capacity of a Graph.
Accurate numerical approximations to this number can be computed in polynomial time by semidefinite programming and the ellipsoid method.
The Lovász number of the complement of any graph is sandwiched between the chromatic number and clique number of the graph, and can be used to compute these numbers on graphs for which they are equal, including perfect graphs.
Definition
Let G be a graph on n vertices. An ordered set of unit vectors U = (u_1, ..., u_n) in R^N is called an orthonormal representation of G in R^N if u_i and u_j are orthogonal whenever vertices i and j are not adjacent in G:
u_i^T u_j = 0 whenever ij is not an edge of G.
Clearly, every graph admits an orthonormal representation with N = n: just represent vertices by distinct vectors from the standard basis of R^n. Depending on the graph, it might be possible to take N considerably smaller than the number of vertices n.
The Lovász number ϑ(G) of graph G is defined as follows:
ϑ(G) = min_{c,U} max_i 1/(c^T u_i)²,
where c is a unit vector in R^N and U is an orthonormal representation of G in R^N. Here the minimization is implicitly performed also over the dimension N; however, without loss of generality it suffices to consider N = n. Intuitively, this corresponds to minimizing the half-angle of a rotational cone containing all representing vectors of an orthonormal representation of G. If the optimal angle is φ, then ϑ(G) = 1/cos²(φ) and c corresponds to the symmetry axis of the cone.
Equivalent expressions
Let G be a graph on n vertices. Let A range over all n × n symmetric matrices such that a_ij = 1 whenever i = j or vertices i and j are not adjacent, and let λ_max(A) denote the largest eigenvalue of A. Then an alternative way of computing the Lovász number of G is as follows:
ϑ(G) = min_A λ_max(A).
The following method is dual to the previous one. Let B range over all n × n symmetric positive semidefinite matrices such that b_ij = 0 whenever vertices i and j are adjacent, and such that the trace (sum of diagonal entries) of B is 1. Let J be the n × n matrix of ones. Then
ϑ(G) = max_B Tr(BJ).
Here, Tr(BJ) is just the sum of all entries of B.
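The dual formulation above translates directly into code. Below is a minimal sketch using the cvxpy library (an assumption; any semidefinite-programming package would serve), with the function name lovasz_number and the graph encoding chosen purely for illustration:

import cvxpy as cp

def lovasz_number(n, edges):
    # Maximize Tr(B J), the sum of all entries of B, subject to
    # B being symmetric PSD with trace 1 and b_ij = 0 on every edge.
    B = cp.Variable((n, n), symmetric=True)
    constraints = [B >> 0, cp.trace(B) == 1]
    constraints += [B[i, j] == 0 for (i, j) in edges]
    problem = cp.Problem(cp.Maximize(cp.sum(B)), constraints)
    problem.solve()
    return problem.value

# The 5-cycle: the value should come out close to sqrt(5) = 2.2360...
print(lovasz_number(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))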
The Lovász number can be computed also in terms of the complement graph Ḡ. Let d be a unit vector and V = (v_1, ..., v_n) be an orthonormal representation of Ḡ. Then
ϑ(G) = max_{d,V} Σ_i (d^T v_i)².
Value for well-known graphs
The Lovász number has been computed for a number of well-known graphs, including complete graphs, cycle graphs, the Petersen graph and Kneser graphs.
Properties
If G ⊠ H denotes the strong graph product of graphs G and H, then
ϑ(G ⊠ H) = ϑ(G) ϑ(H).
If Ḡ is the complement of G, then
ϑ(G) ϑ(Ḡ) ≥ n,
with equality if G is vertex-transitive.
Lovász "sandwich theorem"
The Lovász "sandwich theorem" states that the Lovász number always lies between two other numbers that are NP-complete to compute. More precisely,
where is the clique number of (the size of the largest clique) and is the chromatic number of (the smallest number of colors needed to color the vertices of so that no two adjacent vertices receive the same color).
The value of ϑ(Ḡ) can be formulated as a semidefinite program and numerically approximated by the ellipsoid method in time bounded by a polynomial in the number of vertices of G.
For perfect graphs, the chromatic number and clique number are equal, and therefore both equal ϑ(Ḡ). By computing an approximation of ϑ(Ḡ) and then rounding to the nearest integer value, the chromatic number and clique number of these graphs can be computed in polynomial time.
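Continuing the sketch above (reusing the illustrative lovasz_number function), the sandwich can be observed numerically on the 5-cycle, which is self-complementary: ϑ(C̄5) = √5 ≈ 2.236 lies strictly between the clique number 2 and the chromatic number 3, so C5 is not perfect and the value does not round to both bounds here.

# Complement of the 5-cycle (its edges are the five "diagonals"):
complement_c5 = [(0, 2), (2, 4), (4, 1), (1, 3), (3, 0)]
print(lovasz_number(5, complement_c5))  # ~2.236: between omega = 2 and chi = 3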
Relation to Shannon capacity
The Shannon capacity of graph G is defined as follows:
Θ(G) = sup_k α(G^k)^(1/k) = lim_{k→∞} α(G^k)^(1/k),
where α(G) is the independence number of graph G (the size of a largest independent set of G) and G^k is the strong graph product of G with itself k times. Clearly, Θ(G) ≥ α(G). However, the Lovász number provides an upper bound on the Shannon capacity of a graph, hence
α(G) ≤ Θ(G) ≤ ϑ(G).
For example, let the confusability graph of the channel be C5, a pentagon. Since the original paper of Shannon (1956), it was an open problem to determine the value of Θ(C5). It was first established by Lovász (1979) that Θ(C5) = √5.
Clearly, Θ(C5) ≥ α(C5) = 2. However, α(C5 ⊠ C5) ≥ 5, since "11", "23", "35", "54", "42" are five mutually non-confusable messages (forming a five-vertex independent set in the strong square of C5); thus Θ(C5) ≥ √5.
To show that this bound is tight, let U = (u_1, ..., u_5) be the following orthonormal representation of the pentagon (the "Lovász umbrella"):
u_k = ( sin α cos(2πk/5), sin α sin(2πk/5), cos α ), with cos²α = 1/√5,
and let c = (0, 0, 1). By using this choice in the initial definition of the Lovász number, we get ϑ(C5) ≤ max_k 1/(c^T u_k)² = √5. Hence, Θ(C5) = √5.
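The umbrella construction can be verified numerically; the sketch below (plain numpy, with the polar angle alpha defined by cos²(alpha) = 1/√5 as above) checks that non-adjacent ribs are orthogonal and that every rib gives 1/(c^T u_k)² = √5.

import numpy as np

alpha = np.arccos(5 ** -0.25)  # cos(alpha)^2 = 1/sqrt(5)
u = np.array([[np.sin(alpha) * np.cos(2 * np.pi * k / 5),
               np.sin(alpha) * np.sin(2 * np.pi * k / 5),
               np.cos(alpha)] for k in range(5)])
c = np.array([0.0, 0.0, 1.0])
print(np.round(u @ u.T, 6))  # zeros exactly at the non-adjacent pairs of C5
print(1.0 / (u @ c) ** 2)    # five copies of sqrt(5) = 2.2360679...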
However, there exist graphs for which the Lovász number and Shannon capacity differ, so the Lovász number cannot in general be used to compute exact Shannon capacities.
Quantum physics
The Lovász number has been generalized for "non-commutative graphs" in the context of quantum communication. The Lovász number also arises in quantum contextuality in an attempt to explain the power of quantum computers.
See also
Tardos function, a monotone approximation to the Lovász number of the complement graph that can be computed in polynomial time and has been used to prove lower bounds in monotone circuit complexity.
Notes
References
External links
Graph invariants
Information theory | Lovász number | [
"Mathematics",
"Technology",
"Engineering"
] | 1,059 | [
"Telecommunications engineering",
"Applied mathematics",
"Graph theory",
"Graph invariants",
"Information theory",
"Mathematical relations",
"Computer science"
] |
28,113,360 | https://en.wikipedia.org/wiki/Classical%20central-force%20problem | In classical mechanics, the central-force problem is to determine the motion of a particle in a single central potential field. A central force is a force (possibly negative) that points from the particle directly towards a fixed point in space, the center, and whose magnitude only depends on the distance of the object to the center. In a few important cases, the problem can be solved analytically, i.e., in terms of well-studied functions such as trigonometric functions.
The solution of this problem is important to classical mechanics, since many naturally occurring forces are central. Examples include gravity and electromagnetism as described by Newton's law of universal gravitation and Coulomb's law, respectively. The problem is also important because some more complicated problems in classical physics (such as the two-body problem with forces along the line connecting the two bodies) can be reduced to a central-force problem. Finally, the solution to the central-force problem often makes a good initial approximation of the true motion, as in calculating the motion of the planets in the Solar System.
Basics
The essence of the central-force problem is to solve for the position r of a particle moving under the influence of a central force F, either as a function of time t or as a function of the angle φ relative to the center of force and an arbitrary axis.
Definition of a central force
A conservative central force F has two defining properties. First, it must drive particles either directly towards or directly away from a fixed point in space, the center of force, which is often labeled O. In other words, a central force must act along the line joining O with the present position of the particle. Second, a conservative central force depends only on the distance r between O and the moving particle; it does not depend explicitly on time or other descriptors of position.
This two-fold definition may be expressed mathematically as follows. The center of force O can be chosen as the origin of a coordinate system. The vector r joining O to the present position of the particle is known as the position vector. Therefore, a central force must have the mathematical form
F = F(r) r̂,
where r is the vector magnitude |r| (the distance to the center of force) and r̂ = r/r is the corresponding unit vector. According to Newton's second law of motion, the central force F generates a parallel acceleration a scaled by the mass m of the particle
F = F(r) r̂ = m a.
For attractive forces, F(r) is negative, because it works to reduce the distance r to the center. Conversely, for repulsive forces, F(r) is positive.
Potential energy
If the central force is a conservative force, then the magnitude F(r) of a central force can always be expressed as the derivative of a time-independent potential energy function U(r)
F(r) = −dU/dr.
Thus, the total energy of the particle—the sum of its kinetic energy and its potential energy U—is a constant; energy is said to be conserved. To show this, it suffices that the work W done by the force depends only on initial and final positions, not on the path taken between them.
Equivalently, it suffices that the curl of the force field F is zero; using the formula for the curl in spherical coordinates,
∇ × F = (1/(r sin θ)) (∂F/∂φ) θ̂ − (1/r) (∂F/∂θ) φ̂ = 0,
because the partial derivatives are zero for a central force; the magnitude F does not depend on the angular spherical coordinates θ and φ.
Since the scalar potential V(r) depends only on the distance r to the origin, it has spherical symmetry. In this respect, the central-force problem is analogous to the Schwarzschild geodesics in general relativity and to the quantum mechanical treatments of particles in potentials of spherical symmetry.
One-dimensional problem
If the initial velocity v of the particle is aligned with position vector r, then the motion remains forever on the line defined by r. This follows because the force, and by Newton's second law also the acceleration a, is likewise aligned with r. To determine this motion, it suffices to solve the equation
m (d²r/dt²) = F(r).
One solution method is to use the conservation of total energy
(m/2) (dr/dt)² + U(r) = Etot.
Taking the reciprocal and integrating we get:
|t − t0| = √(m/2) ∫ dr / √( Etot − U(r) ).
For the remainder of the article, it is assumed that the initial velocity v of the particle is not aligned with position vector r, i.e., that the angular momentum vector L = r × m v is not zero.
Uniform circular motion
Every central force can produce uniform circular motion, provided that the initial radius r and speed v satisfy the equation for the centripetal force
F(r) = −m v²/r.
If this equation is satisfied at the initial moments, it will be satisfied at all later times; the particle will continue to move in a circle of radius r at speed v forever.
Relation to the classical two-body problem
The central-force problem concerns an ideal situation (a "one-body problem") in which a single particle is attracted or repelled from an immovable point O, the center of force. However, physical forces are generally between two bodies; and by Newton's third law, if the first body applies a force on the second, the second body applies an equal and opposite force on the first. Therefore, both bodies are accelerated if a force is present between them; there is no perfectly immovable center of force. However, if one body is overwhelmingly more massive than the other, its acceleration relative to the other may be neglected; the center of the more massive body may be treated as approximately fixed. For example, the Sun is overwhelmingly more massive than the planet Mercury; hence, the Sun may be approximated as an immovable center of force, reducing the problem to the motion of Mercury in response to the force applied by the Sun. In reality, however, the Sun also moves (albeit only slightly) in response to the force applied by the planet Mercury.
Such approximations are unnecessary, however. Newton's laws of motion allow any classical two-body problem to be converted into a corresponding exact one-body problem. To demonstrate this, let x1 and x2 be the positions of the two particles, and let r = x1 − x2 be their relative position. Then, by Newton's second law,
d²r/dt² = d²x1/dt² − d²x2/dt² = (F21/m1) − (F12/m2) = (1/m1 + 1/m2) F21.
The final equation derives from Newton's third law: the force of the second body on the first body (F21) is equal and opposite to the force of the first body on the second (F12). Thus, the equation of motion for r can be written in the form
μ (d²r/dt²) = F21,
where μ is the reduced mass
μ = m1 m2 / (m1 + m2).
As a special case, the problem of two bodies interacting by a central force can be reduced to a central-force problem of one body.
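As a small numeric illustration (the masses are rounded, illustrative values), the reduced mass of the Sun–Mercury pair mentioned above is almost exactly Mercury's mass, which is why treating the Sun as a fixed center of force is such a good approximation:

m_sun, m_mercury = 1.989e30, 3.301e23         # approximate masses in kg
mu = m_sun * m_mercury / (m_sun + m_mercury)  # reduced mass
print(mu / m_mercury)  # ~0.99999983: mu differs from Mercury's mass by ~2e-7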
Qualitative properties
Planar motion
The motion of a particle under a central force F always remains in the plane defined by its initial position and velocity. This may be seen by symmetry. Since the position r, velocity v and force F all lie in the same plane, there is never an acceleration perpendicular to that plane, because that would break the symmetry between "above" the plane and "below" the plane.
To demonstrate this mathematically, it suffices to show that the angular momentum of the particle is constant. This angular momentum L is defined by the equation
L = r × p = r × m v,
where m is the mass of the particle and p is its linear momentum. In this equation the times symbol × indicates the vector cross product, not multiplication. Therefore, the angular momentum vector L is always perpendicular to the plane defined by the particle's position vector r and velocity vector v.
In general, the rate of change of the angular momentum L equals the net torque r × F
dL/dt = m (dr/dt) × v + m r × (dv/dt) = m v × v + r × F.
The first term m v × v is always zero, because the vector cross product is always zero for any two vectors pointing in the same or opposite directions. However, when F is a central force, the remaining term r × F is also zero because the vectors r and F point in the same or opposite directions. Therefore, the angular momentum vector L is constant. Then
r · L = r · (r × p) = 0.
Consequently, the particle's position r (and hence velocity v) always lies in a plane perpendicular to L.
Polar coordinates
Since the motion is planar and the force radial, it is customary to switch to polar coordinates. In these coordinates, the position vector r is represented in terms of the radial distance r and the azimuthal angle φ
r = (x, y) = r (cos φ, sin φ).
Taking the first derivative with respect to time yields the particle's velocity vector v
v = dr/dt = (dr/dt) (cos φ, sin φ) + r (dφ/dt) (−sin φ, cos φ).
Similarly, the second derivative of the particle's position r equals its acceleration a
a = [ d²r/dt² − r (dφ/dt)² ] (cos φ, sin φ) + [ r (d²φ/dt²) + 2 (dr/dt)(dφ/dt) ] (−sin φ, cos φ).
The velocity v and acceleration a can be expressed in terms of the radial and azimuthal unit vectors. The radial unit vector is obtained by dividing the position vector r by its magnitude r, as described above
r̂ = (cos φ, sin φ).
The azimuthal unit vector is given by
φ̂ = (−sin φ, cos φ).
Thus, the velocity can be written as
v = (dr/dt) r̂ + r (dφ/dt) φ̂,
whereas the acceleration equals
a = [ d²r/dt² − r (dφ/dt)² ] r̂ + [ r (d²φ/dt²) + 2 (dr/dt)(dφ/dt) ] φ̂.
Specific angular momentum
Since F = ma by Newton's second law of motion and since F is a central force, only the radial component of the acceleration a can be non-zero; the angular component aφ must be zero
aφ = r (d²φ/dt²) + 2 (dr/dt)(dφ/dt) = 0.
Therefore,
(d/dt) [ r² (dφ/dt) ] = 0.
The expression in parentheses is usually denoted h
h = r² (dφ/dt) = r vφ,
which equals the speed v times r⊥, the component of the radius vector perpendicular to the velocity. h is the magnitude of the specific angular momentum because it equals the magnitude L of the angular momentum divided by the mass m of the particle.
For brevity, the angular speed is sometimes written ω
ω = dφ/dt.
However, it should not be assumed that ω is constant. Since h is constant, ω varies with the radius r according to the formula
ω = h/r².
Since h is constant and r2 is positive, the angle φ changes monotonically in any central-force problem, either continuously increasing (h positive) or continuously decreasing (h negative).
Constant areal velocity
The magnitude of h also equals twice the areal velocity, which is the rate at which area is being swept out by the particle relative to the center. Thus, the areal velocity is constant for a particle acted upon by any type of central force; this is Kepler's second law. Conversely, if the motion under a conservative force F is planar and has constant areal velocity for all initial conditions of the radius r and velocity v, then the azimuthal acceleration aφ is always zero. Hence, by Newton's second law, F = ma, the force is a central force.
The constancy of areal velocity may be illustrated by uniform circular and linear motion. In uniform circular motion, the particle moves with constant speed v around the circumference of a circle of radius r. Since the angular velocity ω = v/r is constant, the area swept out in a time Δt equals ½ωr²Δt; hence, equal areas are swept out in equal times Δt. In uniform linear motion (i.e., motion in the absence of a force, by Newton's first law of motion), the particle moves with constant velocity, that is, with constant speed v along a line. In a time Δt, the particle sweeps out an area ½vΔtr⊥. The distance r⊥ does not change as the particle moves along the line; it represents the distance of closest approach of the line to the center O (the impact parameter). Since the speed v is likewise unchanging, the areal velocity ½vr⊥ is a constant of motion; the particle sweeps out equal areas in equal times.
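The constancy of the areal velocity can also be checked numerically. The following sketch (assuming SciPy's solve_ivp integrator and illustrative initial conditions) integrates planar motion in an attractive inverse-square field and samples h = x vy − y vx, twice the areal velocity, along the trajectory:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s, k=1.0):
    # Attractive inverse-square central force, F(r) = -k/r^2.
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -k * x / r3, -k * y / r3]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0, 0.8],
                rtol=1e-10, atol=1e-12, dense_output=True)
x, y, vx, vy = sol.sol(np.linspace(0.0, 20.0, 5))
print(x * vy - y * vx)  # h stays at its initial value 0.8 at every sample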
Equivalent parallel force field
By a transformation of variables, any central-force problem can be converted into an equivalent parallel-force problem. In place of the ordinary x and y Cartesian coordinates, two new position variables ξ = x/y and η = 1/y are defined, as is a new time coordinate τ
The corresponding equations of motion for ξ and η are given by
Since the rate of change of ξ is constant, its second derivative is zero
Since this is the acceleration in the ξ direction and since F=ma by Newton's second law, it follows that the force in the ξ direction is zero. Hence the force is only along the η direction, which is the criterion for a parallel-force problem. Explicitly, the acceleration in the η direction equals
because the acceleration in the y-direction equals
Here, Fy denotes the y-component of the central force, and y/r equals the cosine of the angle between the y-axis and the radial vector r.
General solution
Binet equation
Since a central force F acts only along the radius, only the radial component of the acceleration is nonzero. By Newton's second law of motion, the magnitude of F equals the mass m of the particle times the magnitude of its radial acceleration
F(r) = m (d²r/dt²) − m r ω² = m (d²r/dt²) − m h²/r³.
This equation has integration factor dr/dt
F(r) dr = F(r) (dr/dt) dt = (m/2) d[ (dr/dt)² + (h/r)² ].
Integrating yields
(m/2) [ (dr/dt)² + (h/r)² ] = Etot − U(r),
where the constant of integration Etot is the total energy.
If h is not zero, the independent variable can be changed from t to φ
d/dt = (h/r²) d/dφ,
giving the new equation of motion
(h²/r⁴) (dr/dφ)² + (h/r)² = (2/m) [ Etot − U(r) ].
Making the change of variables to the inverse radius u = 1/r yields
(du/dφ)² = C − u² − G(u),
where C = 2Etot/(m h²) = 2mEtot/L² is a constant of integration and the function G(u) is defined by
G(u) = 2U(1/u)/(m h²) = 2mU(1/u)/L².
This equation becomes quasilinear on differentiating by φ
d²u/dφ² + u = −F(1/u) / (m h² u²).
This is known as the Binet equation. Integrating yields the solution for φ
φ = φ0 + ∫ du / √( C − u² − G(u) ),
where ϕ0 is another constant of integration. A central-force problem is said to be "integrable" if this final integration can be solved in terms of known functions.
Orbit of the particle
Take the scalar product of Newton's second law of motion with the particle's velocity, where the force is obtained from the potential energy,
m (dv/dt) · v = −(∇U) · v,
which gives
(d/dt) ( ½ m v² ) = −dU/dt,
where summation is assumed over the spatial Cartesian index, and we have used the fact that v · (dv/dt) = ½ d(v · v)/dt and used the chain rule
dU/dt = Σ_i (∂U/∂x_i)(dx_i/dt) = (∇U) · v.
Rearranging,
(d/dt) ( ½ m v² + U ) = 0.
The term in parentheses on the left-hand side is a constant; label it Etot, the total mechanical energy. Clearly, this is the sum of the kinetic energy and the potential energy.
Furthermore, if the potential is central, U = U(r), then the force is along the radial direction. In this case, the cross product of the particle's position vector with Newton's second law of motion must vanish, since the cross product of two parallel vectors is zero:
r × m (dv/dt) = r × F = 0,
but (d/dt)(r × m v) = v × m v + r × m (dv/dt) = r × m (dv/dt) (the cross product of parallel vectors is zero), so
(d/dt) ( m r × v ) = 0.
The term in parentheses is a constant; label it the angular momentum,
L = m r × v.
In particular, in polar coordinates, L = m r² (dφ/dt), or
dφ/dt = L/(m r²).
Further, v² = (dr/dt)² + r² (dφ/dt)², so the energy equation may be simplified with the angular momentum as
Etot = ½ m (dr/dt)² + L²/(2 m r²) + U(r).
This indicates that the angular momentum contributes an effective potential energy
Ueff(r) = U(r) + L²/(2 m r²).
Solving this equation for dr/dt gives
dr/dt = ±√( (2/m)(Etot − U(r)) − L²/(m² r²) ),
which may be converted to the derivative of r with respect to the azimuthal angle φ as
dr/dφ = (dr/dt)/(dφ/dt) = (m r²/L)(dr/dt).
This is a separable first-order differential equation. Integrating it yields the formula
φ = φ0 + ∫ (L/r²) dr / √( 2m (Etot − U(r)) − L²/r² ).
Changing the variable of integration to the inverse radius u = 1/r yields the integral
φ = φ0 + ∫ du / √( C − u² − G(u) ),
which expresses the constants C = 2mEtot/L² and G(u) = 2mU(1/u)/L² above in terms of the total energy Etot and the potential energy U(r).
Turning points and closed orbits
The rate of change of r is zero whenever the effective potential energy equals the total energy
Etot = U(r) + L²/(2 m r²).
The points where this equation is satisfied are known as turning points. The orbit on either side of a turning point is symmetrical; in other words, if the azimuthal angle is defined such that φ = 0 at the turning point, then the orbit is the same in opposite directions, r(φ) = r(−φ).
If there are two turning points such that the radius r is bounded between rmin and rmax, then the motion is contained within an annulus of those radii. As the radius varies from the one turning point to the other, the change in azimuthal angle φ equals
Δφ = ∫ from rmin to rmax of (L/r²) dr / √( 2m (Etot − U(r)) − L²/r² ).
The orbit will close upon itself provided that Δφ equals a rational fraction of 2π, i.e.,
Δφ = 2π m/n,
where m and n are integers. In that case, the radius oscillates exactly m times while the azimuthal angle φ makes exactly n revolutions. In general, however, Δφ/2π will not be such a rational number, and thus the orbit will not be closed. In that case, the particle will eventually pass arbitrarily close to every point within the annulus. Two types of central force always produce closed orbits: F(r) = αr (a linear force) and F(r) = α/r2 (an inverse-square law). As shown by Bertrand, these two central forces are the only ones that guarantee closed orbits.
In general, if the angular momentum L is nonzero, the L²/(2mr²) term prevents the particle from falling into the origin, unless the effective potential energy goes to negative infinity in the limit of r going to zero. Therefore, if there is a single turning point, the orbit generally goes to infinity; the turning point corresponds to a point of minimum radius.
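As a concrete sketch (the values of k, m, L and E below are illustrative assumptions), the two turning points of a bound orbit under an attractive inverse-square force F(r) = −k/r² can be located numerically by solving Etot = Ueff(r), bracketing one root on each side of the minimum of the effective potential at r = L²/(mk):

from scipy.optimize import brentq

k, m, L, E = 1.0, 1.0, 0.9, -0.4  # illustrative bound-orbit parameters

def f(r):
    # Etot - Ueff(r), with Ueff(r) = -k/r + L^2/(2 m r^2)
    return E - (-k / r + L * L / (2.0 * m * r * r))

r0 = L * L / (m * k)         # radius of the effective-potential minimum
r_min = brentq(f, 1e-3, r0)  # inner turning point (~0.51 here)
r_max = brentq(f, r0, 1e3)   # outer turning point (~1.99 here)
print(r_min, r_max)          # the annulus that contains the motion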
Specific solutions
Kepler problem
In classical physics, many important forces follow an inverse-square law, such as gravity or electrostatics. The general mathematical form of such inverse-square central forces is
F(r) = α/r² = α u²
for a constant α, which is negative for an attractive force and positive for a repulsive one.
This special case of the classical central-force problem is called the Kepler problem. For an inverse-square force, the Binet equation derived above is linear
d²u/dφ² + u = −α/(m h²).
The solution of this equation is
u(φ) = −(α m/L²) [ 1 + e cos(φ − φ0) ],
which shows that the orbit is a conic section of eccentricity e; here, φ0 is the initial angle, and the center of force is at the focus of the conic section. Using the half-angle formula for sine, this solution can also be written as
u(φ) = u1 + (u2 − u1) sin²( (φ − φ0)/2 ),
where u1 and u2 are constants, with u2 larger than u1. The two versions of the solution are related by the equations
e = (u2 − u1)/(u2 + u1)
and
(u1 + u2)/2 = −α m/L².
Since the sin2 function is always greater than zero, u2 is the largest possible value of u and the inverse of the smallest possible value of r, i.e., the distance of closest approach (periapsis). Since the radial distance r cannot be a negative number, neither can its inverse u; therefore, u2 must be a positive number. If u1 is also positive, it is the smallest possible value of u, which corresponds to the largest possible value of r, the distance of furthest approach (apoapsis). If u1 is zero or negative, then the smallest possible value of u is zero (the orbit goes to infinity); in this case, the only relevant values of φ are those that make u positive.
For an attractive force (α < 0), the orbit is an ellipse, a hyperbola or parabola, depending on whether u1 is positive, negative, or zero, respectively; this corresponds to an eccentricity e less than one, greater than one, or equal to one. For a repulsive force (α > 0), u1 must be negative, since u2 is positive by definition and their sum is negative; hence, the orbit is a hyperbola. Naturally, if no force is present (α=0), the orbit is a straight line.
Central forces with exact solutions
The Binet equation for u(φ) can be solved numerically for nearly any central force F(1/u). However, only a handful of forces result in formulae for u in terms of known functions. As derived above, the solution for φ can be expressed as an integral over u
φ = φ0 + ∫ du / √( C − u² − G(u) ).
A central-force problem is said to be "integrable" if this integration can be solved in terms of known functions.
If the force is a power law, i.e., if F(r) = α rn, then u can be expressed in terms of circular functions and/or elliptic functions if n equals 1, -2, -3 (circular functions) and -7, -5, -4, 0, 3, 5, -3/2, -5/2, -1/3, -5/3 and -7/3 (elliptic functions). Similarly, only six possible linear combinations of power laws give solutions in terms of circular and elliptic functions
The following special cases of the first two force types always result in circular functions.
The special case of an inverse fifth-power force, F(r) = α/r⁵, was mentioned by Newton, in corollary 1 to proposition VII of the Principia, as the force implied by circular orbits passing through the point of attraction.
Revolving orbits
The term r−3 occurs in all the force laws above, indicating that the addition of the inverse-cube force does not influence the solubility of the problem in terms of known functions. Newton showed that, with adjustments in the initial conditions, the addition of such a force does not affect the radial motion of the particle, but multiplies its angular motion by a constant factor k. An extension of Newton's theorem was discovered in 2000 by Mahomed and Vawda.
Assume that a particle is moving under an arbitrary central force F1(r), and let its radius r and azimuthal angle φ be denoted as r(t) and φ1(t) as a function of time t. Now consider a second particle with the same mass m that shares the same radial motion r(t), but one whose angular speed is k times faster than that of the first particle. In other words, the azimuthal angles of the two particles are related by the equation φ2(t) = k φ1(t). Newton showed that the force acting on the second particle equals the force F1(r) acting on the first particle, plus an inverse-cube central force
F2(r) = F1(r) + (L1²/(m r³)) (1 − k²),
where L1 is the magnitude of the first particle's angular momentum.
If k² is greater than one, F2 − F1 is a negative number; thus, the added inverse-cube force is attractive. Conversely, if k² is less than one, F2 − F1 is a positive number; the added inverse-cube force is repulsive. If k is an integer such as 3, the orbit of the second particle is said to be a harmonic of the first particle's orbit; by contrast, if k is the inverse of an integer, such as 1/3, the second orbit is said to be a subharmonic of the first orbit.
Historical development
Newton's derivation
The classical central-force problem was solved geometrically by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica, in which Newton introduced his laws of motion. Newton used an equivalent of leapfrog integration to convert the continuous motion to a discrete one, so that geometrical methods may be applied. In this approach, the position of the particle is considered only at evenly spaced time points. For illustration, the particle in Figure 10 is located at point A at time t = 0, at point B at time t = Δt, at point C at time t = 2Δt, and so on for all times t = nΔt, where n is an integer. The velocity is assumed to be constant between these time points. Thus, the vector rAB = rB − rA equals Δt times the velocity vector vAB (red line), whereas rBC = rC − rB equals vBCΔt (blue line). Since the velocity is constant between points, the force is assumed to act instantaneously at each new position; for example, the force acting on the particle at point B instantly changes the velocity from vAB to vBC. The difference vector Δr = rBC − rAB equals ΔvΔt (green line), where Δv = vBC − vAB is the change in velocity resulting from the force at point B. Since the acceleration a is parallel to Δv and since F = ma, the force F must be parallel to Δv and Δr. If F is a central force, it must be parallel to the vector rB from the center O to the point B (dashed green line); in that case, Δr is also parallel to rB.
If no force acts at point B, the velocity is unchanged, and the particle arrives at point K at time t = 2Δt. The areas of the triangles OAB and OBK are equal, because they share the same base (rAB) and height (r⊥). If Δr is parallel to rB, the triangles OBK and OBC are likewise equal, because they share the same base (rB) and the height is unchanged. In that case, the areas of the triangles OAB and OBC are the same, and the particle sweeps out equal areas in equal time. Conversely, if the areas of all such triangles are equal, then Δr must be parallel to rB, from which it follows that F is a central force. Thus, a particle sweeps out equal areas in equal times if and only if F is a central force.
Alternative derivations of the equations of motion
Lagrangian mechanics
The formula for the radial force may also be obtained using Lagrangian mechanics. In polar coordinates, the Lagrangian L of a single particle in a potential energy field U(r) is given by
L = (m/2) [ (dr/dt)² + r² (dφ/dt)² ] − U(r).
Then Lagrange's equations of motion
(d/dt) [ ∂L/∂(dr/dt) ] − ∂L/∂r = 0
take the form
m (d²r/dt²) = m r (dφ/dt)² − dU/dr = m r ω² + F(r),
since the magnitude F(r) of the radial force equals the negative derivative of the potential energy U(r) in the radial direction.
Hamiltonian mechanics
The radial force formula may also be derived using Hamiltonian mechanics. In polar coordinates, the Hamiltonian can be written as
H = (1/(2m)) [ pr² + pφ²/r² ] + U(r).
Since the azimuthal angle φ does not appear in the Hamiltonian, its conjugate momentum pφ is a constant of the motion. This conjugate momentum is the magnitude L of the angular momentum, as shown by the Hamiltonian equation of motion for φ
dφ/dt = ∂H/∂pφ = pφ/(m r²) = L/(m r²).
The corresponding equation of motion for r is
dr/dt = ∂H/∂pr = pr/m.
Taking the second derivative of r with respect to time and using Hamilton's equation of motion for pr yields the radial-force equation
m (d²r/dt²) = dpr/dt = −∂H/∂r = pφ²/(m r³) − dU/dr = L²/(m r³) + F(r).
Hamilton-Jacobi equation
The orbital equation can be derived directly from the Hamilton–Jacobi equation. Adopting the radial distance r and the azimuthal angle φ as the coordinates, the Hamilton–Jacobi equation for a central-force problem can be written
(1/(2m)) (dSr/dr)² + (1/(2 m r²)) (dSφ/dφ)² + U(r) = Etot,
where S = Sφ(φ) + Sr(r) − Etot t is Hamilton's principal function, and Etot and t represent the total energy and time, respectively. This equation may be solved by successive integrations of ordinary differential equations, beginning with the φ equation
dSφ/dφ = pφ = L,
where pφ is a constant of the motion equal to the magnitude of the angular momentum L. Thus, Sφ(φ) = Lφ and the Hamilton–Jacobi equation becomes
(1/(2m)) (dSr/dr)² + L²/(2 m r²) + U(r) = Etot.
Integrating this equation for Sr yields
Sr(r) = ∫ dr √( 2m (Etot − U(r)) − L²/r² ).
Taking the derivative of S with respect to L yields the orbital equation derived above
φ = φ0 + ∫ (L/r²) dr / √( 2m (Etot − U(r)) − L²/r² ).
See also
Schwarzschild geodesics, the analog in general relativity
Particle in a spherically symmetric potential, the analog in quantum mechanics
Hydrogen-like atom, the Kepler problem in quantum mechanics
Inverse square potential
Notes
References
Bibliography
External links
Two-body Central Force Problems by D. E. Gary of the New Jersey Institute of Technology
Motion in a Central-Force Field by A. Brizard of Saint Michael's College
Motion under the Influence of a Central Force by G. W. Collins, II of Case Western Reserve University
Video lecture by W. H. G. Lewin of the Massachusetts Institute of Technology
Classical mechanics
Articles containing video clips | Classical central-force problem | [
"Physics"
] | 5,501 | [
"Mechanics",
"Classical mechanics"
] |
23,371,726 | https://en.wikipedia.org/wiki/Lagrangian%20mechanics | In physics, Lagrangian mechanics is a formulation of classical mechanics founded on the stationary-action principle (also known as the principle of least action). It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his presentation to the Turin Academy of Science in 1760 culminating in his 1788 grand opus, Mécanique analytique.
Lagrangian mechanics describes a mechanical system as a pair consisting of a configuration space M and a smooth function L within that space called a Lagrangian. For many systems, L = T − V, where T and V are the kinetic and potential energy of the system, respectively.
The stationary action principle requires that the action functional of the system derived from L must remain at a stationary point (a maximum, minimum, or saddle) throughout the time evolution of the system. This constraint allows the calculation of the equations of motion of the system using Lagrange's equations.
Introduction
Newton's laws and the concept of forces are the usual starting point for teaching about mechanical systems. This method works well for many problems, but for others the approach is nightmarishly complicated. For example, in calculating the motion of a torus rolling on a horizontal surface with a pearl sliding inside, the time-varying constraint quantities, such as the angular velocity of the torus and the motion of the pearl relative to the torus, make it difficult to determine the motion of the torus with Newton's equations. Lagrangian mechanics adopts energy rather than force as its basic ingredient, leading to more abstract equations capable of tackling more complex problems.
Particularly, Lagrange's approach was to set up independent generalized coordinates for the position and speed of every object, which allows the writing down of a general form of the Lagrangian (total kinetic energy minus potential energy of the system); integrating this over time along the path of motion yields a formula for the 'action', which he minimized to give a generalized set of equations. The action is minimized along the path that the particle actually takes. This choice eliminates the need for the constraint force to enter into the resultant generalized system of equations. There are fewer equations since one is not directly calculating the influence of the constraint on the particle at a given moment.
For a wide variety of physical systems, if the size and shape of a massive object are negligible, it is a useful simplification to treat it as a point particle. For a system of N point particles with masses m1, m2, ..., mN, each particle has a position vector, denoted r1, r2, ..., rN. Cartesian coordinates are often sufficient, so r1 = (x1, y1, z1), r2 = (x2, y2, z2), and so on. In three-dimensional space, each position vector requires three coordinates to uniquely define the location of a point, so there are 3N coordinates to uniquely define the configuration of the system. These are all specific points in space to locate the particles; a general point in space is written r = (x, y, z). The velocity of each particle is how fast the particle moves along its path of motion, and is the time derivative of its position, thus

$$\mathbf{v}_{k} = \frac{d\mathbf{r}_{k}}{dt}.$$
In Newtonian mechanics, the equations of motion are given by Newton's laws. The second law "net force equals mass times acceleration",

$$\mathbf{F}_{k} = m_{k}\,\frac{d^{2}\mathbf{r}_{k}}{dt^{2}},$$
applies to each particle. For an N-particle system in 3 dimensions, there are 3N second-order ordinary differential equations in the positions of the particles to solve for.
Lagrangian
Instead of forces, Lagrangian mechanics uses the energies in the system. The central quantity of Lagrangian mechanics is the Lagrangian, a function which summarizes the dynamics of the entire system. Overall, the Lagrangian has units of energy, but no single expression for all physical systems. Any function which generates the correct equations of motion, in agreement with physical laws, can be taken as a Lagrangian. It is nevertheless possible to construct general expressions for large classes of applications. The non-relativistic Lagrangian for a system of particles in the absence of an electromagnetic field is given by

$$L = T - V,$$

where

$$T = \frac{1}{2}\sum_{k=1}^{N} m_{k} v_{k}^{2}$$

is the total kinetic energy of the system, equaling the sum Σ of the kinetic energies of the particles. Each particle labeled k has mass mk, and vk² = vk · vk is the magnitude squared of its velocity, equivalent to the dot product of the velocity with itself.
Kinetic energy is the energy of the system's motion and is a function only of the velocities vk, not the positions rk, nor time t, so T = T(v1, v2, ...).
V, the potential energy of the system, reflects the energy of interaction between the particles, i.e. how much energy any one particle has due to all the others, together with any external influences. For conservative forces (e.g. Newtonian gravity), it is a function of the position vectors of the particles only, so V = V(r1, r2, ...). For those non-conservative forces which can be derived from an appropriate potential (e.g. electromagnetic potential), the velocities will appear also, V = V(r1, r2, ..., v1, v2, ...). If there is some external field or external driving force changing with time, the potential changes with time, so most generally V = V(r1, r2, ..., v1, v2, ..., t).
As already noted, this form of L is applicable to many important classes of system, but not everywhere. For relativistic Lagrangian mechanics it must be replaced as a whole by a function consistent with special relativity (scalar under Lorentz transformations) or general relativity (4-scalar). Where a magnetic field is present, the expression for the potential energy needs restating. And for dissipative forces (e.g., friction), another function must be introduced alongside the Lagrangian, often referred to as a "Rayleigh dissipation function", to account for the loss of energy.
One or more of the particles may each be subject to one or more holonomic constraints; such a constraint is described by an equation of the form f(r, t) = 0. If the number of constraints in the system is C, then each constraint has an equation f1(r, t) = 0, f2(r, t) = 0, ..., fC(r, t) = 0, each of which could apply to any of the particles. If particle k is subject to constraint i, then fi(rk, t) = 0. At any instant of time, the coordinates of a constrained particle are linked together and not independent. The constraint equations determine the allowed paths the particles can move along, but not where they are or how fast they go at every instant of time. Nonholonomic constraints depend on the particle velocities, accelerations, or higher derivatives of position. Lagrangian mechanics can only be applied to systems whose constraints, if any, are all holonomic. Three examples of nonholonomic constraints are: when the constraint equations are non-integrable, when the constraints have inequalities, or when the constraints involve complicated non-conservative forces like friction. Nonholonomic constraints require special treatment, and one may have to revert to Newtonian mechanics or use other methods.
If T or V or both depend explicitly on time due to time-varying constraints or external influences, the Lagrangian is explicitly time-dependent. If neither the potential nor the kinetic energy depend on time, then the Lagrangian is explicitly independent of time. In either case, the Lagrangian always has implicit time dependence through the generalized coordinates.
With these definitions, Lagrange's equations of the first kind are

$$\frac{\partial L}{\partial \mathbf{r}_{k}} - \frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf{r}}_{k}} + \sum_{i=1}^{C} \lambda_{i}\,\frac{\partial f_{i}}{\partial \mathbf{r}_{k}} = 0,$$

where k = 1, 2, ..., N labels the particles, there is a Lagrange multiplier λi for each constraint equation fi, and

$$\frac{\partial}{\partial \mathbf{r}_{k}} = \left(\frac{\partial}{\partial x_{k}}, \frac{\partial}{\partial y_{k}}, \frac{\partial}{\partial z_{k}}\right), \qquad \frac{\partial}{\partial \dot{\mathbf{r}}_{k}} = \left(\frac{\partial}{\partial \dot{x}_{k}}, \frac{\partial}{\partial \dot{y}_{k}}, \frac{\partial}{\partial \dot{z}_{k}}\right)$$

are each shorthands for a vector of partial derivatives with respect to the indicated variables (not a derivative with respect to the entire vector). Each overdot is a shorthand for a time derivative. This procedure does increase the number of equations to solve compared to Newton's laws, from 3N to 3N + C, because there are 3N coupled second-order differential equations in the position coordinates and multipliers, plus C constraint equations. However, when solved alongside the position coordinates of the particles, the multipliers can yield information about the constraint forces. The coordinates do not need to be eliminated by solving the constraint equations.
In the Lagrangian, the position coordinates and velocity components are all independent variables, and derivatives of the Lagrangian are taken with respect to these separately according to the usual differentiation rules (e.g. the partial derivative of L with respect to the z velocity component of particle 2, defined by ż2 = dz2/dt, is just ∂L/∂ż2; no awkward chain rules or total derivatives need to be used to relate the velocity component to the corresponding coordinate z2).
In each constraint equation, one coordinate is redundant because it is determined from the other coordinates. The number of independent coordinates is therefore n = 3N − C. We can transform each position vector to a common set of n generalized coordinates, conveniently written as an n-tuple q = (q1, q2, ..., qn), by expressing each position vector, and hence the position coordinates, as functions of the generalized coordinates and time:

$$\mathbf{r}_{k} = \mathbf{r}_{k}(\mathbf{q}, t).$$

The vector q is a point in the configuration space of the system. The time derivatives of the generalized coordinates are called the generalized velocities, and for each particle the transformation of its velocity vector, the total derivative of its position with respect to time, is

$$\mathbf{v}_{k} = \frac{d\mathbf{r}_{k}}{dt} = \sum_{j=1}^{n} \frac{\partial \mathbf{r}_{k}}{\partial q_{j}}\,\dot{q}_{j} + \frac{\partial \mathbf{r}_{k}}{\partial t}.$$

Given this vk, the kinetic energy in generalized coordinates depends on the generalized velocities, generalized coordinates, and time if the position vectors depend explicitly on time due to time-varying constraints, so T = T(q, dq/dt, t).
With these definitions, the Euler–Lagrange equations, or Lagrange's equations of the second kind,

$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_{j}} = \frac{\partial L}{\partial q_{j}}, \qquad j = 1, 2, \ldots, n,$$

are mathematical results from the calculus of variations, which can also be used in mechanics. Substituting in the Lagrangian L(q, dq/dt, t) gives the equations of motion of the system. The number of equations has decreased compared to Newtonian mechanics, from 3N to n = 3N − C coupled second-order differential equations in the generalized coordinates. These equations do not include constraint forces at all; only non-constraint forces need to be accounted for.
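As an illustration of turning the crank on the second-kind equations, the following sketch (added here; it assumes SymPy and is not part of the original article) derives the equation of motion of a planar pendulum from its Lagrangian.

```python
import sympy as sp

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
theta = sp.Function('theta')(t)            # single generalized coordinate

# Planar pendulum: T = (1/2) m l^2 theta'^2,  V = -m g l cos(theta)
L = sp.Rational(1, 2) * m * l**2 * sp.diff(theta, t)**2 + m * g * l * sp.cos(theta)

# Euler-Lagrange equation: d/dt(dL/d theta') - dL/d theta = 0
eom = sp.diff(sp.diff(L, sp.diff(theta, t)), t) - sp.diff(L, theta)
print(sp.simplify(eom))   # m l^2 theta'' + m g l sin(theta), up to factoring
```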
Although the equations of motion include partial derivatives, the results of the partial derivatives are still ordinary differential equations in the position coordinates of the particles. The total time derivative denoted d/dt often involves implicit differentiation. Both equations are linear in the Lagrangian, but generally are nonlinear coupled equations in the coordinates.
From Newtonian to Lagrangian mechanics
Newton's laws
For simplicity, Newton's laws can be illustrated for one particle without much loss of generality (for a system of N particles, all of these equations apply to each particle in the system). The equation of motion for a particle of constant mass m is Newton's second law of 1687, in modern vector notation

$$\mathbf{F} = m\mathbf{a},$$

where a is its acceleration and F the resultant force acting on it. Where the mass is varying, the equation needs to be generalised to take the time derivative of the momentum. In three spatial dimensions, this is a system of three coupled second-order ordinary differential equations to solve, since there are three components in this vector equation. The solution is the position vector r of the particle at time t, subject to the initial conditions of r and v when t = 0.
Newton's laws are easy to use in Cartesian coordinates, but Cartesian coordinates are not always convenient, and for other coordinate systems the equations of motion can become complicated. In a set of curvilinear coordinates ξ = (ξ1, ξ2, ξ3), the law in tensor index notation is the "Lagrangian form"

$$F^{a} = m\left(\frac{d^{2}\xi^{a}}{dt^{2}} + \Gamma^{a}{}_{bc}\,\frac{d\xi^{b}}{dt}\frac{d\xi^{c}}{dt}\right) = g^{ak}\left(\frac{d}{dt}\frac{\partial T}{\partial \dot{\xi}^{k}} - \frac{\partial T}{\partial \xi^{k}}\right),$$

where Fa is the a-th contravariant component of the resultant force acting on the particle, Γabc are the Christoffel symbols of the second kind,

$$T = \frac{1}{2}\, m\, g_{bc}\,\frac{d\xi^{b}}{dt}\frac{d\xi^{c}}{dt}$$

is the kinetic energy of the particle, and gbc the covariant components of the metric tensor of the curvilinear coordinate system. All the indices a, b, c each take the values 1, 2, 3. Curvilinear coordinates are not the same as generalized coordinates.
It may seem like an overcomplication to cast Newton's law in this form, but there are advantages. The acceleration components in terms of the Christoffel symbols can be avoided by evaluating derivatives of the kinetic energy instead. If there is no resultant force acting on the particle, it does not accelerate, but moves with constant velocity in a straight line. Mathematically, the solutions of the differential equation are geodesics, the curves of extremal length between two points in space (these may end up being minimal, that is the shortest paths, but not necessarily). In flat 3D real space the geodesics are simply straight lines. So for a free particle, Newton's second law coincides with the geodesic equation and states that free particles follow geodesics, the extremal trajectories it can move along. If the particle is subject to forces the particle accelerates due to forces acting on it and deviates away from the geodesics it would follow if free. With appropriate extensions of the quantities given here in flat 3D space to 4D curved spacetime, the above form of Newton's law also carries over to Einstein's general relativity, in which case free particles follow geodesics in curved spacetime that are no longer "straight lines" in the ordinary sense.
However, we still need to know the total resultant force F acting on the particle, which in turn requires the resultant non-constraint force N plus the resultant constraint force C,
The constraint forces can be complicated, since they generally depend on time. Also, if there are constraints, the curvilinear coordinates are not independent but related by one or more constraint equations.
The constraint forces can either be eliminated from the equations of motion, so only the non-constraint forces remain, or included by including the constraint equations in the equations of motion.
D'Alembert's principle
A fundamental result in analytical mechanics is D'Alembert's principle, introduced in 1708 by Jacques Bernoulli to understand static equilibrium, and developed by D'Alembert in 1743 to solve dynamical problems. The principle asserts for N particles that the virtual work, i.e. the work along a virtual displacement δrk, is zero:

$$\sum_{k=1}^{N} \left(\mathbf{N}_{k} + \mathbf{C}_{k} - m_{k}\mathbf{a}_{k}\right)\cdot \delta\mathbf{r}_{k} = 0.$$
The virtual displacements, δrk, are by definition infinitesimal changes in the configuration of the system consistent with the constraint forces acting on the system at an instant of time, i.e. in such a way that the constraint forces maintain the constrained motion. They are not the same as the actual displacements in the system, which are caused by the resultant constraint and non-constraint forces acting on the particle to accelerate and move it. Virtual work is the work done along a virtual displacement for any force (constraint or non-constraint).
Since the constraint forces act perpendicular to the motion of each particle in the system to maintain the constraints, the total virtual work by the constraint forces acting on the system is zero:

$$\sum_{k=1}^{N} \mathbf{C}_{k}\cdot \delta\mathbf{r}_{k} = 0,$$

so that

$$\sum_{k=1}^{N} \left(\mathbf{N}_{k} - m_{k}\mathbf{a}_{k}\right)\cdot \delta\mathbf{r}_{k} = 0.$$
Thus D'Alembert's principle allows us to concentrate on only the applied non-constraint forces, and exclude the constraint forces in the equations of motion. The form shown is also independent of the choice of coordinates. However, it cannot be readily used to set up the equations of motion in an arbitrary coordinate system since the displacements δrk might be connected by a constraint equation, which prevents us from setting the N individual summands to 0. We will therefore seek a system of mutually independent coordinates for which the total sum will be 0 if and only if the individual summands are 0. Setting each of the summands to 0 will eventually give us our separated equations of motion.
Equations of motion from D'Alembert's principle
If there are constraints on particle k, then since the coordinates of the position rk = (xk, yk, zk) are linked together by a constraint equation, so are those of the virtual displacements δrk. Since the generalized coordinates are independent, we can avoid the complications with the δrk by converting to virtual displacements in the generalized coordinates. These are related in the same form as a total differential,

$$\delta\mathbf{r}_{k} = \sum_{j=1}^{n} \frac{\partial \mathbf{r}_{k}}{\partial q_{j}}\,\delta q_{j}.$$
There is no partial time derivative with respect to time multiplied by a time increment, since this is a virtual displacement, one along the constraints in an instant of time.
The first term in D'Alembert's principle above is the virtual work done by the non-constraint forces Nk along the virtual displacements δrk, and can without loss of generality be converted into the generalized analogues by the definition of generalized forces

$$Q_{j} = \sum_{k=1}^{N} \mathbf{N}_{k}\cdot \frac{\partial \mathbf{r}_{k}}{\partial q_{j}},$$

so that

$$\sum_{k=1}^{N} \mathbf{N}_{k}\cdot \delta\mathbf{r}_{k} = \sum_{j=1}^{n} Q_{j}\,\delta q_{j}.$$
This is half of the conversion to generalized coordinates. It remains to convert the acceleration term into generalized coordinates, which is not immediately obvious. Recalling the Lagrange form of Newton's second law, the partial derivatives of the kinetic energy with respect to the generalized coordinates and velocities can be found to give the desired result:

$$\sum_{k=1}^{N} m_{k}\mathbf{a}_{k}\cdot \frac{\partial \mathbf{r}_{k}}{\partial q_{j}} = \frac{d}{dt}\frac{\partial T}{\partial \dot{q}_{j}} - \frac{\partial T}{\partial q_{j}}.$$

Now D'Alembert's principle is in the generalized coordinates as required,

$$\sum_{j=1}^{n} \left[ Q_{j} - \left( \frac{d}{dt}\frac{\partial T}{\partial \dot{q}_{j}} - \frac{\partial T}{\partial q_{j}} \right) \right] \delta q_{j} = 0,$$

and since these virtual displacements δqj are independent and nonzero, the coefficients can be equated to zero, resulting in Lagrange's equations or the generalized equations of motion,

$$Q_{j} = \frac{d}{dt}\frac{\partial T}{\partial \dot{q}_{j}} - \frac{\partial T}{\partial q_{j}}.$$
These equations are equivalent to Newton's laws for the non-constraint forces. The generalized forces in this equation are derived from the non-constraint forces only – the constraint forces have been excluded from D'Alembert's principle and do not need to be found. The generalized forces may be non-conservative, provided they satisfy D'Alembert's principle.
Euler–Lagrange equations and Hamilton's principle
For a non-conservative force which depends on velocity, it may be possible to find a potential energy function V that depends on positions and velocities. If the generalized forces Qi can be derived from a potential V such that

$$Q_{j} = \frac{d}{dt}\frac{\partial V}{\partial \dot{q}_{j}} - \frac{\partial V}{\partial q_{j}},$$

equating to Lagrange's equations and defining the Lagrangian as L = T − V obtains Lagrange's equations of the second kind or the Euler–Lagrange equations of motion

$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_{j}} - \frac{\partial L}{\partial q_{j}} = 0.$$
However, the Euler–Lagrange equations can only account for non-conservative forces if a potential can be found as shown. This may not always be possible for non-conservative forces, and Lagrange's equations do not involve any potential, only generalized forces; therefore they are more general than the Euler–Lagrange equations.
The Euler–Lagrange equations also follow from the calculus of variations. The variation of the Lagrangian is
which has a form similar to the total differential of L, but the virtual displacements and their time derivatives replace differentials, and there is no time increment in accordance with the definition of the virtual displacements. An integration by parts with respect to time can transfer the time derivative of δqj to the ∂L/∂(dqj/dt), in the process exchanging d(δqj)/dt for δqj, allowing the independent virtual displacements to be factorized from the derivatives of the Lagrangian,
Now, if the condition δqj(t1) = δqj(t2) = 0 holds for all j, the terms not integrated are zero. If in addition the entire time integral of δL is zero, then because the δqj are independent, and the only way for a definite integral to be zero is if the integrand equals zero, each of the coefficients of δqj must also be zero. Then we obtain the equations of motion. This can be summarized by Hamilton's principle:

$$\delta \int_{t_1}^{t_2} L\, dt = 0.$$
The time integral of the Lagrangian is another quantity called the action, defined as

$$S = \int_{t_1}^{t_2} L\, dt,$$

which is a functional; it takes in the Lagrangian function for all times between t1 and t2 and returns a scalar value. Its dimensions are the same as those of angular momentum: [energy]·[time], or [momentum]·[length]. With this definition Hamilton's principle is

$$\delta S = 0.$$
Instead of thinking about particles accelerating in response to applied forces, one might think of them picking out the path with a stationary action, with the end points of the path in configuration space held fixed at the initial and final times. Hamilton's principle is one of several action principles.
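Hamilton's principle can be checked directly on a computer by discretizing the action and extremizing it over paths with fixed endpoints. The sketch below (an added illustration assuming NumPy/SciPy; the m = k = 1 harmonic oscillator and grid size are arbitrary choices) recovers the true trajectory without ever integrating the equation of motion.

```python
import numpy as np
from scipy.optimize import minimize

# Harmonic oscillator, m = k = 1: L = q'^2/2 - q^2/2.
t0, t1, n = 0.0, 1.0, 101
ts = np.linspace(t0, t1, n)
dt = ts[1] - ts[0]
q0, q1 = 0.0, 1.0                 # endpoints held fixed, as the principle requires

def action(q_interior):
    """Discretized action: sum of L * dt over the sub-intervals of the path."""
    q = np.concatenate(([q0], q_interior, [q1]))
    v = np.diff(q) / dt                     # velocity on each sub-interval
    qm = 0.5 * (q[1:] + q[:-1])             # midpoint positions
    return np.sum((0.5 * v**2 - 0.5 * qm**2) * dt)

res = minimize(action, np.linspace(q0, q1, n)[1:-1], method='BFGS')
q_exact = np.sin(ts) / np.sin(1.0)          # solution of q'' = -q with these ends
path = np.concatenate(([q0], res.x, [q1]))
print(np.max(np.abs(path - q_exact)))       # ~1e-4: stationary path = true motion
```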
Historically, the idea of finding the shortest path a particle can follow subject to a force motivated the first applications of the calculus of variations to mechanical problems, such as the Brachistochrone problem solved by Jean Bernoulli in 1696, as well as Leibniz, Daniel Bernoulli, L'Hôpital around the same time, and Newton the following year. Newton himself was thinking along the lines of the variational calculus, but did not publish. These ideas in turn lead to the variational principles of mechanics, of Fermat, Maupertuis, Euler, Hamilton, and others.
Hamilton's principle can be applied to nonholonomic constraints if the constraint equations can be put into a certain form, a linear combination of first-order differentials in the coordinates. The resulting constraint equation can be rearranged into a first-order differential equation. This will not be given here.
Lagrange multipliers and constraints
The Lagrangian L can be varied in the Cartesian rk coordinates, for N particles,
Hamilton's principle is still valid even if the coordinates L is expressed in are not independent, here rk, but the constraints are still assumed to be holonomic. As always the end points are fixed for all k. What cannot be done is to simply equate the coefficients of δrk to zero because the δrk are not independent. Instead, the method of Lagrange multipliers can be used to include the constraints. Multiplying each constraint equation by a Lagrange multiplier λi for i = 1, 2, ..., C, and adding the results to the original Lagrangian, gives the new Lagrangian

$$L' = L(\mathbf{r}_{1}, \ldots, \dot{\mathbf{r}}_{1}, \ldots, t) + \sum_{i=1}^{C} \lambda_{i}(t)\, f_{i}(\mathbf{r}_{k}, t).$$
The Lagrange multipliers are arbitrary functions of time t, but not functions of the coordinates rk, so the multipliers are on equal footing with the position coordinates. Varying this new Lagrangian and integrating with respect to time gives
The introduced multipliers can be found so that the coefficients of δrk are zero, even though the rk are not independent. The equations of motion follow. From the preceding analysis, obtaining the solution to this integral is equivalent to the statement

$$\frac{\partial L}{\partial \mathbf{r}_{k}} - \frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf{r}}_{k}} + \sum_{i=1}^{C} \lambda_{i}\,\frac{\partial f_{i}}{\partial \mathbf{r}_{k}} = 0,$$

which are Lagrange's equations of the first kind. Also, the λi Euler–Lagrange equations for the new Lagrangian return the constraint equations

$$\frac{\partial L'}{\partial \lambda_{i}} = f_{i}(\mathbf{r}_{k}, t) = 0.$$
For the case of a conservative force given by the gradient of some potential energy V, a function of the rk coordinates only, substituting the Lagrangian gives
and identifying the derivatives of kinetic energy as the (negative of the) resultant force, and the derivatives of the potential equaling the non-constraint force, it follows the constraint forces are

$$\mathbf{C}_{k} = \sum_{i=1}^{C} \lambda_{i}\,\frac{\partial f_{i}}{\partial \mathbf{r}_{k}},$$
thus giving the constraint forces explicitly in terms of the constraint equations and the Lagrange multipliers.
Properties of the Lagrangian
Non-uniqueness
The Lagrangian of a given system is not unique. A Lagrangian L can be multiplied by a nonzero constant a and shifted by an arbitrary constant b, and the new Lagrangian aL + b will describe the same motion as L. If one restricts as above to trajectories q over a given time interval [tst, tfin] and fixed end points q(tst) and q(tfin), then two Lagrangians describing the same system can differ by the "total time derivative" of a function f(q, t):

$$L'(\mathbf{q}, \dot{\mathbf{q}}, t) = L(\mathbf{q}, \dot{\mathbf{q}}, t) + \frac{df(\mathbf{q}, t)}{dt},$$

where df(q, t)/dt means

$$\frac{df}{dt} = \frac{\partial f}{\partial t} + \sum_{j} \frac{\partial f}{\partial q_{j}}\,\dot{q}_{j}.$$

Both Lagrangians L and L′ produce the same equations of motion since the corresponding actions S and S′ are related via

$$S'[\mathbf{q}] = S[\mathbf{q}] + f(\mathbf{q}(t_{\mathrm{fin}}), t_{\mathrm{fin}}) - f(\mathbf{q}(t_{\mathrm{st}}), t_{\mathrm{st}}),$$

with the last two terms independent of the trajectory q.
Invariance under point transformations
Given a set of generalized coordinates q, if we change these variables to a new set of generalized coordinates Q according to a point transformation Q = Q(q, t) which is invertible as q = q(Q, t), the new Lagrangian L′ is a function of the new coordinates

$$L'(\mathbf{Q}, \dot{\mathbf{Q}}, t) = L\big(\mathbf{q}(\mathbf{Q}, t), \dot{\mathbf{q}}(\mathbf{Q}, \dot{\mathbf{Q}}, t), t\big),$$

and by the chain rule for partial differentiation, Lagrange's equations are invariant under this transformation:

$$\frac{d}{dt}\frac{\partial L'}{\partial \dot{Q}_{i}} = \frac{\partial L'}{\partial Q_{i}}.$$
This may simplify the equations of motion.
Cyclic coordinates and conserved momenta
An important property of the Lagrangian is that conserved quantities can easily be read off from it. The generalized momentum "canonically conjugate to" the coordinate qi is defined by

$$p_{i} = \frac{\partial L}{\partial \dot{q}_{i}}.$$

If the Lagrangian L does not depend on some coordinate qi, it follows immediately from the Euler–Lagrange equations that

$$\dot{p}_{i} = \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_{i}} = \frac{\partial L}{\partial q_{i}} = 0,$$

and integrating shows the corresponding generalized momentum equals a constant, a conserved quantity. This is a special case of Noether's theorem. Such coordinates are called "cyclic" or "ignorable".
For example, a system may have a Lagrangian
where r and z are lengths along straight lines, s is an arc length along some curve, and θ and φ are angles. Notice z, s, and φ are all absent in the Lagrangian even though their velocities are not. Then the momenta
are all conserved quantities. The units and nature of each generalized momentum will depend on the corresponding coordinate; in this case pz is a translational momentum in the z direction, ps is also a translational momentum along the curve along which s is measured, and pφ is an angular momentum in the plane the angle φ is measured in. However complicated the motion of the system is, all the coordinates and velocities will vary in such a way that these momenta are conserved.
Energy
Given a Lagrangian L, the Hamiltonian of the corresponding mechanical system is, by definition,

$$H = \sum_{i} \dot{q}_{i}\,\frac{\partial L}{\partial \dot{q}_{i}} - L.$$

This quantity will be equivalent to energy if the generalized coordinates are natural coordinates, i.e., they have no explicit time dependence when expressing the position vectors: rk = rk(q1, ..., qn). In that case the kinetic energy is a homogeneous quadratic form in the generalized velocities,

$$T = \frac{1}{2}\sum_{i,j} a_{ij}(\mathbf{q})\,\dot{q}_{i}\dot{q}_{j},$$

where aij is a symmetric matrix defined for the derivation, and it follows that H = T + V.
Invariance under coordinate transformations
At every time instant t, the energy is invariant under configuration space coordinate changes , i.e. (using natural coordinates)
Besides this result, the proof below shows that, under such change of coordinates, the derivatives change as coefficients of a linear form.
Conservation
In Lagrangian mechanics, the system is closed if and only if its Lagrangian does not explicitly depend on time. The energy conservation law states that the energy of a closed system is an integral of motion.
More precisely, let q = q(t) be an extremal. (In other words, q satisfies the Euler–Lagrange equations.) Taking the total time derivative of L along this extremal and using the EL equations leads to

$$\frac{dH}{dt} = -\frac{\partial L}{\partial t}.$$

If the Lagrangian L does not explicitly depend on time, then ∂L/∂t = 0, so H does not vary with the time evolution of the particle; it is indeed an integral of motion, meaning that

$$H = \text{constant}.$$
Hence, if the chosen coordinates were natural coordinates, the energy is conserved.
Kinetic and potential energies
Under all these circumstances, the constant

$$E = T + V$$
is the total energy of the system. The kinetic and potential energies still change as the system evolves, but the motion of the system will be such that their sum, the total energy, is constant. This is a valuable simplification, since the energy E is a constant of integration that counts as an arbitrary constant for the problem, and it may be possible to integrate the velocities from this energy relation to solve for the coordinates.
Mechanical similarity
If the potential energy is a homogeneous function of the coordinates of degree σ and independent of time, and all position vectors are scaled by the same nonzero constant α, rk′ = αrk, so that

$$V(\alpha\mathbf{r}_{1}, \ldots, \alpha\mathbf{r}_{N}) = \alpha^{\sigma}\, V(\mathbf{r}_{1}, \ldots, \mathbf{r}_{N}),$$

and time is scaled by a factor β, t′ = βt, then the velocities vk are scaled by a factor of α/β and the kinetic energy T by (α/β)2. The entire Lagrangian has been scaled by the same factor if

$$\left(\frac{\alpha}{\beta}\right)^{2} = \alpha^{\sigma} \quad\Longrightarrow\quad \beta = \alpha^{1 - \sigma/2}.$$

Since the lengths and times have been scaled, the trajectories of the particles in the system follow geometrically similar paths differing in size. The length l traversed in time t in the original trajectory corresponds to a new length l′ traversed in time t′ in the new trajectory, given by the ratios

$$\frac{t'}{t} = \left(\frac{l'}{l}\right)^{1 - \sigma/2}.$$
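A short worked example (added here for illustration): for Newtonian gravity the potential is homogeneous of degree σ = −1, and the scaling relation above immediately yields Kepler's third law.

```latex
% Gravity: V \propto 1/r, i.e. homogeneous of degree \sigma = -1, so
\[
\frac{t'}{t} = \left(\frac{l'}{l}\right)^{1 - (-1)/2}
             = \left(\frac{l'}{l}\right)^{3/2}
\qquad\Longrightarrow\qquad
\left(\frac{t'}{t}\right)^{2} = \left(\frac{l'}{l}\right)^{3},
\]
% the squares of the orbital periods are as the cubes of the orbit sizes.
```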
Interacting particles
For a given system, if two subsystems A and B are non-interacting, the Lagrangian L of the overall system is the sum of the Lagrangians LA and LB for the subsystems:
If they do interact this is not possible. In some situations, it may be possible to separate the Lagrangian of the system L into the sum of non-interacting Lagrangians, plus another Lagrangian LAB containing information about the interaction,
This may be physically motivated by taking the non-interacting Lagrangians to be kinetic energies only, while the interaction Lagrangian is the system's total potential energy. Also, in the limiting case of negligible interaction, LAB tends to zero reducing to the non-interacting case above.
The extension to more than two non-interacting subsystems is straightforward – the overall Lagrangian is the sum of the separate Lagrangians for each subsystem. If there are interactions, then interaction Lagrangians may be added.
Consequences of singular Lagrangians
From the Euler–Lagrange equations, it follows that:

$$\sum_{j} W_{ij}(\mathbf{q}, \dot{\mathbf{q}}, t)\,\ddot{q}_{j} = \frac{\partial L}{\partial q_{i}} - \sum_{j}\frac{\partial^{2} L}{\partial \dot{q}_{i}\,\partial q_{j}}\,\dot{q}_{j} - \frac{\partial^{2} L}{\partial \dot{q}_{i}\,\partial t},$$

where the matrix is defined as Wij = ∂²L/∂q̇i∂q̇j. If the matrix W is non-singular, the above equations can be solved to represent the accelerations as functions of the coordinates and velocities. If the matrix is non-invertible, not only is it impossible to express all the accelerations in that way, but the Hamiltonian equations of motion will also not take the standard form.
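A quick symbolic check of this singularity criterion (an added sketch assuming SymPy; both Lagrangians are toy examples):

```python
import sympy as sp

t = sp.symbols('t')
q1, q2 = sp.Function('q1')(t), sp.Function('q2')(t)
v1, v2 = sp.diff(q1, t), sp.diff(q2, t)

def hessian_in_velocities(L, vels):
    """W_ij = d^2 L / (dv_i dv_j); the EL equations determine all
    accelerations iff this matrix is invertible."""
    return sp.Matrix([[sp.diff(L, vi, vj) for vj in vels] for vi in vels])

L_regular  = (v1**2 + v2**2) / 2 - q1 * q2
L_singular = (v1 + v2)**2 / 2 - q1 * q2    # depends only on v1 + v2

print(hessian_in_velocities(L_regular,  (v1, v2)).det())  # 1 -> invertible
print(hessian_in_velocities(L_singular, (v1, v2)).det())  # 0 -> singular
```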
Examples
The following examples apply Lagrange's equations of the second kind to mechanical problems.
Conservative force
A particle of mass m moves under the influence of a conservative force derived from the gradient ∇ of a scalar potential,

$$\mathbf{F} = -\nabla V(\mathbf{r}).$$
If there are more particles, in accordance with the above results, the total kinetic energy is a sum over all the particle kinetic energies, and the potential is a function of all the coordinates.
Cartesian coordinates
The Lagrangian of the particle can be written

$$L(x, y, z, \dot{x}, \dot{y}, \dot{z}) = \tfrac{1}{2} m \left(\dot{x}^{2} + \dot{y}^{2} + \dot{z}^{2}\right) - V(x, y, z).$$

The equations of motion for the particle are found by applying the Euler–Lagrange equation, for the x coordinate

$$\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = \frac{\partial L}{\partial x},$$

with derivatives

$$\frac{\partial L}{\partial x} = -\frac{\partial V}{\partial x}, \qquad \frac{\partial L}{\partial \dot{x}} = m\dot{x}, \qquad \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = m\ddot{x},$$

hence

$$m\ddot{x} = -\frac{\partial V}{\partial x},$$

and similarly for the y and z coordinates. Collecting the equations in vector form we find

$$m\ddot{\mathbf{r}} = -\nabla V = \mathbf{F},$$
which is Newton's second law of motion for a particle subject to a conservative force.
Polar coordinates in 2D and 3D
Using the spherical coordinates (r, θ, φ) as commonly used in physics (ISO 80000-2:2019 convention), where r is the radial distance to origin, θ is polar angle (also known as colatitude, zenith angle, normal angle, or inclination angle), and φ is the azimuthal angle, the Lagrangian for a central potential is

$$L = \tfrac{1}{2} m \left( \dot{r}^{2} + r^{2}\dot{\theta}^{2} + r^{2}\sin^{2}\theta\, \dot{\varphi}^{2} \right) - V(r).$$

So, in spherical coordinates, the Euler–Lagrange equations are

$$m\ddot{r} - m r \left(\dot{\theta}^{2} + \sin^{2}\theta\, \dot{\varphi}^{2}\right) + \frac{dV}{dr} = 0,$$

$$\frac{d}{dt}\left(m r^{2}\dot{\theta}\right) - m r^{2}\sin\theta\cos\theta\, \dot{\varphi}^{2} = 0,$$

$$\frac{d}{dt}\left(m r^{2}\sin^{2}\theta\, \dot{\varphi}\right) = 0.$$

The φ coordinate is cyclic since it does not appear in the Lagrangian, so the conserved momentum in the system is the angular momentum

$$p_{\varphi} = \frac{\partial L}{\partial \dot{\varphi}} = m r^{2}\sin^{2}\theta\, \dot{\varphi},$$
in which r, θ and dφ/dt can all vary with time, but only in such a way that pφ is constant.
The Lagrangian in two-dimensional polar coordinates is recovered by fixing θ to the constant value π/2.
Pendulum on a movable support
Consider a pendulum of mass m and length ℓ, which is attached to a support with mass M, which can move along a line in the x-direction. Let x be the coordinate along the line of the support, and let us denote the position of the pendulum by the angle θ from the vertical. The coordinates and velocity components of the pendulum bob are

$$x_{\mathrm{pend}} = x + \ell\sin\theta, \qquad \dot{x}_{\mathrm{pend}} = \dot{x} + \ell\dot{\theta}\cos\theta,$$
$$y_{\mathrm{pend}} = -\ell\cos\theta, \qquad \dot{y}_{\mathrm{pend}} = \ell\dot{\theta}\sin\theta.$$

The generalized coordinates can be taken to be x and θ. The kinetic energy of the system is then

$$T = \tfrac{1}{2} M \dot{x}^{2} + \tfrac{1}{2} m \left[ \left(\dot{x} + \ell\dot{\theta}\cos\theta\right)^{2} + \left(\ell\dot{\theta}\sin\theta\right)^{2} \right],$$

and the potential energy is

$$V = -m g \ell \cos\theta,$$

giving the Lagrangian

$$L = T - V = \tfrac{1}{2}(M + m)\dot{x}^{2} + m\ell\dot{x}\dot{\theta}\cos\theta + \tfrac{1}{2} m \ell^{2}\dot{\theta}^{2} + m g \ell\cos\theta.$$

Since x is absent from the Lagrangian, it is a cyclic coordinate. The conserved momentum is

$$p_{x} = (M + m)\dot{x} + m\ell\dot{\theta}\cos\theta,$$

and the Lagrange equation for the support coordinate x is

$$(M + m)\ddot{x} + m\ell\ddot{\theta}\cos\theta - m\ell\dot{\theta}^{2}\sin\theta = 0.$$

The Lagrange equation for the angle θ is

$$\frac{d}{dt}\left[ m\left(\dot{x}\ell\cos\theta + \ell^{2}\dot{\theta}\right) \right] + m\ell\left(\dot{x}\dot{\theta} + g\right)\sin\theta = 0,$$

and simplifying

$$\ell\ddot{\theta} + \ddot{x}\cos\theta + g\sin\theta = 0.$$
These equations may look quite complicated, but finding them with Newton's laws would have required carefully identifying all forces, which would have been much more laborious and prone to errors. By considering limit cases, the correctness of this system can be verified: for example, an immovable support should give the equations of motion for a simple pendulum at rest in some inertial frame, while a support in constant acceleration should give the equations for a pendulum in a constantly accelerating system, etc. Furthermore, it is trivial to obtain the results numerically, given suitable starting conditions and a chosen time step, by stepping through the results iteratively, as the sketch below shows.
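A minimal numerical sketch (not from the original article; the masses, length, and RK4 step are illustrative choices). It solves the two Lagrange equations above for the accelerations at each step and verifies that the cyclic momentum px stays constant:

```python
import numpy as np

M, m, l, g = 2.0, 1.0, 1.0, 9.81

def derivs(state):
    """state = (x, theta, xdot, thetadot) for the pendulum on a sliding support."""
    x, th, xd, thd = state
    # The two Lagrange equations, written as a linear system in (x'', theta'')
    A = np.array([[M + m,       m * l * np.cos(th)],
                  [np.cos(th),  l                 ]])
    b = np.array([m * l * thd**2 * np.sin(th), -g * np.sin(th)])
    xdd, thdd = np.linalg.solve(A, b)
    return np.array([xd, thd, xdd, thdd])

def rk4_step(s, dt):
    k1 = derivs(s)
    k2 = derivs(s + 0.5 * dt * k1)
    k3 = derivs(s + 0.5 * dt * k2)
    k4 = derivs(s + dt * k3)
    return s + dt * (k1 + 2*k2 + 2*k3 + k4) / 6

s, dt = np.array([0.0, 1.0, 0.0, 0.0]), 1e-3   # released from rest at theta = 1 rad
for _ in range(5000):
    s = rk4_step(s, dt)
# Conserved momentum of the cyclic coordinate x: p_x = (M+m) x' + m l cos(theta) theta'
print((M + m) * s[2] + m * l * np.cos(s[1]) * s[3])   # stays ~0, its initial value
```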
Two-body central force problem
Two bodies of masses m1 and m2 with position vectors r1 and r2 are in orbit about each other due to an attractive central potential V. We may write down the Lagrangian in terms of the position coordinates as they are, but it is an established procedure to convert the two-body problem into a one-body problem as follows. Introduce the Jacobi coordinates: the separation of the bodies r = r1 − r2 and the location of the center of mass R = (m1r1 + m2r2)/(m1 + m2). The Lagrangian is then

$$L = \underbrace{\tfrac{1}{2} M \dot{\mathbf{R}}^{2}}_{L_{\mathrm{cm}}} + \underbrace{\tfrac{1}{2}\mu\,\dot{\mathbf{r}}^{2} - V(|\mathbf{r}|)}_{L_{\mathrm{rel}}},$$

where M = m1 + m2 is the total mass, μ = m1m2/(m1 + m2) is the reduced mass, and V the potential of the radial force, which depends only on the magnitude of the separation |r| = |r1 − r2|. The Lagrangian splits into a center-of-mass term Lcm and a relative motion term Lrel.
The Euler–Lagrange equation for R is simply

$$M\ddot{\mathbf{R}} = 0,$$
which states the center of mass moves in a straight line at constant velocity.
Since the relative motion only depends on the magnitude of the separation, it is ideal to use polar coordinates (r, θ) and take L = Lrel,

$$L = \tfrac{1}{2}\mu\left(\dot{r}^{2} + r^{2}\dot{\theta}^{2}\right) - V(r),$$

so θ is a cyclic coordinate with the corresponding conserved (angular) momentum

$$p_{\theta} = \frac{\partial L}{\partial \dot{\theta}} = \mu r^{2}\dot{\theta} = \ell.$$

The radial coordinate r and angular velocity dθ/dt can vary with time, but only in such a way that ℓ is constant. The Lagrange equation for r is

$$\mu\ddot{r} - \mu r\dot{\theta}^{2} + \frac{dV}{dr} = 0.$$

This equation is identical to the radial equation obtained using Newton's laws in a co-rotating reference frame, that is, a frame rotating with the reduced mass so it appears stationary. Eliminating the angular velocity dθ/dt from this radial equation,

$$\mu\ddot{r} = -\frac{dV}{dr} + \frac{\ell^{2}}{\mu r^{3}},$$

which is the equation of motion for a one-dimensional problem in which a particle of mass μ is subjected to the inward central force −dV/dr and a second outward force, called in this context the (Lagrangian) centrifugal force (see centrifugal force: other uses of the term):

$$F_{\mathrm{cf}} = \mu r\dot{\theta}^{2} = \frac{\ell^{2}}{\mu r^{3}}.$$
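The one-dimensional picture can be checked numerically. The sketch below (illustrative constants, assuming NumPy) integrates the radial equation with the centrifugal term for a Kepler potential and confirms that the one-dimensional energy built from the effective potential is conserved:

```python
import numpy as np

mu, k, l = 1.0, 1.0, 0.8          # Kepler potential V(r) = -k/r

def radial_force(r):
    """Central force -dV/dr plus the outward centrifugal term l^2/(mu r^3)."""
    return -k / r**2 + l**2 / (mu * r**3)

def energy(r, rdot):
    """1D energy with the effective potential V(r) + l^2/(2 mu r^2)."""
    return 0.5 * mu * rdot**2 - k / r + l**2 / (2 * mu * r**2)

r, rdot, dt = 1.0, 0.0, 1e-4
E0 = energy(r, rdot)
for _ in range(100000):           # velocity-Verlet on the 1D radial problem
    a = radial_force(r) / mu
    r += rdot * dt + 0.5 * a * dt**2
    rdot += 0.5 * (a + radial_force(r) / mu) * dt
print(energy(r, rdot) - E0)       # ~0: r oscillates between the turning points
```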
Of course, if one remains entirely within the one-dimensional formulation, enters only as some imposed parameter of the external outward force, and its interpretation as angular momentum depends upon the more general two-dimensional problem from which the one-dimensional problem originated.
If one arrives at this equation using Newtonian mechanics in a co-rotating frame, the interpretation is evident as the centrifugal force in that frame due to the rotation of the frame itself. If one arrives at this equation directly by using the generalized coordinates and simply following the Lagrangian formulation without thinking about frames at all, the interpretation is that the centrifugal force is an outgrowth of using polar coordinates. As Hildebrand says:
"Since such quantities are not true physical forces, they are often called inertia forces. Their presence or absence depends, not upon the particular problem at hand, but upon the coordinate system chosen." In particular, if Cartesian coordinates are chosen, the centrifugal force disappears, and the formulation involves only the central force itself, which provides the centripetal force for a curved motion.
This viewpoint, that fictitious forces originate in the choice of coordinates, often is expressed by users of the Lagrangian method. This view arises naturally in the Lagrangian approach, because the frame of reference is (possibly unconsciously) selected by the choice of coordinates. For example, comparisons of Lagrangians in an inertial and in a noninertial frame of reference, and discussions of "total" and "updated" Lagrangian formulations, can be found in the literature. Unfortunately, this usage of "inertial force" conflicts with the Newtonian idea of an inertial force. In the Newtonian view, an inertial force originates in the acceleration of the frame of observation (the fact that it is not an inertial frame of reference), not in the choice of coordinate system. To keep matters clear, it is safest to refer to the Lagrangian inertial forces as generalized inertial forces, to distinguish them from the Newtonian vector inertial forces. That is, one should avoid following Hildebrand when he says (p. 155) "we deal always with generalized forces, velocities, accelerations, and momenta. For brevity, the adjective 'generalized' will be omitted frequently."
It is known that the Lagrangian of a system is not unique. Within the Lagrangian formalism the Newtonian fictitious forces can be identified by the existence of alternative Lagrangians in which the fictitious forces disappear, sometimes found by exploiting the symmetry of the system.
Extensions to include non-conservative forces
Dissipative forces
Dissipation (i.e. non-conservative systems) can also be treated with an effective Lagrangian formulated by a certain doubling of the degrees of freedom.
In a more general formulation, the forces could be both conservative and viscous. If an appropriate transformation can be found from the Fi, Rayleigh suggests using a dissipation function, D, of the following form:

$$D = \frac{1}{2}\sum_{j}\sum_{k} C_{jk}\,\dot{q}_{j}\dot{q}_{k},$$

where Cjk are constants that are related to the damping coefficients in the physical system, though not necessarily equal to them. If D is defined this way, then

$$Q_{j} = -\frac{\partial V}{\partial q_{j}} - \frac{\partial D}{\partial \dot{q}_{j}}$$

and

$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_{j}} - \frac{\partial L}{\partial q_{j}} + \frac{\partial D}{\partial \dot{q}_{j}} = 0.$$
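As a sanity check (an added sketch assuming SymPy, with a single coordinate and D = c q̇²/2), the modified equation reproduces the familiar damped mass-spring equation:

```python
import sympy as sp

t = sp.symbols('t')
m, c, k = sp.symbols('m c k', positive=True)
q = sp.Function('q')(t)
v = sp.diff(q, t)

L = m * v**2 / 2 - k * q**2 / 2     # mass-spring Lagrangian
D = c * v**2 / 2                    # Rayleigh dissipation function

# Modified Lagrange equation: d/dt(dL/dv) - dL/dq + dD/dv = 0
eom = sp.diff(sp.diff(L, v), t) - sp.diff(L, q) + sp.diff(D, v)
print(sp.Eq(eom, 0))                # m*q'' + c*q' + k*q = 0
```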
Electromagnetism
A test particle is a particle whose mass and charge are assumed to be so small that its effect on the external system is insignificant. It is often a hypothetical simplified point particle with no properties other than mass and charge. Real particles like electrons and up quarks are more complex and have additional terms in their Lagrangians. Not only can the fields form non-conservative potentials, these potentials can also be velocity dependent.
The Lagrangian for a charged particle with electrical charge q, interacting with an electromagnetic field, is the prototypical example of a velocity-dependent potential. The electric scalar potential φ = φ(r, t) and magnetic vector potential A = A(r, t) are defined from the electric field E = E(r, t) and magnetic field B = B(r, t) as follows:

$$\mathbf{E} = -\nabla\phi - \frac{\partial\mathbf{A}}{\partial t}, \qquad \mathbf{B} = \nabla\times\mathbf{A}.$$

The Lagrangian of a massive charged test particle in an electromagnetic field

$$L = \tfrac{1}{2} m\, \dot{\mathbf{r}}\cdot\dot{\mathbf{r}} + q\,\dot{\mathbf{r}}\cdot\mathbf{A} - q\phi$$

is called minimal coupling. This is a good example of when the common rule of thumb that the Lagrangian is the kinetic energy minus the potential energy is incorrect. Combined with the Euler–Lagrange equation, it produces the Lorentz force law

$$m\ddot{\mathbf{r}} = q\mathbf{E} + q\,\dot{\mathbf{r}}\times\mathbf{B}.$$
Under gauge transformation:

$$\mathbf{A} \rightarrow \mathbf{A} + \nabla f, \qquad \phi \rightarrow \phi - \frac{\partial f}{\partial t},$$

where f(r, t) is any scalar function of space and time, the aforementioned Lagrangian transforms like:

$$L \rightarrow L + q\,\frac{df}{dt},$$

which still produces the same Lorentz force law.
Note that the canonical momentum (conjugate to position r) is the kinetic momentum plus a contribution from the field (known as the potential momentum):

$$\mathbf{p} = \frac{\partial L}{\partial \dot{\mathbf{r}}} = m\dot{\mathbf{r}} + q\mathbf{A}.$$

This relation is also used in the minimal coupling prescription in quantum mechanics and quantum field theory. From this expression, we can see that the canonical momentum is not gauge invariant, and therefore not a measurable physical quantity; however, if r is cyclic (i.e. the Lagrangian is independent of position r), which happens if the φ and A fields are uniform, then this canonical momentum given here is the conserved momentum, while the measurable physical kinetic momentum is not.
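This is easy to see numerically. The sketch below (an added illustration assuming NumPy; units and field values are arbitrary) integrates the Lorentz force in a uniform field B = B ẑ with the gauge A = (−By, 0, 0), in which x is cyclic: the canonical px = mvx + qAx stays constant while the kinetic momentum mvx gyrates.

```python
import numpy as np

q_charge, m, B = 1.0, 1.0, 2.0     # uniform B along z; gauge A = (-B*y, 0, 0)

def lorentz(state):
    """Planar Lorentz-force dynamics: m v' = q v x B with B = B z-hat."""
    x, y, vx, vy = state
    return np.array([vx, vy, (q_charge * B / m) * vy, -(q_charge * B / m) * vx])

s, dt = np.array([0.0, 0.0, 1.0, 0.5]), 1e-4
for _ in range(50000):             # RK4 time stepping
    k1 = lorentz(s); k2 = lorentz(s + dt/2*k1)
    k3 = lorentz(s + dt/2*k2); k4 = lorentz(s + dt*k3)
    s = s + dt * (k1 + 2*k2 + 2*k3 + k4) / 6

x, y, vx, vy = s
print(m * vx)                          # kinetic momentum: varies over the orbit
print(m * vx + q_charge * (-B * y))    # canonical p_x: conserved, stays 1.0
```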
Other contexts and formulations
The ideas in Lagrangian mechanics have numerous applications in other areas of physics, and can adopt generalized results from the calculus of variations.
Alternative formulations of classical mechanics
A closely related formulation of classical mechanics is Hamiltonian mechanics. The Hamiltonian is defined by
and can be obtained by performing a Legendre transformation on the Lagrangian, which introduces new variables canonically conjugate to the original variables. For example, given a set of generalized coordinates, the variables canonically conjugate are the generalized momenta. This doubles the number of variables, but makes differential equations first order. The Hamiltonian is a particularly ubiquitous quantity in quantum mechanics (see Hamiltonian (quantum mechanics)).
Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, which is not often used in practice but an efficient formulation for cyclic coordinates.
Momentum space formulation
The Euler–Lagrange equations can also be formulated in terms of the generalized momenta rather than generalized coordinates. Performing a Legendre transformation on the generalized coordinate Lagrangian obtains the generalized momenta Lagrangian in terms of the original Lagrangian, as well the EL equations in terms of the generalized momenta. Both Lagrangians contain the same information, and either can be used to solve for the motion of the system. In practice generalized coordinates are more convenient to use and interpret than generalized momenta.
Higher derivatives of generalized coordinates
There is no mathematical reason to restrict the derivatives of generalized coordinates to first order only. It is possible to derive modified EL equations for a Lagrangian containing higher order derivatives; see Euler–Lagrange equation for details. However, from the physical point of view there is an obstacle to including time derivatives higher than the first order, which is implied by Ostrogradsky's construction of a canonical formalism for nondegenerate higher derivative Lagrangians; see Ostrogradsky instability.
Optics
Lagrangian mechanics can be applied to geometrical optics, by applying variational principles to rays of light in a medium, and solving the EL equations gives the equations of the paths the light rays follow.
Relativistic formulation
Lagrangian mechanics can be formulated in special relativity and general relativity. Some features of Lagrangian mechanics are retained in the relativistic theories but difficulties quickly appear in other respects. In particular, the EL equations take the same form, and the connection between cyclic coordinates and conserved momenta still applies, however the Lagrangian must be modified and is not simply the kinetic minus the potential energy of a particle. Also, it is not straightforward to handle multiparticle systems in a manifestly covariant way, it may be possible if a particular frame of reference is singled out.
Quantum mechanics
In quantum mechanics, action and quantum-mechanical phase are related via the Planck constant, and the principle of stationary action can be understood in terms of constructive interference of wave functions.
In 1948, Feynman discovered the path integral formulation extending the principle of least action to quantum mechanics for electrons and photons. In this formulation, particles travel every possible path between the initial and final states; the probability of a specific final state is obtained by summing over all possible trajectories leading to it. In the classical regime, the path integral formulation cleanly reproduces Hamilton's principle, and Fermat's principle in optics.
Classical field theory
In Lagrangian mechanics, the generalized coordinates form a discrete set of variables that define the configuration of a system. In classical field theory, the physical system is not a set of discrete particles, but rather a continuous field defined over a region of 3D space. Associated with the field is a Lagrangian density
defined in terms of the field and its space and time derivatives at a location r and time t. Analogous to the particle case, for non-relativistic applications the Lagrangian density is also the kinetic energy density of the field, minus its potential energy density (this is not true in general, and the Lagrangian density has to be "reverse engineered"). The Lagrangian is then the volume integral of the Lagrangian density over 3D space
where d3r is a 3D differential volume element. The Lagrangian is a function of time since the Lagrangian density has implicit space dependence via the fields, and may have explicit spatial dependence, but these are removed in the integral, leaving only time in as the variable for the Lagrangian.
Noether's theorem
The action principle, and the Lagrangian formalism, are tied closely to Noether's theorem, which connects physical conserved quantities to continuous symmetries of a physical system.
If the Lagrangian is invariant under a symmetry, then the resulting equations of motion are also invariant under that symmetry. This characteristic is very helpful in showing that theories are consistent with either special relativity or general relativity.
See also
Canonical coordinates
Fundamental lemma of the calculus of variations
Functional derivative
Generalized coordinates
Hamiltonian mechanics
Hamiltonian optics
Inverse problem for Lagrangian mechanics, the general topic of finding a Lagrangian for a system given the equations of motion.
Lagrangian and Eulerian specification of the flow field
Lagrangian point
Lagrangian system
Non-autonomous mechanics
Plateau's problem
Restricted three-body problem
Footnotes
Notes
References
The Principle of Least Action, R. Feynman
Further reading
Gupta, Kiran Chandra, Classical mechanics of particles and rigid bodies (Wiley, 1988).
Goldstein, Herbert, et al. Classical Mechanics. 3rd ed., Pearson, 2002.
External links
Principle of least action interactive – an interactive explanation/webpage
Joseph Louis de Lagrange - Œuvres complètes (Gallica-Math)
Constrained motion and generalized coordinates, page 4
Dynamical systems
Mathematical physics | Lagrangian mechanics | [
"Physics",
"Mathematics"
] | 9,156 | [
"Lagrangian mechanics",
"Classical mechanics",
"Dynamical systems"
] |
23,373,071 | https://en.wikipedia.org/wiki/Pseudospark%20switch | The pseudospark switch is a gas-filled tube capable of high speed switching. Pseudospark switches are functionally similar to triggered spark gaps.
Advantages of pseudospark switches include the ability to carry reverse currents (up to 100%), low jitter, long lifetime, and a high current rise of about 10¹² A/s. In addition, since the cathode is not heated prior to switching, the standby power is approximately one order of magnitude lower than in thyratrons. However, pseudospark switches exhibit undesired plasma phenomena at low peak currents. Issues such as current quenching, chopping, and impedance fluctuations occur at currents less than 2–3 kA, while at very high peak currents (20–30 kA) a transition to a metal vapor arc occurs, which leads to erosion of the electrodes.
Construction
A pseudospark switch's electrodes (cathode and anode) have central holes approximately 3 to 5 mm in diameter. Behind the cathode and anode lie a hollow cathode and hollow anode, respectively. The electrodes are separated by an insulator. A low pressure (less than 50 Pa) "working gas" (typically hydrogen) is contained between the electrodes.
While a pseudospark switch is generally fairly simple in construction, engineering a switch for higher lifetimes is more difficult. One method of extending the lifetime is to create a multichannel pseudospark switch to distribute the current and as a result, decrease the erosion. Another method is to simply use cathode materials more resistant to erosion.
Typical electrode materials include copper, nickel, tungsten/rhenium, molybdenum, tantalum, and ceramic materials. Tantalum, however, cannot be used with hydrogen due to chemical erosion affecting the lifetime adversely. Of the metals, tungsten and molybdenum are commonly used, though molybdenum electrodes show issues with reignition behavior. Several papers which compare electrode materials claim tungsten is the most suitable of the metal electrodes tested. Some ceramic materials such as silicon carbide and boron carbide have proven to be excellent electrode materials as well, with lower erosion rates than tungsten in certain cases.
Pseudospark discharge
In a pseudospark discharge a breakdown is first triggered between the electrodes by applying a voltage. The gas then breaks down as a function of the pressure, distance, and voltage. An "ionization avalanche" then occurs producing a homogeneous discharge plasma confined to the central regions of the electrodes.
The pseudospark discharge develops through several stages. Stage (I) is the triggering or low current phase. The discharges in both stage (II), the hollow cathode phase, and stage (III), the borehole phase, are capable of carrying currents of several hundred amps. The transition from the borehole phase to the high current phase (IV) is very fast, characterized by a sudden jump in switch impedance. The last phase (V) only occurs for currents of several tens of kA and is unwelcome, as it results in high erosion rates.
See also
Ignitron
Krytron
IGBT
Thyratron
References
Further reading
External links
Gas-filled tubes
Switching tubes
switch
Electrical breakdown | Pseudospark switch | [
"Physics"
] | 680 | [
"Physical phenomena",
"Plasma physics",
"Plasma technology and applications",
"Electrical phenomena",
"Electrical breakdown"
] |
23,374,329 | https://en.wikipedia.org/wiki/ChemBioChem | ChemBioChem is a peer-reviewed scientific journal covering chemical biology, synthetic biology, and bio-nanotechnology and published by Wiley-VCH on behalf of Chemistry Europe. The journal publishes communications, full papers, reviews, minireviews, highlights, concepts, book reviews, and conference reports. Viewpoints, correspondence, essays, web sites, and databases are also occasionally featured. It is abstracted and indexed in major databases and has been online-only since 2016.
According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.461.
References
External links
Chemistry Europe academic journals
Academic journals established in 2000
Biochemistry journals
English-language journals
Online-only journals
Journals published between 13 and 25 times per year
Wiley-VCH academic journals | ChemBioChem | [
"Chemistry"
] | 158 | [
"Biochemistry stubs",
"Biochemistry journals",
"Biochemistry literature",
"Biochemistry journal stubs"
] |
23,375,296 | https://en.wikipedia.org/wiki/Deflagration%20to%20detonation%20transition | Deflagration to detonation transition (DDT) refers to a phenomenon in ignitable mixtures of a flammable gas and air (or oxygen) when a sudden transition takes place from a deflagration type of combustion to a detonation type of explosion.
Description
A deflagration is characterized by a subsonic flame propagation velocity, typically far below 100 m/s, and relatively modest overpressures, typically below 0.5 bar. The main mechanism of combustion propagation is of a flame front that moves forward through the gas mixture - in technical terms the reaction zone (chemical combustion) progresses through the medium by processes of diffusion of heat and mass. In its most benign form, a deflagration may simply be a flash fire.
In contrast, a detonation is characterized by supersonic flame propagation velocities, perhaps up to 2000 m/s, and substantial overpressures, up to 20 bar. The main mechanism of detonation propagation is of a powerful pressure wave that compresses the unburnt gas ahead of the wave to a temperature above the autoignition temperature. In technical terms, the reaction zone (chemical combustion) is a self-driven shock wave where the reaction zone and the shock are coincident, and the chemical reaction is initiated by the compressive heating caused by the shock wave. The process is similar to ignition in a Diesel engine, but much more sudden and violent.
Under certain conditions, mainly in terms of geometrical conditions (such as partial confinement and many obstacles in the flame path that cause turbulent flame eddy currents), a subsonic flame front may accelerate to supersonic speed, transitioning from deflagration to detonation. The exact mechanism is not fully understood, and while existing theories are able to explain and model both deflagrations and detonations, there is no theory which can predict the transition phenomenon.
Examples
A deflagration to detonation transition has been a feature of several major industrial accidents:
1970 propane vapor cloud explosion in Port Hudson
Flixborough disaster
Phillips disaster of 1989 in Pasadena, Texas
Damage observed in the Buncefield fire
2020 Beirut explosions
Applications
The phenomenon is exploited in pulse detonation engines, because a detonation produces a more efficient combustion of the reactants than a deflagration does, i.e. gives a higher yield. Such engines typically employ a Shchelkin spiral in the combustion chamber to facilitate the deflagration to detonation transition.
The mechanism has also found military use in thermobaric weapons.
Related phenomena
An analogous deflagration to detonation transition (DDT) has also been proposed for the thermonuclear reactions responsible for the initiation of supernovae. This process has been called a "carbon detonation".
See also
Zeldovich spontaneous wave
Dust explosion
Pressure piling
Boiling liquid expanding vapor explosion (BLEVE)
References
Combustion
Industrial fires and explosions
Explosives engineering | Deflagration to detonation transition | [
"Chemistry",
"Engineering"
] | 584 | [
"Explosives engineering",
"Industrial fires and explosions",
"Combustion",
"Explosions"
] |
23,376,161 | https://en.wikipedia.org/wiki/Ligation-independent%20cloning | Ligation-independent cloning (LIC) is a form of molecular cloning that can be performed without the use of restriction endonucleases or DNA ligase. The technique was developed in the early 1990s as an alternative to restriction enzyme/ligase cloning. This allows genes to be cloned without the requirement of a restriction site for cloning that is absent from the gene insert. LIC uses long complementary overhangs on the vector and the DNA insert to create a stable association between them.
Steps in Procedure
Design PCR primers with LIC extensions (a primer-construction sketch follows this list)
Perform PCR to amplify gene
Purify PCR product
Create 5' overhangs
Incubate vector and PCR product to anneal
Transform
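A hypothetical helper for the primer-design step (added for illustration only; the extension strings below are placeholders, not the tags of any real LIC vector system — substitute the sequences supplied with your vector):

```python
# Placeholder 5' extensions; NOT real LIC tags. Replace with the sequences
# that match the chew-back chemistry of your chosen vector.
FWD_EXT = "TACTTCCAATCC"
REV_EXT = "TTATCCACTTCC"

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence (A/C/G/T only)."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def lic_primers(orf: str, anneal_len: int = 20) -> tuple[str, str]:
    """Build forward/reverse LIC primers from an ORF given 5'->3' on the
    top strand: gene-specific annealing region plus the LIC extension."""
    fwd = FWD_EXT + orf[:anneal_len]
    rev = REV_EXT + revcomp(orf)[:anneal_len]
    return fwd, rev

fwd, rev = lic_primers("ATGAAAGCTTTGGTT" + "ACGT" * 20 + "TAA")
print(fwd, rev, sep="\n")
```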
References
Further reading
LIC Primer Design
Cloning | Ligation-independent cloning | [
"Engineering",
"Biology"
] | 160 | [
"Cloning",
"Genetic engineering"
] |
38,986,349 | https://en.wikipedia.org/wiki/Indium%20gallium%20arsenide%20phosphide | Indium gallium arsenide phosphide () is a quaternary compound semiconductor material, an alloy of gallium arsenide, gallium phosphide, indium arsenide, or indium phosphide. This compound has applications in photonic devices, due to the ability to tailor its band gap via changes in the alloy mole ratios, x and y.
Indium phosphide-based photonic integrated circuits, or PICs, commonly use alloys of to construct quantum wells, waveguides and other photonic structures, lattice matched to an InP substrate, enabling single-crystal epitaxial growth onto InP.
Many devices operating in the near-infrared 1.55 μm wavelength window utilize this alloy, and are employed as optical components (such as laser transmitters, photodetectors and modulators) in C-band communications systems.
Fraunhofer Institute for Solar Energy Systems ISE reported a triple-junction solar cell utilizing this alloy. The cell has a very high efficiency of 35.9% (claimed to be a record).
See also
Indium gallium phosphide
Gallium indium arsenide antimonide phosphide
Solar cell efficiency
References
External links
http://www.ioffe.ru/SVA/NSM/Semicond/GaInAsP/
III-V semiconductors
Indium compounds
Gallium compounds
Arsenides
Phosphides | Indium gallium arsenide phosphide | [
"Physics",
"Chemistry",
"Materials_science"
] | 299 | [
"Materials science stubs",
"Semiconductor materials",
"Condensed matter physics",
"Condensed matter stubs",
"III-V semiconductors"
] |
38,989,070 | https://en.wikipedia.org/wiki/Lake%20ball | A lake ball (also known as a surf ball, beach ball or spill ball) is a ball of debris found on ocean beaches and lakes large enough to have wave action. The rolling motion of the waves gathers debris in the water and eventually will form the materials into a ball. The materials vary from year to year and from location to location depending on the debris the motion gathers.
The earliest known reference to lake balls is in Henry David Thoreau's Walden.
Larch balls
A specific type of lake ball, a larch ball is a structure created when Western Larch needles floating in a lake become entangled in a spherical shape due to the action of waves. They are most commonly known to form in Seeley Lake, Montana; however, they have also been known to form in similar regions such as Clark Fork and lakes in Tracy, New Brunswick such as Peltoma Lake, Big Kedron Lake, and Little Kedron Lake. Typical specimens are 3 to 4 inches (8 to 10 centimeters) in diameter. More rarely, larger ones are found.
References
Hydrology
Habitat | Lake ball | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 212 | [
"Hydrology",
"Environmental engineering"
] |
40,284,592 | https://en.wikipedia.org/wiki/Chiral%20Potts%20model | The chiral Potts model is a spin model on a planar lattice in statistical mechanics studied by Helen Au-Yang Perk and Jacques Perk, among others. It may be viewed as a generalization of the Potts model, and as with the Potts model, the model is defined by configurations which are assignments of spins to each vertex of a graph, where each spin can take one of values. To each edge joining vertices with assigned spins and , a Boltzmann weight is assigned. For this model, chiral means that . When the weights satisfy the Yang–Baxter equation, it is integrable, in the sense that certain quantities can be exactly evaluated.
For the integrable chiral Potts model, the weights are defined by a high genus curve, the chiral Potts curve.
Unlike the other solvable models, whose weights are parametrized by curves of genus less than or equal to one, so that they can be expressed in terms of trigonometric or rational functions in the genus-zero case, or in terms of theta functions in the genus-one case, this model involves high-genus theta functions, for which the theory is less well developed.
The related chiral clock model, which was introduced in the 1980s by David Huse and Stellan Ostlund independently, is not exactly solvable, in contrast to the chiral Potts model.
The model
This model lies outside the class of all previously known models and raises a host of unsolved questions related to some of the most intractable problems of algebraic geometry, problems which have been with us for 150 years. The chiral Potts models are used to understand commensurate–incommensurate phase transitions. For N = 3 and 4, the integrable case was discovered in 1986 in Stony Brook and published the following year.
Self-dual case
The model is called self-dual if the Fourier transform of the weight function returns the same function. A special (genus 1) case had been solved in 1982 by Fateev and Zamolodchikov.
By removing certain restrictions of the work of Alcaraz and Santos, a more general self-dual case of the integrable chiral Potts model was discovered. The weights are given in product form, and the parameters in the weights are shown to lie on the Fermat curve, with genus greater than 1.
General case
The general solution for all k (the temperature variable) was found. The weights were also given in product form, and it was verified computationally (in Fortran) that they satisfy the star–triangle relation. The proof was published later.
Results
Order parameter
From the series expansions, the order parameter was conjectured to have the simple form
$$\langle \sigma^n \rangle = \left(1 - k'^2\right)^{n(N-n)/(2N^2)}.$$
It took many years to prove this conjecture, as the usual corner transfer matrix technique could not be used because of the higher-genus curve. The conjecture was proven by Baxter in 2005 using functional equations and the "broken rapidity line" technique of Jimbo et al., assuming two mild analyticity conditions of the type commonly used in the field of Yang–Baxter integrable models. More recently, in a series of papers, an algebraic (Ising-like) way of obtaining the order parameter has been given, offering more insight into the algebraic structure.
Connection to six vertex model
In 1990 Bazhanov and Stroganov showed that there exist L-operators (Lax operators) which satisfy the Yang–Baxter equation in its RLL form, where the 2 × 2 R-operator (R-matrix) is the six-vertex model R-matrix (see Vertex model).
The product of four chiral Potts weights was shown to intertwine two such L-operators. This inspired a breakthrough: the functional relations for the transfer matrices of the chiral Potts models were discovered.
Free energy and interfacial tension
Using these functional relations, Baxter was able to calculate the eigenvalues of the transfer matrix of the chiral Potts model and obtained the critical exponent for the specific heat, α = 1 − 2/N, which had also been conjectured in reference 12. The interfacial tension was also calculated by him, with the exponent μ = 1/2 + 1/N.
Relation with knot theory
The integrable chiral Potts weights are given in product form, where $\omega = e^{2\pi i/N}$ is a primitive $N$th root of unity and each rapidity variable $p$ is associated with three variables lying on the chiral Potts curve. The weights satisfy a simple identity analogous to Reidemeister move I, as well as an inversion relation equivalent to Reidemeister move II. The star–triangle relation itself is equivalent to Reidemeister move III.
See also
Z N model
References
Lattice models
Spin models
Statistical mechanics | Chiral Potts model | [
"Physics",
"Materials_science"
] | 964 | [
"Spin models",
"Quantum mechanics",
"Lattice models",
"Computational physics",
"Condensed matter physics",
"Statistical mechanics"
] |
40,286,795 | https://en.wikipedia.org/wiki/MYF5 | Myogenic factor 5 is a protein that in humans is encoded by the MYF5 gene.
It is a protein with a key role in regulating muscle differentiation or myogenesis, specifically the development of skeletal muscle. Myf5 belongs to a family of proteins known as myogenic regulatory factors (MRFs). These basic helix-loop-helix transcription factors act sequentially in myogenic differentiation. MRF family members include Myf5, MyoD (Myf3), myogenin, and MRF4 (Myf6). This transcription factor is the earliest of all MRFs to be expressed in the embryo, where it is markedly expressed for only a few days (specifically around 8 days post-somite formation and lasting until day 14 post-somite in mice). It functions during that time to commit myogenic precursor cells to become skeletal muscle. In fact, its expression in proliferating myoblasts has led to its classification as a determination factor. Furthermore, Myf5 is a master regulator of muscle development, possessing the ability to induce a muscle phenotype upon its forced expression in fibroblastic cells.
Expression
Myf5 is expressed in the dermomyotome of the early somites, pushing the myogenic precursors to undergo determination and differentiate into myoblasts. Specifically, it is first seen in the dorsomedial portion of the dermomyotome, which develops into the epaxial myotome. Although it is expressed in both the epaxial (to become muscles of the back) and hypaxial (body wall and limb muscles) portions of the myotome, it is regulated differently in these tissue lines, providing part of their alternative differentiation. Most notably, while Myf5 is activated by Sonic hedgehog in the epaxial lineage, it is instead directly activated by the transcription factor Pax3 in hypaxial cells. The limb myogenic precursors (derived from the hypaxial myotome) do not begin expressing Myf5 or any MRFs, in fact, until after migration to the limb buds. Myf5 is also expressed in non-somitic paraxial mesoderm that forms muscles of the head, at least in zebrafish.
While the product of this gene is capable of directing cells towards the skeletal muscle lineage, it is not absolutely required for this process. Numerous studies have shown redundancy with two other MRFs, MyoD and MRF4. The absence of all three of these factors results in a phenotype with no skeletal muscle. These studies were performed after it was shown that Myf5 knockouts had no clear abnormality in their skeletal muscle. The high redundancy of this system shows how crucial the development of skeletal muscle is to the viability of the fetus. Some evidence shows that Myf5 and MyoD are responsible for the development of separate muscle lineages, and are not expressed concurrently in the same cell. Specifically, while Myf5 plays a large role in the initiation of epaxial development, MyoD directs the initiation of hypaxial development, and these separate lineages can compensate for the absence of one or the other. This has led some to claim that they are not indeed redundant, though this depends on the definition of the word. Still, the existence of these separate “MyoD-dependent” and “Myf5-dependent” subpopulations has been disputed, with some claiming that these MRFs are indeed coexpressed in muscle progenitor cells. This debate is ongoing.
Although Myf5 is mainly associated with myogenesis, it is expressed in other tissues, as well. Firstly, it is expressed in brown adipose precursors. However, its expression is limited to brown and not white adipose precursors, providing part of the developmental separation between these two lineages. Furthermore, Myf5 is expressed in portions of the neural tube (that go on to form neurons) a few days after it is seen in the somites. This expression is eventually repressed to prevent extraneous muscle formation. Although the specific roles and dependency of Myf5 in adipogenesis and neurogenesis have remained to be explored, these findings show that Myf5 may play roles outside of myogenesis. Myf5 also has an indirect role controlling proximal rib development. Although Myf5 knockouts have normal skeletal muscle, they die due to abnormalities in their proximal ribs that make it difficult to breathe.
Despite only being present for a few days during embryonic development, Myf5 is still expressed in certain adult cells. As one of the key cell markers of satellite cells (the stem cell pool for skeletal muscles), it plays an important role in the regeneration of adult muscle. Specifically, it allows a brief pulse of proliferation of these satellite cells in response to injury. Differentiation begins (regulated by other genes) after this initial proliferation. In fact, if Myf5 is not downregulated, differentiation does not occur.
In zebrafish, Myf5 is the first MRF expressed in embryonic myogenesis and is required for adult viability, even though larval muscle forms normally. As no muscle is formed in Myf5;Myod double mutant zebrafish, Myf5 cooperates with Myod to promote myogenesis.
Regulation
The regulation of Myf5 is dictated by a large number of enhancer elements that allow a complex system of regulation. Although most events throughout myogenesis that involve Myf5 are controlled through the interaction of multiple enhancers, there is one important early enhancer that initiates expression. Termed the early epaxial enhancer, its activation provides the "go" signal for expression of Myf5 in the epaxial dermomyotome, where it is first seen. Sonic hedgehog from the neural tube acts at this enhancer to activate it. Following that, the chromosome contains different enhancers for regulation of Myf5 expression in the hypaxial region, cranial region, limbs, etc. This early expression of Myf5 in the epaxial dermamyotome is involved with the very formation of myotome, but nothing beyond that. After its initial expression, other enhancer elements dictate where and how long it is expressed. It remains clear that each population of myogenic progenitor cells (for different locations in the embryo) is regulated by a different set of enhancers.
Clinical significance
As for its clinical significance, the aberration of this transcription factor provides part of the mechanism for how hypoxia (lack of oxygen) can influence muscle development. Hypoxia has the ability to impede muscle differentiation in part by inhibiting the expression of Myf5 (as well as other MRFs). This prevents the muscle precursors from becoming post-mitotic muscle fibers. Although hypoxia is a teratogen, this inhibition of expression is reversible, therefore it remains unclear if there is a connection between hypoxia and birth defects in the fetus.
References
Further reading
Transcription factors
hu:MyoD | MYF5 | [
"Chemistry",
"Biology"
] | 1,469 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
40,287,138 | https://en.wikipedia.org/wiki/Olga%20Bondareva | Olga Nikolaevna Bondareva (April 27, 1937 – December 9, 1991) was a distinguished Soviet mathematician and economist. She contributed to the fields of mathematical economics, especially game theory.
Bondareva is best known as one of the two independent discoverers of the Bondareva–Shapley theorem.
Biography
In 1954 she entered the Mathematics and Mechanics Faculty of Leningrad State University, receiving her kandidat degree in 1963 under the supervision of Nikolai Vorobyov. She defended her doktor nauk degree in 1984 at the Faculty of Computational Mathematics and Cybernetics, Moscow State University.
From October 1959 to April 1972 she worked as a junior researcher, then associate professor (in operations research), and then senior researcher at the Mathematics and Mechanics Faculty of Leningrad State University. From June 1972 to July 1984 she was a senior researcher at the Economics Faculty of Leningrad State University, from July 1984 to March 1989 a senior researcher at the Institute of Physics, and from October 1989 until her death in 1991 a leading researcher at the Mathematics and Mechanics Faculty of Leningrad State University.
She was married to Lev Alexandrovich Gordon, and had two sons: Maxim (b. 1966) and Gregory (b. 1974). She was killed in a car accident while crossing the street in St. Petersburg.
Academic career
O. N. Bondareva has published more than 70 scientific papers on game theory and mathematics. She was a member of the editorial board of the international journal Games and Economic Behavior. Her work on cooperative game theory has received international recognition.
The most famous result of Bondareva, obtained during her PhD studies, gives the necessary and sufficient conditions for the core of a cooperative game with transferable utility to be non-empty. It was published in the collection "Problems of Cybernetics", a prestigious publication, but one not translated into English, and the result initially went unnoticed in the West. In 1967, a similar result was published by Lloyd Shapley. When Shapley learned of Bondareva's earlier publication, he unconditionally acknowledged her priority, which ensured universal recognition of her result.
This theorem uses the notion of a balanced covering, an analog of a partition of unity in topology. A balanced covering assigns a non-negative weight to each coalition such that, for every player, the weights of all coalitions containing that player sum to one. The Bondareva–Shapley theorem states that the core is non-empty if and only if, for every balanced covering, the weighted sum of the characteristic-function values over all coalitions does not exceed the value of the characteristic function for the grand coalition. For a small number of players, this theorem makes it practical to analyze any particular game completely. In addition, it makes it possible to establish that the core is non-empty in some classes of games regardless of the number of players, for example in convex games.
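Because the balancedness condition is exactly the linear-programming dual of a small optimization problem, core non-emptiness can be checked mechanically, in the spirit of Bondareva's linear-programming approach. The sketch below uses a made-up three-player game and SciPy's linprog; the game values are illustrative assumptions, not an example from her papers.

```python
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-player TU game: v(S) for every non-empty coalition S.
players = (1, 2, 3)
v = {(1,): 0, (2,): 0, (3,): 0,
     (1, 2): 4, (1, 3): 4, (2, 3): 4, (1, 2, 3): 6}

# Proper non-empty coalitions.
coalitions = [S for r in range(1, len(players)) for S in combinations(players, r)]

# Minimize x1+x2+x3 subject to sum_{i in S} x_i >= v(S) for all proper S.
# By LP duality this is the Bondareva-Shapley condition: the core is
# non-empty iff the optimum does not exceed v(N).
A_ub = np.array([[-1.0 if p in S else 0.0 for p in players] for S in coalitions])
b_ub = np.array([-float(v[S]) for S in coalitions])
res = linprog(c=np.ones(len(players)), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * len(players))

print("core non-empty" if res.fun <= v[players] + 1e-9 else "core empty")
# Here min = 6 = v(N), so the core is non-empty; x = (2, 2, 2) lies in it.
```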
Throughout the 1970s and 1980s, Bondareva studied game-theoretic dominance properties expressed in terms of abstract binary relations, essentially following the example of the seminal monograph of von Neumann and Morgenstern. In particular, she obtained a number of results on the convergence of spaces with a binary relation and on finite approximations.
She was also among the first to publish a theorem on the existence of a maximal element for an acyclic binary relation with open lower contours on a compact set, although her note, published in Russian in conference proceedings (in Vilnius), went unnoticed.
In the late 1970s, Bondareva, together with her students T. E. Kulakovskaya and N. I. Naumova, investigated the problem of the existence of a von Neumann–Morgenstern solution in cooperative games with transferable utility (the possibility of non-existence was already known by that time). In particular, they proved the existence of a solution in any four-player game.
Bibliography
Bondareva O. N. Some applications of linear programming methods to the theory of cooperative games // Problemy Kibernetiki (Problems of Cybernetics), Vol. 10. Moscow: State Publishing House of Physical and Mathematical Literature, 1963, pp. 119–139 (in Russian).
Translated as: Bondareva O. N. Some applications of linear programming to the theory of cooperative games // Selected Russian Papers in Game Theory 1959–1965. Princeton: Princeton University Press, 1968, pp. 79–114.
Bondareva O. N. On game-theoretic models in economics. Leningrad: Leningrad University Press, 1974. 38 pp. (in Russian).
Bondareva O. N. Finite approximations for cores and solutions of cooperative games // Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki. 1976. 16(3):624–633 (in Russian).
Bondareva O. N. Convergence of spaces with a relation and game-theoretic consequences // Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki. 1978. 10(1):84–92 (in Russian).
Bondareva O. N. Remark on the paper "Convergence of spaces with a relation and game-theoretic consequences" (letter to the editor) // Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki. 1980. 20(4):1078–1079 (in Russian).
Bondareva O. N., Kulakovskaya T. E., Naumova N. I. Solution of an arbitrary four-person cooperative game // Vestnik Leningradskogo Universiteta (Mathematics). 1979. 2(7):104–105 (in Russian).
Bondareva O. N. Development of game-theoretic optimization methods in cooperative games and their application to multicriteria problems // The Current State of Operations Research Theory. Moscow: Nauka, 1979, pp. 150–162 (in Russian).
Bondareva O. N. Finite approximations of choice on an infinite set // Izvestiya AN SSSR, Seriya "Tekhnicheskaya Kibernetika". 1987. 1:18–23 (in Russian).
Bondareva O. N. Domination, core and solution (A short survey of Russian results). Discussion Paper No. 185. IMW, University of Bielefeld, 1989.
Bondareva O. N. Revealed fuzzy preferences // Multiperson Decision Making Models Using Fuzzy Sets and Possibility Theory, ed. J. Kacprzyk and M. Fedrizzi. Dordrecht: Kluwer, 1990.
Bondareva O. N., Driessen T. S. H. Extensive coverings and exact core bounds // Games and Economic Behavior. 1994. 6(2):212–219.
Sources
Gordon L. A. Dom (The House). St. Petersburg: Tovarishchestvo Zhurnala "Neva", 1992. 240 pp. (in Russian).
In memoriam: Olga Bondareva (1937–1991) // Games and Economic Behavior. 1992. 4(2):318–324.
Rosenmüller J. Obituary; Kulakovskaja T. E., Naumova N. I. Olga Nikolajevna Bondareva, 1937–1991 // International Journal of Game Theory. 1992. 20(4):309–312.
Kukushkin N. S., Menshikova O. R., Menshikov I. S. Olga Nikolaevna Bondareva (obituary) // Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki. 1992. 32(6):989–990 (in Russian).
Wooders M. "Bondareva, Olga (1937–1991)" // The New Palgrave Dictionary of Economics. 2nd edition. Eds. Steven N. Durlauf and Lawrence E. Blume. Palgrave Macmillan, 2008.
Boldyrev, I. Soviet Mathematics and Economic Theory in the Past Century: A Historical Reappraisal, Journal of Economic Literature. 2024. Vol. 62(4).
References
Game theorists
Mathematical economists
20th-century Russian mathematicians
1937 births
1991 deaths
Soviet economists
Saint Petersburg State University alumni
Mathematicians from Saint Petersburg | Olga Bondareva | [
"Mathematics"
] | 2,521 | [
"Game theorists",
"Game theory"
] |
40,287,988 | https://en.wikipedia.org/wiki/Retrometabolic%20drug%20design | In the field of drug discovery, retrometabolic drug design is a strategy for the design of safer drugs either using predictable metabolism to an inactive moiety or using targeted drug delivery approaches. The phrase retrometabolic drug design was coined by Nicholas Bodor. The method is analogous to retrosynthetic analysis where the synthesis of a target molecule is planned backwards. In retrometabolic drug design, metabolic reaction information of drugs is used to design parent drugs whose metabolism and distribution can be controlled to target and eliminate the drug to increase efficacy and minimize undesirable side effects. The new drugs thus designed achieve selective organ and/or therapeutic site drug targeting and produce safe therapeutic agents and safe environmental chemicals. These approaches represent systematic methodologies that thoroughly integrate structure-activity (SAR) and structure-metabolism (SMR) relationships and are aimed at designing safe, locally active compounds with improved therapeutic index (ratio of benefit vs. side effect).
Classification
The concept of retrometabolic drug design encompasses two distinct approaches. One approach is the design of soft drugs (SDs), new, active therapeutic agents, often isosteric or isoelectronic analogs of a lead compound, with a chemical structure specifically designed to allow predictable metabolism into inactive metabolites after exerting their desired therapeutic effect(s). The other approach is the design of chemical delivery systems (CDSs). CDSs are biologically inert molecules intended to enhance drug delivery to a particular organ or site and requiring several conversion steps before releasing the active drug.
Although both retrometabolic design approaches involve chemical modifications of the molecular structure and both require enzymatic reactions to fulfill drug targeting, the principles of SD and CDS design are distinctly different. While CDSs are inactive as administered, and sequential enzymatic reactions provide the differential distribution and ultimately release the active drug, SDs are active as administered and are designed to be easily metabolized into inactive species. In the ideal case, with a CDS the drug is present at the target site and nowhere else in the body, because the enzymatic conversions that release it occur only at that site. Whereas CDSs are designed to achieve drug targeting at a selected organ or site, SDs are designed to afford a differential distribution that can be regarded as reverse targeting.
Soft drugs
Since its introduction by Nicholas Bodor in the late 1970s, the soft drug concept generated considerable research both in academic and in industrial settings. Bodor defined soft drugs as biologically active, therapeutically useful chemical compounds characterized by a predictable and controllable in vivo metabolism to non-toxic moieties after they achieve their therapeutic role. There are several rationally designed soft drugs that have either already reached the market, such as
esmolol (Brevibloc)
landiolol (Onoact)
remifentanil (Ultiva)
loteprednol etabonate (Lotemax, Alrex, Zylet)
clevidipine (Cleviprex)
remimazolam (Byfavo)
or are in late-stage development (budiodarone, celivarone, AZD3043, tecarfarin). There are also compounds that can be considered soft chemicals (e.g., malathion) or soft drugs (e.g., articaine, methylphenidate) even though they were not developed as such.
Chemical delivery systems
Since their introduction in the early 1980s, CDSs have also generated considerable research work, especially for brain and eye targeting of various therapeutic agents, including those that cannot cross the blood–brain barrier or the blood–retinal barrier on their own. Within this approach, three major general CDS classes have been identified:
Enzymatic physicochemical-based (e.g., brain-targeting) CDSs: exploit site-specific traffic properties by sequential metabolic conversions that result in considerably altered properties
Site-specific enzyme-activated (e.g., eye-targeting) CDSs: exploit specific enzymes found primarily, exclusively, or at higher activity at the site of action
Receptor-based transient anchor-type (e.g., lung-targeting) CDSs: provide enhanced selectivity and activity through transient, reversible binding at the receptor
This concept has been extended to many drugs and peptides, its importance illustrated by the fact that its first applications and uses were published in Science in 1975, 1981 and 1983. Its extension to the targeted brain-delivery of neuropeptides was included by the Harvard Health Letter as one of the top 10 medical advances of 1992. Several compounds have reached advanced clinical development phase, such as
E2-CDS (Estredox) for the brain-targeted delivery of estradiol and
betaxoxime for the eye-targeted delivery of betaxolol
In the first example above, brain-targeted CDSs employ the sequential metabolic conversion of a redox-based targetor moiety, closely related to the ubiquitous NAD(P)H ⇌ NAD(P)+ coenzyme system, to exploit the unique properties of the blood–brain barrier (BBB). After enzymatic oxidation of the NADH-type drug conjugate to its corresponding NAD+–drug form, the still-inactive precursor "locks in" behind the BBB to provide targeted and sustained CNS delivery of the compound of interest.
The second example involves eye-specific delivery of betaxoxime, the oxime derivative of betaxolol. The administered, inactive β-amino-ketoxime is converted to the corresponding ketone via oxime hydrolase, an enzyme recently identified with preferential activity in the eye, and is then stereospecifically reduced to its alcohol form. Intraocular pressure (IOP)-lowering activity is demonstrated without the active β-blocker being produced systemically, leaving the treatment devoid of cardiovascular activity, a major drawback of classical antiglaucoma agents. Because of the advantages provided by this unique eye-targeting profile, oxime-based eye-targeting CDSs could replace the β-blockers currently used for ophthalmic applications.
History and significance
These retrometabolic design strategies were introduced by Nicholas Bodor, one of the first and most prominent advocates for the early integration of metabolic, pharmacokinetic and general physicochemical considerations in the drug design process. These drug design concepts recognize the importance of design-controlled metabolism and focus not on the increase of activity alone but on the increase of the activity/toxicity ratio (therapeutic index), in order to deliver the maximum benefit while also reducing or eliminating unwanted side effects. The importance of this field is reviewed in a book dedicated to the subject (Bodor, N.; Buchwald, P.; Retrometabolic Drug Design and Targeting, 1st ed., Wiley & Sons, 2012), as well as by a full chapter of Burger's Medicinal Chemistry and Drug Design, 7th ed. (2010) with close to 150 chemical structures and more than 450 references. At the time of its introduction, the idea of designed-in metabolism represented a significant novelty and ran against the mainstream thinking then in place, which instead focused on minimizing or entirely eliminating drug metabolism. Bodor's work on these design concepts developed during the late 1970s and early 1980s, and came to prominence during the mid-1990s. Loteprednol etabonate, a soft corticosteroid designed and patented by Bodor, received final Food and Drug Administration (FDA) approval in 1998 as the active ingredient of two ophthalmic preparations (Lotemax and Alrex), currently the only corticosteroid approved by the FDA for use in all inflammatory and allergy-related ophthalmic disorders. Its safety for long-term use further supports the soft drug concept, and in 2004, loteprednol etabonate was also approved as part of a combination product (Zylet). A second generation of soft corticosteroids such as etiprednol dicloacetate is in development for a full spectrum of other possible applications, such as nasal sprays for rhinitis or inhalation products for asthma.
The soft drug concept ignited research work in both academic (e.g., Aston University, Göteborg University, Okayama University, Uppsala University, University of Iceland, University of Florida, Université Louis Pasteur, Yale University) and industrial (e.g., AstraZeneca, DuPont, GlaxoSmithKline, IVAX, Janssen Pharmaceutica, Nippon Organon, Novartis, ONO Pharmaceutical, Schering AG) settings. Besides corticosteroids, various other therapeutic areas have been pursued, such as soft beta-blockers, soft opioid analgesics, soft estrogens, soft beta-agonists, soft anticholinergics, soft antimicrobials, soft antiarrhythmic agents, soft angiotensin converting enzyme (ACE) inhibitors, soft dihydrofolate reductase (DHFR) inhibitors, soft calcineurin inhibitors (soft immunosuppressants), soft matrix metalloproteinase (MMP) inhibitors, soft cytokine inhibitors, soft cannabinoids, and soft Ca2+ channel blockers.
Following the introduction of the CDS concepts, work along those lines started in numerous pharmaceutical centers around the world, and brain-targeting CDSs were explored for many therapeutic agents such as steroids (testosterone, progestins, estradiol, dexamethasone), anti-infective agents (penicillins, sulfonamides), antivirals (acyclovir, trifluorothymidine, ribavirin), antiretrovirals (AZT, ganciclovir), anticancer agents (lomustine, chlorambucil), neurotransmitters (dopamine, GABA), nerve growth factor (NGF) inducers, anticonvulsants (phenytoin, valproate, stiripentol), Ca2+ antagonists (felodipine), MAO inhibitors, NSAIDs and neuropeptides (tryptophan, Leu-enkephalin analogs, TRH analogs, kyotorphin analogs). A number of new chemical entities (NCEs) were developed based on these principles, such as E2-CDS (Estredox) and betaxoxime, which are in advanced clinical development phases.
A review of ongoing research using the general retrometabolic design approaches is conducted biennially at the Retrometabolism Based Drug Design and Targeting Conference, an international series of symposia developed and organized by Nicholas Bodor. Proceedings of each conference held have been published in the international pharmaceutical journal Pharmazie. Past conferences, and their published proceedings are:
May 1997, Amelia Island, Florida; Pharmazie 52(7) S1, 1997
May 1999, Amelia Island, Florida; Pharmazie 55(3), 2000
May 2001, Amelia Island Florida; Pharmazie 57(2), 2002
May 2003, Palm Coast, Florida; Pharmazie 59(5), 2004
May 2005, Hakone, Japan; Pharmazie 61(2), 2006
June 2007, Göd, Hungary; Pharmazie 63(3), 2008
May 2009, Orlando, Florida; Pharmazie 65(6), 2010
June 2011, Graz, Austria; Pharmazie 67(5), 2012
May 2013, Orlando, Florida; Pharmazie 69(6), 2014
October 2015, Orlando, Florida.
References
Drug discovery | Retrometabolic drug design | [
"Chemistry",
"Biology"
] | 2,434 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
46,349,305 | https://en.wikipedia.org/wiki/Protein%20methylation | Protein methylation is a type of post-translational modification featuring the addition of methyl groups to proteins. It can occur on the nitrogen-containing side-chains of arginine and lysine, but also at the amino- and carboxy-termini of a number of different proteins. In biology, methyltransferases catalyze the methylation process, activated primarily by S-adenosylmethionine.
Protein methylation has been most studied in histones, where the transfer of methyl groups from S-adenosyl methionine is catalyzed by histone methyltransferases. Histones that are methylated on certain residues can act epigenetically to repress or activate gene expression.
Methylation by substrate
Multiple sites of proteins can be methylated. For some types of methylation, such as N-terminal methylation and prenylcysteine methylation, additional processing is required, whereas other types of methylation such as arginine methylation and lysine methylation do not require pre-processing.
Arginine
Arginine can be methylated once (monomethylated arginine) or twice (dimethylated arginine). Methylation of arginine residues is catalyzed by three different classes of protein arginine methyltransferases (PRMTs): Type I PRMTs (PRMT1, PRMT2, PRMT3, PRMT4, PRMT6, and PRMT8) attach two methyl groups to a single terminal nitrogen atom, producing asymmetric dimethylarginine (NG,NG-dimethylarginine). In contrast, type II PRMTs (PRMT5 and PRMT9) catalyze the formation of symmetric dimethylarginine, with one methyl group on each terminal nitrogen (symmetric NG,N′G-dimethylarginine). Type I and II PRMTs both generate NG-monomethylarginine intermediates; PRMT7, the only known type III PRMT, produces only monomethylated arginine.
Arginine-methylation usually occurs at glycine and arginine-rich regions referred to as "GAR motifs", which is likely due to the enhanced flexibility of these regions that enables insertion of arginine into the PRMT active site. Nevertheless, PRMTs with non-GAR consensus sequences exist. PRMTs are present in the nucleus as well as in the cytoplasm. In interactions of proteins with nucleic acids, arginine residues are important hydrogen bond donors for the phosphate backbone — many arginine-methylated proteins have been found to interact with DNA or RNA.
Enzymes that facilitate histone acetylation as well as histones themselves can be arginine methylated. Arginine methylation affects the interactions between proteins and has been implicated in a variety of cellular processes, including protein trafficking, signal transduction and transcriptional regulation. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial.
Lysine
Lysine can be methylated once, twice, or three times by lysine methyltransferases (PKMTs). Most lysine methyltransferases contain an evolutionarily conserved SET domain, which possesses S-adenosylmethionine-dependent methyltransferase activity, but are structurally distinct from other S-adenosylmethionine binding proteins. Lysine methylation plays a central part in how histones interact with proteins. Lysine methylation can be reverted by lysine demethylases (PKDMs).
Different SET domain-containing proteins possess distinct substrate specificities. For example, SET1, SET7 and MLL methylate lysine 4 of histone H3, whereas Suv39h1, ESET and G9a specifically methylate lysine 9 of histone H3. Methylation at lysine 4 and lysine 9 are mutually exclusive and the epigenetic consequences of site-specific methylation are diametrically opposed: Methylation at lysine 4 correlates with an active state of transcription, whereas methylation at lysine 9 is associated with transcriptional repression and heterochromatin. Other lysine residues on histone H3 and histone H4 are also important sites of methylation by specific SET domain-containing enzymes. Although histones are the prime target of lysine methyltransferases, other cellular proteins carry N-methyllysine residues, including elongation factor 1A and the calcium sensing protein calmodulin.
N-terminal methylation
Many eukaryotic proteins are post-translationally modified on their N-terminus. A common form of N-terminal modification is N-terminal methylation (Nt-methylation) by N-terminal methyltransferases (NTMTs). Proteins containing the consensus motif H2N-X-Pro-Lys- (where X can be Ala, Pro or Ser) after removal of the initiator methionine (iMet) can be subject to N-terminal α-amino-methylation. Monomethylation may have slight effects on α-amino nitrogen nucleophilicity and basicity, whereas trimethylation (or dimethylation in case of proline) will result in abolition of nucleophilicity and a permanent positive charge on the N-terminal amino group. Although from a biochemical point of view demethylation of amines is possible, Nt-methylation is considered irreversible as no N-terminal demethylase has been described to date.
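As a small illustration of the consensus just described, the sketch below flags sequences whose N-terminus reads X-Pro-Lys (X in {Ala, Pro, Ser}) after a crude initiator-Met removal; the sequences and the simplistic iMet rule are assumptions for demonstration, not a validated predictor.

```python
import re

# Consensus for N-terminal methylation after iMet removal: X-Pro-Lys,
# with X one of Ala, Pro, Ser (one-letter codes A, P, S).
NT_MOTIF = re.compile(r"^[APS]PK")

def nt_methylation_candidate(seq: str) -> bool:
    """seq: one-letter amino-acid sequence as translated (leading Met)."""
    mature = seq[1:] if seq.startswith("M") else seq  # crude iMet removal
    return bool(NT_MOTIF.match(mature))

for s in ("MAPKAAAR", "MSPKQQ", "MGPKQQ"):
    print(s, "->", nt_methylation_candidate(s))
# MAPKAAAR and MSPKQQ match (A-P-K, S-P-K); MGPKQQ does not (Gly not allowed).
```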
Histone variants CENP-A and CENP-B have been found to be Nt-methylated in vivo.
Prenylcysteine
Eukaryotic proteins with C-termini that end in a CAAX motif are often subjected to a series of posttranslational modifications. The CAAX-tail processing takes place in three steps: First, a prenyl lipid anchor is attached to the cysteine through a thioether linkage. Then endoproteolysis occurs to remove the last three amino acids of the protein to expose the prenylcysteine α-COOH group. Finally, the exposed prenylcysteine group is methylated. The importance of this modification can be seen in targeted disruption of the methyltransferase for mouse CAAX proteins, where loss of isoprenylcysteine carboxyl methyltransferase resulted in mid-gestation lethality.
The biological function of prenylcysteine methylation is to facilitate the targeting of CAAX proteins to membrane surfaces within cells. Prenylcysteine can be demethylated and this reverse reaction is catalyzed by isoprenylcysteine carboxyl methylesterases. CAAX box containing proteins that are prenylcysteine methylated include Ras, GTP-binding proteins, nuclear lamins and certain protein kinases. Many of these proteins participate in cell signaling, and they utilize prenylcysteine methylation to concentrate them on the cytosolic surface of the plasma membrane where they are functional.
Methylations on the C-terminus can increase a protein's chemical repertoire and are known to have a major effect on the functions of a protein.
Protein phosphatase 2
In eukaryotic cells, phosphatases catalyze the removal of phosphate groups from tyrosine, serine and threonine phosphoproteins. The catalytic subunit of the major serine/threonine phosphatases, such as protein phosphatase 2, is covalently modified by the reversible methylation of its C-terminus to form a leucine carboxy methyl ester. Unlike CAAX motif methylation, no C-terminal processing is required to facilitate methylation. This C-terminal methylation event regulates the recruitment of regulatory proteins into complexes through the stimulation of protein–protein interactions, thus indirectly regulating the activity of the serine-threonine phosphatase complex. Methylation is catalyzed by a unique protein phosphatase methyltransferase. The methyl group is removed by a specific protein phosphatase methylesterase. These two opposed enzymes make serine-threonine phosphatase methylation a dynamic process in response to stimuli.
L-isoaspartyl
Damaged proteins accumulate isoaspartyl residues, which cause protein instability, loss of biological activity and stimulation of autoimmune responses. The spontaneous age-dependent degradation of L-aspartyl residues results in the formation of a succinimide intermediate. This is spontaneously hydrolyzed either back to L-aspartyl or, in the more favored reaction, to abnormal L-isoaspartyl. A methyltransferase-dependent pathway exists for the conversion of L-isoaspartyl back to L-aspartyl. To prevent the accumulation of L-isoaspartyl, this residue is methylated by the protein L-isoaspartyl methyltransferase, which catalyzes the formation of a methyl ester; this in turn is converted back to the succinimide intermediate.
Loss and gain of function mutations have unmasked the biological importance of the L-isoaspartyl O-methyltransferase in age-related processes: Mice lacking the enzyme die young of fatal epilepsy, whereas flies engineered to over-express it have an increase in life span of over 30%.
Physical effects
A common theme with methylated proteins, as with phosphorylated proteins, is the role this modification plays in the regulation of protein–protein interactions. The arginine methylation of proteins can either inhibit or promote protein–protein interactions depending on the type of methylation. The asymmetric dimethylation of arginine residues in close proximity to proline-rich motifs can inhibit the binding to SH3 domains. The opposite effect is seen with interactions between the survival of motor neurons protein and the snRNP proteins SmD1, SmD3 and SmB/B', where binding is promoted by symmetric dimethylation of arginine residues in the snRNP proteins.
A well-characterized example of a methylation-dependent protein–protein interaction involves the selective methylation of lysine 9 on the N-terminal tail of histone H3 by SUV39H1. Di- and tri-methylation of this lysine residue facilitates the binding of heterochromatin protein 1 (HP1). Because HP1 and Suv39h1 interact, it is thought that the binding of HP1 to histone H3 is maintained, and may even be allowed to spread, along the chromatin. The HP1 protein harbors a chromodomain which is responsible for the methyl-dependent interaction between it and lysine 9 of histone H3. It is likely that additional chromodomain-containing proteins bind the same site as HP1, as well as other methylated lysine positions on histones H3 and H4.
C-terminal protein methylation regulates the assembly of protein phosphatase. Methylation of the protein phosphatase 2A catalytic subunit enhances the binding of the regulatory B subunit and facilitates holoenzyme assembly.
References
Proteins
Epigenetics
Post-translational modification | Protein methylation | [
"Chemistry"
] | 2,471 | [
"Biomolecules by chemical classification",
"Gene expression",
"Biochemical reactions",
"Post-translational modification",
"Molecular biology",
"Proteins"
] |
46,356,900 | https://en.wikipedia.org/wiki/Molar%20absorption%20coefficient | In chemistry, the molar absorption coefficient or molar attenuation coefficient () is a measurement of how strongly a chemical species absorbs, and thereby attenuates, light at a given wavelength. It is an intrinsic property of the species. The SI unit of molar absorption coefficient is the square metre per mole (), but in practice, quantities are usually expressed in terms of M−1⋅cm−1 or L⋅mol−1⋅cm−1 (the latter two units are both equal to ). In older literature, the cm2/mol is sometimes used; 1 M−1⋅cm−1 equals 1000 cm2/mol. The molar absorption coefficient is also known as the molar extinction coefficient and molar absorptivity, but the use of these alternative terms has been discouraged by the IUPAC.
Beer–Lambert law
The absorbance of a material that has only one absorbing species also depends on the pathlength and the concentration of the species, according to the Beer–Lambert law
$$A = \varepsilon c \ell,$$
where
$\varepsilon$ is the molar absorption coefficient of that material;
$c$ is the molar concentration of those species;
$\ell$ is the path length.
Different disciplines have different conventions as to whether absorbance is decadic (10-based) or Napierian (e-based), i.e., defined with respect to the transmission via the common logarithm (log10) or the natural logarithm (ln). The molar absorption coefficient is usually decadic. When ambiguity exists, it is important to indicate which one applies.
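The conversion between the two conventions follows directly from $\log_{10} T = \ln T / \ln 10$; in the notation of the Beer–Lambert law above (a standard relation, stated here for convenience):
$$A_{10} = \varepsilon_{10} c \ell, \qquad A_{\mathrm{e}} = \varepsilon_{\mathrm{e}} c \ell, \qquad A_{\mathrm{e}} = A_{10} \ln 10 \quad\Longrightarrow\quad \varepsilon_{\mathrm{e}} = \varepsilon_{10} \ln 10 \approx 2.303\, \varepsilon_{10}.$$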
When there are N absorbing species in a solution, the overall absorbance is the sum of the absorbances for each individual species i:
$$A = \ell \sum_{i=1}^{N} \varepsilon_i c_i.$$
The composition of a mixture of N absorbing species can be found by measuring the absorbance at N wavelengths (the values of the molar absorption coefficient for each species at these wavelengths must also be known). The wavelengths chosen are usually the wavelengths of maximum absorption (absorbance maxima) for the individual species. None of the wavelengths may be an isosbestic point for a pair of species. The set of the following simultaneous equations can be solved to find the concentrations of each absorbing species:
$$A(\lambda_j) = \ell \sum_{i=1}^{N} \varepsilon_i(\lambda_j)\, c_i, \qquad j = 1, \dots, N.$$
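For a two-species mixture this reduces to a 2 × 2 linear system. A minimal sketch, with invented molar absorption coefficients and absorbances purely for illustration:

```python
import numpy as np

# Rows: wavelengths; columns: species.  eps[j][i] = eps_i(lambda_j) in M^-1 cm^-1.
eps = np.array([[15000.0, 3000.0],
                [2000.0, 11000.0]])
path = 1.0                    # path length l in cm
A = np.array([0.78, 0.46])    # measured absorbances at the two wavelengths

# Solves A_j = l * sum_i eps_i(lambda_j) * c_i for the concentrations c_i.
conc = np.linalg.solve(eps * path, A)
print(conc)  # ~[4.5e-05, 3.4e-05] M for these made-up numbers
```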
The molar absorption coefficient ε (in units of M−1⋅cm−1) is directly related to the attenuation cross section σ (in units of cm2) via the Avogadro constant NA:
$$\sigma = \frac{1000 \ln 10}{N_{\mathrm{A}}}\, \varepsilon \approx 3.82 \times 10^{-21}\, \varepsilon.$$
Mass absorption coefficient
The mass absorption coefficient is equal to the molar absorption coefficient divided by the molar mass of the absorbing species:
$$\varepsilon_{\mathrm{mass}} = \frac{\varepsilon}{M},$$
where
$\varepsilon_{\mathrm{mass}}$ is the mass absorption coefficient;
$\varepsilon$ is the molar absorption coefficient;
$M$ is the molar mass of the absorbing species.
Proteins
In biochemistry, the molar absorption coefficient of a protein at 280 nm depends almost exclusively on the number of aromatic residues, particularly tryptophan, and can be predicted from the sequence of amino acids. Similarly, the molar absorption coefficient of nucleic acids at 260 nm can be predicted given the nucleotide sequence.
If the molar absorption coefficient is known, it can be used to determine the concentration of a protein in solution.
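A minimal sketch of both steps, using the widely cited residue values of roughly 5500 (Trp), 1490 (Tyr) and 125 (cystine) M−1⋅cm−1 at 280 nm; the sequence and absorbance are made-up examples:

```python
# Predict eps(280 nm) from sequence, then estimate concentration via Beer-Lambert.
EXT = {"Trp": 5500, "Tyr": 1490, "cystine": 125}  # M^-1 cm^-1 at 280 nm (approx.)

def epsilon_280(seq: str, n_disulfides: int = 0) -> float:
    return (seq.count("W") * EXT["Trp"]
            + seq.count("Y") * EXT["Tyr"]
            + n_disulfides * EXT["cystine"])

seq = "MKWVTFYYISLLFLFSSAYS"       # hypothetical sequence: 1 Trp, 3 Tyr
eps = epsilon_280(seq)              # 1*5500 + 3*1490 = 9970 M^-1 cm^-1

A280, path_cm = 0.50, 1.0
conc = A280 / (eps * path_cm)       # c = A / (eps * l)
print(f"eps = {eps} M^-1 cm^-1, c = {conc * 1e6:.1f} uM")
```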
References
External links
Nikon MicroscopyU: Introduction to Fluorescent Proteins includes a table of molar attenuation coefficient of fluorescent proteins.
Analytical chemistry
Absorption spectroscopy
Molar quantities | Molar absorption coefficient | [
"Physics",
"Chemistry"
] | 648 | [
"Spectrum (physical sciences)",
"Physical quantities",
"Intensive quantities",
"Absorption spectroscopy",
"nan",
"Spectroscopy",
"Molar quantities"
] |
46,362,430 | https://en.wikipedia.org/wiki/Stem%20Cell%20and%20Regenerative%20Medicine%20Cluster | Stem Cell and Regenerative Medicine Cluster () is a business cluster located in Vilnius, Lithuania. Founded in 2011 the cluster unifies 11 SMEs and state enterprises, including research centers, hospitals, medical centers, and other related institutions. The cluster is currently managed by the Stem Cell Research Center.
Activities
The cluster engages in fundamental and applied scientific research in the fields of stem cells and regenerative medicine. In addition, members of the cluster carry out clinical research (including pre-clinical and clinical trials) in related fields.
Members of the cluster have the capacity to offer an integrated value chain approach, as facilities available include cGMP, in vivo and in vitro testing, and pre-clinical and clinical research.
Current members
Stem Cell Research Center (JSC Kamieniniu lasteliu tyrimu centras)
Vilnius University Hospital Santariskiu Klinikos (Vilniaus Universitetine Ligonine Santariskiu Klinikos)
State Research Institute Center For Innovative Medicine (Valstybinis mokslinių tyrimų institutas Inovatyvios medicinos centras)
Northway Medical Center (JSC Northway Medicinos Centras)
Northway Real Estate (JSC Northway Nekilnojamas Turtas)
Pasilaiciai General Practice (JSC Pasilaiciu seimos medicinos centras)
Lirema Ophthalmology (JSC Lirema)
Biotechnology center Biotechpharma (JSC Northway Biotech)
Kardivita Private Hospital (JSC Kardivita privatus medicinos centras)
Biotechnology park (JSC Biotechnologiju parkas)
Biosantara (JSC Biosantara)
References
2011 establishments in Lithuania
Organizations based in Vilnius
Stem cell research
Research institutes in Lithuania | Stem Cell and Regenerative Medicine Cluster | [
"Chemistry",
"Biology"
] | 367 | [
"Translational medicine",
"Tissue engineering",
"Stem cell research"
] |
36,103,509 | https://en.wikipedia.org/wiki/Gonadotropin%20receptor | The gonadotropin receptors are a group of receptors that bind a group of pituitary hormones called gonadotropins. They include the:
Follicle-stimulating hormone receptor (FSHR) - binds follicle-stimulating hormone (FSH)
Luteinizing hormone receptor (LHR) - binds luteinizing hormone (LH) and human chorionic gonadotropin (hCG)
See also
GnRH receptor
Sex hormone receptor
G protein-coupled receptors
Gonadotropin-releasing hormone and gonadotropins
Signal transduction
Cell signaling | Gonadotropin receptor | [
"Chemistry",
"Biology"
] | 124 | [
"G protein-coupled receptors",
"Neurochemistry",
"Biochemistry",
"Signal transduction"
] |
36,105,547 | https://en.wikipedia.org/wiki/HydroSack | A HydroSack or a HydroSnake is a brand name for a flood control sandbag alternative made by Gravitas International of Cheshire, North West England. They are very lightweight and thin until they come into contact with water, then they begin to retain water until they have reached capacity. The devices then resist any further water excess. These can be used to absorb, resist and redirect flowing water.
Design and use
The HydroSack and the HydroSnake are made up of an outer fabric consisting of non-woven polypropylene with a hydrophilic finish. The internal pads contain wood pulp and a superabsorbent polymer (SAP). In this form, the HydroSack or HydroSnake weighs . The HydroSack measures and the HydroSnake measures . When a HydroSack or a HydroSnake comes into contact with water, the SAP crystallizes and absorbs the water. HydroSacks take approximately 2–3 minutes to reach full capacity. The HydroSack and the HydroSnake are very absorbent. Full capacity is between of water. It retains this weight for up to six months. HydroSacks can be stacked together to form a strong ballast. It is used as a form of flood control and can be used to minimize the damage that flooding can cause. The HydroSack has a three-section structure with handles. When a HydroSack is no longer needed, the insides can be emptied into the earth after use, without any harmful effects.
HydroSacks and HydroSnakes are new innovations intended to replace sandbags in their multiple purposes, from ballasting road signs to flood protection. Globally, other companies are making products similar to the HydroSack, such as UK-based FloodSax, which has sold more than 2.5 million of its alternative sandbags worldwide, and Thailand-based Nanotec. Fife Council in the United Kingdom recently integrated the HydroSack into its first-response flood protection. In June 2013, some residents of Rosyth criticized the distribution of HydroSacks for flood control, complaining that they were a single-use, disposable product. The Fife Council confirmed this, but added they would last for two or three months once filled with water.
See also
AquaFence
Flood control
Sandbag
References
External links
MSDS for HydroSack
Bags
Flood control | HydroSack | [
"Chemistry",
"Engineering"
] | 475 | [
"Flood control",
"Environmental engineering"
] |
36,108,417 | https://en.wikipedia.org/wiki/Junker%20test | A Junker test is a mechanical test to determine the point at which a bolted joint loses its preload when subjected to shear loading caused by transverse vibration.
Design engineers apply the Junker test to determine the point at which fastener securing elements – such as lock nuts, wedges and lock washers – fail when subjected to vibration. The data collected by the test enables design engineers to specify fasteners that will perform under a wide range of conditions without loosening.
Research into vibration-induced self-loosening of threaded fasteners spans six decades, and its causes are now well understood. It was pioneering experimental research into the behaviour of bolted joints under transverse loads, conducted by the German engineer Gerhard Junker in the late 1960s, which underpins modern theories of self-loosening behaviour.
Junker's test methodology and apparatus, described in his 1969 paper, have since become known as the Junker test and have been adopted into international fastener standards such as DIN 65151. The Junker test is the established method used for analysing the self-loosening behaviour of secured and unsecured threaded fasteners under transverse loading conditions by vibration testing.
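The standards describe the apparatus rather than the data reduction, so post-processing conventions vary; the sketch below is one hypothetical way to summarize a recorded preload trace, with the data, threshold and variable names all invented for illustration.

```python
import numpy as np

# Hypothetical Junker-test trace: clamp force (preload) vs. vibration cycles.
cycles = np.arange(0, 1001, 100)
preload_kN = np.array([30.0, 29.1, 27.8, 25.9, 23.2, 20.1,
                       16.5, 12.4, 9.0, 6.2, 4.1])

threshold = 0.8 * preload_kN[0]  # e.g. "failure" = 20% preload loss
below = np.nonzero(preload_kN < threshold)[0]
if below.size:
    print(f"preload fell below {threshold:.1f} kN at ~{cycles[below[0]]} cycles")
else:
    print("fastener retained preload over the recorded cycles")
```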
References
Mechanical engineering
Industrial design
Fasteners
Metalworking
Threaded fasteners | Junker test | [
"Physics",
"Engineering"
] | 256 | [
"Industrial design",
"Design engineering",
"Applied and interdisciplinary physics",
"Fasteners",
"Construction",
"Mechanical engineering",
"Design"
] |