id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
21,726,554 | https://en.wikipedia.org/wiki/Laves%20phase | Laves phases are intermetallic phases that have composition AB2 and are named for Fritz Laves who first described them. The phases are classified on the basis of geometry alone. While the problem of packing spheres of equal size has been well-studied since Gauss, Laves phases are the result of his investigations into packing spheres of two sizes. Laves phases fall into three Strukturbericht types: cubic MgCu2 (C15), hexagonal MgZn2 (C14), and hexagonal MgNi2 (C36). The latter two classes are unique forms of the hexagonal arrangement, but share the same basic structure. In general, the A atoms are ordered as in diamond, hexagonal diamond, or a related structure, and the B atoms form tetrahedra around the A atoms for the AB2 structure.
Laves phases are of particular interest in modern metallurgy research because of their unusual physical and chemical properties. Many hypothetical or early-stage applications have been proposed, but little practical knowledge has so far been gained from the study of Laves phases.
A characteristic feature is their almost perfect electrical conductivity; they are, however, not plastically deformable at room temperature.
In each of the three classes of Laves phase, if the two types of atoms were perfect spheres with a size ratio of √(3/2) ≈ 1.225, the structure would be topologically tetrahedrally close-packed. At this size ratio, the structure has an overall packing volume density of 0.710. Compounds found in Laves phases typically have an atomic size ratio between 1.05 and 1.67. Analogues of Laves phases can be formed by the self-assembly of a colloidal dispersion of two sizes of sphere.
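The ideal ratio can be recovered from a simple hard-sphere picture of the cubic MgCu2 (C15) cell; the short calculation below is a sketch under that assumption, with a denoting the cubic lattice parameter, the A atoms on the diamond sublattice assumed to be in mutual contact, and the B atoms likewise in contact.

```latex
% Hard-sphere sketch of the ideal Laves radius ratio in the cubic C15 (MgCu2) cell.
% Assumes nearest A-A contact on the diamond sublattice and nearest B-B contact on the
% B sublattice of corner-sharing tetrahedra; a = cubic lattice parameter.
\[
  d_{AA} = \frac{\sqrt{3}}{4}\,a \;\Rightarrow\; r_A = \frac{\sqrt{3}}{8}\,a ,
  \qquad
  d_{BB} = \frac{\sqrt{2}}{4}\,a \;\Rightarrow\; r_B = \frac{\sqrt{2}}{8}\,a ,
\]
\[
  \frac{r_A}{r_B} \;=\; \sqrt{\tfrac{3}{2}} \;\approx\; 1.225 .
\]
```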
Laves phases are instances of the more general Frank-Kasper phases.
References
Intermetallics
Crystal structure types | Laves phase | [
"Physics",
"Chemistry",
"Materials_science"
] | 387 | [
"Inorganic compounds",
"Metallurgy",
"Crystal structure types",
"Crystallography",
"Intermetallics",
"Condensed matter physics",
"Alloys"
] |
21,727,396 | https://en.wikipedia.org/wiki/Double%20group | The concept of a double group was introduced by Hans Bethe for the quantitative treatment of magnetochemistry. Because the fermions' phase changes with 360-degree rotation, enhanced symmetry groups that describe band degeneracy and topological properties of magnonic systems are needed, which depend not only on geometric rotation, but on the corresponding fermionic phase factor in representations (for the related mathematical concept, see the formal definition). They were introduced for studying complexes of ions that have a single unpaired electron in the metal ion's valence electron shell, like Ti3+, and complexes of ions that have a single "vacancy" in the valence shell, like Cu2+.
In the specific instances of complexes of metal ions that have the electronic configurations 3d1, 3d9, 4f1 and 4f13, rotation by 360° must be treated as a symmetry operation R, in a separate class from the identity operation E. This arises from the nature of the wave function for electron spin. A double group is formed by combining a molecular point group with the group that has two symmetry operations, identity and rotation by 360°. The double group has twice the number of symmetry operations compared to the molecular point group.
Background
In magnetochemistry, the need for a double group arises in a very particular circumstance, namely, in the treatment of the paramagnetism of complexes of a metal ion in whose electronic structure there is a single electron (or its equivalent, a single vacancy) in a metal ion's d- or f-shell. This occurs, for example, with the elements copper and silver in the +2 oxidation state, where there is a single vacancy in a d electron shell, with titanium(III), which has a single electron in the 3d shell, and with cerium(III), which has a single electron in the 4f shell.
In group theory, the character χ(α) for rotation of a molecular wavefunction with angular momentum J by an angle α is given by

χ(α) = sin[(J + 1/2)α] / sin(α/2)

where the angular momentum J is the vector sum of orbital and spin angular momentum. This formula applies to most paramagnetic chemical compounds of transition metals and lanthanides. However, in a complex containing an atom with a single electron in the valence shell, J is half-integral, so the character χ(α + 2π) for a rotation through an angle of α + 2π about an axis through that atom is equal to minus the character for a rotation through an angle of α.
The change of sign cannot be true for an identity operation in any point group. Therefore, a double group, in which rotation by 2π (360°) is classified as being distinct from the identity operation, is used.
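As a worked illustration (a standard textbook calculation, not taken from a specific source cited in this article), setting J = 1/2 in the character formula makes the sign change explicit:

```latex
% Character of a rotation by alpha for J = 1/2, and its behaviour under alpha -> alpha + 2*pi
\[
  \chi_{1/2}(\alpha)
    = \frac{\sin\!\bigl[(\tfrac12 + \tfrac12)\,\alpha\bigr]}{\sin(\alpha/2)}
    = \frac{\sin\alpha}{\sin(\alpha/2)}
    = 2\cos(\alpha/2),
\]
\[
  \chi_{1/2}(\alpha + 2\pi) = 2\cos\!\left(\tfrac{\alpha}{2} + \pi\right)
                            = -2\cos(\alpha/2) = -\chi_{1/2}(\alpha).
\]
```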
A character table for the double group D'4 is as follows. The additional symmetry operations obtained by combining each operation with R are listed alongside the original operations in the column headers.

| D'4 | E | R | C4, C4³R | C4³, C4R | C2, C2R | 2C2', 2C2'R | 2C2'', 2C2''R |
|---|---|---|---|---|---|---|---|
| A'1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| A'2 | 1 | 1 | 1 | 1 | 1 | −1 | −1 |
| B'1 | 1 | 1 | −1 | −1 | 1 | 1 | −1 |
| B'2 | 1 | 1 | −1 | −1 | 1 | −1 | 1 |
| E'1 | 2 | 2 | 0 | 0 | −2 | 0 | 0 |
| E'2 | 2 | −2 | √2 | −√2 | 0 | 0 | 0 |
| E'3 | 2 | −2 | −√2 | √2 | 0 | 0 | 0 |

Symmetry operations such as C4 and C4³R belong to the same class; for convenience the two operations of each class are listed together in the column header rather than being spelled out as separate columns.
Character tables for a range of double groups are given in Salthouse and Ware.
Applications
The need for a double group occurs, for example, in the treatment of magnetic properties of 6-coordinate complexes of copper(II). The electronic configuration of the central Cu2+ ion can be written as [Ar]3d9. It can be said that there is a single vacancy, or hole, in the copper 3d-electron shell, which can contain up to 10 electrons. The ion [Cu(H2O)6]2+ is a typical example of a compound with this characteristic.
Six-coordinate complexes of the Cu(II) ion, with the generic formula [CuL6]2+, are subject to the Jahn-Teller effect so that the symmetry is reduced from octahedral (point group Oh) to tetragonal (point group D4h). Since d orbitals are centrosymmetric the related atomic term symbols can be classified in the subgroup D4.
To a first approximation spin–orbit coupling can be ignored and the magnetic moment is then predicted to be 1.73 Bohr magnetons, the so-called spin-only value. However, for a more accurate prediction spin–orbit coupling must be taken into consideration. This means that the relevant quantum number is J, the total angular momentum obtained by vector addition of L and S (J = L + S, L + S − 1, ..., |L − S|).
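The quoted value of 1.73 Bohr magnetons follows from the textbook spin-only formula for a single unpaired electron (S = 1/2), reproduced here for clarity:

```latex
% Spin-only moment for one unpaired electron (S = 1/2, g_e ~ 2)
\[
  \mu_{\mathrm{so}} = g_e\,\sqrt{S(S+1)}\;\mu_B
                    \approx 2\sqrt{\tfrac12\cdot\tfrac32}\;\mu_B
                    = \sqrt{3}\;\mu_B \approx 1.73\;\mu_B .
\]
```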
When J is half-integer, the character for a rotation by an angle of α + 2π radians is equal to minus the character for rotation by an angle α. This cannot be true for an identity operation in a point group. Consequently, a group must be used in which rotations by α + 2π are classed as symmetry operations distinct from rotations by an angle α. This group is known as the double group, D'4.
With species such as the square-planar complex of the silver(II) ion [AgF4]2− the relevant double group is also D'4; deviations from the spin-only value are greater because the magnitude of spin–orbit coupling is greater for silver(II) than for copper(II).
A double group is also used for some compounds of titanium in the +3 oxidation state. Titanium(III) has a single electron in the 3d shell; the magnetic moments of its complexes have been found to lie in the range 1.63–1.81 B.M. at room temperature. The double group is used to classify their electronic states.
The cerium(III) ion, Ce3+, has a single electron in the 4f shell. The magnetic properties of octahedral complexes of this ion are treated using the corresponding double group.
When a cerium(III) ion is encapsulated in a C60 cage, the resulting endohedral fullerene is written as Ce@C60. The magnetic properties of the compound are treated using the icosahedral double group I2h.
Free radicals
Double groups may be used in connection with free radicals. This has been illustrated for the species CH3F+ and CH3BF2+, each of which contains a single unpaired electron.
See also
Molecular symmetry
Point group
Magnetochemistry
References
Further reading
Group theory
Molecular physics
Theoretical chemistry
Materials science | Double group | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,543 | [
"Applied and interdisciplinary physics",
"Molecular physics",
"Materials science",
"Group theory",
"Theoretical chemistry",
"Fields of abstract algebra",
 "Atomic, molecular, and optical physics"
] |
21,728,202 | https://en.wikipedia.org/wiki/Flexoelectricity | Flexoelectricity is a property of a dielectric material where there is coupling between electrical polarization and a strain gradient. This phenomenon is closely related to piezoelectricity, but while piezoelectricity refers to polarization due to uniform strain, flexoelectricity specifically involves polarization due to strain that varies from point to point in the material. This nonuniform strain breaks centrosymmetry, meaning that unlike in piezoelectricity, flexoelectric effects occur in both centrosymmetric and asymmetric crystal structures. This property is not the same as Ferroelasticity. It plays a critical role in explaining many interesting electromechanical behaviors in hard crystalline materials and core mechanoelectric transduction phenomena in soft biomaterials. Additionally, it is a size-dependent effect that becomes more significant in nanoscale systems, such as crack tips.
In common usage, flexoelectricity is the generation of polarization due to a strain gradient; inverse flexoelectricity is when polarization, often due to an applied electric field, generates a strain gradient. Converse flexoelectricity is where a polarization gradient induces strain in a material.
The electric polarization due to mechanical strain in a dielectric is given by

P_i = e_ijk ε_jk + μ_ijkl ∂ε_jk/∂x_l

where the first term corresponds to the direct piezoelectric effect and the second term corresponds to the flexoelectric polarization induced by the strain gradient.
Here, the flexoelectric coefficient, μ_ijkl, is a fourth-rank polar tensor and e_ijk is the coefficient corresponding to the direct piezoelectric effect.
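As a rough numerical illustration of the tensor contraction above, the following sketch evaluates the polarization at a single point; the coefficient values, the indices chosen as nonzero, and the field shape are invented placeholders, not material data.

```python
import numpy as np

# Sketch: evaluate P_i = e_ijk * eps_jk + mu_ijkl * d(eps_jk)/dx_l at one point,
# using invented coefficient values (not data for any real material).
e = np.zeros((3, 3, 3))        # piezoelectric coefficients e_ijk (zero: centrosymmetric case)
mu = np.zeros((3, 3, 3, 3))    # flexoelectric coefficients mu_ijkl
mu[2, 0, 0, 2] = 1.0e-9        # single illustrative nonzero component

eps = np.zeros((3, 3))
eps[0, 0] = 1.0e-4             # strain eps_xx at the point
grad_eps = np.zeros((3, 3, 3)) # strain gradient d(eps_jk)/dx_l
grad_eps[0, 0, 2] = 1.0e-2     # eps_xx varying along the z direction

P = np.einsum('ijk,jk->i', e, eps) + np.einsum('ijkl,jkl->i', mu, grad_eps)
print(P)  # nonzero P_z even with e = 0: a purely flexoelectric response
```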
See also
Piezoelectricity
Ferroelectricity
Ferroelasticity
Triboelectric effect
References
External links
Introduction to Flexoelectricity
Electric and magnetic fields in matter
Condensed matter physics | Flexoelectricity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 377 | [
"Phases of matter",
"Electric and magnetic fields in matter",
"Materials science",
"Condensed matter physics",
"Matter"
] |
21,728,805 | https://en.wikipedia.org/wiki/Pyrotechnic%20valves | A pyrotechnic valve, also explosive actuated valve, short pyro valve or pyrovalve is a one time use propulsion component often used to control propellant or pressurant systems aboard spacecraft or space probes. The device is activated by an electric signal, upon which one or several small explosive charges are ignited. These in turn produce high-pressure gas which either push a small, perforated piston which initially blocked the flow path of the working fluid forward until the hole aligns with the tubing, allowing the working fluid to flow, or force a sharpened piston through a weakened part of the attached tube to block the flow path of the working fluid. These two versions of pyrotechnic valves are referred to as normally-closed (NC) or normally-open (NO) valves respectively, depending on their initial state before initiation of the pyrotechnic charge.
Modern pyrotechnic valves feature two redundant explosive charges in order to maximize reliability of the valve.
There are several advantages of pyrotechnic valves over other types of valves, such as solenoid valves:
Pyrotechnic valves have an extremely fast response time (the time from the actuation signal reaching the device until the valve has closed or opened), often on the order of a few milliseconds.
Normally-closed pyro valves have a far lower leak rate than other valves, allowing them to hermetically seal propellant inside a tank to keep it from evaporating into space before activation. This is especially useful on spacecraft which spend years or decades coasting through space before reaching their destination. If propellant is needed only every few years or so, NO pyro valves and NC pyro valves can be arranged in pairs, breaking the seal when required, then sealing it again afterwards. This type of system is referred to as a pyro ladder.
Examples
Eight pyrovalves controlled the descent of the Perseverance Mars rover after seven months of flight.
References
External links
Dassault Aviation: Pyrotechnic valve (Archived)
Astrium: Pyrotechnic Valves for Space Propulsion Systems (Archived)
Pyrotechnics
Spacecraft pyrotechnics
Valves | Pyrotechnic valves | [
"Physics",
"Chemistry"
] | 462 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
21,728,926 | https://en.wikipedia.org/wiki/Jones%20reductor | A Jones reductor is a device used to reduce aqueous solutions of metal ions. The active component is a zinc amalgam. Jones reductors have been used for preparing solutions of titanium(III), vanadium(II), chromium(II), molybdenum(III), niobium(III), europium(II), and uranium(III).
Preparation and use
Amalgamated zinc is prepared by treating zinc metal with a 2% solution of mercury(II) chloride. The metal may be in the granulated form or as shavings, wool, or powder. The amalgam forms on the surface of the zinc. After washing to remove salts, the amalgam is placed in a long glass tube, similar to a chromatography column, equipped with a stopcock. The amalgam is a more effective reducing agent than zinc metal. The effluent is often air-sensitive, requiring the use of air-free techniques.
To use the reductor, the solution to be reduced is drawn through the tube. If the column is loosely packed, the solution may pass through without assistance. The length of the column or the flow rate is adjusted to effect full reduction of the soluble reagent. The effluent is also contaminated with zinc(II) salts, but they do not affect subsequent operations. These operations might include iodometric titration to determine the reducible content of the effluent. In some cases, the effluent is treated with other reagents to precipitate a compound of the reduced ions.
See also
Walden reductor
References
Inorganic chemistry
Reducing agents
Zinc compounds | Jones reductor | [
"Chemistry"
] | 354 | [
"Redox",
"nan",
"Reducing agents"
] |
33,324,707 | https://en.wikipedia.org/wiki/Laminar%20flame%20speed | Laminar flame speed is an intrinsic characteristic of premixed combustible mixtures. It is the speed at which an un-stretched laminar flame will propagate through a quiescent mixture of unburned reactants. Laminar flame speed is given the symbol sL. According to the thermal flame theory of Ernest-François Mallard and Le Chatelier, the un-stretched laminar flame speed is dependent on only three properties of a chemical mixture: the thermal diffusivity of the mixture, the reaction rate of the mixture and the temperature through the flame zone:
is thermal diffusivity,
is reaction rate,
and the temperature subscript u is for unburned, b is for burned and i is for ignition temperature.
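A minimal numerical sketch of the Mallard–Le Chatelier scaling as written above; all input values are arbitrary placeholders rather than measured properties of any particular fuel–air mixture.

```python
import math

def laminar_flame_speed(alpha, omega_dot, T_u, T_i, T_b):
    """Mallard-Le Chatelier-type estimate: s_L ~ sqrt(alpha * omega_dot * (T_b - T_i)/(T_i - T_u)).

    alpha     : thermal diffusivity of the mixture [m^2/s]
    omega_dot : mean volumetric reaction rate [1/s]
    T_u, T_i, T_b : unburned, ignition and burned temperatures [K]
    """
    return math.sqrt(alpha * omega_dot * (T_b - T_i) / (T_i - T_u))

# Placeholder numbers only, to show the order of magnitude the scaling produces (~0.5 m/s).
print(laminar_flame_speed(alpha=2e-5, omega_dot=2e4, T_u=300.0, T_i=1500.0, T_b=2200.0))
```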
Laminar flame speed is a property of the mixture (fuel structure, stoichiometry) and thermodynamic conditions upon mixture ignition (pressure, temperature). Turbulent flame speed is a function of the aforementioned parameters, but also heavily depends on the flow field. As flow velocity increases and turbulence is introduced, a flame will begin to wrinkle, then corrugate and eventually the flame front will be broken and transport properties will be enhanced by turbulent eddies in the flame zone. As a result, the flame front of a turbulent flame will propagate at a speed that is not only a function of the mixture's chemical and transport properties but also properties of the flow and turbulence.
See also
Flame speed
Chemical kinetics
Activation energy asymptotics
References
Combustion | Laminar flame speed | [
"Chemistry"
] | 320 | [
"Combustion"
] |
33,327,002 | https://en.wikipedia.org/wiki/Cabbeling | Cabbeling is when two separate water parcels mix to form a third which sinks below both parents. The combined water parcel is denser than the original two water parcels.
The two parent water parcels may have the same density, but they have different properties; for instance, different salinities and temperatures. Seawater almost always gets denser if it gets either slightly colder or slightly saltier. But medium-warm, medium-salty water can be denser than both fresher, colder water and saltier, warmer water; in other words, the equation of state for seawater is monotonic, but non-linear.
Cabbeling may also occur in fresh water, since pure water is densest at about 4 °C (39 °F). A mixture of 1 °C water and 6 °C water, for instance, might have a temperature of 4 °C, making it denser than either parent. Ice is also less dense than water, so although ice floats in warm water, meltwater sinks in warm water.
The densification of the new mixed water parcel is a result of a slight contraction upon mixing; a decrease in volume of the combined water parcel. A new water parcel that has the same mass, but is lower in volume, will be denser. Denser water sinks or downwells in the otherwise neutral surface of the water body, where the two initial water parcels originated.
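The freshwater example above can be checked with a toy equation of state; the quadratic fit below is only a rough approximation to the density of pure water near its 4 °C maximum, and equal-mass mixing at constant heat capacity is assumed, so the numbers illustrate the mechanism rather than reproduce measured values.

```python
def rho_freshwater(T):
    """Toy density of pure water [kg/m^3] near its ~4 C maximum (rough quadratic fit)."""
    return 999.975 - 0.0070 * (T - 4.0) ** 2

# Two parcels of equal mass; assuming constant heat capacity, the mixture takes
# the mean temperature.
T1, T2 = 1.0, 6.0
T_mix = 0.5 * (T1 + T2)

print(rho_freshwater(T1), rho_freshwater(T2), rho_freshwater(T_mix))
# The mixed parcel (~3.5 C) is denser than either parent parcel, so it sinks: cabbeling.
```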
History of term
The importance of this process in oceanography was first pointed out by Witte in a 1902 publication.
The German origin of the term has caused some etymological confusion and disagreements as to the correct spelling of the term; for details, see the Wiktionary entry on cabelling. Oceanographers generally follow Stommel and refer to the process as "cabbeling".
High-latitude cabbeling
Cabbeling may occur with high incidence in high-latitude waters. Polar region waters are a place where cold and fresh water melting from sea ice meets warmer, saltier water. Ocean currents are responsible for bringing this warmer, saltier water to higher latitudes, especially on the eastern shores of Northern Hemisphere continents, and on the western shores of Southern Hemisphere continents. The phenomenon of cabbeling has been particularly noted in the Weddell Sea and the Greenland Sea.
References
Oceanography
Fluid mechanics
Lakes
Bodies of water | Cabbeling | [
"Physics",
"Engineering",
"Environmental_science"
] | 477 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Civil engineering",
"Lakes",
"Fluid mechanics"
] |
33,329,683 | https://en.wikipedia.org/wiki/Spontaneous%20absolute%20asymmetric%20synthesis | Spontaneous absolute asymmetric synthesis is a chemical phenomenon that stochastically generates chirality based on autocatalysis and small fluctuations in the ratio of enantiomers present in a racemic mixture. In certain reactions which initially do not contain chiral information, stochastically distributed enantiomeric excess can be observed. The phenomenon is different from chiral amplification, where enantiomeric excess is present from the beginning and not stochastically distributed. Hence, when the experiment is repeated many times, the average enantiomeric excess approaches 0%. The phenomenon has important implications concerning the origin of homochirality in nature.
References
Stereochemistry
Biological processes
Origin of life | Spontaneous absolute asymmetric synthesis | [
"Physics",
"Chemistry",
"Biology"
] | 147 | [
"Origin of life",
"Stereochemistry",
"Space",
"Stereochemistry stubs",
"nan",
"Biological hypotheses",
"Spacetime"
] |
33,329,948 | https://en.wikipedia.org/wiki/Transport%20coefficient | A transport coefficient measures how rapidly a perturbed system returns to equilibrium.
The transport coefficients occur in transport phenomena with transport laws of the form

J_k = γ_k X_k

where:
J_k is a flux of the property k,
γ_k is the transport coefficient of this property,
X_k is the gradient force which acts on the property k.
Transport coefficients can be expressed via a Green–Kubo relation:

γ = ∫₀^∞ ⟨ Ȧ(t) Ȧ(0) ⟩ dt

where A is an observable occurring in a perturbed Hamiltonian, ⟨…⟩ denotes an ensemble average and the dot above the A denotes the time derivative.
For times greater than the correlation time of the fluctuations of the observable, the transport coefficient obeys a generalized Einstein relation:

2tγ = ⟨ |A(t) − A(0)|² ⟩
In general a transport coefficient is a tensor.
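As an illustration of how a Green–Kubo estimate is evaluated in practice, the sketch below computes a self-diffusion coefficient from a velocity time series via D = (1/3)∫⟨v(t)·v(0)⟩dt; the synthetic random velocities, time step and lag cutoff are placeholders standing in for a real molecular-dynamics trajectory.

```python
import numpy as np

# Green-Kubo sketch: D = (1/3) * integral_0^inf <v(t).v(0)> dt, estimated from a
# discretely sampled velocity time series. Synthetic noise stands in for a real
# molecular-dynamics trajectory; with real data, v would come from the simulation.
rng = np.random.default_rng(0)
dt = 0.01                        # sampling interval of the stored velocities
v = rng.normal(size=(20000, 3))  # placeholder particle velocities

max_lag = 200                    # integrate the autocorrelation up to this lag
vacf = np.array([np.mean(np.sum(v[: len(v) - lag] * v[lag:], axis=1))
                 for lag in range(max_lag)])

D = vacf.sum() * dt / 3.0        # rectangle-rule time integral of the VACF
print(D)
```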
Examples
Diffusion constant D, which relates the flux of particles to the negative gradient of the concentration (see Fick's laws of diffusion)
Thermal conductivity (see Fourier's law)
Ionic conductivity
Mass transport coefficient
Shear viscosity η, which relates the viscous stress tensor to the velocity gradient (see Newtonian fluid)
Electrical conductivity
Transport coefficients of higher order
For strong gradients, the transport equation typically has to be modified with higher-order terms (and higher-order transport coefficients).
See also
Linear response theory
Onsager reciprocal relations
References
Thermodynamics
Statistical mechanics | Transport coefficient | [
"Physics",
"Chemistry",
"Mathematics"
] | 240 | [
"Statistical mechanics",
"Thermodynamics",
"Dynamical systems"
] |
33,331,939 | https://en.wikipedia.org/wiki/Kondo%20model | The Kondo model (sometimes referred to as the s-d model) is a model for a single localized quantum impurity coupled to a large reservoir of delocalized and noninteracting electrons. The quantum impurity is represented by a spin-1/2 particle, and is coupled to a continuous band of noninteracting electrons by an antiferromagnetic exchange coupling . The Kondo model is used as a model for metals containing magnetic impurities, as well as quantum dot systems.
Kondo Hamiltonian
The Kondo Hamiltonian is given by

H = Σ_{k,σ} ε_k c†_{kσ} c_{kσ} + J S · s(0)

where S is the spin-1/2 operator representing the impurity, and

s(0) = Σ_{k,k′} Σ_{α,β} c†_{kα} (σ_{αβ}/2) c_{k′β}

is the local spin-density of the noninteracting band at the impurity site (the σ_{αβ} are the Pauli matrices). In the Kondo problem, J > 0, i.e. the exchange coupling is antiferromagnetic.
Solving the Kondo Model
Jun Kondo applied third-order perturbation theory to the Kondo model and showed that the resistivity of the model diverges logarithmically as the temperature goes to zero. This explained why metal samples containing magnetic impurities have a resistance minimum (see Kondo effect). The problem of finding a solution to the Kondo model which did not contain this unphysical divergence became known as the Kondo problem.
A number of methods were used to attempt to solve the Kondo problem. Philip Anderson devised a perturbative renormalization group method, known as Poor Man's Scaling, which involves perturbatively eliminating excitations to the edges of the noninteracting band. This method indicated that, as temperature is decreased, the effective coupling between the spin and the band, J, increases without limit. As this method is perturbative in J, it becomes invalid when J becomes large, so this method did not truly solve the Kondo problem, although it did hint at the way forward.
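The statement that the effective coupling grows without limit can be made concrete by integrating the one-loop scaling equation numerically. The form dg/d ln D = −2g², with g = ρJ the dimensionless coupling and D the bandwidth, is the standard poor-man's-scaling result and is quoted here as an assumption; the starting value of g is a placeholder.

```python
# One-loop "poor man's scaling" flow of the dimensionless Kondo coupling g = rho * J:
#     dg / d(ln D) = -2 * g**2      (antiferromagnetic case, g > 0)
# Reducing the bandwidth D (ln D decreasing) makes g grow, signalling the Kondo scale.
g = 0.1          # placeholder starting coupling at the initial bandwidth D0
lnD = 0.0        # ln(D / D0)
step = -1e-3     # shrink the bandwidth in small logarithmic steps

while g < 10.0 and lnD > -10.0:
    g += -2.0 * g**2 * step      # forward-Euler step of the flow equation
    lnD += step

print(f"g reached {g:.2f} at ln(D/D0) = {lnD:.2f}")
# For comparison, the one-loop estimate of the divergence scale is
# ln(T_K/D0) = -1/(2*g0) = -5.0 for g0 = 0.1.
```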
The Kondo problem was finally solved when Kenneth Wilson applied the numerical renormalization group to the Kondo model and showed that the resistivity goes to a constant as temperature goes to zero.
There are many variants of the Kondo model. For instance, the spin-1/2 can be replaced by a spin-1 or even a greater spin. The two-channel Kondo model is a variant of the Kondo model which has the spin-1/2 coupled to two independent noninteracting bands. All these models have been solved by Bethe Ansatz. One can also consider the ferromagnetic Kondo model (i.e. the standard Kondo model with J > 0).
The Kondo model is intimately related to the Anderson impurity model, as can be shown by Schrieffer–Wolff transformation.
See also
Anderson impurity model
Kondo effect
References
Condensed matter physics
Quantum magnetism | Kondo model | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 582 | [
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Quantum magnetism",
"Condensed matter physics",
"Matter"
] |
43,082,178 | https://en.wikipedia.org/wiki/Thickness%20%28graph%20theory%29 | In graph theory, the thickness of a graph is the minimum number of planar graphs into which the edges of can be partitioned. That is, if there exists a collection of planar graphs, all having the same set of vertices, such that the union of these planar graphs is , then the thickness of is at most . In other words, the thickness of a graph is the minimum number of planar subgraphs whose union equals to graph .
Thus, a planar graph has thickness one. Graphs of thickness two are called biplanar graphs. The concept of thickness originates in the Earth–Moon problem on the chromatic number of biplanar graphs, posed in 1959 by Gerhard Ringel, and in a related 1962 conjecture of Frank Harary: every graph on nine points or its complementary graph is non-planar. The problem is equivalent to determining whether the complete graph K9 is biplanar (it is not, and the conjecture is true). A comprehensive survey on the state of the art of the topic as of 1998 was written by Petra Mutzel, Thomas Odenthal and Mark Scharbrodt.
Specific graphs
The thickness of the complete graph on n vertices, K_n, is

⌊(n + 7)/6⌋

except when n = 9 or 10, for which the thickness is three.
With some exceptions, the thickness of a complete bipartite graph K_{a,b} is generally

⌈ab / (2(a + b − 2))⌉.
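A small sketch that evaluates these closed-form expressions; the bipartite formula is applied as if it held universally, ignoring the finitely many known exceptional cases.

```python
import math

def thickness_complete(n):
    """Thickness of the complete graph K_n: floor((n + 7) / 6), except n = 9, 10 give 3."""
    if n in (9, 10):
        return 3
    return (n + 7) // 6

def thickness_complete_bipartite(a, b):
    """General formula ceil(a*b / (2*(a + b - 2))) for K_{a,b}; the finitely many
    known exceptional pairs are not special-cased here."""
    return math.ceil(a * b / (2 * (a + b - 2)))

print(thickness_complete(9))               # 3 -- K_9 is not biplanar (Earth-Moon problem)
print(thickness_complete_bipartite(4, 4))  # 2
```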
Properties
Every forest is planar, and every planar graph can be partitioned into at most three forests. Therefore, the thickness of any graph is at most equal to the arboricity of the same graph (the minimum number of forests into which it can be partitioned) and at least equal to the arboricity divided by three.
Graphs of maximum degree Δ have thickness at most ⌈Δ/2⌉. This cannot be improved: for a Δ-regular graph with sufficiently large girth, the high girth forces any planar subgraph to be sparse, causing its thickness to be exactly ⌈Δ/2⌉.
Graphs of thickness t with n vertices have at most t(3n − 6) edges. Because this gives them average degree less than 6t, their degeneracy is at most 6t − 1 and their chromatic number is at most 6t. Here, the degeneracy can be defined as the maximum, over subgraphs of the given graph, of the minimum degree within the subgraph. In the other direction, if a graph has degeneracy D then its arboricity and thickness are at most D. One can find an ordering of the vertices of the graph in which each vertex has at most D neighbors that come later than it in the ordering, and assigning these edges to D distinct subgraphs produces a partition of the graph into D forests, which are planar graphs.
Even in the case of thickness two, the precise value of the chromatic number is unknown; this is Gerhard Ringel's Earth–Moon problem. An example of Thom Sulanke shows that, for graphs of thickness two, at least 9 colors are needed.
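The degeneracy argument above can be made concrete with a short sketch: compute a degeneracy ordering by repeatedly removing a minimum-degree vertex, then send each vertex's i-th edge to a later neighbor into class i, each of which is a forest. This is a plain-Python illustration, not an optimized implementation.

```python
def degeneracy_ordering(adj):
    """Return a vertex order in which each vertex has at most d later neighbors,
    where d is the degeneracy, by repeatedly removing a minimum-degree vertex."""
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))
        order.append(v)
        for u in remaining[v]:
            remaining[u].discard(v)
        del remaining[v]
    return order

def partition_into_forests(adj):
    """Assign each edge {v, u}, with u later than v in the ordering, to class i
    if u is the i-th later neighbor of v; each class is then a forest."""
    order = degeneracy_ordering(adj)
    pos = {v: i for i, v in enumerate(order)}
    forests = {}
    for v in order:
        later = sorted((u for u in adj[v] if pos[u] > pos[v]), key=pos.get)
        for i, u in enumerate(later):
            forests.setdefault(i, []).append((v, u))
    return forests

# Example: the 4-cycle a-b-c-d-a has degeneracy 2, so it splits into at most 2 forests.
square = {'a': ['b', 'd'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c', 'a']}
print(partition_into_forests(square))
```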
Related problems
Thickness is closely related to the problem of simultaneous embedding. If two or more planar graphs all share the same vertex set, then it is possible to embed all these graphs in the plane, with the edges drawn as curves, so that each vertex has the same position in all the different drawings. However, it may not be possible to construct such a drawing while keeping the edges drawn as straight line segments.
A different graph invariant, the rectilinear thickness or geometric thickness of a graph G, counts the smallest number of planar graphs into which G can be decomposed subject to the restriction that all of these graphs can be drawn simultaneously with straight edges. The book thickness adds an additional restriction, that all of the vertices be drawn in convex position, forming a circular layout of the graph. However, in contrast to the situation for arboricity and degeneracy, no two of these three thickness parameters are always within a constant factor of each other.
Computational complexity
It is NP-hard to compute the thickness of a given graph, and NP-complete to test whether the thickness is at most two. However, the connection to arboricity allows the thickness to be approximated to within an approximation ratio of 3 in polynomial time.
References
Graph invariants
Planar graphs
NP-complete problems | Thickness (graph theory) | [
"Mathematics"
] | 841 | [
"Planar graphs",
"Graph theory",
"Computational problems",
"Graph invariants",
"Mathematical relations",
"Planes (geometry)",
"Mathematical problems",
"NP-complete problems"
] |
43,082,400 | https://en.wikipedia.org/wiki/Primula%20hortensis | Primula hortensis is a name which has been applied to various hybrids in the genus Primula, e.g. to Primula × polyantha Mill. by Focke and to Primula × pubescens by Wittstein. The name Primula hortensis is not an accepted taxon name, however.
Description
These flowers are yellowish-orange in colour.
References
hortensis | Primula hortensis | [
"Biology"
] | 88 | [
"Set index articles on plants",
"Set index articles on organisms",
"Plants"
] |
43,082,530 | https://en.wikipedia.org/wiki/Ministry%20of%20Energy%2C%20Science%20%26%20Technology%20and%20Public%20Utilities | The Ministry of Energy, Science & Technology, and Public Utilities (Belize) was founded in 2012. The Ministry is currently divided into the Department of Geology and Petroleum, the Energy Unit and the Science and Technology Unit. The Ministry is represented by Senator Joy Grant and CEO Dr Colin Young, and has an office in Belmopan.
Geology and Petroleum Department
The Geology and Petroleum Department was established in 1984 as part of the Ministry of Natural Resources. In 2012, the department moved to the new Ministry of Energy, Science & Technology and Public Utilities. The department is responsible for governance of the petroleum industry in Belize. The department's mission statement is "To accelerate the development of Belize's petroleum resources through the creation of a vibrant petroleum industry, with the assistance of international investors, cognizant of environmental costs, thereby improving the welfare of Belizeans into the 21st century."
Energy Unit
The Energy Unit was established in 2012 and has responsibility for governance of the energy sector in Belize. The Unit's mission statement is "To plan, promote and effectively manage the production, delivery and use of energy through Energy Efficiency, Renewable Energy, and Cleaner Production interventions for the sustainable development of Belize." Key activities performed by the Energy Unit include data collection for the purpose of planning Belize's future energy supplies and calculating greenhouse gas emissions, public awareness on topics such as energy efficiency, as well as regulation and market reforms that promote a sustainable future for Belize.
Science and Technology Unit
The Science and Technology Unit is responsible for the promotion of science and technology in Belize. The Unit plays a key role in Belize's efforts to achieve Target 8.f in the millennium development goals. The Unit conducts a number of activities that promote engagement with Science and Technology in Belize, including through the ICT roadshow.
References
External links
Ministry of Energy, Science & Technology, and Public Utilities home page
Facebook page for the Ministry of Energy, Science & Technology, and Public Utilities
Government of Belize - Ministry of Energy, Science & Technology and Public Utilities
Energy, Science and Technology and Public Utilities
Belize
Belize
Organisations based in Belize
Science and technology in Belize | Ministry of Energy, Science & Technology and Public Utilities | [
"Engineering"
] | 426 | [
"Energy organizations",
"Energy ministries"
] |
44,789,336 | https://en.wikipedia.org/wiki/Blackman%27s%20theorem | Blackman's theorem is a general procedure for calculating the change in an impedance due to feedback in a circuit. It was published by Ralph Beebe Blackman in 1943, was connected to signal-flow analysis by John Choma, and was made popular in the extra element theorem by R. D. Middlebrook and the asymptotic gain model of Solomon Rosenstark. Blackman's approach leads to the formula for the impedance Z between two selected terminals of a negative feedback amplifier as Blackman's formula:
where ZD = impedance with the feedback disabled, TSC = loop transmission with a small-signal short across the selected terminal pair, and TOC = loop transmission with an open circuit across the terminal pair. The loop transmission also is referred to as the return ratio. Blackman's formula can be compared with Middlebrook's result for the input impedance Zin of a circuit based upon the extra-element theorem:
Z_in = Z_in^∞ · (1 + Z_n/Z_e) / (1 + Z_d/Z_e)

where:
Z_e is the impedance of the extra element;
Z_in^∞ is the input impedance with Z_e removed (or made infinite);
Z_n is the impedance seen by the extra element with the input shorted (or made zero);
Z_d is the impedance seen by the extra element with the input open (or made infinite).
Blackman's formula also can be compared with Choma's signal-flow result, in which the impedance is expressed in terms of its value with a selected parameter P set to zero, multiplied by a ratio involving the return ratio evaluated with zero excitation and the return ratio for the case of short-circuited source resistance. As with the extra-element result, the differences are in the perspective leading to the formula.
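A small numeric illustration of Blackman's formula itself (the element and return-ratio values below are invented for the example, not taken from the cited sources): feedback that senses the open-circuit port voltage strongly lowers the port impedance.

```python
def blackman_impedance(Z_D, T_SC, T_OC):
    """Blackman's formula: Z = Z_D * (1 + T_SC) / (1 + T_OC)."""
    return Z_D * (1 + T_SC) / (1 + T_OC)

# Invented example values: no loop transmission with the port shorted (T_SC = 0),
# return ratio of 100 with the port open, so the 1 kilo-ohm dead-loop impedance
# drops to roughly 10 ohms.
print(blackman_impedance(Z_D=1000.0, T_SC=0.0, T_OC=100.0))   # ~9.9
```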
See also
Mason's gain formula
Further reading
References
Electronic feedback
Signal processing
Electronic amplifiers
Control engineering | Blackman's theorem | [
"Technology",
"Engineering"
] | 355 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Control engineering",
"Electronic amplifiers",
"Amplifiers"
] |
44,796,138 | https://en.wikipedia.org/wiki/Flask%20%28unit%29 | Flask is a British unit of mass or weight in the avoirdupois system, used to measure mercury. It is defined as . Near room temperature, a flask of mercury occupies a volume of approximately .
Conversion
1 flask (mercury) ≡ 76 lb
1 flask (mercury) ≡ 34.47302012 kg
References
Units of mass
Customary units of measurement
Standards of the United Kingdom | Flask (unit) | [
"Physics",
"Mathematics"
] | 83 | [
"Matter",
"Quantity",
"Units of mass",
"Mass",
"Customary units of measurement",
"Units of measurement"
] |
44,797,598 | https://en.wikipedia.org/wiki/Journal%20of%20Environmental%20Radioactivity | Journal of Environmental Radioactivity is a monthly peer-reviewed scientific journal on environmental radioactivity and radioecology. It was proposed and started by Founding Editor Murdoch Baxter in 1984 and is published by Elsevier. Its current editor-in-chief is Stephen C. Sheppard (ECOMatters Inc.) and is an affiliated journal of the International Union of Radioecology.
Abstracting and indexing
The journal is abstracted and indexed in:
Chemical Abstracts Service
Index Medicus/MEDLINE/PubMed
Science Citation Index Expanded
Current Contents/Agriculture, Biology & Environmental Sciences
The Zoological Record
BIOSIS Previews
Scopus
According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.674.
Notes
References
External links
Ecology journals
Elsevier academic journals
English-language journals
Environmental isotopes
Environmental science journals
Monthly journals
Academic journals established in 1984
Radioactivity
Physics journals | Journal of Environmental Radioactivity | [
"Physics",
"Chemistry",
"Environmental_science"
] | 178 | [
"Ecology journals",
"Environmental isotopes",
"Isotopes",
"Environmental science journals",
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Geochemistry stubs",
"Nuclear physics",
"Environmental science journal stubs",
"Radioactivity"
] |
44,797,697 | https://en.wikipedia.org/wiki/Journal%20of%20Nuclear%20Materials | The Journal of Nuclear Materials is a monthly peer-reviewed scientific journal on materials research for accelerator physics, nuclear power generation and fuel cycle applications. It was established in 1959 and is published by Elsevier. The current editor-in-chief is Gary S. Was (University of Michigan).
Abstracting and indexing
The journal is abstracted and indexed in:
Chemical Abstracts Service
Index Medicus/MEDLINE/PubMed
Science Citation Index
Current Contents/Physical, Chemical & Earth Sciences
Current Contents/Engineering, Computing & Technology
Scopus
According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.936.
References
External links
Elsevier academic journals
English-language journals
Monthly journals
Academic journals established in 1959
Physics journals
Materials science journals | Journal of Nuclear Materials | [
"Materials_science",
"Engineering"
] | 151 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Materials science"
] |
30,818,148 | https://en.wikipedia.org/wiki/Sulfacytine | Sulfacytine is a short-acting sulfonamide antibiotic, taken orally for treatment against bacterial infections. Sulfonamides, as a group of antibiotics, work by inhibiting the bacterial synthesis of folate. In 2006, the drug was discontinued.
References
Sulfonamide antibiotics
Abandoned drugs
4-Aminophenyl compounds | Sulfacytine | [
"Chemistry"
] | 72 | [
"Drug safety",
"Abandoned drugs"
] |
30,818,571 | https://en.wikipedia.org/wiki/Wagner%27s%20gene%20network%20model | Wagner's gene network model is a computational model of artificial gene networks, which explicitly modeled the developmental and evolutionary process of genetic regulatory networks. A population with multiple organisms can be created and evolved from generation to generation. It was first developed by Andreas Wagner in 1996 and has been investigated by other groups to study the evolution of gene networks, gene expression, robustness, plasticity and epistasis.
Assumptions
The model and its variants have a number of simplifying assumptions. Three of them are listed below.
The organisms are modeled as gene regulatory networks. The models assume that gene expression is regulated exclusively at the transcriptional level;
The product of a gene can regulate the expression of (be a regulator of) that source gene or other genes. The models assume that a gene can only produce one active transcriptional regulator;
The effects of one regulator are independent of effects of other regulators on the same target gene.
Genotype
The model represents individuals as networks of interacting transcriptional regulators. Each individual expresses N genes encoding transcription factors. The product of each gene can regulate the expression level of itself and/or the other genes through cis-regulatory elements. The interactions among genes constitute a gene network that is represented by an N × N regulatory matrix R in the model. The elements in matrix R represent the interaction strength. Positive values within the matrix represent the activation of the target gene, while negative ones represent repression. Matrix elements with value 0 indicate the absence of interactions between two genes.
Phenotype
The phenotype of each individual is modeled as the gene expression pattern at time t. It is represented in this model by a state vector

S(t) = (s_1(t), s_2(t), ..., s_N(t)),

whose element s_i(t) denotes the expression state of gene i at time t. In the original Wagner model,

s_i(t) ∈ {−1, 1},

where 1 represents that the gene is expressed while −1 implies the gene is not expressed. The expression pattern can only be ON or OFF. A continuous expression pattern between −1 (or 0) and 1 is also implemented in some other variants.
Development
The development process is modeled as the development of gene expression states. The gene expression pattern at time t = 0 is defined as the initial expression state. The interactions among genes change the expression states during the development process. This process is modeled by the following difference equations

s_i(t + τ) = σ[ h_i(t) ],   with  h_i(t) = Σ_j r_ij s_j(t),

where s_i(t + τ) represents the expression state of gene i at time t + τ. It is determined by a filter function σ. h_i(t) represents the weighted sum of regulatory effects (r_ij) of all genes on gene i at time t. In the original Wagner model, the filter function is a step function,

σ(x) = 1 for x > 0 and σ(x) = −1 for x < 0.

In other variants, the filter function is implemented as a sigmoidal function, for example

σ(x) = 2 / (1 + e^(−ax)) − 1.

In this way, the expression states acquire a continuous distribution. The gene expression reaches its final state when it reaches a stable pattern.
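A minimal sketch of the development iteration, assuming the original step-function variant of the filter; the network size, interaction strengths, initial state and iteration limit are illustrative placeholders.

```python
import numpy as np

def develop(R, s0, max_steps=100):
    """Iterate s(t + tau) = sign(R @ s(t)) until the expression pattern stops changing.
    Returns the stable pattern, or None if no fixed point is reached (developmentally unstable)."""
    s = np.array(s0, dtype=float)
    for _ in range(max_steps):
        s_next = np.sign(R @ s)          # step-function filter of the summed regulatory inputs
        if np.array_equal(s_next, s):
            return s_next                # stable gene expression pattern
        s = s_next
    return None                          # oscillating / unstable: discarded by selection

# Illustrative 4-gene network (placeholder interaction strengths and initial state).
rng = np.random.default_rng(1)
R = rng.normal(size=(4, 4))
print(develop(R, s0=[1, -1, 1, -1]))
```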
Evolution
Evolutionary simulations are performed by a reproduction-mutation-selection life cycle. Populations are fixed at size N and they will not go extinct. Non-overlapping generations are employed. In a typical evolutionary simulation, a single random viable individual that can produce a stable gene expression pattern is chosen as the founder. Cloned individuals are generated to create a population of N identical individuals. According to the asexual or sexual reproductive mode, offspring are produced by randomly choosing (with replacement) parent individual(s) from the current generation. Mutations can be acquired with probability μ and survive with probability equal to their fitness. This process is repeated until N individuals are produced that go on to found the following generation.
Fitness
Fitness in this model is the probability that an individual survives to reproduce. In the simplest implementation of the model, developmentally stable genotypes survive (i.e. their fitness is 1) and developmentally unstable ones do not (i.e. their fitness is 0).
Mutation
Mutations are modeled as changes in gene regulation, i.e., changes of the elements in the regulatory matrix R.
Reproduction
Both sexual and asexual reproductions are implemented. Asexual reproduction is implemented as producing the offspring's genome (the gene network) by directly copying the parent's genome. Sexual reproduction is implemented as the recombination of the two parents' genomes.
Selection
An organism is considered viable if it reaches a stable gene expression pattern. An organism with oscillated expression pattern is discarded and cannot enter the next generation.
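A correspondingly stripped-down sketch of the asexual reproduction-mutation-selection cycle, built on the develop() function from the previous sketch; the population size, mutation rate and choice of founder are placeholder simplifications.

```python
import numpy as np

# Builds on the develop() sketch above.
rng = np.random.default_rng(2)
N_GENES, POP_SIZE, S0 = 4, 20, [1, -1, 1, -1]

def mutate(R, mu=0.1):
    """With probability mu, redraw one randomly chosen regulatory interaction."""
    R = R.copy()
    if rng.random() < mu:
        i, j = rng.integers(0, N_GENES, size=2)
        R[i, j] = rng.normal()
    return R

def next_generation(population):
    """Asexual reproduction with mutation; only developmentally stable offspring survive."""
    offspring = []
    while len(offspring) < POP_SIZE:
        parent = population[rng.integers(0, len(population))]
        child = mutate(parent)
        if develop(child, S0) is not None:   # viability = reaching a stable expression pattern
            offspring.append(child)
    return offspring

founder = np.eye(N_GENES)                    # trivially stable founder network
population = [founder.copy() for _ in range(POP_SIZE)]
for _ in range(10):                          # evolve for a few generations
    population = next_generation(population)
```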
References
External links
Andreas Wagner Lab Webpage
Gene expression
Networks
Systems biology | Wagner's gene network model | [
"Chemistry",
"Biology"
] | 856 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Systems biology"
] |
30,820,704 | https://en.wikipedia.org/wiki/Evolution%20by%20gene%20duplication | Evolution by gene duplication is an event by which a gene or part of a gene can have two identical copies that can not be distinguished from each other. This phenomenon is understood to be an important source of novelty in evolution, providing for an expanded repertoire of molecular activities. The underlying mutational event of duplication may be a conventional gene duplication mutation within a chromosome, or a larger-scale event involving whole chromosomes (aneuploidy) or whole genomes (polyploidy). A classic view, owing to Susumu Ohno, which is known as Ohno model, he explains how duplication creates redundancy, the redundant copy accumulates beneficial mutations which provides fuel for innovation. Knowledge of evolution by gene duplication has advanced more rapidly in the past 15 years due to new genomic data, more powerful computational methods of comparative inference, and new evolutionary models.
Theoretical models
Several models exist that try to explain how new cellular functions of genes and their encoded protein products evolve through the mechanism of duplication and divergence. Although each model can explain certain aspects of the evolutionary process, the relative importance of each aspect is still unclear. This page only presents which theoretical models are currently discussed in the literature. Review articles on this topic can be found at the bottom.
In the following, a distinction will be made between explanations for the short-term effects (preservation) of a gene duplication and its long-term outcomes.
Preservation of gene duplicates
Since a gene duplication occurs in only one cell, either in a single-celled organism or in the germ cell of a multi-cellular organism, its carrier (i.e. the organism) usually has to compete against other organisms that do not carry the duplication. If the duplication disrupts the normal functioning of an organism, the organism has a reduced reproductive success (or low fitness) compared to its competitors and will most likely die out rapidly. If the duplication has no effect on fitness, it might be maintained in a certain proportion of a population. In certain cases, the duplication of a certain gene might be immediately beneficial, providing its carrier with a fitness advantage.
Dosage effect or gene amplification
The so-called 'dosage' of a gene refers to the amount of mRNA transcripts and subsequently translated protein molecules produced from a gene per time and per cell.
If the amount of gene product is below its optimal level, there are two kinds of mutations that can increase dosage: increases in gene expression by promoter mutations and increases in gene copy number by gene duplication.
The more copies of the same (duplicated) gene a cell has in its genome, the more gene product can be produced simultaneously. Assuming that no regulatory feedback loops exist that automatically down-regulate gene expression, the amount of gene product (or gene dosage) will increase with each additional gene copy, until some upper limit is reached or sufficient gene product is available.
Furthermore, under positive selection for increased dosage, a duplicated gene could be immediately advantageous and quickly increase in frequency in a population. In this case, no further mutations would be necessary to preserve (or retain) the duplicates. However, at a later time, such mutations could still occur, leading to genes with different functions (see below).
Gene dosage effects after duplication can also be harmful to a cell and the duplication might therefore be selected against. For instance, when the metabolic network within a cell is fine-tuned so that it can only tolerate a certain amount of a certain gene product, gene duplication would offset this balance.
Activity reducing mutations
In cases of gene duplications that have no immediate fitness effect, a retention of the duplicate copy could still be possible if both copies accumulate mutations that for instance reduce the functional efficiency of the encoded proteins without inhibiting this function altogether. In such a case, the molecular function (e.g. protein/enzyme activity) would still be available to the cell to at least the extent that was available before duplication (now provided by proteins expressed from two gene loci, instead of one gene locus). However, the accidental loss of one gene copy might then be detrimental, since one copy of the gene with reduced activity would almost certainly lie below the activity that was available before duplication.
Long-term fate of duplicated genes
If a gene duplication is preserved, the most likely fate is that random mutations in one duplicate gene copy will eventually cause the gene to become non-functional. Such non-functional remnants of genes, with detectable sequence homology, can sometimes still be found in genomes and are called pseudogenes.
Functional divergence between the duplicate genes is another possible fate. There are several theoretical models that try to explain the mechanisms leading to divergence:
Neofunctionalization
The term neofunctionalization was first coined by Force et al. 1999,
but it refers to the general mechanism proposed by Ohno 1970. The long-term outcome of Neofunctionalization is that one copy retains the original (pre-duplication) function of the gene, while the second copy acquires a distinct function. It is also known as the MDN model, "mutation during non-functionality". The major criticism of this model is the high likelihood of non-functionalization, i.e. the loss of all functionality of the gene, due to random accumulation of mutations.
IAD model
IAD stands for 'innovation, amplification, divergence' and aims to explain evolution of new gene functions while preserving its existing functions.
Innovation, i.e. the establishment of a new molecular function, can occur via side-activities of genes and thus proteins; this is called enzyme promiscuity. For example, enzymes can sometimes catalyse more than just one reaction, even though they usually are optimised for catalysing just one reaction. Such promiscuous protein functions, if they provide an advantage to the host organism, can then be amplified with additional copies of the gene. Such a rapid amplification is best known from bacteria that often carry certain genes on smaller non-chromosomal DNA molecules (called plasmids) which are capable of rapid replication. Any gene on such a plasmid is also replicated and the additional copies amplify the expression of the encoded proteins, and with it any promiscuous function. After several such copies have been made, and are also passed on to descendent bacterial cells, a few of these copies might accumulate mutations that eventually will lead to a side-activity becoming the main activity.
The IAD model has been tested in the laboratory using a bacterial enzyme with a dual function as the starting point. This enzyme is capable of catalyzing not only its original reaction but also a side reaction that can be carried out by another enzyme.
By allowing bacteria carrying this enzyme to evolve under selection to improve both activities (original and side) for several generations, it was shown that an ancestral bifunctional gene with poor activities (innovation) first evolved by gene amplification to increase expression of the weak enzyme, and later accumulated beneficial mutations that improved one or both of the activities and could be passed on to the next generation (divergence).
Subfunctionalization
"Subfunctionalization" was also first coined by Force et al. 1999. This model requires the ancestral (pre-duplication) gene to have several functions (sub-functions), which the descendant (post-duplication) genes specialise on in a complementary fashion. There are now at least two different models that are labeled as subfunctionalization, "DDC" and "EAC".
DDC model
DDC stands for "duplication-degeneration-complementation". This model was first introduced by Force et al. 1999. The first step is gene duplication. The gene duplication in itself is neither advantageous, nor deleterious, so it will remain at low frequency within a population of individuals that do not carry a duplication. According to DDC, this period of neutral drift may eventually lead to the complementary retention of sub-functions distributed over the two gene copies. This comes about by activity reducing (degenerative) mutations in both duplicates, accumulating over time periods and many generations. Taken together, the two mutated genes provide the same set of functions as the ancestral gene (before duplication). However, if one of the genes was removed, the remaining gene would not be able to provide the full set of functions and the host cell would likely suffer some detrimental consequences. Therefore, at this later stage of the process, there is a strong selection pressure against removing any of the two gene copies that arose by gene duplication. The duplication becomes permanently established in the genome of the host cell or organism.
EAC model
EAC stands for "Escape from Adaptive Conflict". This name first appeared in a publication by Hittinger and Carroll 2007.
The evolutionary process described by the EAC model actually begins before the gene duplication event. A singleton (not duplicated) gene evolves towards two beneficial functions simultaneously. This creates an "adaptive conflict" for the gene, since it is unlikely to execute each individual function with maximum efficiency. The intermediate evolutionary result could be a multi-functional gene and after a gene duplication its sub-functions could be carried out by specialised descendants of the gene. The result would be the same as under the DDC model, two functionally specialised genes (paralogs). In contrast to the DDC model, the EAC model puts more emphasis on the multi-functional pre-duplication state of the evolving genes and gives a slightly different explanation as to why the duplicated multi-functional genes would benefit from additional specialisation after duplication (because of the adaptive conflict of the multi-functional ancestor that needs to be resolved). Under EAC there is an assumption of a positive selection pressure driving evolution after gene duplication, whereas the DDC model only requires neutral ("undirected") evolution to take place, i.e. degeneration and complementation.
See also
Pseudogenes
Molecular evolution
Gene duplication
Functional divergence
Mutation
References
Molecular evolution
Molecular genetics
Mutation | Evolution by gene duplication | [
"Chemistry",
"Biology"
] | 2,083 | [
"Evolutionary processes",
"Molecular genetics",
"Molecular evolution",
"Molecular biology"
] |
27,974,163 | https://en.wikipedia.org/wiki/Project%20523 | Project 523 () is a code name for a 1967 secret military project of the People's Republic of China to find antimalarial medications. Named after the date the project launched, 23 May, it addressed malaria, an important threat in the Vietnam War. At the behest of Ho Chi Minh, Prime Minister of North Vietnam, Zhou Enlai, the Premier of the People's Republic of China, convinced Mao Zedong, Chairman of the Chinese Communist Party, to start the mass project "to keep [the] allies' troops combat-ready", as the meeting minutes put it. More than 500 Chinese scientists were recruited. The project was divided into three streams. The one for investigating traditional Chinese medicine discovered and led to the development of a class of new antimalarial drugs called artemisinins. Launched during and lasting throughout the Cultural Revolution, Project 523 was officially terminated in 1981.
For their high efficacy, safety and stability, artemisinins such as artemether and artesunate became the drugs of choice in treating falciparum malaria. The World Health Organization advocates their combination drugs and includes them in its List of Essential Medicines. Among the scientists of the project, Zhou Yiqing and his team at the Institute of Microbiology and Epidemiology of the Chinese Academy of Military Medical Sciences, were awarded the European Inventor Award of 2009 in the category "Non-European countries" for the development of Coartem (artemether-lumefantrine combination drug). Tu Youyou of the Qinghaosu Research Center, Institute of Chinese Materia Medica, Academy of Traditional Chinese Medicine (now the China Academy of Traditional Chinese Medical Sciences), received both the 2011 Lasker-DeBakey Clinical Medical Research Award and 2015 Nobel Prize in Physiology or Medicine for her role in the discovery of artemisinin.
Background
The Vietnam War was fought between North Vietnam (with support from Communist countries such as Soviet Union and China) and South Vietnam (with support from the United States and its allies). The conflicts began in 1954 and became large-scale battles by 1961. Although in a better warfare position, the People's Army of Vietnam (North Vietnamese Army) and its allies in the South, Viet Cong, suffered increasing mortality because of malaria epidemics. In some battlefields, the disease would reduce military strengths by half and in severe cases, disable 90% of the troops. North Vietnamese Prime Minister Ho Chi Minh asked Chinese Premier Zhou Enlai for medical help. The year before, party Chairman Mao Zedong had introduced the Cultural Revolution, during which he would close schools and universities and banish scientists and intellectuals. Mao took Ho's plea seriously and approved a military project. On 23 May 1967, about six hundred scientists convened. These included military personnel, scientists, and medical practitioners of Western and traditional Chinese medicine. The meeting marked the start of the military-research programme, which received the code name Project 523, after the date (23 May) it launched. The project was divided into three main streams, one for developing synthetic compounds, one for clinical studies (or infection control) and another for investigating traditional Chinese medicine. Classified as a top secret state mission, the project itself saved many scientists from the atrocities of the Cultural Revolution.
Execution and achievements
As the first line strategy, the troops were given synthetic drugs. Drug combinations using pyrimethamine and dapsone, pyrimethamine and sulfadoxine, and sulfadoxine and piperaquine phosphate were tested in the battlefield. Because these drugs had serious adverse effects, the primary focus was to examine traditional Chinese medicines and look for new compounds. The first drug of interest was chángshān (), an extract from the roots of Dichroa febrifuga depicted in the Shennong Ben Cao Jing. Another early candidate was huanghuahao (sweet wormwood or Artemisia annua). These two plants became a huge success in modern pharmacology.
Febrifugine from chángshān
The first compound of interest was chángshān, the root extract of Dichroa febrifuga. In the 1940s, Chinese scientists had shown that it was effective against different species of Plasmodium. American scientists isolated febrifugine as its major active antimalarial compound. The project scientists confirmed the antimalarial activity but found the compound unsuitable for human use because of its toxicity, which, like its potency, exceeded that of quinine. After the project, the compound remained under investigation, with attempts to discover suitable derivatives, among which halofuginone is an effective drug against malaria, cancer, fibrosis and inflammatory disease.
Discovery of artemisinin and its derivatives
The fourth-century Chinese physician Ge Hong's book Zhouhou Beiji Fang described Artemisia annua extract, called qinghao, as a treatment of malarial fever. Tu Youyou and her team were the first to investigate the plant. In 1971 they found that their extract from dried leaves (collected from Beijing) showed no antimalarial activity. On careful reading of Ge's description, they changed their extraction method, using fresh leaves at low temperature. Ge explicitly describes the recipe as: "qinghao, one bunch, take two sheng [2 × 0.2 L] of water for soaking it, wring it out, take the juice, ingest it in its entirety". Following the findings of scientists at the Yunnan Institute of Pharmacology, they found that only fresh plant specimens collected from Sichuan province would yield the active compound. They made the purified extract into tablets, which showed very low activity. They soon realized that the compound was poorly soluble and formulated it in capsules instead. On 4 October 1971 they successfully treated malaria in experimental mice (infected with Plasmodium berghei) and monkeys (infected with Plasmodium cynomolgi) using the new extract.
In August 1972 they reported a clinical trial in which 21 malarial patients were cured. In 1973 the Yunnan scientists and those at the Shandong Institute of Pharmacology independently obtained the antimalarial compound in crystalline form and gave it the name huanghaosu or huanghuahaosu, eventually renamed qinghaosu (later popularised as "artemisinin", after the plant's botanical name). The same year Tu synthesized the compound dihydroartemisinin from the extract. This compound was more soluble and potent than the native compound. Other scientists subsequently synthesized other artemisinin derivatives, of which the most important are artemether and artesunate. All clinical trials by this time confirmed that artemisinins were more effective than the conventional antimalarial drugs, such as chloroquine and quinine. A group of scientists in Shanghai, including chemist Wu Yulin, determined artemisinin's chemical structure in 1975 and published it in 1977 when the secrecy rules were lifted. The artemisinins became the most potent as well as the safest and most rapidly acting antimalarial drugs, recommended by the World Health Organization for the treatment of different types of malaria.
Discovery of synthetic drugs
Project 523 also resulted in the discovery of synthetic drugs such as pyronaridine in 1973, lumefantrine in 1976 and naphthoquine in 1986. These are all antimalarial drugs and are still used in artemisinin-combination therapy.
Termination and legacy
After Saigon fell on 30 April 1975, ending the Vietnam War, the military purpose of Project 523 subsided. Researchers could not publish their findings but could share their work within the working groups. The first publication in English (and thus circulated outside China) was in the December 1979 issue of the Chinese Medical Journal, authored simply by the Qinghaosu Antimalaria Coordinating Research Group. This attracted collaboration with the Special Programme for Research and Training in Tropical Diseases (TDR), sponsored by the United Nations Children's Fund, the United Nations Development Programme, the World Bank, and WHO, but the research remained closed to non-Chinese scientists. By the early 1980s, research had practically stopped, and the project was officially terminated in 1981. The TDR took this opportunity to organise the first international conference on artemisinin and its variants, held in Beijing in 1981. Supported by WHO, the Chinese Ministry of Health established the National Chinese Steering Committee for Development of Qinghaosu and its Derivatives to build on the achievements of Project 523.
The first international collaboration was between Keith Arnold at the Roche Far East Research Foundation, Hong Kong, and Chinese researchers Jing-Bo Jiang, Xing-Bo Guo, Guo-Qiao Li, and Yun Cheung Kong. Their first international publication appeared in 1982 in The Lancet, in which they reported the comparative efficacy of artemisinin and mefloquine against chloroquine-resistant Plasmodium falciparum. Arnold was among those who developed mefloquine in 1979 and was planning to test the new drug in China. He and his wife Moui became the most important people in translating the historical account of Project 523 and bringing it to international recognition. The Division of Experimental Therapeutics at the Walter Reed Army Institute of Research, under the United States Army, was the first to produce artemisinin and its derivatives outside China. Their production paved the way for commercial success.
Invention of Coartem
Artemether was more promising as a clinical drug than its parent molecule artemisinin. In 1981, the National Steering Committee for Development of Qinghaosu (artemisinin) and its Derivatives authorised Zhou Yiqing, who was working at the Institute of Microbiology and Epidemiology of the Chinese Academy of Military Medical Sciences, to work on artemether. Zhou showed that artemether combined with another antimalarial, lumefantrine, was the most potent of all antimalarial drugs. He worked alone for four years until Ning Dianxi and his team joined him in 1985. In clinical trials, the combined tablet had a cure rate of more than 95% for severe malaria, including in areas where multi-drug resistance had been observed. They applied for a patent in 1991 but received it only in 2002. In 1992, they registered it as a new drug in China. Noticing this, Novartis signed an agreement for mass production. In 1999, Novartis obtained the international licensing rights and gave the drug the brand name Coartem. The US Food and Drug Administration approved the drug in 2009.
See also
Yunnan Baiyao
Drug discovery
Antimalarial medication
Artemisinin (major contributors: Yu Yagang (余亚纲), Gu Guoming (顾国明), Tu Youyou (屠呦呦), Luo Zeyuan (罗泽渊), Li Guoqiao (李国桥) et al., 1972)
Dihydroartemisinin (Tu Youyou et al., 1973)
Pyronaridine (1973)
Artemether (Li Ying (李英), 1975)
Lumefantrine (1976)
Artesunate (Li Guoqiao (李国桥), 1977)
Artemether/lumefantrine (Zhou Yiqing (周义清), 1985)
Naphthoquine (1986)
History of science and technology in the People's Republic of China
List of Chinese discoveries and List of Chinese inventions
Chinese herbology and Traditional Chinese medicine
References
Further reading
Science and technology in the People's Republic of China
Antimalarial agents
Drug discovery
Chinese discoveries
History of medicine in China
Maoist China
1967 establishments in China
Military logistics of the Vietnam War
China–Vietnam relations
China Projects | Project 523 | [
"Chemistry",
"Biology"
] | 2,403 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
27,976,434 | https://en.wikipedia.org/wiki/FlexGen%20B.V. | FlexGen was a biotechnology company based in Leiden, the Netherlands. It was a spin-off from Leiden University Medical Centre and Dutch Space (part of EADS) and had proprietary technologies for laser-based in-situ synthesis of oligonucleotides and other biomolecules. On 21 December 2015, FlexGen B.V. in Leiden (South Holland) was declared bankrupt by the court in Gelderland.
Products
FleXelect
FleXelect oligopools consist of custom oligonucleotides in solution and can be used for in-solution target enrichment prior to next-generation DNA sequencing. Target enrichment, or in-solution hybrid selection, is a method for genomic selection in an increasing number of applications, such as:
Analysis of custom genomic regions of interest (e.g. specific genes, multiple variants and/or complete pathways).
Analysis of Chromosomal translocation
Validation of Single-nucleotide polymorphism or SNPs (typically after whole genome or whole exome studies)
Other research and diagnostic applications (e.g. Synthetic biology)
An example of a recent application is testing of the BRCA1 and BRCA2 breast cancer risk genes.
FlexArrayer
The FlexArrayer is an in-house custom oligonucleotide synthesis instrument. The FlexArrayer facilitates high throughput synthesis of FleXelect oligopools for in-solution target enrichment as well as custom microarray production. The FlexArrayer is also applicable for array based re-sequencing.
The FlexArrayer provides microarray and oligopool synthesis typically used by:
Genomics centres and sequencing facilities
Health and safety institutes & microbiology labs
Technology innovators in the fields of: Surface chemistries, PNA's (Peptide nucleic acid), siRNA's (Small interfering RNA) and more
Technology
Production of microarrays and FleXelect oligopools is done with the FlexArrayer using proprietary technology. The FlexArrayer synthesizes custom probesets on a substrate based on oligonucleotide deprotection technology:
Before the first oligonucleotide synthesis step the complete DNA microarray surface is covered by photolabile groups.
The spots where the first nucleotide addition is to occur are individually activated by the laser.
The nucleotide solution is washed over the microarray surface and the nucleotides chemically bind to the activated spots.
All nucleotides contain a photolabile group that can in turn be activated. As many rounds of photoactivation and nucleotide addition are performed as are required to synthesize oligonucleotides of the desired length.
This is repeated up to 60 times until the required sequences (up to 100,000) have been synthesized. Thus, the maximum length of any oligonucleotide produced on this platform is 60 nucleotides (a 60-mer).
The microarray is now ready to be used, alternatively the oligonucleotides can be cleaved off to produce FleXelect oligopools.
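The cycle described above can be modelled as a simple scheduling problem: in each chemical cycle every spot receives at most one nucleotide, delivered during the exposure step for that base. The following Python sketch illustrates this under simplifying assumptions; the spot names and target sequences are hypothetical and the code is not FlexGen's actual instrument control software.

```python
# Simplified model of the light-directed synthesis cycle described above.
# Spot names and target sequences are hypothetical; this is not FlexGen's
# actual instrument control software.

def synthesis_schedule(targets, max_cycles=60):
    """Return (schedule, spots): per cycle and base, which spots are
    photo-activated before that base is coupled, and the grown sequences."""
    spots = {name: "" for name in targets}       # sequence grown so far per spot
    schedule = []
    for cycle in range(max_cycles):
        # Each spot is extended by at most one base per cycle: find it first.
        next_base = {name: targets[name][len(seq)]
                     for name, seq in spots.items()
                     if len(seq) < len(targets[name])}
        if not next_base:
            break                                 # every target is finished
        for base in "ACGT":                       # four exposure/coupling steps
            active = [name for name, b in next_base.items() if b == base]
            if active:
                schedule.append((cycle, base, active))
                for name in active:
                    spots[name] += base           # nucleotide couples here
    return schedule, spots

targets = {"spot_1": "ACGTAC", "spot_2": "AATGC"}    # hypothetical probes
steps, finished = synthesis_schedule(targets)
print(len(steps), "activation/coupling steps")
print(finished)
```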
References
External links
FlexGen's Former Company Website on WayBack Machine
Biotechnology companies of the Netherlands
Biotechnology companies established in 2004
Microarrays
Companies based in Leiden
2004 establishments in the Netherlands | FlexGen B.V. | [
"Chemistry",
"Materials_science",
"Biology"
] | 673 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Bioinformatics",
"Molecular biology techniques"
] |
27,977,609 | https://en.wikipedia.org/wiki/Comprehensive%20two-dimensional%20gas%20chromatography | Comprehensive two-dimensional gas chromatography, or GC×GC, is a multidimensional gas chromatography technique that was originally described in 1984 by J. Calvin Giddings and first successfully implemented in 1991 by John Phillips and his student Zaiyou Liu.
GC×GC utilizes two different columns with two different stationary phases. In GC×GC, all of the effluent from the first dimension column is diverted to the second dimension column via a modulator. The modulator quickly traps, then "injects" the effluent from the first dimension column onto the second dimension. This process creates a retention plane of the 1st dimension separation x 2nd dimension separation.
The oil and gas industry was an early adopter of the technology, using it on complex oil samples to determine the many different types of hydrocarbons and their isomers. In these types of samples, over 30,000 different compounds could be identified in a crude oil with this comprehensive chromatography technology (CCT).
The CCT evolved from a technology only used in academic R&D laboratories into a more robust technology used in many different industrial labs. Comprehensive chromatography is used in forensic, food and flavor, environmental, metabolomic, biomarker and clinical applications. Some of the most well-established research groups in the world, found in Australia, Italy, the Netherlands, Canada, the United States, and Brazil, use this analytical technique.
Modulation: The process
In GC × GC, two columns are connected sequentially with a modulator positioned between them; typically the first dimension is a conventional column and the second dimension is a short, fast GC-type column. The function of the modulator can be divided into three basic processes:
continuously collect small fractions of the effluent from 1D, ensuring that the separation is maintained in this dimension;
focus or refocus the collected effluent into a narrow band;
quickly transfer the collected and focused fraction to the 2D column as a narrow pulse. Taken together, these three steps are called the modulation cycle, which is repeated throughout the chromatographic run.
Thermal modulation
Thermal modulators use broad temperature differentials (by way of hot and cold jets) to trap and release analytes eluting out of the primary column. Commercial devices typically use two-stage modulation either via a quad jet approach (where there are two pairs of jets to trap and release the analytes on two different sections of the column) or a delay loop (where the column loops back between a single pair of jets). Both approaches ensure there are two opportunities to focus the analytes.
There are also different versions of thermal modulators based on what is used to cool the cold jet (a stream of dry gas, usually air or nitrogen). Liquid nitrogen cooled loop systems provide the lowest temperature for thermal modulation, meaning they are capable of modulating volatiles from C2. However, there is the compromise that liquid nitrogen is expensive and causes additional health and safety concerns.
Alternatively, consumable-free thermal modulators are available that use a closed cycle refrigeration unit to cool the cold jet.
Consumable-free thermal modulation
This approach eliminates the need for liquid nitrogen for thermal modulation. The system employs a closed cycle refrigerator/heat exchanger to produce −90 °C at the jet. The cooling is done by indirect cooling of gaseous nitrogen (or air) and therefore is capable of modulating volatile and semi volatile compounds over the C7+ range.
Flow modulation
This is a valve-based approach, where differential flows are used to 'fill' and 'flush' a sample loop. Flow modulation does not suffer from the same volatility restrictions as thermal modulation, as it does not rely on trapping analytes using a cold jet – meaning volatiles <C5 can be efficiently modulated.
Modulation period
The time required to complete a cycle is called the modulation period (modulation time). It is the time between two hot pulses, typically lasts between 2 and 10 seconds, and is related to the time needed for the compounds to elute in the second dimension.
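In data processing, the modulation period is what allows the single detector trace to be folded into the retention plane: the trace is cut into consecutive segments of one modulation period each, which are stacked along the first dimension. The sketch below illustrates this with assumed values (a 100 Hz detector, a 5 s modulation period and a placeholder signal); it is not tied to any particular instrument or software.

```python
import numpy as np

# Fold a raw GC×GC detector trace into the 2D retention plane.
# Assumed values: a 100 Hz detector and a 5 s modulation period.
sampling_rate_hz = 100
modulation_period_s = 5
samples_per_modulation = sampling_rate_hz * modulation_period_s

rng = np.random.default_rng(0)
signal = rng.random(30 * 60 * sampling_rate_hz)   # placeholder 30 min trace

# Trim to a whole number of modulation cycles, then reshape:
# rows -> first-dimension retention (one row per modulation cycle),
# cols -> second-dimension retention within each cycle.
n_cycles = signal.size // samples_per_modulation
plane = signal[:n_cycles * samples_per_modulation].reshape(
    n_cycles, samples_per_modulation)

print(plane.shape)   # (number of modulation cycles, points per cycle)
```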
Sensitivity
Another key aspect of GC×GC is that the refocusing in the second dimension, which occurs during modulation, causes a significant increase in sensitivity when thermal modulators are used. The modulation process makes the chromatographic bands in GC×GC systems 10-50 times narrower than in 1D-GC, resulting in much smaller peak widths (full width at half maximum, FWHM) of between 50 ms and 500 ms, which requires detectors with fast response and small internal volumes.
When traditional flow modulators are used, the higher flows used to release the analytes from the trap have a diluting effect and do not produce an increase in sensitivity in concentration-dependent detectors (e.g. the ECD); however, there can be an increase with mass-flow-dependent detectors such as the FID. As most mass spectrometers cannot handle the higher flows from flow modulation, a splitting device often needs to be used, greatly reducing the amount of material reaching the MS (to 1/10th to 1/20th) and thus causing a further loss of sensitivity.
Column set
The set of columns can be configured with various types. In the original work, column sets were mainly poly(dimethylsiloxane) in the first dimension and poly(ethyleneglycol) in the second dimension. These so-called straight phase column sets are suitable for hydrocarbon analysis. Therefore, these are still used most frequently in the oil and gas industry. For applications that require the analysis of polar compounds in a non-polar matrix, a reverse-phase column set gives more resolution. The first dimension column in this situation is a polar column, followed by a mid-polar second dimension column.
Other applications can be configured differently according to their specific needs. For example, they may include chiral columns for optical isomer separation or PLOT columns for volatiles and gas samples.
Software
Optimizing the application is more complex compared to 1D separations, as there are more parameters involved. Column flow and oven temperature program are both important when using either flow or thermal modulation. However, with thermal modulation, cold jet and hot jet pulse duration, length of the second dimension column and modulation time also affect the final results. In the case of flow modulation, the modulation time, split flow (for MS), loading flow, unloading flow, valve timings are crucial.
The output is also different: the GC×GC technique produces a three-dimensional plot rather than a traditional chromatogram, facilitated by specially designed software packages. For example, GC Image was the first software developed for two dimensional gas chromatography. Some software packages are used in addition to the normal GC (or GC-MS) packages while others are built as a complete platform, controlling all aspects of the analysis. The new and different way of presenting and evaluating data offers additional information. For example, modern software can perform group-type separation as well as automated peak identification (with mass spectrometry).
Detectors
Due to the small width of the peaks in the second dimension, suitable detectors are needed. Examples include the flame ionization detector (FID), the (micro) electron capture detector (μECD) and mass spectrometry analyzers such as fast time-of-flight (TOF) instruments. Several authors have published work using quadrupole mass spectrometry (qMS), though some trade-offs have to be accepted as these are much slower.
References
Gas chromatography | Comprehensive two-dimensional gas chromatography | [
"Chemistry"
] | 1,579 | [
"Chromatography",
"Gas chromatography"
] |
27,980,263 | https://en.wikipedia.org/wiki/Immunogold%20labelling | Immunogold labeling or immunogold staining (IGS) is a staining technique used in electron microscopy. This staining technique is an equivalent of the indirect immunofluorescence technique for visible light. Colloidal gold particles are most often attached to secondary antibodies which are in turn attached to primary antibodies designed to bind a specific antigen or other cell component. Gold is used for its high electron density which increases electron scatter to give high contrast 'dark spots'.
First used in 1971, immunogold labeling has been applied to both transmission electron microscopy and scanning electron microscopy, as well as brightfield microscopy. The labeling technique can be adapted to distinguish multiple objects by using differently-sized gold particles.
Immunogold labeling can introduce artifacts, as the gold particles reside some distance from the labelled object and very thin sectioning is required during sample preparation.
History
Immunogold labeling was first used in 1971 by Faulk and Taylor to identify Salmonella antigens. It was first applied in transmission electron microscopy (TEM) and was especially useful in highlighting proteins found in low densities, such as some cell surface antigens. As the resolution of scanning electron microscopy (SEM) increased, so too did the need for nanoparticle-sized labels such as immunogold. In 1975, Horisberger and coworkers successfully visualised gold nanoparticles with a diameter of less than 30 nm, and this soon became an established SEM technique.
Technique
First, a thin section of the sample is cut, often using a microtome. Various other stages of sample preparation may then take place.
The prepared sample is then incubated with a specific antibody designed to bind the molecule of interest. Next, a secondary antibody which has gold particles attached is added, and it binds to the primary antibody. Gold can also be attached to protein A or protein G instead of a secondary antibody, as these proteins bind mammalian IgG Fc regions in a non-specific way.
The electron-dense gold particle can now be seen under an electron microscope as a black dot, indirectly labeling the molecule of interest.
Labeling multiple objects
Immunogold labeling can be used to visualize more than one target simultaneously. This can be achieved in electron microscopy by using two different-sized gold particles. An extension of this method used three different sized gold particles to track the localisation of regulatory peptides. A more complex method of multi-site labeling involves labeling opposite sides of an antigenic site separately, the immunogold particles attached to both sides can then be viewed simultaneously.
Uses in brightfield microscopy
Although immunogold labeling is typically used for transmission electron microscopy, when the gold is 'silver-enhanced' it can be seen using brightfield microscopy. The silver enhancement increases the particle size, also making scanning electron microscopy possible. In order to produce the silver-enhanced gold particles, colloidal gold particles are placed in an acidic enhancing solution containing silver ions. Gold particles then act as a nucleation site and silver is deposited onto the particle. An example of the application of silver-enhanced immunogold labeling (IGSS) was in the identification of the pathogen Erwinia amylovora.
Limitations
An inherent limitation to the immunogold technique is that the gold particle is around 15-30 nm away from the site to which the primary antibody is bound (when using a primary and secondary antibodies labeling strategy). The precise location of the targeted molecule can therefore not be accurately calculated. Gold particles can be created with a diameter of 1 nm (or lower) but another limitation is then realized—at these sizes the gold label becomes hard to distinguish from tissue structure.
Thin sections are required for immunogold labeling and these can produce misleading images; a thin slice of a cell component may not give an accurate view of its three-dimensional structure. For example, a microtubule may appear as a 'spike' depending on which plane the sectioning occurred. To overcome this limitation serial sections can be taken, which can then be compiled into a three-dimensional image.
A further limitation is that antibodies and gold particles cannot penetrate the resin used to embed samples for imaging. Thus, only accessible molecules can be targeted and visualized. Labeling prior to embedding the sample can reduce the negative impact of this limitation.
See also
Immunohistochemistry
References
Biochemistry
Microscopy
Electron microscopy stains | Immunogold labelling | [
"Chemistry",
"Biology"
] | 899 | [
"Biochemistry",
"nan",
"Microscopy"
] |
21,731,590 | https://en.wikipedia.org/wiki/RNA-Seq | RNA-Seq (named as an abbreviation of RNA sequencing) is a technique that uses next-generation sequencing to reveal the presence and quantity of RNA molecules in a biological sample, providing a snapshot of gene expression in the sample, also known as transcriptome.
Specifically, RNA-Seq facilitates the study of alternatively spliced transcripts, post-transcriptional modifications, gene fusion, mutations/SNPs and changes in gene expression over time, or differences in gene expression in different groups or treatments. In addition to mRNA transcripts, RNA-Seq can examine different populations of RNA, including total RNA and small RNAs such as miRNA and tRNA, and can be used for ribosomal profiling. RNA-Seq can also be used to determine exon/intron boundaries and verify or amend previously annotated 5' and 3' gene boundaries. Recent advances in RNA-Seq include single cell sequencing, bulk RNA sequencing, 3' mRNA-sequencing, in situ sequencing of fixed tissue, and native RNA molecule sequencing with single-molecule real-time sequencing. Other emerging RNA-Seq applications, enabled by advances in bioinformatics algorithms, include detection of copy number alteration, microbial contamination and transposable elements, estimation of cell type composition (deconvolution), and identification of neoantigens.
Prior to RNA-Seq, gene expression studies were done with hybridization-based microarrays. Issues with microarrays include cross-hybridization artifacts, poor quantification of lowly and highly expressed genes, and needing to know the sequence a priori. Because of these technical issues, transcriptomics transitioned to sequencing-based methods. These progressed from Sanger sequencing of Expressed sequence tag libraries, to chemical tag-based methods (e.g., serial analysis of gene expression), and finally to the current technology, next-gen sequencing of complementary DNA (cDNA), notably RNA-Seq.
Methods
Library preparation
The general steps to prepare a complementary DNA (cDNA) library for sequencing are described below, but often vary between platforms.
RNA Isolation: RNA is isolated from tissue and mixed with Deoxyribonuclease (DNase). DNase reduces the amount of genomic DNA. The amount of RNA degradation is checked with gel and capillary electrophoresis and is used to assign an RNA integrity number to the sample. This RNA quality and the total amount of starting RNA are taken into consideration during the subsequent library preparation, sequencing, and analysis steps.
RNA selection/depletion: To analyze signals of interest, the isolated RNA can either be kept as is, enriched for RNA with 3' polyadenylated (poly(A)) tails to include only eukaryotic mRNA, depleted of ribosomal RNA (rRNA), and/or filtered for RNA that binds specific sequences (RNA selection and depletion methods table, below). RNA molecules having 3' poly(A) tails in eukaryotes are mainly composed of mature, processed, coding sequences. Poly(A) selection is performed by mixing RNA with oligomers covalently attached to a substrate, typically magnetic beads. Poly(A) selection has important limitations in RNA biotype detection. Many RNA biotypes are not polyadenylated, including many noncoding RNA and histone-core protein transcripts, or are regulated via their poly(A) tail length (e.g., cytokines) and thus might not be detected after poly(A) selection. Furthermore, poly(A) selection may display increased 3' bias, especially with lower quality RNA. These limitations can be avoided with ribosomal depletion, removing rRNA that typically represents over 90% of the RNA in a cell. Both poly(A) enrichment and ribosomal depletion steps are labor intensive and could introduce biases, so more simple approaches have been developed to omit these steps. Small RNA targets, such as miRNA, can be further isolated through size selection with exclusion gels, magnetic beads, or commercial kits.
cDNA synthesis: RNA is reverse transcribed to cDNA because DNA is more stable and to allow for amplification (which uses DNA polymerases) and leverage more mature DNA sequencing technology. Amplification subsequent to reverse transcription results in loss of strandedness, which can be avoided with chemical labeling or single molecule sequencing. Fragmentation and size selection are performed to purify sequences that are the appropriate length for the sequencing machine. The RNA, cDNA, or both are fragmented with enzymes, sonication, divalent ions, or nebulizers. Fragmentation of the RNA reduces 5' bias of randomly primed-reverse transcription and the influence of primer binding sites, with the downside that the 5' and 3' ends are converted to DNA less efficiently. Fragmentation is followed by size selection, where either small sequences are removed or a tight range of sequence lengths are selected. Because small RNAs like miRNAs are lost, these are analyzed independently. The cDNA for each experiment can be indexed with a hexamer or octamer barcode, so that these experiments can be pooled into a single lane for multiplexed sequencing.
Complementary DNA sequencing (cDNA-Seq)
The cDNA library derived from RNA biotypes is then sequenced into a computer-readable format. There are many high-throughput sequencing technologies for cDNA sequencing including platforms developed by Illumina, Thermo Fisher, BGI/MGI, PacBio, and Oxford Nanopore Technologies. For Illumina short-read sequencing, a common technology for cDNA sequencing, adapters are ligated to the cDNA, DNA is attached to a flow cell, clusters are generated through cycles of bridge amplification and denaturing, and sequence-by-synthesis is performed in cycles of complementary strand synthesis and laser excitation of bases with reversible terminators. Sequencing platform choice and parameters are guided by experimental design and cost. Common experimental design considerations include deciding on the sequencing length, sequencing depth, use of single versus paired-end sequencing, number of replicates, multiplexing, randomization, and spike-ins.
Small RNA/non-coding RNA sequencing
When sequencing RNA other than mRNA, the library preparation is modified. The cellular RNA is selected based on the desired size range. For small RNA targets, such as miRNA, the RNA is isolated through size selection. This can be performed with a size exclusion gel, through size selection magnetic beads, or with a commercially developed kit. Once isolated, linkers are added to the 3' and 5' end then purified. The final step is cDNA generation through reverse transcription.
Direct RNA sequencing
Because converting RNA into cDNA, ligation, amplification, and other sample manipulations have been shown to introduce biases and artifacts that may interfere with both the proper characterization and quantification of transcripts, single molecule direct RNA sequencing has been explored by companies including Helicos (bankrupt), Oxford Nanopore Technologies, and others. This technology sequences RNA molecules directly in a massively-parallel manner.
Single-molecule real-time RNA sequencing
Massively parallel single molecule direct RNA-Seq has been explored as an alternative to traditional RNA-Seq, in which RNA-to-cDNA conversion, ligation, amplification, and other sample manipulation steps may introduce biases and artifacts. Technology platforms that perform single-molecule real-time RNA-Seq include Oxford Nanopore Technologies (ONT) Nanopore sequencing, PacBio IsoSeq, and Helicos (bankrupt). Sequencing RNA in its native form preserves modifications like methylation, allowing them to be investigated directly and simultaneously. Another benefit of single-molecule RNA-Seq is that transcripts can be covered in full length, allowing for higher confidence isoform detection and quantification compared to short-read sequencing. Traditionally, single-molecule RNA-Seq methods have higher error rates compared to short-read sequencing, but newer methods like ONT direct RNA-Seq limit errors by avoiding fragmentation and cDNA conversion. Recent uses of ONT direct RNA-Seq for differential expression in human cell populations have demonstrated that this technology can overcome many limitations of short and long cDNA sequencing.
Single-cell RNA sequencing (scRNA-Seq)
Standard methods such as microarrays and standard bulk RNA-Seq analysis analyze the expression of RNAs from large populations of cells. In mixed cell populations, these measurements may obscure critical differences between individual cells within these populations.
Single-cell RNA sequencing (scRNA-Seq) provides the expression profiles of individual cells. Although it is not possible to obtain complete information on every RNA expressed by each cell, due to the small amount of material available, patterns of gene expression can be identified through gene clustering analyses. This can uncover the existence of rare cell types within a cell population that may never have been seen before. For example, rare specialized cells in the lung called pulmonary ionocytes that express the Cystic fibrosis transmembrane conductance regulator were identified in 2018 by two groups performing scRNA-Seq on lung airway epithelia.
Experimental procedures
Current scRNA-Seq protocols involve the following steps: isolation of single cells and RNA, reverse transcription (RT), amplification, library generation and sequencing. Single cells are either mechanically separated into microwells (e.g., BD Rhapsody, Takara ICELL8, Vycap Puncher Platform, or CellMicrosystems CellRaft) or encapsulated in droplets (e.g., 10x Genomics Chromium, Illumina Bio-Rad ddSEQ, 1CellBio InDrop, Dolomite Bio Nadia). Single cells are labeled by adding beads with barcoded oligonucleotides; both cells and beads are supplied in limited amounts such that co-occupancy with multiple cells and beads is a very rare event. Once reverse transcription is complete, the cDNAs from many cells can be mixed together for sequencing; transcripts from a particular cell are identified by each cell's unique barcode. Unique molecular identifiers (UMIs) can be attached to mRNA/cDNA target sequences to help identify artifacts during library preparation.
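The barcode and UMI logic described above reduces, at its core, to demultiplexing reads by cell barcode and collapsing duplicate UMIs before counting. The sketch below illustrates this on a handful of hypothetical reads; real pipelines additionally correct sequencing errors in barcodes and UMIs and work from aligned reads.

```python
from collections import defaultdict

# Each read is reduced to (cell_barcode, UMI, gene); in practice these come
# from barcode extraction and alignment. All values here are hypothetical.
reads = [
    ("AAACCTG", "UMI01", "GAPDH"),
    ("AAACCTG", "UMI01", "GAPDH"),   # PCR duplicate: same cell, UMI and gene
    ("AAACCTG", "UMI02", "GAPDH"),
    ("TTTGGCA", "UMI07", "ACTB"),
]

# Collapse duplicates: one (barcode, UMI, gene) combination = one molecule.
molecules = set(reads)

# Count molecules per cell and per gene.
counts = defaultdict(lambda: defaultdict(int))
for barcode, umi, gene in molecules:
    counts[barcode][gene] += 1

for barcode, genes in sorted(counts.items()):
    print(barcode, dict(genes))
# AAACCTG {'GAPDH': 2}   <- two distinct UMIs, so two molecules
# TTTGGCA {'ACTB': 1}
```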
Challenges for scRNA-Seq include preserving the initial relative abundance of mRNA in a cell and identifying rare transcripts. The reverse transcription step is critical as the efficiency of the RT reaction determines how much of the cell's RNA population will be eventually analyzed by the sequencer. The processivity of reverse transcriptases and the priming strategies used may affect full-length cDNA production and the generation of libraries biased toward the 3’ or 5' end of genes.
In the amplification step, either PCR or in vitro transcription (IVT) is currently used to amplify cDNA. One of the advantages of PCR-based methods is the ability to generate full-length cDNA. However, because PCR efficiency differs between particular sequences (owing, for instance, to GC content and snapback structure), some sequences may be preferentially and exponentially amplified, producing libraries with uneven coverage. On the other hand, while libraries generated by IVT can avoid PCR-induced sequence bias, specific sequences may be transcribed inefficiently, thus causing sequence drop-out or generating incomplete sequences.
Several scRNA-Seq protocols have been published:
Tang et al., STRT, SMART-seq, CEL-seq, RAGE-seq, Quartz-seq and C1-CAGE. These protocols differ in terms of strategies for reverse transcription, cDNA synthesis and amplification, and the possibility to accommodate sequence-specific barcodes (i.e. UMIs) or the ability to process pooled samples.
In 2017, two approaches were introduced to simultaneously measure single-cell mRNA and protein expression through oligonucleotide-labeled antibodies known as REAP-seq, and CITE-seq.
Applications
scRNA-Seq is becoming widely used across biological disciplines including Development, Neurology, Oncology, Autoimmune disease, and Infectious disease.
scRNA-Seq has provided considerable insight into the development of embryos and organisms, including the worm Caenorhabditis elegans, and the regenerative planarian Schmidtea mediterranea. The first vertebrate animals to be mapped in this way were Zebrafish and Xenopus laevis. In each case multiple stages of the embryo were studied, allowing the entire process of development to be mapped on a cell-by-cell basis. Science recognized these advances as the 2018 Breakthrough of the Year.
Experimental considerations
A variety of parameters are considered when designing and conducting RNA-Seq experiments:
Tissue specificity: Gene expression varies within and between tissues, and RNA-Seq measures this mix of cell types. This may make it difficult to isolate the biological mechanism of interest. Single cell sequencing can be used to study each cell individually, mitigating this issue.
Time dependence: Gene expression changes over time, and RNA-Seq only takes a snapshot. Time course experiments can be performed to observe changes in the transcriptome.
Coverage (also known as depth): RNA harbors the same mutations observed in DNA, and detection requires deeper coverage. With high enough coverage, RNA-Seq can be used to estimate the expression of each allele. This may provide insight into phenomena such as imprinting or cis-regulatory effects. The depth of sequencing required for specific applications can be extrapolated from a pilot experiment.
Data generation artifacts (also known as technical variance): The reagents (e.g., library preparation kit), personnel involved, and type of sequencer (e.g., Illumina, Pacific Biosciences) can result in technical artifacts that might be mis-interpreted as meaningful results. As with any scientific experiment, it is prudent to conduct RNA-Seq in a well controlled setting. If this is not possible or the study is a meta-analysis, another solution is to detect technical artifacts by inferring latent variables (typically principal component analysis or factor analysis) and subsequently correcting for these variables.
Data management: A single RNA-Seq experiment in humans is usually 1-5 Gb (compressed), or more when including intermediate files. This large volume of data can pose storage issues. One solution is compressing the data using multi-purpose computational schemas (e.g., gzip) or genomics-specific schemas. The latter can be based on reference sequences or de novo. Another solution is to perform microarray experiments, which may be sufficient for hypothesis-driven work or replication studies (as opposed to exploratory research).
Analysis
Transcriptome assembly
Two methods are used to assign raw sequence reads to genomic features (i.e., assemble the transcriptome):
De novo: This approach does not require a reference genome to reconstruct the transcriptome, and is typically used if the genome is unknown, incomplete, or substantially altered compared to the reference. Challenges when using short reads for de novo assembly include 1) determining which reads should be joined together into contiguous sequences (contigs), 2) robustness to sequencing errors and other artifacts, and 3) computational efficiency. The primary algorithm used for de novo assembly transitioned from overlap graphs, which identify all pair-wise overlaps between reads, to de Bruijn graphs, which break reads into sequences of length k and collapse all k-mers into a hash table. Overlap graphs were used with Sanger sequencing, but do not scale well to the millions of reads generated with RNA-Seq. Examples of assemblers that use de Bruijn graphs are Trinity, Oases (derived from the genome assembler Velvet), Bridger, and rnaSPAdes. Paired-end and long-read sequencing of the same sample can mitigate the deficits in short read sequencing by serving as a template or skeleton. Metrics to assess the quality of a de novo assembly include median contig length, number of contigs and N50.
Genome guided: This approach relies on the same methods used for DNA alignment, with the additional complexity of aligning reads that cover non-continuous portions of the reference genome. These non-continuous reads are the result of sequencing spliced transcripts. Typically, alignment algorithms have two steps: 1) align short portions of the read (i.e., seed the genome), and 2) use dynamic programming to find an optimal alignment, sometimes in combination with known annotations. Software tools that use genome-guided alignment include Bowtie, TopHat (which builds on BowTie results to align splice junctions), Subread, STAR, HISAT2, and GMAP. The output of genome guided alignment (mapping) tools can be further used by tools such as Cufflinks or StringTie to reconstruct contiguous transcript sequences (i.e., a FASTA file). The quality of a genome guided assembly can be measured with both 1) de novo assembly metrics (e.g., N50) and 2) comparisons to known transcript, splice junction, genome, and protein sequences using precision, recall, or their combination (e.g., F1 score). In addition, in silico assessment could be performed using simulated reads.
A note on assembly quality: The current consensus is that 1) assembly quality can vary depending on which metric is used, 2) assembly tools that scored well in one species do not necessarily perform well in the other species, and 3) combining different approaches might be the most reliable.
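As a concrete illustration of the operations and metrics mentioned above, the sketch below extracts k-mers from a read (the basic unit stored by de Bruijn graph assemblers) and computes the N50 of a set of contigs. The read and contig lengths are made up; real assemblers of course do far more than this.

```python
from statistics import median

def kmers(read, k):
    """All k-mers of a read; the basic unit stored by de Bruijn assemblers."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def n50(contig_lengths):
    """Length L such that contigs of length >= L cover at least half of the
    total assembly length."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

print(kmers("ACGTACGT", k=4))                    # ['ACGT', 'CGTA', 'GTAC', ...]
contigs = [9000, 7000, 4000, 3000, 2000, 1000]   # hypothetical contig lengths
print("median contig length:", median(contigs))
print("N50:", n50(contigs))                      # 7000 for this example
```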
Gene expression quantification
Expression is quantified to study cellular changes in response to external stimuli, differences between healthy and diseased states, and other research questions. Transcript levels are often used as a proxy for protein abundance, but these are often not equivalent due to post transcriptional events such as RNA interference and nonsense-mediated decay.
Expression is quantified by counting the number of reads that mapped to each locus in the transcriptome assembly step. Expression can be quantified for exons or genes using contigs or reference transcript annotations. These observed RNA-Seq read counts have been robustly validated against older technologies, including expression microarrays and qPCR. Tools that quantify counts are HTSeq, FeatureCounts, Rcount, maxcounts, FIXSEQ, and Cuffquant. These tools determine read counts from aligned RNA-Seq data, but alignment-free counts can also be obtained with Sailfish and Kallisto. The read counts are then converted into appropriate metrics for hypothesis testing, regressions, and other analyses. Parameters for this conversion are:
Sequencing depth/coverage: Although depth is pre-specified when conducting multiple RNA-Seq experiments, it will still vary widely between experiments. Therefore, the total number of reads generated in a single experiment is typically normalized by converting counts to fragments, reads, or counts per million mapped reads (FPM, RPM, or CPM). The difference between RPM and FPM was historically derived during the evolution from single-end sequencing of fragments to paired-end sequencing. In single-end sequencing, there is only one read per fragment (i.e., RPM = FPM). In paired-end sequencing, there are two reads per fragment (i.e., RPM = 2 x FPM). Sequencing depth is sometimes referred to as library size, the number of intermediary cDNA molecules in the experiment.
Gene length: Longer genes will have more fragments/reads/counts than shorter genes if transcript expression is the same. This is adjusted by dividing the FPM by the length of a feature (which can be a gene, transcript, or exon), resulting in the metric fragments per kilobase of feature per million mapped reads (FPKM). When looking at groups of features across samples, FPKM is converted to transcripts per million (TPM) by dividing each FPKM by the sum of FPKMs within a sample and multiplying by one million (a minimal sketch of these conversions follows this list).
Total sample RNA output: Because the same amount of RNA is extracted from each sample, samples with more total RNA will have less RNA per gene. These genes appear to have decreased expression, resulting in false positives in downstream analyses. Normalization strategies including quantile, DESeq2, TMM and Median Ratio attempt to account for this difference by comparing a set of non-differentially expressed genes between samples and scaling accordingly.
Variance for each gene's expression: is modeled to account for sampling error (important for genes with low read counts), increase power, and decrease false positives. Variance can be estimated as a normal, Poisson, or negative binomial distribution and is frequently decomposed into technical and biological variance.
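The depth and length normalizations described above can be written in a few lines. The sketch below computes CPM, FPKM and TPM for a toy count matrix; all numbers are illustrative and NumPy is assumed as a dependency.

```python
import numpy as np

# Toy count matrix: rows = genes, columns = samples; all numbers illustrative.
counts = np.array([[500., 300.],
                   [100.,  80.],
                   [ 10.,  20.]])
gene_length_kb = np.array([2.0, 0.5, 1.0])       # feature lengths in kilobases

# Counts per million (CPM): normalize for sequencing depth only.
cpm = counts / counts.sum(axis=0) * 1e6

# FPKM: additionally normalize for feature length.
fpkm = cpm / gene_length_kb[:, None]

# TPM: length-normalize first, then rescale each sample to one million.
rate = counts / gene_length_kb[:, None]
tpm = rate / rate.sum(axis=0) * 1e6
# Equivalent to the description above: tpm == fpkm / fpkm.sum(axis=0) * 1e6

print(np.round(tpm, 1))
print(tpm.sum(axis=0))    # each column sums to 1,000,000
```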
Spike-ins for absolute quantification and detection of genome-wide effects
RNA spike-ins are samples of RNA at known concentrations that can be used as gold standards in experimental design and during downstream analyses for absolute quantification and detection of genome-wide effects.
Absolute quantification: Absolute quantification of gene expression is not possible with most RNA-Seq experiments, which quantify expression relative to all transcripts. It is possible by performing RNA-Seq with spike-ins, samples of RNA at known concentrations. After sequencing, read counts of spike-in sequences are used to determine the relationship between each gene's read counts and absolute quantities of biological fragments. In one example, this technique was used in Xenopus tropicalis embryos to determine transcription kinetics.
Detection of genome-wide effects: Changes in global regulators including chromatin remodelers, transcription factors (e.g., MYC), acetyltransferase complexes, and nucleosome positioning are not congruent with normalization assumptions and spike-in controls can offer precise interpretation.
Differential expression
The simplest but often most powerful use of RNA-Seq is finding differences in gene expression between two or more conditions (e.g., treated vs not treated); this process is called differential expression. The outputs are frequently referred to as differentially expressed genes (DEGs) and these genes can either be up- or down-regulated (i.e., higher or lower in the condition of interest). There are many tools that perform differential expression. Most are run in R, Python, or the Unix command line. Commonly used tools include DESeq, edgeR, and voom+limma, all of which are available through R/Bioconductor. These are the common considerations when performing differential expression:
Inputs: Differential expression inputs include (1) an RNA-Seq expression matrix (M genes x N samples) and (2) a design matrix containing experimental conditions for N samples. The simplest design matrix contains one column, corresponding to labels for the condition being tested. Other covariates (also referred to as factors, features, labels, or parameters) can include batch effects, known artifacts, and any metadata that might confound or mediate gene expression. In addition to known covariates, unknown covariates can also be estimated through unsupervised machine learning approaches including principal component, surrogate variable, and PEER analyses. Hidden variable analyses are often employed for human tissue RNA-Seq data, which typically have additional artifacts not captured in the metadata (e.g., ischemic time, sourcing from multiple institutions, underlying clinical traits, collecting data across many years with many personnel).
Methods: Most tools use regression or non-parametric statistics to identify differentially expressed genes, and are either based on read counts mapped to a reference genome (DESeq2, limma, edgeR) or based on read counts derived from alignment-free quantification (sleuth, Cuffdiff, Ballgown). Following regression, most tools employ either familywise error rate (FWER) or false discovery rate (FDR) p-value adjustments to account for multiple hypotheses (in human studies, ~20,000 protein-coding genes or ~50,000 biotypes). A minimal sketch of per-gene testing with FDR correction is shown after this list.
Outputs: A typical output consists of rows corresponding to the number of genes and at least three columns, each gene's log fold change (log-transform of the ratio in expression between conditions, a measure of effect size), p-value, and p-value adjusted for multiple comparisons. Genes are defined as biologically meaningful if they pass cut-offs for effect size (log fold change) and statistical significance. These cut-offs should ideally be specified a priori, but the nature of RNA-Seq experiments is often exploratory so it is difficult to predict effect sizes and pertinent cut-offs ahead of time.
Pitfalls: The raison d'etre for these complex methods is to avoid the myriad of pitfalls that can lead to statistical errors and misleading interpretations. Pitfalls include increased false positive rates (due to multiple comparisons), sample preparation artifacts, sample heterogeneity (like mixed genetic backgrounds), highly correlated samples, unaccounted for multi-level experimental designs, and poor experimental design. One notable pitfall is viewing results in Microsoft Excel without using the import feature to ensure that the gene names remain text. Although convenient, Excel automatically converts some gene names (SEPT1, DEC1, MARCH2) into dates or floating point numbers.
Choice of tools and benchmarking: There are numerous efforts that compare the results of these tools, with DESeq2 tending to moderately outperform other methods. As with other methods, benchmarking consists of comparing tool outputs to each other and known gold standards.
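As a minimal illustration of the workflow described above (expression matrix in, per-gene effect sizes and FDR-adjusted p-values out), the sketch below applies a simple rank-based test per gene to simulated log-CPM values. It is deliberately naive and is not how DESeq2, edgeR or limma model count data; SciPy and statsmodels are assumed as dependencies.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Hypothetical log-CPM expression: 1000 genes, 8 control and 8 treated samples.
n_genes = 1000
control = rng.normal(5, 1, size=(n_genes, 8))
treated = rng.normal(5, 1, size=(n_genes, 8))
treated[:50] += 2                      # 50 genes made truly up-regulated

log_fc, p_values = [], []
for g in range(n_genes):
    log_fc.append(treated[g].mean() - control[g].mean())
    # Rank-based test per gene; dedicated tools instead model raw counts.
    p_values.append(mannwhitneyu(treated[g], control[g],
                                 alternative="two-sided").pvalue)

# Benjamini-Hochberg false discovery rate adjustment across all genes.
reject, q_values, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

degs = [g for g in range(n_genes) if reject[g] and abs(log_fc[g]) > 1]
print(len(degs), "genes pass the effect size and FDR cut-offs")
```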
Downstream analyses for a list of differentially expressed genes come in two flavors, validating observations and making biological inferences. Owing to the pitfalls of differential expression and RNA-Seq, important observations are replicated with (1) an orthogonal method in the same samples (like real-time PCR) or (2) another, sometimes pre-registered, experiment in a new cohort. The latter helps ensure generalizability and can typically be followed up with a meta-analysis of all the pooled cohorts. The most common method for obtaining higher-level biological understanding of the results is gene set enrichment analysis, although sometimes candidate gene approaches are employed. Gene set enrichment determines if the overlap between two gene sets is statistically significant, in this case the overlap between differentially expressed genes and gene sets from known pathways/databases (e.g., Gene Ontology, KEGG, Human Phenotype Ontology) or from complementary analyses in the same data (like co-expression networks). Common tools for gene set enrichment include web interfaces (e.g., ENRICHR, g:profiler, WEBGESTALT) and software packages. When evaluating enrichment results, one heuristic is to first look for enrichment of known biology as a sanity check and then expand the scope to look for novel biology.
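The overlap test at the heart of gene set enrichment can be illustrated with the hypergeometric distribution, which is what a one-sided Fisher's exact test computes. The gene set sizes below are hypothetical and SciPy is assumed; dedicated tools additionally handle background choice, gene identifier mapping and multiple testing across many gene sets.

```python
from scipy.stats import hypergeom

# Hypothetical numbers: 20,000 background genes, a pathway of 150 genes,
# 400 differentially expressed genes, 12 of which fall in the pathway.
background, pathway, deg, overlap = 20000, 150, 400, 12

# Probability of observing an overlap of 12 or more by chance when drawing
# 400 genes at random from the background (one-sided test).
p_value = hypergeom.sf(overlap - 1, background, pathway, deg)
print(f"expected overlap: {pathway * deg / background:.1f}, p = {p_value:.2e}")
```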
Alternative splicing
RNA splicing is integral to eukaryotes and contributes significantly to protein regulation and diversity, occurring in >90% of human genes. There are multiple alternative splicing modes: exon skipping (most common splicing mode in humans and higher eukaryotes), mutually exclusive exons, alternative donor or acceptor sites, intron retention (most common splicing mode in plants, fungi, and protozoa), alternative transcription start site (promoter), and alternative polyadenylation. One goal of RNA-Seq is to identify alternative splicing events and test if they differ between conditions. Long-read sequencing captures the full transcript and thus minimizes many of the issues in estimating isoform abundance, like ambiguous read mapping. For short-read RNA-Seq, there are multiple methods to detect alternative splicing that can be classified into three main groups:
Count-based (also event-based, differential splicing): estimate exon retention. Examples are DEXSeq, MATS, and SeqGSEA.
Isoform-based (also multi-read modules, differential isoform expression): estimate isoform abundance first, and then relative abundance between conditions. Examples are Cufflinks 2 and DiffSplice.
Intron excision based: calculate alternative splicing using split reads. Examples are MAJIQ and Leafcutter.
Differential gene expression tools can also be used for differential isoform expression if isoforms are quantified ahead of time with other tools like RSEM.
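A common event-level summary used by count-based methods is the "percent spliced in" (PSI) of a cassette exon, computed from junction-spanning read counts. The sketch below shows the basic calculation with hypothetical counts; real tools such as those named above use replicate-aware statistical models and account for junction lengths.

```python
# Percent spliced in (PSI) for an exon-skipping event, computed from junction
# read counts. All counts are hypothetical.
def psi(inclusion_reads, exclusion_reads):
    """Fraction of transcripts that include the cassette exon."""
    # Two junctions support inclusion (upstream and downstream) while one
    # supports skipping, so inclusion counts are averaged over two junctions.
    inclusion = inclusion_reads / 2
    return inclusion / (inclusion + exclusion_reads)

print(psi(inclusion_reads=80, exclusion_reads=10))   # 0.8 in condition A
print(psi(inclusion_reads=20, exclusion_reads=40))   # 0.2 in condition B
```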
Coexpression networks
Coexpression networks are data-derived representations of genes behaving in a similar way across tissues and experimental conditions. Their main purpose lies in hypothesis generation and guilt-by-association approaches for inferring functions of previously unknown genes. RNA-Seq data has been used to infer genes involved in specific pathways based on Pearson correlation, both in plants and mammals. The main advantage of RNA-Seq data in this kind of analysis over the microarray platforms is the capability to cover the entire transcriptome, therefore allowing the possibility to unravel more complete representations of the gene regulatory networks. Differential regulation of the splice isoforms of the same gene can be detected and used to predict their biological functions.
Weighted gene co-expression network analysis has been successfully used to identify co-expression modules and intramodular hub genes based on RNA seq data. Co-expression modules may correspond to cell types or pathways. Highly connected intramodular hubs can be interpreted as representatives of their respective module. An eigengene is a weighted sum of expression of all genes in a module. Eigengenes are useful biomarkers (features) for diagnosis and prognosis. Variance-Stabilizing Transformation approaches for estimating correlation coefficients based on RNA seq data have been proposed.
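A minimal version of the ideas above is sketched below: pairwise Pearson correlations define the coexpression network, and a module eigengene is taken as the first principal component of the standardized module expression. The data are random and a hard correlation threshold is used for simplicity; WGCNA itself uses soft thresholding and more elaborate module detection.

```python
import numpy as np

rng = np.random.default_rng(1)
expression = rng.normal(size=(50, 20))     # 50 genes x 20 samples, random data

# Coexpression: pairwise Pearson correlation between gene expression profiles.
correlation = np.corrcoef(expression)      # 50 x 50 matrix

# Hard threshold for simplicity; WGCNA uses a soft (power) threshold instead.
adjacency = (np.abs(correlation) > 0.6).astype(int)
n_edges = np.triu(adjacency, k=1).sum()    # gene pairs above the threshold

# Module eigengene: first principal component of the standardized expression
# of the genes in a module (here simply genes 0-9).
module = expression[:10]
centered = module - module.mean(axis=1, keepdims=True)
standardized = centered / module.std(axis=1, keepdims=True)
_, _, vt = np.linalg.svd(standardized, full_matrices=False)
eigengene = vt[0]                          # one value per sample

print(n_edges, "edges;", "eigengene shape:", eigengene.shape)
```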
Variant discovery
RNA-Seq captures DNA variation, including single nucleotide variants, small insertions/deletions, and structural variation. Variant calling in RNA-Seq is similar to DNA variant calling and often employs the same tools (including SAMtools mpileup and GATK HaplotypeCaller) with adjustments to account for splicing. One unique dimension for RNA variants is allele-specific expression (ASE): the variants from only one haplotype might be preferentially expressed due to regulatory effects including imprinting and expression quantitative trait loci, and noncoding rare variants. Limitations of RNA variant identification include that it only reflects expressed regions (in humans, <5% of the genome), could be subject to biases introduced by data processing (e.g., de novo transcriptome assemblies underestimate heterozygosity), and has lower quality when compared to direct DNA sequencing.
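A simple way to illustrate an allele-specific expression test is a binomial test of the reference and alternative read counts over a heterozygous site against an expected ratio of 0.5. The counts below are hypothetical and SciPy is assumed; production analyses additionally correct for reference-mapping bias and genotyping error.

```python
from scipy.stats import binomtest

# Hypothetical RNA-Seq read counts over a heterozygous site.
ref_reads, alt_reads = 72, 28

# Under balanced expression of both alleles, ref reads ~ Binomial(n, 0.5).
result = binomtest(ref_reads, n=ref_reads + alt_reads, p=0.5)
print(f"allelic ratio {ref_reads / (ref_reads + alt_reads):.2f}, "
      f"p = {result.pvalue:.3g}")
```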
RNA editing (post-transcriptional alterations)
Having the matching genomic and transcriptomic sequences of an individual can help detect post-transcriptional edits (RNA editing). A post-transcriptional modification event is identified if the gene's transcript has an allele/variant not observed in the genomic data.
Fusion gene detection
Caused by different structural modifications in the genome, fusion genes have gained attention because of their relationship with cancer. The ability of RNA-Seq to analyze a sample's whole transcriptome in an unbiased fashion makes it an attractive tool to find these kinds of common events in cancer.
The idea follows from the process of aligning the short transcriptomic reads to a reference genome. Most of the short reads will fall within one complete exon, and a smaller but still large set would be expected to map to known exon-exon junctions. The remaining unmapped short reads would then be further analyzed to determine whether they match an exon-exon junction where the exons come from different genes. This would be evidence of a possible fusion event; however, because of the length of the reads, this could prove to be very noisy. An alternative approach is to use paired-end reads, in which a potentially large number of read pairs would map each end to a different exon, giving better coverage of these events. Nonetheless, the end result consists of multiple and potentially novel combinations of genes, providing an ideal starting point for further validation.
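The paired-end idea can be sketched as counting "discordant" read pairs whose mates map to different genes and requiring a minimum level of support before nominating a candidate. The gene assignments below are illustrative only; real fusion callers add many filters (read-through transcription, homologous genes, mapping quality) before reporting candidates.

```python
from collections import Counter

# Each read pair reduced to the genes its two mates align to (hypothetical).
read_pairs = [
    ("TMPRSS2", "ERG"),
    ("TMPRSS2", "ERG"),
    ("TMPRSS2", "ERG"),
    ("GAPDH", "GAPDH"),      # concordant pair, both mates in the same gene
    ("ACTB", "GAPDH"),       # isolated discordant pair, likely noise
]

# Keep only discordant pairs (mates in different genes) and count them,
# treating (A, B) and (B, A) as the same candidate fusion.
candidates = Counter(tuple(sorted(pair))
                     for pair in read_pairs if pair[0] != pair[1])

# Require a minimum number of supporting pairs before reporting a candidate.
for genes, support in candidates.items():
    if support >= 2:
        print(f"candidate fusion {genes[0]}-{genes[1]}: "
              f"{support} supporting pairs")
```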
Copy number alteration
Copy number alteration (CNA) analyses are commonly used in cancer studies. Gain and loss of genes have signalling pathway implications and are key biomarkers of molecular dysfunction in oncology. Calling CNA information from RNA-Seq data is not straightforward because differences in gene expression lead to read depth variance of different magnitudes across genes. Due to these difficulties, most of these analyses are usually done using whole-genome or whole-exome sequencing (WGS/WES), but advanced bioinformatics tools can call CNAs from RNA-Seq data.
Other emerging analysis and applications
The applications of RNA-Seq continue to grow. Other emerging applications include detection of microbial contaminants, determination of cell type abundance (cell type deconvolution), measurement of transposable element (TE) expression, and neoantigen prediction.
History
RNA-Seq was first developed in the mid-2000s with the advent of next-generation sequencing technology. The first manuscripts that used RNA-Seq, even without using the term, include those on prostate cancer cell lines (dated 2006), Medicago truncatula (2006), maize (2007), and Arabidopsis thaliana (2007), while the term "RNA-Seq" itself was first mentioned in 2008. The number of manuscripts referring to RNA-Seq in the title or abstract is continuously increasing, with 6,754 manuscripts published in 2018. The intersection of RNA-Seq and medicine has grown at a similar rate.
Applications to medicine
RNA-Seq has the potential to identify new disease biology, profile biomarkers for clinical indications, infer druggable pathways, and make genetic diagnoses. These results could be further personalized for subgroups or even individual patients, potentially highlighting more effective prevention, diagnostics, and therapy. The feasibility of this approach is in part dictated by costs in money and time; a related limitation is the required team of specialists (bioinformaticians, physicians/clinicians, basic researchers, technicians) to fully interpret the huge amount of data generated by this analysis.
Large-scale sequencing efforts
A lot of emphasis has been given to RNA-Seq data after the Encyclopedia of DNA Elements (ENCODE) and The Cancer Genome Atlas (TCGA) projects used this approach to characterize dozens of cell lines and thousands of primary tumor samples, respectively. ENCODE aimed to identify genome-wide regulatory regions in different cohorts of cell lines, and transcriptomic data are paramount to understanding the downstream effect of those epigenetic and genetic regulatory layers. TCGA, instead, aimed to collect and analyze thousands of patient samples from 30 different tumor types to understand the underlying mechanisms of malignant transformation and progression. In this context, RNA-Seq data provide a unique snapshot of the transcriptomic status of the disease and look at an unbiased population of transcripts, allowing the identification of novel transcripts, fusion transcripts and non-coding RNAs that could go undetected with different technologies.
See also
Transcriptomics
DNA microarray
List of RNA-Seq bioinformatics tools
References
Further reading
External links
: a high-level guide to designing and implementing an RNA-Seq experiment.
Molecular biology
RNA
Gene expression
RNA sequencing | RNA-Seq | [
"Chemistry",
"Biology"
] | 7,374 | [
"Genetics techniques",
"Gene expression",
"RNA sequencing",
"Molecular biology techniques",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
26,096,136 | https://en.wikipedia.org/wiki/Johari%E2%80%93Goldstein%20relaxation | Johari–Goldstein relaxation, also known as the JG β-relaxation, is a universal property of glasses and certain other disordered materials.
Proposed in 1969 by Martin Goldstein, the JG β-relaxation was described as a secondary relaxation mechanism required to explain the viscosity behavior of liquids approaching the glass transition in the potential energy landscape picture presented in Goldstein's seminal 1969 paper. Previous experiments on glass-forming liquids had shown multiple relaxation times in liquids, measured by time-dependent compliance measurements. Gyan Johari and Martin Goldstein measured the dielectric loss spectrum of a set of rigid glass-forming molecules to further test Goldstein's 1969 hypothesis. The relaxation, a peak in mechanical or dielectric loss at a particular frequency, had previously been attributed to a type of molecular flexibility. The fact that such a loss peak shows up in glasses of rigid molecules lacking this flexibility demonstrated its universal character.
The JG β-relaxation process is speculated to be a precursor of the structural α-relaxation, i.e., its occurrence facilitates viscous flow, however the microscopic mechanism of the β-relaxation has not been definitively identified.
Evidence for the universality and importance of the J. G. β-relaxation
Johari determined the temperature dependence of the α-relaxation and β-relaxation as a function of frequency by measuring the dielectric loss ε″ as a function of frequency at multiple temperatures. Two peaks were observed, with the lower-frequency peak attributed to the structural α-relaxation and the higher-frequency peak related to the fast (high-frequency, short-time) β-relaxation. The peak in the high-frequency ε″ response of the β-relaxation has also been shown to broaden and shift to lower frequencies. Furthermore, the α-relaxation peak changes more rapidly on cooling than the JG β-relaxation: the α-relaxation times diverge following the VFT law as the glass transition temperature (Tg) is approached, which is much faster than the Arrhenius temperature dependence observed for the peak in the β-relaxation curve over the same temperature range.
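The contrast between the two temperature dependences can be made concrete by comparing the Vogel-Fulcher-Tammann (VFT) form often used for the α-relaxation, τ = τ0 exp[B/(T − T0)], with the Arrhenius form typically reported for the JG β-relaxation, τ = τ0 exp(Ea/kBT). The parameter values in the sketch below are arbitrary illustrative choices, not fitted data for any real glass former.

```python
# Illustrative comparison of VFT (alpha-relaxation) and Arrhenius (JG beta-relaxation)
# temperature dependences; all parameter values below are arbitrary examples.
import math

def tau_vft(T, tau0=1e-14, B=2000.0, T0=170.0):
    """VFT relaxation time (s): diverges as T approaches T0 from above."""
    return tau0 * math.exp(B / (T - T0))

def tau_arrhenius(T, tau0=1e-14, Ea_over_kB=6000.0):
    """Arrhenius relaxation time (s): activation energy given as Ea/kB in kelvin."""
    return tau0 * math.exp(Ea_over_kB / T)

for T in (300, 250, 220, 200):  # temperatures in kelvin, approaching T0 from above
    print(f"T = {T} K: tau_alpha ~ {tau_vft(T):.3e} s, tau_beta ~ {tau_arrhenius(T):.3e} s")
```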
Relation to other relaxation mechanism
The J.G. β-relaxation was proposed on the basis of the theoretical predictions of Martin Goldstein in his seminal 1969 paper discussing the potential energy landscape picture and activated energy barrier hopping for viscous liquids. These developments have often focused on understanding secondary relaxations below Tg that are present in small-molecule and metallic glasses. Polymer glasses also show multiple relaxation mechanisms at temperatures below Tg, with β, γ, and δ relaxations having been measured well below Tg into the glassy state. However, the exact molecular mechanism for these relaxations is often subject to debate, and how they may relate to J.G. β-relaxations has not been established in the literature.
References
Further reading
External links
Aging of the Johari-Goldstein relaxation in the glass-forming liquids sorbitol and xylitol
Interdependence of Primary and Johari-Goldstein Secondary Relaxations in Glass-Forming Systems
Merging of The α and β relaxations and aging via the Johari–Goldstein modes in rapidly quenched metallic glasses
Phase transitions
Critical phenomena
Glass | Johari–Goldstein relaxation | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 634 | [
"Physical phenomena",
"Phase transitions",
"Glass",
"Unsolved problems in physics",
"Critical phenomena",
"Phases of matter",
"Homogeneous chemical mixtures",
"Condensed matter physics",
"Statistical mechanics",
"Amorphous solids",
"Matter",
"Dynamical systems"
] |
26,099,835 | https://en.wikipedia.org/wiki/C5H8N2O5 | {{DISPLAYTITLE:C5H8N2O5}}
The molecular formula C5H8N2O5 (molar mass: 176.13 g/mol, exact mass: 176.0433 u) may refer to:
Carbamoyl aspartic acid (or ureidosuccinic acid)
Oxalyldiaminopropionic acid | C5H8N2O5 | [
"Chemistry"
] | 81 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
26,099,892 | https://en.wikipedia.org/wiki/C6H10N2O5 | {{DISPLAYTITLE:C6H10N2O5}}
The molecular formula C6H10N2O5 may refer to:
ADA (buffer), a zwitterionic organic chemical buffering agent
Carglumic acid, an orphan drug marketed by Orphan Europe under the trade name Carbaglu | C6H10N2O5 | [
"Chemistry"
] | 69 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
26,102,914 | https://en.wikipedia.org/wiki/Head%20grammar | Head grammar (HG) is a grammar formalism introduced in Carl Pollard (1984) as an extension of the context-free grammar class of grammars. Head grammar is therefore a type of phrase structure grammar, as opposed to a dependency grammar. The class of head grammars is a subset of the linear context-free rewriting systems.
One typical way of defining head grammars is to replace the terminal strings of CFGs with indexed terminal strings, where the index denotes the "head" word of the string. Thus, for example, a CF rule such as X → abc might instead be X → (abc, 0), where the 0th terminal, the a, is the head of the resulting terminal string. For convenience of notation, such a rule could be written as just the terminal string, with the head terminal denoted by some sort of mark, as in X → âbc.
Two fundamental operations are then added to all rewrite rules: wrapping and concatenation.
Operations on headed strings
Wrapping
Wrapping is an operation on two headed strings defined as follows:
Let and be terminal strings headed by x and y, respectively.
Concatenation
Concatenation is a family of operations on n > 0 headed strings, defined for n = 1, 2, 3 as follows:
Let , , and be terminal strings headed by x, y, and z, respectively.
And so on for n > 3. One can sum up the pattern here simply as "concatenate some number of terminal strings m, with the head of string n designated as the head of the resulting string".
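A minimal way to make the concatenation family concrete is to represent a headed string as a sequence of terminals plus the index of its head. The sketch below implements c_{m,n} exactly as summarized in the previous sentence (concatenate m headed strings and keep the head of the n-th); the representation, the 0-based indexing, and the function names are illustrative choices, not Pollard's original notation.

```python
# Minimal sketch: headed strings as (terminals, head_index); the concatenation
# family c_{m,n} joins m headed strings and designates the head of string n
# (0-based here) as the head of the result.
from typing import List, Tuple

Headed = Tuple[List[str], int]  # (sequence of terminals, index of the head terminal)

def concat(strings: List[Headed], n: int) -> Headed:
    """c_{m,n}: concatenate the m given headed strings, head taken from string n."""
    out: List[str] = []
    head_index = 0
    for i, (terminals, h) in enumerate(strings):
        if i == n:
            head_index = len(out) + h
        out.extend(terminals)
    return out, head_index

# Example: concatenating the headed strings "a", "bc" (head b), "d" with n = 1
# yields "abcd" with head "b".
result, head = concat([(["a"], 0), (["b", "c"], 0), (["d"], 0)], n=1)
print("".join(result), "head:", result[head])  # abcd head: b
```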
Form of rules
Head grammar rules are defined in terms of these two operations, with rules taking either of the forms
where the arguments are each either a terminal string or a non-terminal symbol.
Example
Head grammars are capable of generating the language { aⁿbⁿcⁿdⁿ : n ≥ 0 }. We can define the grammar as follows:
The derivation for "abcd" is thus:
And for "":
Formal properties
Equivalencies
Vijay-Shanker and Weir (1994) demonstrate that linear indexed grammars, combinatory categorial grammar, tree-adjoining grammars, and head grammars are weakly equivalent formalisms, in that they all define the same string languages.
References
Formal languages
Grammar frameworks
Syntax | Head grammar | [
"Mathematics"
] | 443 | [
"Formal languages",
"Mathematical logic"
] |
26,102,963 | https://en.wikipedia.org/wiki/Cybernetical%20physics | Cybernetical physics is a scientific area on the border of cybernetics and physics which studies physical systems with cybernetical methods. Cybernetical methods are understood as methods developed within control theory, information theory, systems theory and related areas: control design, estimation, identification, optimization, pattern recognition, signal processing, image processing, etc. Physical systems are also understood in a broad sense; they may be either lifeless, living nature or of artificial (engineering) origin, and must have reasonably understood dynamics and models suitable for posing cybernetical problems. Research objectives in cybernetical physics are frequently formulated as analyses of a class of possible system state changes under external (controlling) actions of a certain class. An auxiliary goal is designing the controlling actions required to achieve a prespecified property change. Among typical control action classes are functions which are constant in time (bifurcation analysis, optimization), functions which depend only on time (vibration mechanics, spectroscopic studies, program control), and functions whose value depends on measurement made at the same time or on previous instances. The last class is of special interest since these functions correspond to system analysis by means of external feedback (feedback control).
Roots of cybernetical physics
Until recently no creative interaction of physics and control theory (cybernetics) had been seen and no control theory methods were directly used for discovering new physical effects and phenomena. The situation dramatically changed in the 1990s when two new areas emerged: control of chaos and quantum control.
Control of chaos
In 1990 a paper was published in Physical Review Letters by Edward Ott, Celso Grebogi and James Yorke from the University of Maryland reporting that even small feedback action can dramatically change the behavior of a nonlinear system, e.g., turn chaotic motions into periodic ones and vice versa. The idea almost immediately became popular in the physics community, and since 1990 hundreds of papers have been published demonstrating the ability of small control, with or without feedback, to significantly change the dynamics of real or model systems. By 2003, this paper by Ott, Grebogi and Yorke had been cited over 1300 times, while the total number of papers relating to control of chaos exceeded 4000 by the beginning of the 21st century, with 300-400 papers per year being published in peer-reviewed journals. The method proposed in that paper is now called the OGY method after the authors' initials.
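The flavor of such small-feedback control can be illustrated on the logistic map, where a tiny, state-dependent adjustment of the parameter is applied only when the trajectory comes close to the unstable fixed point. This is a simplified, one-dimensional sketch in the spirit of the OGY idea, not a reproduction of the original algorithm; the map, parameter values, control window, and gain below are illustrative choices.

```python
# Simplified one-dimensional illustration in the spirit of OGY chaos control:
# a small parameter perturbation, applied only near the unstable fixed point of
# the logistic map x -> r*x*(1-x), stabilizes an otherwise chaotic orbit.
r0 = 3.9                      # nominal parameter (chaotic regime); illustrative value
x_star = 1.0 - 1.0 / r0       # unstable fixed point of the map
lam = 2.0 - r0                # local slope df/dx at the fixed point
w = x_star * (1.0 - x_star)   # sensitivity df/dr at the fixed point
window = 0.02                 # activate control only inside this small window

x = 0.3
captured_at = None
for n in range(1, 501):
    dr = 0.0
    if abs(x - x_star) < window:
        dr = -(lam / w) * (x - x_star)   # cancel the deviation to first order
        captured_at = captured_at or n   # remember when control first engaged
    x = (r0 + dr) * x * (1.0 - x)        # |dr| stays small relative to r0

print(f"control first engaged at step {captured_at}; final x = {x:.5f} (target {x_star:.5f})")
```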
Later, a number of other methods were proposed for transforming chaotic trajectories into periodic ones, for example delayed feedback (the Pyragas method). Numerous nonlinear and adaptive control methods have also been applied to the control of chaos; see the surveys in the literature.
It is important that the results obtained were interpreted as discovering new properties of physical systems. Thousands of papers were published that examine and predict properties of systems based on the use of control, identification and other cybernetic methods. Notably, most of those papers were published in physical journals, their authors representing university physics departments. It has become clear that such types of control goals are important not only for the control of chaos, but also for the control of a broader class of oscillatory processes. This provides evidence for the existence of an emerging field of research related to both physics and control, that of "cybernetical physics".
Quantum control
It is conceivable that molecular physics was the area where ideas of control first appeared. James Clerk Maxwell introduced a hypothetical being, known as Maxwell's demon, with the ability to measure the velocities of gas molecules in a vessel and to direct the fast molecules to one part of the vessel while keeping the slow molecules in another part. This produces a temperature difference between the two parts of the vessel, which seems to contradict the second law of thermodynamics. Now, after more than a century of fruitful life, this demon is even more active than in the past. Recent papers discussed issues relating to the experimental implementation of Maxwell's demon, particularly at the quantum-mechanical level.
At the end of the 1970s the first mathematical results for the control of quantum mechanical models, based on control theory, appeared.
At the end of the 1980s and beginning of the 1990s rapid developments in the laser industry led to the appearance of ultrafast, so-called femtosecond lasers. This new generation of lasers has the ability to generate pulses with durations of a few femtoseconds and even less (1 fs = 10^-15 s). The duration of such a pulse is comparable with the period of a molecule's natural oscillation. Therefore, a femtosecond laser can, in principle, be used as a means of controlling single molecules and atoms. A consequence of such an application is the possibility of realizing the alchemists' dream of changing the natural course of chemical reactions. A new area in chemistry emerged, femtochemistry, and new femtotechnologies were developed. Ahmed Zewail from Caltech was awarded the 1999 Nobel Prize in Chemistry for his work on femtochemistry.
Using modern control theory, new horizons may open for studying the interaction of atoms and molecules, and new ways and possible limits may be discovered for intervening in the intimate processes of the microworld. Besides, control is an important part of many recent nanoscale applications, including nanomotors, nanowires, nanochips, nanorobots, etc. The number of publications in peer-reviewed journals exceeds 600 per year.
Control thermodynamics
The basics of thermodynamics were stated by Sadi Carnot in 1824. He considered a heat engine which operates by drawing heat from a source which is at thermal equilibrium at temperature T1, and delivering useful work. Carnot saw that, in order to operate continuously, the engine also requires a cold reservoir with temperature T2, to which some heat can be discharged. By simple logic he established the famous
Carnot Principle: "No heat engine can be more efficient than a reversible one operating between the same temperatures."
In fact it was nothing but the solution to an optimal control problem: maximum work can be extracted by a reversible machine and the value of extracted work depends only on the temperatures of the source and the bath. Later, Kelvin introduced his absolute temperature scale (Kelvin scale) and accomplished the next step, evaluating Carnot's reversible efficiency as η = 1 − T2/T1.
However, most work was devoted to studying stationary systems over infinite time intervals, while for practical purposes it is important to know the possibilities and limitations of the system's evolution for finite times as well as under other types of constraints caused by a finite amount of available resources.
The pioneering work devoted to evaluating finite-time limitations for heat engines was published by I. Novikov in 1957, and independently by F.L. Curzon and B. Ahlborn in 1975: the efficiency at maximum power per cycle of a heat engine coupled to its surroundings through a constant heat conductor is η = 1 − √(T2/T1) (the Novikov-Curzon-Ahlborn formula). The Novikov-Curzon-Ahlborn process is also optimal in the sense of minimal dissipation. Otherwise, if the dissipation degree is given, the process corresponds to the maximum entropy principle. Later, the results were extended and generalized for other criteria and for more complex situations based on modern optimal control theory. As a result, a new direction in thermodynamics arose, known under the names "optimization thermodynamics", "finite-time thermodynamics", endoreversible thermodynamics or "control thermodynamics".
Subject and methodology of cybernetical physics
By the end of the 1990s it had become clear that a new area in physics dealing with control methods had emerged. The term "cybernetical physics" was subsequently proposed, and the subject and methodology of the field have been systematically presented in the literature.
A description of the control problems related to cybernetical physics includes classes of controlled plant models, control objectives (goals) and admissible control algorithms. The methodology of cybernetical physics comprises typical methods used for solving problems and typical results in the field.
Models of controlled systems
A formal statement of any control problem begins with a model of the system to be
controlled (plant) and a model of the control objective (goal). Even if the plant model is not given (the case in many real world applications) it should be determined in some way. The system models used in cybernetics are similar to traditional models of physics and mechanics with one difference: the inputs and outputs of the model should be explicitly specified. The following main classes of models are considered in the literature related to control of physical systems: continuous systems with lumped parameters described in state space by differential equations, distributed (spatio-temporal) systems described by partial differential equations, and discrete-time state-space models described by difference equations.
Control goals
It is natural to classify control problems by their control goals. Five kinds are listed below.
Regulation (often called stabilization or positioning) is the most common and simple control goal. Regulation is understood as driving the state vector x (or the output vector y) to some equilibrium state x* (respectively, y*).
Tracking. State tracking is driving a solution x(t) to a prespecified function of time x*(t). Similarly, output tracking is driving the output y(t) to the desired output function y*(t). The problem is more complex if the desired equilibrium or trajectory is unstable in the absence of control action. For example, a typical problem of chaos control can be formulated as tracking an unstable periodic solution (orbit). The key feature of control problems for physical systems is that the goal should be achieved by means of sufficiently small control. A limit case is stabilization of a system by an arbitrarily small control. The solvability of this task is not obvious if the trajectory is unstable, for example in the case of chaotic systems.
Generation (excitation) of oscillations. The third class of control goals corresponds to the problems of "excitation" or "generation" of oscillations. Here, it is assumed that the system is initially at rest. The problem is to find out if it is possible to drive it into an oscillatory mode with the desired characteristics (energy, frequency, etc.) In this case the goal trajectory of the state vector is not prespecified. Moreover, the goal trajectory may be unknown, or may even be irrelevant to the achievement of the control goal. Such problems are well known in electrical, radio engineering, acoustics, laser, and vibrational technologies, and indeed wherever it is necessary to create an oscillatory mode for a system. Such a class of control goals can be related to problems of dissociation, ionization of molecular systems, escape from a potential well, chaotization, and other problems related to the growth of the system energy and its possible phase transition. Sometimes such problems can be reduced to tracking, but the reference trajectories in these cases are not necessarily periodic and may be unstable. Besides, the goal trajectory may be known only partially.
Synchronization. The fourth important class of control goals corresponds to synchronization (more accurately, "controlled synchronization" as distinct from
"autosynchronization" or "self-synchronization"). Generally speaking, synchronization is understood as concurrent change of the states of two or more systems or, perhaps, concurrent change of some quantities related to the systems, e.g., equalizing of oscillation frequencies. If the required relation is established only asymptotically, one speaks of "asymptotic synchronization". If synchronization does not exist in the system without control the problem may be posed as finding the control function which ensures synchronization in the closed-loop system, i.e., synchronization may be a control goal. Synchronization problem differs from the model reference control problem in that some phase shifts between the processes are allowed that are either constant or tend to constant values. Besides, in a number of synchronization problems the links between the systems to be synchronized are bidirectional. In such cases the limit mode (synchronous mode) in the overall system is not known in advance.
Modification of the limit sets (attractors) of the systems. The last class of control goals is related to the modification of some quantitative characteristics that limit the behavior of the system. It includes such specific goals as
changing the type of the equilibrium (e.g., transforming an unstable equilibrium into a stable one, or vice versa);
changing the type of the limit set (e.g., transforming a limit cycle into a chaotic attractor, or vice versa, changing the fractal dimension of the limit set, etc.);
changing the position or the type of the bifurcation point in the parameter space of the system.
Investigation of the above problems started at the end of the 1980s with work on bifurcation control and continued with work on the control of chaos. Ott, Grebogi and Yorke and their followers introduced a new class of control goals not requiring any quantitative characteristic of the desired motion. Instead, the desired qualitative type of the limit set (attractor) was specified, e.g., control should provide the system with a chaotic attractor. Additionally, the desired degree of chaoticity may be specified by prescribing the Lyapunov exponent, fractal dimension, entropy, etc.
In addition to the main control goal, some additional goals or constraints may be specified. A typical example is the "small control" requirement: the control function should have little power or should require a small expenditure of energy. Such a restriction is needed to avoid "violence" and preserve the inherent properties of the system under control. This is important for ensuring the elimination of artefacts and for an adequate study of the system.
Three types of control are used in physical problems: constant control, feedforward control and feedback control. Implementation of a feedback control requires additional measurement devices working in real time, which are often hard to install. Therefore, studying the system may start with the application of inferior forms of control: time-constant and then feedforward control. The possibilities of changing system behavior by means of feedback control can then be studied.
Methodology
The methodology of cybernetical physics is based on control theory. Typically, some parameters of physical systems are unknown and some variables are not available for measurement. From the control viewpoint this means that control design should be
performed under significant uncertainty, i.e., methods of robust control or adaptive control should be used. A variety of design methods have been developed by control theorists and control engineers for both linear and nonlinear systems. Methods of partial control, control by weak signals, etc. have also been developed.
Fields of research and prospects
Currently, an interest in applying control methods in physics is still growing.
The following areas of research are being actively developed:
Control of oscillations
Control of synchronization
Control of chaos, bifurcations
Control of phase transitions, stochastic resonance
Optimal control in thermodynamics
Control of micromechanical, molecular and quantum systems
Among the most important applications are: control of fusion, control of beams, control in nano- and femto-technologies.
In order to facilitate information exchange in the area of cybernetical physics the International Physics and Control Society (IPACS) was created.
IPACS organizes regular conferences (Physics and Control Conferences) and supports an electronic library, IPACS Electronic Library and an information portal, Physics and Control Resources.
See also
Maxwell's demon
References
External links
Portal, Physics and Control Resources
IPACS Electronic Library
International Physics and Control Society (IPACS)
Applied and interdisciplinary physics
Computational physics
Cybernetics | Cybernetical physics | [
"Physics"
] | 3,244 | [
"Applied and interdisciplinary physics",
"Computational physics"
] |
26,103,936 | https://en.wikipedia.org/wiki/Integrated%20Biological%20Detection%20System | The Integrated Biological Detection System is a system used by the British Army and Royal Air Force for detecting Chemical, biological, radiological, and nuclear agents or elements.
The Integrated Biological Detection System can provide early warning of a chemical or biological warfare attack and is in service with the United Kingdom Joint NBC Regiment. It can be installed in a container which can be mounted on a vehicle or ground dumped. It is also able to be transported by either a fixed-wing aircraft or by helicopter.
The system comprises
A detection suite, including equipment for atmospheric sampling
Meteorological station and GPS
CBRN filtration and environmental control for use in all climates
Chemical agent detection
An independent power supply
Cameras for 360 degree surveillance
A U.S. military system with a similar purpose and a similar name is the Biological Integrated Detection System (BIDS).
References
Biological warfare
British Army equipment
Chemical warfare
Toxicology in the United Kingdom | Integrated Biological Detection System | [
"Chemistry",
"Biology",
"Environmental_science"
] | 178 | [
"Toxicology in the United Kingdom",
"Toxicology",
"nan",
"Biological warfare"
] |
26,104,444 | https://en.wikipedia.org/wiki/Water%20tariff | A water tariff (often called water rate in the United States and Canada) is a price assigned to water supplied by a public utility through a piped network to its customers. The term is also often applied to wastewater tariffs. Water and wastewater tariffs are not charged for water itself, but to recover the costs of water treatment, water storage, transporting it to customers, collecting and treating wastewater, as well as billing and collection. Prices paid for water itself are different from water tariffs. They exist in a few countries and are called water abstraction charges or fees. Abstraction charges are not covered in this article, but in the article on water pricing). Water tariffs vary widely in their structure and level between countries, cities and sometimes between user categories (residential, commercial, industrial or public buildings). The mechanisms to adjust tariffs also vary widely.
Most water utilities in the world are publicly owned, but some are privately owned or managed (see water privatization). Utilities are network industries and natural monopolies. Economic theory predicts that unregulated private utilities set the price of their product at a level that allows them to extract a monopoly profit. However, in reality tariffs charged by utilities are regulated. They can be set below costs, at the level of cost recovery without a return on capital, or at the level of cost recovery including a predetermined rate of return on capital. In many developing countries tariffs are set below the level of cost recovery, even without considering a rate of return on capital. This often leads to a lack of maintenance and requires significant subsidies for both investment and operation. In developed countries water and, to a lesser degree, wastewater tariffs are typically set close to or at the level of cost recovery, sometimes including an allowance for profit.
Criteria for tariff setting
Water tariffs are set based on a number of formal criteria defined by law, as well as informal criteria. Formal criteria typically include:
financial criteria (cost recovery),
economic criteria (efficiency pricing based on marginal cost) and sometimes
environmental criteria (incentives for water conservation).
Social and political considerations often are also important in setting tariffs. Tariff structure and levels are influenced in some cases by the desire to avoid an overly high burden for poor users. Political considerations in water pricing often lead to a delay in the approval of tariff increases in the run-up to elections. Another criterion for tariff setting is that water tariffs should be easy to understand for consumers. This is not always the case for the more complex types of tariffs, such as increasing-block tariffs and tariffs that differentiate between different categories of users.
Tariff structures
There are numerous different tariff structures. Their prevalence differs between countries, as shown by international tariff surveys.
Types of tariff structures
Water and wastewater tariffs include at least one of the following components:
a volumetric tariff, where water metering is applied, and
a flat rate, where no water metering is applied.
Many utilities apply two-part tariffs where a volumetric tariff is combined with a fixed charge. The latter may include a minimum consumption or not. The level of the fixed charge often depends on the diameter of the connection.
Volumetric tariffs can
be proportional to consumption (linear tariffs),
increase with consumption (increasing-block tariffs, IBT), or
decrease with consumption (decreasing-block tariffs, DBT).
The tariff for the first block of an IBT is usually set at a very low level with the objective of protecting poor households, which are assumed to consume less water than non-poor households. The size of the first block can vary from 5 cubic meters to 50 cubic meters per household and month. In South Africa, the first block of consumption of 6 cubic meters per household and month is even provided for free (free basic water). Average monthly water consumption varies depending on household size and consumption habits between about 4 cubic meters for a single-person household in a temperate climate (e.g. in Germany) with no outdoor water use and about 50 cubic meters for a four-person household in a warm climate (e.g. in the Southern United States) including outdoor water use.
However, there is not always a positive correlation between the level of household income and water consumption.
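To make the mechanics of an increasing-block tariff concrete, the sketch below computes a monthly bill from a set of block boundaries and volumetric prices. The free first block of 6 cubic meters mirrors the South African example above, but the remaining block sizes, prices, and fixed charge are purely hypothetical and do not describe any particular utility.

```python
# Hypothetical increasing-block tariff: each successive block of monthly consumption
# is billed at a higher volumetric price. Blocks and prices are illustrative only.
def monthly_bill(consumption_m3, blocks, fixed_charge=0.0):
    """blocks: list of (block_size_m3, price_per_m3); the last block covers the rest."""
    bill = fixed_charge
    remaining = consumption_m3
    for size, price in blocks:
        billed = min(remaining, size)
        bill += billed * price
        remaining -= billed
        if remaining <= 0:
            break
    return bill

# First 6 m3 free (as in South Africa's free basic water), then two paid blocks.
tariff = [(6, 0.00), (9, 1.20), (float("inf"), 2.50)]
for use in (5, 15, 30):  # m3 per household and month
    print(f"{use:>2} m3 -> {monthly_bill(use, tariff, fixed_charge=2.0):.2f} (currency units)")
```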
Wastewater tariffs typically follow the same structure as water tariffs. They are typically measured based on the volume of water supplied, sometimes after subtracting an allowance made for estimated or actual outdoor use. In the case of industries, wastewater tariffs are sometimes differentiated based on the pollutant load of the wastewater. In some cases wastewater tariffs are a fixed percentage of water tariffs, but usually they are set separately. In addition to regular bills, many utilities levy a one-time connection fee both for water and for sewer connections.
International tariff surveys
The OECD conducted two surveys of residential water tariffs in 1999 and in 2007-08, using a reference consumption of 15 cubic meters per household and month. The 2007-08 survey covered more than 150 cities in all 30 OECD member countries. The survey does not claim to be representative. The OECD survey was complemented by a survey of the industry information service Global Water Intelligence (GWI) conducted in 2007-2008 in parallel with the second OECD survey. The 2008 GWI survey covered 184 utilities in OECD countries and 94 utilities in non-OECD countries. GWI has repeated its survey every year from 2009 to 2012, increasing the number of utilities surveyed to 310 in 2012. The data from the OECD/GWI surveys are widely quoted and, unlike the results of other global tariff surveys, have been indirectly made available to the public.
The database of the International Benchmarking Network (IB-Net) for Water and Sanitation Utilities includes tariff data from more than 190 countries and territories (tariffs.ib-net.org). Another tariff survey has been conducted by the International Water Association. In addition, surveys of tariffs for commercial and industrial customers in selected OECD countries have also regularly been conducted by the consulting firm NUS.
Prevalence of tariff structure types
Linear volumetric tariffs are the most common form of water tariffs in OECD countries, being used by 90 out of 184 utilities surveyed by Global Water Intelligence in 2007 and 2008, either with or without a fixed charge element. Some eastern European countries (Hungary, Poland and the Czech Republic) use pricing systems based solely on volumetric pricing, with no fixed charge element at all. Increasing-block tariff systems are used by 87 of the 184 utilities in OECD countries surveyed, such as e.g. in Spain. Since the late 1980s there has been a trend in OECD countries away from decreasing-block tariffs, which are apparently only still found in some cities of the United States. Where fixed charges exist as part of two-part tariffs, there is a shift toward the reduction or even abolition of large minimum free allowances in OECD countries. For example, Australia and South Korea have both moved in this direction during the 1990s. Flat rates are still reported in Canada, Mexico, New Zealand, Norway and the United Kingdom.
Concerning developing countries and transition economies, in the non-representative GWI sample of 94 utilities in 54 countries, 59 used linear volumetric tariffs and 31 used increasing-block tariffs. However, utilities from Sub-Saharan Africa where increasing-block tariffs are very common are under-represented in the GWI sample with only 6 utilities. On the other hand, utilities from transition economies where linear volumetric tariffs are common are over-represented with 28 utilities. The survey thus probably underestimates the prevalence of increasing-block tariffs in non-OECD countries.
Tariff levels
There are different valid methods to compare water tariff levels. According to one method, the highest water and wastewater tariff in the world is found in Bermuda, equivalent to US$7.45 per m3 in 2017 (consumption of 15 m3 per month). The lowest water tariffs in the world are found in Turkmenistan and the Cook Islands, where residential water is provided for free, followed by Uzbekistan with a water tariff equivalent to US$0.01 per m3 and no wastewater tariff.
Difficulties related to tariff comparisons
There are two basic ways to calculate water and wastewater tariff levels for the purpose of comparing tariff levels between cities: One way is to calculate an average tariff for the utility. This is done by dividing total tariff revenues by the total consumption billed across all usage categories and all levels of consumption. Another way is to determine a typical level of consumption and calculate the residential tariff that corresponds to this consumption. Depending on which of these two methods is used the resulting tariff can vary significantly for the same utility.
The comparison of water and wastewater tariffs across countries is further complicated by the choice of the appropriate exchange rate (nominal exchange rates for a given year or over the average of several years, or purchasing power parity exchange rates).
Furthermore, providing a global overview of water tariff levels is complicated by the large number of service providers (utilities). In urban areas in the United States alone, there are more than 4,000 water utilities. In Germany there are more than 1,200 utilities. Few countries in the world maintain national databases of the water and wastewater tariffs charged by utilities; those that do usually have a specialized regulatory agency for the water sector, such as England (OFWAT), Chile (SISS), Colombia (CRA) or Peru (SUNASS).
Tariff levels
Among the 310 cities in the GWI 2012 tariff survey the average combined water and wastewater tariff was US$1.98/m3 for the 15 m3/month "benchmark" customer used by the survey. Utilities in four of the surveyed cities provide residential water and wastewater services for free: Dublin and Cork (see Ireland), as well as Belfast and Ashgabat in Turkmenistan. The lowest residential water and wastewater tariffs were found in Saudi Arabia (equivalent to US$0.03/m3) and in Havana, Cuba as well as Damascus, Syria (equivalent to US$0.04/m3). Rates in the United States include $0.42/m3 in Clovis, CA and $1.60/m3 in Seattle. The highest water and wastewater tariffs were found in Aarhus, Denmark (US$9.21/m3), Essen, Germany (US$7.35/m3; not included in the OECD survey), Copenhagen, Denmark (US$7.09), and four Australian cities (Perth, Brisbane, Adelaide and Sydney) where the tariff for the benchmark user ranged from US$6.38/m3 - US$6.47/m3.
Concerning wastewater tariffs, in some countries such as Nigeria there is no wastewater tariff at all. In other countries - such as Mexico, Turkey, Belgium, Portugal and Korea - wastewater tariffs are low relative to water tariffs. Finally, in many OECD countries - such as Australia, Germany, Italy, the UK and the US - wastewater tariffs are now higher than water tariffs, reflecting increasing cost recovery rates and an increase in the prevalence of wastewater treatment.
Wastewater tariffs from 126 countries and territories can be found here https://tariffs.ib-net.org/sites/IBNET/VisualSearch/IndexCurrentUSD?Weight=0&ServiceId=3&Yearid=0
Many utilities charge higher tariffs for commercial and industrial customers than for residential users, in an effort to cross-subsidize residential customers.
Tariff adjustment processes
The process of adjusting water tariffs differs greatly from one location to another. In many large countries (China, France, Germany, India, Mexico, South Africa and the United States) the process of price adjustment takes place at the municipal level. Rules for price adjustments vary greatly. In the case of public service provision, tariffs are typically adjusted through a decision by the municipal council after a request by the municipal utility. Some countries, such as Germany, stipulate by law that all the financial costs of service provision must be recovered through tariff revenues. Other countries define cost recovery as a long-term objective, such as in Mexico. In the case of private service providers tariff adjustment rules are often laid out in concession or lease contracts, often providing for indexation to inflation.
In some developing countries, water tariffs are set at the national level. Tariff increases are often considered a politically sensitive issue and have to be decided by the Cabinet of Ministers or a National Pricing Commission. This is the case in many countries of the Middle East and North Africa (Egypt, Jordan, Lebanon, Morocco, Syria, Tunisia), as well as in many countries in Sub-Saharan Africa. In many countries, there are no objective criteria for tariff adjustments. Adjustments tend to be infrequent and often lag behind inflation so that cost recovery remains elusive.
Some countries have created regulatory agencies at the national level that review requests for tariff adjustments submitted by service providers. The earliest and best-known example is the regulatory agency OFWAT, which was established for England and Wales in 1989. Some developing countries followed suit. They include Chile (1990), Colombia (1994), Honduras (2004), Kenya, Mozambique (1998), Peru (1994), Portugal (1997), and Zambia (2000). The review process is typically based on transparent and objective criteria set by law, in an attempt to move decision-making at least partly out of the realm of politics. The track record of these agencies has been diverse, usually mirroring the political and administrative traditions of each country.
Changes in water use in response to tariff increases
The responsiveness of demand to a change in price is measured by the price elasticity of demand, which is defined as the percentage change in demand divided by the percentage change in price. The price elasticity of drinking water demand by urban households is typically low. In European countries it ranges between -0.1 and -0.25, i.e. the demand for water decreases by 0.1% to 0.25% for every 1% increase in tariffs. In Australia and the United States price elasticity is somewhat higher in the range of -0.1 and -0.4.
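As a numerical illustration of these elasticities, the sketch below applies the definition (percentage change in demand ≈ elasticity × percentage change in price) to a hypothetical tariff increase; the baseline consumption and the size of the price increase are illustrative assumptions, while the elasticity values are the ones cited above.

```python
# Illustrative use of the price elasticity of water demand: estimate the change in
# household consumption after a tariff increase. Baseline and increase are hypothetical.
def new_demand(baseline_m3, price_increase_pct, elasticity):
    """First-order estimate: % change in demand = elasticity * % change in price."""
    return baseline_m3 * (1.0 + elasticity * price_increase_pct / 100.0)

baseline = 15.0           # m3 per household and month (benchmark-style consumption)
increase = 20.0           # a hypothetical 20% tariff increase
for e in (-0.1, -0.25, -0.4):   # range of elasticities cited above
    print(f"elasticity {e:+.2f}: {new_demand(baseline, increase, e):.2f} m3/month")
```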
Affordability and social protection measures
In about half the OECD member countries, affordability of water charges for low-income households is or could become a significant issue, according to the OECD. In developing countries, the poor are often not connected to the network and often pay a higher share of their meager incomes for lower quantities of water supplied by water vendors through trucks. On the other hand, utility bills paid by those fortunate enough to be connected to the network are very low in some developing countries. Different countries have introduced a variety of approaches to protect the poor from high water tariffs.
Affordability
The affordability of water charges can be measured by macro- and micro-affordability. "Macro-affordability" indicators relate national average household water and wastewater bills to average net disposable household income. In OECD countries this share varies from 0.2% (Italy and Mexico) to 1.4% (Slovak Republic, Poland and Hungary). In the largest OECD countries the share is 0.3% in the United States and Japan, 0.7% in France and 0.9% in Germany. However, micro-affordability is quite different. It measures the share of bills in the income of the poor, defined in an OECD affordability study as the lowest decile of the population. This share varies between 1.1% (Sweden, Netherlands, Italy) and 5.3% in the Slovak Republic, 9.0% in Poland and 10.3% in Turkey. The OECD concludes that in half its member countries (15 out of 30), affordability of water charges for low-income households "is either a significant issue now or might become one in the future, if appropriate policy measures are not put in place." In developing countries the situation is more serious, not only because of lower incomes, but also because the poor are often not connected to the network. They usually pay a higher share of their meager incomes for lower quantities of water, at often lower quality, supplied by water vendors through trucks. On the other hand, utility bills paid by those fortunate enough to be connected to the network are often relatively low, especially in South Asia. Because of this situation the OECD does not recommend using uniform "thresholds" for the affordability of water and wastewater bills. These "thresholds" are often quoted in the range of 3-5% of household income.
Social protection measures
Social protection measures to ensure that piped water remains affordable can be broadly classified into income support measures and tariff-related measures. Income support measures address the individual customer’s ability to pay from the income side (through income assistance, water services vouchers, tariff rebates and discounts, bill re-phasing and easier payment plans, arrears forgiveness). An example of income assistance to poor users is the subsidy system applied in Chile. Tariff-related measures keep the size of water bills low for certain groups (e.g. refinement of increasing-block tariffs, tariff choice, tariff capping). Examples of increasing block tariffs with a price of zero in the first block are found in Flanders and South Africa. Another measure is the cross-subsidization using different tariffs for different neighborhoods, as practiced in Colombia. A similar approach has been used at the national level in Portugal. The Portuguese economic water regulator carried out an affordability study that found out that 10.5% of the population paid more than 3% of their income for water and wastewater services. As a result, the regulator showed flexibility concerning tariff increases and tariff solutions in municipalities where affordability was a particular issue.
See also
Water industry
Water metering
Water pricing
References
Further reading
Tom Jones, OECD Environment Directorate: Pricing water, OECD Observer No. 236, March 2003
U.S. Environmental Protection Agency: Water and Wastewater Pricing
France:Water pricing
Australian government, National Water Commission: Water pricing
Pricing
Water supply | Water tariff | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,669 | [
"Hydrology",
"Water supply",
"Environmental engineering"
] |
26,108,124 | https://en.wikipedia.org/wiki/Gravity%20feed | Gravity feed is the use of earth's gravity to move something (usually a liquid) from one place to another. It is a simple means of moving a liquid without the use of a pump. A common application is the supply of fuel to an internal combustion engine by placing the fuel tank above the engine, e.g. in motorcycles, lawn mowers, etc. A non-liquid application is the carton flow shelving system.
Ancient Roman aqueducts were gravity-fed, as water supply systems to remote villages in developing countries often are. In this case the flow of water to the village is provided by the hydraulic head, the vertical distance from the intake at the source to the outflow in the village, on which gravity acts, while it is opposed by friction in the pipe, which is determined primarily by the length and diameter of the pipe as well as by its age and the material of which it is made.
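The driving pressure available from a given hydraulic head follows from the hydrostatic relation p = ρgh; the sketch below evaluates it for a few illustrative head values. The head values are arbitrary examples, and pipe friction losses, which reduce the usable pressure at the outflow, are not modeled here.

```python
# Static pressure available from a gravity-fed supply: p = rho * g * h.
# Head values are illustrative; friction losses in the pipe are not modeled.
RHO_WATER = 1000.0   # density of water, kg/m3
G = 9.81             # gravitational acceleration, m/s2

def static_pressure_kpa(head_m):
    return RHO_WATER * G * head_m / 1000.0  # convert Pa to kPa

for head in (5.0, 20.0, 50.0):  # metres of elevation drop from intake to outflow
    print(f"head {head:>4.0f} m -> about {static_pressure_kpa(head):.0f} kPa")
```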
See also
Siphon
References
Gravity
Fluid dynamics | Gravity feed | [
"Chemistry",
"Engineering"
] | 198 | [
"Piping",
"Chemical engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
26,108,571 | https://en.wikipedia.org/wiki/Mathematical%20tile | Mathematical tiles are tiles which were used extensively as a building material in the southeastern counties of England—especially East Sussex and Kent—in the 18th and early 19th centuries. They were laid on the exterior of timber-framed buildings as an alternative to brickwork, which their appearance closely resembled. A distinctive black variety with a glazed surface was used on many buildings in Brighton (now part of the city of Brighton and Hove) from about 1760 onwards, and is considered a characteristic feature of the town's early architecture. Although the brick tax (1784–1850) was formerly thought to have encouraged use of mathematical tiles, in fact the tiles were subject to the same tax.
Name
The precise origin of the name "mathematical" is unknown. Local historian Norman Nail ascribes it to the "neat geometric pattern" produced by the tiles. John W. Cowan suggests it means "exactly regular", an older sense of "mathematical" which is now rare. Other attributive names include "brick", "geometrical", "mechanical", "rebate", "wall", or "weather" tiles. According to Christopher Hussey, "weather tile" is an earlier more general term, with the true "mathematical tile" distinguished by its flush setting. In 18th-century Oxford "feather edge tile" was used. While "mathematical tile" is now usual, Nail considered it a "pretentious" innovation, preferring "brick tile" as an older and more authentic name.
Usage and varieties
The tiles were laid in a partly overlapping pattern, akin to roof shingles. Their lower section—the part intended to be visible when the tiling was complete—was thicker; the upper section would slide under the overlapping tile above and would therefore be hidden. In the top corner was a hole for a nail to be inserted. They would then be hung on a lath of wood, and the lower sections would be moulded together with an infill of lime mortar to form a flat surface. The interlocking visible surfaces would then resemble either header bond or stretcher bond brickwork. Mathematical tiles had several advantages over brick: they were cheaper, easier to lay than bricks (skilled workmen were not needed), and were more resistant to the weathering effects of wind, rain and sea-spray, making them particularly useful at seaside locations such as Brighton.
Various colours of tile were produced: red, to resemble brick most closely; honey; cream; and black. Brighton, the resort most closely associated with mathematical tiles, has examples of each. Many houses on the seafront east of the Royal Pavilion and Old Steine, for example on Wentworth Street, have cream-coloured tiles, and honey-coloured tiles were used by Henry Holland in his design for the Marine Pavilion—forerunner of the Royal Pavilion. Holland often used mathematical tiles in his commissions, although he usually used blue Gault clay to make them.
A 1987 count of surviving mathematical tiles in English counties found the most in Kent (407 buildings), followed by Sussex (382), Wiltshire (50), Surrey (47), and Hampshire (37 including the Isle of Wight).
Black glazed tiles
The black glazed type is most closely associated with the Brighton's early architecture: such tiles had the extra advantage of reflecting light in a visually attractive way. Black mathematical tiles started to appear in the 1760s, soon after the town began to grow in earnest as its reputation as a health resort became known. When Patcham Place, a mid 16th-century house in nearby Patcham (now part of the city of Brighton and Hove), was rebuilt in 1764, it was clad entirely in the tiles. Royal Crescent, Brighton's first unified architectural set piece and first residential development built to face the sea, was faced in the same material when it was built between 1799 and 1807. When Pool Valley—the site where a winterbourne drained into the English Channel—was built over in the 1790s, one of the first buildings erected there was a mathematical tiled two-storey shop. Both the building (now known as 9 Pool Valley) and the façade survive. All three of these have Grade II* listed status, indicating that in the context of England's architecture they are "particularly important ... [and] of more than special interest". Other examples can be seen at Grand Parade—the east side of Old Steine, developed haphazardly with large houses in a variety of styles and materials in the early 19th century; York Place, a fashionable address when built in the 1800s; and Market Street in The Lanes, Brighton's ancient core of narrow streets.
Lewes, the county town of East Sussex, has many buildings clad with mathematical tiles in black and other colours. These include the Grade I-listed Jireh Chapel in the Cliffe area of the town which is clad in red mathematical tiles and dark grey slate. The timber-framed chapel was built in 1805. Elsewhere, a study in 2005 identified 22 18th-century timber-framed buildings (mostly townhouses) with mathematical tiles of various colours. Examples are the semi-detached pair at 199 and 200 High Street, the small terrace at 9–11 Market Street, 33 School Hill (an old building with a mid-18th century renewed façade), and the Quaker meeting house of 1784.
Examples from Brighton
See also
Buildings and architecture of Brighton and Hove
Notes
References
External links
Building materials
Brighton
Lewes | Mathematical tile | [
"Physics",
"Engineering"
] | 1,106 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
26,108,747 | https://en.wikipedia.org/wiki/Relieving%20tackle | Relieving tackle is tackle employing one or more lines attached to a vessel's steering mechanism, to assist or substitute for the whipstaff or ship's wheel in steering the craft. This enabled the helmsman to maintain control in heavy weather, when the rudder is under more stress and requires greater effort to handle, and also to steer the vessel were the helm damaged or destroyed.
In vessels with whipstaffs (long vertical poles extending above deck, acting as a lever to move the tiller below deck), relieving lines were attached to the tiller or directly to the whipstaff. When wheels were introduced, their greater mechanical advantage lessened the need for such assistance, but relieving tackle could still be used on the tiller, located on a deck underneath the wheel. Relieving tackle was also rigged on vessels going into battle, to assist in steering in case the helm was damaged or shot away. When a storm threatened, or battle impended, the tackle would be affixed to the tiller, and hands assigned to man them. Additional tackle was available to attach directly to the rudder as surety against loss of the tiller.
The term can also refer to lines or cables attached to a vessel that has been careened (laid over to one side for maintenance). The lines passed under the hull and were secured to the opposite side, to keep the vessel from overturning further, and to aid in righting the ship when the work was finished.
References
Ships
Simple machines | Relieving tackle | [
"Physics",
"Technology"
] | 309 | [
"Simple machines",
"Machines",
"Physical systems"
] |
26,109,619 | https://en.wikipedia.org/wiki/Thermosonic%20bonding | Thermosonic bonding is widely used to wire bond silicon integrated circuits into computers. Alexander Coucoulas was named "Father of Thermosonic Bonding" by George Harman, the world's foremost authority on wire bonding, where he referenced Coucoulas's leading edge publications in his book, Wire Bonding In Microelectronics. Owing to the well proven reliability of thermosonic bonds, it is extensively used to connect the central processing units (CPUs), which are encapsulated silicon integrated circuits that serve as the "brains" of today's computers.
Thermosonic bonding is widely used to electrically connect silicon integrated circuit microprocessor chips into computers as well as a myriad of other electronic devices that require wire bonding.
As a result of Coucoulas's introduction of thermosonic lead-wire bonding in the early 1960s, its applications, and scientific investigations of it by researchers throughout the world, have grown. The reliability of thermosonic bonding has made it the process of choice for connecting these crucially important electronic components. And since relatively low bonding parameters were shown to form reliable thermosonic bonds, the integrity of the fragile silicon integrated circuit chip (the central processing unit, or CPU) is assured throughout its intended lifetime of use as the "brains" of the computer.
Description
A thermosonic bond is formed using a set of parameters which include ultrasonic, thermal and mechanical (force) energies. A thermosonic bonding machine includes a magnetostrictive or piezoelectric-type transducer which is used to convert electrical energy into vibratory motion, an effect known as piezoelectricity. The vibratory motion travels along the coupler system, a portion of which is tapered to serve as the velocity transformer. The velocity transformer amplifies the oscillatory motion and delivers it to a heated bonding tip. It is akin to a friction bond, since the introduction of ultrasonic energy (via a bonding tool vertically attached to an ultrasonic transformer or horn) simultaneously delivers a force and vibratory or scrubbing motion to the interfacial contact points between a pre-heated deforming lead-wire and the metallized pads of a silicon integrated circuit. In addition to the delivery of thermal energy, the transmission of ultrasonic vibratory energy creates an ultrasonic softening effect by interacting at the atomic lattice level of the preheated lead wire. These two softening effects dramatically facilitate the lead wire deformation, forming the desirable contact area using relatively low temperatures and forces. As a result of the frictional action and ultrasonic softening induced in the preheated lead wire during the bonding cycle, thermosonic bonding can be used to reliably bond high melting point lead wires (such as gold and lower cost aluminum and copper) using relatively low bonding parameters. This ensures that the fragile and costly silicon integrated circuit chip is not exposed to potentially damaging conditions by having to use higher bonding parameters (ultrasonic energy, temperatures or mechanical forces) to deform the lead wire in forming the required contact area during the bonding process.
Background
Initially referred to as Hot Work Ultrasonic Bonding by Alexander Coucoulas, thermosonic bonding falls in the category of a solid state metallic bond which is formed by mating two metal surfaces well below their respective melting points. Introduced by Coucoulas, thermosonic bonding significantly improved upon the bond-reliability achieved by available commercial solid-state bonding machines by pre-heating the lead wire (and/or metallized silicon chip) prior to introducing an ultrasonic energy cycle.
Thermosonic bonding was found to bond a wide range of conductive metals such as aluminum and copper wires to tantalum and palladium thin films deposited on aluminum oxide and glass substrates, all of which simulated the metallized silicon chip. In addition to thermal softening of the lead wire, the subsequent delivery of ultrasonic energy produced further softening by interacting at the atomic lattice level of the heated wire (known as ultrasonic softening). These two independent softening mechanisms eliminated the incidence of cracking in the fragile and costly silicon chip that Coucoulas had observed when using earlier commercially available solid-state bonding machines. The improvement occurs because pre-heating and ultrasonic softening of the lead wire dramatically ease deformation so as to produce the required contact area using a relatively low set of bonding parameters. Depending on the temperature level and material properties of the lead wire, the onset of recrystallization or hot working of the deforming wire can occur while it is forming the required contact area. Recrystallization takes place in the strain-hardening region of the lead wire, where it aids the softening effect; if the wire were ultrasonically deformed at room temperature, it would undergo extensive strain hardening (cold working) and therefore tend to transmit damaging mechanical stresses to the silicon chip.
Uses
At present, the majority of connections to the silicon integrated circuit chip are made using thermosonic bonding because it employs lower bonding temperatures, forces and dwell times than thermocompression bonding, as well as lower vibratory energy levels and forces than ultrasonic bonding to form the required bond area. Therefore the use of thermosonic bonding eliminates damaging the relatively fragile silicon integrated circuit chip during the bonding cycle. The proven reliability of thermosonic bonding has made it the process of choice, since such potential failure modes could be costly whether they occur during the manufacturing stage or detected later, during an operational field-failure of a chip which had been connected inside a computer or a myriad of other microelectronic devices.
Thermosonic bonding is also used in the flip chip process which is an alternate method of electrically connecting silicon integrated circuits.
Josephson effect and superconducting interference (DC SQUID) devices use the thermosonic bonding process as well. In this case, other bonding methods would degrade or even destroy YBa2Cu3O7 microstructures, such as microbridges, Josephson junctions and superconducting interference devices (DC SQUID).
Thermosonic bonding has also been shown to improve device performance when used to electrically connect light-emitting diodes.
See also
Semiconductor device fabrication
Transistor
LED lamps
References
Semiconductor device fabrication
Packaging (microfabrication) | Thermosonic bonding | [
"Materials_science"
] | 1,305 | [
"Semiconductor device fabrication",
"Packaging (microfabrication)",
"Microtechnology"
] |
41,585,022 | https://en.wikipedia.org/wiki/Radiofluorination | Radiofluorination is the process by which a radioactive isotope of fluorine is attached to a molecule; it is preferably performed by nucleophilic substitution using nitro groups or halogens as leaving groups. Fluorine-18 is the most common isotope used for this procedure, owing to its 97% positron emission and relatively long 109.8 min half-life. The half-life allows enough time for the isotope to be incorporated into the molecule and used without causing exceedingly harmful effects. This process has many applications, especially in positron emission tomography (PET), as the isotope's low positron energy yields high-resolution PET images.
History
The first notable radiofluorination synthesis was performed in 1976 for the synthesis of Fluorine-18 labeled fludeoxyglucose. In the 1980s this molecule was discovered to accumulate in tumors of cancer patients. Since this time, this molecule has become a standard in PET imaging of cancer, and currently the only FDA-approved substance to do so. In recent years, research is being performed to find alternatives to the fludeoxyglucose molecule. These new molecules are bifunctional labeling agents that can attach to proteins or peptides to label not only cancer, but also amyloid plaques and inflammatory processes.
Procedure
Due to the ongoing research involving radiofluorinated molecules and their various uses, the demand for suitable syntheses has increased over the years. In order for synthetic methods to be considered viable, the process must be rapid and efficient as well as compatible with the forms of 18F which are available. In many cases, the synthesis must also be capable of regio- and stereospecificity.
Typically, radiofluorinated products are synthesized using nucleophilic or electrophilic substitution processes. One classical method for radiofluorination is the Balz-Schiemann reaction, or a modified Balz-Schiemann reaction with [18F] F−. Electrophilic substitution reactions typically make use of [18F] F2 as a precursor, which can then be added to an array of molecules such as alkenes, aromatic rings, and carbanions [21]. However, methods utilizing [18F] F2 are at a disadvantage due to the loss of 50% of the input activity in the form of [18F] F−. To facilitate these procedures the reaction may also be carried out within a microfluidic chamber.
Uses
One of the most popular uses of radiofluorination is its application in PET scans. Positron emission tomography (PET) is a widely used imaging technique in the field of nuclear medicine. With applications in research and in diagnosis, a PET scan can be used to image tumors, diagnose brain disease, and monitor brain or heart function [8,9,12]. These images are created with the aid of radiotracers that emit positrons, which annihilate with electrons to generate two 511 keV photons that are then detected and used to reconstruct images with the same software utilized in X-ray CT units. The two gamma rays are emitted nearly 180 degrees from each other, and their detection allows the source to be pinpointed, thus creating an image. One of the most popular isotopes used as a positron-emitting radiotracer is fluorine-18. This isotope is particularly advantageous due to its convenient half-life of approximately 109.8 min, its decay being 97% positron emission, its ease of production, and its low positron energy (0.64 MeV). Therefore, the radiofluorination procedure incorporates the radioactive isotope of choice in order to create the images.
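As a rough illustration of how the 109.8 min half-life constrains this workflow, the short sketch below estimates the fraction of fluorine-18 activity remaining after a delay between the end of synthesis and imaging. The delay times are hypothetical examples, not values from any particular protocol.

```python
F18_HALF_LIFE_MIN = 109.8  # fluorine-18 half-life in minutes, as quoted above

def remaining_fraction(minutes_elapsed):
    """Fraction of the original F-18 activity left after a given delay,
    using simple exponential decay: A(t) = A0 * (1/2) ** (t / t_half)."""
    return 0.5 ** (minutes_elapsed / F18_HALF_LIFE_MIN)

# Hypothetical delays covering synthesis, quality control, transport and uptake
for delay in (30, 60, 110, 220):
    print(f"{delay:3d} min -> {remaining_fraction(delay):.2f} of the initial activity")
```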
Another application of radiofluorination chemistry lies in the field of biofuels. Recent interest has been given to the exploration of lignocellulosic material as a biofuel source. Given that it is the most plentiful renewable carbon source in the biosphere, it is a natural choice for this purpose. The material consists of three components: hemicellulose, cellulose, and lignin. It is the last of these three, lignin, that presents the greatest obstacle to the efficient use of such material as a feasible biofuel source. The recalcitrant chemical nature of the lignin molecule currently requires an extensive and expensive degradation process before bioethanol can be produced. Current research is being conducted to find more economical ways to break down this lignin barrier. This research will explore the use of radiofluorination with the fluorine-18 isotope to search for places in nature where lignin is being degraded. The radioactive fluorine will be attached to lignin-degradation products in order to search for enzymes in nature that break down lignin. This will help to make the process more efficient for use in biofuel production.
Applications with radiopharmaceuticals
Fluorine-18 is typically produced by proton bombardment of oxygen-18 enriched water in a particle accelerator. Due to the relatively short half-life, the isotope must be quickly incorporated into a tracer molecule designed for the desired target. These radiotracers generally fall into two main categories: labeled molecules normally used in the body, such as water or glucose, and labeled molecules that react with or bind to receptors within the body. One important application in the latter class is the attachment of the label to biologically active proteins and peptides, including antibodies and antibody fragments. This class of radiotracers is of particular interest due to its role in imaging the regulation of cellular growth and function. Consequently, radiolabeling these biologically active proteins and peptides with fluorine-18 to image targets such as tumors and inflammatory processes is important in nuclear medicine.
However, due to the chemically sensitive nature of proteins, the synthesis of radiofluorine-labeled proteins and peptides presents some formidable challenges. The harsh conditions needed to introduce the fluorine-18 label into the biomacromolecule can easily damage it, hindering direct radiolabeling reactions. In order to overcome these obstacles, protein or peptide labeling can be performed through a prosthetic group or bifunctional labeling agent to which the radiofluorine has been attached. This molecule can then be conjugated to the protein or peptide under milder conditions.
The three main categories of prosthetic groups are carboxyl-reactive, amino-reactive, and thiol-reactive. Of these three, the carboxyl-reactive group is the least utilized, and the amino-reactive is the most utilized. The thiol-reactive prosthetic groups are the newest class of the three. The choice of method by which the protein is labeled depends on its structure. Thiol-reactive molecules can be used in cases where the amino-reactive prosthetic groups would not work. A variety of such prosthetic groups are currently in use for protein and peptide labeling.
References
Radiopharmaceuticals
Halogenation reactions | Radiofluorination | [
"Chemistry"
] | 1,478 | [
"Chemicals in medicine",
"Radiopharmaceuticals",
"Medicinal radiochemistry"
] |
41,585,434 | https://en.wikipedia.org/wiki/Anti-miRNA%20oligonucleotides | Anti-miRNA oligonucleotides (also known as AMOs) have many uses in cellular mechanics. These synthetically designed molecules are used to neutralize microRNA (miRNA) function in cells in order to obtain desired responses. miRNAs are short sequences (≈22 nucleotides) complementary to mRNA that are involved in the cleavage of RNA or the suppression of translation. By controlling the miRNAs that regulate mRNAs in cells, AMOs can serve as a further layer of regulation as well as a therapeutic treatment for certain cellular disorders. This regulation can occur through a steric-blocking mechanism as well as through hybridization to the miRNA. Within the body, these interactions between miRNAs and AMOs can be used therapeutically in disorders in which overexpression or underexpression occurs, or in which aberrations in miRNAs lead to coding issues. Some of the miRNA-linked disorders encountered in humans include cancers, muscular diseases, autoimmune disorders, and viral infections. In order to determine the functionality of a given AMO, the expression (transcript concentration) measured under AMO/miRNA binding must be compared against the expression measured for the miRNA alone. The direct detection of differing levels of genetic expression allows the relationship between AMOs and miRNAs to be shown; this can be detected through luciferase activity (bioluminescence in response to targeted enzymatic activity). Understanding the miRNA sequences involved in these diseases allows anti-miRNA oligonucleotides to be used to disrupt pathways that lead to the underexpression or overexpression of proteins in cells that cause the symptoms of these diseases.
Synthesis
During anti-miRNA oligonucleotide design, necessary modifications to optimize binding affinity, improve nuclease resistance, and in vivo delivery must be considered. There have been several generations of designs with attempts to develop AMOs with high binding affinity as well as high specificity. The first generation utilized 2’-O-Methyl RNA nucleotides with phosphorothioate internucleotide linkages positioned at both ends to prevent exonuclease attack. A recent study discovered a compound, N,N-diethyl-4-(4-nitronaphthalen-1-ylazo)-phenylamine (ZEN), that improved binding affinity and blocked exonuclease degradation. This method was combined with the first generation design to create a new generation ZEN-AMO with an improved effectiveness.
Various components of AMOs can be manipulated to affect the binding affinity and potency of the AMO. The 2’-sugar of the AMO can be modified by substitution with fluorine or various methyl groups, almost all of which increase binding affinity. However, some of these modified 2’-sugar AMOs led to negative effects on cell growth. Modifying the 5'-3' phosphodiester backbone linkage to a phosphorothioate (P-S) backbone linkage was also shown to have an effect on target affinity. The P-S modification was shown to decrease the Tm of the oligonucleotide, which leads to a lower target affinity. Final considerations for AMOs are mismatch specificity and length restrictions. Because miRNAs in the same family share "seed" sequences and differ by only a couple of additional nucleotides, one AMO can potentially target multiple miRNA sequences. However, studies have suggested that this is difficult due to the loss of activity with single-nucleotide mismatches. More than three mismatches result in complete loss of activity. Changes in the length of AMOs were tolerated far better, with changes of one or two nucleotides resulting in little loss of activity and three or more in total loss of activity. Truncating a single nucleotide from the 3’ end resulted in a slight improvement of AMO activity.
Delivery and Detection
Delivery of AMOs requires in vitro transfection into target cells. Presently there are difficulties with conventional methods of transfection that result in low delivery efficiency. In order to increase the effectiveness of AMO delivery, a 2011 paper proposed using functionalized gold nanoparticles. The gold nanoparticles increase delivery efficiency by conjugating with a cargo DNA that anneals to the AMO using complementarity. The cargo DNA is attached to the surface of the nanoparticle. Because many variations of DNA and RNA are unstable in in vivo conditions, carriers, such as nanoparticles, are necessary to protect from degeneration by nucleases. These nanoparticles are useful in order to facilitate uptake into the cell, and transfer the genetic information to the nucleus. Another in vivo method for delivery supported by results in mice is the injection of AMOs intravenously. Tail vein injection of AMOs in the mice were shown to be effective. In order for this system to be useful, the AMOs were conjugated with cholesterol for increased uptake into the cell through the membrane and were chemically modified by 2′-OMe phosphoramidites to prevent degradation of the AMOs.
To detect the presence and functionality of AMOs, researchers can observe the relative activity of the target enzyme or protein of the miRNA. This method was used in a study of single AMOs targeting multiple miRNAs, in which relative luciferase activity in HEK293 cells was monitored. To determine relative luciferase activity levels, a control with no miRNA present was included. The presence of functional AMOs alongside the inhibiting miRNA would result in an increase in luciferase activity, because the miRNA that suppresses the enzyme's activity is inactivated.
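The readout described above is essentially a normalization of luminescence against a control, and the sketch below shows one way such a calculation might be organized. The sample names and numbers are hypothetical, and real analyses typically also normalize against a co-transfected control reporter.

```python
# Hypothetical raw luminescence readings from a reporter whose transcript
# carries the miRNA target site (arbitrary units, illustrative only)
readings = {
    "no_miRNA_control": 1000.0,  # reporter alone
    "miRNA_only":        250.0,  # miRNA suppresses the reporter
    "miRNA_plus_AMO":    820.0,  # AMO inactivates the miRNA
}

control = readings["no_miRNA_control"]
for condition, value in readings.items():
    relative = value / control
    print(f"{condition:18s} relative luciferase activity = {relative:.2f}")

# A relative activity for 'miRNA_plus_AMO' approaching 1.0 (here 0.82) indicates
# that the AMO has relieved the miRNA-mediated suppression of the reporter.
```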
Disorders/Therapeutics
Many human disorders have been found to involve alterations in expression or aberrations of miRNAs. miRNAs have been found to take part in many key regulatory pathways suspected of being related to cancer, viral genes, and metabolic pathways, as well as muscular disorders (particularly cardiovascular ones). By targeting cells affected by improper miRNA expression, the normal balance of expression can be restored using AMOs. By counteracting overexpression and underexpression with AMOs, some of these genetic disorders can potentially be bypassed or at least have their symptoms minimized. This is done by hybridization of the AMOs to miRNA sequences that are involved in the expression of specific genes. The challenge is finding a way for the AMOs to perform their function at concentrations that are sufficient to be effective, while at the same time being low enough to avoid toxicity from the vector and the AMOs themselves.
Cancer
All cancers involve mutations in the genome that cause abnormal cell growth. Determining factors that contribute to or regulate this excessive growth can potentially lead to preventative or therapeutic treatments of cancer. For example, in chronic lymphocytic leukemia, a region of miRNAs (miR-15 and miR-16) is missing from the genome when this cancer is expressed, while in other cancers, such as Burkitt's lymphoma, the expression of miRNA sequences is amplified. This suggests that many miRNAs have regulatory roles involved in cancer. If those were better regulated, potentially through AMOs, perhaps the onset and progression of cancer could be controlled.
Following a study of 540 tumor samples of various cancer types, it was discovered that 15 miRNAs were upregulated and 12 were downregulated. From the study, it was concluded that these miRNA sequences had an effect on cell growth and apoptosis. AMOs enter the equation as a regulatory factor for the miRNAs involved in cancer. When bound to a single affected miRNA site, the effect appears to be minimal. However, by creating sequences of anti-miRNA oligonucleotides to bind all of these implicated miRNAs, increased cell death was observed within the cancer cells. One study involving antagomirs, a different variation of anti-miRNA oligonucleotides, focused on reducing induced tumors in mice. After 2 weeks of treatment, tumor growth was inhibited and regression was shown in 30% of cases. This illustrates that AMOs can be used to inhibit cancers through miRNAs. The inhibition results from direct silencing of the miRNAs, which in turn bind the mRNA sequences that encode proteins in cancer cells, as well as from improved control of the cellular processes of the cancer.
Muscular Development
In the development of tissues in embryos, miRNAs can have a role in the upregulation or downregulation of specific muscular development. miRNA-1 plays a role in muscle differentiation between cardiac and skeletal muscle precursor cells. In development, if levels of precursor cells are not properly regulated, the result can be muscular hypoplasia. By creating AMOs for the known miRNAs involved in muscle generation, it is possible to track a miRNA's specific mechanisms throughout the process of muscle generation by essentially using the created AMO to turn off the miRNA. This halts the production of myogenin (the transcription factor involved in myogenesis). By then measuring the changes in myogenin compared with standard, non-inhibited myogenesis, a miRNA's function can be determined as either upregulating or downregulating the synthesis of myogenin. By further understanding how certain miRNA sequences control the development of muscle, AMOs can be utilized to promote normal production levels of myogenin in organisms found to carry genetic errors involving myogenesis.
AMOs can also be used to prevent apoptosis, or organ hypoplasia, of the heart in the presence of high concentrations of hydrogen peroxide. Hydrogen peroxide can induce apoptosis through oxidative stress, because the oxidative stress it causes increases the activity of miRNA-1. This increased miRNA-1 activity represses the activity of Bcl-2, inducing apoptosis. However, by creating and introducing an AMO for miRNA-1 in an environment of oxidative stress, the response to hydrogen peroxide is reduced, creating a resistance to oxidative stress in the heart. Because of this, the amount of hydrogen peroxide-induced apoptosis of cardiomyocytes is reduced in heart disease. Due to the reduction of cardiomyocyte death under conditions of oxidative stress by the anti-miRNA-1 oligonucleotide, miRNA regulation can allow a deeper understanding of the development of the heart, as well as of the survival of heart muscle in low-oxygen conditions.
Autoimmune Response and Disorders
Autoimmune disorders occur when the body mounts an immune response against itself, causing an inflammatory reaction within the body. Because autoimmune disorders involve abnormalities in immune system cells (i.e., B cells and T cells), it can be inferred that miRNAs are strongly expressed in regions of the body involved in the maturation of these T and B lymphocytes, such as the spleen and lymph nodes. Abnormalities in the miRNAs, or in their function in post-transcriptional processing, can result in an increased sensitivity of the lymphocytes. Due to this increased sensitivity, the lymphocytes can target antigens that they could not previously bind, which can allow them to attack the body's own cells if these antigens happen to occur naturally in cells in the body.
One instance of this is rheumatoid arthritis, in which the body breaks down its own joints. The breakdown is caused by the overexpression of specific miRNA clusters. These clusters cause an increase in synovial fibroblasts. Due to this increased fibroblast population, the concentrations of certain proteases rise, which causes the breakdown of cartilage in the joints. By targeting the miRNA clusters responsible for expression of the disease, inflammation caused by this disorder can be reduced when AMOs are delivered to the afflicted areas.
Systemic lupus erythematosus causes long term organ damage to the body. It propagates due to environmental and genetic factors. By targeting microRNA (miR-184, miR-196a, miR-198, and miR-21) that are down-regulated in SLE with AMOs in the affected organs, the normal expression of these genes can be restored.
Viral Studies
It is believed that cellular miRNAs inhibit viral gene expression. In a study of HIV-1, anti-miRNA inhibitors were used to deactivate two miRNAs that inhibit viral gene expression, hsa-miR-29a and 29b. It was shown that viral gene expression increased after the introduction of anti-miRNAs targeting hsa-miR-29a and 29b. This demonstrated that miRNA inhibitors were able to directly target and reverse the inhibitory effect of hsa-miR-29a and 29b on the HIV-1 virus. By creating an AMO, certain genomic sequences of HIV could be studied in more depth. A further understanding of the way the genomes of certain viruses work can allow scientists to create preventative measures against these viruses.
The mechanism for anti-miRNA regulation in the case of the Epstein-Barr virus (EBV) differs slightly than other viral cases such as HIV-1. EBV is a herpesvirus related to various cancers that has the ability to express miRNAs, unlike many other viruses that affect humans. Unlike other studies that utilize anti-miRNAs as a knockdown tool to demonstrate the effects of miRNAs, researchers of EBV used them to inhibit the miRNAs produced by the virus. MiR-BART5, a miRNA of EBV, regulates the protein: p53 Up-regulated Modulator of Apoptosis (PUMA). When the viral mir-BART5 was depleted using its anti-miRNA, anti-miR-BART5, cell apoptosis was triggered and resulted in disease control, killing the cells that are identified as infected.
Another peculiar case of host-viral interaction mediated by a microRNA occurs with the hepatitis C virus (HCV). HCV, which causes acute infection of the liver, often going undetected and progressing to chronic, uses the human miRNA miR-122 to recruit Argonaute2 proteins to the uncapped 5' end of its RNA genome, thereby masking it from the cellular antiviral response and stabilizing it. This interaction has led to the development of AMOs that target miR-122 in an effort to clear the virus from the hepatic cells. The most advanced of these compounds is miravirsen, a locked nucleic acid-DNA mixmer, currently undergoing clinical trials.
An interesting aspect of miravirsen is its reported ability to inhibit not just the mature miR-122, but to also invade the stem-loop structures in the microRNA precursor molecules, disrupting the biogenesis of miR-122 in biochemical assays and cell culture.
References
Nucleic acids | Anti-miRNA oligonucleotides | [
"Chemistry"
] | 3,079 | [
"Biomolecules by chemical classification",
"Nucleic acids"
] |
41,588,297 | https://en.wikipedia.org/wiki/Mulling%20%28spectroscopy%29 | Mulling is the process of grinding a sample into a fine powder with a mortar and pestle and dispersing it in a paraffin oil (or another mulling agent) for infrared spectroscopy.
Sample preparation
Using a nonporous ceramic mortar and pestle, a small quantity of the solid sample is ground until it is exceedingly fine and has a glassy appearance. A drop of the mulling agent is added to the ground solid in the mortar. The mixture is ground further until a uniform paste with the consistency of toothpaste is obtained. The resulting paste is transferred to a salt (sodium chloride) plate with a small flat spatula, a second plate is placed on top, and the two plates are gently pressed together, leaving the sample ready for analysis.
Mulling agents
There are a variety of mineral oils used as mulling agents, their differences being the absorption bands in the infrared spectra.
The most common mulling agent is Nujol, which is essentially a liquid paraffin; when it is used for mulling, strong carbon–hydrogen bond absorptions appear in the infrared spectrum. Any carbon–hydrogen bond absorptions that may be present in the sample itself are therefore masked by those from the Nujol mulling agent.
Fluorolube is also commonly used; it is essentially a fluorocarbon-based liquid and exhibits strong carbon–fluorine bond absorptions from 1300 cm−1 down to 400 cm−1 in the mid-infrared spectrum. The useful range for observation of a sample in a mid-infrared spectrum when using Fluorolube as the mulling agent is therefore 4000 cm−1 to 1300 cm−1.
Therefore, if possible, it is preferable to run a sample as both a Nujol mull and a Fluorolube mull. This allows for all of the spectral features of the sample to be seen in an infrared spectrum, because the regions masked by each specific mulling agent are unaffected in the other spectrum.
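To make the point about complementary spectral windows concrete, the sketch below flags which mulling agent leaves a given band of interest unobscured. The Nujol band positions are typical literature values assumed for illustration, and the Fluorolube window follows the 4000 cm−1 to 1300 cm−1 range quoted above.

```python
# Approximate regions (cm^-1) obscured by each mulling agent's own absorptions.
# Nujol values are typical C-H stretching/bending regions assumed for illustration;
# Fluorolube is treated as obscuring everything below ~1300 cm^-1.
OBSCURED = {
    "Nujol": [(3000, 2840), (1480, 1340)],  # C-H stretch and bend regions
    "Fluorolube": [(1300, 400)],            # strong C-F absorptions
}

def usable_agents(wavenumber_cm1):
    """Return the mulling agents whose own bands do not mask a band of interest."""
    ok = []
    for agent, regions in OBSCURED.items():
        if not any(low <= wavenumber_cm1 <= high for high, low in regions):
            ok.append(agent)
    return ok

for band in (2900, 1700, 1450, 700):
    print(band, "cm^-1 ->", usable_agents(band))
```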
References
Infrared spectroscopy | Mulling (spectroscopy) | [
"Physics",
"Chemistry"
] | 382 | [
"Infrared spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
5,360,730 | https://en.wikipedia.org/wiki/Titanium%28III%29%20oxide | Titanium(III) oxide is the inorganic compound with the formula Ti2O3. A black semiconducting solid, it is prepared by reducing titanium dioxide with titanium metal at 1600 °C.
Ti2O3 adopts the Al2O3 (corundum) structure. It is reactive with oxidising agents. At around 200 °C, there is a transition from semiconducting to metallic conducting behaviour. Titanium(III) oxide occurs naturally as the extremely rare mineral tistarite.
Other titanium(III) oxides include LiTi2O4 and LiTiO2.
References
Titanium(III) compounds
Sesquioxides
Transition metal oxides
Semiconductor materials | Titanium(III) oxide | [
"Chemistry"
] | 142 | [
"Semiconductor materials",
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,365,395 | https://en.wikipedia.org/wiki/Incentive%20compatibility | In game theory and economics, a mechanism is called incentive-compatible (IC) if every participant can achieve their own best outcome by reporting their true preferences. For example, there is incentive compatibility if high-risk clients are better off in identifying themselves as high-risk to insurance firms, who only sell discounted insurance to high-risk clients. Likewise, they would be worse off if they pretend to be low-risk. Low-risk clients who pretend to be high-risk would also be worse off. The concept is attributed to the Russian-born American economist Leonid Hurwicz.
Typology
There are several different degrees of incentive-compatibility:
The stronger degree is dominant-strategy incentive-compatibility (DSIC). It means that truth-telling is a weakly-dominant strategy, i.e. you fare best or at least not worse by being truthful, regardless of what the others do. In a DSIC mechanism, strategic considerations cannot help any agent achieve better outcomes than the truth; such mechanisms are called strategyproof, truthful, or straightforward.
A weaker degree is Bayesian-Nash incentive-compatibility (BNIC). It means there is a Bayesian Nash equilibrium in which all participants reveal their true preferences. In other words, if all other players act truthfully, then it is best to be truthful.
Every DSIC mechanism is also BNIC, but a BNIC mechanism may exist even if no DSIC mechanism exists.
Typical examples of DSIC mechanisms are second-price auctions and a simple majority vote between two choices. Typical examples of non-DSIC mechanisms are ranked voting with three or more alternatives (by the Gibbard–Satterthwaite theorem) or first-price auctions.
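To make the dominant-strategy property concrete, the sketch below brute-force checks, on a small grid of valuations and bids, that truthful bidding is weakly dominant in a sealed-bid second-price auction. The grid, the two-rival setup, and the tie-breaking rule are simplifying assumptions made only for this illustration.

```python
import itertools

def second_price_utility(my_bid, my_value, other_bids):
    """Utility of one bidder in a sealed-bid second-price auction.
    Ties are broken in favour of this bidder for simplicity."""
    highest_other = max(other_bids)
    if my_bid >= highest_other:          # this bidder wins
        return my_value - highest_other  # and pays the second-highest bid
    return 0.0                           # loses and pays nothing

def truth_is_dominant(values, bids):
    """Brute-force check that bidding one's true value is weakly dominant
    for every valuation and every profile of competing bids."""
    for my_value in values:
        for other_bids in itertools.product(bids, repeat=2):  # two rival bidders
            truthful = second_price_utility(my_value, my_value, other_bids)
            for my_bid in bids:
                if second_price_utility(my_bid, my_value, other_bids) > truthful:
                    return False
    return True

grid = [0, 1, 2, 3, 4, 5]              # coarse grid of possible values and bids
print(truth_is_dominant(grid, grid))   # True: truth-telling is weakly dominant here
```

An analogous check in which the winner pays their own bid (a first-price auction) fails on the same grid, consistent with the statement above that first-price auctions are not DSIC.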
In randomized mechanisms
A randomized mechanism is a probability-distribution on deterministic mechanisms. There are two ways to define incentive-compatibility of randomized mechanisms:
The stronger definition is: a randomized mechanism is universally-incentive-compatible if every mechanism selected with positive probability is incentive-compatible (i.e. if truth-telling gives the agent an optimal value regardless of the coin-tosses of the mechanism).
The weaker definition is: a randomized mechanism is incentive-compatible-in-expectation if the game induced by expectation is incentive-compatible (i.e. if truth-telling gives the agent an optimal expected value).
Revelation principles
The revelation principle comes in two variants corresponding to the two flavors of incentive-compatibility:
The dominant-strategy revelation-principle says that every social-choice function that can be implemented in dominant-strategies can be implemented by a DSIC mechanism.
The Bayesian–Nash revelation-principle says that every social-choice function that can be implemented in Bayesian–Nash equilibrium (Bayesian game, i.e. game of incomplete information) can be implemented by a BNIC mechanism.
See also
Implementability (mechanism design)
Lindahl tax
Monotonicity (mechanism design)
Preference revelation
Strategyproofness
References
Mechanism design | Incentive compatibility | [
"Mathematics"
] | 608 | [
"Game theory",
"Mechanism design"
] |
5,365,936 | https://en.wikipedia.org/wiki/Unimpaired%20runoff | Unimpaired runoff, also known as full natural flow, is a hydrology term for the natural runoff of a watershed or waterbody that would have occurred under current land use but without dams or diversions. Flow readings from river gauges are influenced by upstream diversions, impoundments, and many other alterations of the land that drains into a watershed or of the river channel itself. Engineers estimate unimpaired or natural runoff by estimating all of the effects of human "impairments" to flow and then removing these effects. Since these calculations involve many assumptions, they tend to be more accurate for smaller watersheds or when expressed as longer-period averages.
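As a minimal sketch of the back-calculation described above, the function below adds the estimated human influences back onto a gauged flow for a single time step. The terms included and the example numbers are illustrative assumptions only; real full-natural-flow computations involve many more adjustments and data sources.

```python
def unimpaired_runoff(gauged_flow, diversions, storage_change,
                      reservoir_evaporation, imports=0.0):
    """Back-calculate unimpaired (full natural) flow for one time step.

    All terms are volumes (or mean flows) over the same period:
      gauged_flow           - flow observed at the river gauge
      diversions            - water removed from the river upstream of the gauge
      storage_change        - increase in upstream reservoir storage (negative if drawn down)
      reservoir_evaporation - evaporation from upstream reservoir surfaces
      imports               - water brought in from outside the watershed
    """
    return (gauged_flow + diversions + storage_change
            + reservoir_evaporation - imports)

# Hypothetical monthly values in thousand acre-feet (illustrative only)
print(unimpaired_runoff(gauged_flow=120.0, diversions=35.0, storage_change=20.0,
                        reservoir_evaporation=3.0, imports=10.0))  # -> 168.0
```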
Unimpaired runoff is important for legal and scientific reasons. Since human development continues to alter watersheds, unimpaired runoff provides a fixed frame of reference for flow rates. It also matters because long-term hydrologic records are often used to develop relationships between precipitation, runoff, and water supply; by removing human-caused changes in the timing between precipitation and runoff, these long-term relationships become more useful.
Calculating unimpaired runoff is also extremely important in identifying long-term climate change impacts. By subtracting the known water management influences on a long-term hydrologic record, the records may still show signs of a long-term change. These long-term signals may include long-term climate and land use change. It is still possible that the long-term climate signal is caused by larger scale anthropogenic sources.
Examples of Use
Unimpaired runoff calculations are used extensively in western U.S. states such as California for water resources management applications, particularly in the calculation of water year classifications and river indexes.
The following is a list of some examples of use of unimpaired runoff calculations:
Sacramento Valley 40-30-30 Index (California)
References
California Cooperative Snow Survey
NOAA Colorado River Water Supply Outlook
Hydrology
Hydraulic engineering | Unimpaired runoff | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 406 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Hydraulic engineering"
] |
35,971,220 | https://en.wikipedia.org/wiki/C9H11NO | The molecular formula C9H11NO (molar mass: 149.19 g/mol, exact mass: 149.0841 u) may refer to:
4'-Aminopropiophenone
Cathinone
para-Dimethylaminobenzaldehyde
Molecular formulas | C9H11NO | [
"Physics",
"Chemistry"
] | 73 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
35,971,482 | https://en.wikipedia.org/wiki/Day%20length%20fluctuations | The length of the day (LOD), which has increased over the long term of Earth's history due to tidal effects, is also subject to fluctuations on a shorter scale of time. Exact measurements of time by atomic clocks and satellite laser ranging have revealed that the LOD is subject to a number of different changes. These subtle variations have periods that range from a few weeks to a few years. They are attributed to interactions between the dynamic atmosphere and Earth itself. The International Earth Rotation and Reference Systems Service monitors the changes.
In the absence of external torques, the total angular momentum of Earth as a whole system must be constant. Internal torques are due to relative movements and mass redistribution of Earth's core, mantle, crust, oceans, atmosphere, and cryosphere. In order to keep the total angular momentum constant, a change of the angular momentum in one region must necessarily be balanced by angular momentum changes in the other regions.
Crustal movements (such as continental drift) or polar cap melting are slow secular (non-periodic) events. The characteristic coupling time between core and mantle has been estimated to be on the order of ten years, and the so-called 'decade fluctuations' of Earth's rotation rate are thought to result from fluctuations within the core, transferred to the mantle. The length of day (LOD) varies significantly even for time scales from a few years down to weeks, and the observed fluctuations in the LOD - after eliminating the effects of external torques - are a direct consequence of the action of internal torques. These short term fluctuations are very probably generated by the interaction between the solid Earth and the atmosphere.
The length of day of other planets also varies, particularly of the planet Venus, which has such a dynamic and strong atmosphere that its length of day fluctuates by up to 20 minutes.
Observations
Any change of the axial component of the atmospheric angular momentum (AAM) must be accompanied by a corresponding change of the angular momentum of Earth's crust and mantle (due to the law of conservation of angular momentum). Because the moment of inertia of the system mantle-crust is only slightly influenced by atmospheric pressure loading, this mainly requires a change in the angular velocity of the solid Earth; i.e., a change of LOD. The LOD can presently be measured to a high accuracy over integration times of only a few hours, and general circulation models of the atmosphere allow high precision determination of changes in AAM in the model. A comparison between AAM and LOD shows that they are highly correlated. In particular, one recognizes an annual period of LOD with an amplitude of 0.34 milliseconds, maximizing on February 3, and a semiannual period with an amplitude of 0.29 milliseconds, maximizing on May 8, as well as 10‑day fluctuations of the order of 0.1 milliseconds. Interseasonal fluctuations reflecting El Niño events and quasi-biennial oscillations have also been observed. There is now general agreement that most of the changes in LOD on time scales from weeks to a few years are excited by changes in AAM.
Exchange of angular momentum
One means of exchange of angular momentum between the atmosphere and the non-gaseous parts of the Earth is evaporation and precipitation. The water cycle moves massive quantities of water between the oceans and the atmosphere. As the mass of water (vapour) rises, its rotation must slow due to conservation of angular momentum. Equally, when it falls as rain, its rate of rotation will increase to conserve angular momentum. Any net global transfer of water mass from oceans to the atmosphere, or the reverse, implies a change in the speed of rotation of the solid/liquid Earth which will be reflected in LOD.
Observational evidence shows that there is no significant time delay between the change of AAM and its corresponding change of LOD for periods longer than about 10 days. This implies a strong coupling between atmosphere and solid Earth due to surface friction with a time constant of about 7 days, the spin-down time of the Ekman layer. This spin-down time is the characteristic time for the transfer of atmospheric axial angular momentum to Earth's surface and vice versa.
The zonal wind-component on the ground, which is most effective for the transfer of axial angular momentum between Earth and atmosphere, is the component describing rigid rotation of the atmosphere. The zonal wind of this component has the amplitude u at the equator relative to the ground, where u > 0 indicates superrotation and u < 0 indicates retrograde rotation with respect to the solid Earth. All other wind terms merely redistribute the AAM with latitude, an effect that cancels out when averaged over the globe.
Surface friction allows the atmosphere to 'pick up' angular momentum from Earth in the case of retrograde rotation or release it to Earth in the case of superrotation. Averaging over longer time scales, no exchange of AAM with the solid Earth takes place. Earth and atmosphere are decoupled. This implies that the ground level zonal wind-component responsible for rigid rotation must be zero on the average. Indeed, the observed meridional structure of the climatic mean zonal wind on the ground shows westerly winds (from the west) in middle latitudes beyond about ± 30° latitude and easterly winds (from the east) in low latitudes (the trade winds), as well as near the poles (prevailing winds).
The atmosphere picks up angular momentum from Earth at low and high latitudes and transfers the same amount to Earth at middle latitudes.
Any short term fluctuation of the rigidly rotating zonal wind-component is then accompanied by a corresponding change in LOD. In order to estimate the order of magnitude of that effect, one may consider the total atmosphere to rotate rigidly with velocity u (in m/s) relative to the ground, without surface friction. Conservation of the total angular momentum then relates this velocity to the corresponding change of the length of day Δτ through Δτ/LOD = (I_A/I_M) · u/(R Ω), where I_A and I_M are the moments of inertia of the atmosphere and of the mantle and crust, R is Earth's radius and Ω its angular velocity; numerically, a superrotation of 1 m/s changes the LOD by roughly 0.35 milliseconds.
The annual component of the change of the length of day of 0.34 ms then corresponds to a superrotation of roughly 0.9 m/s, and the semiannual component of 0.29 ms to roughly 0.8 m/s.
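A back-of-the-envelope version of this estimate can be written directly from conservation of angular momentum. In the sketch below, the moments of inertia and other constants are assumed textbook values rather than figures taken from this article, so the output should be read only as an order-of-magnitude check against the numbers quoted above.

```python
import math

# Assumed physical constants (standard estimates, not taken from the article)
LOD      = 86400.0                     # length of day, s
R        = 6.371e6                     # Earth's radius, m
OMEGA    = 2.0 * math.pi / LOD         # Earth's angular velocity, rad/s
M_ATM    = 5.1e18                      # mass of the atmosphere, kg
I_ATM    = (2.0 / 3.0) * M_ATM * R**2  # thin-shell moment of inertia of the atmosphere
I_MANTLE = 7.1e37                      # moment of inertia of mantle plus crust, kg m^2

def delta_lod_ms(u):
    """Change in the length of day (ms) if the atmosphere rigidly superrotates
    by u m/s at the equator, with total angular momentum conserved."""
    return LOD * (I_ATM / I_MANTLE) * u / (R * OMEGA) * 1e3

for u in (0.8, 0.9, 1.0):
    print(f"u = {u:.1f} m/s  ->  change in LOD of about {delta_lod_ms(u):.2f} ms")
```

With these assumed constants, a superrotation of about 0.9 m/s gives a LOD change close to the 0.34 ms annual amplitude, and about 0.8 m/s gives roughly the 0.29 ms semiannual amplitude.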
See also
Atmospheric super-rotation
References
Further reading
Day
Earth
Meteorological phenomena
Geodesy
Astrometry | Day length fluctuations | [
"Physics",
"Astronomy",
"Mathematics"
] | 1,282 | [
"Physical phenomena",
"Earth phenomena",
"Applied mathematics",
"Astrometry",
"Meteorological phenomena",
"Geodesy",
"Astronomical sub-disciplines"
] |
35,972,869 | https://en.wikipedia.org/wiki/List%20of%20the%20largest%20optical%20telescopes%20in%20North%20America | This is a list of the largest optical telescopes in North America.
21st century
A list of optical telescopes located in North America by aperture.
Refractors
Some of the big traditional refracting lens telescopes in North America:
Biggest telescopes in 1950
Optical telescopes only
Biggest telescopes in 1900
Biggest telescopes in 1850
Some of the largest at observatories:
See also
Lists of telescopes
List of radio telescopes
List of largest optical reflecting telescopes (mirrors)
List of largest optical refracting telescopes (lenses)
References
External links
US Telescopes (1989)
Lists of telescopes
Telescopes
Optical telescopes | List of the largest optical telescopes in North America | [
"Astronomy"
] | 114 | [
"Astronomy-related lists",
"Lists of telescopes"
] |
35,974,447 | https://en.wikipedia.org/wiki/Cyproterone%20acetate | Cyproterone acetate (CPA), sold alone under the brand name Androcur or with ethinylestradiol under the brand names Diane or Diane-35 among others, is an antiandrogen and progestin medication used in the treatment of androgen-dependent conditions such as acne, excessive body hair growth, early puberty, and prostate cancer, as a component of feminizing hormone therapy for transgender individuals, and in birth control pills. It is formulated and used both alone and in combination with an estrogen. CPA is taken by mouth one to three times per day.
Common side effects of high-dose CPA in men include gynecomastia (breast development) and feminization. In both men and women, possible side effects of CPA include low sex hormone levels, reversible infertility, sexual dysfunction, fatigue, depression, weight gain, and elevated liver enzymes. With prolonged use, brain tumors prompting surgery are common, from 5% at high doses to 2% at low doses. At very high doses in older individuals, significant cardiovascular complications can occur. Rare but serious adverse reactions of CPA include blood clots, and liver damage. CPA can also cause adrenal insufficiency as a withdrawal effect if it is discontinued abruptly from a high dosage. CPA blocks the effects of androgens such as testosterone in the body, which it does by preventing them from interacting with their biological target, the androgen receptor (AR), and by reducing their production by the gonads, hence their concentrations in the body. In addition, it has progesterone-like effects by activating the progesterone receptor (PR). It can also produce weak cortisol-like effects at very high doses.
CPA was discovered in 1961. It was originally developed as a progestin. In 1965, the antiandrogenic effects of CPA were discovered. CPA was first marketed, as an antiandrogen, in 1973, and was the first antiandrogen to be introduced for medical use. A few years later, in 1978, CPA was introduced as a progestin in a birth control pill. It has been described as a "first-generation" progestin and as the prototypical antiandrogen. CPA is available widely throughout the world. An exception is the United States, where it is not approved for use.
Medical uses
CPA is used as a progestin and antiandrogen in hormonal birth control and in the treatment of androgen-dependent conditions. Specifically, CPA is used in combined birth control pills, in the treatment of androgen-dependent skin and hair conditions such as acne, seborrhea, excessive hair growth, and scalp hair loss, in the treatment of high androgen levels, in transgender hormone therapy, to treat prostate cancer, to reduce sex drive in sex offenders or men with paraphilias or hypersexuality, to treat early puberty, and for other uses. Treatment dosages range from 2 mg or less to 100 mg or more daily.
In the United States, where CPA is not available, other medications with antiandrogenic effects are used to treat androgen-dependent conditions instead. Examples of such medications include gonadotropin-releasing hormone modulators (GnRH modulators) like leuprorelin and degarelix, nonsteroidal antiandrogens like flutamide and bicalutamide, the diuretic and steroidal antiandrogen spironolactone, the progestin medroxyprogesterone acetate, and the 5α-reductase inhibitors finasteride and dutasteride. The steroidal antiandrogen and progestin chlormadinone acetate is used as an alternative to CPA in Japan, South Korea, and a few other countries.
In 2020, the European Medicines Agency issued a warning that high doses of cyproterone acetate may contribute to the risk of meningioma, and recommended that physicians use alternative treatments for most indications (or the minimum effective dose where no alternatives are available), with the exception of prostate carcinoma.
Birth control
CPA is used with ethinylestradiol as a combined birth control pill to prevent pregnancy. This birth control combination has been available since 1978. The formulation is taken once daily for 21 days, followed by a 7-day free interval. CPA has also been available in combination with estradiol valerate (brand name Femilar) as a combined birth control pill in Finland since 1993. High-dose CPA tablets have a contraceptive effect and can be used as a form of birth control, although they are not specifically licensed as such.
Skin and hair conditions
Females
CPA is used as an antiandrogen to treat androgen-dependent skin and hair conditions such as acne, seborrhea, hirsutism (excessive hair growth), scalp hair loss, and hidradenitis suppurativa in women. These conditions are worsened by the presence of androgens, and by suppressing androgen levels and blocking their actions, CPA improves the symptoms of these conditions. CPA is used to treat such conditions both at low doses as a birth control pill and on its own at higher doses. A birth control pill containing low-dose CPA in combination with ethinylestradiol to treat acne has been found to result in overall improvement in 75 to 90% of women, with responses approaching 100% improvement. High-dose CPA alone likewise has been found to improve symptoms of acne by 75 to 90% in women. Discontinuation of CPA has been found to result in marked recurrence of symptoms in up to 70% of women. CPA is one of the most commonly used medications in the treatment of hirsutism, hyperandrogenism, and polycystic ovary syndrome in women throughout the world.
Higher dosages of CPA are used in combination with an estrogen specifically at doses of 25 to 100 mg/day cyclically in the treatment of hirsutism in women. The efficacy of such dosages of CPA in the treatment of hirsutism in women appear to be similar to that of spironolactone, flutamide, and finasteride. Randomized controlled trials have found that higher dosages of CPA (e.g., 20 mg/day or 100 mg/day) added cyclically to a birth control pill containing ethinylestradiol and 2 mg/day CPA were no more effective or only marginally more effective in the treatment of severe hirsutism in women than the birth control pill alone. Maintenance therapy with lower doses of CPA, such as 25 mg/day, has been found to be effective in preventing relapse of symptoms of hirsutism. CPA has typically been combined with ethinylestradiol, but it can alternatively be used in combination with hormone replacement therapy dosages of estradiol instead. CPA at a dosage of 50 mg/day in combination with 100 μg/day transdermal estradiol patches has been found to be effective in the treatment of hirsutism similarly to the combination of CPA with ethinylestradiol.
The efficacy of the combination of an estrogen and CPA in the treatment of hirsutism in women appears to be due to marked suppression of total and free androgen levels as well as additional blockade of the androgen receptor.
Males
CPA has been found to be effective in the treatment of acne in males, with marked improvement in symptoms observed at dosages of 25, 50, and 100 mg/day in different studies. It can also halt further progression of scalp hair loss in men. Increased head hair and decreased body hair has been observed with CPA in men with scalp hair loss. However, its side effects in men, such as demasculinization, gynecomastia, sexual dysfunction, bone density loss, and reversible infertility, make the use of CPA in males impractical in most cases. In addition, lower dosages of CPA, such as 25 mg/day, have been found to be better-tolerated in men. But such doses also show lower effectiveness in the treatment of acne in men.
High androgen levels
CPA is used as an antiandrogen to treat high androgen levels and associated symptoms such as masculinization due to conditions like polycystic ovary syndrome (PCOS) and congenital adrenal hyperplasia (CAH) in women. It is almost always combined with an estrogen, such as ethinylestradiol, when it is used in the treatment of PCOS in women.
Menopausal hormone therapy
CPA is used at low doses in menopausal hormone therapy in combination with an estrogen to provide endometrial protection and treat menopausal symptoms. It is used in menopausal hormone therapy under the brand name Climen, which is a sequential preparation that contains 2 mg estradiol valerate and 1 mg CPA. Climen was the first product for use in menopausal hormone therapy containing CPA to be marketed. It is available in more than 40 countries.
Transgender hormone therapy
CPA is widely used as an antiandrogen and progestogen in feminizing hormone therapy for transgender individuals. It has been historically used orally at a dosage of 10 to 100 mg/day and by intramuscular injection at a dosage of 300 mg once every 4 weeks. Many transgender individuals seeking feminizing hormone therapy have breast growth as one of the goals for undergoing feminizing hormone therapy, making this particular side effect of CPA generally viewed as a beneficial outcome rather than an issue.
Studies have found that 10, 25, 50, and 100 mg/day CPA in combination with estrogen all result in equivalent and full testosterone suppression in transgender women. In light of risks of CPA such as fatigue, blood clots, benign brain tumors, and liver damage, the use of lower dosages of CPA may help to minimize such risks. As a result, a CPA dosage of 10mg/day and no greater is now recommended by the World Professional Association for Transgender Health (WPATH) Standards of Care for the Health of Transgender and Gender Diverse People, Version 8 (SOC8).
CPA has an advantage over spironolactone as an antiandrogen in transgender people, as the combination of estrogen and CPA consistently suppresses testosterone levels into the normal female range whereas estrogen with spironolactone does not. Spironolactone is the most widely used antiandrogen in transgender women in the United States, whereas CPA is widely used in Europe and throughout the rest of the world.
Aside from adult transgender people, CPA has also been used as a puberty blocker and hence as an antiandrogen and antiestrogen to suppress puberty in transgender adolescents, although GnRH modulators are primarily used and more effective for this purpose.
Prostate cancer
CPA is used as an antiandrogen monotherapy and means of androgen deprivation therapy in the palliative treatment of prostate cancer in men. It is used at very high doses by mouth or by intramuscular injection to treat this disease. Antiandrogens do not cure prostate cancer, but can significantly extend life in men with the disease. CPA has similar effectiveness to GnRH modulators and surgical castration, high-dose estrogen therapy (e.g., with diethylstilbestrol), and high-dose nonsteroidal antiandrogen monotherapy (e.g., with bicalutamide), but has significantly inferior effectiveness to combined androgen blockade with a GnRH modulator and a nonsteroidal antiandrogen (e.g., with bicalutamide or enzalutamide). In addition, the combination of CPA with a GnRH modulator or surgical castration has not been found to improve outcomes relative to a GnRH modulator or surgical castration alone, in contrast to nonsteroidal antiandrogens. Due to its inferior effectiveness, tolerability, and safety, CPA is rarely used in the treatment of prostate cancer today, having largely been superseded by GnRH modulators and nonsteroidal antiandrogens. CPA is the only steroidal antiandrogen that continues to be used in the treatment of prostate cancer.
Dose-ranging studies of CPA for prostate cancer were not performed, and the optimal dosage of CPA for the treatment of the condition has not been established. A dosage range of oral CPA of 100 to 300 mg/day is used in the treatment of prostate cancer, but generally 150 to 200 mg/day oral CPA is used. Schröder (1993, 2002) reviewed the issue of CPA dosage and recommended a dosage of 200 to 300 mg/day for CPA as a monotherapy and a dosage of 100 to 200 mg/day for CPA in combined androgen blockade (that is, CPA in combination with surgical or medical castration). However, the combination of CPA with castration for prostate cancer has been found to significantly decrease overall survival compared to castration alone. Hence, the use of CPA as the antiandrogen component in combined androgen blockade would appear not to be advisable. When used by intramuscular injection to treat prostate cancer, CPA is used at a dosage of 300 mg once a week.
The combination of CPA with an estrogen such as ethinylestradiol sulfonate or low-dose diethylstilbestrol has been used as a form of combined androgen blockade and as an alternative to the combination of CPA with surgical or medical castration.
Sexual deviance
CPA is used as an antiandrogen and form of chemical castration in the treatment of paraphilias and hypersexuality in men. It is used to treat sex offenders. The medication is approved in more than 20 countries for this indication and is predominantly employed in Canada, Europe, and the Middle East. CPA works by decreasing sex drive and sexual arousal and producing sexual dysfunction. CPA can also be used to reduce sex drive in individuals with inappropriate sexual behaviors, such as people with intellectual disability and dementia. The medication is also used to reduce sexual behavior diagnosed as self-harmful, such as masochism. CPA has comparable effectiveness to medroxyprogesterone acetate in suppressing sexual urges and function but appears to be less effective than GnRH modulators like leuprorelin and has more side effects.
High-dose CPA significantly decreases sexual fantasies and sexual activity in 80 to 90% of men with paraphilias. In addition, it has been found to decrease the rate of reoffending in sex offenders from 85% to 6%, with most of the reoffenses being committed by individuals who did not follow their CPA treatment prescription. It has been reported that in 80% of cases, 100 mg/day CPA is adequate to achieve the desired reduction of sexuality, whereas in the remaining 20% of cases, 200 mg/day is sufficient. When only a partial reduction in sexuality is desired, 50 mg/day CPA can be useful. Reduced sexual desire and erectile function occurs with CPA by the end of the first week of treatment, and becomes maximal within three to four weeks. The dosage range is 50 to 300 mg/day.
Early puberty
CPA is used as an antiandrogen and antiestrogen to treat precocious puberty in boys and girls. However, it is not fully satisfactory for this indication because it is not able to completely suppress puberty. For this reason, CPA has mostly been superseded by GnRH agonists in the treatment of precocious puberty. CPA is not satisfactory for gonadotropin-independent precocious puberty. CPA has been used at dosages of 50 to 300 mg/m2 to treat precocious puberty.
Other uses
CPA is useful in the treatment of hot flashes, for instance due to androgen deprivation therapy for prostate cancer.
CPA is useful for suppressing the testosterone flare at the initiation of GnRH agonist therapy. It has been used successfully both alone and in combination with estrogens such as diethylstilbestrol for this purpose.
Available forms
CPA is available in the form of oral tablets alone (higher-dose; 10 mg, 50 mg, 100 mg) or in combination with ethinylestradiol or estradiol valerate (low-dose; 1 or 2 mg CPA) and in the form of ampoules for intramuscular injection (higher-dose; 100 mg/mL, 300 mg/3 mL; brand name Androcur Depot).
The higher-dose formulations are used to treat prostate cancer and certain other androgen-related indications while the low-dose formulations which also have an estrogen are used as combined birth control pills and are used in menopausal hormone therapy for the treatment of menopausal symptoms.
Contraindications
Contraindications of CPA include:
Hypersensitivity to CPA or any of the other components of the medication
Pregnancy, lactation, and breastfeeding
Puberty (except if being used to treat precocious puberty or delay puberty)
Liver diseases and liver dysfunction
Chronic kidney disease
Dubin–Johnson syndrome and Rotor syndrome
History of jaundice or persistent pruritus during pregnancy
History of herpes during pregnancy
Previous or existing liver tumors (only if not due to metastases from prostate cancer)
Previous or existing meningioma, hyperprolactinemia, or prolactinoma
Wasting syndromes (except in inoperable prostate cancer)
Severe depression
Previous or existing thromboembolic processes, as well as stroke and myocardial infarction
Severe diabetes with vascular changes
Sickle-cell anemia
When CPA is used in combination with an estrogen, contraindications for birth control pills should also be considered.
Side effects
CPA is generally well-tolerated and has a mild side-effect profile regardless of dosage when it is used in combination with an estrogen in women. Side effects of CPA in general include hypogonadism (low sex-hormone levels) and associated symptoms such as demasculinization, sexual dysfunction, infertility, and osteoporosis (fragile bones); breast changes such as breast tenderness, breast enlargement, and gynecomastia (breasts in men); emotional changes such as fatigue and depression; and other side effects such as vitamin B12 deficiency, weak glucocorticoid effects, and elevated liver enzymes. Weight gain can occur with CPA when it is used at high doses. Some of the side effects of CPA can be improved or fully prevented if it is combined with an estrogen to prevent estrogen deficiency. Few quantitative data are available on many of the potential side effects of CPA. Pooled tolerability data for CPA is not available in the literature. Cyproterone is also known to suppress adrenocortical function.
At very high doses in aged men with prostate cancer, CPA can cause cardiovascular side effects. Rarely, CPA can produce blood clots, liver toxicity (including hepatitis, liver failure, and liver cancer), excessively high prolactin levels, and certain benign brain tumors including meningiomas (tumors of the meninges) and prolactinomas (prolactin-secreting tumors of the pituitary gland). Upon discontinuation from high doses, CPA can produce adrenal insufficiency as a withdrawal effect.
Overdose
CPA is relatively safe in acute overdose. It is used at very high doses of up to 300 mg/day by mouth and 700 mg per week by intramuscular injection. For comparison, the dose of CPA used in birth control pills is 2 mg/day. There have been no deaths associated with CPA overdose. There are no specific antidotes for CPA overdose, and treatment should be symptom-based. Gastric lavage can be used in the event of oral overdose within the last 2 to 3 hours.
Interactions
Inhibitors and inducers of the cytochrome P450 enzyme CYP3A4 may interact with CPA. Examples of strong CYP3A4 inhibitors include ketoconazole, itraconazole, clotrimazole, and ritonavir, while examples of strong CYP3A4 inducers include rifampicin (also known as rifampin), phenytoin, carbamazepine, phenobarbital, and St. John's wort. Certain anticonvulsant medications can substantially reduce levels of CPA, by as much as 8-fold.
Pharmacology
Pharmacodynamics
CPA has antiandrogenic activity, progestogenic activity, weak partial glucocorticoid activity, weak steroidogenesis inhibitor activity, and agonist activity at the pregnane X receptor. It has no estrogenic or antimineralocorticoid activity. In terms of potency, CPA is described as a highly potent progestogen, a moderately potent antiandrogen, and a weak glucocorticoid. Due to its progestogenic activity, CPA has antigonadotropic effects, and is able to suppress fertility and sex-hormone levels in both males and females.
Pharmacokinetics
CPA can be taken by mouth or by injection into muscle. It has near-complete oral bioavailability, is highly and exclusively bound to albumin in terms of plasma protein binding, is metabolized in the liver by hydroxylation and conjugation, has 15β-hydroxycyproterone acetate (15β-OH-CPA) as a single major active metabolite, has a long elimination half-life of about 2 to 4 days regardless of route of administration, and is excreted in feces primarily and to a lesser extent in urine.
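As a rough illustration of what such a long half-life implies, the accumulation expected under repeated dosing can be estimated assuming simple first-order (linear) elimination; this is an idealized sketch, and the once-daily dosing interval is only an assumed example:

```python
# Steady-state accumulation under repeated dosing, assuming first-order
# elimination (an idealization): R = 1 / (1 - 2 ** (-tau / t_half)).
tau = 1.0                                   # assumed dosing interval, days
for t_half in (2.0, 4.0):                   # reported elimination half-life, days
    R = 1.0 / (1.0 - 2.0 ** (-tau / t_half))
    print(f"t1/2 = {t_half:.0f} d -> accumulation ratio ≈ {R:.1f}; "
          f"~97% of steady state after about {5 * t_half:.0f} days")
```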
Chemistry
CPA, also known as 1α,2α-methylene-6-chloro-17α-acetoxy-δ6-progesterone or as 1α,2α-methylene-6-chloro-17α-hydroxypregna-4,6-diene-3,20-dione acetate, is a synthetic pregnane steroid and an acetylated derivative of 17α-hydroxyprogesterone. It is structurally related to other 17α-hydroxyprogesterone derivatives such as chlormadinone acetate, hydroxyprogesterone caproate, medroxyprogesterone acetate, and megestrol acetate.
Synthesis
Chemical syntheses of CPA have been published. The following is one such synthesis:
The dehydrogenation of 17α-hydroxyprogesterone acetate [302-23-8] (1) with chloranil (tetrachloro-p-benzoquinone) gives a compound that has been called melengestrol acetate [425-51-4] (2). Dehydrogenation with selenium dioxide gives 17-acetoxy-1,4,6-pregnatriene-3,20-dione [2668-75-9] (3). Reacting this with diazomethane results in a 1,3-dipolar addition reaction at C1–C2 of the double bond of the steroid system, which forms a derivative of dihydropyrazole, CID:134990386 (4). This compound cleaves when reacted with perchloric acid, releasing nitrogen molecules and forming a cyclopropane derivative, 6-deschloro cyproterone acetate [2701-50-0] (5). Selective oxidation of the C6=C7 olefin with benzoyl peroxide gives the epoxide, i.e. 6-deschloro-6,7-epoxy cyproterone [15423-97-9] (6). The penultimate step involves a reaction with hydrochloric acid in acetic acid, resulting in the formation of chlorine and its subsequent dehydration, and a simultaneous opening of the cyclopropane ring giving 1α-(chloromethyl) chlormadinone acetate [17183-98-1] (7). The heating of this in collidine reforms the cyclopropane ring, completing the synthesis of CPA (8).
History
CPA was first synthesized in 1961 by Rudolf Wiechert, a Schering employee, and together with Friedmund Neumann in Berlin, they filed for a patent for CPA as "progestational agent" in 1962. The antiandrogenic activity of CPA was discovered serendipitously by Hamada, Neumann, and Karl Junkmann in 1963. Along with the steroidal antiandrogens benorterone (17α-methyl-B-nortestosterone; SKF-7690), cyproterone, BOMT (Ro 7–2340), and trimethyltrienolone (R-2956) and the nonsteroidal antiandrogens flutamide and DIMP (Ro 7–8117), CPA was one of the first antiandrogens to be discovered and researched.
CPA was initially developed as a progestogen for the prevention of threatened abortion. As part of its development, it was assessed for androgenic activity to ensure that it would not produce teratogenic effects in female fetuses. The drug was administered to pregnant rats and its effects on the rat fetuses were studied. To the surprise of the researchers, all of the rat pups born appeared to be female. After 20 female rat pups in a row had been counted, it was clear that this could not be a chance occurrence. The rat pups were further evaluated and it was found that, in terms of karyotype, about 50% were actually males. The male rat pups had been feminized, and this resultant finding constituted the discovery of the powerful antiandrogenic activity of CPA. A year after patent approval in 1965, Neumann published additional evidence of CPA's antiandrogenic effect in rats; he reported an "organizational effect of CPA on the brain". CPA started being used in animal experiments around the world to investigate how antiandrogens affected fetal sexual differentiation.
The first clinical use of CPA in the treatment of sexual deviance and prostate cancer occurred in 1966. It was first studied in the treatment of androgen-dependent skin and hair symptoms, specifically acne, hirsutism, seborrhea, and scalp hair loss, in 1969. CPA was first approved for medical use in 1973 in Europe under the brand name Androcur. In 1977, a formulation of CPA was introduced for use by intramuscular injection. CPA was first marketed as a birth control pill in 1978 in combination with ethinylestradiol under the brand name Diane. Following phase III clinical trials, CPA was approved for the treatment of prostate cancer in Germany in 1980. CPA became available in Canada as Androcur in 1987, as Androcur Depot in 1990, and as Diane-35 in 1998. Conversely, CPA was never introduced in any form in the United States. This was reportedly due to concerns about breast tumors observed with high-dose pregnane progestogens in beagle dogs as well as concerns about potential teratogenicity in pregnant women. Use of CPA in transgender women, an off-label indication, was reported as early as 1977. The use of CPA in transgender women was well-established by the early 1990s.
The history of CPA, including its discovery, development, and marketing, has been reviewed.
Society and culture
Generic names
The English and generic name of CPA is cyproterone acetate. The English and generic name of unacetylated cyproterone is cyproterone, while cyprotérone is the French name and ciproterone is the Italian name. The name of unesterified cyproterone in Latin is cyproteronum, in German is cyproteron, and in Spanish is ciproterona. These names of cyproterone correspond for CPA to acétate de cyprotérone in French, acetato de ciproterona in Spanish, ciproterone acetato in Italian, cyproteronacetat in German, siproteron asetat in Turkish, and cyproteronacetaat in Dutch. CPA is also known by the developmental code names SH-80714 and SH-714, while unacetylated cyproterone is known by the developmental code names SH-80881 and SH-881.
Brand names
CPA is marketed under brand names including Androcur, Androcur Depot, Androcur-100, Androstat, Asoteron, Cyprone, Cyproplex, Cyprostat, Cysaxal, Imvel, and Siterone. When CPA is formulated in combination with ethinylestradiol, it is also known as co-cyprindiol, and brand names for this formulation include Andro-Diane, Bella HEXAL 35, Chloe, Cypretil, Cypretyl, Cyproderm, Diane, Diane Mite, Diane-35, Dianette, Dixi 35, Drina, Elleacnelle, Estelle, Estelle-35, Ginette, Linface, Minerva, Vreya, and Zyrona. CPA is also marketed in combination with estradiol valerate as Climen, Climene, Elamax, and Femilar.
Availability
CPA is widely available throughout the world, and is marketed in almost every developed country, with the notable major exceptions of the United States and Japan. In almost all countries in which CPA is marketed, it is available both alone and in combination with an estrogen in birth control pills. CPA is marketed widely in combination with both ethinylestradiol and estradiol valerate. CPA-containing birth control pills are available in South Korea, but CPA as a standalone medication is not marketed in this country. In Japan and South Korea, the closely related antiandrogen and progestin chlormadinone acetate, as well as other medications, are used instead of CPA. Specific places in which CPA is marketed include the United Kingdom, elsewhere throughout Europe, Canada, Australia, New Zealand, South Africa, Latin America, and Asia. CPA is not marketed in most of Africa and the Middle East.
It has been said that the lack of availability of CPA in the United States explains why there are relatively few studies of it in the treatment of androgen-dependent conditions such as hyperandrogenism and hirsutism in women.
Generation
Progestins in birth control pills are sometimes grouped by generation. While the 19-nortestosterone progestins are consistently grouped into generations, the pregnane progestins that are or have been used in birth control pills are typically omitted from such classifications or are grouped simply as "miscellaneous" or "pregnanes". In any case, CPA has been described as a "first-generation" progestin similarly to closely related progestins like chlormadinone acetate, medroxyprogesterone acetate, and megestrol acetate.
Research
CPA has been studied and used in combination with low-dose diethylstilbestrol in the treatment of prostate cancer. The combination results in suppression of testosterone levels into the castrate range, which normally cannot be achieved with CPA alone. CPA has been studied as a form of androgen deprivation therapy for the treatment of benign prostatic hyperplasia (enlarged prostate). The medication has been studied in the treatment of breast cancer as well.
CPA has been studied for use as a potential male hormonal contraceptive both alone and in combination with testosterone in men. CPA was under development by Barr Pharmaceuticals in the 2000s for the treatment of hot flashes in prostate cancer patients in the United States. It reached phase III clinical trials for this indication and had the tentative brand name CyPat, but development was ultimately discontinued in 2008. CPA is not satisfactorily effective as a topical antiandrogen, for instance in the treatment of acne. CPA has been used to treat estrogen hypersensitivity vulvovaginitis in women.
CPA has been investigated for use in reducing aggression and self-injurious behavior via its antiandrogenic effects in conditions like autism spectrum disorders, dementias like Alzheimer's disease, and psychosis. CPA may be effective in the treatment of obsessive–compulsive disorder (OCD). CPA has been studied in the treatment of cluster headaches in men.
See also
Estradiol valerate/cyproterone acetate
Ethinylestradiol/cyproterone acetate
References
Further reading
External links
Die Geschichte des Wirkstoffs Cyproteronazetat: Von der "Pille für den Mann" zum "Hautpflegemittel mit Empfängnisschutz" [The History of Cyproterone Acetate: From the "Pill for Men" to the "Skin Care Product and Contraceptive"] - Arznei-Telegramm [Google Translate]
3β-Hydroxysteroid dehydrogenase inhibitors
Acetate esters
Anti-acne preparations
Antiandrogen esters
Antigonadotropins
Aryl hydrocarbon receptor antagonists
Organochlorides
Cyclopropanes
CYP17A1 inhibitors
Enones
Glucocorticoids
Hair loss medications
Hair removal
Hepatotoxins
Hormonal antineoplastic drugs
Ketones
Pregnane X receptor agonists
Pregnanes
Progestogen esters
Progestogens
Prolactin releasers
Prostate cancer
Steroidal antiandrogens | Cyproterone acetate | [
"Chemistry"
] | 7,244 | [
"Ketones",
"Functional groups"
] |
35,975,525 | https://en.wikipedia.org/wiki/Quantifier%20rank | In mathematical logic, the quantifier rank of a formula is the depth of nesting of its quantifiers. It plays an essential role in model theory.
Notice that the quantifier rank is a property of the formula itself (i.e. the expression in a language). Thus two logically equivalent formulae can have different quantifier ranks, when they express the same thing in different ways.
Definition
Quantifier Rank of a Formula in First-order language (FO)
Let φ be a FO formula. The quantifier rank of φ, written qr(φ), is defined as
qr(φ) = 0, if φ is atomic.
qr(φ ∧ ψ) = qr(φ ∨ ψ) = max(qr(φ), qr(ψ)).
qr(¬φ) = qr(φ).
qr(∃x φ) = qr(φ) + 1.
qr(∀x φ) = qr(φ) + 1.
Remarks
We write FO[n] for the set of all first-order formulas φ with .
Relational FO[n] (without function symbols) is always of finite size, i.e. it contains only a finite number of formulas up to logical equivalence
Notice that in Prenex normal form the Quantifier Rank of φ is exactly the number of quantifiers appearing in φ.
Quantifier Rank of a higher order Formula
For Fixpoint logic, with a least fix point operator LFP:
Examples
A sentence of quantifier rank 2: ∀x ∃y R(x, y)
A formula of quantifier rank 1: ∃y R(x, y)
A formula of quantifier rank 0: R(x, y)
A sentence in prenex normal form of quantifier rank 3: ∀x ∃y ∀z (R(x, y) ∧ S(z))
A sentence, equivalent to the previous, although of quantifier rank 2: ∀x ∃y R(x, y) ∧ ∀z S(z)
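The recursive definition above translates directly into code. The following is a minimal sketch over a small, illustrative formula AST (the class names and structure are assumptions made for this example, not part of any standard library):

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass
class Atom:            # an atomic formula, e.g. R(x, y)
    name: str
    args: Tuple[str, ...]

@dataclass
class Not:             # negation
    sub: "Formula"

@dataclass
class BinOp:           # conjunction or disjunction
    op: str
    left: "Formula"
    right: "Formula"

@dataclass
class Quant:           # "exists" or "forall"
    kind: str
    var: str
    sub: "Formula"

Formula = Union[Atom, Not, BinOp, Quant]

def qr(phi: Formula) -> int:
    """Quantifier rank: depth of nesting of quantifiers."""
    if isinstance(phi, Atom):
        return 0
    if isinstance(phi, Not):
        return qr(phi.sub)
    if isinstance(phi, BinOp):
        return max(qr(phi.left), qr(phi.right))
    if isinstance(phi, Quant):
        return 1 + qr(phi.sub)
    raise TypeError("not a formula")

# ∀x ∃y R(x, y) has quantifier rank 2:
print(qr(Quant("forall", "x", Quant("exists", "y", Atom("R", ("x", "y"))))))  # 2
```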
See also
Prenex normal form
Ehrenfeucht–Fraïssé_game
Quantifier
References
.
.
External links
Quantifier Rank Spectrum of L-infinity-omega BA Thesis, 2000
Finite model theory
Model theory
Predicate logic
Quantifier (logic) | Quantifier rank | [
"Mathematics"
] | 365 | [
"Predicate logic",
"Mathematical logic",
"Basic concepts in set theory",
"Finite model theory",
"Model theory",
"Mathematical logic stubs",
"Quantifier (logic)"
] |
35,976,444 | https://en.wikipedia.org/wiki/Wehrl%20entropy | In quantum information theory, the Wehrl entropy, named after Alfred Wehrl, is a classical entropy of a quantum-mechanical density matrix. It is a type of quasi-entropy defined for the Husimi Q representation of the phase-space quasiprobability distribution. See for a comprehensive review of basic properties of classical, quantum and Wehrl entropies, and their implications in statistical mechanics.
Definitions
The Husimi function is a "classical phase-space" function of position x and momentum p, and in one dimension is defined for any quantum-mechanical density matrix ρ by
Q_ρ(α) = ⟨α|ρ|α⟩, with α = (x + ip)/√2 in units where ħ = 1,
where |α⟩ is a "(Glauber) coherent state", given by
|α⟩ = e^(−|α|²/2) Σ_{n≥0} (αⁿ/√(n!)) |n⟩.
(It can be understood as the Weierstrass transform of the Wigner quasi-probability distribution.)
The Wehrl entropy is then defined as
S_W(ρ) = −∫ Q_ρ(α) ln Q_ρ(α) d²α/π;
with this convention, Q_ρ is normalized so that ∫ Q_ρ(α) d²α/π = 1.
The definition can be easily generalized to any finite dimension.
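A quick numerical check of this definition is possible for states whose Husimi functions have simple closed forms. The sketch below assumes the convention above with ħ = 1; the closed-form Husimi functions for a coherent state and for a thermal state with mean photon number n̄ are standard results. It verifies that a coherent state gives S_W ≈ 1 while a thermal state gives S_W = 1 + ln(1 + n̄):

```python
import numpy as np

def wehrl_entropy(Q, r_max=20.0, n_r=200001):
    """S_W = -(1/pi) ∫ Q ln Q d^2α for a radially symmetric Q(|α|) = Q(r),
    evaluated as -2 ∫ Q(r) ln Q(r) r dr by a simple Riemann sum."""
    r = np.linspace(1e-9, r_max, n_r)
    dr = r[1] - r[0]
    q = Q(r)
    integrand = np.where(q > 0, q * np.log(q), 0.0)
    return -2.0 * np.sum(integrand * r) * dr

# Coherent state (centred at the origin without loss of generality):
print(wehrl_entropy(lambda r: np.exp(-r**2)))              # ≈ 1.0 (the minimum)

# Thermal state with mean photon number nbar:
nbar = 2.0
Q_th = lambda r: np.exp(-r**2 / (1 + nbar)) / (1 + nbar)
print(wehrl_entropy(Q_th), 1 + np.log(1 + nbar))           # both ≈ 2.0986
```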
Properties
Such a definition of the entropy relies on the fact that the Husimi Q representation remains non-negative definite, unlike other representations of quantum quasiprobability distributions in phase space. The Wehrl entropy has several important properties:
It is always positive, like the full quantum von Neumann entropy, but unlike the classical differential entropy which can be negative at low temperature. In fact, the minimum value of the Wehrl entropy is 1, i.e. S_W(ρ) ≥ 1, as discussed below in the section "Wehrl's conjecture".
The entropy for the tensor product of two systems is always greater than the entropy of one system. In other words, for a state ρ₁₂ on a Hilbert space H₁ ⊗ H₂, we have S_W(ρ₁₂) ≥ S_W(ρ₁), where ρ₁ = Tr₂ ρ₁₂. Note that the quantum von Neumann entropy, S(ρ), does not have this property, as can be clearly seen for a pure maximally entangled state.
The Wehrl entropy is strictly lower bounded by the von Neumann entropy, S_W(ρ) > S(ρ). There is no known upper or lower bound (other than zero) for the difference S_W(ρ) − S(ρ).
The Wehrl entropy is not invariant under all unitary transformations, unlike the von Neumann entropy. In other words, S_W(UρU†) ≠ S_W(ρ) for a general unitary U. It is, however, invariant under certain unitary transformations.
Wehrl's conjecture
In his original paper Wehrl posted a conjecture that the smallest possible value of Wehrl entropy is 1, and it occurs if and only if the density matrix is a pure state projector onto any coherent state, i.e. for all choices of α₀,
S_W(|α₀⟩⟨α₀|) = 1.
Soon after the conjecture was posted, E. H. Lieb proved that the minimum of the Wehrl entropy is 1, and it occurs when the state is a projector onto any coherent state.
In 1991 E. Carlen proved the uniqueness of the minimizer, i.e. the minimum of the Wehrl entropy occurs only when the state is a projector onto any coherent state.
The analog of the Wehrl conjecture for systems with a classical phase space isomorphic to the sphere (rather than the plane) is the Lieb conjecture.
Discussion
However, it is not the fully quantum von Neumann entropy in the Husimi representation in phase space, : all the requisite star-products ★ in that entropy have been dropped here. In the Husimi representation, the star products read
and are isomorphic to the Moyal products of the Wigner–Weyl representation.
The Wehrl entropy, then, may be thought of as a type of heuristic semiclassical approximation to the full quantum von Neumann entropy, since it retains some ħ dependence (through Q) but not all of it.
Like all entropies, it reflects some measure of non-localization, as the Gauss transform involved in generating Q and the sacrifice of the star operators have effectively discarded information. In general, as indicated, for the same state, the Wehrl entropy exceeds the von Neumann entropy (which vanishes for pure states).
Wehrl entropy for Bloch coherent states
Wehrl entropy can be defined for other kinds of coherent states. For example, it can be defined for Bloch coherent states, that is, for angular momentum representations of the group for quantum spin systems.
Bloch coherent states
Consider a space with . We consider a single quantum spin of fixed angular momentum , and shall denote by the usual angular momentum operators that satisfy the following commutation relations: and cyclic permutations.
Define , then and .
The eigenstates of are
For the state satisfies: and .
Denote the unit sphere in three dimensions by
,
and by the space of square integrable function on with the measure
.
The Bloch coherent state is defined by
.
Taking into account the above properties of the state , the Bloch coherent state can also be expressed as
where , and
is a normalised eigenstate of satisfying .
The Bloch coherent state is an eigenstate of the rotated angular momentum operator with a maximum eigenvalue. In other words, for a rotation operator
,
the Bloch coherent state satisfies
.
Wehrl entropy for Bloch coherent states
Given a density matrix , define the semi-classical density distribution
.
The Wehrl entropy of for Bloch coherent states is defined as a classical entropy of the density distribution ,
,
where is a classical differential entropy.
Wehrl's conjecture for Bloch coherent states
The analogue of Wehrl's conjecture for Bloch coherent states was proposed in 1978. It suggests the minimum value of the Wehrl entropy for Bloch coherent states,
,
and states that the minimum is reached if and only if the state is a pure Bloch coherent state.
In 2012 E. H. Lieb and J. P. Solovej proved a substantial part of this conjecture, confirming the minimum value of the Wehrl entropy for Bloch coherent states, and the fact that it is reached for any pure Bloch coherent state. The uniqueness of the minimizers was proved in 2022 by R. L. Frank and by A. Kulikov, F. Nicola, J. Ortega-Cerdà and P. Tilli.
Generalized Wehrl's conjecture
In 2012, E. H. Lieb and J. P. Solovej proved Wehrl's conjecture for Bloch coherent states by generalizing it in the following manner.
Generalized Wehrl's conjecture
For any concave function f: [0, 1] → ℝ (e.g. f(x) = −x ln x, as in the definition of the Wehrl entropy), and any density matrix ρ, we have
∫ f(Q_ρ(α)) d²α/π ≥ ∫ f(Q_{ρ₀}(α)) d²α/π,
where ρ₀ = |α₀⟩⟨α₀| is a pure coherent state defined in the section "Wehrl's conjecture".
Generalized Wehrl's conjecture for Bloch coherent states
Generalized Wehrl's conjecture for Glauber coherent states was proved as a consequence of the similar statement for Bloch coherent states. For any concave function f, and any density matrix ρ, we have
∫_{S²} f(⟨Ω|ρ|Ω⟩) dΩ ≥ ∫_{S²} f(|⟨Ω|Ω₀⟩|²) dΩ,
where Ω₀ is any point on the sphere.
The uniqueness of the minimizers was proved in the aforementioned papers.
See also
Coherent state
Entropy
Information theory and measure theory
Lieb conjecture
Quantum information
Quantum mechanics
Spin
Statistical mechanics
Von Neumann entropy
References
Quantum mechanical entropy
Mathematical physics
Quantum mechanics | Wehrl entropy | [
"Physics",
"Mathematics"
] | 1,428 | [
"Physical quantities",
"Applied mathematics",
"Theoretical physics",
"Entropy",
"Quantum mechanical entropy",
"Mathematical physics"
] |
31,775,773 | https://en.wikipedia.org/wiki/Brjuno%20number | In mathematics, a Brjuno number (sometimes spelled Bruno or Bryuno) is a special type of irrational number named for Russian mathematician Alexander Bruno, who introduced them in .
Formal definition
An irrational number α is called a Brjuno number when the infinite sum
B(α) = Σ_{n≥0} ln(q_{n+1}) / q_n
converges to a finite number.
Here:
q_n is the denominator of the nth convergent p_n/q_n of the continued fraction expansion of α.
B(α) is the Brjuno function (defined below).
Examples
Consider the golden ratio φ = (1 + √5)/2, whose continued fraction expansion is [1; 1, 1, 1, ...]:
Then the denominator q_n of the nth convergent can be found via the recurrence relation q_{n+1} = q_n + q_{n−1}, so the q_n are the Fibonacci numbers.
It is easy to see that q_{n+1} ≤ 2q_n for n ≥ 1, as a result ln(q_{n+1})/q_n ≤ (ln 2 + ln q_n)/q_n,
and since it can be proven that q_n ≥ 2^{(n−1)/2} for any irrational number, the series converges and φ is a Brjuno number. Moreover, a similar method can be used to prove that any irrational number whose continued fraction expansion ends with a string of 1's is a Brjuno number.
By contrast, consider the constant with defined as
Then , so we have by the ratio test that diverges. is therefore not a Brjuno number.
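Returning to the golden ratio, the Brjuno sum can be estimated numerically from the Fibonacci denominators. The following is a minimal sketch (the number of terms and the quoted limit are illustrative):

```python
from math import log

def brjuno_sum_golden(n_terms: int = 60) -> float:
    """Partial Brjuno sum  B = Σ ln(q_{n+1}) / q_n  for the golden ratio,
    whose convergent denominators obey the Fibonacci recurrence."""
    q_prev, q = 1, 1            # q_0 = q_1 = 1
    total = 0.0
    for _ in range(n_terms):
        q_next = q + q_prev     # q_{n+1} = q_n + q_{n-1}
        total += log(q_next) / q
        q_prev, q = q, q_next
    return total

print(brjuno_sum_golden())       # ≈ 3.3, a finite value: the golden ratio
                                 # is a Brjuno number
```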
Importance
The Brjuno numbers are important in the one-dimensional analytic small divisors problems. Bruno improved the diophantine condition in Siegel's Theorem by showing that germs of holomorphic functions with linear part e^(2πiα) are linearizable if α is a Brjuno number. Jean-Christophe Yoccoz showed in 1987 that Brjuno's condition is sharp; more precisely, he proved that for quadratic polynomials, this condition is not only sufficient but also necessary for linearizability.
Properties
Intuitively, these numbers do not have many large "jumps" in the sequence of convergents, in which the denominator of the (n + 1)th convergent is exponentially larger than that of the nth convergent. Thus, in contrast to the Liouville numbers, they do not have unusually accurate diophantine approximations by rational numbers.
Brjuno function
Brjuno sum
The Brjuno sum or Brjuno function is
B(α) = Σ_{n≥0} ln(q_{n+1}) / q_n
where:
q_n is the denominator of the nth convergent p_n/q_n of the continued fraction expansion of α.
Real variant
The real Brjuno function is defined for irrational numbers
and satisfies
for all irrational between 0 and 1.
Yoccoz's variant
Yoccoz's variant of the Brjuno sum defined as follows:
where:
is irrational real number:
is the fractional part of
is the fractional part of
This sum converges if and only if the Brjuno sum does, and in fact their difference is bounded by a universal constant.
See also
Irrationality measure
Markov constant
References
Notes
Dynamical systems
Number theory | Brjuno number | [
"Physics",
"Mathematics"
] | 519 | [
"Discrete mathematics",
"Mechanics",
"Number theory",
"Dynamical systems"
] |
31,778,638 | https://en.wikipedia.org/wiki/File%20dynamics | The term file dynamics is the motion of many particles in a narrow channel.
In science: in chemistry, physics, mathematics and related fields, file dynamics (sometimes called single file dynamics) is the diffusion of N (N → ∞) identical Brownian hard spheres in a quasi-one-dimensional channel of length L (L → ∞), such that the spheres do not jump one on top of the other, and the average particle density is approximately fixed. The most famous statistical property of this process is that the mean squared displacement (MSD) of a particle in the file grows with the square root of time, MSD ∼ t^(1/2), and its probability density function (PDF) is Gaussian in position with a variance given by the MSD.
Results in files that generalize the basic file include:
In files with a density law that is not fixed, but decays as a power law with an exponent a with the distance from the origin, the particle in the origin has a MSD that scales like, , with a Gaussian PDF.
When, in addition, the particles' diffusion coefficients are distributed like a power law with exponent γ (around the origin), the MSD follows, , with a Gaussian PDF.
In anomalous files that are renewal, namely, when all particles attempt a jump together, yet, with jumping times taken from a distribution that decays as a power law with an exponent, −1 − α, the MSD scales like the MSD of the corresponding normal file, in the power of α.
In anomalous files of independent particles, the MSD is very slow and scales like, . Even more exciting, the particles form clusters in such files, defining a dynamical phase transition. This depends on the anomaly power α: the percentage of particles in clusters ξ follows, .
Other generalizations include: when the particles can bypass each other with a constant probability upon encounter, an enhanced diffusion is seen. When the particles interact with the channel, a slower diffusion is observed. Files embedded in two dimensions show characteristics similar to those of files in one dimension.
Generalizations of the basic file are important since these models represent reality much more accurately than the basic file. Indeed, file dynamics are used in modeling numerous microscopic processes: the diffusion within biological and synthetic pores and porous material, the diffusion along 1D objects, such as in biological roads, the dynamics of a monomer in a polymer, etc.
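The hallmark MSD ∼ t^(1/2) behaviour of the basic file is easy to reproduce in a small Monte Carlo simulation. The sketch below uses the standard equivalence for identical hard point particles: single-file dynamics has the same tagged-particle statistics as letting the particles diffuse freely and re-sorting their positions at every step while tracking the middle rank. All parameter values are arbitrary choices made for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps, n_runs = 1001, 2000, 100
dt, D = 1e-3, 1.0                      # time step and free diffusion coefficient

msd = np.zeros(n_steps)
for _ in range(n_runs):
    # Fixed average density: particles spread uniformly over a long segment.
    x = np.sort(rng.uniform(-50.0, 50.0, n_particles))
    x_start = x[n_particles // 2]      # tagged particle = middle rank
    for t in range(n_steps):
        # Free Brownian steps followed by re-sorting (hard-core point particles).
        x = np.sort(x + rng.normal(0.0, np.sqrt(2 * D * dt), n_particles))
        msd[t] += (x[n_particles // 2] - x_start) ** 2
msd /= n_runs

t = dt * np.arange(1, n_steps + 1)
slope = np.polyfit(np.log(t[n_steps // 2:]), np.log(msd[n_steps // 2:]), 1)[0]
print(f"late-time MSD exponent ≈ {slope:.2f} (expected ≈ 0.5; a free particle gives 1)")
```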
Mathematical formulation
Simple files
In simple Brownian files, , the joint probability density function (PDF) for all the particles in file, obeys a normal diffusion equation:
In , is the set of particles' positions at time and is the set of the particles' initial positions at the initial time (set to zero). Equation (1) is solved with the appropriate boundary conditions, which reflect the hard-sphere nature of the file:
and with the appropriate initial condition:
In a simple file, the initial density is fixed, namely,, where is a parameter that represents a microscopic length. The PDFs' coordinates must obey the order: .
Heterogeneous files
In such files, the equation of motion follows,
with the boundary conditions:
and with the initial condition, Eq. (), where the particles’ initial positions obey:
The file diffusion coefficients are taken independently from the PDF,
where Λ has a finite value that represents the fastest diffusion coefficient in the file.
Renewal, anomalous, heterogeneous files
In renewal-anomalous files, a random period is taken independently from a waiting time probability density function (WT-PDF; see Continuous-time Markov process for more information) of the form: , where k is a parameter. Then, all the particles in the file stand still for this random period, where afterwards, all the particles attempt jumping in accordance with the rules of the file. This procedure is carried on over and over again. The equation of motion for the particles’ PDF in a renewal-anomalous file is obtained when convoluting the equation of motion for a Brownian file with a kernel :
Here, the kernel and the WT-PDF are related in Laplace space, . (The Laplace transform of a function reads, .) The reflecting boundary conditions accompanied Eq. () are obtained when convoluting the boundary conditions of a Brownian file with the kernel , where here and in a Brownian file the initial conditions are identical.
Anomalous files with independent particles
When each particle in the anomalous file is assigned with its own jumping time drawn form ( is the same for all the particles), the anomalous file is not a renewal file. The basic dynamical cycle in such a file consists of the following steps: a particle with the fastest jumping time in the file, say, for particle i, attempts a jump. Then, the waiting times for all the other particles are adjusted: we subtract from each of them. Finally, a new waiting time is drawn for particle i. The most crucial difference among renewal anomalous files and anomalous files that are not renewal is that when each particle has its own clock, the particles are in fact connected also in the time domain, and the outcome is further slowness in the system (proved in the main text). The equation of motion for the PDF in anomalous files of independent particles reads:
Note that the time argument in the PDF is a vector of times: , and . Adding all the coordinates and performing the integration in the order of faster times first (the order is determined randomly from a uniform distribution in the space of configurations) gives the full equation of motion in anomalous files of independent particles (averaging of the equation over all configurations is therefore further required). Indeed, even Eq. () is very complicated, and averaging further complicates things.
Mathematical analysis
Simple files
The solution of Eqs. ()-() is a complete set of permutations of all initial coordinates appearing in the Gaussians,
Here, the index goes on all the permutations of the initial coordinates, and contains permutations. From Eq. (), the PDF of a tagged particle in the file, , is calculated
In Eq. (), , ( is the initial condition of the tagged particle), and . The MSD for the tagged particle is obtained directly from Eq. ():
Heterogeneous files
The solution of Eqs. ()-() is approximated with the expression,
Starting from Eq. (), the PDF of the tagged particle in the heterogeneous file follows,
The MSD of a tagged particle in a heterogeneous file is taken from Eq. ():
Renewal anomalous heterogeneous files
The results of renewal-anomalous files are simply derived from the results of Brownian files. Firstly, the PDF in Eq. () is written in terms of the PDF that solves the un-convoluted equation, that is, the Brownian file equation; this relation is made in Laplace space:
(The subscript nrml stands for normal dynamics.) From Eq. (), it is straightforward relating the MSD of Brownian heterogeneous files and renewal-anomalous heterogeneous files,
From Eq. (), one finds that the MSD of a file with normal dynamics in the power of is the MSD of the corresponding renewal-anomalous file,
Anomalous files with independent particles
The equation of motion for anomalous files with independent particles, (), is very complicated. Solutions for such files are reached while deriving scaling laws and with numerical simulations.
Scaling laws for anomalous files of independent particles
Firstly, we write down the scaling law for the mean absolute displacement (MAD) in a renewal file with a constant density:
Here, is the number of particles in the covered-length , and is the MAD of a free anomalous particle, . In Eq. (), enters the calculations since all the particles within the distance from the tagged one must move in the same direction in order that the tagged particle will reach a distance from its initial position. Based on Eq. (), we write a generalized scaling law for anomalous files of independent particles:
The first term on the right hand side of Eq. () appears also in renewal files; yet, the term f(n) is unique. f(n) is the probability that accounts for the fact that for moving n anomalous independent particles in the same direction, when these particles indeed try jumping in the same direction (expressed with the term, (), the particles in the periphery must move first so that the particles in the middle of the file will have the free space for moving, demanding faster jumping times for those in the periphery. f(n) appears since there is not a typical timescale for a jump in anomalous files, and the particles are independent, and so a particular particle can stand still for a very long time, substantially limiting the options of progress for the particles around him, during this time. Clearly,, where f(n) = 1 for renewal files since the particles jump together, yet also in files of independent particles with , since in such files there is a typical timescale for a jump, considered the time for a synchronized jump. We calculate f(n) from the number of configurations in which the order of the particles’ jumping times enables motion; that is, an order where the faster particles are always located towards the periphery. For n particles, there are n! different configurations, where one configuration is the optimal one; so, . Yet, although not optimal, propagation is also possible in many other configurations; when m is the number of particles that move, then,
where counts the number of configurations in which those m particles around the tagged one have the optimal jumping order. Now, even when m~n/2, . Using in Eq. (), ( a small number larger than 1), we see,
(In Eq. (), we use, .) Equation () shows that asymptotically the particles are extremely slow in anomalous files of independent particles.
Numerical studies of anomalous files of independent particles
With numerical studies, one sees that anomalous files of independent particles form clusters. This phenomenon defines a dynamical phase transition. At steady state, the percentage of particles in cluster, , follows,
In Figure 1 we show trajectories of 9 particles in a file of 501 particles, for two values of the anomaly exponent α. The upper panels show trajectories for one value of α and the lower panels show trajectories for the other. For each value of α, trajectories are shown in the early stages of the simulations (left) and in all stages of the simulation (right). The panels exhibit the phenomenon of clustering, where the trajectories attract each other and then move largely together.
See also
Brownian motion
Langevin dynamics
System dynamics
References
Diffusion
Statistical mechanics
Stochastic processes | File dynamics | [
"Physics",
"Chemistry"
] | 2,280 | [
"Transport phenomena",
"Physical phenomena",
"Statistical mechanics",
"Diffusion"
] |
31,781,064 | https://en.wikipedia.org/wiki/Spin%20engineering | Spin engineering describes the control and manipulation of quantum spin systems to develop devices and materials. This includes the use of the spin degrees of freedom as a probe for spin based phenomena.
Because of the basic importance of quantum spin for physical and chemical processes, spin engineering is relevant for a wide range of scientific and technological applications. Current examples range from Bose–Einstein condensation to spin-based data storage and reading in state-of-the-art hard disk drives, as well as from powerful analytical tools like nuclear magnetic resonance spectroscopy and electron paramagnetic resonance spectroscopy to the development of magnetic molecules as qubits and magnetic nanoparticles. In addition, spin engineering exploits the functionality of spin to design materials with novel properties as well as to provide a better understanding and advanced applications of conventional material systems. Many chemical reactions are devised to create bulk materials or single molecules with well defined spin properties, such as a single-molecule magnet.
The aim of this article is to provide an outline of fields of research and development where the focus is on the properties and applications of quantum spin.
Introduction
As spin is one of the fundamental quantum properties of elementary particles it is relevant for a large range of physical and chemical phenomena. For instance, the spin of the electron plays a key role in the electron configuration of atoms which is the basis of the periodic table of elements. The origin of ferromagnetism is also closely related to the magnetic moment associated with the spin and the spin-dependent Pauli exclusion principle. Thus, the engineering of ferromagnetic materials like mu-metals or Alnico at the beginning of the last century can be considered as early examples of spin engineering, although the concept of spin was not yet known at that time. Spin engineering in its generic sense became possible only after the first experimental characterization of spin in the Stern–Gerlach experiment in 1922 followed by the development of relativistic quantum mechanics by Paul Dirac. This theory was the first to accommodate the spin of the electron and its magnetic moment.
Whereas the physics of spin engineering dates back to the groundbreaking findings of quantum chemistry and physics within the first decades of the 20th century, the chemical aspects of spin engineering have received attention especially within the last twenty years. Today, researchers focus on specialized topics, such as the design and synthesis of molecular magnets or other model systems in order to understand and harness the fundamental principles behind phenomena such as the relation between magnetism and chemical reactivity as well as microstructure related mechanical properties of metals and the biochemical impact of spin (e. g. photoreceptor proteins) and spin transport.
Research fields of spin engineering
Spintronics
Spintronics is the exploitation of both the intrinsic spin of the electron and its fundamental electronic charge in solid-state devices and is thus a part of spin engineering. Spintronics is probably one of the most advanced fields of spin engineering with many important inventions which can be found in end-user devices like the reading heads for magnetic hard disk drives. This section is divided in basic spintronic phenomena and their applications.
Basic spintronic phenomena
Giant magnetoresistance (GMR), Tunnel magnetoresistance (TMR), Spin valve
Spin transfer torque (STT)
Spin injection
Pure spin currents
Spin pumping
Spin waves, magnonics
(inverse) Spin Hall effect
Spin calorics, Spin Seebeck effect
Applications of spintronics
This section is devoted to current and possible future applications of spintronics which make use of one or a combination of several of the basic spintronic phenomena:
Hard disk drive read heads
Magnetoresistive random-access memory (MRAM)
Racetrack memory
Spin transistor
Spin quantum computing
Magnon-based spintronics
Spin materials
Materials whose properties are determined or strongly influenced by quantum spin:
Magnetic alloys, i.e. Heusler compounds
Graphene systems
Organic spin materials
Molecular nanomagnets
Magnetic molecules
Organic radicals
Metamaterials with artificial magnetism
Spin based detection
Methods to characterize materials and physical or chemical processes via spin-based phenomena:
Magneto-optic Kerr effect (MOKE)
Nuclear magnetic resonance (NMR)
Neutron scattering
Spin polarized photoemission
Brillouin Light Scattering (BLS)
X-ray magnetic circular dichroism (XMCD)
References
External links
Albert Fert (Nobel Prize in Physics (2007)), "The origin, the development and the future of spintronics", Nobel Lecture as pdf at nobelprize.org
Peter Grünberg (Nobel Prize in Physics (2007)), "From spinwaves to Giant Magnetoresistance (GMR) and beyond", Nobel Lecture as pdf at nobelprize.org
Scientific background of the discovery of the Giant Magnetoresistance, compiled by the Class for Physics of the Royal Swedish Academy of Sciences
Animations of GMR Sensors at the IBM Research Homepage
Albert Fert (Nobel Prize in Physics (2007)) video answer of the question: "What is spin?"
Creation of a pure spin current in Graphene, article from Physorg.com
Quantum chemistry
Materials science
Applied and interdisciplinary physics
Spintronics
Organic chemistry
Inorganic chemistry | Spin engineering | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,048 | [
"Applied and interdisciplinary physics",
"Quantum chemistry",
"Spintronics",
"Materials science",
"Quantum mechanics",
"Theoretical chemistry",
"Condensed matter physics",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
37,395,056 | https://en.wikipedia.org/wiki/American%20Society%20of%20Professional%20Estimators | ASPE is the American Society of Professional Estimators. It was founded in 1956 by about 20 cost estimators in Los Angeles, California. In 1974, there were 10 chapters totalling 600 members. By 1977, there were 23 chapters totalling 1500 members. The society's web page states that there are thousands of members in 65 chapters across the US. The society does not publish the exact number of members, but an estimate from observation in 2012 is approximately 3000. A published goal for 2012 was to have 3500 members.
ASPE is the publisher of Estimating Today, a periodical technical journal. They also publish an estimating manual named Standard Estimating Practice.
ASPE offers a professional certification for building estimators named Certified Professional Estimator, or CPE. This program began in 1976, when 233 CPEs were awarded in 11 of the then 16 divisions of building construction defined by the Construction Specifications Institute in their MasterFormat system.
References
External links
ASPE American Society of Professional Estimators
Construction organizations
Cost engineering
1956 establishments in California | American Society of Professional Estimators | [
"Engineering"
] | 215 | [
"Cost engineering",
"Construction organizations",
"Construction"
] |
37,395,415 | https://en.wikipedia.org/wiki/Peniche%20%28fluid%20dynamics%29 | A peniche (or stand-off) is material inserted between a half-model, often of an airplane, and the wall of a wind tunnel. Péniche is a French nautical term meaning barge. The purpose of the peniche is to remove or reduce the influence of the boundary layer on the half-model. The effect of the peniche itself in fluid dynamics is not fully understood.
Half-models are used in wind-tunnel testing in aerodynamics, as larger scale half-models in constant pressure tunnels operate at increased Reynolds numbers closer to those of real aircraft. One trade-off is the interaction between the central part of the half-model and the wall boundary layer. Inserting a peniche between the centre line of the half-model and the wall of the wind tunnel attempts to eliminate or reduce that boundary layer effect by creating distance between the model and the wall. Varying widths and shapes of peniches have been used; a peniche that follows the longitudinal cross section contour of the half-model is the simplest.
The peniche itself affects the fluid dynamics around the half-model. It increases the local angle of attack on an inboard wing, while having no influence on an outboard wing. The blocking of the peniche in the flow field leads to further displacement of the flow, which in turn leads to higher flow speeds and local angles of attack. How strong of an effect the peniche has is a function of the angle of attack, with the effect present at all angles.
References
Fluid dynamics | Peniche (fluid dynamics) | [
"Chemistry",
"Engineering"
] | 310 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
37,397,029 | https://en.wikipedia.org/wiki/Point%E2%80%93line%E2%80%93plane%20postulate | In geometry, the point–line–plane postulate is a collection of assumptions (axioms) that can be used in a set of postulates for Euclidean geometry in two (plane geometry), three (solid geometry) or more dimensions.
Assumptions
The following are the assumptions of the point-line-plane postulate:
Unique line assumption. There is exactly one line passing through two distinct points.
Number line assumption. Every line is a set of points which can be put into a one-to-one correspondence with the real numbers. Any point can correspond with 0 (zero) and any other point can correspond with 1 (one).
Dimension assumption. Given a line in a plane, there exists at least one point in the plane that is not on the line. Given a plane in space, there exists at least one point in space that is not in the plane.
Flat plane assumption. If two points lie in a plane, the line containing them lies in the plane.
Unique plane assumption. Through three non-collinear points, there is exactly one plane.
Intersecting planes assumption. If two different planes have a point in common, then their intersection is a line.
The first three assumptions of the postulate, as given above, are used in the axiomatic formulation of the Euclidean plane in the secondary school geometry curriculum of the University of Chicago School Mathematics Project (UCSMP).
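The assumptions are easy to illustrate (though of course not prove) in the familiar coordinate model of space. The following sketch checks instances of the unique line, unique plane and intersecting planes assumptions in ℝ³; all specific points and planes are arbitrary examples chosen for the illustration:

```python
import numpy as np

P, Q = np.array([1.0, 2.0, 0.0]), np.array([4.0, -1.0, 2.0])

# Unique line / number line assumptions: the line through two distinct points
# P and Q is parameterized by L(t) = P + t(Q - P), giving a one-to-one
# correspondence with the real numbers (P ↔ 0, Q ↔ 1).
line = lambda t: P + t * (Q - P)
assert np.allclose(line(0.0), P) and np.allclose(line(1.0), Q)

# Unique plane assumption: three non-collinear points determine one plane,
# encoded here by a normal vector n and offset d (points x with n · x = d).
A, B, C = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
n = np.cross(B - A, C - A)               # nonzero because A, B, C are non-collinear
d = float(n @ A)
assert not np.allclose(n, 0.0)

# Intersecting planes assumption: two non-parallel planes meet in a line whose
# direction is the cross product of their normals.
n2, d2 = np.array([1.0, 0.0, 0.0]), 0.5   # the plane x = 0.5
direction = np.cross(n, n2)
assert not np.allclose(direction, 0.0)    # the planes are not parallel
point = np.array([0.5, 0.0, 0.0])         # one point lying on both planes
assert np.isclose(n @ point, d) and np.isclose(n2 @ point, d2)
print("all coordinate-model checks passed")
```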
History
The axiomatic foundation of Euclidean geometry can be dated back to the books known as Euclid's Elements (circa 300 B.C.). These five initial axioms (called postulates by the ancient Greeks) are not sufficient to establish Euclidean geometry. Many mathematicians have produced complete sets of axioms which do establish Euclidean geometry. One of the most notable of these is due to Hilbert who created a system in the same style as Euclid. Unfortunately, Hilbert's system requires 21 axioms. Other systems have used fewer (but different) axioms. The most appealing of these, from the viewpoint of having the fewest axioms, is due to G.D. Birkhoff (1932) which has only four axioms. These four are: the Unique line assumption (which was called the Point-Line Postulate by Birkhoff), the Number line assumption, the Protractor postulate (to permit the measurement of angles) and an axiom that is equivalent to Playfair's axiom (or the parallel postulate). For pedagogical reasons, a short list of axioms is not desirable and starting with the New math curricula of the 1960s, the number of axioms found in high school level textbooks has increased to levels that even exceed Hilbert's system.
References
External links
The Point-Line-Plane Postulate as described in the Oracle Education Foundation's "ThinkQuest" online description of basic geometry postulates and theorems.
The Point-Line-Plane Postulate as described in Professor Calkins' (Andrews University) online listing of basic geometry concepts.
Foundations of geometry | Point–line–plane postulate | [
"Mathematics"
] | 619 | [
"Foundations of geometry",
"Mathematical axioms"
] |
37,400,498 | https://en.wikipedia.org/wiki/Iliopectineal%20bursa | The iliopectineal bursa or the iliopsoas bursa is a large synovial bursa that separates the external surface of the hip joint capsule from the tendon of the iliopsoas muscle.
The most proximal part of the iliopectineal bursa lies on the iliopubic eminence of the superior pubic ramus.
The iliopectineal bursa frequently communicates by a circular aperture with the cavity of the hip joint.
In 13% of all cases the iliopectineal bursa is partly separated by a septum into two cavities. Here the tendon of the psoas major muscle passes over the medial chamber and the tendon of the iliacus muscle runs over the lateral chamber.
Inflammation of the iliopectineal bursa is called iliopectineal bursitis or iliopsoas bursitis.
References
Further reading
Synovial bursae
Lower limb anatomy
Soft tissue
Musculoskeletal system | Iliopectineal bursa | [
"Biology"
] | 258 | [
"Organ systems",
"Musculoskeletal system"
] |
37,401,799 | https://en.wikipedia.org/wiki/Hiroomi%20Umezawa | (September 20, 1924 – March 24, 1995) was a physicist and Distinguished Professor in the Department of Physics at the University of Wisconsin–Milwaukee and later at the University of Alberta. He is known for his fundamental contributions to quantum field theory and for his work on quantum phenomena in relation to the mind.
Education, career and work
Umezawa obtained his PhD from Nagoya University, Japan in 1952. He worked at the University of Tokyo, Japan, and the University of Naples, Italy, and took up a position as professor at the University of Wisconsin–Milwaukee (UWM) in 1966, already considered a famous physicist at the time. He joined the University of Alberta, Canada, in 1975 when he took the Killam Memorial Chair as Professor of Physics, a position which he held until his retirement in 1992.
Umezawa is recognized as one of the eminent quantum field theorists of his generation. He applied his results in quantum field theory also to high energy physics, condensed matter physics, nuclear physics and statistical physics, as well as his considerations of quantum theory and the mind.
In 1967, together with L.M. Ricciardi, he proposed a quantum theory of the brain which posits a spatially distributed charge formation exhibiting spontaneous breakdowns at micro levels as the basis for processing at macro levels. In this model, the information resides in the virtual field associated with the dynamics of the cellular matter. This model was subsequently expanded by Stuart, Takahashi and Umezawa with their proposal of the development of long range correlations among neurons due to the interaction of two quantum fields. The approach was built upon by many others, including Karl H. Pribram, and was later expanded by Giuseppe Vitiello to a dissipative quantum model of brain.
Umezawa's scientific work has been characterized by his colleagues at UWM as "marked by extreme originality".
Memorial fund
After his death in 1995, Umezawa's family, friends and students set up the Umezawa Fund at the University of Alberta in his memory, dedicated to support studies in physics; among the Memorial Distinguished Visitors has been physicist Gordon Baym of the University of Illinois in 2007.
Publications
Books on quantum theory
H Umezawa; A Arimitsu, et al. (eds.): Selected papers of Hiroomi Umezawa, Tokyo, Japan (Published by Editorial Committee for Selected Papers of Hiroomi Umezawa), 2001
Hiroomi Umezawa: Advanced Field Theory: Micro, Macro, and Thermal Physics, American Institute of Physics, 1993,
Hiroomi Umezawa, Giuseppe Vitiello: Quantum Mechanics, Bibliopolis, 1985
Hiroomi Umezawa, H. Matsumoto, Masashi Tachiki: Thermo field dynamics and condensed states, North-Holland Publishing Company, 1982,
Hiroomi Umezawa: Quantum Field Theory, North-Holland Publishing Co., 1956
Articles on the quantum theory of mind:
C.I.J. Stuart, Y. Takahashi, H. Umezawa (1979): Mixed system brain dynamics: neural memory as a macroscopic ordered state, Foundations of Physics, vol. 9, pp. 301–307
C.I.J. Stuart, Y. Takahashi, H. Umezawa (1978): On the stability and non-local properties of memory, Journal of Theoretical Biology, vol. 31, pp. 605–618
L.M. Ricciardi, H. Umezawa (1967): Brain physics and many-body problems, Kybernetik, vol. 4, pp. 44–48
References
Further reading
Ferdinando Mancini (ed.): Quantum field theory: proceedings of the international symposium in honour of Hiroomi Umezawa, held in Positano, Salerno, Italy, June 5–7, 1985
External links
Hiroomi Umezawa at the Mathematics Genealogy Project
Hiroomi Umezawa, article search at the Falvey Memorial Library, Villanova University
1924 births
1995 deaths
Nagoya University alumni
Academic staff of the University of Alberta
University of Wisconsin–Milwaukee faculty
Theoretical physicists
20th-century Japanese physicists
Fellows of the American Physical Society
University of Alberta alumni | Hiroomi Umezawa | [
"Physics"
] | 863 | [
"Theoretical physics",
"Theoretical physicists"
] |
37,401,922 | https://en.wikipedia.org/wiki/C22H26O8 | {{DISPLAYTITLE:C22H26O8}}
The molecular formula C22H26O8 may refer to:
Sekikaic acid
Syringaresinol | C22H26O8 | [
"Chemistry"
] | 39 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
38,838,646 | https://en.wikipedia.org/wiki/Non-autonomous%20system%20%28mathematics%29 | In mathematics, an autonomous system is a dynamic equation on a smooth manifold. A non-autonomous system is a dynamic equation on a smooth fiber bundle over . For instance, this is the case of non-autonomous mechanics.
An r-order differential equation on a fiber bundle Y → ℝ is represented by a closed subbundle of the r-order jet bundle J^rY of Y. A dynamic equation on Y → ℝ is a differential equation which is algebraically solved for the higher-order derivatives.
In particular, a first-order dynamic equation on a fiber bundle Y → ℝ is the kernel of the covariant differential of some connection Γ on Y → ℝ. Given bundle coordinates (t, yⁱ) on Y and the adapted coordinates (t, yⁱ, yⁱ_t) on the first-order jet manifold J¹Y, a first-order dynamic equation reads
yⁱ_t = Γⁱ(t, yʲ).
For instance, this is the case of Hamiltonian non-autonomous mechanics.
A second-order dynamic equation
yⁱ_{tt} = ξⁱ(t, yʲ, yʲ_t)
on Y → ℝ is defined as a holonomic connection ξ on the jet bundle J¹Y → ℝ. This equation also is represented by a connection on the affine jet bundle J¹Y → Y. Due to the canonical embedding of J¹Y into the tangent bundle TY, it is equivalent to a geodesic equation on the tangent bundle TY of Y. A free motion equation in non-autonomous mechanics exemplifies a second-order non-autonomous dynamic equation.
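As a concrete illustration (a textbook-style example chosen here, not one taken from the references), a periodically driven oscillator on the trivial bundle Y = ℝ × ℝ → ℝ with coordinates (t, q) can be written as a second-order non-autonomous dynamic equation, and equivalently as a first-order system in the jet coordinates:

```latex
% Driven oscillator as a non-autonomous dynamic equation on Y = \mathbb{R}\times\mathbb{R}\to\mathbb{R}
% (illustrative example; \omega, \Omega and f are assumed constants):
q_{tt} \;=\; \xi(t, q, q_t) \;=\; -\omega^{2} q + f\cos(\Omega t),
\qquad\text{equivalently}\qquad
q_t = v, \quad v_t = -\omega^{2} q + f\cos(\Omega t).
```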
See also
Autonomous system (mathematics)
Non-autonomous mechanics
Free motion equation
Relativistic system (mathematics)
References
De Leon, M., Rodrigues, P., Methods of Differential Geometry in Analytical Mechanics (North Holland, 1989).
Giachetta, G., Mangiarotti, L., Sardanashvily, G., Geometric Formulation of Classical and Quantum Mechanics (World Scientific, 2010) ().
Differential equations
Classical mechanics
Dynamical systems | Non-autonomous system (mathematics) | [
"Physics",
"Mathematics"
] | 333 | [
"Mathematical objects",
"Classical mechanics",
"Equations",
"Differential equations",
"Mechanics",
"Dynamical systems"
] |
38,840,063 | https://en.wikipedia.org/wiki/Cluster%20Innovation%20Centre | The Cluster Innovation Centre (DU CIC) is a Government of India funded centre established under the aegis of the University of Delhi. It was founded in 2011 by Prof. Dinesh Singh, the then Vice Chancellor of the University of Delhi, and introduced Innovation as a credit-based course for the first time in India.
Establishment
The Cluster Innovation Centre was conceptualized to foster innovation and connect academic research with practical applications. It was established during the tenure of Prof. Dinesh Singh, then Vice Chancellor of the University of Delhi. The National Innovation Council proposed the development of 20 University Innovation Centres across the country, with CIC serving as the prototype for this initiative.
Objectives
CIC aims to develop a culture of innovation within the academic system and to connect research with societal needs. Its primary objectives include promoting innovative degree programs, educating students and faculty through innovation-focused schemes, supporting application-oriented research, and facilitating collaborations with industries, academia, and other stakeholders. It also focuses on commercializing innovations to make them accessible to end users, addressing real-world problems through student projects, and developing affordable and sustainable innovations that benefit a broad audience.
Academic Programs
CIC offers interdisciplinary academic programs spanning undergraduate, postgraduate, and doctoral levels.
Undergraduate Programs
The Bachelor of Technology (B.Tech.) in Information Technology and Mathematical Innovations is a four-year program that integrates mathematics and information technology to cultivate an innovation-driven mindset. Students in this program can earn a minor degree in fields such as electronics, management, or computational biology.
The Bachelor of Arts (Honours) in Humanities and Social Sciences, offered under the "Meta College" concept, is another four-year interdisciplinary program. It enables students to design their own degree by majoring in fields like environmental science, tourism, geography, literature, media and communication studies, natural sciences, or psychology. This program encourages projects addressing societal issues, often in collaboration with organizations like Delhi Jal Board, Delhi Police, the National Association of the Deaf, and other non-governmental organizations.
Postgraduate Program
The Master of Science (M.Sc.) in Mathematics Education is a two-year program jointly offered by the University of Delhi and Jamia Millia Islamia. This program aims to innovate mathematics pedagogy by integrating interdisciplinary methods, such as storytelling, projects, and participatory learning, with a focus on using technology to enhance education.
Doctoral Research
CIC offers Ph.D. programs under the Faculty of Technology with an interdisciplinary approach. Research areas include mathematics, information technology, computer science, physics, computational biology, nanotechnology, environmental sciences, humanities and social sciences, media and communication studies, and psychology.
Admissions
Before 2022, Delhi University conducted a Delhi University Entrance Test (DUET) for admission into Cluster Innovation Centre.
At present, admissions to CIC’s programs are conducted through the Common University Entrance Test (CUET) for undergraduate and postgraduate courses. Ph.D. admissions are based on scores in the National Eligibility Test (NET) conducted by the National Testing Agency (NTA).
Fee structure
The fee structure for CIC’s programs varies. The B.Tech program costs ₹40,000 per year, while the BA (Hons.) and M.Sc. programs each cost ₹22,375 per year. The Ph.D. program has an annual fee of ₹4,450.
Campus
The Cluster Innovation Centre is currently located inside the University Stadium building of the University of Delhi. This stadium is located inside the Delhi University Sports Complex, which in turn adjoins the Viceregal Lodge (Vice Chancellor's Office) and the Faculty of Science on the University's North Campus. The campus area of this block is 67 acres (27 ha). The campus is connected to the Delhi Metro through the Vishwavidyalaya metro station on the Yellow Line.
Design Innovation Centre (DIC)
CIC administers a Design Innovation Centre (DIC), established under the Ministry of MSME. DIC operates within the framework proposed by the National Innovation Council, which envisioned the establishment of similar design innovation centres in prominent educational institutions like IIT Delhi, IIT Bombay, IIT Guwahati, and IISc Bangalore. These centres operate on a "Hub and Spoke" model, promoting collaboration among academia, industry, and society to develop impactful innovations. The centre includes facilities such as a Technology Business Incubator, which provides lab access, workshops, and mentorship to startups in a PPP model.
References
Notes
Citations
External links
DU launches CIC on MSN News
Office of the Advisor to Prime Minister on Public Information Infrastructure & Innovations
Prime Minister's speech on CIC at Press Information Bureau
CSIR Linkage on India EduNews
DRDO, Ministry of Defence and CIC MOU
National Innovation Council members
University Innovation Cluster
India to set up Cluster Innovation Centres - SIID
Concept Note on Establishment of Design Innovation Centres - HRD Ministry
Delhi University
Innovation in India
Engineering colleges in Delhi
Engineering universities and colleges
Engineering universities and colleges in India
Educational institutions established in 2011
Education in Delhi | Cluster Innovation Centre | [
"Engineering"
] | 1,108 | [
"Engineering universities and colleges"
] |
38,840,645 | https://en.wikipedia.org/wiki/Paramagnetic%20nuclear%20magnetic%20resonance%20spectroscopy | Paramagnetic nuclear magnetic resonance spectroscopy refers to nuclear magnetic resonance (NMR) spectroscopy of paramagnetic compounds. Although most NMR measurements are conducted on diamagnetic compounds, paramagnetic samples are also amenable to analysis and give rise to special effects indicated by a wide chemical shift range and broadened signals. Paramagnetism diminishes the resolution of an NMR spectrum to the extent that coupling is rarely resolved. Nonetheless spectra of paramagnetic compounds provide insight into the bonding and structure of the sample. For example, the broadening of signals is compensated in part by the wide chemical shift range (often 200 ppm in 1H NMR). Since paramagnetism leads to shorter relaxation times (T1), the rate of spectral acquisition can be high.
Chemical shifts in diamagnetic compounds are described using the Ramsey equation, which describes so-called diamagnetic and paramagnetic contributions. In this equation, paramagnetic refers to excited state contributions, not to contributions from truly paramagnetic species.
Hyperfine shift
The difference between the chemical shift of a given nucleus in a diamagnetic vs. a paramagnetic environment is called the hyperfine shift. In solution the isotropic hyperfine chemical shift for nickelocene is −255 ppm, which is the difference between the observed shift (ca. −260 ppm) and the shift observed for the diamagnetic analogue ferrocene (ca. 5 ppm). The hyperfine shift contains contributions from the pseudocontact (also called dipolar) and contact (also called scalar) terms. The isotropic hyperfine shift can be small or even close to zero for nuclei far away from the paramagnetic center, or in the range of several hundreds of ppm for nuclei in close proximity. Directly bound nuclei have hyperfine shifts of thousands of ppm but are usually not observable due to extremely fast relaxation and line broadening.
Contact vs. pseudocontact shifts
Hyperfine shifts result from two mechanisms, contact shifts and pseudocontact shifts. Both effects operate simultaneously but one or the other term can be dominant. Contact shifts result from spin delocalization through molecular orbitals of the molecule and from spin polarization. Pseudocontact shifts result from magnetic anisotropy of the paramagnetic molecule. Pseudocontact shifts follow a 1/r3 dependence and an angular dependence. They are large for many lanthanide complexes due to their strong magnetic anisotropy. NMR shift reagents such as EuFOD can interact in fast exchange with Lewis-basic organic compounds (such as alcohols) and are therefore able to shift the NMR signals of the diamagnetic compound, depending on its concentration and spatial distance.
The effect of the contact term arises from transfer of unpaired spin density to the observed nucleus. This coupling, also known by EPR spectroscopists as hyperfine coupling, is in the order of MHz, as compared with the usual internuclear (J) coupling observed in conventional NMR spectra, which are in the order of a few Hz. This difference reflects the large magnetic moment of an electron (−1.00 μB), which is much greater than any nuclear magnetic moment (e.g. for 1H: 1.52×10−3 μB). Owing to rapid spin relaxation, the electron-nuclear coupling is not observed in the NMR spectrum, so the affected nuclear resonance appears at the average of the two coupled energy states, weighted according to their spin populations. Given the magnitude of the coupling, the Boltzmann distribution of these spin states is not close to 1:1, leading to net spin polarization on the affected NMR nucleus, hence relatively large contact shifts.
The effect of the pseudocontact term arises from magnetic anisotropy of the paramagnetic center (reflected in g-anisotropy in the EPR spectrum). This anisotropy creates a magnetic field which supplements that of the instrument's magnet. The magnetic field exerts its effect with both angular and a 1/r3 geometric dependences.
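A minimal sketch of the commonly used axial/rhombic pseudocontact-shift formula, showing the 1/r3 and angular dependences described above; the distance, angles and susceptibility anisotropies below are illustrative assumptions, not measured values:

```python
import numpy as np

def pseudocontact_shift_ppm(r_m, theta, phi, dchi_ax, dchi_rh):
    """Pseudocontact (dipolar) shift in ppm for a nucleus at distance r_m (metres)
    from the paramagnetic centre, with polar angles theta/phi relative to the
    susceptibility tensor axes and axial/rhombic anisotropies in m^3."""
    geom = (dchi_ax * (3 * np.cos(theta) ** 2 - 1)
            + 1.5 * dchi_rh * np.sin(theta) ** 2 * np.cos(2 * phi))
    return 1.0e6 * geom / (12 * np.pi * r_m ** 3)

# Illustrative numbers: a proton 5 angstroms away, on the tensor z-axis,
# with a lanthanide-like axial anisotropy of 3e-32 m^3.
print(pseudocontact_shift_ppm(5e-10, 0.0, 0.0, 3e-32, 0.0))  # roughly +13 ppm
```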
See also
Electron paramagnetic resonance – a related technique for studying paramagnetic materials
References
Nuclear magnetic resonance spectroscopy | Paramagnetic nuclear magnetic resonance spectroscopy | [
"Physics",
"Chemistry"
] | 900 | [
"Nuclear magnetic resonance",
"Spectroscopy",
"Spectrum (physical sciences)",
"Nuclear magnetic resonance spectroscopy"
] |
38,841,583 | https://en.wikipedia.org/wiki/Program%20on%20Energy%20Efficiency%20in%20Artisanal%20Brick%20Kilns%20in%20Latin%20America%20to%20Mitigate%20Climate%20Change | The Program on Energy Efficiency in Artisanal Brick Kilns in Latin America to Mitigate Climate Change (EELA) is a program of the Swiss Agency for Development and Cooperation (SDC) which is implemented by Swiss contact in conjunction with its partners in nine countries in Latin America. The objective is to mitigate climate change through the reduction of greenhouse gas emissions in Latin America and to improve the quality of life of the population in the areas of intervention.
The problem
Artisanal brick producers in Latin America use fuel with high environmental impact in kilns with low energy efficiency. Wood, tires and plastics, among other fuels, are used to fire bricks, contributing to air pollution and deforestation as well as exacerbating climate change. Despite their contribution to the construction industry and the generation of jobs, artisanal brick producers largely operate informally and are generally excluded from social, environmental and economic public policies.
The approach
The EELA program is focused on developing management models for artisanal brick producers, and includes activities that range from the adoption of more efficient production processes that require less fuel and emit less greenhouse gases, to the creation of new products that use less raw materials.
EELA activities
Innovation through the introduction of technologies that reduce greenhouse gas emissions and are financially viable for artisanal brick producers.
Working in coordination with public bodies to incorporate the artisanal brick production sector within the national climate change mitigation agenda.
Taking account of lessons learned within the countries, and promoting the exchange of experiences among them.
Areas of operation
EELA began its first phase working with 970 artisanal brick producers in seven areas located in San Juan (Argentina), Cochabamba (Bolivia), Serido (Brazil), Nemocon (Colombia), Cuenca (Ecuador), Leon (Mexico) and Cusco (Peru). In 2012, Nicaragua and Honduras joined the initiative. The experience gained in the pilot areas will serve as a basis to expand EELA’s intervention at a national level.
Aims
EELA hopes to reduce the GHG emissions of artisanal brick producers in the countries of operation by 30% and increase their income by 10%.
Artisanal brick producers in San Jeronimo, Cusco, Peru
In conjunction with the Ministry of Production, the Regional Directorate of Production and the Municipality of San Jeronimo, EELA works to promote good manufacturing practices to better employ heat as well as provide technical assistance in the following areas:
Mixers designed to optimize time and resources
Extrusion machinery to use less raw materials and diversify production
Fans to make kiln combustion more effective
EELA constructed a downdraft kiln that is more efficient than a traditional kiln and that is affordable for an artisanal brick producer. The kiln is being replicated by brick producers in San Jerónimo, Cuenca (Ecuador) and in Mexico.
Challenges
In a second phase, EELA will seek to improve the management models through the use of knowledge gained from its first phase.
Joint effort
EELA is supported by a group of partners that includes both public and private entities with experience in the brick sector, in each of the countries in which it operates.
See also
References
Brickworks
Brick manufacturers
Brick Kilns
Sustainable building | Program on Energy Efficiency in Artisanal Brick Kilns in Latin America to Mitigate Climate Change | [
"Chemistry",
"Engineering"
] | 662 | [
"Sustainable building",
"Emissions reduction",
"Building engineering",
"Construction",
"Greenhouse gases"
] |
29,329,394 | https://en.wikipedia.org/wiki/Spurion | In theoretical physics, a spurion is a fictitious, auxiliary field in a quantum field theory that can be used to parameterize any symmetry breaking and to determine all operators invariant under the symmetry.
The procedure begins with finding a parameter that measures the amount of symmetry breaking. This parameter is promoted to a field, i.e. to a function of the spacetime coordinates. With this new fictitious field, operators that are invariant under the symmetry may be found by the usual group-theoretical considerations.
The list of operators found in this way is complete as long as all sources of the breaking are included. The operators in the actual theory are ultimately found by setting the spurious field equal to the constant value of the parameter.
Applications
In the theory of pions, one often uses chiral perturbation theory. Here, the relevant symmetry is the isospin SU(2) symmetry. It is broken by the different masses of u and d quarks as well as by their different charges. The chiral Lagrangian may be extended to an exactly SU(2)-symmetric Lagrangian by promoting these parameters (mass and charge) to fields that break the symmetry spontaneously. Calculations of observables to higher orders may be done with the spurion fields. The final result, at any order of accuracy, is obtained by substituting the right masses and charges.
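As an illustration, in the two-flavour chiral Lagrangian (written here in a common textbook normalisation, which is an assumption rather than the article's own notation) the quark mass matrix is promoted to a spurion field χ, invariant operators are constructed, and the spurion is finally frozen to its physical, constant value:

```latex
% Spurion treatment of the quark mass term in the two-flavour chiral Lagrangian
% (a common textbook normalisation; f_pi and B_0 are the usual low-energy constants).
\begin{align}
  U \to L\,U\,R^\dagger, \qquad
  \chi \to L\,\chi\,R^\dagger
  &\qquad \text{(transformation law assigned to the spurion } \chi\text{)} \\
  \mathcal{L}_\text{mass}
    = \frac{f_\pi^2}{4}\,
      \operatorname{Tr}\!\left(\chi\,U^\dagger + U\,\chi^\dagger\right)
  &\qquad \text{(invariant operator built with the spurion)} \\
  \chi \;\to\; 2B_0 \begin{pmatrix} m_u & 0 \\ 0 & m_d \end{pmatrix}
  &\qquad \text{(spurion set to its constant physical value)}
\end{align}
```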
In the standard electroweak theory, the spurion is replaced by an actual field, the Higgs boson. However, in alternative theories of electroweak symmetry breaking, e.g. those based on Technicolor, the spurion techniques are important to derive the physical predictions.
Quantum field theory | Spurion | [
"Physics"
] | 344 | [
"Quantum field theory",
"Quantum mechanics",
"Quantum physics stubs"
] |
29,330,054 | https://en.wikipedia.org/wiki/Environmental%20error | An environmental error is an error in a measurement or calculation that arises from the environment in which the observation is made. Any experiment, wherever it is performed, has surroundings from which the system under study cannot be completely isolated. The primary benefit of studying environmental effects is that it allows the influence of the environment on an experiment to be recognised and accounted for; a favourable environment will not only correct the result but also improve it.
Causes
Environmental errors have many causes, and research continues to identify more of them. They include temperature, humidity, magnetic fields, the constant vibration of the Earth's surface, wind and improper lighting.
Minimizing
In high-precision laboratories, where the slightest disturbance can compromise the whole system, environmental error sources must be removed or at least minimized.
References
Measurement | Environmental error | [
"Physics",
"Mathematics"
] | 149 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
40,158,142 | https://en.wikipedia.org/wiki/Nonlinear%20system%20identification | System identification is a method of identifying or measuring the mathematical model of a system from measurements of the system inputs and outputs. The applications of system identification include any system where the inputs and outputs can be measured and include industrial processes, control systems, economic data, biology and the life sciences, medicine, social systems and many more.
A nonlinear system is defined as any system that is not linear, that is any system that does not satisfy the superposition principle. This negative definition tends to obscure that there are very many different types of nonlinear systems. Historically, system identification for nonlinear systems has developed by focusing on specific classes of system and can be broadly categorized into five basic approaches, each defined by a model class:
Volterra series models,
Block-structured models,
Neural network models,
NARMAX models, and
State-space models.
There are four steps to be followed in system identification: data gathering, model postulation, parameter identification, and model validation. Data gathering is the first and essential step; it supplies the data from which the model is subsequently built. It consists of selecting an appropriate data set, pre-processing and processing. It involves the implementation of the known algorithms together with the transcription of flight tapes, data storage and data management, calibration, processing, analysis, and presentation. Moreover, model validation is necessary to gain confidence in, or reject, a particular model. In particular, the parameter estimation and the model validation are integral parts of the system identification. Validation refers to the process of confirming the conceptual model and demonstrating an adequate correspondence between the computational results of the model and the actual data.
Volterra series methods
The early work was dominated by methods based on the Volterra series, which in the discrete time case can be expressed as
$$y(k) = \sum_{l=1}^{L}\;\sum_{m_1=0}^{M}\cdots\sum_{m_l=0}^{M} h_l(m_1,\dots,m_l)\,\prod_{i=1}^{l} u(k-m_i),$$
where u(k), y(k); k = 1, 2, 3, ... are the measured input and output respectively and $h_l(m_1,\dots,m_l)$ is the lth-order Volterra kernel, or lth-order nonlinear impulse response. The Volterra series is an extension of the linear convolution integral. Most of the earlier identification algorithms assumed that just the first two, linear and quadratic, Volterra kernels are present and used special inputs such as Gaussian white noise and correlation methods to identify the two Volterra kernels. In most of these methods the input has to be Gaussian and white which is a severe restriction for many real processes. These results were later extended to include the first three Volterra kernels, to allow different inputs, and other related developments including the Wiener series. A very important body of work was developed by Wiener, Lee, Bose and colleagues at MIT from the 1940s to the 1960s including the famous Lee and Schetzen method. While these methods are still actively studied today there are several basic restrictions. These include the necessity of knowing the number of Volterra series terms a priori, the use of special inputs, and the large number of estimates that have to be identified. For example, for a system where the first order Volterra kernel is described by say 30 samples, 30x30 points will be required for the second order kernel, 30x30x30 for the third order and so on and hence the amount of data required to provide good estimates becomes excessively large. These numbers can be reduced by exploiting certain symmetries but the requirements are still excessive irrespective of what algorithm is used for the identification.
Block-structured systems
Because of the problems of identifying Volterra models, other model forms were investigated as a basis for system identification for nonlinear systems. Various forms of block structured nonlinear models have been introduced or re-introduced. The Hammerstein model consists of a static single valued nonlinear element followed by a linear dynamic element. The Wiener model is the reverse of this combination so that the linear element occurs before the static nonlinear characteristic. The Wiener-Hammerstein model consists of a static nonlinear element sandwiched between two dynamic linear elements, and several other model forms are available. The Hammerstein-Wiener model consists of a linear dynamic block sandwiched between two static nonlinear blocks. The Urysohn model is different from other block models; it does not consist of a sequence of linear and nonlinear blocks, but describes both dynamic and static nonlinearities in the expression of the kernel of an operator. All these models can be represented by a Volterra series, but in each case the Volterra kernels take on a special form. Identification consists of correlation based and parameter estimation methods. The correlation methods exploit certain properties of these systems, which means that if specific inputs are used, often white Gaussian noise, the individual elements can be identified one at a time. This results in manageable data requirements and the individual blocks can sometimes be related to components in the system under study.
More recent results are based on parameter estimation and neural network based solutions. Many results have been introduced and these systems continue to be studied in depth. One problem is that these methods are only applicable to a very special form of model in each case and usually this model form has to be known prior to identification.
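A minimal sketch of block-structured identification, assuming a Hammerstein system (a polynomial static nonlinearity followed by a first-order linear filter) and using simple over-parameterised least squares; the model orders, the cubic basis and all coefficient values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate a Hammerstein system: static cubic nonlinearity, then a linear filter
N = 2000
u = rng.uniform(-1, 1, N)
w = u + 0.5 * u**2 - 0.3 * u**3              # static nonlinear block (unknown to the identifier)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.8 * y[k - 1] + 0.4 * w[k - 1]   # linear dynamic block
y += 0.01 * rng.standard_normal(N)           # measurement noise

# --- identification: regress y(k) on y(k-1) and a polynomial basis of u(k-1)
X = np.column_stack([y[:-1], u[:-1], u[:-1]**2, u[:-1]**3])
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print("estimated parameters:", theta)
# Expected roughly [0.8, 0.4, 0.2, -0.12]: the linear pole, and 0.4 times
# the coefficients of the static nonlinearity.
```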
Neural networks
Artificial neural networks try loosely to imitate the network of neurons in the brain where computation takes place through a large number of simple processing elements. A typical neural network consists of a number of simple processing units interconnected to form a complex network. Layers of such units are arranged so that data is entered at the input layer and passes through either one or several intermediate layers before reaching the output layer. In supervised learning the network is trained by operating on the difference between the actual output and the desired output of the network, the prediction error, to change the connection strengths between the nodes. By iterating, the weights are modified until the output error reaches an acceptable level. This process is called machine learning because the network adjusts the weights so that the output pattern is reproduced.
Neural networks have been extensively studied and there are many excellent textbooks devoted to this topic in general, and more focused textbooks which emphasise control and systems applications.
There are two main problem types that can be studied using neural networks: static problems, and dynamic problems. Static problems include pattern recognition, classification, and approximation. Dynamic problems involve lagged variables and are more appropriate for system identification and related applications. Depending on the architecture of the network the training problem can be either nonlinear-in-the-parameters which involves optimisation or linear-in-the-parameters which can be solved using classical approaches. The training algorithms can be categorised into supervised, unsupervised, or reinforcement learning. Neural networks have excellent approximation properties but these are usually based on standard function approximation results using for example the Weierstrass Theorem that applies equally well to polynomials, rational functions, and other well-known models.
Neural networks have been applied extensively to system identification problems which involve nonlinear and dynamic relationships. However, classical neural networks are purely gross static approximating machines. There is no dynamics within the network. Hence when fitting dynamic models all the dynamics arise by allocating lagged inputs and outputs to the input layer of the network. The training procedure then produces the best static approximation that relates the lagged variables assigned to the input nodes to the output. There are more complex network architectures, including recurrent networks, that produce dynamics by introducing increasing orders of lagged variables to the input nodes. But in these cases it is very easy to over specify the lags and this can lead to over fitting and poor generalisation properties.
Neural networks have several advantages; they are conceptually simple, easy to train and to use, have excellent approximation properties, the concept of local and parallel processing is important and this provides integrity and fault tolerant behaviour. The biggest criticism of the classical neural network models is that the models produced are completely opaque and usually cannot be written down or analysed. It is therefore very difficult to know what is causing what, to analyse the model, or to compute dynamic characteristics from the model. Some of these points will not be relevant to all applications but they are for dynamic modelling.
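A minimal sketch of the lagged-variable construction described above, assuming a tiny one-hidden-layer network trained by batch gradient descent; the data-generating system, lags and hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- data from an unknown nonlinear dynamic system
N = 3000
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.5 * y[k - 1] + np.tanh(u[k - 1]) + 0.3 * u[k - 1] * y[k - 1]

# --- assign lagged inputs and outputs to the network's input layer
X = np.column_stack([y[1:-1], u[1:-1]])       # regressors y(k-1), u(k-1)
t = y[2:]                                     # target y(k)

# --- one hidden layer of tanh units, linear output, trained by gradient descent
n_h = 10
W1 = 0.5 * rng.standard_normal((2, n_h)); b1 = np.zeros(n_h)
W2 = 0.5 * rng.standard_normal(n_h);      b2 = 0.0
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                  # hidden activations
    pred = H @ W2 + b2
    err = pred - t                            # prediction error
    # gradients of the (half) mean squared error
    gW2 = H.T @ err / len(t); gb2 = err.mean()
    dH = np.outer(err, W2) * (1 - H**2)
    gW1 = X.T @ dH / len(t);  gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
print("final mse:", float(np.mean(err**2)))
```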
NARMAX methods
The nonlinear autoregressive moving average model with exogenous inputs (NARMAX model) can represent a wide class of nonlinear systems, and is defined as
$$y(k) = F\big[\,y(k-1),\dots,y(k-n_y),\;u(k-d),\dots,u(k-d-n_u),\;e(k-1),\dots,e(k-n_e)\,\big] + e(k),$$
where y(k), u(k) and e(k) are the system output, input, and noise sequences respectively; $n_y$, $n_u$, and $n_e$ are the maximum lags for the system output, input and noise; F[•] is some nonlinear function, and d is a time delay typically set to d = 1. The model is essentially an expansion of past inputs, outputs and noise terms. Because the noise is modelled explicitly, unbiased estimates of the system model can be obtained in the presence of unobserved highly correlated and nonlinear noise.
The Volterra, the block structured models and many neural network architectures can all be considered as subsets of the NARMAX model. Since NARMAX was introduced, by proving what class of nonlinear systems can be represented by this model, many results and algorithms have been derived based around this description. Most of the early work was based on polynomial expansions of the NARMAX model. These are still the most popular methods today but other more complex forms based on wavelets and other expansions have been introduced to represent severely nonlinear and highly complex nonlinear systems. A significant proportion of nonlinear systems can be represented by a NARMAX model including systems with exotic behaviours such as chaos, bifurcations, and subharmonics.
While NARMAX started as the name of a model it has now developed into a philosophy of nonlinear system identification,. The NARMAX approach consists of several steps:
Structure detection: which terms are in the model
Parameter estimation: determine the model coefficients
Model validation: is the model unbiased and correct
Prediction: what is the output at some future time
Analysis: what are the dynamical properties of the system
Structure detection forms the most fundamental part of NARMAX. For example, a NARMAX model which consists of one lagged input and one lagged output term, three lagged noise terms, expanded as a cubic polynomial would consist of eighty two possible candidate terms. This number of candidate terms arises because the expansion by definition includes all possible combinations within the cubic expansion. Naively proceeding to estimate a model which includes all these terms and then pruning will cause numerical and computational problems and should always be avoided. However, only a few terms are often important in the model. Structure detection, which aims to select terms one at a time, is therefore critically important. These objectives can easily be achieved by using the Orthogonal Least Squares algorithm and its derivatives to select the NARMAX model terms one at a time. These ideas can also be adapted for pattern recognition and feature selection and provide an alternative to principal component analysis but with the advantage that the features are revealed as basis functions that are easily related back to the original problem.
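A minimal sketch of fitting a polynomial NARX model (the noise terms are omitted for brevity, so this is strictly a NARX rather than a full NARMAX model, and plain least squares replaces the orthogonal selection algorithm); the candidate term dictionary, lags and simulated system are illustrative assumptions:

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(2)

# --- simulated system to be identified
N = 1500
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = 0.6 * y[k-1] - 0.2 * y[k-2] + 0.5 * u[k-1] + 0.3 * u[k-1] * y[k-1]

# --- build a dictionary of candidate polynomial terms up to degree 2
lagged = {"y(k-1)": y[1:-1], "y(k-2)": y[:-2], "u(k-1)": u[1:-1]}
names, cols = ["const"], [np.ones(N - 2)]
for deg in (1, 2):
    for combo in combinations_with_replacement(sorted(lagged), deg):
        names.append("*".join(combo))
        cols.append(np.prod([lagged[c] for c in combo], axis=0))
Phi, target = np.column_stack(cols), y[2:]

# --- least-squares estimate of all candidate coefficients
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
for n, t in sorted(zip(names, theta), key=lambda p: -abs(p[1])):
    print(f"{n:15s} {t: .3f}")
# The four terms actually present in the simulated system dominate the list.
```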
NARMAX methods are designed to do more than find the best approximating model. System identification can be divided into two aims. The first involves approximation where the key aim is to develop a model that approximates the data set such that good predictions can be made. There are many applications where this approach is appropriate, for example in time series prediction of the weather, stock prices, speech, target tracking, pattern classification etc. In such applications the form of the model is not that important. The objective is to find an approximation scheme which produces the minimum prediction errors. A second objective of system identification, which includes the first objective as a subset, involves much more than just finding a model to achieve the best mean squared errors. This second aim is why the NARMAX philosophy was developed and is linked to the idea of finding the simplest model structure. The aim here is to develop models that reproduce the dynamic characteristics of the underlying system, to find the simplest possible model, and if possible to relate this to components and behaviours of the system under study. The core aim of this second approach to identification is therefore to identify and reveal the rule that represents the system. These objectives are relevant to model simulation and control systems design, but increasingly to applications in medicine, neuro science, and the life sciences. Here the aim is to identify models, often nonlinear, that can be used to understand the basic mechanisms of how these systems operate and behave so that we can manipulate and utilise these. NARMAX methods have also been developed in the frequency and spatio-temporal domains.
Stochastic nonlinear models
In a general situation, it might be the case that some exogenous uncertain disturbance passes through the nonlinear dynamics and influence the outputs. A model class that is general enough to capture this situation is the class of stochastic nonlinear state-space models. A state-space model is usually obtained using first principle laws, such as mechanical, electrical, or thermodynamic physical laws, and the parameters to be identified usually have some physical meaning or significance.
A discrete-time state-space model may be defined by the difference equations:
$$x_{t+1} = f_\theta(x_t, u_t, w_t), \qquad y_t = g_\theta(x_t, u_t) + v_t,$$
in which $t$ is a positive integer referring to time. The functions $f_\theta$ and $g_\theta$ are general nonlinear functions. The first equation is known as the state equation and the second is known as the output equation. All the signals are modeled using stochastic processes. The process $x_t$ is known as the state process, and the disturbances $w_t$ and $v_t$ are usually assumed white (independent over time) and mutually independent. The parameter $\theta$ is usually a finite-dimensional (real) parameter to be estimated (using experimental data). Observe that the state process does not have to be a physical signal, and it is normally unobserved (not measured). The data set is given as a set of input-output pairs $(u_t, y_t)$ for $t = 1,\dots,N$ for some finite positive integer value $N$.
Unfortunately, due to the nonlinear transformation of unobserved random variables, the likelihood function of the outputs is analytically intractable; it is given in terms of a multidimensional marginalization integral. Consequently, commonly used parameter estimation methods such as the Maximum Likelihood Method or the Prediction Error Method based on the optimal one-step ahead predictor are analytically intractable. Recently, algorithms based on sequential Monte Carlo methods have been used to approximate the conditional mean of the outputs or, in conjunction with the Expectation-Maximization algorithm, to approximate the maximum likelihood estimator. These methods, albeit asymptotically optimal, are computationally demanding and their use is limited to specific cases where the fundamental limitations of the employed particle filters can be avoided. An alternative solution is to apply the prediction error method using a sub-optimal predictor. The resulting estimator can be shown to be strongly consistent and asymptotically normal and can be evaluated using relatively simple algorithms.
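A minimal sketch of the sequential Monte Carlo idea mentioned above, using a bootstrap particle filter to approximate the conditional mean of the state for an assumed scalar nonlinear state-space model; the model, noise levels and particle count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x, t):          # state transition (a classic nonlinear benchmark form)
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t)

def g(x):             # measurement function
    return x**2 / 20

# --- simulate the "true" system
T, q, r = 100, 1.0, 1.0
x_true, y_obs = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = f(x_true[t-1], t-1) + np.sqrt(q) * rng.standard_normal()
    y_obs[t] = g(x_true[t]) + np.sqrt(r) * rng.standard_normal()

# --- bootstrap particle filter
Np = 500
particles = rng.standard_normal(Np)
x_hat = np.zeros(T)
for t in range(1, T):
    # propagate particles through the state equation
    particles = f(particles, t-1) + np.sqrt(q) * rng.standard_normal(Np)
    # weight by the measurement likelihood (small floor avoids division by zero)
    w = np.exp(-0.5 * (y_obs[t] - g(particles))**2 / r) + 1e-300
    w /= w.sum()
    x_hat[t] = np.sum(w * particles)          # approximate conditional mean
    # resample (multinomial) to avoid weight degeneracy
    particles = particles[rng.choice(Np, Np, p=w)]

print("rms filtering error:", np.sqrt(np.mean((x_hat - x_true)**2)))
```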
See also
Grey box model
Statistical Model
References
Further reading
Lennart Ljung: System Identification — Theory For the User, 2nd ed, PTR Prentice Hall, Upper Saddle River, N. J., 1999.
R. Pintelon, J. Schoukens, System Identification: A Frequency Domain Approach, IEEE Press, New York, 2001.
T. Söderström, P. Stoica, System Identification, Prentice Hall, Upper Saddle River, N.J., 1989.
R. K. Pearson: Discrete-Time Dynamic Models. Oxford University Press, 1999.
P. Marmarelis, V. Marmarelis, V. Analysis of Physiological Systems, Plenum, 1978.
K. Worden, G. R. Tomlinson, Nonlinearity in Structural Dynamics, Institute of Physics Publishing, 2001.
Dynamical systems | Nonlinear system identification | [
"Physics",
"Mathematics"
] | 3,169 | [
"Nonlinear systems",
"Mechanics",
"Dynamical systems"
] |
40,159,314 | https://en.wikipedia.org/wiki/Relativistic%20chaos | In physics, relativistic chaos is the application of chaos theory to dynamical systems described primarily by general relativity, and also special relativity.
Barrow (1982) showed that the Einstein equations exhibit chaotic behaviour and modelled the Mixmaster universe as a dynamical system. Later work showed that relativistic chaos is coordinate invariant (Motter 2003).
See also
Quantum chaos
References
Chaos theory
General relativity
Mathematical physics | Relativistic chaos | [
"Physics",
"Mathematics"
] | 84 | [
"Applied mathematics",
"Theoretical physics",
"General relativity",
"Relativity stubs",
"Theory of relativity",
"Mathematical physics"
] |
3,992,145 | https://en.wikipedia.org/wiki/HAZWOPER | Hazardous Waste Operations and Emergency Response (HAZWOPER; ) is a set of guidelines produced and maintained by the Occupational Safety and Health Administration which regulates hazardous waste operations and emergency services in the United States and its territories. With these guidelines, the U.S. government regulates hazardous wastes and dangerous goods from inception to disposal.
HAZWOPER applies to five groups of employers and their employees. This includes employees who are exposed (or potentially exposed) to hazardous substances (including hazardous waste) and who are engaged in one of the following operations as specified by OSHA regulations 1910.120(a)(1)(i-v) and 1926.65(a)(1)(i-v):
Cleanup operations required by a governmental body (federal, state, local or other) involving hazardous substances conducted at uncontrolled hazardous-waste sites
Corrective actions involving clean-up operations at sites covered by the Resource Conservation and Recovery Act of 1976 (RCRA) as amended (42 U.S.C. 6901 et seq.)
Voluntary cleanup operations at sites recognized by a federal, state, local, or other governmental body as uncontrolled hazardous-waste sites
Operations involving hazardous waste which are conducted at treatment, storage and disposal facilities regulated by Title 40 of the Code of Federal Regulations, parts 264 and 265 pursuant to the RCRA, or by agencies under agreement with the U.S. Environmental Protection Agency to implement RCRA regulations
Emergency response operations for releases of, or substantial threats of release of, hazardous substances (regardless of the hazard's location)
The most commonly used manual for HAZWOPER activities is Department of Health and Human Services Publication 85–115, Occupational Safety and Health Guidance Manual for Hazardous Waste Site Activities. Written for government contractors and first responders, the manual lists safety requirements for cleanups and emergency-response operations.
History
Although its acronym predates OSHA, HAZWOPER describes OSHA-required regulatory training. Its relevance dates to World War II, when waste accumulated during construction of the atomic bomb at the Hanford Site. Years later, high-profile environmental mishaps (such as Love Canal in 1978 and the attempted 1979 Valley of the Drums cleanup) spurred federal legislative action, awakening the U.S. to the need to control and contain hazardous waste. Two programs—CERCLA, the Comprehensive Environmental Response, Compensation, and Liability Act and the Resource Conservation and Recovery Act (RCRA) of 1976—were implemented to deal with these wastes. CERCLA (the Superfund) was designed to deal with existing waste sites, and RCRA addressed newly generated waste. The acronym HAZWOPER originally derived from the Department of Defense's Hazardous Waste Operations (HAZWOP), implemented on military bases slated for the disposal of hazardous waste left on-site after World War II. In 1989 production ended at the Hanford Site, and work shifted to the cleanup of portions of the site contaminated with hazardous substances including radionuclides and chemical waste. OSHA created HAZWOPER, with input from the Coast Guard, the National Institute for Occupational Safety and Health and the Environmental Protection Agency (EPA). In 1984, the combined-agency effort resulted in the Hazardous Waste Operations and Emergency Response Guidance Manual. On March 6, 1990, OSHA published Hazardous Waste Operations and Emergency Response 1910.120, the HAZWOPER standard codifying the health-and-safety requirements companies must meet to perform hazardous-waste cleanup or respond to emergencies.
Scope
Hazardous waste, as defined by the standard, is a waste (or combination of wastes) according to 40 CFR §261.3 or substances defined as hazardous wastes in 49 CFR §171.8.
Training levels
OSHA recognizes several levels of training, based on the work the employee performs and the degree of hazard faced. Each level requires a training program, with OSHA-specified topics and minimum training time.
General site workers initially require 40 hours of instruction, three days of supervised hands-on training and eight hours refresher training annually.
Workers limited to a specific task, or workers on fully characterized sites with no hazards above acceptable levels, require HAZWOPER 24-Hour initial training, one day of supervised hands-on training and eight hours of refresher training annually.
Managers and supervisors require the same level of training as those they supervise, plus eight hours.
Workers at a treatment, storage or disposal facility handling RCRA waste require 24 hours of initial training, best practice two days of supervised hands-on training and eight hours of refresher training annually. 1910.120(p)(8)(iii)(B) Employee members of TSD facility emergency response organizations shall be trained to a level of competence in the recognition of health and safety hazards to protect themselves and other employees. This would include training in the methods used to minimize the risk from safety and health hazards; in the safe use of control equipment; in the selection and use of appropriate personal protective equipment; in the safe operating procedures to be used at the incident scene; in the techniques of coordination with other employees to minimize risks; in the appropriate response to over exposure from health hazards or injury to themselves and other employees; and in the recognition of subsequent symptoms which may result from over exposures.
The First Responder Awareness Level requires sufficient training to demonstrate competence in assigned duties.
The First Responder Operations Level requires Awareness-Level training plus eight hours.
Hazardous Materials Technicians require 24 hours training plus additional training to achieve competence in specialized areas.
Hazardous Materials Specialist requires 24 hours training at the Technician level, plus additional specialized training.
On-scene Incident Commander requires 24 hours training plus additional training to achieve competence in designated areas.
In some instances, training levels overlap; other levels are not authorized by OSHA because their training is not sufficiently specific. A site safety supervisor (or officer) and a competent industrial hygienist or other technically qualified, HAZWOPER-trained person should be consulted.
Training and certification sources
An employer must ensure that the training provider covers the areas of knowledge required by the standard and provides certification to students that they have passed the training. Since the certification is for the student, not the employer, the trainer must cover all aspects of HAZWOPER operations and not only those at the current site. OSHA training requires cleanup workers to focus on personal protective equipment separately from emergency-response equipment. HAZWOPER training covers the four levels of PPE, ranging from A to D, which vary in the degree of skin, respiratory and eye protection they provide.
See also
Dangerous goods
Firefighter
References
External links
Department of Health and Human Services Publication 85–115, "Occupational Safety and Health Guidance Manual for Hazardous Waste Site Activities"
OSHA HAZWOPER FAQ
OSHA Federal Registers: Hazardous Waste Operations
The Centers for Disease Control and Prevention
Occupational Safety and Health Administration
Hazardous materials
Rules | HAZWOPER | [
"Physics",
"Chemistry",
"Technology"
] | 1,401 | [
"Materials",
"Hazardous materials",
"Matter"
] |
3,992,146 | https://en.wikipedia.org/wiki/Zone%20valve | A zone valve is a specific type of valve used to control the flow of water or steam in a hydronic heating or cooling system.
In the interest of improving efficiency and occupant comfort, such systems are commonly divided up into multiple zones. For example, in a house, the main floor may be served by one heating zone while the upstairs bedrooms are served by another. In this way, the heat can be directed principally to the main floor during the day and principally to the bedrooms at night, allowing the unoccupied areas to cool down.
This zoning can be accomplished in one of two ways:
Multiple circulator pumps, or
A single circulator pump and zone valves.
Zone valve construction and operation
Zone valves as used in home hydronic systems are usually electrically powered. In large commercial installations, vacuum or compressed air may be used instead. In either case, the motor is usually connected to the water valve via a mechanical coupling.
For electrical zone valves, the motor is often a small shaded-pole synchronous motor combined with a rotary switch that can disconnect the motor at either of the two stopping points ("valve open" or "valve closed"). In this way, applying power to the "open valve" terminal causes the motor to run until the valve is open while applying power at the "close valve" terminal causes the motor to run until the valve is closed. The motor is commonly powered from the same 24 volt ac power source that is used for the rest of the control system. This allows the zone valves to be directly controlled by low-voltage thermostats and wired with low-voltage wiring. This style of valve requires the use of an SPDT thermostat or relay.
A simpler variant of the motorized design omits the switch that detects the valve position. The motor is simply driven until the valve hits a mechanical stop, which stalls the motor. In an alternative design, the motor continues to turn, while a slip clutch allows the valve to be pushed against a mechanical stop. Usually, the valve remains open as long as power is supplied, and a strong spring closes it when power is cut. This simpler design consumes electrical power whenever the valve is open. There is no feedback to verify the state of the valve, which is assumed to do what has been commanded.
Zone valves can also be constructed using wax motors and a spring-return mechanism. In this case, the valve is normally closed by the force of the spring but can be opened by the force of the wax motor. Removal of electrical power re-closes the valve. This style of zone valve operates with a perfectly ordinary SPST thermostat.
For vacuum- or pneumatically operated zone valves, the thermostat usually switches the pressure or vacuum on or off, causing a spring-loaded rubber diaphragm or ball valve to move and actuate the valve. Unlike the switch-monitored electrical zone valves, these valves automatically return to the default position without the application of any power, and the default position is usually "open", allowing heat to flow.
Highly sophisticated systems may use some form of building automation such as BACnet or LonWorks to control the zone valves.
Comparison with multiple circulator pumps
Multiple zones can be implemented using either multiple, individually controlled circulator pumps or a single pump and multiple zone valves. Each approach has advantages and disadvantages.
Multiple pump system
Advantages:
Lower total cost of ownership when zone valve failure and repair costs are taken into account.
More robust and reliable system.
Simple mechanical and control design ("SPST thermostats")
Redundancy: If one zone pump fails, the others can remain working
Far superior method of linking multiple heat sources. Such as gas and solid fuel in one system.
Disadvantages:
Higher initial installation cost. Circulator pumps cost more than zone valves
Higher power consumption. Operating circulators draw more power any time the zone is actively heating. Zone valves, by comparison, draw little power at any time and many designs only draw power while in transition from open to close or vice versa.
Zone valve system
Advantages:
Lower initial installation cost.
Lower power consumption.
Ease of maintenance for certain models.
Disadvantages:
Zone valves operated by electric timing motors aren't "fail safe" (failing to the "open" condition).
No inherent redundancy for the pump. A zone-valved system is dependent upon a single circulator pump. If it fails, the system becomes completely inoperable.
The system can be harder to design, requiring both "SPDT" thermostats or relays and the ability of the system to withstand the fault condition whereby all zone valves are closed simultaneously.
Zone valves increase the pressure drop through the system, to varying degrees. Care is required during system design to achieve proper flow rates with zone valves.
Selected manufacturers
Honeywell
Siemens
See also
Zone damper
References
Hydraulics
Plumbing valves | Zone valve | [
"Physics",
"Chemistry"
] | 1,008 | [
"Physical systems",
"Hydraulics",
"Fluid dynamics"
] |
3,994,748 | https://en.wikipedia.org/wiki/Carrier-to-noise%20ratio | In telecommunications, the carrier-to-noise ratio, often written CNR or C/N, is the signal-to-noise ratio (SNR) of a modulated signal. The term is used to distinguish the CNR of the radio frequency passband signal from the SNR of an analog base band message signal after demodulation. For example, with FM radio, the strength of the 100 MHz carrier with modulations would be considered for CNR, whereas the audio frequency analogue message signal would be for SNR; in each case, compared to the apparent noise. If this distinction is not necessary, the term SNR is often used instead of CNR, with the same definition.
Digitally modulated signals (e.g. QAM or PSK) are basically made of two CW carriers (the I and Q components, which are out-of-phase carriers). In fact, the information (bits or symbols) is carried by given combinations of phase and/or amplitude of the I and Q components. It is for this reason that, in the context of digital modulations, digitally modulated signals are usually referred to as carriers. Therefore, the term carrier-to-noise-ratio (CNR), instead of signal-to-noise-ratio (SNR), is preferred to express the signal quality when the signal has been digitally modulated.
High C/N ratios provide good quality of reception, for example low bit error rate (BER) of a digital message signal, or high SNR of an analog message signal.
Definition
The carrier-to-noise ratio is defined as the ratio of the received modulated carrier signal power C to the received noise power N after the receiver filters:
$$\mathrm{CNR} = \frac{C}{N}.$$
When both carrier and noise are measured across the same impedance, this ratio can equivalently be given as:
$$\mathrm{CNR} = \left(\frac{V_C}{V_N}\right)^2,$$
where $V_C$ and $V_N$ are the root mean square (RMS) voltage levels of the carrier signal and noise respectively.
C/N ratios are often specified in decibels (dB):
$$\mathrm{CNR_{dB}} = 10\log_{10}\!\left(\frac{C}{N}\right)$$
or in terms of voltage:
$$\mathrm{CNR_{dB}} = 20\log_{10}\!\left(\frac{V_C}{V_N}\right).$$
Measurements and estimation
The C/N ratio is measured in a manner similar to the way the signal-to-noise ratio (S/N) is measured, and both specifications give an indication of the quality of a communications channel.
In the famous Shannon–Hartley theorem, the C/N ratio is equivalent to the S/N ratio. The C/N ratio resembles the carrier-to-interference ratio (C/I, CIR), and the carrier-to-noise-and-interference ratio, C/(N+I) or CNIR.
C/N estimators are needed to optimize the receiver performance. Typically, it is easier to measure the total power than the ratio of signal power to noise power (or noise power spectral density), and that is why CNR estimation techniques are timely and important.
Carrier-to-noise density ratio
In satellite communications, carrier-to-noise-density ratio (C/N0) is the ratio of the carrier power C to the noise power density N0, expressed in dB-Hz.
When considering only the receiver as a source of noise, it is called carrier-to-receiver-noise-density ratio.
It determines whether a receiver can lock on to the carrier and if the information encoded in the signal can be retrieved, given the amount of noise present in the received signal. The carrier-to-receiver noise density ratio is usually expressed in dB-Hz.
The noise power density, N0=kT, is the receiver noise power per hertz, which can be written in terms of the Boltzmann constant k (in joules per kelvin) and the noise temperature T (in kelvins).
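A minimal numerical sketch of these relations, computing C/N0 from an assumed received carrier power and receiver noise temperature (both values are illustrative assumptions):

```python
import math

k = 1.380649e-23          # Boltzmann constant, J/K

def cn0_dbhz(carrier_power_w, noise_temp_k):
    """Carrier-to-noise-density ratio C/N0 in dB-Hz, with N0 = k*T."""
    n0 = k * noise_temp_k                 # noise power density, W/Hz
    return 10 * math.log10(carrier_power_w / n0)

# Example: -130 dBm received carrier, 290 K receiver noise temperature
c_watts = 10 ** ((-130 - 30) / 10)        # dBm -> W
print(round(cn0_dbhz(c_watts, 290), 1), "dB-Hz")   # roughly 44 dB-Hz
```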
See also
C/I: carrier-to-interference ratio
Eb/N0 (energy per bit relative to noise power spectral density)
Es/N0 (energy per symbol relative to noise power spectral density)
Signal-to-interference ratio (SIR or S/I)
Signal-to-noise ratio (SNR or S/N)
SINAD (ratio of signal-plus-noise-plus-distortion to noise-plus-distortion)
References
Further reading
Measuring GNSS Signal Strength
Noise (electronics)
Engineering ratios
Radio frequency propagation
Radio resource management
Interference | Carrier-to-noise ratio | [
"Physics",
"Mathematics",
"Engineering"
] | 874 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Metrics",
"Radio frequency propagation",
"Engineering ratios",
"Quantity",
"Electromagnetic spectrum",
"Waves"
] |
3,997,849 | https://en.wikipedia.org/wiki/Uranyl%20sulfate | Uranyl sulfate describes a family of inorganic compounds with the formula UO2SO4(H2O)n. These salts consist of sulfate, the uranyl ion, and water. They are lemon-yellow solids. Uranyl sulfates are intermediates in some extraction methods used for uranium ores. These compounds can also take the form of an anhydrous salt.
Structure
The structure of UO2(SO4)(H2O)3.5 is illustrative of the uranyl sulfates. The trans-UO22+ centers are encased in a pentagonal bipyramidal coordination sphere. In the pentagonal plane are five oxygen ligands derived from sulfate and aquo ligands. The compound is a coordination polymer.
Uses
Aside from the large scale use in mining, uranyl sulfate finds some use as a negative stain in microscopy and tracer in biology. The Aqueous Homogeneous Reactor experiment, constructed in 1951, circulated a fuel composed of 565 grams of U-235 enriched to 14.7% in the form of uranyl sulfate.
The acid process of milling uranium ores involves precipitating uranyl sulfate from the pregnant leaching solution to produce the semi-refined product referred to as yellowcake.
Related compounds
the hydrogensulfate.
potassium uranyl sulfate, K2UO2(SO4)2, is a double salt used by Henri Becquerel in his discovery of radioactivity.
References
Uranyl compounds
Sulfates
Nuclear materials | Uranyl sulfate | [
"Physics",
"Chemistry"
] | 318 | [
"Sulfates",
"Salts",
"Materials",
"Nuclear materials",
"Matter"
] |
3,998,066 | https://en.wikipedia.org/wiki/Scalar%E2%80%93vector%E2%80%93tensor%20decomposition | In cosmological perturbation theory, the scalar–vector–tensor decomposition is a decomposition of the most general linearized perturbations of the Friedmann–Lemaître–Robertson–Walker metric into components according to their transformations under spatial rotations. It was first discovered by E. M. Lifshitz in 1946. It follows from Helmholtz's Theorem (see Helmholtz decomposition.) The general metric perturbation has ten degrees of freedom. The decomposition states that the evolution equations for the most general linearized perturbations of the Friedmann–Lemaître–Robertson–Walker metric can be decomposed into four scalars, two divergence-free spatial vector fields (that is, with a spatial index running from 1 to 3), and a traceless, symmetric spatial tensor field with vanishing doubly and singly longitudinal components. The vector and tensor fields each have two independent components, so this decomposition encodes all ten degrees of freedom in the general metric perturbation. Using gauge invariance four of these components (two scalars and a vector field) may be set to zero.
If the perturbed metric is $g_{\mu\nu} = \bar{g}_{\mu\nu} + h_{\mu\nu}$, where $h_{\mu\nu}$ is the perturbation, then the decomposition is as follows,
where the Latin indices i and j run over spatial components (1,...,3). The tensor field is traceless under the spatial part of the background metric (i.e. its trace with respect to the spatial metric vanishes). The spatial vector and tensor undergo further decomposition. The vector is written as the sum of a longitudinal part and a transverse part, where the longitudinal part is curl-free and the transverse part is divergence-free (the derivatives are covariant derivatives defined with respect to the spatial metric). The parallel/perpendicular terminology is used because in Fourier space, these conditions indicate that the vector points parallel and perpendicular to the direction of the wavevector, respectively. The parallel component can be expressed as the gradient of a scalar. Thus the vector can be written as a combination of a scalar and a divergenceless, two-component vector.
Finally, an analogous decomposition can be performed on the traceless tensor field. It can be written as the sum of three pieces: a doubly longitudinal piece built from two derivatives of a scalar (the particular combination of derivatives is fixed by the condition that the piece be traceless), a singly longitudinal piece built from the symmetrized derivative of a divergenceless spatial vector, and a transverse, traceless remainder. This leaves only two independent components of the tensor, corresponding to the two polarizations of gravitational waves. (Since the graviton is massless, the two polarizations are orthogonal to the direction of propagation, just like the photon.)
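One common convention for writing out this decomposition explicitly (the signs and normalisation factors below are an illustrative textbook choice, not the article's original notation) is:

```latex
% One common convention for the SVT split of the metric perturbation h_{\mu\nu};
% \gamma_{ij} is the spatial background metric and \nabla_i its covariant derivative.
\begin{align}
  h_{00} &= -2\phi, \\
  h_{0i} &= \nabla_i B + S_i, \qquad \nabla^i S_i = 0, \\
  h_{ij} &= 2\psi\,\gamma_{ij}
            + \Big(\nabla_i \nabla_j - \tfrac{1}{3}\gamma_{ij}\nabla^2\Big) E
            + \nabla_{(i} F_{j)} + h^{TT}_{ij},
\end{align}
```
with $\nabla^i F_i = 0$, $\nabla^i h^{TT}_{ij} = 0$ and $\gamma^{ij} h^{TT}_{ij} = 0$: four scalars ($\phi$, $B$, $\psi$, $E$), two divergence-free vectors ($S_i$, $F_i$) and a transverse-traceless tensor with two components, accounting for the ten degrees of freedom.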
The advantage of this formulation is that the scalar, vector and tensor evolution equations are decoupled. In representation theory, this corresponds to decomposing perturbations under the group of spatial rotations. Two scalar components and one vector component can further be eliminated by gauge transformations. However, the vector components are generally ignored, as there are few known physical processes in which they can be generated. As indicated above, the tensor components correspond to gravitational waves. The tensor is gauge invariant: it does not change under infinitesimal coordinate transformations.
See also
Helmholtz decomposition
Notes
References
Physical cosmology
Mathematical methods in general relativity | Scalar–vector–tensor decomposition | [
"Physics",
"Astronomy"
] | 625 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
3,998,079 | https://en.wikipedia.org/wiki/Servo%20drive | A servo drive is an electronic amplifier used to power electric servomechanisms.
A servo drive monitors the feedback signal from the servomechanism and continually adjusts for deviation from expected behavior.
Function
A servo drive receives a command signal from a control system, amplifies the signal, and transmits electric current to a servo motor in order to produce motion proportional to the command signal. Typically, the command signal represents a desired velocity, but can also represent a desired torque or position. A sensor attached to the servo motor reports the motor's actual status back to the servo drive. The servo drive then compares the actual motor status with the commanded motor status. It then alters the voltage, frequency or pulse width to the motor so as to correct for any deviation from the commanded status.
In a properly configured control system, the servo motor rotates at a velocity that very closely approximates the velocity signal being received by the servo drive from the control system. Several parameters, such as stiffness (also known as proportional gain), damping (also known as derivative gain), and feedback gain, can be adjusted to achieve this desired performance. The process of adjusting these parameters is called performance tuning.
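A minimal sketch of the compare-and-correct behaviour described above, assuming a simple first-order motor model and a proportional–integral velocity loop; the motor constants, gains and loop period are illustrative assumptions, not values from any particular drive:

```python
# Discrete-time sketch of a servo drive's velocity loop: the drive compares the
# commanded velocity with the measured velocity and adjusts the motor voltage.
# The first-order motor model and all gains below are illustrative assumptions.

dt = 0.001                  # loop period, s
kp, ki = 0.8, 20.0          # proportional and integral gains of the velocity loop
k_motor, tau = 50.0, 0.05   # motor gain (rad/s per volt) and time constant (s)

velocity, integral = 0.0, 0.0
command = 100.0             # commanded velocity, rad/s

for step in range(2000):
    error = command - velocity            # deviation from expected behaviour
    integral += error * dt
    voltage = kp * error + ki * integral  # drive output to the motor
    # first-order motor response to the applied voltage
    velocity += dt / tau * (k_motor * voltage - velocity)

print(f"velocity after {2000 * dt:.1f} s: {velocity:.2f} rad/s")
```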
Although many servo motors require a drive specific to that particular motor brand or model, many drives are now available that are compatible with a wide variety of motors.
Digital and analog
All servo drives used in industry are digital, analog, or both. Digital drives differ from analog drives by having a microprocessor, or computer, which analyses incoming signals while controlling the mechanism. The microprocessor receives a pulse stream from an encoder which can determine parameters such as velocity. Varying the pulse, or blip, allows the mechanism to adjust speed essentially creating a speed controller effect. The repetitive tasks performed by a processor allows a digital drive to be quickly self-adjusting. In cases where mechanisms must adapt to many conditions, this can be convenient because a digital drive can adjust quickly with little effort. A drawback to digital drives is the large amount of energy that is consumed. However, many digital drives install capacity batteries to monitor battery life. The overall feedback system for a digital servo drive is like an analog, except that a microprocessor uses algorithms to predict system conditions.
Analog drives control velocity through various electrical inputs, usually ±10 volts. Often adjusted with potentiometers, analog drives have plug-in “personality cards” which are preadjusted to specific conditions. Most analog drives work by using a tach generator to measure incoming signals and produce a resulting torque demand. These torque demands request current in the mechanism depending on the feedback loop. This amplifier is referred to as a four-quadrant drive because it can accelerate, decelerate and brake in either rotating direction. Traditional analog drives consume less energy than digital drives and can offer very high performance in certain cases. When conditions are met, analog drives offer consistency with minimal “jitter” at standstill. Some analog servo drives do not need a torque amplifier and rely on velocity amplifiers for situations where speed is more important.
Use in industry
Servo systems can be used in CNC machining, factory automation, and robotics, among other uses. Their main advantage over traditional DC or AC motors is the addition of motor feedback. This feedback can be used to detect unwanted motion, or to ensure the accuracy of the commanded motion. The feedback is generally provided by an encoder of some sort. Servos, in constant speed changing use, have a better life cycle than typical AC wound motors. Servo motors can also act as a brake by shunting off generated electricity from the motor itself.
See also
Control theory
Motion control
References
Control devices | Servo drive | [
"Engineering"
] | 763 | [
"Control devices",
"Control engineering"
] |
3,998,149 | https://en.wikipedia.org/wiki/Cosmological%20perturbation%20theory | In physical cosmology, cosmological perturbation theory is the theory by which the evolution of structure is understood in the Big Bang model. Cosmological perturbation theory may be broken into two categories: Newtonian or general relativistic. Each case uses its governing equations to compute gravitational and pressure forces which cause small perturbations to grow and eventually seed the formation of stars, quasars, galaxies and clusters. Both cases apply only to situations where the universe is predominantly homogeneous, such as during cosmic inflation and large parts of the Big Bang. The universe is believed to still be homogeneous enough that the theory is a good approximation on the largest scales, but on smaller scales more involved techniques, such as N-body simulations, must be used. When deciding whether to use general relativity for perturbation theory, note that Newtonian physics is only applicable in some cases such as for scales smaller than the Hubble horizon, where spacetime is sufficiently flat, and for which speeds are non-relativistic.
Because of the gauge invariance of general relativity, the correct formulation of cosmological perturbation theory is subtle.
In particular, when describing an inhomogeneous spacetime, there is often not a preferred coordinate choice. There are currently two distinct approaches to perturbation theory in classical general relativity:
gauge-invariant perturbation theory based on foliating a space-time with hyper-surfaces, and
1+3 covariant gauge-invariant perturbation theory based on threading a space-time with frames.
Newtonian perturbation theory
In this section, we will focus on the effect of matter on structure formation in the hydrodynamical fluid regime. This regime is useful because dark matter has dominated structure growth for most of the universe's history. In this regime, we are on sub-Hubble scales (where H is the Hubble parameter) so we can take spacetime to be flat, and ignore general relativistic corrections. But these scales are above a cut-off, such that perturbations in pressure and density are sufficiently linear. Next, we assume low pressure so that we can ignore radiative effects, and low speeds so that we are in the non-relativistic regime.
The first governing equation follows from matter conservation – the continuity equation
where a is the scale factor and v is the peculiar velocity. Although we don't explicitly write it, all variables are evaluated at time t and the divergence is in comoving coordinates. Second, momentum conservation gives us the Euler equation
where Φ is the gravitational potential. Lastly, we know that for Newtonian gravity, the potential obeys the Poisson equation
So far, our equations are fully nonlinear, and can be hard to interpret intuitively. It's therefore useful to consider a perturbative expansion and examine each order separately. We use the following decomposition
where x is a comoving coordinate.
At linear order, the continuity equation becomes
where θ is the velocity divergence. And the linear Euler equation is
By combining the linear continuity, Euler, and Poisson equations, we arrive at a simple master equation governing evolution
where we defined a sound speed c_s to give us a closure relation. This master equation admits wave solutions in the density contrast which tell us how matter fluctuations grow over time due to a combination of competing effects – the fluctuation's self-gravity, pressure forces, the universe's expansion, and the background gravitational field.
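The sketch below integrates the standard textbook form of this master equation, d²δ/dt² + 2H dδ/dt = (4πGρ̄ − c_s²k²/a²)δ, for a pressureless Einstein–de Sitter background with H = 2/(3t) and 4πGρ̄ = 2/(3t²); the background choice and notation are assumptions for illustration, not quoted from this article. The growing mode should scale as δ ∝ t^(2/3).

```python
# Hedged sketch: integrate the textbook linear-growth equation for a
# pressureless Einstein–de Sitter background, where H = 2/(3t) and
# 4*pi*G*rho_bar = 2/(3*t**2).  Growing-mode solution: delta ∝ t**(2/3).
def grow(t0=1.0, t1=100.0, n=200_000):
    dt = (t1 - t0) / n
    t, delta, ddelta = t0, 1.0, 2.0 / (3.0 * t0)   # start on the growing mode
    for _ in range(n):
        H = 2.0 / (3.0 * t)
        accel = -2.0 * H * ddelta + (2.0 / (3.0 * t**2)) * delta
        delta += ddelta * dt
        ddelta += accel * dt
        t += dt
    return delta

print(grow(), 100.0 ** (2.0 / 3.0))   # numeric vs analytic growth factor
```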
Gauge-invariant perturbation theory
The gauge-invariant perturbation theory is based on developments by Bardeen (1980), Kodama and Sasaki (1984) building on the work of Lifshitz (1946). This is the standard approach to perturbation theory of general relativity for cosmology. This approach is widely used for the computation of anisotropies in the cosmic microwave background radiation as part of the physical cosmology program and focuses on predictions arising from linearisations that preserve gauge invariance with respect to Friedmann-Lemaître-Robertson-Walker (FLRW) models. This approach draws heavily on the use of Newtonian-like analogues and usually has as its starting point the FLRW background around which perturbations are developed. The approach is non-local and coordinate dependent but gauge invariant as the resulting linear framework is built from a specified family of background hyper-surfaces which are linked by gauge preserving mappings to foliate the space-time. Although intuitive, this approach does not deal well with the nonlinearities natural to general relativity.
1+3 covariant gauge-invariant perturbation theory
In relativistic cosmology using the Lagrangian threading dynamics of Ehlers (1971) and Ellis (1971) it is usual to use the gauge-invariant covariant perturbation theory developed by Hawking (1966) and Ellis and Bruni (1989). Here rather than starting with a background and perturbing away from that background one starts with full general relativity and systematically reduces the theory down to one that is linear around a particular background. The approach is local and both covariant as well as gauge invariant but can be non-linear because the approach is built around the local comoving observer frame (see frame bundle) which is used to thread the entire space-time. This approach to perturbation theory produces differential equations that are of just the right order needed to describe the true physical degrees of freedom and as such no non-physical gauge modes exist. It is usual to express the theory in a coordinate free manner. For applications of kinetic theory, because one is required to use the full tangent bundle, it becomes convenient to use the tetrad formulation of relativistic cosmology. The application of this approach to the computation of anisotropies in cosmic microwave background radiation requires the linearization of the full relativistic kinetic theory developed by Thorne (1980) and Ellis, Matravers and Treciokas (1983).
Gauge freedom and frame fixing
In relativistic cosmology there is a freedom associated with the choice of threading frame; this frame choice is distinct from the choice associated with coordinates. Picking this frame is equivalent to fixing the choice of timelike world lines mapped into each other. This reduces the gauge freedom; it does not fix the gauge but the theory remains gauge invariant under the remaining gauge freedoms. In order to fix the gauge a specification of correspondences between the time surfaces in the real universe (perturbed) and the background universe are required along with the correspondences between points on the initial spacelike surfaces in the background and in the real universe. This is the link between the gauge-invariant perturbation theory and the gauge-invariant covariant perturbation theory. Gauge invariance is only guaranteed if the choice of frame coincides exactly with that of the background; usually this is trivial to ensure because physical frames have this property.
Newtonian-like equations
Newtonian-like equations emerge from perturbative general relativity with the choice of the Newtonian gauge; the Newtonian gauge provides the direct link between the variables typically used in the gauge-invariant perturbation theory and those arising from the more general gauge-invariant covariant perturbation theory.
See also
Primordial fluctuations
Cosmic microwave background spectral distortions
References
Bibliography
See physical cosmology textbooks.
External links
Physical cosmology
General relativity | Cosmological perturbation theory | [
"Physics",
"Astronomy"
] | 1,515 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Astrophysics",
"General relativity",
"Theory of relativity",
"Physical cosmology"
] |
3,998,467 | https://en.wikipedia.org/wiki/Resorcinarene | In chemistry, a resorcinarene (also resorcarene or calix[4]resorcinarene) is a macrocycle, or a cyclic oligomer, based on the condensation of resorcinol (1,3-dihydroxybenzene) and an aldehyde. Resorcinarenes are a type of calixarene. Other types of resorcinarenes include the related pyrogallolarenes and octahydroxypyridines, derived from pyrogallol and 2,6-dihydroxypyridine, respectively.
Resorcinarenes interact with other molecules forming a host–guest complex. Resorcinarenes and pyrogallolarenes self-assemble into larger supramolecular structures. Both in the crystalline state and in organic solvents, six resorcinarene molecules are known to form hexamers with an internal volume of around one cubic nanometer (nanocapsules) and shapes similar to the Archimedean solids. Hydrogen bonds appear to hold the assembly together. A number of solvent or other molecules reside inside. The resorcinarene is also the basic structural unit for other molecular recognition scaffolds, typically formed by bridging the phenolic oxygens with alkyl or aromatic spacers. A number of molecular structures are based on this macrocycle, namely cavitands and carcerands.
Synthesis
The resorcinarenes are typically prepared by condensation of resorcinol and an aldehyde in acid solution. This reaction was first described by Adolf von Baeyer who described the condensation of resorcinol and benzaldehyde but was unable to elucidate the nature of the product(s). The methods have since been refined. Recrystallization typically gives the desired isomer in quite pure form. However, for certain aldehydes, the reaction conditions lead to significant by-products. Alternative condensation conditions have been developed, including the use of Lewis acid catalysts.
A green chemistry procedure uses solvent-free conditions: resorcinol, an aldehyde, and p-toluenesulfonic acid are ground together in a mortar and pestle at low temperature.
Structure
Resorcinarenes can be characterized by a wide upper rim and a narrow lower rim. The upper rim includes eight hydroxyl groups that can participate in hydrogen bonding interactions. Depending on the aldehyde starting material, the lower rim includes four appending groups, usually chosen to give optimal solubility. The resorcin[n]arene nomenclature is analogous to that of calix[n]arenes, in which 'n' represents the number of repeating units in the ring. Pyrogallolarenes are related macrocycles derived from the condensation of pyrogallol (1,2,3-trihydroxybenzene) with an aldehyde.
Resorcinarenes and pyrogallolarenes self-assemble to give supramolecular assemblies. Both in the crystalline state and in solution, they are known to form hexamers that are akin to certain Archimedean solids with an internal volume of around one cubic nanometer (nanocapsules). (Isobutylpyrogallol[4]arene)6 is held together by 48 intermolecular hydrogen bonds. The remaining 24 hydrogen bonds are intramolecular. The cavity is filled by solvent.
Catalysis
The resorcinarene hexamer has been described as a yoctolitre reaction vessel. Within the confines of the container, terpene cyclizations and iminium catalyzed reactions have been observed.
References
Supramolecular chemistry
Macrocycles
Cyclophanes | Resorcinarene | [
"Chemistry",
"Materials_science"
] | 803 | [
"Organic compounds",
"Macrocycles",
"nan",
"Nanotechnology",
"Supramolecular chemistry"
] |
3,998,660 | https://en.wikipedia.org/wiki/Max%20q | The max q, or maximum dynamic pressure, condition is the point when an aerospace vehicle's atmospheric flight reaches the maximum difference between the fluid dynamics total pressure and the ambient static pressure. For an airplane, this occurs at the maximum speed at minimum altitude corner of the flight envelope. For a space vehicle launch, this occurs at the crossover point between dynamic pressure increasing with speed and static pressure decreasing with increasing altitude. This is an important design factor of aerospace vehicles, since the aerodynamic structural load on the vehicle is proportional to dynamic pressure.
Dynamic pressure
Dynamic pressure q is defined in incompressible fluid dynamics as
where ρ is the local air density, and v is the vehicle's velocity. The dynamic pressure can be thought of as the kinetic energy density of the air with respect to the vehicle, and for incompressible flow equals the difference between total pressure and static pressure.
This quantity appears notably in the lift and drag equations.
For a car traveling at at sea level (where the air density is about ,) the dynamic pressure on the front of the car is , about 0.38% of the static pressure ( at sea level).
For an airliner cruising at at an altitude of (where the air density is about ), the dynamic pressure on the front of the plane is , about 41% of the static pressure ().
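A worked example of the formula q = ½ρv² is sketched below; the speeds and densities are assumed, representative values rather than the article's own figures.

```python
# Hedged worked example of q = 1/2 * rho * v**2; speeds and densities are
# assumed, representative values, not the article's own figures.
def dynamic_pressure(rho, v):
    return 0.5 * rho * v**2

car_q = dynamic_pressure(rho=1.225, v=100 / 3.6)   # ~100 km/h at sea level
jet_q = dynamic_pressure(rho=0.38, v=250.0)        # ~250 m/s near 11 km
print(f"car: {car_q:.0f} Pa, airliner: {jet_q:.0f} Pa")
# Against static pressures of roughly 101 kPa and 23 kPa respectively, the
# car's q is well under 1% while the airliner's is a sizeable fraction.
```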
In rocket launches
For a launch of a space vehicle from the ground, dynamic pressure is:
zero at lift-off, when the air density ρ is high but the vehicle's speed v = 0;
zero outside the atmosphere, where the speed v is high, but the air density ρ = 0;
always non-negative, given the quantities involved.
During the launch, the vehicle speed increases but the air density decreases as the vehicle rises. Therefore, by Rolle's theorem, there is a point where the dynamic pressure is maximal.
In other words, before reaching max q, the dynamic pressure increase due to increasing velocity is greater than the dynamic pressure decrease due to decreasing air density such that the net dynamic pressure (opposing kinetic energy) acting on the craft continues to increase. After passing max q, the opposite is true. The net dynamic pressure acting against the craft decreases faster as the air density decreases with altitude than it increases from increasing velocity, ultimately reaching 0 when the air density becomes zero.
This value is significant, since it is one of the constraints that determines the structural load that the vehicle must bear. For many vehicles, if launched at full throttle, the aerodynamic forces would be higher than what they can withstand. For this reason, they are often throttled down before approaching max q and back up afterwards, so as to reduce the speed and hence the maximum dynamic pressure encountered along the flight.
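The trade-off can be made concrete with a toy ascent profile (constant acceleration through an exponential atmosphere; all numbers below are invented): dynamic pressure first grows with speed, then collapses as the density thins out, and the peak is max q.

```python
import numpy as np

# Hedged toy model: constant 20 m/s^2 vertical acceleration through an
# exponential atmosphere (scale height 8.5 km).  All numbers are invented.
rho0, H_scale, a = 1.225, 8500.0, 20.0
t = np.linspace(0.0, 120.0, 2001)          # first two minutes of flight
v = a * t
h = 0.5 * a * t**2
q = 0.5 * rho0 * np.exp(-h / H_scale) * v**2

i = int(np.argmax(q))
print(f"max q ~ {q[i]/1e3:.0f} kPa at t ~ {t[i]:.0f} s, h ~ {h[i]/1e3:.1f} km")
```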
Examples
During a normal Space Shuttle launch, for example, max q value of 0.32 atmospheres occurred at an altitude of approximately , about one minute after launch. The three Space Shuttle Main Engines were throttled back to about 65–72% of their rated thrust (depending on payload) as the dynamic pressure approached max q. Combined with the propellant grain design of the solid rocket boosters, which reduced the thrust at max q by one third after 50 seconds of burn, the total stresses on the vehicle were kept to a safe level.
During a typical Apollo mission, the max q (also just over 0.3 atmospheres) occurred between of altitude; approximately the same values occur for the SpaceX Falcon 9.
The point of max q is a key milestone during a space vehicle launch, as it is the point at which the airframe undergoes maximum mechanical stress.
See also
Prandtl–Glauert singularity
Ideal gas law
Gravity turn
Gravity loss
Equivalent airspeed – max q for a spacecraft corresponds to maximum equivalent airspeed for an aircraft
References
Aerospace engineering
Fluid dynamics | Max q | [
"Chemistry",
"Engineering"
] | 770 | [
"Piping",
"Chemical engineering",
"Aerospace engineering",
"Fluid dynamics"
] |
24,701,090 | https://en.wikipedia.org/wiki/P%C3%B3lya%E2%80%93Szeg%C5%91%20inequality | In mathematical analysis, the Pólya–Szegő inequality (or Szegő inequality) states that the Sobolev energy of a function in a Sobolev space does not increase under symmetric decreasing rearrangement. The inequality is named after the mathematicians George Pólya and Gábor Szegő.
Mathematical setting and statement
Given a Lebesgue measurable function u, the symmetric decreasing rearrangement u* is the unique function such that for every t the corresponding sublevel set of u* is an open ball centred at the origin that has the same Lebesgue measure as the corresponding sublevel set of u.
Equivalently, u* is the unique radial and radially nonincreasing function whose strict sublevel sets are open and have the same measure as those of the function u.
The Pólya–Szegő inequality states that if, moreover, u belongs to the Sobolev space W^(1,p), then u* also belongs to W^(1,p) and the p-Dirichlet energy does not increase: ∫|∇u*|^p ≤ ∫|∇u|^p.
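A one-dimensional numerical illustration (not a proof) is sketched below: a sampled function is rearranged into a discrete symmetric decreasing profile and the discretized Dirichlet energy is compared before and after. The test samples are arbitrary.

```python
import numpy as np

# Hedged 1-D illustration (not a proof): compare the discretized Dirichlet
# energy of a sampled function with that of its symmetric decreasing
# rearrangement.
rng = np.random.default_rng(1)
u = np.abs(rng.standard_normal(101))
u[:10] = u[-10:] = 0.0                        # vanish near the edges

def symmetric_decreasing(u):
    centre = len(u) // 2
    order = np.argsort(np.abs(np.arange(len(u)) - centre))  # nearest-to-centre first
    out = np.empty_like(u)
    out[order] = np.sort(u)[::-1]             # largest value at the centre
    return out

def dirichlet(u):
    return float(np.sum(np.diff(u) ** 2))

u_star = symmetric_decreasing(u)
print(dirichlet(u_star), "<=", dirichlet(u))  # rearranged energy is not larger
```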
Applications of the inequality
The Pólya–Szegő inequality is used to prove the Rayleigh–Faber–Krahn inequality, which states that among all the domains of a given fixed volume, the ball has the smallest first eigenvalue for the Laplacian with Dirichlet boundary conditions. The proof goes by restating the problem as a minimization of the Rayleigh quotient.
The isoperimetric inequality can be deduced from the Pólya–Szegő inequality with p = 1.
The optimal constant in the Sobolev inequality can be obtained by combining the Pólya–Szegő inequality with some integral inequalities.
Equality cases
Since the Sobolev energy is invariant under translations, any translation of a radial function achieves equality in the Pólya–Szegő inequality. There are however other functions that can achieve equality, obtained for example by taking a radial nonincreasing function that achieves its maximum on a ball of positive radius and adding to this function another function which is radial with respect to a different point and whose support is contained in the maximum set of the first function. In order to avoid this obstruction, an additional condition is thus needed.
It has been proved that if the function achieves equality in the Pólya–Szegő inequality and if the set is a null set for Lebesgue's measure, then the function is radial and radially nonincreasing with respect to some point .
Generalizations
The Pólya–Szegő inequality is still valid for symmetrizations on the sphere or the hyperbolic space.
The inequality also holds for partial symmetrizations defined by foliating the space into planes (Steiner symmetrization) and into spheres (cap symmetrization).
There are also Pólya−Szegő inequalities for rearrangements with respect to non-Euclidean norms and using the dual norm of the gradient.
Proofs of the inequality
Original proof by a cylindrical isoperimetric inequality
The original proof by Pólya and Szegő for p = 2 was based on an isoperimetric inequality comparing sets with cylinders and an asymptotic expansion of the area of the graph of a function. The inequality is proved for a smooth function that vanishes outside a compact subset of the Euclidean space. For every t, they define the sets
These sets are the sets of points that lie between the domain of the functions u and u* and their respective graphs. They then use the geometrical fact that, since the horizontal slices of both sets have the same measure and those of the second are balls, the area of the boundary of the cylindrically symmetric set cannot exceed that of the original one. These areas can be computed by the area formula, yielding the inequality
Since the sets and have the same measure, this is equivalent to
The conclusion then follows from the fact that
Coarea formula and isoperimetric inequality
The Pólya–Szegő inequality can be proved by combining the coarea formula, Hölder’s inequality and the classical isoperimetric inequality.
If the function is smooth enough, the coarea formula can be used to write
where H^(n−1) denotes the (n−1)-dimensional Hausdorff measure on the Euclidean space. For almost every t, we have by Hölder's inequality,
Therefore, we have
Since the set is a ball that has the same measure as the set , by the classical isoperimetric inequality, we have
Moreover, recalling that the sublevel sets of the functions and have the same measure,
and therefore,
Since the function is radial, one has
and the conclusion follows by applying the coarea formula again.
Rearrangement inequalities for convolution
When p = 2, the Pólya–Szegő inequality can be proved by representing the Sobolev energy by the heat kernel. One begins by observing that
where for , the function is the heat kernel, defined for every by
Since for every the function is radial and radially decreasing, we have by the Riesz rearrangement inequality
Hence, we deduce that
References
Sobolev spaces
Geometric inequalities
Rearrangement inequalities | Pólya–Szegő inequality | [
"Mathematics"
] | 1,004 | [
"Theorems in geometry",
"Geometric inequalities",
"Inequalities (mathematics)",
"Rearrangement inequalities"
] |
24,702,673 | https://en.wikipedia.org/wiki/Fermi%20motion | The Fermi motion is the quantum motion of nucleons bound inside a nucleus. It was once posited as an explanation for the EMC effect.
References
Nuclear physics
Particle physics | Fermi motion | [
"Physics"
] | 39 | [
"Particle physics stubs",
"Nuclear and atomic physics stubs",
"Particle physics",
"Nuclear physics"
] |
24,703,611 | https://en.wikipedia.org/wiki/Superconducting%20tunnel%20junction | The superconducting tunnel junction (STJ) – also known as a superconductor–insulator–superconductor tunnel junction (SIS) – is an electronic device consisting of two superconductors separated by a very thin layer of insulating material. Current passes through the junction via the process of quantum tunneling. The STJ is a type of Josephson junction, though not all the properties of the STJ are described by the Josephson effect.
These devices have a wide range of applications, including high-sensitivity detectors of electromagnetic radiation, magnetometers, high speed digital circuit elements, and quantum computing circuits.
Quantum tunneling
All currents flowing through the STJ pass through the insulating layer via the process of quantum tunneling. There are two components to the tunneling current. The first is from the tunneling of Cooper pairs. This supercurrent is described by the ac and dc Josephson relations, first predicted by Brian David Josephson in 1962. For this prediction, Josephson received the Nobel Prize in Physics in 1973. The second is the quasiparticle current, which, in the limit of zero temperature, arises when the energy from the bias voltage exceeds twice the value of superconducting energy gap Δ. At finite temperature, a small quasiparticle tunneling current – called the subgap current – is present even for voltages less than twice the energy gap due to the thermal promotion of quasiparticles above the gap.
If the STJ is irradiated with photons of frequency f, the dc current-voltage curve will exhibit both Shapiro steps and steps due to photon-assisted tunneling. Shapiro steps arise from the response of the supercurrent and occur at voltages equal to nhf/2e, where h is the Planck constant, e is the electron charge, and n is an integer. Photon-assisted tunneling arises from the response of the quasiparticles and gives rise to steps displaced in voltage by hf/e relative to the gap voltage.
Device fabrication
The device is typically fabricated by first depositing a thin film of a superconducting metal such as aluminum on an insulating substrate such as silicon. The deposition is performed inside a vacuum chamber. Oxygen gas is then introduced into the chamber, resulting in the formation of an insulating layer of aluminum oxide (AlO) with a typical thickness of several nanometres. After the vacuum is restored, an overlapping layer of superconducting metal is deposited, completing the STJ. To create a well-defined overlap region, a procedure known as the Niemeyer-Dolan technique is commonly used. This technique uses a suspended bridge of resist with a double-angle deposition to define the junction.
Aluminum is widely used for making superconducting tunnel junctions because of its unique ability to form a very thin (2–3 nm) insulating oxide layer with no defects that short-circuit the insulating layer. The superconducting critical temperature of aluminum is approximately 1.2 K. For many applications, it is convenient to have a device that is superconducting at a higher temperature, in particular at a temperature above the boiling point of liquid helium, which is 4.2 K at atmospheric pressure. One approach to achieving this is to use niobium, which has a superconducting critical temperature in bulk form of 9.3 K. Niobium, however, does not form an oxide that is suitable for making tunnel junctions. To form an insulating oxide, the first layer of niobium can be coated with a very thin layer (approximately 5 nm) of aluminum, which is then oxidized to form a high quality aluminum oxide tunnel barrier before the final layer of niobium is deposited. The thin aluminum layer is proximitized by the thicker niobium, and the resulting device has a superconducting critical temperature above 4.2 K. Early work used lead-lead oxide-lead tunnel junctions. Lead has a superconducting critical temperature of 7.2 K in bulk form, but lead oxide tends to develop defects (sometimes called pinhole defects) that short-circuit the tunnel barrier when the device is thermally cycled between cryogenic temperatures and room temperature, so lead is no longer widely used to make STJs.
Applications
Radio astronomy
STJs are the most sensitive heterodyne receivers in the 100 GHz to 1000 GHz frequency range, and hence are used for radio astronomy at these frequencies. In this application, the STJ is dc biased at a voltage just below the gap voltage (). A high frequency signal from an astronomical object of interest is focused onto the STJ, along with a local oscillator source. Photons absorbed by the STJ allow quasiparticles to tunnel via the process of photon-assisted tunneling. This photon-assisted tunneling changes the current-voltage curve, creating a nonlinearity that produces an output at the difference frequency of the astronomical signal and the local oscillator. This output is a frequency down-converted version of the astronomical signal. These receivers are so sensitive that an accurate description of the device performance must take into account the effects of quantum noise.
Single-photon detection
In addition to heterodyne detection, STJs can also be used as direct detectors. In this application, the STJ is biased with a dc voltage less than the gap voltage. A photon absorbed in the superconductor breaks Cooper pairs and creates quasiparticles. The quasiparticles tunnel across the junction in the direction of the applied voltage, and the resulting tunneling current is proportional to the photon energy. STJ devices have been employed as single-photon detectors for photon frequencies ranging from X-rays to the infrared.
SQUIDs
The superconducting quantum interference device or SQUID is based on a superconducting loop containing Josephson junctions. SQUIDs are the world's most sensitive magnetometers, capable of measuring a single magnetic flux quantum.
Quantum computing
Superconducting quantum computing utilizes STJ-based circuits, including charge qubits, flux qubits and phase qubits.
RSFQ
The STJ is the primary active element in rapid single flux quantum or RSFQ fast logic circuits.
Josephson voltage standard
When a high frequency current is applied to a Josephson junction, the ac Josephson current will synchronize with the applied frequency giving rise to regions of constant voltage in the I–V curve of the device (Shapiro steps). For the purpose of voltage standards, these steps occur at the voltages V = nf/KJ, where n is an integer, f is the applied frequency, and the Josephson constant KJ = 2e/h is a constant approximately equal to 483597.8 GHz/V. These steps provide an exact conversion from frequency to voltage. Because frequency can be measured with very high precision, this effect is used as the basis of the Josephson voltage standard, which implements the SI definition of the volt.
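A short worked example of the frequency-to-voltage conversion, using CODATA values of e and h; the 70 GHz drive frequency is an assumed, typical microwave value rather than one quoted here.

```python
from scipy.constants import e, h

# Worked arithmetic for the frequency-to-voltage conversion.  The 70 GHz
# drive frequency is an assumed, typical microwave value.
K_J = 2 * e / h                     # Josephson constant, ~4.836e14 Hz/V
f = 70e9                            # Hz
for n in (1, 2, 3):
    print(f"step n={n}: V = {n * f / K_J * 1e6:.2f} microvolts")
```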
Josephson diode
In the case that the STJ shows asymmetric Josephson tunneling, the junction can become a Josephson diode.
See also
Superconductivity
Josephson effect
Macroscopic quantum phenomena
Quantum tunneling
Superconducting quantum interference device (SQUID)
Superconducting quantum computing
Rapid single flux quantum (RSFQ)
Cryogenic particle detectors
References
Superconductivity
Josephson effect
Quantum electronics
Superconducting detectors
Sensors
Radio astronomy
Mesoscopic physics | Superconducting tunnel junction | [
"Physics",
"Materials_science",
"Astronomy",
"Technology",
"Engineering"
] | 1,504 | [
"Josephson effect",
"Astronomical sub-disciplines",
"Physical quantities",
"Quantum electronics",
"Superconductivity",
"Quantum mechanics",
"Measuring instruments",
"Materials science",
"Radio astronomy",
"Superconducting detectors",
"Condensed matter physics",
"Nanotechnology",
"Sensors",
... |
24,706,093 | https://en.wikipedia.org/wiki/Photonic%20metamaterial | A photonic metamaterial (PM), also known as an optical metamaterial, is a type of electromagnetic metamaterial, that interacts with light, covering terahertz (THz), infrared (IR) or visible wavelengths. The materials employ a periodic, cellular structure.
The subwavelength periodicity distinguishes photonic metamaterials from photonic band gap or photonic crystal structures. The cells, which are on the order of nanometers, are orders of magnitude larger than the atom, yet much smaller than the radiated wavelength.
In a conventional material, the response to electric and magnetic fields, and hence to light, is determined by atoms. In metamaterials, cells take the role of atoms in a material that is homogeneous at scales larger than the cells, yielding an effective medium model.
Some photonic metamaterials exhibit magnetism at high frequencies, resulting in strong magnetic coupling. This can produce a negative index of refraction in the optical range.
Potential applications include cloaking and transformation optics.
Photonic crystals differ from PM in that the size and periodicity of their scattering elements are larger, on the order of the wavelength. Also, a photonic crystal is not homogeneous, so it is not possible to define values of ε (permittivity) or μ (permeability).
History
While researching whether or not matter interacts with the magnetic component of light, Victor Veselago (1967) envisioned the possibility of refraction with a negative sign, according to Maxwell's equations. A refractive index with a negative sign is the result of permittivity, ε < 0 (less than zero) and magnetic permeability, μ < 0 (less than zero). Veselago's analysis has been cited in over 1500 peer-reviewed articles and many books. In the mid-1990s, metamaterials were first seen as potential technologies for applications such as nanometer-scale imaging and cloaking objects. For example, in 1995, Guerra fabricated a transparent grating with 50 nm lines and spaces, and then coupled this (what would be later called) photonic metamaterial with an immersion objective to resolve a silicon grating having 50 nm lines and spaces, far beyond the diffraction limit for the 650 nm wavelength illumination in air. And in 2002, Guerra et al. published their demonstrated use of subwavelength nano-optics (photonic metamaterials) for optical data storage at densities well above the diffraction limit. As of 2015, metamaterial antennas were commercially available.
Negative permeability was achieved with a split-ring resonator (SRR) as part of the subwavelength cell. The SRR achieved negative permeability within a narrow frequency range. This was combined with a symmetrically positioned electric conducting post, which created the first negative index metamaterial, operating in the microwave band. Experiments and simulations demonstrated the presence of a left-handed propagation band, a left-handed material. The first experimental confirmation of negative index of refraction occurred soon after, also at microwave frequencies.
Negative permeability and negative permittivity
Natural materials, such as precious metals, can achieve ε < 0 up to the visible frequencies. However, at terahertz, infrared and visible frequencies, natural materials have a very weak magnetic coupling component, or permeability. In other words, susceptibility to the magnetic component of radiated light can be considered negligible.
Negative index metamaterials behave contrary to the conventional "right-handed" interaction of light found in conventional optical materials. Hence, these are dubbed left-handed materials or negative index materials (NIMs), among other nomenclatures.
Only fabricated NIMs exhibit this capability. Photonic crystals, like many other known systems, can exhibit unusual propagation behavior such as reversal of phase and group velocities. However, negative refraction does not occur in these systems.
Naturally occurring ferromagnetic and antiferromagnetic materials can achieve magnetic resonance, but with significant losses. In natural materials such as natural magnets and ferrites, resonance for the electric (coupling) response and magnetic (coupling) response do not occur at the same frequency.
Optical frequency
Photonic metamaterial SRRs have reached scales below 100 nanometers, using electron beam and nanolithography. One nanoscale SRR cell has three small metallic rods that are physically connected. This is configured as a U shape and functions as a nano-inductor. The gap between the tips of the U-shape function as a nano-capacitor. Hence, it is an optical nano-LC resonator. These "inclusions" create local electric and magnetic fields when externally excited. These inclusions are usually ten times smaller than the vacuum wavelength of the light c0 at the resonance frequency. The inclusions can then be evaluated by using an effective medium approximation.
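As a back-of-the-envelope sketch of this nano-LC picture, the resonance frequency follows f0 = 1/(2π√(LC)); the inductance and capacitance below are assumed order-of-magnitude values for a nanoscale resonator, not measured ones.

```python
import math

# Hedged back-of-the-envelope estimate: treat one cell as a nano-LC
# resonator with f0 = 1 / (2*pi*sqrt(L*C)).  L and C are assumed values.
L = 30e-15            # henry (tens of femtohenries)
C = 50e-18            # farad (tens of attofarads)
f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"f0 ~ {f0/1e12:.0f} THz, free-space wavelength ~ {3e8/f0*1e6:.1f} micrometres")
```

With these assumed values the resonance lands in the infrared, consistent with the subwavelength-cell requirement described above.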
PMs display a magnetic response with useful magnitude at optical frequencies. This includes negative permeability, despite the absence of magnetic materials. Analogous to ordinary optical material, PMs can be treated as an effective medium that is characterized by effective medium parameters ε(ω) and μ(ω), or similarly, εeff and μeff.
The negative refractive index of PMs in the optical frequency range was experimentally demonstrated in 2005 by Shalaev et al. (at the telecom wavelength λ = 1.5 μm) and by Brueck et al. (at λ = 2 μm) at nearly the same time.
Effective medium model
An effective (transmission) medium approximation describes material slabs that, when reacting to an external excitation, are "effectively" homogeneous, with corresponding "effective" parameters that include "effective" ε and μ and apply to the slab as a whole. Individual inclusions or cells may have values different from the slab. However, there are cases where the effective medium approximation does not hold and one needs to be aware of its applicability.
Coupling magnetism
Negative magnetic permeability was originally achieved in a left-handed medium at microwave frequencies by using arrays of split-ring resonators. In most natural materials, the magnetically coupled response starts to taper off at frequencies in the gigahertz range, which implies that significant magnetism does not occur at optical frequencies. The effective permeability of such materials is unity, μeff = 1. Hence, the magnetic component of a radiated electromagnetic field has virtually no effect on natural occurring materials at optical frequencies.
In metamaterials the cell acts as a meta-atom, a larger scale magnetic dipole, analogous to the picometer-sized atom. For meta-atoms constructed from gold, μ < 0 can be achieved at telecommunication frequencies but not at visible frequencies. The visible frequency has been elusive because the plasma frequency of metals is the ultimate limiting condition.
Design and fabrication
Optical wavelengths are much shorter than microwaves, making subwavelength optical metamaterials more difficult to realize. Microwave metamaterials can be fabricated from circuit board materials, while lithography techniques must be employed to produce PMs.
Successful experiments used a periodic arrangement of short wires or metallic pieces with varied shapes. In a different study the whole slab was electrically connected.
Fabrication techniques include electron beam lithography, nanostructuring with a focused ion beam and interference lithography.
In 2014 a polarization-insensitive metamaterial prototype was demonstrated to absorb energy over a broad band (a super-octave) of infrared wavelengths. The material displayed greater than 98% measured average absorptivity that it maintained over a wide ±45° field-of-view for mid-infrared wavelengths between 1.77 and 4.81 μm. One use is to conceal objects from infrared sensors. Palladium provided greater bandwidth than silver or gold. A genetic algorithm randomly modified an initial candidate pattern, testing and eliminating all but the best. The process was repeated over multiple generations until the design became effective.
The metamaterial is made of four layers on a silicon substrate. The first layer is palladium, covered by polyimide (plastic) and a palladium screen on top. The screen has sub-wavelength cutouts that block the various wavelengths. A polyimide layer caps the whole absorber. It can absorb 90 percent of infrared radiation at up to a 55 degree angle to the screen. The layers do not need accurate alignment. The polyimide cap protects the screen and helps reduce any impedance mismatch that might occur when the wave crosses from the air into the device.
Research
One-way transmission
In 2015 visible light joined microwave and infrared NIMs in propagating in only one direction. (One-way "mirrors" instead reduce light transmission in the reverse direction and require low light levels behind the mirror to work.)
The material combined two optical nanostructures: a multi-layered block of alternating silver and glass sheets and metal grates. The silver-glass structure is a "hyperbolic" metamaterial, which treats light differently depending on which direction the waves are traveling. Each layer is tens of nanometers thick—much thinner than visible light's 400 to 700 nm wavelengths, making the block opaque to visible light, although light entering at certain angles can propagate inside the material.
Adding chromium grates with sub-wavelength spacings bent incoming red or green light waves enough that they could enter and propagate inside the block. On the opposite side of the block, another set of grates allowed light to exit, angled away from its original direction. The spacing of the exit grates was different from that of the entrance grates, bending incident light so that external light could not enter the block from that side. Around 30 times more light passed through in the forward direction than in reverse. The intervening blocks reduced the need for precise alignment of the two grates with respect to each other.
Such structures hold potential for applications in optical communication—for instance, they could be integrated into photonic computer chips that split or combine signals carried by light waves. Other potential applications include biosensing using nanoscale particles to deflect light to angles steep enough to travel through the hyperbolic material and out the other side.
Lumped circuit elements
By employing a combination of plasmonic and non-plasmonic nanoparticles, lumped circuit element nanocircuits at infrared and optical frequencies appear to be possible. Conventional lumped circuit elements are not available in a conventional way.
Subwavelength lumped circuit elements proved workable in the microwave and radio frequency (RF) domain. The lumped element concept allowed for element simplification and circuit modularization. Nanoscale fabrication techniques exist to accomplish subwavelength geometries.
Cell design
Metals such as gold, silver, aluminum and copper conduct currents at RF and microwave frequencies. At optical frequencies characteristics of some noble metals are altered. Rather than normal current flow, plasmonic resonances occur as the real part of the complex permittivity becomes negative. Therefore, the main current flow is actually the electric displacement current density ∂D / ∂t, and can be termed as the “flowing optical current".
At subwavelength scales the cell's impedance becomes dependent on shape, size, material and the optical frequency illumination. The particle's orientation with the optical electric field may also help determine the impedance. Conventional silicon dielectrics have the real permittivity component εreal > 0 at optical frequencies, causing the nanoparticle to act as a capacitive impedance, a nanocapacitor. Conversely, if the material is a noble metal such as gold or silver, with εreal < 0, then it takes on inductive characteristics, becoming a nanoinductor. Material loss is represented as a nano-resistor.
Tunability
The most commonly applied scheme to achieve a tunable index of refraction is electro-optical tuning. Here the change in refractive index is proportional to either the applied electric field, or is proportional to the square modulus of the electric field. These are the Pockels effect and Kerr effects, respectively.
An alternative is to employ a nonlinear optical material and depend on the optical field intensity to modify the refractive index or magnetic parameters.
Layering
Stacking layers produces NIMs at optical frequencies. However, the surface configuration (non-planar, bulk) of the SRR normally prevents stacking. Although a single-layer SRR structure can be constructed on a dielectric surface, it is relatively difficult to stack these bulk structures due to alignment tolerance requirements. A stacking technique for SRRs was published in 2007 that uses dielectric spacers to apply a planarization procedure to flatten the SRR layer. It appears that arbitrary many layers can be made this way, including any chosen number of unit cells and variant spatial arrangements of individual layers.
Frequency doubling
In 2014 researchers announced a 400 nanometer thick frequency-doubling non-linear mirror that can be tuned to work at near-infrared to mid-infrared to terahertz frequencies. The material operates with much lower intensity light than traditional approaches. For a given input light intensity and structure thickness, the metamaterial produced approximately one million times higher intensity output. The mirrors do not require matching the phase velocities of the input and output waves.
It can produce giant nonlinear response for multiple nonlinear optical processes, such as second harmonic, sum- and difference-frequency generation, as well a variety of four-wave mixing processes. The demonstration device converted light with a wavelength of 8000 to 4000 nanometers.
The device is made of a stack of thin layers of indium, gallium and arsenic or aluminum, indium and arsenic. 100 of these layers, each between one and twelve nanometers thick, were faced on top by a pattern of asymmetrical, crossed gold nanostructures that form coupled quantum wells and a layer of gold on the bottom.
Potential applications include remote sensing and medical applications that call for compact laser systems.
Other
Dyakonov surface waves (DSWs) relate to the birefringence arising from photonic-crystal and metamaterial anisotropy. Recently photonic metamaterials have operated at 780 nanometers (near-infrared), 813 nm and 772 nm.
See also
Terahertz gap
Negative index metamaterials
History of metamaterials
Metamaterial cloaking
Metamaterial
Metamaterial antennas
Nonlinear metamaterials
Photonic crystal
Seismic metamaterials
Split-ring resonator
Acoustic metamaterials
Metamaterial absorber
Plasmonic metamaterials
Terahertz metamaterials
Tunable metamaterials
Mechanical metamaterial
Transformation optics
Theories of cloaking
Metamaterials (journal)
Metamaterials Handbook
Metamaterials: Physics and Engineering Explorations
References
General references
Shalaev, Vladimir M., et al. Negative Index of Refraction in Optical Metamaterials arXiv.org. 17 pages.
Shalaev, Vladimir M., et al. Negative index of refraction in optical metamaterials Opt. Lett. Vol. 30. 2005-12-30. 3 pages
External links
Optics and photonics: Physics enhancing our lives
OPAL: A Computational Tool For Photonics
Experimental Verification of Reversed Cherenkov Radiation...
Oriented Assembly of Metamaterials Particle self-assembly suggested for assembly of metamaterials at optical wavelengths.
Subpicosecond Optical Switching with a Negative Index Metamaterial
Metamaterials
Photonics | Photonic metamaterial | [
"Materials_science",
"Engineering"
] | 3,223 | [
"Metamaterials",
"Materials science"
] |
24,708,433 | https://en.wikipedia.org/wiki/G%C3%B6del%27s%20%CE%B2%20function | In mathematical logic, Gödel's β function is a function used to permit quantification over finite sequences of natural numbers in formal theories of arithmetic. The β function is used, in particular, in showing that the class of arithmetically definable functions is closed under primitive recursion, and therefore includes all primitive recursive functions.
The β function was introduced without the name in the proof of the first of Gödel's incompleteness theorems (Gödel 1931). The β function lemma given below is an essential step of that proof. Gödel gave the β function its name in (Gödel 1934).
Definition
The function β takes three natural numbers as arguments. It is defined as
β(x, y, z) = rem(x, 1 + (z + 1)·y),
where rem(a, b) denotes the remainder after integer division of a by b (Mendelson 1997:186).
Special schema without parameters
The β function is arithmetically definable in an obvious way, because it uses only arithmetic operations and the remainder function which is arithmetically definable. It is therefore representable in Robinson arithmetic and stronger theories such as Peano arithmetic. By fixing the first two arguments appropriately, one can arrange that the values obtained by varying the final argument from 0 to n run through any specified (n+1)-tuple of natural numbers (the β lemma described in detail below). This allows simulating the quantification over sequences of natural numbers of arbitrary length, which cannot be done directly in the language of arithmetic, by quantification over just two numbers, to be used as the first two arguments of the β function.
For example, if f is a function defined by primitive recursion on a recursion variable n, say by f(0) = c and f(n+1) = g(n, f(n)), then to express f(n) = y one would like to say: there exists a sequence a0, a1, ..., an such that a0 = c, an = y and for all i < n one has g(i, ai) = ai+1. While that is not possible directly, one can say instead: there exist natural numbers a and b such that β(a,b,0) = c, β(a,b,n) = y and for all i < n one has g(i, β(a,b,i)) = β(a,b,i+1).
General schema with parameters
The primitive recursion schema as given may be replaced by one which makes use of fewer parameters. Let π be an elementary pairing function, and let π1 and π2 be its projection functions for inversion.
Theorem: Any function constructible via the clauses of primitive recursion using the standard primitive recursion schema is constructible when the schema is replaced with the following.
This is proven by providing two intermediate schemata for primitive recursion, starting with a function defined via the standard schema, and translating the definition into terms of each intermediate schema and finally into terms of the above schema. The first intermediate schemata is as follows:
Translation of the standard definition of a primitive recursive function to the intermediate schema is done inductively, where an elementary pairing function is used to reinterpret the definition of a -ary primitive recursive function into a -ary primitive recursive function, terminating the induction at .
The second intermediate schema is as follows, with the parameter eliminated.
Translation to it is accomplished by pairing and together to use one parameter for handling both, namely by setting , , and recovering from these paired images by taking .
Finally, translation of the intermediate schema into the parameter-eliminated schema is done with a similar pairing and unpairing of and . Composing these three translations gives a definition in the original parameter-free schema.
This allows primitive recursion to be formalized in Peano arithmetic, due to its lack of extra n-ary function symbols.
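For concreteness, one standard choice of elementary pairing function is Cantor's; the sketch below (an illustration only, not Gödel's own coding) implements it together with its two projections.

```python
from math import isqrt

# One standard elementary pairing function (Cantor's) and its projections;
# an illustration only, not the specific coding used by Gödel.
def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    w = (isqrt(8 * z + 1) - 1) // 2        # invert the triangular number
    y = z - w * (w + 1) // 2
    return w - y, y                         # the two projections (x, y)

assert all(unpair(pair(x, y)) == (x, y) for x in range(50) for y in range(50))
```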
The β function lemma
The utility of the β function comes from the following result (Gödel 1931, Hilfssatz 1, p. 192-193), which is the purpose of the β function in Gödel's incompleteness proof. This result is explained in more detail than in Gödel's proof in (Mendelson 1997:186) and (Smith 2013:113-118).
β function lemma. For any sequence of natural numbers (k0, k1, ..., kn), there are natural numbers b and c such that, for every natural number 0 ≤ i ≤ n, β(b, c, i) = ki.
As an example, the sequence (2,0,2,1) can be encoded by b= and c=24, since
The proof of the β function lemma makes use of the Chinese remainder theorem.
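The lemma can also be made concrete with a small program: the sketch below implements β as defined above (with argument names of my choosing) and finds a pair (b, c) for a given sequence using the usual factorial-modulus Chinese-remainder construction. The particular b and c it produces need not match the values quoted in the example above.

```python
from math import factorial, prod

def beta(x, y, z):
    # the beta function as defined above (argument names are mine)
    return x % (1 + (z + 1) * y)

def encode(ks):
    """Find (b, c) with beta(b, c, i) == ks[i]: the usual factorial-modulus
    Chinese-remainder construction (a sketch, not Goedel's exact constants)."""
    n = len(ks) - 1
    c = factorial(max([n] + list(ks)))
    moduli = [1 + (i + 1) * c for i in range(n + 1)]   # pairwise coprime
    # brute-force search is enough for a small illustration
    b = next(b for b in range(prod(moduli))
             if all(b % m == k for m, k in zip(moduli, ks)))
    return b, c

b, c = encode([2, 0, 2, 1])
print(b, c, [beta(b, c, i) for i in range(4)])         # recovers [2, 0, 2, 1]
```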
See also
Gödel numbering for sequences
Gödel's incompleteness theorems
Diagonal lemma
References
in (Gödel 1986)
Notes taken by Stephen C. Kleene and John B. Rosser during lectures given at the Institute for Advanced Study. Reprinted in (Davis 1965)
Mathematical logic | Gödel's β function | [
"Mathematics"
] | 1,056 | [
"Mathematical logic"
] |
24,710,118 | https://en.wikipedia.org/wiki/Pine%20bolete | Pine bolete is a common name for several mushrooms and may refer to:
Suillus bellinii
Boletus pinophilus | Pine bolete | [
"Biology"
] | 29 | [
"Set index articles on fungus common names",
"Set index articles on organisms"
] |
24,710,843 | https://en.wikipedia.org/wiki/C22H24N2O8 | The molecular formula C22H24N2O8 (molar mass: 444.43 g/mol, exact mass: 444.1533 u) may refer to:
Tetracycline
Doxycycline
Molecular formulas | C22H24N2O8 | [
"Physics",
"Chemistry"
] | 53 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,711,006 | https://en.wikipedia.org/wiki/C10H11NO3 | {{DISPLAYTITLE:C10H11NO3}}
The molecular formula C10H11NO3 (molar mass: 193.20 g/mol, exact mass: 193.0739 u) may refer to:
Actarit
Betamipron
Methylenedioxycathinone
Methylhippuric acid | C10H11NO3 | [
"Chemistry"
] | 70 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
24,711,058 | https://en.wikipedia.org/wiki/Relay%20network | A relay network is a broad class of network topology commonly used in wireless networks, where the source and destination are interconnected by means of some nodes. In such a network the source and destination cannot communicate to each other directly because the distance between the source and destination is greater than the transmission range of both of them, hence the need for intermediate node(s) to relay.
A relay network is a type of network used to send information between two devices, e.g. a server and a computer, that are too far away to send the information to each other directly. Thus the network must send or "relay" the information to different devices, referred to as nodes, that pass on the information to its destination. A well-known example of a relay network is the Internet. A user can view a web page from a server halfway around the world by sending and receiving the information through a series of connected nodes.
In many ways, a relay network resembles a chain of people standing together. One person has a note he needs to pass to the girl at the end of the line. He is the sender, she is the recipient, and the people in between them are the messengers, or the nodes. He passes the message to the first node, or person, who passes it to the second and so on until it reaches the girl and she reads it.
The people might stand in a circle, however, instead of a line. Each person is close enough to reach the person on either side of him and across from him. Together the people represent a network and several messages can now pass around or through the network in different directions at once, as opposed to the straight line that could only run messages in a specific direction. This concept, the way a network is laid out and how it shares data, is known as network topology. Relay networks can use many different topologies, from a line to a ring to a tree shape, to pass along information in the fastest and most efficient way possible.
Often the relay network is complex and branches off in multiple directions to connect many servers and computers. Where two lines from two different computers or servers meet forms the nodes of the relay network. Two computer lines might run into the same router, for example, making this the node.
Wireless networks also take advantage of the relay network system. A laptop, for example, might connect to a wireless network which sends and receives information through another network and another until it reaches its destination. Even though not all parts of the network have physical wires, they still connect to other devices that function as the nodes.
This type of network holds several advantages. Information can travel long distances, even if the sender and receiver are far apart. It also speeds up data transmission by choosing the best path to travel between nodes to the receiver's computer. If one node is too busy, the information is simply routed to a different one. Without relay networks, sending an email from one computer to another would require the two computers be hooked directly together before it could work.
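A toy sketch of this idea is given below; the node names and topology are invented. It finds a chain of intermediate nodes between a source and a destination that cannot reach each other directly, then forwards the message hop by hop along that chain.

```python
from collections import deque

# Toy sketch (node names and topology are invented): find a chain of
# intermediate nodes and relay a message hop by hop.
links = {
    "sender":   ["node_a"],
    "node_a":   ["sender", "node_b", "node_c"],
    "node_b":   ["node_a", "receiver"],
    "node_c":   ["node_a"],
    "receiver": ["node_b"],
}

def relay_path(src, dst):
    parent, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                       # reconstruct the hop sequence
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in links[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

print(" -> ".join(relay_path("sender", "receiver")))
# sender -> node_a -> node_b -> receiver
```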
Neural Networks
An array of adaptive units receives its input signals through a relaying network.
Examples
The TOR network is an example of a relay network, as data transfer on the TOR network takes place over TOR relays: the data is transmitted over multiple relay nodes before it reaches the client node.
References
External links
TOR Project Website
Computer networks engineering | Relay network | [
"Technology",
"Engineering"
] | 679 | [
"Computer networks engineering",
"Computer engineering"
] |
33,334,030 | https://en.wikipedia.org/wiki/Duocarmycin | The duocarmycins are members of a series of related natural products first isolated from Streptomyces bacteria in 1978. They are notable for their extreme cytotoxicity and thus represent a class of exceptionally potent antitumour antibiotics.
Biological activity
As small-molecule, synthetic, DNA minor groove binding alkylating agents, duocarmycins are suitable to target solid tumors. They bind to the minor groove of DNA and alkylate the nucleobase adenine at the N3 position. The irreversible alkylation of DNA disrupts the nucleic acid architecture, which eventually leads to tumor cell death. Analogues of naturally occurring antitumour agents, such as duocarmycins, represent a new class of highly potent antineoplastic compounds.
The work of Dale L. Boger and others created a better understanding of the pharmacophore and mechanism of action of the duocarmycins. This research has led to synthetic analogs including adozelesin, bizelesin, and carzelesin which progressed into clinical trials for the treatment of cancer. Similar research that Boger utilized for comparison to his results involving elimination of cancerous tumors and antigens was centered around the use of similar immunoconjugates that were introduced to cancerous colon cells. These studies related to Boger's research involving antigen-specificity that is necessary to the success of the duocarmycins as antitumor treatments.
Duocarmycin analogues vs tubulin binders
The duocarmycins have shown activity in a variety of multi-drug resistant (MDR) models. Agents in this class have potency in the low picomolar range. This makes them suitable for maximizing the cell-killing potency of antibody-drug conjugates to which they are attached.
Duocarmycins
Antibody-drug conjugates
DNA-modifying agents such as duocarmycin are being used in the development of antibody-drug conjugates (ADCs). Scientists at Netherlands-based Byondis (formerly Synthon) have combined unique linkers with duocarmycin derivatives that have a hydroxyl group, which is crucial for biological activity. Using this technology, scientists aim to create ADCs with an optimal therapeutic window, balancing the effect of potent cell-killing agents on tumor cells versus healthy cells.
Synthetic analogs
The synthetic analogs of duocarmycins include adozelesin, bizelesin, and carzelesin. As members of the cyclopropylpyrroloindole family, these investigational drugs have progressed into clinical trials for the treatment of cancer.
Bizelesin
Bizelesin is an antineoplastic antibiotic that binds to the minor groove of DNA and induces interstrand cross-linking of DNA, thereby inhibiting DNA replication and RNA synthesis. Bizelesin also enhances p53 and p21 induction and triggers G2/M cell-cycle arrest, resulting in cell senescence without apoptosis.
References
Experimental cancer drugs
Alkylating antineoplastic agents
Alkaloids
Antineoplastic drugs
Biotechnology
Chemotherapeutic adjuvants
Antibiotics | Duocarmycin | [
"Chemistry",
"Biology"
] | 675 | [
"Biomolecules by chemical classification",
"Natural products",
"Biotechnology products",
"Biotechnology",
"Organic compounds",
"Antibiotics",
"nan",
"Biocides",
"Alkaloids"
] |
33,335,738 | https://en.wikipedia.org/wiki/Saturate%2C%20aromatic%2C%20resin%20and%20asphaltene | Saturate, Aromatic, Resin and Asphaltene (SARA) is an analysis method that divides crude oil components according to their polarizability and polarity. The saturate fraction consists of nonpolar material including linear, branched, and cyclic saturated hydrocarbons (paraffins). Aromatics, which contain one or more aromatic rings, are slightly more polarizable. The remaining two fractions, resins and asphaltenes, have polar substituents. The distinction between the two is that asphaltenes are insoluble in an excess of heptane (or pentane) whereas resins are miscible with heptane (or pentane).
Method description
There are three main methods of obtaining SARA results, and one has lately emerged as the most popular: the Iatroscan TLC-FID, which combines thin-layer chromatography (TLC) with flame ionization detection (FID). An older standardized procedure is referred to as IP-143, and SARA numbers obtained by other analyses might not correspond to the numbers obtained with IP-143. It is therefore always important to know the analysis method when comparing SARA numbers.
TLC-FID is roughly 100 times more sensitive than the older methods and much faster, taking about 30 seconds per sample instead of a day. In comparison, the IP-143 method can take up to 3 days and offers little analytical precision.
See also
Crude oil assay
PONA number
PIONA
PNA analysis
References
Petroleum technology | Saturate, aromatic, resin and asphaltene | [
"Chemistry",
"Engineering"
] | 320 | [
"Petroleum engineering",
"Petroleum technology"
] |
43,092,207 | https://en.wikipedia.org/wiki/Formal%20ball | In topology, a branch of mathematics, a formal ball is an extension of the notion of ball to allow unbounded and negative radius. The concept of formal ball was introduced by Weihrauch and Schreiber in 1981 and the negative radius case (the generalized formal ball) by Tsuiki and Hattori in 2008.
Specifically, if (X, d) is a metric space, then an element of X × R+ is a formal ball, where R+ is the set of nonnegative real numbers. Elements of X × R, where the radius may also be negative, are known as generalized formal balls.
Formal balls possess a partial order defined by (x, r) ⊑ (y, s) if d(x, y) ≤ r − s.
Generalized formal balls are interesting because this partial order works just as well for X × R as for X × R+, even though a generalized formal ball with negative radius does not correspond to a subset of X.
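A minimal Python sketch of this order relation, assuming the convention d(x, y) ≤ r − s stated above; the distance function is supplied by the caller, and the example points and radii are made up for illustration.

```python
import math

def leq(ball_a, ball_b, dist):
    """Formal-ball order: (x, r) <= (y, s) iff dist(x, y) <= r - s.

    Works unchanged for negative radii, i.e. for generalized formal balls.
    """
    (x, r), (y, s) = ball_a, ball_b
    return dist(x, y) <= r - s

euclid = lambda p, q: math.dist(p, q)

# (0,0) with radius 3 lies below (1,0) with radius 1: the smaller
# ball is geometrically contained in the larger one.
print(leq(((0.0, 0.0), 3.0), ((1.0, 0.0), 1.0), euclid))   # True
# A generalized formal ball with negative radius still compares:
print(leq(((0.0, 0.0), 0.5), ((0.0, 0.0), -0.25), euclid)) # True
```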
Formal balls possess the Lawson topology and the Martin topology.
References
K. Weihrauch and U. Schreiber 1981. "Embedding metric spaces into CPOs". Theoretical computer science, 16:5-24.
H. Tsuiki and Y. Hattori 2008. "Lawson topology of the space of formal balls and the hyperbolic topology of a metric space". Theoretical computer science, 405:198-205
Y. Hattori 2010. "Order and topological structures of posets of the formal balls on metric spaces". Memoirs of the Faculty of Science and Engineering. Shimane University. Series B 43:13-26
Topology | Formal ball | [
"Physics",
"Mathematics"
] | 282 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
43,092,670 | https://en.wikipedia.org/wiki/Batch%20cryptography | Batch cryptography is the area of cryptology where cryptographic protocols are studied and developed for doing cryptographic processes like encryption/decryption, key exchange, authentication, etc. in a batch way instead of one by one. The concept of batch cryptography was introduced by Amos Fiat in 1989.
References
Cryptography | Batch cryptography | [
"Mathematics",
"Engineering"
] | 65 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
43,095,698 | https://en.wikipedia.org/wiki/Multifocal%20diffractive%20lens | A multifocal diffractive lens is a diffractive optical element (DOE) that allows a single incident beam to be focused simultaneously at several positions along the propagation axis.
Principle of operation
An incident laser beam is deflected by a grooved diffraction pattern into axial diffraction orders along its optical axis. The foci appear around the far-field position. With an additional focusing lens, the foci from the multifocal lens appear at certain distances from the focal point of that lens.
Theory
The locations of the multifocal spots are a function of the refractive focal length f_Refractive and the predetermined diffractive focal length f_Diffractive. The focal spot at the "zero" order corresponds to the refractive focal length of the lens being used.
The distance between the focal spots can be described by the equation
1/f_m = 1/f_Refractive + m/f_Diffractive,
where f_m is the focal length for the mth diffractive order,
f_Refractive is the focal length of the refractive lens, and
f_Diffractive is the focal length of the diffractive lens.
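A short numerical sketch in Python, assuming the usual thin-lens power addition stated above; the focal lengths used below are illustrative and not taken from any particular lens.

```python
def focal_length(m, f_refractive, f_diffractive):
    """Combined focal length for diffractive order m, by thin-lens power addition."""
    return 1.0 / (1.0 / f_refractive + m / f_diffractive)

# Illustrative values: a 100 mm refractive lens with a 1000 mm diffractive design.
for m in (-1, 0, 1):
    print(f"order {m:+d}: {focal_length(m, 100.0, 1000.0):.1f} mm")
# order -1: 111.1 mm, order +0: 100.0 mm, order +1: 90.9 mm
```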
Applications
Laser cutting
Laser drilling
Microscopy
Ophthalmology: Multifocal contact lenses and multifocal intraocular lenses
External links
HOLOOR Application note for Multifocal Lenses
Interactive Optical calculator for Multifocal Lenses
Beam Propagation through multifocal lens (Movie)
References
Diffraction
Lenses | Multifocal diffractive lens | [
"Physics",
"Chemistry",
"Materials_science"
] | 277 | [
"Crystallography",
"Diffraction",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
43,099,469 | https://en.wikipedia.org/wiki/Loop%20representation%20in%20gauge%20theories%20and%20quantum%20gravity | Attempts have been made to describe gauge theories in terms of extended objects such as Wilson loops and holonomies. The loop representation is a quantum hamiltonian representation of gauge theories in terms of loops. The aim of the loop representation in the context of Yang–Mills theories is to avoid the redundancy introduced by Gauss gauge symmetries allowing to work directly in the space of physical states (Gauss gauge invariant states). The idea is well known in the context of lattice Yang–Mills theory (see lattice gauge theory). Attempts to explore the continuous loop representation was made by Gambini and Trias for canonical Yang–Mills theory, however there were difficulties as they represented singular objects. As we shall see the loop formalism goes far beyond a simple gauge invariant description, in fact it is the natural geometrical framework to treat gauge theories and quantum gravity in terms of their fundamental physical excitations.
The introduction by Ashtekar of a new set of variables (Ashtekar variables) cast general relativity in the same language as gauge theories and allowed one to apply loop techniques as a natural nonperturbative description of Einstein's theory. In canonical quantum gravity the difficulties in using the continuous loop representation are cured by the spatial diffeomorphism invariance of general relativity. The loop representation also provides a natural solution of the spatial diffeomorphism constraint, making a connection between canonical quantum gravity and knot theory. Surprisingly, there was a class of loop states that provided exact (if only formal) solutions to Ashtekar's original (ill-defined) Wheeler–DeWitt equation. Hence an infinite set of exact (if only formal) solutions had been identified for all the equations of canonical quantum general relativity in this representation! This generated a lot of interest in the approach and eventually led to loop quantum gravity (LQG).
The loop representation has found application in mathematics. If topological quantum field theories are formulated in terms of loops, the resulting quantities should be what are known as knot invariants. Topological field theories only involve a finite number of degrees of freedom and so are exactly solvable. As a result, they provide concrete computable expressions that are invariants of knots. This was precisely the insight of Edward Witten who noticed that computing loop dependent quantities in Chern–Simons and other three-dimensional topological quantum field theories one could come up with explicit, analytic expressions for knot invariants. For his work in this, in 1990 he was awarded the Fields Medal. He is the first and so far the only physicist to be awarded the Fields Medal, often viewed as the greatest honour in mathematics.
Gauge invariance of Maxwell's theory
The idea of gauge symmetries was introduced in Maxwell's theory. Maxwell's equations are
where ρ is the charge density and J the current density. The last two equations can be solved by writing the fields in terms of a scalar potential, φ, and a vector potential, A:
.
The potentials uniquely determine the fields, but the fields do not uniquely determine the potentials - we can make the changes:
without affecting the electric and magnetic fields, where is an arbitrary function of space-time . These are called gauge transformations. There is an elegant relativistic notation: the gauge field is
and the above gauge transformations read,
.
The so-called field strength tensor is introduced,
which is easily shown to be invariant under gauge transformations. In components,
.
Maxwell's source-free action is given by:
.
The ability to vary the gauge potential at different points in space and time (by changing the arbitrary function in the gauge transformation) without changing the physics is called a local invariance. Electromagnetic theory possesses the simplest kind of local gauge symmetry, called U(1) (see unitary group). A theory that displays local gauge invariance is called a gauge theory. In order to formulate other gauge theories we turn the above reasoning inside out. This is the subject of the next section.
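For reference, the pieces described above can be written compactly in standard notation (sign conventions vary between textbooks); this block is a summary, not a quotation from the original text.

```latex
% Standard notation; sign conventions vary between textbooks.
A_\mu \;\to\; A_\mu + \partial_\mu \chi , \qquad
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu , \qquad
\delta F_{\mu\nu} = \partial_\mu \partial_\nu \chi - \partial_\nu \partial_\mu \chi = 0 , \qquad
S = -\tfrac{1}{4} \int d^4x \, F_{\mu\nu} F^{\mu\nu} .
```

The vanishing of δF_{μν} follows simply because partial derivatives commute, which is the content of the gauge invariance argument above.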
The connection and gauge theories
The connection and Maxwell's theory
We know from quantum mechanics that if we replace the wave-function, , describing the electron field by
that it leaves physical predictions unchanged. We consider the imposition of local invariance on the phase of the electron field,
The problem is that derivatives of are not covariant under this transformation:
.
In order to cancel out the second unwanted term, one introduces a new derivative operator that is covariant. To construct , one introduces a new field, the connection :
.
Then
The term is precisely cancelled out by requiring the connection field transforms as
.
We then have that
.
Note that is equivalent to
which looks the same as a gauge transformation of the gauge potential of Maxwell's theory. It is possible to construct an invariant action for the connection field itself. We want an action that only has two derivatives (since actions with higher derivatives are not unitary). Define the quantity:
.
The unique action with only two derivatives is given by:
.
Therefore, one can derive electromagnetic theory from arguments based solely on symmetry.
The connection and Yang-Mills gauge theory
We now generalize the above reasoning to general gauge groups. One begins with the generators of some Lie algebra:
Let there be a fermion field that transforms as
Again the derivatives of are not covariant under this transformation. We introduce a covariant derivative
with connection field given by
We require that transforms as:
.
We define the field strength operator
.
As is covariant, this means that the tensor is also covariant:
Note that is only invariant under gauge transformations if is a scalar, that is, only in the case of electromagnetism.
We can now construct an invariant action out of this tensor. Again we want an action that only has two derivatives. The simplest choice is the trace of the commutator:
The unique action with only two derivatives is given by:
This is the action for Yang–Mills theory.
The loop representation of the Maxwell theory
We consider a change of representation in the quantum Maxwell gauge theory. The idea is to introduce a basis of states labeled by loops whose inner product with the connection states is given by
The loop functional is the Wilson loop for the abelian case.
The loop representation of Yang–Mills theory
We consider for simplicity (and because later we will see this is the relevant gauge group in LQG) an SU(2) Yang–Mills theory in four dimensions. The field variable of the continuous theory is an SU(2) connection (or gauge potential), where the internal index takes values in the Lie algebra of SU(2). We can write for this field
where are the generators, that is the Pauli matrices multiplied by . note that unlike with Maxwell's theory, the connections are matrix-valued and don't commute, that is they are non-Abelian gauge theories. We must take this into account when defining the corresponding version of the holonomy for Yang–Mills theory.
We first describe the quantum theory in terms of connection variable.
The connection representation
In the connection representation the configuration variable is the connection and its conjugate momentum is the (densitized) triad. It is most natural to consider wavefunctions that are functionals of the connection. This is known as the connection representation. The canonical variables get promoted to quantum operators:
(analogous to the position representation ) and the triads are functional derivatives,
(analogous to )
The holonomy and Wilson loop
Let us return to the classical Yang–Mills theory. It is possible to encode the gauge invariant information of the theory in terms of `loop-like' variables.
We need the notion of a holonomy. A holonomy is a measure of how much the initial and final values of a spinor or vector differ after parallel transport around a closed loop ; it is denoted
Knowledge of the holonomies is equivalent to knowledge of the connection, up to gauge equivalence. Holonomies can also be associated with an edge; under a Gauss Law these transform as
For a closed loop if we take the trace of this, that is, putting and summing we obtain
or
Thus the trace of an holonomy around a closed loop is gauge invariant. It is denoted
and is called a Wilson loop. The explicit form of the holonomy is
where is the curve along which the holonomy is evaluated, and is a parameter along the curve, denotes path ordering meaning factors for smaller values of appear to the left, and are matrices that satisfy the algebra
The Pauli matrices satisfy the above relation. It turns out that there are infinitely many more examples of sets of matrices that satisfy these relations, where each set comprises matrices with , and where none of these can be thought to `decompose' into two or more examples of lower dimension. They are called different irreducible representations of the algebra. The most fundamental representation being the Pauli matrices. The holonomy is labelled by a half integer according to the irreducible representation used.
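In the conventions most commonly used, the holonomy and the Wilson loop described above take the form below; this is standard notation supplied for readability, not a quotation from the original text.

```latex
% Standard form of the holonomy and the Wilson loop; conventions vary.
h_\gamma[A] \;=\; \mathcal{P} \exp\!\Big( \oint_\gamma A_a \, dx^a \Big),
\qquad A_a = A_a^i \, T^i ,
\qquad W_\gamma[A] \;=\; \mathrm{Tr}\, h_\gamma[A] .
```

Here 𝒫 denotes the path ordering described above and the T^i are the generators of the chosen irreducible representation.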
Giles' Reconstruction theorem of gauge potentials from Wilson loops
An important theorem about Yang–Mills gauge theories is Giles' theorem, according to which if one gives the trace of the holonomy of a connection for all possible loops on a manifold one can, in principle, reconstruct all the gauge invariant information of the connection. That is, Wilson loops constitute a basis of gauge invariant functions of the connection. This key result is the basis for the loop representation for gauge theories and gravity.
The loop transform and the loop representation
The use of Wilson loops explicitly solves the Gauss gauge constraint. As Wilson loops form a basis we can formally expand any Gauss gauge invariant function as,
.
This is called the loop transform. We can see the analogy with going to the momentum representation in quantum mechanics. There one has a basis of states labelled by a number and one expands
and works with the coefficients of the expansion .
The inverse loop transform is defined by
This defines the loop representation. Given an operator in the connection representation,
one should define the corresponding operator on in the loop representation via,
where is defined by the usual inverse loop transform,
A transformation formula giving the action of the operator on in terms of the action of the operator on is then obtained by equating the R.H.S. of with the R.H.S. of with substituted into , namely
or
where by we mean the operator but with the reverse factor ordering (remember from simple quantum mechanics where the product of operators is reversed under conjugation). We evaluate the action of this operator on the Wilson loop as a calculation in the connection representation and rearranging the result as a manipulation purely in terms of loops (one should remember that when considering the action on the Wilson loop one should choose the operator one wishes to transform with the opposite factor ordering to the one chosen for its action on wavefunctions ).
The loop representation of quantum gravity
Ashtekar–Barbero variables of canonical quantum gravity
The introduction of Ashtekar variables cast general relativity in the same language as gauge theories. It was in particular the inability to have good control over the space of solutions to the Gauss' law and spatial diffeomorphism constraints that led Rovelli and Smolin to consider a new representation – the loop representation.
To handle the spatial diffeomorphism constraint we need to go over to the loop representation. The above reasoning gives the physical meaning of the operator . For example, if corresponded to a spatial diffeomorphism, then this can be thought of as keeping the connection field of where it is while performing a spatial diffeomorphism on instead. Therefore, the meaning of is a spatial diffeomorphism on , the argument of .
In the loop representation we can then solve the spatial diffeomorphism constraint by considering functions of loops that are invariant under spatial diffeomorphisms of the loop . That is, we construct what mathematicians call knot invariants. This opened up an unexpected connection between knot theory and quantum gravity.
The loop representation and eigenfunctions of geometric quantum operators
The easiest geometric quantity is the area. Let us choose coordinates so that the surface is characterized by . The area of small parallelogram of the surface is the product of length of each side times where is the angle between the sides. Say one edge is given by the vector and the other by then,
From this we get the area of the surface to be given by
where and is the determinant of the metric induced on . This can be rewritten as
The standard formula for an inverse matrix is
Note the similarity between this and the expression for . But in Ashtekar variables we have . Therefore,
According to the rules of canonical quantization we should promote the triads to quantum operators,
It turns out that the area can be promoted to a well defined quantum operator despite the fact that we are dealing with product of two functional derivatives and worse we have a square-root to contend with as well. Putting , we talk of being in the J-th representation. We note that . This quantity is important in the final formula for the area spectrum. We simply state the result below,
where the sum is over all edges of the Wilson loop that pierce the surface .
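For reference, the commonly quoted form of this result is given below; the prefactor depends on the conventions chosen, in particular the Immirzi parameter γ.

```latex
% Commonly quoted form of the area spectrum; prefactors depend on conventions.
\hat{A}_S \, W_\gamma[A] \;=\; 8\pi \gamma \, \ell_P^2 \sum_I \sqrt{\, j_I \,(j_I + 1)\,} \;\; W_\gamma[A] ,
```

where the sum runs over the punctures I at which the loop pierces the surface S, ℓ_P is the Planck length, and j_I labels the representation carried by the edge at each puncture.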
The formula for the volume of a region is given by
The quantization of the volume proceeds the same way as with the area. As we take the derivative, and each time we do so we bring down the tangent vector , when the volume operator acts on non-intersecting Wilson loops the result vanishes. Quantum states with non-zero volume must therefore involve intersections. Given that the anti-symmetric summation is taken over in the formula for the volume we would need at least intersections with three non-coplanar lines. Actually it turns out that one needs at least four-valent vertices for the volume operator to be non-vanishing.
Mandelstam identities: su(2) Yang–Mills
We now consider Wilson loops with intersections. We assume the real representation where the gauge group is SU(2). Wilson loops are an overcomplete basis, as there are identities relating different Wilson loops. These come about from the fact that Wilson loops are based on matrices (the holonomy) and these matrices satisfy identities, the so-called Mandelstam identities. Given any two matrices it is easy to check that,
This implies that given two loops and that intersect, we will have,
where by we mean the loop traversed in the opposite direction and means the loop obtained by going around the loop and then along . See figure below. This is called a Mandelstam identity of the second kind. There is the Mandelstam identity of the first kind . Spin networks are certain linear combinations of intersecting Wilson loops designed to address the over-completeness introduced by the Mandelstam identities.
Spin network states
In fact spin networks constitute a basis for all gauge invariant functions which minimize the degree of over-completeness of the loop basis, and for trivalent intersections eliminate it entirely.
As mentioned above the holonomy tells you how to propagate test spin half particles. A spin network state assigns an amplitude to a set of spin half particles tracing out a path in space, merging and splitting. These are described by spin networks : the edges are labelled by spins together with `intertwiners' at the vertices which are prescription for how to sum over different ways the spins are rerouted. The sum over rerouting are chosen as such to make the form of the intertwiner invariant under Gauss gauge transformations.
Uniqueness of the loop representation in LQG
Theorems establishing the uniqueness of the loop representation as defined by Ashtekar et al. (i.e. a certain concrete realization of a Hilbert space and associated operators reproducing the correct loop algebra – the realization that everybody was using) have been given by two groups (Lewandowski, Okolow, Sahlmann and Thiemann) and (Christian Fleischhack). Before this result was established it was not known whether there could be other examples of Hilbert spaces with operators invoking the same loop algebra, other realizations, not equivalent to the one that had been used so far.
Knot theory and loops in topological field theory
A common method of describing a knot (or link, which are knots of several components entangled with each other) is to consider its projected image onto a plane called a knot diagram. Any given knot (or link) can be drawn in many different ways using a knot diagram. Therefore, a fundamental problem in knot theory is determining when two descriptions represent the same knot. Given a knot diagram, one tries to find a way to assign a knot invariant to it, sometimes a polynomial – called a knot polynomial. Two knot diagrams with different polynomials generated by the same procedure necessarily correspond to different knots. However, if the polynomials are the same, it may not mean that they correspond to the same knot. The better a polynomial is at distinguishing knots the more powerful it is.
In 1984, Jones announced the discovery of a new link invariant, which soon led to a bewildering profusion of generalizations. He had found a new knot polynomial, the Jones polynomial. Specifically, it is an invariant of an oriented knot or link which assigns to each oriented knot or link a polynomial with integer coefficients.
In the late 1980s, Witten coined the term topological quantum field theory for a certain type of physical theory in which the expectation values of observable quantities are invariant under diffeomorphisms.
Witten gave a heuristic derivation of the Jones polynomial and its generalizations from Chern–Simons theory. The basic idea is simply that the vacuum expectation values of Wilson loops in Chern–Simons theory are link invariants because of the diffeomorphism-invariance of the theory. To calculate these expectation values, however, Witten needed to use the relation between Chern–Simons theory and a conformal field theory known as the Wess–Zumino–Witten model (or the WZW model).
References
Quantum gravity
Gauge theories
Knot theory | Loop representation in gauge theories and quantum gravity | [
"Physics"
] | 3,662 | [
"Quantum gravity",
"Unsolved problems in physics",
"Physics beyond the Standard Model"
] |
27,984,617 | https://en.wikipedia.org/wiki/ICI-199%2C441 | ICI-199,441 is a drug which acts as a potent and selective κ-opioid agonist, and has analgesic effects. It is a biased agonist of the KOR, and is one of relatively few KOR ligands that is G protein-biased rather than β-arrestin-biased.
See also
U-47700
U-50488
U-69,593
References
Acetamides
Biased ligands
Chloroarenes
Kappa-opioid receptor agonists
1-Pyrrolidinyl compounds
Synthetic opioids | ICI-199,441 | [
"Chemistry"
] | 121 | [
"Biased ligands",
"Signal transduction"
] |
27,984,902 | https://en.wikipedia.org/wiki/PRE-084 | PRE-084 is a sigma receptor agonist, selective for the σ1 subtype. It has nootropic and antidepressant actions in animal studies, as well as antitussive and reinforcing effects. PRE-084 increases the expression of GDNF.
References
Sigma agonists
Carboxylate esters
4-Morpholinyl compounds
Dissociative drugs | PRE-084 | [
"Chemistry"
] | 85 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
27,986,621 | https://en.wikipedia.org/wiki/Braking%20chopper | Braking choppers, sometimes also referred to as Braking units, are used in the DC voltage intermediate circuits of frequency converters to control voltage when the load feeds energy back to the intermediate circuit. This arises, for example, when a magnetized motor is being rotated by an overhauling load and so functions as a generator feeding power to the DC voltage intermediate circuit.
They are an application of the chopper principle, using the on-off control of a switching device.
Operation
A braking chopper is an electrical switch that limits the DC bus voltage by switching the braking energy to a resistor, where the braking energy is converted to heat. Braking choppers are automatically activated when the actual DC bus voltage exceeds a specified level that depends on the nominal voltage of the variable-frequency drive.
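A minimal control sketch in Python of the activation logic described above; the voltage thresholds are illustrative and not taken from any specific drive. The chopper transistor is switched with a small hysteresis band around the overvoltage limit so that the resistor only conducts while the bus is overcharged.

```python
def chopper_command(dc_bus_voltage, chopper_on, v_on=780.0, v_off=760.0):
    """Hysteresis control of a braking chopper (illustrative values in volts).

    The chopper is switched on when the DC bus rises above v_on and off
    again once it has fallen below v_off, so the braking resistor only
    dissipates energy while the bus voltage is too high.
    """
    if dc_bus_voltage >= v_on:
        return True
    if dc_bus_voltage <= v_off:
        return False
    return chopper_on          # inside the hysteresis band: keep last state

state = False
for v in (700, 770, 785, 775, 765, 755):
    state = chopper_command(v, state)
    print(v, "V ->", "chopper ON" if state else "chopper OFF")
```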
Benefits
Simple electrical construction and well-known technology
Low fundamental investment for chopper and resistor
The chopper works even if AC supply is lost. Braking during main power loss may be required. E.g. in elevator or other safety related applications.
Drawbacks
The braking energy is wasted if the heated air can not be used
The braking chopper and resistors require additional space
May require extra investments in the cooling and heat recovery system
Braking choppers are typically dimensioned for a certain cycle, e.g. 100% power 1/10 minutes, long braking times require more accurate dimensioning of the braking chopper
Increased risk of fire due to hot resistor and possible dust and chemical components in the ambient air space
The increased DC bus voltage level during braking causes additional voltage stress on motor insulation
Applications
Braking choppers are inappropriate when:
The braking cycle is needed only occasionally
The amount of braking energy with respect to motoring energy is extremely small
The ambient air includes substantial amounts of dust or other potentially combustible, explosive or metallic components
Braking choppers are appropriate when:
The braking is continuous or regularly repeated
The total amount of braking energy is high in respect to the motoring energy needed
The instantaneous braking power is high, e.g. several hundred kW for several minutes
Braking operation is needed during main power loss
Flux braking
Flux braking is another method, based on motor losses, for handling an overrunning load. When braking in the drive system is needed, the motor flux and thus also the magnetizing current component used in the motor are increased. The control of flux can be easily achieved through the direct torque control principle. With DTC the inverter is directly controlled to achieve the desired torque and flux for the motor. During flux braking the motor is under DTC control which guarantees that braking can be made according to the specified speed ramp. This is very different from the DC injection braking typically used in drives. In the DC injection method DC current is injected to the motor so that control of the motor flux is lost during braking. The flux braking method based on DTC enables the motor to shift quickly from braking to motoring power when requested.
In flux braking the increased current means increased losses inside the motor. The braking power is therefore also increased, although the braking power delivered to the frequency converter is not. The increased current generates increased losses in the motor resistances. The higher the resistance value, the higher the braking energy dissipation inside the motor. Typically, in low-power motors (below 5 kW) the resistance value of the motor is relatively large with respect to the nominal current of the motor. The higher the power or the voltage of the motor, the lower the resistance value of the motor with respect to the motor current.
In other words, flux braking is most effective in a low power motor.
See also
Motor controller
Chopper (electronics) - the working principle
References
Electric motors
Electric power systems components
Choppers
Mechanical power transmission
Mechanical power control
Electrical power control | Braking chopper | [
"Physics",
"Technology",
"Engineering"
] | 742 | [
"Engines",
"Electric motors",
"Mechanics",
"Electrical engineering",
"Mechanical power transmission",
"Mechanical power control"
] |
27,986,832 | https://en.wikipedia.org/wiki/Macroscopic%20traffic%20flow%20model | A macroscopic traffic flow model is a mathematical traffic model that formulates the relationships among traffic flow characteristics like density, flow, mean speed of a traffic stream, etc. Such models are conventionally arrived at by integrating microscopic traffic flow models and converting the single-entity level characteristics to comparable system level characteristics. An example is the two-fluid model.
The method of modeling traffic flow at macroscopic level originated under an assumption that traffic streams as a whole are comparable to fluid streams. The first major step in macroscopic modeling of traffic was taken by Lighthill and Whitham in 1955, when they indexed the comparability of ‘traffic flow on long crowded roads’ with ‘flood movements in long rivers’. A year later, Richards (1956) complemented the idea with the introduction of ‘shock-waves on the highway’, completing the so-called LWR model. Macroscopic modeling may be primarily classified according to the type of traffic as homogeneous and heterogeneous, and further with respect to the order of the mathematical model.
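In the usual notation, the LWR model mentioned above combines conservation of vehicles with an equilibrium speed–density relation; the block below is the standard textbook form, not a quotation from the references.

```latex
% The LWR model: conservation of vehicles plus an equilibrium speed-density relation.
\frac{\partial \rho}{\partial t} + \frac{\partial q}{\partial x} = 0 ,
\qquad q = \rho \, v_e(\rho) ,
```

where ρ(x, t) is the traffic density, q the flow, and v_e(ρ) the equilibrium speed; shock waves arise where the characteristics of this conservation law cross.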
References
M.J. Lighthill, G.B.Whitham, On kinematic waves II: A theory of traffic flow on long, crowded roads. Proceedings of the Royal Society of London Series A 229, 317–345, 1955
P.I. Richards, Shock waves on the highway, Operations Research 4, 42–51., 1956
M. Papageorgiou, Some remarks on macroscopic traffic flow modeling, Elsevier Science Ltd., Vol. 32, No. 5, pp. 323 to 329, 1998
C.F. Daganzo, Fundamentals of transportation and traffic operations, Elsevier Science Ltd., 1997
M. Di Francesco, M.D.Rosini, Rigorous Derivation of Nonlinear Scalar Conservation Laws from Follow-the-Leader Type Models via Many Particle Limit, Archive for Rational Mechanics and Analysis, 2015
Road traffic management
Mathematical modeling
Traffic flow | Macroscopic traffic flow model | [
"Mathematics"
] | 394 | [
"Applied mathematics",
"Mathematical modeling"
] |
27,987,124 | https://en.wikipedia.org/wiki/Health%20informatics%20tools | To provide the safe and effective delivery of medical care, virtually all clinical staff use a number of front-line health informatics tools in their day-to-day operations. The need for standardization and refined development of these tools is underscored by the HITECH Act and other efforts to develop electronic medical records. Often, the development of these electronic processes is hampered by the conversion process from older paper processes, which were developed before the stricter development guidelines required in an electronic environment.
To successfully implement each of these tools, hospitals generally must define who is responsible for the tool and prescribe a manner of building, testing, approving, coding, publishing, implementing/educating, and tracking it.
Tools
Front-line health informatics tools (sometimes informally called the "clinical informatics toolbelt") generally include one of the following:
Policies and procedures – Tools used to define organizational standards and how to achieve them.
Procedures – Documents to help learn how to achieve a goal
Clinical protocols – Tools used to standardize and automate care in a common clinical scenario.
Orders – Tools used to record and transmit detailed instructions to perform a procedure or deliver care
Order sets – Tools used to standardize and expedite the ordering process for a common clinical scenario
Clinical pathways – Groupings of order sets, used to standardize the rounding process for a common clinical diagnosis
Guidelines – Documents used to educate general care objectives for a common clinical scenario
Clinical documentation (includes Notes, Forms, and Flowsheets) – Documents used to record and transmit a patients' history, condition, responses, therapies, activities, and/or plans
Clinical user profiles – Tools used to personalize and/or gather information about clinical users
Clinical templates – Documents used to standardize and expedite the development of a clinical document
Clinical staff education modules – Documents used to educate a staff member about a common clinical subject
Clinical patient education modules – Documents used to educate a patient about a common clinical subject
Clinical staff schedules – documents used to determine who is responsible for care at a particular date and time
Clinical committee charters – Documents used to assign responsibility to a clinical committee to perform a particular task
Clinical committee minutes – documents used to record the decisions and activities of a clinical committee
Telephone number lists – Documents used to help contact a clinical staff member
Wikis – Electronic documents used to collect information and web links for a common clinical group
Clinic Management Solutions – Tools used to record full clinical history of patients.
Emails, posters, and staff meetings – Tools used to make announcements and deliver short messages
Clinical informaticists create clinical changes by properly constructing and implementing these tools.
References
Health informatics | Health informatics tools | [
"Biology"
] | 528 | [
"Health informatics",
"Medical technology"
] |
27,989,949 | https://en.wikipedia.org/wiki/Extended%20static%20checking | Extended static checking (ESC) is a collective name in computer science for a range of techniques for statically checking the correctness of various program constraints. ESC can be thought of as an extended form of type checking. As with type checking, ESC is performed automatically at compile time (i.e. without human intervention). This distinguishes it from more general approaches to the formal verification of software, which typically rely on human-generated proofs. Furthermore, it promotes practicality over soundness, in that it aims to dramatically reduce the number of false positives (overestimated errors that are not real errors, that is, ESC over strictness) at the cost of introducing some false negatives (real ESC underestimation error, but that need no programmer's attention, or are not targeted by ESC). ESC can identify a range of errors that are currently outside the scope of a type checker, including division by zero, array out of bounds, integer overflow and null dereferences.
The techniques used in extended static checking come from various fields of computer science, including static program analysis, symbolic simulation, model checking, abstract interpretation, SAT solving and automated theorem proving and type checking. Extended static checking is generally performed only at an intraprocedural, rather than interprocedural, level in order to scale to large programs. Furthermore, extended static checking aims to report errors by exploiting user-supplied specifications, in the form of pre- and post-conditions, loop invariants and class invariants.
Extended static checkers typically operate by propagating strongest postconditions (respectively weakest preconditions) intraprocedurally through a method starting from the precondition (respectively postcondition). At each point during this process an intermediate condition is generated that captures what is known at that program point. This is combined with the necessary conditions of the program statement at that point to form a verification condition. An example of this is a statement involving a division, whose necessary condition is that the divisor be non-zero. The verification condition arising from this effectively states: given the intermediate condition at this point, it must follow that the divisor is non-zero. All verification conditions must be shown to hold (typically by showing that their negations are unsatisfiable) in order for a method to pass extended static checking (or, more precisely, for the checker to be unable to find more errors). Typically, some form of automated theorem prover is used to discharge verification conditions.
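As a toy illustration of how such a verification condition can be discharged, the Python fragment below uses the Z3 SMT solver (an external package, `z3-solver`) as a stand-in for the automated theorem prover; it is hand-written for this one statement and is not taken from any ESC tool.

```python
# Toy illustration: discharge the VC for a statement `z = x / y`
# given the intermediate condition y >= 1 known at that point.
from z3 import Int, Implies, Not, Solver, sat

y = Int("y")

precondition = y >= 1      # intermediate condition before the statement
necessary    = y != 0      # necessary condition of the division

vc = Implies(precondition, necessary)   # the verification condition

solver = Solver()
solver.add(Not(vc))                     # search for a counterexample
print("VC holds" if solver.check() != sat else "possible division by zero")
```

If the precondition were weakened to `y >= 0`, the solver would report a counterexample (y = 0), which is exactly the kind of warning an extended static checker emits.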
Extended static checking was pioneered in ESC/Modula-3 and, later, ESC/Java. Its roots originate from more simplistic static checking techniques, such as static debugging or lint and FindBugs. A number of other languages have adopted ESC, including Spec# and SPARKada and VHDL VSPEC. However, there is currently no widely used software programming language that provides extended static checking in its base development environment.
See also
Java Modeling Language (JML)
References
Further reading
Static program analysis tools
Formal methods | Extended static checking | [
"Engineering"
] | 628 | [
"Software engineering",
"Formal methods"
] |
27,990,286 | https://en.wikipedia.org/wiki/Octagonal%20tiling | In geometry, the octagonal tiling is a regular tiling of the hyperbolic plane. It is represented by Schläfli symbol of {8,3}, having three regular octagons around each vertex. It also has a construction as a truncated order-8 square tiling, t{4,8}.
Uniform colorings
Like the hexagonal tiling of the Euclidean plane, there are 3 uniform colorings of this hyperbolic tiling. The dual tiling V8.8.8 represents the fundamental domains of [(4,4,4)] symmetry.
Regular maps
The regular map {8,3}2,0 can be seen as a 6-coloring of the {8,3} hyperbolic tiling. Within the regular map, octagons of the same color are considered the same face shown in multiple locations. The 2,0 subscripts show the same color will repeat by moving 2 steps in a straight direction following opposite edges. This regular map also has a representation as a double covering of a cube, represented by Schläfli symbol {8/2,3}, with 6 octagonal faces, double wrapped {8/2}, with 24 edges, and 16 vertices. It was described by Branko Grünbaum in his 2003 paper Are Your Polyhedra the Same as My Polyhedra?
Related polyhedra and tilings
This tiling is topologically part of sequence of regular polyhedra and tilings with Schläfli symbol {n,3}.
And also is topologically part of sequence of regular tilings with Schläfli symbol {8,n}.
From a Wythoff construction there are ten hyperbolic uniform tilings that can be based from the regular octagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 10 forms.
See also
Tilings of regular polygons
List of uniform planar tilings
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Isohedral tilings
Regular tilings | Octagonal tiling | [
"Physics"
] | 507 | [
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Isohedral tilings",
"Symmetry"
] |
44,806,520 | https://en.wikipedia.org/wiki/Valve%20exerciser | A valve exerciser is a device that operates a valve periodically in order to prevent it from becoming so stiff that it no longer works. Valves that are left in a static position for a long time may corrode, or become blocked with mineral deposits. Electronic valve exercisers can provide information on the health of a valve by monitoring the required operating torque.
References
Piping
Plumbing | Valve exerciser | [
"Physics",
"Chemistry",
"Engineering"
] | 77 | [
"Building engineering",
"Chemical engineering",
"Plumbing",
"Physical systems",
"Construction",
"Valves",
"Hydraulics",
"Mechanical engineering",
"Piping"
] |
26,123,160 | https://en.wikipedia.org/wiki/Lake%20bifurcation | A lake bifurcation occurs when a lake (a bifurcating lake) has outflows into two different drainage basins. In this case, the drainage divide cannot be defined exactly, as it is situated in the middle of the lake.
Examples
Vesijako (the name Vesijako actually means "drainage divide") and Lummene in the Finnish Lakeland are two nearby lakes in Finland. Both drain in two directions: into the Kymijoki basin that drains into the Gulf of Finland and into the Kokemäenjoki basin that drains into the Gulf of Bothnia.
Similarly the lakes Isojärvi and Inhottu in the Karvianjoki basin in the Satakunta region of western Finland both have two outlets: from Inhottu the waters flow into the Gulf of Bothnia through the Eteläjoki River in Pori and into lake Isojärvi through the Pomarkunjoki River. From lake Isojärvi the waters flow to the Gulf of Bothnia through the Pohjajoki river in Pori and through the Merikarvianjoki river in Merikarvia. In the Karvianjoki basin there have formerly been two other bifurcations, both eradicated by human actions.
Another example is Bontecou Lake, a shallow, man-made bifurcation lake in Dutchess County, New York.
Lake Diefenbaker in Saskatchewan is a reservoir created by damming South Saskatchewan River and Qu'Appelle River. The lake continues to drain into the two rivers, but the Qu'Appelle receives a much enlarged flow (in essence, a diversion of flow from the South Saskatchewan) from the damming. Both rivers eventually drain into Hudson Bay via Lake Winnipeg and the Nelson River. Also located in Saskatchewan is Wollaston Lake, which is the source of Fond du Lac River draining into the Arctic Ocean and of Cochrane River draining into Hudson Bay and the Atlantic Ocean.
Isa Lake in Yellowstone National Park is a natural bifurcated lake which drains into two oceans. Its eastern drainage is to the Gulf of Mexico (part of the Atlantic Ocean) via the Firehole River, while its western drainage is to the Pacific Ocean via the Lewis River.
Peeler Lake in California's Hoover Wilderness is a natural bifurcated lake that lies along the Great Basin Divide. It has two outlets, one of which drains east into the Great Basin, and one of which drains west to the Pacific Ocean.
Lake Okeechobee in Florida is a particularly rare example of a trifurcation lake. Via the artificial Okeechobee Waterway, it flows east to the Atlantic Ocean through the St. Lucie River and west to the Gulf of Mexico through the Caloosahatchee River. Meanwhile, part of the lake's water naturally flows south through the Everglades into the Florida Bay. As a result of this artificial trifurcation, the Eastern Continental Divide of North America terminates at the lake rather than further south near Miami.
Heavenly Lake on the North Korea–People's Republic of China border.
Lake Pedder in Tasmania, as a result of damming, drains east as the Huon River and west as the Serpentine, a tributary of the Gordon.
List
Systems that cross a continental divide
The Chicago River and Calumet River artificially divert water from the Great Lakes to the Mississippi.
The artificial Gatun Lake in Panama drains into the Atlantic and Pacific.
Isa Lake in Yellowstone National Park drains to the Atlantic and Pacific.
At the summit of Athabasca Pass in the Canadian Rockies is a small lake, the Committee's Punch Bowl, that drains both east and west.
Lesjaskogsvatnet in Norway is the source of the rivers Gudbrandsdalslågen and Rauma, which eventually flow into the Skagerrak and the Norwegian Sea, respectively.
Lake Okeechobee in Florida east to the Atlantic Ocean, west to the Gulf of Mexico, and naturally south to Florida Bay.
Wollaston Lake in Canada is the world's largest bifurcation lake, draining both northwest to the Arctic Ocean and northeast to Hudson Bay.
Other systems
Bontecou Lake in New York outflows into two watersheds, those of the Hudson and Housatonic rivers.
Lake Diefenbaker – in Canada
Lac Sainte-Anne located at N50.14222° W67.86399° north of Baie Comeau Quebec is shown on Canadian government topographical maps as draining into Rivière Toulnustouc and also into Rivière Godbout Est
Inhottu, Isojärvi, Kivesjärvi, Kuolimo, Lummene, Vehkajärvi and Vesijako – in Finland
Lake Manapouri in New Zealand has been modified as part of a hydroelectric scheme such that it drains (naturally) south to Foveaux Strait via the Waiau River and (artificially) west via a constructed tunnel to Doubtful Sound
See also
River bifurcation
References
Further reading
Not Any Usual Route (About bifurcation lakes in Finland)
Kuusisto, Esko (1984). Suomen vesistöjen bifurkaatiot. (Abstract: The bifurcations of Finnish watercourses) Terra 96:4, 253–261. Helsinki: Geographical Society of Finland.
External links
Hydrology
Geomorphology
Lakes by type | Lake bifurcation | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,107 | [
"Hydrology",
"Environmental engineering"
] |
30,832,132 | https://en.wikipedia.org/wiki/Total%20absorption%20spectroscopy | Total absorption spectroscopy is a measurement technique that allows the measurement of the gamma radiation emitted in the different nuclear gamma transitions that may take place in the daughter nucleus after its unstable parent has decayed by means of the beta decay process. This technique can be used for beta decay studies related to beta feeding measurements within the full decay energy window for nuclei far from stability.
It is implemented with a special type of detector, the "total absorption spectrometer" (TAS), made of a scintillator crystal that almost completely surrounds the activity to be measured, covering a solid angle of approximately 4π. Also, in an ideal case, it should be thick enough to have a peak efficiency close to 100%, in this way its total efficiency is also very close to 100% (this is one of the reasons why it is called "total" absorption spectroscopy). Finally, it should be blind to any other type of radiation. The gamma rays produced in the decay under study are collected by photomultipliers attached to the scintillator material. This technique may solve the problem of the Pandemonium effect.
There is a change in philosophy when measuring with a TAS. Instead of detecting the individual gamma rays (as high-resolution detectors do), it will detect the gamma cascades emitted in the decay. Then, the final energy spectrum will not be a collection of different energy peaks coming from the different transitions (as can be expected in the case of a germanium detector), but a collection of peaks situated at an energy that is the sum of the different energies of all the gammas of the cascade emitted from each level. This means that the energy spectrum measured with a TAS will be in reality a spectrum of the levels of the nuclei, where each peak is a level populated in the decay. Since the efficiency of these detectors is close to 100%, it is possible to see the feeding to the high excitation levels that usually can not be seen by high-resolution detectors. This makes total absorption spectroscopy the best method to measure beta feedings and provide accurate beta intensity (Iβ) distributions for complex decay schemes.
In an ideal case, the measured spectrum would be proportional to the beta feeding (Iβ). But a real TAS has limited efficiency and resolution, and also the Iβ has to be extracted from the measured spectrum, which depends on the spectrometer response. The analysis of TAS data is not simple: to obtain the strength from the measured data, a deconvolution process should be applied.
Analysis method for TAS data
The complex analysis of the data measured with the TAS can be reduced to the solution of a linear problem:
d = Ri
given that it relates the measured data (d) with the feedings (i) from which the beta intensity distribution Iβ can be obtained.
R is the response matrix of the detector (meaning the probability that a decay that feeds a certain level gives a count in certain bin of the spectrum). The function R depends on the detector but also of the particular level scheme that is being measured. To be able to extract the value of i from the data d the equation has to be inverted (this equation is also called the "inverse problem").
Unfortunately this can not be done easily because there is similar response to the feeding of adjacent levels when they are at high excitation energies where the level density is high. In other words, this is one of the so-called "ill-posed" problems, for which several sets of parameters can reproduce closely the same data set. Then, to find i, the response has to be obtained for which the branching ratios and a precise simulation of the geometry of the detector are needed. The higher the efficiency of the TAS used, the lower the dependence of the response on the branching ratios will be. Then it is possible to introduce the unknown branching ratios by hand from a plausible guess. A good guess can be calculated by means of the Statistical Model.
The procedure to find the feedings is iterative: using the expectation-maximization algorithm to solve the inverse problem, the feedings are extracted; if they don't reproduce the experimental data, it means that the initial guess of the branching ratios is wrong and has to be changed (of course, it is possible to play with other parameters of the analysis). Repeating this procedure iteratively in a reduced number of steps, the data is finally reproduced.
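A minimal sketch of such an iterative update in Python, assuming a known response matrix R and noiseless data. The matrix below is a made-up toy; the real analysis uses the Monte Carlo detector response and statistical-model branching ratios described in this article.

```python
import numpy as np

def em_unfold(R, d, n_iter=200):
    """ML-EM (Richardson-Lucy type) update for the linear problem d = R i.

    R[m, j] is the probability that feeding level j gives a count in
    spectrum bin m; d is the measured spectrum.  The multiplicative
    update keeps the feedings non-negative at every iteration.
    """
    i = np.full(R.shape[1], d.sum() / R.shape[1])   # flat starting guess
    efficiency = R.sum(axis=0)                      # column sums of the response
    for _ in range(n_iter):
        predicted = R @ i
        ratio = d / np.clip(predicted, 1e-12, None)
        i *= (R.T @ ratio) / np.clip(efficiency, 1e-12, None)
    return i

# Toy 3-level, 4-bin response (hypothetical numbers, not a real detector):
R = np.array([[0.7, 0.1, 0.0],
              [0.2, 0.6, 0.1],
              [0.1, 0.2, 0.6],
              [0.0, 0.1, 0.3]])
true_i = np.array([100.0, 50.0, 30.0])
d = R @ true_i
print(np.round(em_unfold(R, d), 1))   # approaches [100.  50.  30.]
```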
Branching ratio calculation
The best way to handle this problem is to keep a set of discrete levels at low excitation energies and a set of binned levels at high energies. The set at low energies is supposed to be known and can be taken from databases (for example, the [ENSDF] database, which has information from what has been already measured with the high resolution technique). The set at high energies is unknown and does not overlap with the known part. At the end of this calculation, the whole region of levels inside the Q value window (known and unknown) is binned.
At this stage of the analysis it is important to know the internal conversion coefficients for the transitions connecting the known levels. The internal conversion coefficient is defined as the number of de-excitations via e− emission over those via γ emission. If internal conversion takes place, the EM multipole fields of the nucleus do not result in the emission of a photon, instead, the fields interact with the atomic electrons and cause one of the electrons to be emitted from the atom. The gamma that would be emitted after the beta decay is missed, and the γ intensity decreases accordingly: IT = Iγ + Ie− = Iγ(1 + αe), so this phenomenon has to be taken into account in the calculation. Also, the x rays will be contaminated with those coming from the electron conversion process. This is important in electron capture decay, as it can affect the results of any x-ray gated spectra if the internal conversion is strong. Its probability is higher for lower energies and high multipolarities.
One of the ways to obtain the whole branching ratio matrix is to use the Statistical Nuclear Model. This model generates a binned branching ratio matrix from average level densities and average gamma strength functions. For the unknown part, average branching ratios can be calculated, for which several parameterizations may be chosen, while for the known part the information in the databases is used.
Response simulation
It is not possible to produce gamma sources that emit all the energies needed to calculate accurately the response of a TAS detector. For this reason, it is better to perform a Montecarlo simulation of the response. For this simulation to be reliable, the interactions of all the particles emitted in the decay (γ, e−/e+, Auger e, x rays, etc.) have to be modeled accurately, and the geometry and materials in the way of these particles have to be well reproduced. Also, the light production of the scintillator has to be included. The way to perform this simulation is explained in detail in paper by D. Cano-Ott et al. GEANT3 and GEANT4 are well suited for these kind of simulations.
If the scintillator material of the TAS detector suffers from a non proportionality in the light production, the peaks produced by a cascade will be displaced further for every increment in the multiplicity and the width of these peaks will be different from the width of single peaks with the same energy. This effect can be introduced in the simulation by means of a hyperbolic scintillation efficiency.
The simulation of the light production will widen the peaks of the TAS spectrum; however, this still does not reproduce the real width of the experimental peaks. During the measurement there are additional statistical processes that affect the energy collection and are not included in the Montecarlo. The effect of this is an extra widening of the TAS experimental peaks. Since the peaks reproduced with the Montecarlo do not have the correct width, a convolution with an empirical instrumental resolution distribution has to be applied to the simulated response.
Finally, if the data to be analyzed comes from electron capture events, a simulated gamma response matrix must be built using the simulated responses to individual monoenergetic γ rays of several energies. This matrix contains the information related to the dependence of the response function on the detector. To include also the dependence on the level scheme that is being measured, the above-mentioned matrix should be convoluted with the branching ratio matrix calculated previously. In this way, the final global response R is obtained.
Ancillary detectors
An important thing to have in mind when using the TAS technique is that, if nuclei with short half-lives are measured, the energy spectrum will be contaminated with the gamma cascades of the daughter nuclei produced in the decay chain. Normally the TAS detectors have the possibility to place ancillary detectors inside of them, to measure secondary radiation like X-rays, electrons or positrons. In this way it is possible to tag the other components of the decay during the analysis, allowing the contributions coming from all the different nuclei to be separated (isobaric separation).
TAS detectors in the world
TAS at ISOLDE
In 1970, a spectrometer consisting of two cylindrical NaI detectors of 15 cm diameter and 10 cm length was used at ISOLDE.
TAS at GSI
The TAS Measuring Station installed at the GSI had a tape transport system that allowed the collection of the ions coming out of the separator (they were implanted in the tape), and the transportation of those ions from the collection position to the center of the TAS for the measurement (by means of the movement of the tape). The TAS at this facility was made of a cylindrical NaI crystal of Φ = h = 35.6 cm, with a concentric cylindrical hole in the direction of the symmetry axis. This hole was filled by a plug detector (4.7x15.0 cm) with a holder that allowed the placement of ancillary detectors and two rollers for a tape.
Lucrecia measuring station
This measuring station, installed at the end of one of the ISOLDE beamlines, consists of a TAS, and a tape station.
In this station, a beam pipe is used to hold the tape. The beam is implanted in the tape outside of the TAS, which is then transported to the center of the detector for the measurement. In this station it is also possible to implant the beam directly in the center of the TAS, by changing the position of the rollers. The latter procedure allows the measurement of more exotic nuclei with very short half-lives.
Lucrecia is the TAS at this station. It is made of one piece of NaI(Tl) material cylindrically shaped with φ = h = 38 cm (the largest ever built to our knowledge). It has a cylindrical cavity of 7.5 cm diameter that goes through perpendicularly to its symmetry axis. The purpose of this hole is to allow the beam pipe to reach the measurement position so that the tape can be positioned in the center of the detector. It also allows the placement of ancillary detectors in the opposite side to measure other types of radiation emitted by the activity implanted in the tape (x rays, e−/e+, etc.). However, the presence of this hole makes this detector less efficient as compared to the GSI TAS (Lucrecia’s total efficiency is around 90% from 300 to 3000 keV). Lucrecia’s light is collected by 8 photomultipliers. During the measurements Lucrecia is kept measuring at a total counting rate not larger than 10 kHz to avoid second and higher order pileup contributions.
Surrounding the TAS there is a shielding box 19.2 cm thick made of four layers: polyethylene, lead, copper and aluminium. The purpose of it is to absorb most of the external radiation (neutrons, cosmic rays, and the room background).
See also
Pandemonium effect
Scintillation counter
References
External links
On-Line Isotope Mass Separator experimental hall, where Lucrecia is installed.
Gamma Spectroscopy Group, devoted to total absorption spectroscopy measurements.
Particle detectors
Nuclear physics
Radioactivity
Experimental physics | Total absorption spectroscopy | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 2,539 | [
"Measuring instruments",
"Particle detectors",
"Experimental physics",
"Nuclear physics",
"Radioactivity"
] |
30,834,082 | https://en.wikipedia.org/wiki/MENTAL%20domain | The MENTAL or MLN64 NH2-terminal domain is a membrane-spanning domain that is conserved in two late endosomal proteins in vertebrates, MLN64 and MENTHO. The domain is 170 amino acids long.
Current data indicates that this domain allows for dimerization between MLN64 and MENTHO molecules and with themselves. The domain may also direct cholesterol transport.
References
Protein domains | MENTAL domain | [
"Biology"
] | 85 | [
"Protein domains",
"Protein classification"
] |