id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
4,474,492 | https://en.wikipedia.org/wiki/Intraspecific%20antagonism | Intraspecific antagonism means a disharmonious or antagonistic interaction between two individuals of the same species. As such, it could be a sociological term, but was actually coined by Alan Rayner and Norman Todd working at Exeter University in the late 1970s, to characterise a particular kind of zone line formed between wood-rotting fungal mycelia. Intraspecific antagonism is one of the expressions of a phenomenon known as vegetative or somatic incompatibility.
Fungal individualism
Zone lines form in wood for many reasons, including host reactions against parasitic encroachment, and inter-specific interactions, but the lines observed by Rayner and Todd when transversely-cut sections of brown-rotted birch tree trunk or branch were incubated in plastic bags appeared to be due to a reaction between different individuals of the same species of fungus.
This was a startling inference at a time when the prevailing orthodoxy within the mycological community was that of the "unit mycelium". This was the theory that when two different individuals of the same species of wood-rotting basidiomycete fungus grew and met within the substratum, they fused, cooperated, and shared nuclei freely. Rayner and Todd's insight was that individual basidiomycete fungi do retain their individuality, at least in most "adult" (dikaryotic) cases.
A small stable of postgraduate and postdoctoral students helped elucidate the mechanisms underlying these intermycelial interactions, at Exeter University (Todd) and the University of Bath (Rayner), over the next few years.
Applications of intraspecific antagonism
Although the attribution of individual status to the mycelia confined by intraspecific zone lines is a comparatively new idea, zone lines themselves have been known since time immemorial. The term spalting is applied by woodworkers to wood showing strongly-figured zone lines, particularly those cases where the area of "no-man's land" between two antagonistic conspecific mycelia is colonised by another species of fungus. Dematiaceous hyphomycetes, with their dark-coloured mycelia, produce particularly attractive black zone lines when they colonise the areas occupied by two antagonistic basidiomycete individuals. Spalted wood can be difficult to work, since different individual wood-rotting fungi have different decay efficiencies, and thus produce zones of different softness, and the zone lines themselves are usually unrotted and hard.
Intraspecific antagonism can also sometimes be of assistance in quickly recognising the membership of clones in those fungi, particularly root-rot fungi such as Armillaria, where individual mycelia may colonise large areas or more than one tree.
It is even the subject of a recent patent.
References
Mycology
Fungal morphology and anatomy
Wood | Intraspecific antagonism | [
"Biology"
] | 598 | [
"Mycology"
] |
4,474,775 | https://en.wikipedia.org/wiki/Duhamel%27s%20principle | In mathematics, and more specifically in partial differential equations, Duhamel's principle is a general method for obtaining solutions to inhomogeneous linear evolution equations like the heat equation, wave equation, and vibrating plate equation. It is named after Jean-Marie Duhamel who first applied the principle to the inhomogeneous heat equation that models, for instance, the distribution of heat in a thin plate which is heated from beneath. For linear evolution equations without spatial dependency, such as a harmonic oscillator, Duhamel's principle reduces to the method of variation of parameters technique for solving linear inhomogeneous ordinary differential equations. It is also an indispensable tool in the study of nonlinear partial differential equations such as the Navier–Stokes equations and nonlinear Schrödinger equation where one treats the nonlinearity as an inhomogeneity.
The philosophy underlying Duhamel's principle is that it is possible to go from solutions of the Cauchy problem (or initial value problem) to solutions of the inhomogeneous problem. Consider, for instance, the example of the heat equation modeling the distribution of heat energy $u$ in $\mathbb{R}^n$. Indicating by $u_t$ the time derivative of $u$, the initial value problem is
$$\begin{cases} u_t(x,t) - \Delta u(x,t) = 0 & (x,t) \in \mathbb{R}^n \times (0,\infty) \\ u(x,0) = g(x) & x \in \mathbb{R}^n, \end{cases}$$
where g is the initial heat distribution. By contrast, the inhomogeneous problem for the heat equation,
$$\begin{cases} u_t(x,t) - \Delta u(x,t) = f(x,t) & (x,t) \in \mathbb{R}^n \times (0,\infty) \\ u(x,0) = 0 & x \in \mathbb{R}^n, \end{cases}$$
corresponds to adding an external heat energy $f(x,t)\,dt$ at each point. Intuitively, one can think of the inhomogeneous problem as a set of homogeneous problems each starting afresh at a different time slice $t = t_0$. By linearity, one can add up (integrate) the resulting solutions through time $t_0$ and obtain the solution for the inhomogeneous problem. This is the essence of Duhamel's principle.
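For the heat equation, this recipe can be written out in closed form with the heat kernel; the following formula is the standard consequence of the principle, shown here for concreteness:
$$u(x,t) = \int_0^t \int_{\mathbb{R}^n} \frac{1}{\left(4\pi(t-s)\right)^{n/2}}\, e^{-\frac{|x-y|^2}{4(t-s)}}\, f(y,s)\, dy\, ds.$$
The inner integral is exactly the solution at time $t$ of the homogeneous heat equation started afresh at time $s$ with initial data $f(\cdot,s)$.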
General considerations
Formally, consider a linear inhomogeneous evolution equation for a function
$$u : D \times (0,\infty) \to \mathbb{R},$$
with spatial domain $D$ in $\mathbb{R}^n$, of the form
$$\begin{cases} u_t(x,t) - Lu(x,t) = f(x,t) & (x,t) \in D \times (0,\infty) \\ u|_{\partial D} = 0 \\ u(x,0) = 0 & x \in D, \end{cases}$$
where L is a linear differential operator that involves no time derivatives.
Duhamel's principle is, formally, that the solution to this problem is
$$u(x,t) = \int_0^t (P^s f)(x,t)\, ds,$$
where $P^s f$ is the solution of the problem
$$\begin{cases} u_t - Lu = 0 & (x,t) \in D \times (s,\infty) \\ u(x,s) = f(x,s) & x \in D. \end{cases}$$
The integrand is the retarded solution $P^s f$, evaluated at time $t$, representing the effect, at the later time $t$, of an infinitesimal force $f(\cdot,s)\,ds$ applied at time $s$. (The operator $P^s$ can be thought of as an inverse of the operator $\partial_t - L$ for the Cauchy problem with initial condition posed at time $s$.)
Duhamel's principle also holds for linear systems (with vector-valued functions $u$), and this in turn furnishes a generalization to higher $t$ derivatives, such as those appearing in the wave equation (see below). Validity of the principle depends on being able to solve the homogeneous problem in an appropriate function space and on the solution exhibiting reasonable dependence on parameters, so that the integral is well-defined. Precise analytic conditions on $u$ and $f$ depend on the particular application.
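Why the formula works can be seen by formally differentiating under the integral sign (assuming enough regularity to justify each step):
$$\partial_t u(x,t) = (P^t f)(x,t) + \int_0^t \partial_t (P^s f)(x,t)\, ds = f(x,t) + \int_0^t L (P^s f)(x,t)\, ds = f(x,t) + L u(x,t),$$
where the boundary term uses the initial condition $(P^s f)(x,s) = f(x,s)$ evaluated at $s = t$, and the second equality uses the fact that each $P^s f$ solves the homogeneous equation.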
Examples
Wave equation
The linear wave equation models the displacement $u$ of an idealized dispersionless one-dimensional string, in terms of derivatives with respect to time $t$ and space $x$:
$$\frac{\partial^2 u(x,t)}{\partial t^2} - c^2 \frac{\partial^2 u(x,t)}{\partial x^2} = f(x,t).$$
The function $f(x,t)$, in natural units, represents an external force applied to the string at the position $(x,t)$. In order to be a suitable physical model for nature, it should be possible to solve it for any initial state that the string is in, specified by its initial displacement and velocity:
$$u(x,0) = u_0(x), \qquad \frac{\partial u}{\partial t}(x,0) = v_0(x).$$
More generally, we should be able to solve the equation with data specified on any slice $t = T$:
$$u(x,T) = u_T(x), \qquad \frac{\partial u}{\partial t}(x,T) = v_T(x).$$
To evolve a solution from any given time slice $T$ to $T + dT$, the contribution of the force must be added to the solution. That contribution comes from changing the velocity of the string by $f(x,T)\,dT$. That is, to get the solution at time $T + dT$ from the solution at time $T$, we must add to it a new (forward) solution of the homogeneous (no external forces) wave equation
$$\frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2} = 0$$
with the initial conditions
$$u(x,T) = 0, \qquad \frac{\partial u}{\partial t}(x,T) = f(x,T)\,dT.$$
A solution to this equation is achieved by straightforward integration:
$$dT \left( \frac{1}{2c} \int_{x-c(t-T)}^{x+c(t-T)} f(\xi,T)\, d\xi \right).$$
(The expression in parentheses is just $(P^T f)(x,t)$ in the notation of the general method above.) So a solution of the original initial value problem is obtained by starting with a solution of the homogeneous problem with the same prescribed initial values, and adding to that (integrating) the contributions from the force added in the time intervals from $T$ to $T + dT$:
$$u(x,t) = u_{\mathrm{homogeneous}}(x,t) + \frac{1}{2c} \int_0^t \int_{x-c(t-T)}^{x+c(t-T)} f(\xi,T)\, d\xi\, dT.$$
Constant-coefficient linear ODE
Duhamel's principle is the result that an inhomogeneous, linear, partial differential equation can be solved by first finding the solution for a step input, and then superposing using Duhamel's integral.
Suppose we have a constant coefficient, $m$-th order inhomogeneous ordinary differential equation
$$P(\partial_t) u(t) = F(t), \qquad u(t) = \partial_t u(t) = \cdots = \partial_t^{m-1} u(t) = 0 \ \text{ for } t \le 0,$$
where
$$P(\partial_t) = \partial_t^m + c_{m-1} \partial_t^{m-1} + \cdots + c_0.$$
We can reduce this to the solution of a homogeneous ODE using the following method. All steps are done formally, ignoring necessary requirements for the solution to be well defined.
First let $G$ solve
$$P(\partial_t) G = 0, \qquad G^{(k)}(0) = 0 \ \text{ for } k < m-1, \qquad G^{(m-1)}(0) = 1.$$
Define $H = G \chi_{[0,\infty)}$, with $\chi_{[0,\infty)}$ being the characteristic function of the interval $[0,\infty)$. Then we have
$$P(\partial_t) H = \delta$$
(the Dirac delta function)
in the sense of distributions. Therefore
$$u(t) = (H * F)(t) = \int_0^\infty G(\tau) F(t-\tau)\, d\tau$$
solves the ODE.
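A minimal worked instance: for the first-order operator $P(\partial_t) = \partial_t + a$ (so $m = 1$), the construction above gives
$$G(t) = e^{-at}, \qquad u(t) = \int_0^t e^{-a(t-\tau)} F(\tau)\, d\tau,$$
and differentiating under the integral sign confirms $u'(t) + a\,u(t) = F(t)$ with $u(0) = 0$. This is exactly the variation-of-parameters formula mentioned in the introduction.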
Constant-coefficient linear PDE
More generally, suppose we have a constant coefficient inhomogeneous partial differential equation
$$P(\partial_t, D_x) u(t,x) = F(t,x),$$
where
$$D_x = \frac{1}{i} \frac{\partial}{\partial x}.$$
We can reduce this to the solution of a homogeneous ODE using the following method. All steps are done formally, ignoring necessary requirements for the solution to be well defined.
First, taking the Fourier transform in $x$ we have
$$P(\partial_t, \xi) \hat{u}(t,\xi) = \hat{F}(t,\xi).$$
Assume that $P(\partial_t, \xi)$ is an $m$-th order ODE in $t$. Let $a_m$ be the coefficient of the highest order term of $P(\partial_t, \xi)$.
Now for every $\xi$ let $G_\xi(t)$ solve
$$P(\partial_t, \xi) G_\xi = 0, \qquad G_\xi^{(k)}(0) = 0 \ \text{ for } k < m-1, \qquad G_\xi^{(m-1)}(0) = \frac{1}{a_m}.$$
Define $H_\xi(t) = G_\xi(t) \chi_{[0,\infty)}(t)$. We then have
$$P(\partial_t, \xi) H_\xi = \delta$$
in the sense of distributions. Therefore
$$\hat{u}(t,\xi) = (H_\xi * \hat{F}(\cdot,\xi))(t) = \int_0^\infty G_\xi(s) \hat{F}(t-s,\xi)\, ds$$
solves the PDE (after transforming back to $x$).
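For illustration, take the heat equation: after the Fourier transform in $x$, $P(\partial_t, \xi) = \partial_t + |\xi|^2$, an $m = 1$ ODE in $t$ with $a_1 = 1$, so
$$G_\xi(t) = e^{-|\xi|^2 t}, \qquad \hat{u}(t,\xi) = \int_0^t e^{-|\xi|^2 (t-s)}\, \hat{F}(s,\xi)\, ds,$$
which, after inverting the Fourier transform, recovers the heat-kernel formula quoted earlier.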
See also
Retarded potential
Propagator
Impulse response
Variation of parameters
References
Wave mechanics
Partial differential equations
Mathematical principles | Duhamel's principle | [
"Physics",
"Mathematics"
] | 1,105 | [
"Mathematical principles",
"Physical phenomena",
"Classical mechanics",
"Waves",
"Wave mechanics"
] |
4,476,904 | https://en.wikipedia.org/wiki/Threose%20nucleic%20acid | Threose nucleic acid (TNA) is an artificial genetic polymer in which the natural five-carbon ribose sugar found in RNA has been replaced by an unnatural four-carbon threose sugar. Invented by Albert Eschenmoser as part of his quest to explore the chemical etiology of RNA, TNA has become an important synthetic genetic polymer (XNA) due to its ability to efficiently base pair with complementary sequences of DNA and RNA. The main difference between TNA and DNA/RNA is their backbones. DNA and RNA have their phosphate backbones attached to the 5' carbon of the deoxyribose or ribose sugar ring, respectively. TNA, on the other hand, has its phosphate backbone directly attached to the 3' carbon in the ring, since it does not have a 5' carbon. This modified backbone makes TNA, unlike DNA and RNA, completely refractory to nuclease digestion, making it a promising nucleic acid analog for therapeutic and diagnostic applications.
TNA oligonucleotides were first constructed by automated solid-phase synthesis using phosphoramidite chemistry. Methods for chemically synthesizing TNA monomers (phosphoramidites and nucleoside triphosphates) have been heavily optimized to support synthetic biology projects aimed at advancing TNA research. More recently, polymerase engineering efforts have identified TNA polymerases that can copy genetic information back and forth between DNA and TNA. TNA replication occurs through a process that mimics RNA replication. In these systems, TNA is reverse transcribed into DNA, the DNA is amplified by the polymerase chain reaction, and then forward transcribed back into TNA.
The availability of TNA polymerases has enabled the in vitro selection of biologically stable TNA aptamers to both small molecule and protein targets. Such experiments demonstrate that the properties of heredity and evolution are not limited to the natural genetic polymers of DNA and RNA. The high biological stability of TNA relative to other nucleic acid systems that are capable of undergoing Darwinian evolution suggests that TNA is a strong candidate for the development of next-generation therapeutic aptamers.
The mechanism of TNA synthesis by a laboratory evolved TNA polymerase has been studied using X-ray crystallography to capture the five major steps of nucleotide addition. These structures demonstrate imperfect recognition of the incoming TNA nucleotide triphosphate and support the need for further directed evolution experiments to create TNA polymerases with improved activity. The binary structure of a TNA reverse transcriptase has also been solved by X-ray crystallography, revealing the importance of structural plasticity as a possible mechanism for template recognition.
Pre DNA system
John Chaput, a professor in the department of Pharmaceutical Sciences at the University of California, Irvine, has theorized that issues concerning the prebiotic synthesis of ribose sugars and the non-enzymatic replication of RNA may provide circumstantial evidence of an earlier genetic system more readily produced under primitive earth conditions. TNA could have been an early genetic system and a precursor to RNA. TNA is simpler than RNA and can be synthesized from a single starting material. TNA can exchange genetic information back and forth with RNA and with strands of itself that are complementary to the RNA. TNA has been shown to fold into tertiary structures with discrete ligand-binding properties.
Commercial applications
Although TNA research is still in its infancy, practical applications are already apparent. Its ability to undergo Darwinian evolution, coupled with its nuclease resistance, make TNA a promising candidate for the development of diagnostic and therapeutic applications that require high biological stability. This would include the evolution of TNA aptamers that can bind to specific small molecule and protein targets, as well as the development of TNA enzymes (threozymes) that can catalyze a chemical reaction. In addition, TNA is a promising candidate for RNA therapeutics that involve gene silencing technology. For example, TNA has been evaluated in a model system for antisense technology.
See also
Abiogenesis
Glycol nucleic acid
Oligonucleotide synthesis
Peptide nucleic acid
Synthetic biology
Xeno nucleic acid
Xenobiology
References
Further reading
External links
Was simple TNA the first nucleic acid on Earth to carry a genetic code?, New Scientist (behind paywall)
ORIGIN OF LIFE: A Simpler Nucleic Acid, Leslie Orgel
Nucleic acids
Polymers | Threose nucleic acid | [
"Chemistry",
"Materials_science"
] | 892 | [
"Biomolecules by chemical classification",
"Polymers",
"Polymer chemistry",
"Nucleic acids"
] |
12,414,930 | https://en.wikipedia.org/wiki/Oxy-fuel%20welding%20and%20cutting | Oxy-fuel welding torch (commonly called oxyacetylene welding, oxy welding, or gas welding in the United States) and oxy-fuel cutting are processes that use fuel gases (or liquid fuels such as gasoline or petrol, diesel, biodiesel, kerosene, etc) and oxygen to weld or cut metals. French engineers Edmond Fouché and Charles Picard became the first to develop oxygen-acetylene welding in 1903. Pure oxygen, instead of air, is used to increase the flame temperature to allow localized melting of the workpiece material (e.g. steel) in a room environment.
A common propane/air flame burns at about 2,250 K (1,980 °C), a propane/oxygen flame burns at about 2,500 K (2,250 °C), an oxyhydrogen flame burns at about 3,070 K (2,800 °C) and an acetylene/oxygen flame burns at about 3,770 K (3,500 °C).
During the early 20th century, before the development and availability of coated arc welding electrodes in the late 1920s that were capable of making sound welds in steel, oxy-acetylene welding was the only process capable of making welds of exceptionally high quality in virtually all metals in commercial use at the time. These included not only carbon steel but also alloy steels, cast iron, aluminium, and magnesium. In recent decades it has been superseded in almost all industrial uses by various arc welding methods offering greater speed and, in the case of gas tungsten arc welding, the capability of welding very reactive metals such as titanium.
Oxy-acetylene welding is still used for metal-based artwork and in smaller home-based shops, as well as situations where accessing electricity (e.g., via an extension cord or portable generator) would present difficulties. The oxy-acetylene (and other oxy-fuel gas mixtures) welding torch remains a mainstay heat source for manual brazing, as well as metal forming, preparation, and localized heat treating. In addition, oxy-fuel cutting is still widely used, both in heavy industry and light industrial and repair operations.
In oxy-fuel welding, a welding torch is used to weld metals. Welding metal results when two pieces are heated to a temperature that produces a shared pool of molten metal. The molten pool is generally supplied with additional metal called filler. Filler material selection depends upon the metals to be welded.
In oxy-fuel cutting, a torch is used to heat metal to its kindling temperature. A stream of oxygen is then trained on the metal, burning it into a metal oxide that flows out of the kerf as dross.
Torches that do not mix fuel with oxygen (combining, instead, atmospheric air) are not considered oxy-fuel torches and can typically be identified by a single tank (oxy-fuel cutting requires two isolated supplies, fuel and oxygen). Most metals cannot be melted with a single-tank torch. Consequently, single-tank torches are typically suitable for soldering and brazing but not for welding.
Uses
Oxy-fuel torches are or have been used for:
Heating metal: in automotive and other industries for the purposes of loosening seized fasteners.
Neutral flame is used for joining and cutting of all ferrous and non-ferrous metals except brass.
Depositing metal to build up a surface, as in hardfacing.
Also, oxy-hydrogen flames are used:
In stone working for "flaming" where the stone is heated and a top layer crackles and breaks. A steel circular brush is attached to an angle grinder and used to remove the first layer leaving behind a bumpy surface similar to hammered bronze.
In the glass industry for "fire polishing".
In jewelry production for "water welding" using a water torch (an oxyhydrogen torch whose gas supply is generated immediately by electrolysis of water).
In automotive repair, removing a seized bolt.
Formerly, to heat lumps of quicklime to obtain a bright white light called limelight, in theatres or optical ("magic") lanterns.
Formerly, in platinum works, as platinum is fusible only in the oxyhydrogen flame and in an electric furnace.
In short, oxy-fuel equipment is quite versatile, not only because it is preferred for some sorts of iron or steel welding but also because it lends itself to brazing, braze-welding, metal heating (for annealing or tempering, bending or forming), rust or scale removal, and the loosening of corroded nuts and bolts, and it is a ubiquitous means of cutting ferrous metals.
Apparatus
The apparatus used in gas welding consists basically of an oxygen source and a fuel gas source (usually contained in cylinders), two pressure regulators and two flexible hoses (one for each cylinder), and a torch. This sort of torch can also be used for soldering and brazing. The cylinders are often carried in a special wheeled trolley.
There have been examples of oxyhydrogen cutting sets with small (scuba-sized) gas cylinders worn on the user's back in a backpack harness, for rescue work, and similar.
There are also examples of both non-pressurized and pressurized liquid fuel cutting torches, usually using gasoline (petrol). These are used for their increased cutting power over gaseous fuel systems and also greater portability compared to systems requiring two high pressure tanks.
Regulator
The regulator ensures that pressure of the gas from the tanks matches the required pressure in the hose. The flow rate is then adjusted by the operator using needle valves on the torch. Accurate flow control with a needle valve relies on a constant inlet pressure.
Most regulators have two stages. The first stage is a fixed-pressure regulator, which releases gas from the cylinder at a constant intermediate pressure, despite the pressure in the cylinder falling as the gas in it is consumed. This is similar to the first stage of a scuba-diving regulator. The adjustable second stage of the regulator controls the pressure reduction from the intermediate pressure to the low outlet pressure. The regulator has two pressure gauges, one indicating cylinder pressure, the other indicating hose pressure. The adjustment knob of the regulator is sometimes roughly calibrated for pressure, but an accurate setting requires observation of the gauge.
Some simpler or cheaper oxygen-fuel regulators have only a single-stage regulator, or only a single gauge. A single-stage regulator will tend to allow a reduction in outlet pressure as the cylinder is emptied, requiring manual readjustment. For low-volume users, this is an acceptable simplification. Welding regulators, unlike simpler LPG heating regulators, retain their outlet (hose) pressure gauge and do not rely on the calibration of the adjustment knob. The cheaper single-stage regulators may sometimes omit the cylinder contents gauge, or replace the accurate dial gauge with a cheaper and less precise "rising button" gauge.
Gas hoses
The hoses are designed for use in welding and cutting metal. A double-hose or twinned design can be used, meaning that the oxygen and fuel hoses are joined. If separate hoses are used, they should be clipped together at regular intervals, although that is not recommended for cutting applications, because beads of molten metal given off by the process can become lodged between the hoses where they are held together and burn through, releasing the pressurized gas inside, which in the case of fuel gas usually ignites.
The hoses are color-coded for visual identification. The color of the hoses varies between countries. In the United States, the oxygen hose is green and the fuel hose is red. In the UK and other countries, the oxygen hose is blue (black hoses may still be found on old equipment), and the acetylene (fuel) hose is red. If liquefied petroleum gas (LPG) fuel, such as propane, is used, the fuel hose should be orange, indicating that it is compatible with LPG. LPG will damage an incompatible hose, including most acetylene hoses.
The threaded connectors on the hoses are handed to avoid accidental mis-connection: the thread on the oxygen hose is right-handed (as normal), while the fuel gas hose has a left-handed thread. The left-handed threads also have an identifying groove cut into their nuts.
Gas-tight connections between the flexible hoses and rigid fittings are made by using crimped hose clips or ferrules, often referred to as 'O' clips, over barbed spigots. The use of worm-drive hose clips or Jubilee Clips is specifically forbidden in the UK and other countries.
Non-return valve
Acetylene is not just flammable; in certain conditions it is explosive. Although it has an upper flammability limit in air of 81%, acetylene's explosive decomposition behaviour makes this irrelevant. If a detonation wave enters the acetylene tank, the tank will be blown apart by the decomposition. Ordinary check valves that normally prevent backflow cannot stop a detonation wave because they are not capable of closing before the wave passes around the gate. For that reason a flashback arrestor is needed. It is designed to operate before the detonation wave makes it from the hose side to the supply side.
Between the regulator and hose, and ideally between hose and torch on both oxygen and fuel lines, a flashback arrestor and/or non-return valve (check valve) should be installed to prevent flame or oxygen-fuel mixture being pushed back into either cylinder and damaging the equipment or causing a cylinder to explode.
European practice is to fit flashback arrestors at the regulator and check valves at the torch. US practice is to fit both at the regulator.
The flashback arrestor prevents shock waves from downstream coming back up the hoses and entering the cylinder, possibly rupturing it, as there are quantities of fuel/oxygen mixtures inside parts of the equipment (specifically within the mixer and blowpipe/nozzle) that may explode if the equipment is incorrectly shut down, and acetylene decomposes at excessive pressures or temperatures. In case the pressure wave has created a leak downstream of the flashback arrestor, it will remain switched off until someone resets it.
Check valve
A check valve lets gas flow in one direction only. It is usually a chamber containing a ball that is pressed against one end by a spring. Gas flow one way pushes the ball out of the way, and a lack of flow or a reverse flow allows the spring to push the ball into the inlet, blocking it. Not to be confused with a flashback arrestor, a check valve is not designed to block a shock wave. The shock wave could occur while the ball is so far from the inlet that the wave will get past the ball before it can reach its off position.
Torch
The torch is the tool that the welder holds and manipulates to make the weld. It has a connection and valve for the fuel gas and a connection and valve for the oxygen, a handle for the welder to grasp, and a mixing chamber (set at an angle) where the fuel gas and oxygen mix, with a tip where the flame forms. Two basic types of torches are positive pressure type and low pressure or injector type.
Welding torch
A welding torch head is used to weld metals. It can be identified by having only one or two pipes running to the nozzle, no oxygen-blast trigger, and two valve knobs at the bottom of the handle letting the operator adjust the oxygen and fuel flow respectively.
Cutting torch
A cutting torch head is used to cut materials. It is similar to a welding torch, but can be identified by the oxygen blast trigger or lever.
When cutting, the metal is first heated by the flame until it is cherry red. Once this temperature is attained, oxygen is supplied to the heated parts by pressing the oxygen-blast trigger. This oxygen reacts with the metal, producing more heat and forming an oxide which is then blasted out of the cut. It is the heat that continues the cutting process. The cutting torch only heats the metal to start the process; further heat is provided by the burning metal.
The melting point of the iron oxide is around half that of the metal being cut. As the metal burns, it immediately turns to liquid iron oxide and flows away from the cutting zone. However, some of the iron oxide remains on the workpiece, forming a hard "slag" which can be removed by gentle tapping and/or grinding.
Rose bud torch
A rose bud torch is used to heat metals for bending, straightening, etc. where a large area needs to be heated. It is so-called because the flame at the end looks like a rose bud. A welding torch can also be used to heat small areas such as rusted nuts and bolts.
Injector torch
A typical oxy-fuel torch, called an equal-pressure torch, merely mixes the two gases. In an injector torch, high-pressure oxygen comes out of a small nozzle inside the torch head which drags the fuel gas along with it, using the Venturi effect.
Fuels
Oxy-fuel processes may use a variety of fuel gases (or combustible liquids), the most common being acetylene. Other gases that may be used are propylene, liquified petroleum gas (LPG), propane, natural gas, hydrogen, and MAPP gas. Liquid fuel cutting systems use fuels such as gasoline (petrol), diesel, kerosene, and possibly some aviation fuels.
Acetylene
Acetylene is the primary fuel for oxy-fuel welding and is the fuel of choice for repair work and general cutting and welding. Acetylene gas is shipped in special cylinders designed to keep the gas dissolved. The cylinders are packed with porous materials (e.g. kapok fibre, diatomaceous earth, or (formerly) asbestos), then filled to around 50% capacity with acetone, as acetylene is soluble in acetone. This method is necessary because above about 207 kPa (30 psi) absolute pressure, acetylene is unstable and may explode.
There is about 1,700 kPa (250 psi) pressure in the tank when full. When combined with oxygen, acetylene burns at about 3,500 °C (6,330 °F), the highest among commonly used gaseous fuels. As a fuel, acetylene's primary disadvantage in comparison to other fuels is its high price.
As acetylene is unstable at pressures roughly equivalent to those found about 10 m (33 ft) underwater, water-submerged cutting and welding is reserved for hydrogen rather than acetylene.
Gasoline
Tests showed that an oxy-gasoline torch can cut thin steel plate at the same rate as oxy-acetylene; in thicker plate the cutting rate exceeded that of oxy-acetylene, by up to a factor of three in the thickest plate tested. Additionally, the liquid fuel vapour is about four times the density of a gaseous fuel. A high-velocity cutting flame is produced by the huge volume expansion as the liquid transitions to a vapour, so the cutting flame can cut across voids (air spaces between plates).
Oxy-gasoline torches can also cut through paint, dirt, rust and other contaminating surface materials coating old steel. This system provides almost 100% oxidation during cutting, leaving almost no molten steel in the slag and so preventing the cut pieces from sticking together. Operating cost for a gasoline torch is typically 75-90% less than for propane or acetylene.
The gasoline can be fed either from a pressurized tank (whose pressure can be hand-pumped or fed from a gas cylinder) or a non-pressurized tank, with the fuel being drawn into the torch by a venturi action created by the pressurized oxygen flow. Another low cost approach commonly used by jewelry makers in Asia is using air bubbled through a gasoline container by a foot-operated air pump, and burning the fuel-air mixture in a specialized welding torch.
Diesel
Diesel is a newer option in the liquid fuel cutting torch market. Diesel torches claim several advantages over gaseous fuels and gasoline: diesel is said to be inherently safer and more powerful than gasoline or gaseous fuels such as acetylene and propane, and to cut steel faster and more cheaply than either of those gases. In addition, the liquid fuel vapor is about five times the density of a gaseous fuel, providing much greater "punch". A high-velocity cutting flame is produced by the huge volume expansion when the liquid transitions to a vapor, so the cutting flame will easily cut across air voids between plates. A diesel/oxygen torch can cut through paint, dirt, rust and other surface contaminants on steel. This system provides almost 100% oxidation during cutting, so it leaves virtually no molten steel in the slag, preventing the cut materials from sticking together. The operating cost for a diesel torch is typically 75-90% less than for propane or acetylene. Diesel torches are seeing growing use in the demolition and scrap industries.
Hydrogen
Hydrogen has a clean flame and is good for use on aluminium. It can be used at a higher pressure than acetylene and is therefore useful for underwater welding and cutting. It is a good type of flame to use when heating large amounts of material. The flame temperature is high, about 2,000 °C for hydrogen gas in air at atmospheric pressure, and up to 2800 °C when pre-mixed in a 2:1 ratio with pure oxygen (oxyhydrogen). Hydrogen is not used for welding steels and other ferrous materials, because it causes hydrogen embrittlement.
For some oxyhydrogen torches the oxygen and hydrogen are produced by electrolysis of water in an apparatus which is connected directly to the torch (the stoichiometry involved is shown after the list below). Types of this sort of torch:
The oxygen and the hydrogen are led off the electrolysis cell separately and are fed into the two gas connections of an ordinary oxy-gas torch. This happens in the water torch, which is sometimes used in small torches used in making jewelry and electronics.
The mixed oxygen and hydrogen are drawn from the electrolysis cell and are led into a special torch designed to prevent flashback. See oxyhydrogen.
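Electrolysis delivers hydrogen and oxygen in exactly the 2:1 molar ratio that combustion consumes, which is why a single cell can feed the torch continuously; the two standard reactions are
$$2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \ \text{(electrolysis)}, \qquad 2\,\mathrm{H_2} + \mathrm{O_2} \longrightarrow 2\,\mathrm{H_2O} \ \text{(combustion)}.$$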
MPS and MAPP gas
Methylacetylene-propadiene (MAPP) gas and MPS gas are similar fuels, MPS being liquefied petroleum gas mixed with methylacetylene-propadiene. The fuel has the storage and shipping characteristics of LPG and has a heat value a little lower than that of acetylene. Because it can be shipped in small containers for sale at retail stores, it is used by hobbyists; it is also used by large industrial companies and shipyards because it does not polymerize at high pressures (above 15 psi or so, as acetylene does) and is therefore much less dangerous than acetylene.
Further, more of it can be stored in a single place at one time, as the increased compressibility allows for more gas to be put into a tank. MAPP gas can be used at much higher pressures than acetylene, sometimes up to 40 or 50 psi in high-volume oxy-fuel cutting torches used for cutting thick steel sections. Other welding gases that develop comparable temperatures need special procedures for safe shipping and handling. MPS and MAPP are recommended for cutting applications in particular, rather than welding applications.
On 30 April 2008 the Petromont Varennes plant closed its methylacetylene/propadiene crackers. As it was the only North American plant making MAPP gas, many substitutes were introduced by companies that had repackaged the Dow and Varennes product(s) - most of these substitutes are propylene, see below.
Propylene and fuel gas
Propylene is used in production welding and cutting. It cuts similarly to propane. When propylene is used, the torch rarely needs tip cleaning. There is often a substantial advantage to cutting with an injector torch (see the propane section) rather than an equal-pressure torch when using propylene. Quite a few North American suppliers have begun selling propylene under proprietary trademarks such as FG2 and Fuel-Max.
Butane, propane and butane/propane mixes
Butane, like propane, is a saturated hydrocarbon. Butane and propane do not react with each other and are regularly mixed. Butane boils at 0.6 °C. Propane is more volatile, with a boiling point of -42 °C. Vaporization is rapid at temperatures above the boiling points. The calorific (heat) values of the two are almost equal. Both are thus mixed to attain the vapor pressure that is required by the end user and depending on the ambient conditions. If the ambient temperature is very low, propane is preferred to achieve higher vapor pressure at the given temperature.
Propane does not burn as hot as acetylene in its inner cone, and so it is rarely used for welding. Propane, however, has a very high number of BTUs per cubic foot in its outer cone, and so with the right torch (injector style) can make a faster and cleaner cut than acetylene, and is much more useful for heating and bending than acetylene.
The maximum neutral flame temperature of propane in oxygen is about 2,800 °C (5,100 °F).
Propane is cheaper than acetylene and easier to transport.
Operating costs
The following is a comparison of operating costs for cutting steel plate, based on average costs for oxygen and the different fuels in May 2012. The operating cost with gasoline was 25% that of propane and 10% that of acetylene. Numbers will vary depending on the source of oxygen or fuel and on the type of cutting and the cutting environment or situation.
The role of oxygen
Oxygen is not the fuel. It is the oxidizing agent, which chemically combines with the fuel to produce the heat for welding. This is called 'oxidation', but the more specific and more commonly used term in this context is 'combustion'. In the case of hydrogen, the product of combustion is simply water. For the other hydrocarbon fuels, water and carbon dioxide are produced. The heat is released because the molecules of the products of combustion have a lower energy state than the molecules of the fuel and oxygen. In oxy-fuel cutting, oxidation of the metal being cut (typically iron) produces nearly all of the heat required to "burn" through the workpiece.
Oxygen is usually produced elsewhere by distillation of liquefied air and shipped to the welding site in high-pressure vessels (commonly called "tanks" or "cylinders") at a pressure of about 21,000 kPa (3,000 lbf/in² = 200 atmospheres). It is also shipped as a liquid in Dewar type vessels (like a large Thermos jar) to places that use large amounts of oxygen.
It is also possible to separate oxygen from air by passing the air, under pressure, through a zeolite sieve that selectively adsorbs the nitrogen and lets the oxygen (and argon) pass. This gives a purity of oxygen of about 93%. This method works well for brazing, but higher-purity oxygen is necessary to produce a clean, slag-free kerf when cutting.
Types of flame
The welder can adjust the oxy-acetylene flame to be carburizing (aka reducing), neutral, or oxidizing. Adjustment is made by adding more or less oxygen to the acetylene flame. The neutral flame is the flame most generally used when welding or cutting. The welder uses the neutral flame as the starting point for all other flame adjustments because it is so easily defined. This flame is attained when welders, as they slowly open the oxygen valve on the torch body, first see only two flame zones. At that point, the acetylene is being completely burned in the welding oxygen and surrounding air. The flame is chemically neutral.
The two parts of this flame are the light blue inner cone and the darker blue to colorless outer cone. The inner cone is where the acetylene and the oxygen combine. The tip of this inner cone is the hottest part of the flame, providing enough heat to easily melt steel. In the inner cone the acetylene breaks down and partly burns to hydrogen and carbon monoxide, which in the outer cone combine with more oxygen from the surrounding air and burn.
An excess of acetylene creates a reducing (sometimes called carbonizing) flame. This flame is characterized by three flame zones; the hot inner cone, a white-hot "acetylene feather", and the blue-colored outer cone. This is the type of flame observed when oxygen is first added to the burning acetylene. The feather is adjusted and made ever smaller by adding increasing amounts of oxygen to the flame. A welding feather is measured as 2X or 3X, with X being the length of the inner flame cone.
The unburned carbon insulates the flame and lowers its temperature. The reducing flame is typically used for hardfacing operations or backhand pipe welding techniques. The feather is caused by incomplete combustion of the acetylene, which produces an excess of carbon in the flame. Some of this carbon is dissolved by the molten metal, carbonizing it. The carbonizing flame will tend to remove the oxygen from iron oxides which may be present, a fact which has caused the flame to be known as a "reducing flame".
The oxidizing flame is the third possible flame adjustment. It occurs when the ratio of oxygen to acetylene required for a neutral flame has been changed to give an excess of oxygen. This flame type is observed when welders add more oxygen to the neutral flame. This flame is hotter than the other two flames because the combustible gases will not have to search so far to find the necessary amount of oxygen, nor heat up as much thermally inert carbon. It is called an oxidizing flame because of its effect on metal. This flame adjustment is generally not preferred. The oxidizing flame creates undesirable oxides to the structural and mechanical detriment of most metals. In an oxidizing flame, the inner cone acquires a purplish tinge and gets pinched and smaller at the tip, and the sound of the flame gets harsh. A slightly oxidizing flame is used in braze-welding and bronze-surfacing, while a more strongly oxidizing flame is used in fusion welding certain brasses and bronzes.
The size of the flame can be adjusted to a limited extent by the valves on the torch and by the regulator settings, but in the main it depends on the size of the orifice in the tip. In fact, the tip should be chosen first according to the job at hand, and then the regulators set accordingly.
Welding
The flame is applied to the base metal and held until a small puddle of molten metal is formed. The puddle is moved along the path where the weld bead is desired. Usually, more metal is added to the puddle as it is moved along by dipping metal from a welding rod or filler rod into the molten metal puddle. The metal puddle will travel towards where the metal is the hottest. This is accomplished through torch manipulation by the welder.
The amount of heat applied to the metal is a function of the welding tip size, the speed of travel, and the welding position. The flame size is determined by the welding tip size. The proper tip size is determined by the metal thickness and the joint design.
Welding gas pressures using oxy-acetylene are set in accordance with the manufacturer's recommendations. The welder will modify the speed of welding travel to maintain a uniform bead width. Uniformity is a quality attribute indicating good workmanship. Trained welders are taught to keep the bead the same size at the beginning of the weld as at the end. If the bead gets too wide, the welder increases the speed of welding travel. If the bead gets too narrow or if the weld puddle is lost, the welder slows down the speed of travel. Welding in the vertical or overhead positions is typically slower than welding in the flat or horizontal positions.
The welder must add the filler rod to the molten puddle. The welder must also keep the filler metal in the hot outer flame zone when not adding it to the puddle, to protect the filler metal from oxidation. If the welding flame is allowed to burn off the filler metal, the metal will not wet into the base metal and will look like a series of cold dots on it. There is very little strength in a cold weld. When the filler metal is properly added to the molten puddle, the resulting weld will be stronger than the original base metal.
Welding lead or 'lead burning' was much more common in the 19th century to make some pipe connections and tanks. Great skill is required, but it can be quickly learned. In building construction today some lead flashing is welded, but soldered copper flashing is much more common in America. In the automotive body repair industry before the 1980s, oxyacetylene gas torch welding was seldom used to weld sheet metal, since warping and excess heat were byproducts. Automotive body repair methods at the time were crude and yielded poor results until MIG welding became the industry standard. Since the 1970s, when high strength steel became the standard for automotive manufacturing, electric welding became the preferred method. After the 1980s, oxyacetylene torches fell out of use for sheet metal welding in the industrialized world.
Cutting
For cutting, the setup is a little different. A cutting torch has a 60- or 90-degree angled head with orifices placed around a central jet. The outer jets are for preheat flames of oxygen and acetylene. The central jet carries only oxygen for cutting. The use of several preheating flames rather than a single flame makes it possible to change the direction of the cut as desired without changing the position of the nozzle or the angle which the torch makes with the direction of the cut, as well as giving a better preheat balance. Manufacturers have developed custom tips for Mapp, propane, and propylene gases to optimize the flames from these alternate fuel gases.
The flame is not intended to melt the metal, but to bring it to its ignition temperature.
The torch's trigger blows extra oxygen at higher pressures down the torch's third tube out of the central jet into the workpiece, causing the metal to burn and blowing the resulting molten oxide through to the other side. The ideal kerf is a narrow gap with a sharp edge on either side of the workpiece; overheating the workpiece and thus melting through it causes a rounded edge.
Cutting is initiated by heating the edge or leading face (as in cutting shapes such as round rod) of the steel to the ignition temperature (approximately bright cherry red heat) using the pre-heat jets only, then using the separate cutting oxygen valve to release the oxygen from the central jet. The oxygen chemically combines with the iron in the ferrous material to oxidize the iron quickly into molten iron oxide, producing the cut. Initiating a cut in the middle of a workpiece is known as piercing.
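The exothermic oxidation reactions that sustain the cut are, in standard stoichiometric form (which oxide dominates depends on the temperature and the oxygen supply):
$$2\,\mathrm{Fe} + \mathrm{O_2} \longrightarrow 2\,\mathrm{FeO} + \text{heat}, \qquad 3\,\mathrm{Fe} + 2\,\mathrm{O_2} \longrightarrow \mathrm{Fe_3O_4} + \text{heat}.$$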
It is worth noting several things at this point:
The oxygen flow rate is critical; too little will make a slow ragged cut, while too much will waste oxygen and produce a wide concave cut. Oxygen lances and other custom made torches do not have a separate pressure control for the cutting oxygen, so the cutting oxygen pressure must be controlled using the oxygen regulator. The oxygen cutting pressure should match the cutting tip oxygen orifice. The tip manufacturer's equipment data should be reviewed for the proper cutting oxygen pressures for the specific cutting tip.
The oxidation of iron by this method is highly exothermic. Once it has started, steel can be cut at a surprising rate, far faster than if it were merely melted through. At this point, the pre-heat jets are there purely for assistance. The rise in temperature will be obvious by the intense glare from the ejected material, even through proper goggles. A thermal lance is a tool that also uses rapid oxidation of iron to cut through almost any material.
Since the melted metal flows out of the workpiece, there must be room on the opposite side of the workpiece for the spray to exit. When possible, pieces of metal are cut on a grate that lets the melted metal fall freely to the ground. The same equipment can be used for oxyacetylene blowtorches and welding torches, by exchanging the part of the torch in front of the torch valves.
For a basic oxy-acetylene rig, the cutting speed in light steel section will usually be nearly twice as fast as a petrol-driven cut-off grinder. The advantages when cutting large sections are obvious: an oxy-fuel torch is light, small and quiet and needs very little effort to use, whereas an angle grinder is heavy and noisy, needs considerable operator exertion and may vibrate severely, leading to stiff hands and possible long-term vibration white finger. Oxy-acetylene torches can easily cut through ferrous materials in excess of 50 mm (2 in). Oxygen lances are used in scrapping operations and cut sections thicker than 200 mm (8 in). Cut-off grinders are useless for these kinds of application.
Robotic oxy-fuel cutters sometimes use a high-speed divergent nozzle. This uses an oxygen jet that opens slightly along its passage. This allows the compressed oxygen to expand as it leaves, forming a high-velocity jet that spreads less than a parallel-bore nozzle, allowing a cleaner cut. These are not used for cutting by hand since they need very accurate positioning above the work. Their ability to produce almost any shape from large steel plates gives them a secure future in shipbuilding and in many other industries.
Oxy-propane torches are usually used for cutting up scrap to save money, as LPG is far cheaper joule for joule than acetylene, although propane does not produce acetylene's very neat cut profile. Propane also finds a place in production, for cutting very large sections.
Oxy-acetylene can cut only low- to medium-carbon steels and wrought iron. High-carbon steels are difficult to cut because the melting point of the slag is closer to the melting point of the parent metal, so that the slag from the cutting action does not eject as sparks but rather mixes with the clean melt near the cut. This keeps the oxygen from reaching the clean metal and burning it. In the case of cast iron, graphite between the grains and the shape of the grains themselves interfere with the cutting action of the torch. Stainless steels cannot be cut either because the material does not burn readily.
Safety
Oxyacetylene welding/cutting is generally considered not to be difficult, but there are a good number of subtle safety points that should be learned, such as:
No more than 1/7 of the cylinder's capacity should be used per hour (a worked example follows this list). A higher draw rate causes the acetone inside the acetylene cylinder to come out of the cylinder and contaminate the hose and possibly the torch.
Acetylene is dangerous above 1 atm (15 psi) pressure. It is unstable and explosively decomposes.
Proper ventilation when welding will help to avoid large chemical exposure.
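A quick arithmetic sketch of the 1/7 rule above; the cylinder volume here is a hypothetical example, not a standard size:
$$\text{maximum hourly draw} = \frac{C}{7\ \text{h}}, \qquad \text{e.g. } C = 8{,}400\ \text{L} \ \Rightarrow\ \frac{8{,}400\ \text{L}}{7\ \text{h}} = 1{,}200\ \text{L per hour}.$$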
Eye protection
Proper protection such as welding goggles should be worn at all times to protect the eyes against glare and flying sparks. Special safety eyewear must be used, both to protect the welder and to provide a clear view through the yellow-orange flare given off by the incandescing flux. In the 1940s cobalt melters' glasses were borrowed from steel foundries and were still available until the 1980s.
However, the lack of protection from impact, ultra-violet, infrared and blue light caused severe eyestrain and eye damage. Didymium eyewear, developed for glassblowers in the 1960s, was also borrowed—until many complained of eye problems from excessive infrared, blue light, and insufficient shading. Today very good eye protection can be found designed especially for gas-welding aluminum that cuts the sodium orange flare completely and provides the necessary protection from ultraviolet, infrared, blue light and impact, according to ANSI Z87-1989 safety standards for a Special Purpose Lens.
Safety with cylinders
Fuel and oxygen tanks should be fastened securely and upright to a wall, post, or portable cart. An oxygen tank is especially dangerous because the gas is stored at a pressure of about 21,000 kPa (3,000 psi) when full. If the tank falls over and damages the valve, the tank can be propelled at high speed by the compressed oxygen escaping the cylinder. Tanks in this state are capable of breaking through a brick wall.
For this reason, an oxygen tank should never be moved around without its valve cap screwed in place.
On an oxyacetylene torch system there are three types of valves: the tank valve, the regulator valve, and the torch valve. Each gas in the system will have each of these three valves. The regulator converts the high pressure gas inside of the tanks to a low pressure stream suitable for welding. Acetylene cylinders must be maintained in an upright position to prevent the internal acetone and acetylene from separating in the filler material.
Chemical exposure
A less obvious hazard of welding is exposure to harmful chemicals. Exposure to certain metals, metal oxides, or carbon monoxide can often lead to severe medical conditions. Damaging chemicals can be produced from the fuel, from the work-piece, or from a protective coating on the work-piece. By increasing ventilation around the welding environment, exposure to harmful chemicals is significantly reduced from any source.
The most common fuel used in welding is acetylene, which has a two-stage reaction. The primary chemical reaction involves the acetylene disassociating in the presence of oxygen to produce heat, carbon monoxide, and hydrogen gas: C2H2 + O2 → 2CO + H2. A secondary reaction follows where the carbon monoxide and hydrogen combine with more oxygen to produce carbon dioxide and water vapor. When the secondary reaction does not burn all of the reactants from the primary reaction, the welding process can often produce large amounts of carbon monoxide. Carbon monoxide is also the byproduct of many other incomplete fuel reactions.
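Written out, with the secondary stage balanced against the oxygen needed to complete combustion, the two stages are
$$\mathrm{C_2H_2} + \mathrm{O_2} \longrightarrow 2\,\mathrm{CO} + \mathrm{H_2}, \qquad 2\,\mathrm{CO} + \mathrm{H_2} + \tfrac{3}{2}\,\mathrm{O_2} \longrightarrow 2\,\mathrm{CO_2} + \mathrm{H_2O}.$$
Summing the two stages gives the overall reaction $2\,\mathrm{C_2H_2} + 5\,\mathrm{O_2} \rightarrow 4\,\mathrm{CO_2} + 2\,\mathrm{H_2O}$; carbon monoxide is what remains when the second stage is starved of oxygen.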
Almost every piece of metal is an alloy of one type or another. Copper, aluminum, and other base metals are occasionally alloyed with beryllium, which is a highly toxic metal. When a metal like this is welded or cut, high concentrations of toxic beryllium fumes are released. Long-term exposure to beryllium may result in shortness of breath, chronic cough, and significant weight loss, accompanied by fatigue and general weakness. Other alloying elements such as arsenic, manganese, silver, and aluminum can cause sickness to those who are exposed.
More common are the anti-rust coatings on many manufactured metal components. Zinc, cadmium, and fluorides are often used to protect irons and steels from oxidizing. Galvanized metals have a very heavy zinc coating. Exposure to zinc oxide fumes can lead to a sickness named "metal fume fever". This condition rarely lasts longer than 24 hours, but severe cases can be fatal. Not unlike common influenza, fevers, chills, nausea, cough, and fatigue are common effects of high zinc oxide exposure.
Flashback
Flashback is the condition of the flame propagating down the hoses of an oxy-fuel welding and cutting system. The flame burns backwards into the hose, causing a popping or squealing noise, and it can cause an explosion in the hose with the potential to injure or kill the operator. Using a lower pressure than recommended can cause a flashback. To prevent such a situation a flashback arrestor is usually employed.
See also
List of welding processes
Gas metal arc ("MIG"/"MAG") welding
Shielded metal arc ("stick") welding
Tungsten inert gas ("TIG") welding
Air-arc cutting
Flame cleaning
Oxyhydrogen flame
Plasma arc cutting
Thermal lance
Underwater welding
References
Bibliography
Further reading
External links
"Welding and Cutting with Oxyacetylene" Popular Mechanics, December 1935 pp. 948–953
Using Oxy-Fuel Welding on Aircraft Aluminum Sheet
More on oxyacetylene
welding history at Welding.com
An e-book about oxy-gas cutting and welding
Oxy-fuel torch at Everything2.com
Torch Brazing Information
Video of how to weld lead sheet
Working with lead sheet
Burners
Hydrogen technologies
Metalworking cutting tools
Oxygen
Acetylene
Propane
Butane
Welding
Industrial gases | Oxy-fuel welding and cutting | [
"Chemistry",
"Engineering"
] | 8,324 | [
"Chemical process engineering",
"Industrial gases",
"Mechanical engineering",
"Welding"
] |
12,415,190 | https://en.wikipedia.org/wiki/Action%20algebra | In algebraic logic, an action algebra is an algebraic structure which is both a residuated semilattice and a Kleene algebra. It adds the star or reflexive transitive closure operation of the latter to the former, while adding the left and right residuation or implication operations of the former to the latter. Unlike dynamic logic and other modal logics of programs, for which programs and propositions form two distinct sorts, action algebra combines the two into a single sort. It can be thought of as a variant of intuitionistic logic with star and with a noncommutative conjunction whose identity need not be the top element. Unlike Kleene algebras, action algebras form a variety, which furthermore is finitely axiomatizable, the crucial axiom being a•(a → a)* ≤ a. Unlike models of the equational theory of Kleene algebras (the regular expression equations), the star operation of action algebras is reflexive transitive closure in every model of the equations. Action algebras were introduced by Vaughan Pratt in the European Workshop JELIA'90.
Definition
An action algebra (A, ∨, 0, •, 1, ←, →, *) is an algebraic structure such that (A, ∨, •, 1, ←, →) forms a residuated semilattice in the sense of Ward and Dilworth, while (A, ∨, 0, •, 1, *) forms a Kleene algebra in the sense of Dexter Kozen. That is, it is any model of the joint theory of both classes of algebras. Now Kleene algebras are axiomatized with quasiequations, that is, implications between two or more equations, whence so are action algebras when axiomatized directly in this way. However, action algebras have the advantage that they also have an equivalent axiomatization that is purely equational. The language of action algebras extends in a natural way to that of action lattices, namely by the inclusion of a meet operation.
In the following we write the inequality a ≤ b as an abbreviation for the equation a ∨ b = b. This allows us to axiomatize the theory using inequalities yet still have a purely equational axiomatization when the inequalities are expanded to equalities.
The equations axiomatizing action algebra are those for a residuated semilattice, together with the following equations for star.
1 ∨ a*•a* ∨ a ≤ a*
a* ≤ (a ∨ b)*
(a → a)* ≤ a → a
The first equation can be broken out into three equations, 1 ≤ a*, a*•a* ≤ a*, and a ≤ a*. Defining a to be reflexive when 1 ≤ a and transitive when a•a ≤ a by abstraction from binary relations, the first two of those equations force a* to be reflexive and transitive while the third forces a* to be greater or equal to a. The next axiom asserts that star is monotone. The last axiom can be written equivalently as a•(a → a)* ≤ a, a form which makes its role as induction more apparent. These two axioms in conjunction with the axioms for a residuated semilattice force a* to be the least reflexive transitive element of the semilattice of elements greater or equal to a. Taking that as the definition of reflexive transitive closure of a, we then have that for every element a of any action algebra, a* is the reflexive transitive closure of a.
Properties
The equational theory of the implication-free fragment of action algebras, those equations not containing → or ←, can be shown to coincide with the equational theory of Kleene algebras, also known as the regular expression equations. In that sense the above axioms constitute a finite axiomatization of regular expressions. Redko showed in 1967 that the regular expression equations had no finite axiomatization, for which John Horton Conway gave a shorter proof in 1971.
Arto Salomaa gave an equation schema axiomatizing this theory which Dexter Kozen subsequently reformulated as a finite axiomatization using quasiequations, or implications between equations, the crucial quasiequations being those of induction: if x•a ≤ x then x•a* ≤ x, and if a•x ≤ x then a*•x ≤ x. Kozen defined a Kleene algebra to be any model of this finite axiomatization.
Conway showed that the equational theory of regular expressions admits models in which a* is not the reflexive transitive closure of a, by giving a four-element model 0 ≤ 1 ≤ a ≤ a* in which a•a = a. In Conway's model, a is reflexive and transitive, whence its reflexive transitive closure should be a. However, the regular expressions do not enforce this, allowing a* to be strictly greater than a. Such anomalous behavior is not possible in an action algebra, which forces a* to be the least reflexive transitive element greater or equal to a.
Examples
Any Heyting algebra (and hence any Boolean algebra) is made an action algebra by taking • to be ∧ and a* = 1. This is necessary and sufficient for star because the top element 1 of a Heyting algebra is its only reflexive element, and is transitive as well as greater or equal to every element of the algebra.
The set 2Σ* of all formal languages (sets of finite strings) over an alphabet Σ forms an action algebra with 0 as the empty set, 1 = {ε}, ∨ as union, • as concatenation, L ← M as the set of all strings x such that xM ⊆ L (and dually for M → L), and L* as the set of all finite concatenations of strings in L (the Kleene closure).
The set 2X² of all binary relations on a set X forms an action algebra with 0 as the empty relation, 1 as the identity relation or equality, ∨ as union, • as relation composition, R ← S as the relation consisting of all pairs (x,y) such that for all z in X, ySz implies xRz (and dually for S → R), and R* as the reflexive transitive closure of R, defined as the union over all relations Rn for integers n ≥ 0.
The two preceding examples are power sets, which are Boolean algebras under the usual set theoretic operations of union, intersection, and complement. This justifies calling them Boolean action algebras. The relational example constitutes a relation algebra equipped with an operation of reflexive transitive closure. Note that every Boolean algebra is a Heyting algebra and therefore an action algebra by virtue of being an instance of the first example.
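To make the relational example concrete, the following Python sketch (an illustration only; the helper names compose, right_residual and star are my own, not from the source) builds the relevant operations on binary relations over a small finite set and checks a few of the axioms above, including the induction axiom a•(a → a)* ≤ a.

```python
from itertools import product

def compose(R, S):
    """Relation composition R•S = {(x, z) : (x, y) in R and (y, z) in S for some y}."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def right_residual(S, R, X):
    """S → R: all pairs (y, z) such that for every x in X, (x, y) in S implies (x, z) in R."""
    return {(y, z) for y, z in product(X, X)
            if all((x, z) in R for x in X if (x, y) in S)}

def star(R, X):
    """Reflexive transitive closure of R: the least reflexive transitive relation containing R."""
    closure = {(x, x) for x in X} | set(R)      # start from 1 ∨ a
    while True:
        bigger = closure | compose(closure, closure)
        if bigger == closure:
            return closure
        closure = bigger

X = {0, 1, 2, 3}
identity = {(x, x) for x in X}
a = {(0, 1), (1, 2), (2, 3)}
a_star = star(a, X)

assert identity <= a_star                                    # 1 ≤ a*
assert compose(a_star, a_star) <= a_star                     # a*•a* ≤ a*
assert a <= a_star                                           # a ≤ a*
assert compose(a, star(right_residual(a, a, X), X)) <= a     # a•(a → a)* ≤ a
```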
See also
Dynamic logic (modal logic)
Kleene star
Regular expression
References
Formal languages
Algebraic logic
Algebraic structures | Action algebra | [
"Mathematics"
] | 1,458 | [
"Mathematical structures",
"Formal languages",
"Mathematical logic",
"Mathematical objects",
"Fields of abstract algebra",
"Algebraic logic",
"Algebraic structures"
] |
12,416,124 | https://en.wikipedia.org/wiki/Multiplet | In physics and particularly in particle physics, a multiplet is the state space for 'internal' degrees of freedom of a particle; that is, degrees of freedom associated to a particle itself, as opposed to 'external' degrees of freedom such as the particle's position in space. Examples of such degrees of freedom are the spin state of a particle in quantum mechanics, or the color, isospin and hypercharge state of particles in the Standard Model of particle physics. Formally, we describe this state space by a vector space which carries the action of a group of continuous symmetries.
Mathematical formulation
Mathematically, multiplets are described via representations of a Lie group or its corresponding Lie algebra, and the term is usually used to refer to irreducible representations (irreps, for short).
At the group level, this is a triplet (V, G, ρ) where
V is a vector space over a field (in the algebra sense) 𝔽, generally taken to be ℝ or ℂ.
G is a Lie group. This is often a compact Lie group.
ρ is a group homomorphism ρ : G → GL(V), that is, a map from the group to the space of invertible linear maps on V. This map must preserve the group structure: for g, h ∈ G we have ρ(gh) = ρ(g)ρ(h).
At the algebra level, this is a triplet (V, 𝔤, ρ), where
V is as before.
𝔤 is a Lie algebra. It is often a finite-dimensional Lie algebra over ℝ or ℂ.
ρ is a Lie algebra homomorphism ρ : 𝔤 → 𝔤𝔩(V). This is a linear map which preserves the Lie bracket: for X, Y ∈ 𝔤 we have ρ([X, Y]) = [ρ(X), ρ(Y)].
The symbol ρ is used for both Lie algebra and Lie group representations as, at least in finite dimension, there is a well understood correspondence between Lie groups and Lie algebras.
In mathematics, it is common to refer to the homomorphism ρ as the representation, for example in the sentence 'consider a representation ρ', and the vector space V is referred to as the 'representation space'. In physics sometimes the vector space is referred to as the representation, for example in the sentence 'we model the particle as transforming in the singlet representation', or even to refer to a quantum field which takes values in such a representation, and the physical particles which are modelled by such a quantum field.
For an irreducible representation, an n-plet refers to an n-dimensional irreducible representation. Generally, a group may have multiple non-isomorphic representations of the same dimension, so this does not fully characterize the representation. An exception is SU(2), which has exactly one irreducible representation of dimension n + 1 for each non-negative integer n.
For example, consider real three-dimensional space, ℝ³. The group of 3D rotations SO(3) acts naturally on this space as a group of 3 × 3 orthogonal matrices. This explicit realisation of the rotation group is known as the fundamental representation ρ_fund, so ℝ³ is a representation space. The full data of the representation is (ℝ³, SO(3), ρ_fund). Since the dimension of this representation space is 3, this is known as the triplet representation for SO(3), and it is common to denote this as 3 (in boldface).
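A minimal numerical sketch of this example (my own illustration; numpy and the choice of a rotation about the z-axis are assumptions, not part of the source) builds an explicit family of SO(3) matrices acting on the three-dimensional representation space and checks the homomorphism and invertibility properties required of ρ:

```python
import numpy as np

def rho(theta):
    """Rotation by angle theta about the z-axis, an element of SO(3) acting on R^3."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

a, b = 0.7, 1.3
# Homomorphism property on this one-parameter subgroup: rho(a + b) = rho(a) rho(b)
assert np.allclose(rho(a + b), rho(a) @ rho(b))

R = rho(a)
# Group elements act as invertible (orthogonal, determinant +1) linear maps on R^3
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)

v = np.array([1.0, 2.0, 3.0])
print(R @ v)          # the action of a group element on a vector of the triplet
```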
Application to theoretical physics
For applications to theoretical physics, we can restrict our attention to the representation theory of a handful of physically important groups. Many of these have well understood representation theory:
U(1): Part of the gauge group of the Standard model, and the gauge group for theories of electromagnetism. Irreps are all 1-dimensional and are indexed by integers n, given explicitly by ρ_n(e^{iθ}) = e^{inθ}. The index n can be understood as the winding number of the map.
SU(2): Part of the gauge group of the Standard model. Irreps are indexed by non-negative integers n, where (depending on normalisation) n describes the dimension or the highest weight of the representation. In physics it is common convention to label these by half-integers j = n/2 instead (a small numerical sketch is given after this list). See Representation theory of SU(2).
SO(3): The group of rotations of 3D space. Irreps are the odd-dimensional irreps of SU(2).
SU(3): Part of the gauge group of the Standard model. Irreps are indexed by pairs of non-negative integers (m, n), describing the highest weight of the representation. See Clebsch-Gordan coefficients for SU(3).
SO(3,1): The Lorentz group, the linear symmetries of flat spacetime. All representations arise as representations of its corresponding spin group. See Representation theory of the Lorentz group.
SL(2,ℂ): The spin group of SO(3,1). Irreps are indexed by pairs of non-negative integers (m, n), which index the dimension (m + 1)(n + 1) of the representation.
ISO(3,1): The Poincaré group of isometries of flat spacetime. This can be understood in terms of the representation theory of the groups above. See Wigner's classification.
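The sketch referred to in the SU(2) item above (illustrative Python; the function name su2_irrep and the convention ħ = 1 are my own choices) builds the (2j + 1)-dimensional irreducible representation of the su(2) generators and checks the defining commutation relations and the Casimir eigenvalue j(j + 1):

```python
import numpy as np

def su2_irrep(j):
    """Matrices of Jz and the ladder operators J+ and J- in the (2j+1)-dimensional
    irreducible representation of su(2) (units with hbar = 1)."""
    dim = int(round(2 * j)) + 1
    m = np.array([j - k for k in range(dim)])        # Jz eigenvalues: j, j-1, ..., -j
    Jz = np.diag(m)
    Jp = np.zeros((dim, dim))
    for k in range(1, dim):                          # <m+1| J+ |m> = sqrt(j(j+1) - m(m+1))
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return Jz, Jp, Jp.T

for j in (0.5, 1.0, 1.5, 2.0):
    Jz, Jp, Jm = su2_irrep(j)
    dim = Jz.shape[0]
    # The representation preserves the defining commutation relations of the algebra
    assert np.allclose(Jz @ Jp - Jp @ Jz, Jp)        # [Jz, J+] = J+
    assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * Jz)    # [J+, J-] = 2 Jz
    # The Casimir J^2 acts as j(j+1) times the identity, as expected for an irrep
    J2 = 0.5 * (Jp @ Jm + Jm @ Jp) + Jz @ Jz
    assert np.allclose(J2, j * (j + 1) * np.eye(dim))
    print(f"spin j = {j}: dimension {dim}")
```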
These groups all appear in the theory of the Standard model. For theories which extend these symmetries, the representation theory of some other groups might be considered:
Conformal symmetry: For pseudo-Euclidean space with signature (p, q), symmetries are described by the conformal group SO(p+1, q+1).
Supersymmetry: Symmetry described by a supergroup.
Grand unified theories: Gauge groups which contain the Standard model gauge group as a subgroup. Proposed candidates include SU(5) and SO(10).
Physics
Quantum field theory
In quantum physics, the mathematical notion is usually applied to representations of the gauge group. For example, an SU(2) gauge theory will have multiplets which are fields whose representation of SU(2) is determined by the single half-integer number j, the isospin. Since irreducible representations of SU(2) are isomorphic to the 2j-th symmetric power of the fundamental representation, every field has 2j symmetrized internal indices.
Fields also transform under representations of the Lorentz group SO(3,1), or more generally its spin group, which can be identified with SL(2,ℂ) due to an exceptional isomorphism. Examples include scalar fields, commonly denoted φ, which transform in the trivial representation, vector fields (strictly, this might be more accurately labelled a covector field), which transform as a 4-vector, and spinor fields such as Dirac or Weyl spinors which transform in representations of SL(2,ℂ). A right-handed Weyl spinor transforms in the fundamental representation, 2, of SL(2,ℂ).
Beware that besides the Lorentz group, a field can transform under the action of a gauge group. For example, a scalar field φ(x), where x is a spacetime point, might have an isospin state taking values in the fundamental representation of SU(2). Then φ(x) is a vector-valued function of spacetime, but is still referred to as a scalar field, as it transforms trivially under Lorentz transformations.
In quantum field theory different particles correspond one to one with gauged fields transforming in irreducible representations of the internal and Lorentz group. Thus, a multiplet has also come to describe a collection of subatomic particles described by these representations.
Examples
The best known example is a spin multiplet, which describes symmetries of a group representation of an SU(2) subgroup of the Lorentz algebra, which is used to define spin quantization. A spin singlet is a trivial representation, a spin doublet is a fundamental representation and a spin triplet is in the vector representation or adjoint representation.
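As a small numerical illustration of these multiplets (my own sketch, assuming numpy and ħ = 1), coupling two spin doublets produces one singlet and one triplet: the total-spin Casimir on the four-dimensional product space has one eigenvalue s(s + 1) = 0 and three eigenvalues s(s + 1) = 2.

```python
import numpy as np

# Spin-1/2 operators S = sigma/2 (hbar = 1), built from the Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

# Total spin on the tensor product of two doublets: S_tot = S x 1 + 1 x S
S_tot = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
S2 = sum(s @ s for s in S_tot)              # Casimir S_tot^2, eigenvalues s(s + 1)

print(np.round(np.linalg.eigvalsh(S2), 6))
# -> [0. 2. 2. 2.]: one singlet (s = 0) and a three-dimensional triplet (s = 1)
```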
In QCD, quarks are in a multiplet of SU(3), specifically the three-dimensional fundamental representation.
Other uses
Spectroscopy
In spectroscopy, particularly Gamma spectroscopy and X-ray spectroscopy, a multiplet is a group of related or unresolvable spectral lines. Where the number of unresolved lines is small, these are often referred to specifically as doublet or triplet peaks, while multiplet is used to describe groups of peaks in any number.
References
Georgi, H. (1999). Lie Algebras in Particle Physics: From Isospin to Unified Theories (1st ed.). CRC Press. https://doi.org/10.1201/9780429499210
See also
Hypercharge
Isospin
Spin (physics)
Group representation
Multiplicity (chemistry)
Quantum mechanics
Rotational symmetry | Multiplet | [
"Physics"
] | 1,571 | [
"Theoretical physics",
"Quantum mechanics",
"Symmetry",
"Rotational symmetry"
] |
12,416,395 | https://en.wikipedia.org/wiki/Metabolite%20pool | Metabolite pool is a collective term for all of the substances involved in the metabolic process in a biological system.
Metabolic pools are within cells (or organelles such as chloroplasts) and refer to the reservoir of molecules upon which enzymes can operate. The size of the reservoir is referred to as its "metabolic pool." The metabolic pool concept is important to cellular biology.
In certain ways, a metabolic pathway is similar to a factory assembly line. Products are assembled from parts by workers who each perform a specific step in the manufacturing process. Enzymes of a cell are like workers on an assembly line; each is only responsible for a particular step in the assembly process. A lag period also occurs when a new factory is constructed, a time period before finished products begin to roll off the assembly line at a steady rate. This lag period partially results from the time needed to fill supply bins with the necessary parts. As you might imagine, when parts are not readily available, production slows or stops. Metabolite pools are somewhat analogous to the parts bins of a factory. The Calvin-Benson cycle will only operate at full speed when the cellular 'bins' are full of the molecular building blocks that lie between PGA and RUBP.
References
Systems biology | Metabolite pool | [
"Biology"
] | 259 | [
"Systems biology"
] |
7,754,624 | https://en.wikipedia.org/wiki/Alkali%E2%80%93aggregate%20reaction | Alkali–aggregate reaction is a term mainly referring to a reaction which occurs over time in concrete between the highly alkaline cement paste and non-crystalline silicon dioxide, which is found in many common aggregates. This reaction can cause the expansion of the altered aggregate, leading to spalling and loss of strength of concrete.
More accurate terminology
The alkali–aggregate reaction is a general, but relatively vague, expression which can lead to confusion. More exact definitions include the following:
Alkali–silica reaction (ASR, the most common reaction of this type);
Alkali–silicate reaction, and;
Alkali–carbonate reaction.
The alkali–silica reaction is the most common form of alkali–aggregate reaction.
Two other types are:
the alkali–silicate reaction, in which layer silicate minerals (clay minerals), sometimes present as impurities, are attacked, and;
the alkali–carbonate reaction, which is an uncommon attack on certain argillaceous dolomitic limestones, likely involving the expansion of the mineral brucite (Mg(OH)2).
The pozzolanic reaction, which occurs in the setting of a mixture of slaked lime and pozzolanic materials, also has features similar to the alkali–silica reaction, mainly the formation of calcium silicate hydrate (C-S-H).
See also
Energetically modified cement (EMC)
Calthemite
Pozzolanic reaction
External links
Cement.org | Alkali-aggregate reaction
Alkali-Aggregate Reactions (AAR) – International Centre of Research and Applied Technology
Cement
Concrete
Inorganic reactions
"Chemistry",
"Engineering"
] | 385 | [
"Structural engineering",
"Concrete",
"Inorganic reactions"
] |
7,755,182 | https://en.wikipedia.org/wiki/Stride%20scheduling | Stride scheduling is a type of scheduling mechanism that has been introduced as a simple concept to achieve proportional central processing unit (CPU) capacity reservation among concurrent processes. Stride scheduling aims to sequentially allocate a resource for the duration of standard time-slices (quanta) in a fashion that performs periodic recurrences of allocations. Thus, a process p1 which has reserved twice the share of a process p2 will be allocated twice as often as p2. In particular, process p1 will be allocated twice every time p2 is waiting for allocation, assuming that neither of the two processes performs a blocking operation.
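A minimal sketch of the mechanism described above (illustrative only; the constant STRIDE1, the function name and the tie-breaking by process name are my own choices): each process receives a stride inversely proportional to its ticket share, and on every quantum the process with the smallest pass value runs and advances its pass by its stride.

```python
import heapq

STRIDE1 = 1 << 20          # large constant; stride = STRIDE1 / tickets

def stride_schedule(tickets, quanta):
    """Return the allocation order for processes with the given ticket counts."""
    # Each heap entry is (pass, name); every process starts with pass = 0.
    heap = [(0, name) for name in tickets]
    heapq.heapify(heap)
    order = []
    for _ in range(quanta):
        pass_val, name = heapq.heappop(heap)      # the process with minimum pass runs
        order.append(name)
        heapq.heappush(heap, (pass_val + STRIDE1 // tickets[name], name))
    return order

# p1 has reserved twice the share of p2, so it is allocated twice as often
print(stride_schedule({"p1": 2, "p2": 1}, 9))
# e.g. ['p1', 'p2', 'p1', 'p1', 'p2', 'p1', 'p1', 'p2', 'p1']
```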
See also
Computer multitasking
Concurrency control
Concurrent computing
Resource contention
Time complexity
Thread (computing)
References
Computational resources
Concurrency control algorithms
Concurrent computing
Processor scheduling algorithms | Stride scheduling | [
"Technology"
] | 159 | [
"Computing platforms",
"IT infrastructure",
"Concurrent computing",
"Computer science stubs",
"Computer science",
"Computing stubs"
] |
7,755,881 | https://en.wikipedia.org/wiki/Carleson%20measure | In mathematics, a Carleson measure is a type of measure on subsets of n-dimensional Euclidean space Rn. Roughly speaking, a Carleson measure on a domain Ω is a measure that does not vanish at the boundary of Ω when compared to the surface measure on the boundary of Ω.
Carleson measures have many applications in harmonic analysis and the theory of partial differential equations, for instance in the solution of Dirichlet problems with "rough" boundary. The Carleson condition is closely related to the boundedness of the Poisson operator. Carleson measures are named after the Swedish mathematician Lennart Carleson.
Definition
Let n ∈ N and let Ω ⊂ Rn be an open (and hence measurable) set with non-empty boundary ∂Ω. Let μ be a Borel measure on Ω, and let σ denote the surface measure on ∂Ω. The measure μ is said to be a Carleson measure if there exists a constant C > 0 such that, for every point p ∈ ∂Ω and every radius r > 0,
μ(Br(p) ∩ Ω) ≤ C σ(Br(p) ∩ ∂Ω),
where
Br(p) = { x ∈ Rn : |x − p| < r }
denotes the open ball of radius r about p.
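As a hedged numerical illustration for the special case Ω = D, the open unit disk (a sketch of my own, not a standard routine): for a finite atomic measure one can estimate the smallest admissible constant C, i.e. the Carleson norm, by sweeping boundary points p and radii r, using the arc length 4·arcsin(r/2) of ∂D ∩ Br(p).

```python
import numpy as np

def carleson_norm_estimate(points, masses, n_boundary=360, radii=None):
    """Estimate the Carleson norm of the atomic measure mu = sum_k masses[k]*delta(points[k])
    on the open unit disk by maximizing mu(B_r(p) ∩ D) / sigma(B_r(p) ∩ ∂D) over a grid of
    boundary points p and radii r, where sigma is arc length on the unit circle."""
    points = np.asarray(points, dtype=complex)
    masses = np.asarray(masses, dtype=float)
    if radii is None:
        radii = np.linspace(0.05, 2.0, 100)
    best = 0.0
    for t in np.linspace(0.0, 2 * np.pi, n_boundary, endpoint=False):
        p = np.exp(1j * t)
        dist = np.abs(points - p)
        for r in radii:
            mu_ball = masses[dist < r].sum()                 # mu(B_r(p) ∩ D)
            sigma_arc = 4 * np.arcsin(min(r, 2.0) / 2.0)     # arc length of B_r(p) ∩ ∂D
            best = max(best, mu_ball / sigma_arc)
    return best

# Point masses accumulating near the boundary point z = 1 give a large Carleson norm
rng = np.random.default_rng(0)
pts = 1 - rng.uniform(1e-3, 1e-1, 50)          # real points just inside the disk, near z = 1
print(carleson_norm_estimate(pts, np.full(50, 0.01)))
```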
Carleson's theorem on the Poisson operator
Let D denote the unit disc in the complex plane C, equipped with some Borel measure μ. For 1 ≤ p < +∞, let Hp(∂D) denote the Hardy space on the boundary of D and let Lp(D, μ) denote the Lp space on D with respect to the measure μ. Define the Poisson operator
P : Hp(∂D) → Lp(D, μ)
by
(Pf)(z) = (1/2π) ∫₀²ᵖⁱ Re[ (e^{it} + z) / (e^{it} − z) ] f(e^{it}) dt.
Then P is a bounded linear operator if and only if the measure μ is Carleson.
Other related concepts
The infimum of the set of constants C > 0 for which the Carleson condition
μ(Br(p) ∩ Ω) ≤ C σ(Br(p) ∩ ∂Ω) for all p ∈ ∂Ω and all r > 0
holds is known as the Carleson norm of the measure μ.
If C(R) is defined to be the infimum of the set of all constants C > 0 for which the restricted Carleson condition
μ(Br(p) ∩ Ω) ≤ C σ(Br(p) ∩ ∂Ω) for all p ∈ ∂Ω and all 0 < r ≤ R
holds, then the measure μ is said to satisfy the vanishing Carleson condition if C(R) → 0 as R → 0.
References
External links
Measures (measure theory)
Norms (mathematics) | Carleson measure | [
"Physics",
"Mathematics"
] | 423 | [
"Mathematical analysis",
"Physical quantities",
"Measures (measure theory)",
"Quantity",
"Size",
"Norms (mathematics)"
] |
7,760,322 | https://en.wikipedia.org/wiki/Thermodynamic%20databases%20for%20pure%20substances | Thermodynamic databases contain information about thermodynamic properties for substances, the most important being enthalpy, entropy, and Gibbs free energy. Numerical values of these thermodynamic properties are collected as tables or are calculated from thermodynamic datafiles. Data is expressed as temperature-dependent values for one mole of substance at the standard pressure of 101.325 kPa (1 atm), or 100 kPa (1 bar). Both of these definitions for the standard condition for pressure are in use.
Thermodynamic data
Thermodynamic data is usually presented as a table or chart of function values for one mole of a substance (or in the case of the steam tables, one kg). A thermodynamic datafile is a set of equation parameters from which the numerical data values can be calculated. Tables and datafiles are usually presented at a standard pressure of 1 bar or 1 atm, but in the case of steam and other industrially important gases, pressure may be included as a variable. Function values depend on the state of aggregation of the substance, which must be defined for the value to have any meaning. The state of aggregation for thermodynamic purposes is the standard state, sometimes called the reference state, and defined by specifying certain conditions. The normal standard state is commonly defined as the most stable physical form of the substance at the specified temperature and a pressure of 1 bar or 1 atm. However, since any non-normal condition could be chosen as a standard state, it must be defined in the context of use. A physical standard state is one that exists for a time sufficient to allow measurements of its properties. The most common physical standard state is one that is stable thermodynamically (i.e., the normal one). It has no tendency to transform into any other physical state. If a substance can exist but is not thermodynamically stable (for example, a supercooled liquid), it is called a metastable state. A non-physical standard state is one whose properties are obtained by extrapolation from a physical state (for example, a solid superheated above the normal melting point, or an ideal gas at a condition where the real gas is non-ideal). Metastable liquids and solids are important because some substances can persist and be used in that state indefinitely. Thermodynamic functions that refer to conditions in the normal standard state are designated with a small superscript °. The relationship between certain physical and thermodynamic properties may be described by an equation of state.
Enthalpy, heat content and heat capacity
It is very difficult to measure the absolute amount of any thermodynamic quantity involving the internal energy (e.g. enthalpy), since the internal energy of a substance can take many forms, each of which has its own typical temperature at which it begins to become important in thermodynamic reactions. It is therefore the change in these functions that is of most interest. The isobaric change in enthalpy H above the common reference temperature of 298.15 K (25 °C) is called the high temperature heat content, the sensible heat, or the relative high-temperature enthalpy, and called henceforth the heat content. Different databases designate this term in different ways; for example HT-H298, H°-H°298, H°T-H°298 or H°-H°(Tr), where Tr means the reference temperature (usually 298.15 K, but abbreviated in heat content symbols as 298). All of these terms mean the molar heat content for a substance in its normal standard state above a reference temperature of 298.15 K. Data for gases is for the hypothetical ideal gas at the designated standard pressure. The SI unit for enthalpy is J/mol, and is a positive number above the reference temperature. The heat content has been measured and tabulated for virtually all known substances, and is commonly expressed as a polynomial function of temperature. The heat content of an ideal gas is independent of pressure (or volume), but the heat content of real gases varies with pressure, hence the need to define the state for the gas (real or ideal) and the pressure. Note that for some thermodynamic databases such as for steam, the reference temperature is 273.15 K (0 °C).
The heat capacity C is the ratio of heat added to the temperature increase. For an incremental isobaric addition of heat:
Cp is therefore the slope of a plot of temperature vs. isobaric heat content (or the derivative of a temperature/heat content equation). The SI units for heat capacity are J/(mol·K).
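A small numerical illustration of this relationship (hypothetical data, not from any database): given heat-content values tabulated on a temperature grid, Cp can be recovered as the slope, here checked against the Cp expression used to generate the data.

```python
import numpy as np

# Hypothetical heat-content data: H_T - H_298 in J/mol on a temperature grid (K),
# generated here from an assumed Cp(T) = a + b*T so that the answer is known.
a, b = 25.0, 0.005                         # J/(mol*K) and J/(mol*K^2), assumed values
T = np.linspace(298.15, 1000.0, 500)
heat_content = a * (T - 298.15) + 0.5 * b * (T**2 - 298.15**2)

# Cp is the slope of the heat content with respect to temperature (at constant pressure)
Cp = np.gradient(heat_content, T)

print(Cp[250], a + b * T[250])             # numerical slope vs. the exact a + b*T
```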
Enthalpy change of phase transitions
When heat is added to a condensed-phase substance, its temperature increases until a phase change temperature is reached. With further addition of heat, the temperature remains constant while the phase transition takes place. The amount of substance that transforms is a function of the amount of heat added. After the transition is complete, adding more heat increases the temperature. In other words, the enthalpy of a substance changes isothermally as it undergoes a physical change. The enthalpy change resulting from a phase transition is designated ΔH. There are four types of enthalpy changes resulting from a phase transition. To wit:
Enthalpy of transformation. This applies to the transformations from one solid phase to another, such as the transformation from α-Fe (bcc ferrite) to γ-Fe (fcc austenite). The transformation is designated ΔHtr.
Enthalpy of fusion or melting. This applies to the transition of a solid to a liquid and is designated ΔHm.
Enthalpy of vaporization. This applies to the transition of a liquid to a vapor and is designated ΔHv.
Enthalpy of sublimation. This applies to the transition of a solid to a vapor and is designated ΔHs.
Cp is infinite at phase transition temperatures because the enthalpy changes isothermally. At the Curie temperature, Cp shows a sharp discontinuity while the enthalpy has a change in slope.
Values of ΔH are usually given for the transition at the normal standard state temperature for the two states, and if so, are designated with a superscript °. ΔH for a phase transition is a weak function of temperature. In some texts, the heats of phase transitions are called latent heats (for example, latent heat of fusion).
Enthalpy change for a chemical reaction
An enthalpy change occurs during a chemical reaction. For the special case of the formation of a compound from the elements, the change is designated ΔHform and is a weak function of temperature. Values of ΔHform are usually given where the elements and compound are in their normal standard states, and as such are designated standard heats of formation, as designated by a superscript °. The ΔH°form undergoes discontinuities at a phase transition temperatures of the constituent element(s) and the compound. The enthalpy change for any standard reaction is designated ΔH°rx.
Entropy and Gibbs energy
The entropy of a system is another thermodynamic quantity that is not easily measured. However, using a combination of theoretical and experimental techniques, entropy can in fact be accurately estimated. At low temperatures, the Debye model leads to the result that the atomic heat capacity Cv for solids should be proportional to T3, and that for perfect crystalline solids it should become zero at absolute zero. Experimentally, the heat capacity is measured at temperature intervals to as low a temperature as possible. Values of Cp/T are plotted against T for the whole range of temperatures where the substance exists in the same physical state. The data are extrapolated from the lowest experimental temperature to 0 K using the Debye model. The third law of thermodynamics states that the entropy of a perfect crystalline substance becomes zero at 0 K. When S0 is zero, the area under the curve from 0 K to any temperature gives the entropy at that temperature. Even though the Debye model contains Cv instead of Cp, the difference between the two at temperatures near 0 K is so small as to be negligible.
The absolute value of entropy for a substance in its standard state at the reference temperature of 298.15 K is designated S°298. Entropy increases with temperature, and is discontinuous at phase transition temperatures. The change in entropy (ΔS°) at the normal phase transition temperature is equal to the heat of transition divided by the transition temperature. The SI units for entropy are J/(mol·K).
The standard entropy change for the formation of a compound from the elements, or for any standard reaction is designated ΔS°form or ΔS°rx. The entropy change is obtained by summing the absolute entropies of the products minus the sum of the absolute entropies of the reactants.
Like enthalpy, the Gibbs energy G has no intrinsic value, so it is the change in G that is of interest.
Furthermore, there is no change in G at phase transitions between substances in their standard states.
Hence, the main functional application of Gibbs energy from a thermodynamic database is its change in value during the formation of a compound from the standard-state elements, or for any standard chemical reaction (ΔG°form or ΔG°rx).
The SI units of Gibbs energy are the same as for enthalpy (J/mol).
Additional functions
Compilations of thermochemical data may contain some additional thermodynamic functions. For example, the absolute enthalpy of a substance H(T) is defined in terms of its formation enthalpy and its heat content as follows:
H(T) = ΔH°form,298 + [HT − H298]
For an element, H(T) and [HT − H298] are identical at all temperatures because ΔH°form is zero, and of course at 298.15 K, H(T) = 0. For a compound, H(T) differs from the heat content [HT − H298] by the constant offset ΔH°form,298.
Similarly, the absolute Gibbs energy G(T) is defined by the absolute enthalpy and entropy of a substance:
G(T) = H(T) − T·S(T)
For a compound, G(T) thus combines the formation enthalpy, the heat content and the absolute entropy.
Some tables may also contain the Gibbs energy function (H°298.15 – G°T)/T which is defined in terms of the entropy and heat content.
The Gibbs energy function has the same units as entropy, but unlike entropy, exhibits no discontinuity at normal phase transition temperatures.
The log10 of the equilibrium constant Keq is often listed; it is calculated from the defining thermodynamic equation, log10 Keq = −ΔG°rx/(2.303 RT).
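A one-line computation of that relation in Python (the reaction value below is hypothetical and used only for illustration):

```python
import math

R = 8.314462618          # gas constant, J/(mol*K)

def log10_keq(delta_g_rx, T):
    """log10 of the equilibrium constant from the standard Gibbs energy change
    of reaction (J/mol) at temperature T (K), using ΔG°rx = -RT ln Keq."""
    return -delta_g_rx / (math.log(10) * R * T)

# Hypothetical reaction with ΔG°rx = -50 kJ/mol at 298.15 K
print(log10_keq(-50_000.0, 298.15))   # approximately 8.76
```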
Thermodynamic databases
A thermodynamic database consists of sets of critically evaluated values for the major thermodynamic functions.
Originally, data was presented as printed tables at 1 atm and at certain temperatures, usually 100° intervals and at phase transition temperatures. Some compilations included polynomial equations that could be used to reproduce the tabular values. More recently, computerized databases are used which consist of the equation parameters and subroutines to calculate specific values at any temperature and prepare tables for printing. Computerized databases often include subroutines for calculating reaction properties and displaying the data as charts.
Thermodynamic data comes from many types of experiments, such as calorimetry, phase equilibria, spectroscopy, composition measurements of chemical equilibrium mixtures, and emf measurements of reversible reactions. A proper database takes all available information about the elements and compounds in the database, and assures that the presented results are internally consistent. Internal consistency requires that all values of the thermodynamic functions are correctly calculated by application of the appropriate thermodynamic equations. For example, values of the Gibbs energy obtained from high-temperature equilibrium emf methods must be identical to those calculated from calorimetric measurements of the enthalpy and entropy values. The database provider must use recognized data analysis procedures to resolve differences between data obtained by different types of experiments.
All thermodynamic data is a non-linear function of temperature (and pressure), but there is no universal equation format for expressing the various functions. Here we describe a commonly used polynomial equation to express the temperature dependence of the heat content. A common six-term equation for the isobaric heat content is:
Regardless of the equation format, the heat of formation of a compound at any temperature is ΔH°form at 298.15 K, plus the sum of the heat content parameters of the products minus the sum of the heat content parameters of the reactants. The Cp equation is obtained by taking the derivative of the heat content equation.
The entropy equation is obtained by integrating the Cp/T equation:
F' is a constant of integration obtained by inserting S° at any temperature T. The Gibbs energy of formation of a compound is obtained from the defining equation ΔG°form = ΔH°form – T(ΔS°form), and is expressed as
For most substances, ΔG°form deviates only slightly from linearity with temperature, so over a short temperature span, the seven-term equation can be replaced by a three-term equation, whose parameter values are obtained by regression of tabular values.
Depending on the accuracy of the data and the length of the temperature span, the heat content equation may require more or fewer terms. Over a very long temperature span, two equations may be used instead of one. It is unwise to extrapolate the equations to obtain values outside the range of experimental data used to derive the equation parameters.
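The following Python sketch shows the chain of calculations just described for one substance, under an assumed (and deliberately simple) heat-content polynomial; real databases use their own fixed equation formats and coefficients, so the functional form and numbers below are illustrative only.

```python
import numpy as np

def thermo_from_heat_content(coeffs, T, S_298, T_ref=298.15):
    """Evaluate H_T - H_298, Cp and S at temperature T from an assumed polynomial
    H_T - H_298 = sum_k coeffs[k] * T**k.  Cp is the derivative d(H_T - H_298)/dT,
    and S(T) = S_298 + integral from T_ref to T of Cp/T dT (trapezoidal rule)."""
    coeffs = np.asarray(coeffs, dtype=float)
    k = np.arange(len(coeffs))
    H = lambda t: float(np.sum(coeffs * t**k))
    Cp = lambda t: float(np.sum(coeffs[1:] * k[1:] * t**(k[1:] - 1)))
    grid = np.linspace(T_ref, T, 2001)
    integrand = np.array([Cp(t) / t for t in grid])
    S = S_298 + float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid)))
    return H(T), Cp(T), S

# Assumed coefficients (J/mol): H_T - H_298 = -8614.9 + 28*T + 0.003*T**2,
# chosen so that the heat content is (essentially) zero at 298.15 K.
print(thermo_from_heat_content([-8614.9, 28.0, 0.003], 800.0, S_298=60.0))
```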
Thermodynamic datafiles
The equation parameters and all other information required to calculate values of the important thermodynamic functions are stored in a thermodynamic datafile. The values are organized in a format that makes them readable by a thermodynamic calculation program or for use in a spreadsheet. For example, the Excel-based thermodynamic database FREED creates the following type of datafile, here for a standard pressure of 1 atm.
Row 1. Molar mass of species, density at 298.15 K, ΔH°form 298.15, S°298.15. and the upper temperature limit for the file.
Row 2. Number of Cp equations required. Here, three because of three species phases.
Row 3. Values of the five parameters for the first Cp equation; temperature limit for the equation.
Row 4. Values of the five parameters for the second Cp equation; temperature limit for the equation.
Row 5. Values of the five parameters for the third Cp equation; temperature limit for the equation.
Row 6. Number of HT - H298 equations required.
Row 7. Values of the six parameters for the first HT - H298 equation; temperature limit for the equation, and ΔH°trans for the first phase change.
Row 8. Values of the six parameters for the second HT - H298 equation; temperature limit for the equation, and ΔH°trans for the second phase change.
Row 9. Values of the six parameters for the third HT - H298 equation; temperature limit for the equation, and ΔH°trans for the third phase change.
Row 10. Number of ΔH°form equations required. Here five; three for species phases and two because one of the elements has a phase change.
Row 11. Values of the six parameters for the first ΔH°form equation; temperature limit for the equation.
Row 12. Values of the six parameters for the second ΔH°form equation; temperature limit for the equation.
Row 13. Values of the six parameters for the third ΔH°form equation; temperature limit for the equation.
Row 14. Values of the six parameters for the fourth ΔH°form equation; temperature limit for the equation.
Row 15. Values of the six parameters for the fifth ΔH°form equation; temperature limit for the equation.
Row 16. Number of ΔG°form equations required.
Row 17. Values of the seven parameters for the first ΔG°form equation; temperature limit for the equation.
Row 18. Values of the seven parameters for the second ΔG°form equation; temperature limit for the equation.
Row 19. Values of the seven parameters for the third ΔG°form equation; temperature limit for the equation.
Row 20. Values of the seven parameters for the fourth ΔG°form equation; temperature limit for the equation.
Row 21. Values of the seven parameters for the fifth ΔG°form equation; temperature limit for the equation.
Most computerized databases will create a table of thermodynamic values using the values from the datafile. For MgCl2(c,l,g) at 1 atm pressure:
The table format is a common way to display thermodynamic data. The FREED table gives additional information in the top rows, such as the mass and amount composition and transition temperatures of the constituent elements. Transition temperatures for the constituent elements have dashes ------- in the first column in a blank row, such as at 922 K, the melting point of Mg. Transition temperatures for the substance have two blank rows with dashes, and a center row with the defined transition and the enthalpy change, such as the melting point of MgCl2 at 980 K. The datafile equations are at the bottom of the table, and the entire table is in an Excel worksheet. This is particularly useful when the data is intended for making specific calculations.
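A minimal sketch of how such a datafile record might be represented in code (the class and field names are my own, and the layout follows only the row description above; a real FREED file has additional rows for the heat-content, ΔH°form and ΔG°form equations):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EquationSegment:
    params: List[float]      # parameters of one equation (e.g. a Cp equation)
    t_limit: float           # upper temperature limit of validity, K

@dataclass
class SpeciesRecord:
    molar_mass: float        # row 1: molar mass
    density_298: float       #        density at 298.15 K
    dh_form_298: float       #        ΔH°form at 298.15 K
    s_298: float             #        S° at 298.15 K
    t_max: float             #        upper temperature limit of the file
    cp_segments: List[EquationSegment]   # rows 3-5: one Cp equation per phase

    def cp_segment_for(self, T: float) -> EquationSegment:
        """Select the Cp equation whose temperature range covers T."""
        for seg in self.cp_segments:
            if T <= seg.t_limit:
                return seg
        raise ValueError("temperature above the upper limit of the datafile")
```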
See also
Chemical thermodynamics
Physical chemistry
Materials science
Laws of thermodynamics
Thermochemistry
Standard temperature and pressure
Dortmund Data Bank
CALPHAD (method)
References
Robie, Richard A., and Bruce S. Hemingway (1995). Thermodynamic Properties of Minerals . . . at Higher Temperatures, U. S. Geological Survey Bulletin 2131.
Yaws, Carl L. (2007). Yaws Handbook of Thermodynamic Properties for Hydrocarbons & Chemicals, Gulf Publishing Company. .
Gurvich, L.V., Veitz, I.V., et al. (1989) Thermodynamic Properties of Individual Substances. Fourth edition, Hemisphere Pub Co. NY, L., Vol.1 in 2 parts.
External links
NIST WebBook A gateway to the data collection of the National Institute of Standards and Technology.
NASA Glenn ThermoBuild A web interface to generate tabulated thermodynamic data.
Burcat's Thermodynamic Database Database for more than 3,000 chemical species.
DIPPR The Design Institute for Physical Properties
DIPPR 801 Critically evaluated thermophysical property database useful for chemical process design and equilibrium calculations.
Free Steam Tables Online calculator based on IAPWS-IF97
FACT-Web programs Various on-line tools for obtaining thermodynamic data and making equilibrium calculations.
Mol-Instincts A chemical database based on Quantum Mechanics and QSPR, providing thermodynamic properties for millions of compounds.
Thermodynamics databases | Thermodynamic databases for pure substances | [
"Physics",
"Chemistry"
] | 3,958 | [
"Thermodynamics databases",
"Thermodynamics"
] |
7,760,747 | https://en.wikipedia.org/wiki/Theoretical%20plate | A theoretical plate in many separation processes is a hypothetical zone or stage in which two phases, such as the liquid and vapor phases of a substance, establish an equilibrium with each other. Such equilibrium stages may also be referred to as an equilibrium stage, ideal stage, or a theoretical tray. The performance of many separation processes depends on having series of equilibrium stages and is enhanced by providing more such stages. In other words, having more theoretical plates increases the efficiency of the separation process be it either a distillation, absorption, chromatographic, adsorption or similar process.
Applications
The concept of theoretical plates and trays or equilibrium stages is used in the design of many different types of separation.
Distillation columns
The concept of theoretical plates in designing distillation processes has been discussed in many reference texts. Any physical device that provides good contact between the vapor and liquid phases present in industrial-scale distillation columns or laboratory-scale glassware distillation columns constitutes a "plate" or "tray". Since an actual, physical plate can never be a 100% efficient equilibrium stage, the number of actual plates is more than the required theoretical plates.
Na = Nt / E
where Na is the number of actual, physical plates or trays, Nt is the number of theoretical plates or trays and E is the plate or tray efficiency.
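For example (a trivial sketch with made-up numbers), the relation can be applied directly:

```python
import math

def actual_plates(n_theoretical, efficiency):
    """Number of physical plates needed: N_a = N_t / E, rounded up to a whole tray."""
    return math.ceil(n_theoretical / efficiency)

print(actual_plates(20, 0.70))   # 20 theoretical stages at 70 % tray efficiency -> 29 trays
```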
So-called bubble-cap or valve-cap trays are examples of the vapor and liquid contact devices used in industrial distillation columns. Another example of vapor and liquid contact devices are the spikes in laboratory Vigreux fractionating columns.
The trays or plates used in industrial distillation columns are fabricated of circular steel plates and usually installed inside the column at intervals of about 60 to 75 cm (24 to 30 inches) up the height of the column. That spacing is chosen primarily for ease of installation and ease of access for future repair or maintenance.
An example of a very simple tray is a perforated tray. The desired contacting between vapor and liquid occurs as the vapor, flowing upwards through the perforations, comes into contact with the liquid flowing downwards through the perforations. In current modern practice, as shown in the adjacent diagram, better contacting is achieved by installing bubble-caps or valve caps at each perforation to promote the formation of vapor bubbles flowing through a thin layer of liquid maintained by a weir on each tray.
To design a distillation unit or a similar chemical process, the number of theoretical trays or plates (that is, hypothetical equilibrium stages), Nt, required in the process should be determined, taking into account a likely range of feedstock composition and the desired degree of separation of the components in the output fractions. In industrial continuous fractionating columns, Nt is determined by starting at either the top or bottom of the column and calculating material balances, heat balances and equilibrium flash vaporizations for each of the succession of equilibrium stages until the desired end product composition is achieved. The calculation process requires the availability of a great deal of vapor–liquid equilibrium data for the components present in the distillation feed, and the calculation procedure is very complex.
In an industrial distillation column, the Nt required to achieve a given separation also depends upon the amount of reflux used. Using more reflux decreases the number of plates required and using less reflux increases the number of plates required. Hence, the calculation of Nt is usually repeated at various reflux rates. Nt is then divided by the tray efficiency, E, to determine the actual number of trays or physical plates, Na, needed in the separating column. The final design choice of the number of trays to be installed in an industrial distillation column is then selected based upon an economic balance between the cost of additional trays and the cost of using a higher reflux rate.
There is a very important distinction between the theoretical plate terminology used in discussing conventional distillation trays and the theoretical plate terminology used in the discussions below of packed bed distillation or absorption or in chromatography or other applications. The theoretical plate in conventional distillation trays has no "height". It is simply a hypothetical equilibrium stage. However, the theoretical plate in packed beds, chromatography and other applications is defined as having a height.
The empirical formula known as Van Winkle's Correlation can be used to predict the Murphree plate efficiency for distillation columns separating binary systems.
Distillation and absorption packed beds
Distillation and absorption separation processes using packed beds for vapor and liquid contacting have an equivalent concept referred to as the plate height or the height equivalent to a theoretical plate (HETP). HETP arises from the same concept of equilibrium stages as does the theoretical plate and is numerically equal to the absorption bed length divided by the number of theoretical plates in the absorption bed (and in practice is measured in this way).
N = H / HETP
where N is the number of theoretical plates (also called the "plate count"), H is the total bed height and HETP is the height equivalent to a theoretical plate.
The material in packed beds can either be random dumped packing (1-3" wide) such as Raschig rings or structured sheet metal. Liquids tend to wet the surface of the packing and the vapors contact the wetted surface, where mass transfer occurs.
Chromatographic processes
The theoretical plate concept was also adapted for chromatographic processes by Martin and Synge. The IUPAC's Gold Book provides a definition of the number of theoretical plates in a chromatography column.
The same equation applies in chromatography processes as for the packed bed processes, namely:
N = H / HETP
In packed column chromatography, the HETP may also be calculated with the Van Deemter equation.
In capillary column chromatography HETP is given by the Golay equation.
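As an illustration of the Van Deemter form HETP = A + B/u + C·u (the coefficient values and the 25 cm column length below are assumptions made for the example, not data from the source):

```python
import numpy as np

def hetp_van_deemter(u, A, B, C):
    """Van Deemter equation: plate height as a function of mobile-phase linear velocity u."""
    return A + B / u + C * u

A, B, C = 0.05, 0.30, 0.02                # assumed coefficients (cm, cm^2/s, s)
u = np.linspace(0.5, 20.0, 400)           # linear velocity, cm/s
H = hetp_van_deemter(u, A, B, C)

u_opt = np.sqrt(B / C)                    # velocity minimizing the plate height
N = 25.0 / H.min()                        # plate count N = L / HETP for an assumed 25 cm column
print(u_opt, H.min(), round(N))
```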
Other applications
The concept of theoretical plates or trays applies to other processes as well, such as capillary electrophoresis and some types of adsorption.
See also
Batch distillation
Continuous distillation
Extractive distillation
Fenske equation
Fractional distillation
McCabe–Thiele method
References
External links
Distillation, An Introduction by Ming Tham, Newcastle University, UK
Distillation Theory by Ivar J. Halvorsen and Sigurd Skogestad, Norwegian University of Science and Technology, Norway
Separation processes
Unit operations
Chemical engineering
Chromatography
Distillation | Theoretical plate | [
"Chemistry",
"Engineering"
] | 1,305 | [
"Chromatography",
"Unit operations",
"Separation processes",
"Chemical engineering",
"Distillation",
"nan",
"Chemical process engineering"
] |
88,444 | https://en.wikipedia.org/wiki/Phosphor | A phosphor is a substance that exhibits the phenomenon of luminescence; it emits light when exposed to some type of radiant energy. The term is used both for fluorescent or phosphorescent substances which glow on exposure to ultraviolet or visible light, and cathodoluminescent substances which glow when struck by an electron beam (cathode rays) in a cathode-ray tube.
When a phosphor is exposed to radiation, the orbital electrons in its molecules are excited to a higher energy level; when they return to their former level they emit the energy as light of a certain color. Phosphors can be classified into two categories: fluorescent substances which emit the energy immediately and stop glowing when the exciting radiation is turned off, and phosphorescent substances which emit the energy after a delay, so they keep glowing after the radiation is turned off, decaying in brightness over a period of milliseconds to days.
Fluorescent materials are used in applications in which the phosphor is excited continuously: cathode-ray tubes (CRT) and plasma video display screens, fluoroscope screens, fluorescent lights, scintillation sensors, white LEDs, and luminous paints for black light art. Phosphorescent materials are used where a persistent light is needed, such as glow-in-the-dark watch faces and aircraft instruments, and in radar screens to allow the target 'blips' to remain visible as the radar beam rotates. CRT phosphors were standardized beginning around World War II and designated by the letter "P" followed by a number.
Phosphorus, the light-emitting chemical element for which phosphors are named, emits light due to chemiluminescence, not phosphorescence.
Light-emission process
The scintillation process in inorganic materials is due to the electronic band structure found in the crystals. An incoming particle can excite an electron from the valence band to either the conduction band or the exciton band (located just below the conduction band and separated from the valence band by an energy gap). This leaves an associated hole behind, in the valence band. Impurities create electronic levels in the forbidden gap.
The excitons are loosely bound electron–hole pairs that wander through the crystal lattice until they are captured as a whole by impurity centers. They then rapidly de-excite by emitting scintillation light (fast component).
In the conduction band, electrons are independent of their associated holes. Those electrons and holes are captured successively by impurity centers exciting certain metastable states not accessible to the excitons. The delayed de-excitation of those metastable impurity states, slowed by reliance on the low-probability forbidden mechanism, again results in light emission (slow component).
In the case of inorganic scintillators, the activator impurities are typically chosen so that the emitted light is in the visible range or near-UV, where photomultipliers are effective.
Phosphors are often transition-metal compounds or rare-earth compounds of various types. In inorganic phosphors, these inhomogeneities in the crystal structure are created usually by addition of a trace amount of dopants, impurities called activators. (In rare cases dislocations or other crystal defects can play the role of the impurity.) The wavelength emitted by the emission center is dependent on the atom itself and on the surrounding crystal structure.
Materials
Phosphors are usually made from a suitable host material with an added activator. The best known types are copper-activated zinc sulfide (ZnS:Cu) and silver-activated zinc sulfide (ZnS:Ag).
The host materials are typically oxides, nitrides and oxynitrides, sulfides, selenides, halides or silicates of zinc, cadmium, manganese, aluminium, silicon, or various rare-earth metals. The activators prolong the emission time (afterglow). In turn, other materials (such as nickel) can be used to quench the afterglow and shorten the decay part of the phosphor emission characteristics.
Many phosphor powders are produced in low-temperature processes, such as sol-gel, and usually require post-annealing at temperatures of ~1000 °C, which is undesirable for many applications. However, proper optimization of the growth process allows manufacturers to avoid the annealing.
Phosphors used for fluorescent lamps require a multi-step production process, with details that vary depending on the particular phosphor. Bulk material must be milled to obtain a desired particle size range, since large particles produce a poor-quality lamp coating, and small particles produce less light and degrade more quickly. During the firing of the phosphor, process conditions must be controlled to prevent oxidation of the phosphor activators or contamination from the process vessels. After milling, the phosphor may be washed to remove minor excess of activator elements. Volatile elements must not be allowed to escape during processing. Lamp manufacturers have changed compositions of phosphors to eliminate some toxic elements formerly used, such as beryllium, cadmium, or thallium.
The commonly quoted parameters for phosphors are the wavelength of emission maximum (in nanometers, or alternatively color temperature in kelvins for white blends), the peak width (in nanometers at 50% of intensity), and decay time (in seconds).
Examples:
Calcium sulfide with strontium sulfide with bismuth as activator, (Ca,Sr)S:Bi, yields blue light with glow times up to 12 hours; red and orange are modifications of the zinc sulfide formula. Red color can be obtained from strontium sulfide.
Zinc sulfide with about 5 ppm of a copper activator is the most common phosphor for the glow-in-the-dark toys and items. It is also called GS phosphor.
A mix of zinc sulfide and cadmium sulfide emits a color depending on their ratio; increasing the CdS content shifts the output color towards longer wavelengths; its persistence ranges between 1 and 10 hours.
Strontium aluminate activated by europium or dysprosium, SrAl2O4:Eu(II):Dy(III), is a material developed in 1993 by Nemoto & Co. engineer Yasumitsu Aoki with higher brightness and significantly longer glow persistence; it produces green and aqua hues, where green gives the highest brightness and aqua the longest glow time. SrAl2O4:Eu:Dy is about 10 times brighter, 10 times longer glowing, and 10 times more expensive than ZnS:Cu. The excitation wavelengths for strontium aluminate range from 200 to 450 nm. The wavelength for its green formulation is 520 nm, its blue-green version emits at 505 nm, and the blue one emits at 490 nm. Colors with longer wavelengths can be obtained from the strontium aluminate as well, though for the price of some loss of brightness.
Phosphor degradation
Many phosphors tend to lose efficiency gradually by several mechanisms. The activators can undergo change of valence (usually oxidation), the crystal lattice degrades, atoms – often the activators – diffuse through the material, the surface undergoes chemical reactions with the environment with consequent loss of efficiency or buildup of a layer absorbing the exciting and/or radiated energy, etc.
The degradation of electroluminescent devices depends on frequency of driving current, the luminance level, and temperature; moisture impairs phosphor lifetime very noticeably as well.
Harder, high-melting, water-insoluble materials display lower tendency to lose luminescence under operation.
Examples:
BaMgAl10O17:Eu2+ (BAM), a plasma-display phosphor, undergoes oxidation of the dopant during baking. Three mechanisms are involved: absorption of oxygen atoms into oxygen vacancies on the crystal surface, diffusion of Eu(II) along the conductive layer, and electron transfer from Eu(II) to absorbed oxygen atoms, leading to formation of Eu(III) with corresponding loss of emissivity. A thin coating of aluminium phosphate or lanthanum(III) phosphate is effective in creating a barrier layer blocking access of oxygen to the BAM phosphor, at the cost of a reduction in phosphor efficiency. Addition of hydrogen, acting as a reducing agent, to argon in the plasma displays significantly extends the lifetime of the BAM:Eu2+ phosphor, by reducing the Eu(III) atoms back to Eu(II).
Y2O3:Eu phosphors under electron bombardment in presence of oxygen form a non-phosphorescent layer on the surface, where electron–hole pairs recombine nonradiatively via surface states.
ZnS:Mn, used in AC thin-film electroluminescent (ACTFEL) devices degrades mainly due to formation of deep-level traps, by reaction of water molecules with the dopant; the traps act as centers for nonradiative recombination. The traps also damage the crystal lattice. Phosphor aging leads to decreased brightness and elevated threshold voltage.
ZnS-based phosphors in CRTs and FEDs degrade by surface excitation, coulombic damage, build-up of electric charge, and thermal quenching. Electron-stimulated reactions of the surface are directly correlated to loss of brightness. The electrons dissociate impurities in the environment; the reactive oxygen species then attack the surface and form carbon monoxide and carbon dioxide with traces of carbon, as well as nonradiative zinc oxide and zinc sulfate on the surface, while the reactive hydrogen removes sulfur from the surface as hydrogen sulfide, forming a nonradiative layer of metallic zinc. Sulfur can also be removed as sulfur oxides.
ZnS and CdS phosphors degrade by reduction of the metal ions by captured electrons. The M2+ ions are reduced to M+; two M+ then exchange an electron and become one M2+ and one neutral M atom. The reduced metal can be observed as a visible darkening of the phosphor layer. The darkening (and the brightness loss) is proportional to the phosphor's exposure to electrons and can be observed on some CRT screens that displayed the same image (e.g. a terminal login screen) for prolonged periods.
Europium(II)-doped alkaline earth aluminates degrade by formation of color centers.
Cerium-activated (Ce3+) phosphors degrade by loss of luminescent Ce3+ ions.
Zn2SiO4:Mn (P1) degrades by desorption of oxygen under electron bombardment.
Oxide phosphors can degrade rapidly in presence of fluoride ions, remaining from incomplete removal of flux from phosphor synthesis.
Loosely packed phosphors, e.g. when an excess of silica gel (formed from the potassium silicate binder) is present, have a tendency to overheat locally due to poor thermal conductivity. Terbium-activated (Tb3+) phosphors, for example, are subject to accelerated degradation at higher temperatures.
Applications
Lighting
Phosphor layers provide most of the light produced by fluorescent lamps, and are also used to improve the balance of light produced by metal halide lamps. Various neon signs use phosphor layers to produce different colors of light. Electroluminescent displays found, for example, in aircraft instrument panels, use a phosphor layer to produce glare-free illumination or as numeric and graphic display devices. White LED lamps consist of a blue or ultra-violet emitter with a phosphor coating that emits at longer wavelengths, giving a full spectrum of visible light. Unfocused and undeflected cathode-ray tubes have been used as stroboscope lamps since 1958.
Phosphor thermometry
Phosphor thermometry is a temperature measurement approach that uses the temperature dependence of certain phosphors. For this, a phosphor coating is applied to a surface of interest and, usually, the decay time is the emission parameter that indicates temperature. Because the illumination and detection optics can be situated remotely, the method may be used for moving surfaces such as high speed motor surfaces. Also, phosphor may be applied to the end of an optical fiber as an optical analog of a thermocouple.
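A hedged sketch of the decay-time approach (my own illustration; the synthetic signal, noise level and calibration table are invented for the example): the decay time is extracted from the measured intensity by a fit to I(t) = I0·exp(−t/τ), then converted to temperature through a calibration curve for the particular phosphor.

```python
import numpy as np

def decay_time(t, intensity):
    """Estimate the luminescence decay time tau from I(t) = I0 * exp(-t / tau)
    by a linear fit to log(intensity)."""
    slope, _ = np.polyfit(t, np.log(intensity), 1)
    return -1.0 / slope

# Synthetic measurement: tau = 2.0 ms plus a little noise (illustrative data only)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)                                  # ms
signal = np.exp(-t / 2.0) * (1 + 0.01 * rng.standard_normal(t.size))
tau = decay_time(t, signal)

# Hypothetical calibration: decay time falls with temperature for this phosphor
calibration_tau = np.array([3.0, 2.5, 2.0, 1.4, 0.8])            # ms
calibration_T = np.array([300.0, 400.0, 500.0, 600.0, 700.0])    # K
T_estimate = np.interp(tau, calibration_tau[::-1], calibration_T[::-1])
print(tau, T_estimate)
```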
Glow-in-the-dark toys
In these applications, the phosphor is directly added to the plastic used to mold the toys, or mixed with a binder for use as paints.
ZnS:Cu phosphor is used in glow-in-the-dark cosmetic creams frequently used for Halloween make-ups.
Generally, the persistence of the phosphor increases as the wavelength increases.
See also lightstick for chemiluminescence-based glowing items.
Oxygen sensing
Quenching of the triplet state by O2 (which has a triplet ground state) as a result of Dexter energy transfer is well known in solutions of phosphorescent heavy-metal complexes and doped polymers. In recent years, phosphorescent porous materials (such as metal–organic frameworks and covalent organic frameworks) have shown promising oxygen-sensing capabilities, owing to their non-linear gas adsorption at ultra-low partial pressures of oxygen.
Postage stamps
Phosphor banded stamps first appeared in 1959 as guides for machines to sort mail. Around the world many varieties exist with different amounts of banding. Postage stamps are sometimes collected by whether or not they are "tagged" with phosphor (or printed on luminescent paper).
Radioluminescence
Zinc sulfide phosphors are used with radioactive materials, where the phosphor is excited by the alpha- and beta-decaying isotopes, to create luminescent paint for dials of watches and instruments (radium dials). Between 1913 and 1950 radium-228 and radium-226 were used to activate a phosphor made of silver-doped zinc sulfide (ZnS:Ag), which gave a greenish glow. The phosphor is not suitable to be used in layers thicker than 25 mg/cm2, as the self-absorption of the light then becomes a problem. Furthermore, zinc sulfide undergoes degradation of its crystal lattice structure, leading to gradual loss of brightness significantly faster than the depletion of radium. ZnS:Ag-coated spinthariscope screens were used by Ernest Rutherford in his experiments discovering the atomic nucleus.
Copper doped zinc sulfide (ZnS:Cu) is the most common phosphor used and yields blue-green light. Copper and magnesium doped zinc sulfide yields yellow-orange light.
Tritium is also used as a source of radiation in various products utilizing tritium illumination.
Electroluminescence
Electroluminescence can be exploited in light sources. Such sources typically emit from a large area, which makes them suitable for backlights of LCD displays. The excitation of the phosphor is usually achieved by application of high-intensity electric field, usually with suitable frequency. Current electroluminescent light sources tend to degrade with use, resulting in their relatively short operation lifetimes.
ZnS:Cu was the first formulation to successfully display electroluminescence, tested in 1936 by Georges Destriau in Madame Marie Curie's laboratories in Paris.
Powder or AC electroluminescence is found in a variety of backlight and night light applications. Several groups offer branded EL offerings (e.g. IndiGlo used in some Timex watches) or "Lighttape", another trade name of an electroluminescent material, used in electroluminescent light strips. The Apollo space program is often credited with being the first significant use of EL for backlights and lighting.
White LEDs
White light-emitting diodes are usually blue InGaN LEDs with a coating of a suitable material. Cerium(III)-doped YAG (YAG:Ce3+, or Y3Al5O12:Ce3+) is often used; it absorbs the light from the blue LED and emits in a broad range from greenish to reddish, with most of its output in yellow. This yellow emission combined with the remaining blue emission gives a "white" light whose color temperature can be adjusted to warm (yellowish) or cold (bluish) white. The pale yellow emission of the Ce3+:YAG can be tuned by substituting the cerium with other rare-earth elements such as terbium and gadolinium, and can be further adjusted by substituting some or all of the aluminium in the YAG with gallium. However, this process is not one of phosphorescence. The yellow light is produced by a process known as scintillation, the complete absence of an afterglow being one of the characteristics of the process.
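Because the tristimulus values of independent light sources add linearly, the warm/cold balance of a blue-plus-yellow white can be modelled as additive mixing in CIE 1931 chromaticity space. A minimal sketch; the chromaticity coordinates and luminance weights below are illustrative stand-ins for a blue LED and the Ce:YAG band, not measured values:

```python
def mix_chromaticity(sources):
    """Additively mix sources given as (x, y, Y): chromaticity plus luminance.

    Tristimulus values add linearly: X = x*Y/y and Z = (1 - x - y)*Y/y,
    so the mixture chromaticity is the renormalized tristimulus sum.
    """
    X = sum(x * Y / y for x, y, Y in sources)
    Ysum = sum(Y for _, _, Y in sources)
    Z = sum((1.0 - x - y) * Y / y for x, y, Y in sources)
    total = X + Ysum + Z
    return X / total, Ysum / total

blue = (0.15, 0.06, 1.0)    # assumed blue LED: (x, y, relative luminance)
yellow = (0.44, 0.54, 4.0)  # assumed Ce:YAG band; raising this weight warms the white
print(mix_chromaticity([blue, yellow]))
```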
Some rare-earth-doped Sialons are photoluminescent and can serve as phosphors. Europium(II)-doped β-SiAlON absorbs in ultraviolet and visible light spectrum and emits intense broadband visible emission. Its luminance and color does not change significantly with temperature, due to the temperature-stable crystal structure. It has a great potential as a green down-conversion phosphor for white LEDs; a yellow variant also exists (α-SiAlON). For white LEDs, a blue LED is used with a yellow phosphor, or with a green and yellow SiAlON phosphor and a red CaAlSiN3-based (CASN) phosphor.
White LEDs can also be made by coating near-ultraviolet-emitting LEDs with a mixture of high-efficiency europium-based red- and blue-emitting phosphors plus green-emitting copper- and aluminium-doped zinc sulfide. This is a method analogous to the way fluorescent lamps work.
Some newer white LEDs use yellow and blue emitters in series to approximate white; this technology is used in some Motorola phones such as the BlackBerry, as well as in LED lighting. The original stacked-emitter version used GaN on SiC on InGaP, but was later found to fracture at higher drive currents.
Many white LEDs used in general lighting systems can be used for data transfer, as, for example, in systems that modulate the LED to act as a beacon.
It is also common for white LEDs to use phosphors other than Ce:YAG, or to use two or three phosphors to achieve a higher CRI, often at the cost of efficiency. Examples of additional phosphors are R9, which produces a saturated red, nitrides which produce red, and aluminates such as lutetium aluminum garnet that produce green. Silicate phosphors are brighter but fade more quickly, and are used in LCD LED backlights in mobile devices. LED phosphors can be placed directly over the die or made into a dome and placed above the LED: this approach is known as a remote phosphor. Some colored LEDs, instead of using a colored LED, use a blue LED with a colored phosphor because such an arrangement is more efficient than a colored LED. Oxynitride phosphors can also be used in LEDs. The precursors used to make the phosphors may degrade when exposed to air.
Cathode-ray tubes
Cathode-ray tubes produce signal-generated light patterns in a (typically) round or rectangular format. Bulky CRTs were used in the black-and-white television (TV) sets that became popular in the 1950s, developed into color CRTs in the late 1960s, and used in virtually all color TVs and computer monitors until the mid-2000s. In the late 20th century, advanced electronics made new wide-deflection, "short tube" CRT technology viable, making CRTs more compact, but still bulky and heavy. As the original video display technology, having no viable competition for more than 40 years and dominance for over 50 years, the CRT ceased to be the main type of video display in use only around 2010. In addition to direct-view CRTs, CRT projection tubes were the basis of all projection TVs and computer video projectors of both front and rear projection types until at least the late 1990s.
CRTs have also been widely used in scientific and engineering instrumentation, such as oscilloscopes, usually with a single phosphor color, typically green. Phosphors for such applications may have long afterglow, for increased image persistence. A variation of the display CRT, used prior to the 1980s, was the CRT storage tube, a digital memory device which (in later forms) also provided a visible display of the stored data, using a variation of the same electron-beam excited phosphor technology.
The process of producing light in CRTs by electron-beam-excited phosphorescence yields much faster signal response times than even modern (2020s) LCDs can achieve, which makes light pens and light gun games possible with CRTs, but not LCDs. Also, in contrast to most other video display types, because CRT technology draws an image by scanning an electron beam (or a formation of three beams) across a phosphor surface, a CRT has no intrinsic "native resolution" and does not require scaling to display raster images at different resolutions; the CRT can display any raster format natively, within the limits defined by the electron beam spot size and, for a color CRT, the dot pitch of the phosphor. Because of this operating principle, CRTs can produce images using either raster or vector imaging methods. Vector displays are impossible for display technologies that have permanent discrete pixels, including all LCDs, plasma display panels, DMD projectors, and OLED (LED matrix, e.g. TFT OLED) panels.
The phosphors can be deposited either as a thin film or as discrete particles, a powder bound to the surface. Thin films have better lifetime and better resolution, but provide a less bright and less efficient image than powder coatings. This is caused by multiple internal reflections in the thin film, scattering the emitted light.
White (in black-and-white): A mix of zinc cadmium sulfide and silver-doped zinc sulfide is the white P4 phosphor used in black-and-white television CRTs. Mixes of yellow and blue phosphors are usual. Mixes of red, green and blue, or a single white phosphor, can also be encountered.
Red: Yttrium oxide-sulfide activated with europium is used as the red phosphor in color CRTs. The development of color TV took a long time due to the search for a red phosphor. The first red emitting rare-earth phosphor, YVO4:Eu3+, was introduced by Levine and Palilla as a primary color in television in 1964. In single crystal form, it was used as an excellent polarizer and laser material.
Yellow: When zinc sulfide is mixed with cadmium sulfide, the resulting zinc cadmium sulfide provides strong yellow light.
Green: The combination of zinc sulfide with copper, the P31 phosphor (ZnS:Cu), provides green light peaking at 531 nm, with long glow.
Blue: The combination of zinc sulfide with a few ppm of silver, ZnS:Ag, when excited by electrons, provides a strong blue glow with a maximum at 450 nm and a short afterglow of 200 nanoseconds' duration. It is known as the P22B phosphor. This material, silver-doped zinc sulfide, is still one of the most efficient phosphors in cathode-ray tubes. It is used as the blue phosphor in color CRTs.
The phosphors are usually poor electrical conductors. This may lead to deposition of residual charge on the screen, effectively decreasing the energy of the impacting electrons due to electrostatic repulsion (an effect known as "sticking"). To eliminate this, a thin layer of aluminium (about 100 nm) is deposited over the phosphors, usually by vacuum evaporation, and connected to the conductive layer inside the tube. This layer also reflects the phosphor light to the desired direction, and protects the phosphor from ion bombardment resulting from an imperfect vacuum.
To reduce image degradation by reflection of ambient light, contrast can be increased by several methods. In addition to black masking of unused areas of the screen, the phosphor particles in color screens are coated with pigments of matching color. For example, the red phosphors are coated with ferric oxide (replacing earlier Cd(S,Se) due to cadmium toxicity), and blue phosphors can be coated with marine blue or ultramarine. Green phosphors based on ZnS:Cu do not have to be coated, due to their own yellowish color.
Black-and-white television CRTs
The black-and-white television screens require an emission color close to white. Usually, a combination of phosphors is employed.
The most common combination is (blue + yellow). Others are (blue + yellow), and (blue + green + red – does not contain cadmium and has poor efficiency). The color tone can be adjusted by the ratios of the components.
As the compositions contain discrete grains of different phosphors, they produce an image that may not be entirely smooth. A single white-emitting phosphor overcomes this obstacle. Due to its low efficiency, it is used only on very small screens.
The screens are typically covered with phosphor using sedimentation coating, where particles suspended in a solution are allowed to settle on the surface.
Reduced-palette color CRTs
For displaying a limited palette of colors, there are a few options.
In beam penetration tubes, different color phosphors are layered and separated with dielectric material. The acceleration voltage is used to determine the energy of the electrons; lower-energy ones are absorbed in the top layer of the phosphor, while some of the higher-energy ones shoot through and are absorbed in the lower layer. So either the first color or a mixture of the first and second color is shown. With a display having a red outer layer and a green inner layer, manipulation of the accelerating voltage can produce a continuum of colors from red through orange and yellow to green.
Another method is using a mixture of two phosphors with different characteristics. The brightness of one is linearly dependent on electron flux, while the other one's brightness saturates at higher fluxes—the phosphor does not emit any more light regardless of how many more electrons impact it. At low electron flux, both phosphors emit together; at higher fluxes, the luminous contribution of the nonsaturating phosphor prevails, changing the combined color.
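A toy numerical model makes the color shift explicit: one phosphor responds linearly to flux while the other clips at a saturation level, so the linear phosphor's share of the output grows with flux. All constants below are invented for illustration:

```python
def combined_emission(flux, k_linear=1.0, k_sat=1.0, sat_level=5.0):
    """Return (linear output, saturating output, linear fraction) at a given flux."""
    linear = k_linear * flux
    saturating = min(k_sat * flux, sat_level)
    return linear, saturating, linear / (linear + saturating)

for f in (1.0, 5.0, 20.0):
    # The linear fraction rises from 0.5 toward 0.8, shifting the combined color.
    print(f, combined_emission(f))
```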
Such displays can have high resolution, due to absence of two-dimensional structuring of RGB CRT phosphors. Their color palette is, however, very limited. They were used e.g. in some older military radar displays.
Color television CRTs
The phosphors in color CRTs need higher contrast and resolution than the black-and-white ones. The energy density of the electron beam is about 100 times greater than in black-and-white CRTs; the electron spot is focused to about 0.2 mm diameter instead of about 0.6 mm diameter of the black-and-white CRTs. Effects related to electron irradiation degradation are therefore more pronounced.
Color CRTs require three different phosphors, emitting in red, green and blue, patterned on the screen. Three separate electron guns are used for color production (except for displays that use beam-index tube technology, which is rare). The red phosphor has always been a problem, being the dimmest of the three, necessitating that the brighter green and blue electron-beam currents be adjusted down to match the red phosphor's lower brightness. This made early color TVs usable only indoors, as bright light made it impossible to see the dim picture, while portable black-and-white TVs viewable in outdoor sunlight were already common.
The composition of the phosphors changed over time, as better phosphors were developed and as environmental concerns led to lowering the content of cadmium and later abandoning it entirely. The original formulation was replaced with one with a lower cadmium/zinc ratio, and then with a cadmium-free one.
The blue phosphor stayed generally unchanged, a silver-doped zinc sulfide. The green phosphor initially used manganese-doped zinc silicate, then evolved through silver-activated cadmium-zinc sulfide, to a lower-cadmium copper-aluminium-activated formula, and then to a cadmium-free version of the same. The red phosphor saw the most changes; it was originally manganese-activated zinc phosphate, then a silver-activated cadmium-zinc sulfide, then the europium(III)-activated phosphors appeared, first in a yttrium vanadate matrix, then in yttrium oxide, and currently in yttrium oxysulfide. The evolution of the phosphors was therefore (ordered by B-G-R):
– –
– –
– – (1964–?)
– – or
– or –
Projection televisions
For projection televisions, where the beam power density can be two orders of magnitude higher than in conventional CRTs, some different phosphors have to be used.
For blue color, is employed. However, it saturates. can be used as an alternative that is more linear at high energy densities.
For green, a terbium-activated phosphor is used; its color purity and brightness at low excitation densities are worse than those of the zinc sulfide alternative, but it behaves linearly at high excitation energy densities, while zinc sulfide saturates. However, it also saturates, so or can be substituted. is bright but water-sensitive, degradation-prone, and the plate-like morphology of its crystals hampers its use; these problems have now been solved, so it is gaining use due to its higher linearity.
is used for red emission.
Standard phosphor types
Various
Some other phosphors are commercially available for use as X-ray screens, neutron detectors, alpha particle scintillators, etc.
See also
Cathodoluminescence
Laser
Luminophore
Photoluminescence
References
Bibliography
External links
A history of electroluminescent displays.
Fluorescence, Phosphorescence
CRT Phosphor Characteristics (P numbers)
Composition of CRT phosphors
Silicon-based oxynitride and nitride phosphors for white LEDs—A review
& – RCA Manual, Fluorescent screens (P1 to P24)
Inorganic Phosphors Compositions, Preparation and Optical Properties, William M. Yen and Marvin J. Weber
Luminescence
Lighting
Display technology
Optical materials | Phosphor | [
"Physics",
"Chemistry",
"Engineering"
] | 6,588 | [
"Luminescence",
"Molecular physics",
"Materials",
"Optical materials",
"Phosphors and scintillators",
"Electronic engineering",
"Display technology",
"Matter"
] |
89,078 | https://en.wikipedia.org/wiki/Glycosidic%20bond | A glycosidic bond or glycosidic linkage is a type of ether bond that joins a carbohydrate (sugar) molecule to another group, which may or may not be another carbohydrate.
A glycosidic bond is formed between the hemiacetal or hemiketal group of a saccharide (or a molecule derived from a saccharide) and the hydroxyl group of some compound such as an alcohol. A substance containing a glycosidic bond is a glycoside.
The term 'glycoside' is now extended to also cover compounds with bonds formed between hemiacetal (or hemiketal) groups of sugars and several chemical groups other than hydroxyls, such as -SR (thioglycosides), -SeR (selenoglycosides), -NR1R2 (N-glycosides), or even -CR1R2R3 (C-glycosides).
Particularly in naturally occurring glycosides, the compound ROH from which the carbohydrate residue has been removed is often termed the aglycone, and the carbohydrate residue itself is sometimes referred to as the 'glycone'.
S-, N-, C-, and O-glycosidic bonds
Glycosidic bonds of the form discussed above are known as O-glycosidic bonds, in reference to the glycosidic oxygen that links the glycoside to the aglycone or reducing-end sugar. By analogy, one also considers S-glycosidic bonds (which form thioglycosides), where the oxygen of the glycosidic bond is replaced with a sulfur atom. In the same way, N-glycosidic bonds have the glycosidic bond oxygen replaced with nitrogen; substances containing N-glycosidic bonds are also known as glycosylamines. C-glycosyl bonds have the glycosidic oxygen replaced by a carbon; the term "C-glycoside" is considered a misnomer by IUPAC and is discouraged. All of these modified glycosidic bonds have different susceptibility to hydrolysis, and in the case of C-glycosyl structures, they are typically more resistant to hydrolysis.
Numbering, and α/β distinction of glycosidic bonds
When an anomeric center is involved in a glycosidic bond (as is common in nature) then one can distinguish between α- and β-glycosidic bonds by the relative stereochemistry of the anomeric position and the stereocenter furthest from C1 in the saccharide.
Pharmacologists often join substances to glucuronic acid via glycosidic bonds in order to increase their water solubility; this is known as glucuronidation. Many other glycosides have important physiological functions.
Chemical approaches
Nüchter et al. (2001) have shown a new approach to Fischer glycosidation. Employing a microwave oven equipped with a refluxing apparatus in a rotor reactor with pressure bombs, they were able to achieve 100% yield of α- and β-D-glucosides. This method can be performed on a multi-kilogram scale.
Vishal Y Joshi's method
Joshi et al. (2006) propose the Koenigs–Knorr reaction for the stereoselective synthesis of alkyl D-glucopyranosides via glycosylation, except that lithium carbonate is used, which is less expensive and less toxic than the conventional silver or mercury salts. D-glucose is first protected by forming the peracetate through addition of acetic anhydride in acetic acid, followed by addition of hydrogen bromide, which brominates the 5-position. On addition of the alcohol ROH and lithium carbonate, the OR replaces the bromine, and on deprotection of the acetylated hydroxyls the product is obtained in relatively high purity. Joshi et al. (2001) suggested that lithium acts as the nucleophile that attacks the carbon at the 5-position and that, through a transition state, the alcohol is substituted for the bromine group. Advantages of this method, in addition to its stereoselectivity and the low cost of the lithium salt, are that it can be done at room temperature and that its yield compares well with the conventional Koenigs–Knorr method.
Glycoside hydrolases
Glycoside hydrolases (or glycosidases) are enzymes that break glycosidic bonds. Glycoside hydrolases typically act on either α- or β-glycosidic bonds, but not on both. This specificity allows researchers to obtain glycosides in high epimeric excess, one example being Wen-Ya Lu's conversion of D-glucose to ethyl β-D-glucopyranoside using naturally derived glucosidase. Wen-Ya Lu utilized glucosidase in a manner opposite to the enzyme's biological functionality.
Glycosyltransferases
Before monosaccharide units are incorporated into glycoproteins, polysaccharides, or lipids in living organisms, they are typically first "activated" by being joined via a glycosidic bond to the phosphate group of a nucleotide such as uridine diphosphate (UDP), guanosine diphosphate (GDP), thymidine diphosphate (TDP), or cytidine monophosphate (CMP). These activated biochemical intermediates are known as sugar nucleotides or sugar donors. Many biosynthetic pathways use mono- or oligosaccharides activated by a diphosphate linkage to lipids, such as dolichol. These activated donors are then substrates for enzymes known as glycosyltransferases, which transfer the sugar unit from the activated donor to an accepting nucleophile (the acceptor substrate).
Disaccharide phosphorylases
Different biocatalytic approaches have been developed for the synthesis of glycosides over the past decades, among which "glycosyltransferases" and "glycoside hydrolases" are the most common catalysts. The former often require expensive materials and the latter often show low yields; De Winter et al. therefore investigated the use of cellobiose phosphorylase (CP) for the synthesis of alpha-glycosides in ionic liquids. The best conditions for the use of CP were found to be in the presence of the ionic liquid AMMOENG 101 and ethyl acetate.
Directed glycosylations
Multiple chemical approaches exist to encourage selectivity between α- and β-glycosidic bonds. The highly substrate-specific nature of the selectivity and the overall activity of the pyranoside can present major synthetic difficulties. The overall specificity of the glycosylation can be improved by approaches that take into account the relative transition states that the anomeric carbon can pass through during a typical glycosylation. Most notably, recognition and incorporation of Felkin–Anh–Eisenstein models into rational chemical design can generally provide reliable results, provided the transformation can undergo this type of conformational control in the transition state.
Fluorine-directed glycosylations represent an encouraging handle for both β-selectivity and the introduction of a non-natural biomimetic C2 functionality on the carbohydrate. One innovative example, provided by Bucher et al., uses a fluoro-oxonium ion and the trichloroacetimidate to encourage β-stereoselectivity through the gauche effect. This stereoselectivity becomes clear on visualizing the Felkin–Anh models of the possible chair forms.
This method represents an encouraging way to selectively incorporate β-ethyl, isopropyl, and other glycosides with typical trichloroacetimidate chemistry.
O-linked glycopeptides; pharmaceutical uses of O-glycosylated peptides
O-linked glycopeptides have recently been shown to exhibit excellent CNS permeability and efficacy in multiple animal models of disease states. One of the most intriguing aspects is the capability of O-glycosylation to extend half-life, decrease clearance, and improve the PK/PD of the active peptide beyond increasing CNS penetration. The innate utilization of sugars as solubilizing moieties in Phase II and III metabolism (glucuronic acids) has allowed a remarkable evolutionary advantage, in that mammalian enzymes have not directly evolved to degrade O-glycosylated products on larger moieties.
A peculiarity of O-linked glycopeptides is that numerous examples are CNS penetrant. The fundamental basis of this effect is thought to involve "membrane hopping" or "hop diffusion". The non-Brownian-motion-driven hop-diffusion process is thought to occur due to the discontinuity of the plasma membrane. Hop diffusion notably combines free diffusion and intercompartmental transitions. Recent examples include the high permeability of met-enkephalin analogs, among other peptides. The full mOR agonist pentapeptide DAMGO is also CNS penetrant upon introduction of glycosylation.
N-Glycosidic bonds in DNA
DNA molecules contain 5-membered carbon rings called deoxyriboses that are directly attached to two phosphate groups and a nucleobase containing amino groups. The nitrogen atoms of the nucleobases are covalently linked to the anomeric carbon of the ribose sugar structure through an N-glycosidic bond. Occasionally, the nucleobases attached to the ribose undergo deamination, alkylation, or oxidation, which results in cytotoxic lesions along the DNA backbone. These modifications severely threaten the cohesiveness of the DNA molecule, leading to the development of diseases such as cancer. DNA glycosylases are enzymes that catalyze the hydrolysis of the N-glycosidic bond to free the damaged or modified nucleobase from the DNA, by cleaving the carbon–nitrogen glycosidic bond at the 1' (anomeric) carbon, subsequently initiating the base excision repair (BER) pathway.
Monofunctional glycosylases catalyze the hydrolysis of the N-glycosidic bond via either a stepwise, SN1-like mechanism or a concerted, SN2-like mechanism. In the stepwise mechanism, the nucleobase acts as a leaving group before the anomeric carbon is attacked by a water molecule, producing a short-lived, unstable oxacarbenium ion intermediate. This intermediate rapidly reacts with a nearby water molecule, substituting the N-glycosidic bond to the nucleobase with an O-glycosidic bond to a hydroxy group. In the concerted mechanism, water acts as a nucleophile and attacks the anomeric carbon before the nucleobase leaves. The intermediate produced is a similar oxacarbenium ion in which both the hydroxy group and the nucleobase are still attached to the anomeric carbon. Both mechanisms theoretically yield the same product. Most ribonucleotides are hydrolyzed via the concerted SN2-like mechanism, while most deoxyribonucleotides proceed through the stepwise mechanism.
These reactions are practically irreversible. Because cleavage of the N-glycosidic bond from the DNA backbone can lead to detrimental mutagenic and cytotoxic responses in an organism, some glycosylases also have the ability to catalyze the synthesis of N-glycosidic bonds from an abasic DNA site and a specific nucleobase.
References
Marco Brito-Arias, "Synthesis and Characterization of Glycosides", second edition, Editorial Springer 2016.
External links
Definition of glycosides, from the IUPAC Compendium of Chemical Terminology, the "Gold Book"
Varki A et al. Essentials of Glycobiology. Cold Spring Harbor Laboratory Press; 1999. Searchable online
Glycosides
Carbohydrates
Carbohydrate chemistry
Chemical bonding | Glycosidic bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,692 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Glycosides",
"Organic compounds",
"Condensed matter physics",
"Chemical synthesis",
"Carbohydrate chemistry",
"Biomolecules",
"Glycobiology",
"Chemical bonding",
"nan"
] |
89,188 | https://en.wikipedia.org/wiki/Methylation | Methylation, in the chemical sciences, is the addition of a methyl group on a substrate, or the substitution of an atom (or group) by a methyl group. Methylation is a form of alkylation, with a methyl group replacing a hydrogen atom. These terms are commonly used in chemistry, biochemistry, soil science, and biology.
In biological systems, methylation is catalyzed by enzymes; such methylation can be involved in modification of heavy metals, regulation of gene expression, regulation of protein function, and RNA processing. In vitro methylation of tissue samples is also a way to reduce some histological staining artifacts. The reverse of methylation is demethylation.
In biology
In biological systems, methylation is accomplished by enzymes. Methylation can modify heavy metals and can regulate gene expression, RNA processing, and protein function. It is a key process underlying epigenetics. Sources of methyl groups include S-methylmethionine, methyl folate, and methyl B12.
Methanogenesis
Methanogenesis, the process that generates methane from CO2, involves a series of methylation reactions. These reactions are caused by a set of enzymes harbored by a family of anaerobic microbes.
In reverse methanogenesis, methane is the methylating agent.
O-methyltransferases
A wide variety of phenols undergo O-methylation to give anisole derivatives. This process, catalyzed by such enzymes as caffeoyl-CoA O-methyltransferase, is a key reaction in the biosynthesis of lignols, precursors to lignin, a major structural component of plants.
Plants produce flavonoids and isoflavones with methylations on hydroxyl groups, i.e. methoxy bonds. This 5-O-methylation affects the flavonoid's water solubility. Examples are 5-O-methylgenistein, 5-O-methylmyricetin, and 5-O-methylquercetin (azaleatin).
Proteins
Along with ubiquitination and phosphorylation, methylation is a major biochemical process for modifying protein function. The most prevalent protein methylations affect arginine and lysine residues of specific histones. Histidine, glutamate, asparagine, and cysteine residues are also susceptible to methylation. Some of these products include S-methylcysteine, two isomers of N-methylhistidine, and two isomers of N-methylarginine.
Methionine synthase
Methionine synthase regenerates methionine (Met) from homocysteine (Hcy). The overall reaction transforms 5-methyltetrahydrofolate (N5-MeTHF) into tetrahydrofolate (THF) while transferring a methyl group to Hcy to form Met. Methionine synthases can be cobalamin-dependent or cobalamin-independent: plants have both, while animals depend on the methylcobalamin-dependent form.
In methylcobalamin-dependent forms of the enzyme, the reaction proceeds in two steps in a ping-pong reaction. The enzyme is initially primed into a reactive state by the transfer of a methyl group from N5-MeTHF to Co(I) in enzyme-bound cobalamin (Cob, also known as vitamin B12), forming methylcobalamin (Me-Cob), which now contains Me-Co(III), and activating the enzyme. Then, a Hcy that has coordinated to an enzyme-bound zinc to form a reactive thiolate reacts with the Me-Cob. The activated methyl group is transferred from Me-Cob to the Hcy thiolate, which regenerates Co(I) in Cob, and Met is released from the enzyme.
Heavy metals: arsenic, mercury, cadmium
Biomethylation is the pathway for converting some heavy elements into more mobile or more lethal derivatives that can enter the food chain. The biomethylation of arsenic compounds starts with the formation of methanearsonates. Thus, trivalent inorganic arsenic compounds are methylated to give methanearsonate. S-adenosylmethionine is the methyl donor. The methanearsonates are the precursors to dimethylarsonates, again by the cycle of reduction (to methylarsonous acid) followed by a second methylation. Related pathways are found in the microbial methylation of mercury to methylmercury.
Epigenetic methylation
DNA methylation
DNA methylation is the conversion of cytosine to 5-methylcytosine. The formation of Me-CpG is catalyzed by the enzyme DNA methyltransferase. In vertebrates, DNA methylation typically occurs at CpG sites (cytosine-phosphate-guanine sites—that is, sites where a cytosine is directly followed by a guanine in the DNA sequence). In mammals, DNA methylation is common in body cells, and methylation of CpG sites seems to be the default. About 80–90% of the CpG sites in human DNA are methylated, but there are certain areas, known as CpG islands, that are CG-rich (high cytosine and guanine content, made up of about 65% CG residues), wherein none is methylated. These are associated with the promoters of 56% of mammalian genes, including all ubiquitously expressed genes. One to two percent of the human genome is CpG clusters, and there is an inverse relationship between CpG methylation and transcriptional activity. Methylation contributing to epigenetic inheritance can occur through either DNA methylation or protein methylation. Improper methylation of human genes can lead to disease development, including cancer.
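CpG islands are conventionally identified by the Gardiner-Garden and Frommer criteria: a window of at least 200 bp with GC content of at least 50% and an observed-to-expected CpG ratio of at least 0.6. A minimal sketch of that test:

```python
def is_cpg_island(seq):
    """Test a DNA window against the classic CpG-island criteria.

    Observed/expected CpG = (count of 'CG' dinucleotides * length) / (C count * G count).
    """
    seq = seq.upper()
    n = len(seq)
    if n < 200:
        return False
    c, g = seq.count("C"), seq.count("G")
    if c == 0 or g == 0 or (c + g) / n < 0.5:
        return False
    return (seq.count("CG") * n) / (c * g) >= 0.6
```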
In honey bees, DNA methylation is associated with alternative splicing and gene regulation based on functional genomic research published in 2013. In addition, DNA methylation is associated with expression changes in immune genes when honey bees were under lethal viral infection. Several review papers have been published on the topics of DNA methylation in social insects.
RNA methylation
RNA methylation occurs in different RNA species viz. tRNA, rRNA, mRNA, tmRNA, snRNA, snoRNA, miRNA, and viral RNA. Different catalytic strategies are employed for RNA methylation by a variety of RNA-methyltransferases. RNA methylation is thought to have existed before DNA methylation in the early forms of life evolving on earth.
N6-methyladenosine (m6A) is the most common and abundant methylation modification in RNA molecules (mRNA) present in eukaryotes. 5-methylcytosine (5-mC) also commonly occurs in various RNA molecules. Recent data strongly suggest that m6A and 5-mC RNA methylation affects the regulation of various biological processes such as RNA stability and mRNA translation, and that abnormal RNA methylation contributes to the etiology of human diseases.
In social insects such as honey bees, RNA methylation is studied as a possible epigenetic mechanism underlying aggression via reciprocal crosses.
Protein methylation
Protein methylation typically takes place on arginine or lysine amino acid residues in the protein sequence. Arginine can be methylated once (monomethylated arginine) or twice, with either both methyl groups on one terminal nitrogen (asymmetric dimethylarginine) or one on both nitrogens (symmetric dimethylarginine), by protein arginine methyltransferases (PRMTs). Lysine can be methylated once, twice, or three times by lysine methyltransferases. Protein methylation has been most studied in the histones. The transfer of methyl groups from S-adenosyl methionine to histones is catalyzed by enzymes known as histone methyltransferases. Histones that are methylated on certain residues can act epigenetically to repress or activate gene expression. Protein methylation is one type of post-translational modification.
Evolution
Methyl metabolism is very ancient and can be found in all organisms on earth, from bacteria to humans, indicating the importance of methyl metabolism for physiology. Indeed, pharmacological inhibition of global methylation in species ranging from human, mouse, fish, fly, roundworm, plant, algae, and cyanobacteria causes the same effects on their biological rhythms, demonstrating conserved physiological roles of methylation during evolution.
In chemistry
The term methylation in organic chemistry refers to the alkylation process used to deliver a methyl group.
Electrophilic methylation
Methylations are commonly performed using electrophilic methyl sources such as iodomethane, dimethyl sulfate, dimethyl carbonate, or tetramethylammonium chloride. Less common but more powerful (and more dangerous) methylating reagents include methyl triflate, diazomethane, and methyl fluorosulfonate (magic methyl). These reagents all react via SN2 nucleophilic substitutions. For example, a carboxylate may be methylated on oxygen to give a methyl ester; an alkoxide salt may likewise be methylated to give an ether; or a ketone enolate may be methylated on carbon to produce a new ketone.
The Purdie methylation is specific for methylation at the oxygen of carbohydrates, using iodomethane and silver oxide.
Eschweiler–Clarke methylation
The Eschweiler–Clarke reaction is a method for methylation of amines. This method avoids the risk of quaternization, which occurs when amines are methylated with methyl halides.
Diazomethane and trimethylsilyldiazomethane
Diazomethane and the safer analogue trimethylsilyldiazomethane methylate carboxylic acids, phenols, and even alcohols:
RCO2H + TMSCHN2 + CH3OH → RCO2CH3 + CH3OTMS + N2
The method offers the advantage that the side products are easily removed from the product mixture.
Nucleophilic methylation
Methylation sometimes involves the use of nucleophilic methyl reagents. Strongly nucleophilic methylating agents include methyllithium (CH3Li) or Grignard reagents such as methylmagnesium bromide (CH3MgBr). For example, such reagents will add methyl groups to the carbonyl (C=O) of ketones and aldehydes.
Milder methylating agents include tetramethyltin, dimethylzinc, and trimethylaluminium.
See also
Biology topics
Bisulfite sequencing – the biochemical method used to determine the presence or absence of methyl groups on a DNA sequence
MethDB DNA Methylation Database
Microscale thermophoresis – a biophysical method to determine the methylisation state of DNA
Remethylation, the reversible removal of methyl group in methionine and 5-methylcytosine
Organic chemistry topics
Alkylation
Methoxy
Titanium–zinc methylenation
Petasis reagent
Nysted reagent
Wittig reaction
Tebbe's reagent
References
External links
deltaMasses Detection of Methylations after Mass Spectrometry
Epigenetics
Organic reactions
Post-translational modification | Methylation | [
"Chemistry"
] | 2,392 | [
"Gene expression",
"Organic reactions",
"Biochemical reactions",
"Post-translational modification",
"Methylation"
] |
89,221 | https://en.wikipedia.org/wiki/Methyl%20radical | Methyl radical is an organic compound with the chemical formula (also written as •). It is a metastable colourless gas, which is mainly produced in situ as a precursor to other hydrocarbons in the petroleum cracking industry. It can act as either a strong oxidant or a strong reductant, and is quite corrosive to metals.
Chemical properties
Its first ionization potential (yielding the methenium ion, CH3+) is .
Redox behaviour
The carbon centre in methyl can bond with electron-donating molecules by reacting:
CH3• + R• → CH3R
Because of the capture of the nucleophile (R•), methyl has oxidising character. Methyl is a strong oxidant with organic chemicals. However, it is equally a strong reductant with chemicals such as water. It does not form aqueous solutions, as it reduces water to produce methanol and elemental hydrogen:
2 CH3• + 2 H2O → 2 CH3OH + H2
Structure
The molecular geometry of the methyl radical is trigonal planar (bond angles are 120°), although the energy cost of distortion to a pyramidal geometry is small. All other electron-neutral, non-conjugated alkyl radicals are pyramidalized to some extent, though with very small inversion barriers. For instance, the t-butyl radical has a bond angle of 118° with a barrier to pyramidal inversion. On the other hand, substitution of hydrogen atoms by more electronegative substituents leads to radicals with a strongly pyramidal geometry (112°), such as the trifluoromethyl radical, , with a much more substantial inversion barrier of around .
Chemical reactions
Methyl undergoes the typical chemical reactions of a radical. Below approximately , it rapidly dimerises to form ethane. Upon treatment with an alcohol, it converts to methane and either an alkoxy or hydroxyalkyl. Reduction of methyl gives methane. When heated above, at most, , methyl decomposes to produce methylidyne and elemental hydrogen, or to produce methylene and atomic hydrogen:
CH3• → CH• + H2
CH3• → CH2 + H•
Methyl is very corrosive to metals, forming methylated metal compounds:
M + n CH3• → M(CH3)n
Production
Biosynthesis
Some radical SAM enzymes generate methyl radicals by reduction of S-adenosylmethionine.
Acetone photolysis
It can be produced by the ultraviolet photodissociation of acetone vapour at 193 nm:
(CH3)2CO → CO + 2 CH3•
Halomethane photolysis
It is also produced by the ultraviolet dissociation of halomethanes:
CH3X → X• + CH3•
Methane oxidation
It can also be produced by the reaction of methane with the hydroxyl radical:
OH• + CH4 → CH3• + H2O
This process begins the major removal mechanism of methane from the atmosphere. The reaction occurs in the troposphere or stratosphere. In addition to being the largest known sink for atmospheric methane, this reaction is one of the most important sources of water vapor in the upper atmosphere.
This reaction in the troposphere gives a methane lifetime of 9.6 years. Two more minor sinks are soil sinks (160 year lifetime) and stratospheric loss by reaction with •OH, •Cl and •O1D in the stratosphere (120 year lifetime), giving a net lifetime of 8.4 years.
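The net figure follows because independent first-order sinks combine by adding their rates, the reciprocals of the lifetimes:

\frac{1}{\tau_{\text{net}}} = \frac{1}{9.6} + \frac{1}{160} + \frac{1}{120} \approx 0.119\ \text{yr}^{-1}, \qquad \tau_{\text{net}} \approx 8.4\ \text{yr}.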
Azomethane pyrolysis
Methyl radicals can also be obtained by pyrolysis of azomethane, CH3N=NCH3, in a low-pressure system.
In the interstellar medium
Methyl was discovered in the interstellar medium in 2000 by a team led by Helmut Feuchtgruber, who detected it using the Infrared Space Observatory. It was first detected in molecular clouds toward the centre of the Milky Way.
References
Astrochemistry
Free radicals
Oil refining
Yl | Methyl radical | [
"Chemistry",
"Astronomy",
"Biology"
] | 787 | [
"Methane",
"Free radicals",
"Petroleum technology",
"Senescence",
"Astrochemistry",
"Oil refining",
"nan",
"Biomolecules",
"Greenhouse gases",
"Astronomical sub-disciplines"
] |
89,242 | https://en.wikipedia.org/wiki/Gene%20knockout | Gene knockouts (also known as gene deletion or gene inactivation) are a widely used genetic engineering technique that involves the targeted removal or inactivation of a specific gene within an organism's genome. This can be done through a variety of methods, including homologous recombination, CRISPR-Cas9, and TALENs.
One of the main advantages of gene knockouts is that they allow researchers to study the function of a specific gene in vivo, and to understand the role of the gene in normal development and physiology as well as in the pathology of diseases. By studying the phenotype of the organism with the knocked out gene, researchers can gain insights into the biological processes that the gene is involved in.
There are two main types of gene knockouts: complete and conditional. A complete gene knockout permanently inactivates the gene, while a conditional gene knockout allows for the gene to be turned off and on at specific times or in specific tissues. Conditional knockouts are particularly useful for studying developmental processes and for understanding the role of a gene in specific cell types or tissues.
Gene knockouts have been widely used in many different organisms, including bacteria, yeast, fruit flies, zebrafish, and mice. In mice, gene knockouts are commonly used to study the function of specific genes in development, physiology, and cancer research.
The use of gene knockouts in mouse models has been particularly valuable in the study of human diseases. For example, gene knockouts in mice have been used to study the role of specific genes in cancer, neurological disorders, immune disorders, and metabolic disorders.
However, gene knockouts also have some limitations. For example, the loss of a single gene may not fully mimic the effects of a genetic disorder, and the knockouts may have unintended effects on other genes or pathways. Additionally, gene knockouts are not always a good model for human disease as the mouse genome is not identical to the human genome, and mouse physiology is different from human physiology.
The KO technique is essentially the opposite of a gene knock-in. Knocking out two genes simultaneously in an organism is known as a double knockout (DKO). Similarly the terms triple knockout (TKO) and quadruple knockouts (QKO) are used to describe three or four knocked out genes, respectively. However, one needs to distinguish between heterozygous and homozygous KOs. In the former, only one of two gene copies (alleles) is knocked out, in the latter both are knocked out.
Methods
Knockouts are accomplished through a variety of techniques. Originally, naturally occurring mutations were identified and then gene loss or inactivation had to be established by DNA sequencing or other methods.
Gene knockout by mutation
Gene knockout by mutation is commonly carried out in bacteria. An early instance of the use of this technique in Escherichia coli was published in 1989 by Hamilton, et al. In this experiment, two sequential recombinations were used to delete the gene. This work established the feasibility of removing or replacing a functional gene in bacteria. That method has since been developed for other organisms, particularly research animals, like mice. Knockout mice are commonly used to study genes with human equivalents that may have significance for disease. An example of a study using knockout mice is an investigation of the roles of Xirp proteins in Sudden Unexplained Nocturnal Death Syndrome (SUNDS) and Brugada Syndrome in the Chinese Han Population.
Gene silencing
For gene knockout investigations, RNA interference (RNAi), a more recent method, also known as gene silencing, has gained popularity. In RNA interference (RNAi), messenger RNA for a particular gene is inactivated using small interfering RNA (siRNA) or short hairpin RNA (shRNA). This effectively stops the gene from being expressed. Oncogenes like Bcl-2 and p53, as well as genes linked to neurological disease, genetic disorders, and viral infections, have all been targeted for gene silencing utilizing RNA interference (RNAi).
Homologous recombination
Homologous recombination is the exchange of genes between two DNA strands that include extensive regions of base sequences identical to one another. In eukaryotic species, bacteria, and some viruses, homologous recombination happens spontaneously, and it is a useful tool in genetic engineering. Homologous recombination, which takes place during meiosis in eukaryotes, is essential for the repair of double-stranded DNA breaks and promotes genetic variation by allowing the movement of genetic information during chromosomal crossover. In bacteria, homologous recombination is a key DNA repair mechanism that enables the insertion of genetic material acquired through horizontal gene transfer and transformation into DNA. In viruses, homologous recombination influences the course of viral evolution.
As a form of gene targeting used in genetic engineering, homologous recombination involves the introduction of an engineered mutation into a particular gene in order to learn more about the function of that gene. This method involves inserting foreign DNA into a cell that has a sequence similar to the target gene while being flanked by sequences identical to those upstream and downstream of the target gene. The cell detects the similar flanking regions as homologues, and during replication the target gene's DNA is substituted with the foreign DNA sequence. The exchange "knocks out" the target gene. By using this technique to target particular alleles in embryonic stem cells in mice, it is possible to create knockout mice. With the aid of gene targeting, numerous mouse genes have been shut down, leading to the creation of hundreds of distinct mouse models of human diseases, such as cancer, diabetes, cardiovascular diseases, and neurological disorders. Mario Capecchi, Sir Martin J. Evans, and Oliver Smithies performed groundbreaking research on homologous recombination in mouse stem cells, and they shared the 2007 Nobel Prize in Physiology or Medicine for their findings.
Traditionally, homologous recombination was the main method for causing a gene knockout. This method involves creating a DNA construct containing the desired mutation; for knockout purposes, this typically means a drug-resistance marker in place of the desired knockout gene. The construct will also contain a minimum of 2 kb of homology to the target sequence. The construct can be delivered to stem cells either through microinjection or electroporation. The method then relies on the cell's own repair mechanisms to recombine the DNA construct into the existing DNA. This results in the sequence of the gene being altered, and in most cases the gene will be translated into a nonfunctional protein, if it is translated at all. However, this is an inefficient process, as homologous recombination accounts for only 10−2 to 10−3 of DNA integrations. Often, the drug-selection marker on the construct is used to select for cells in which the recombination event has occurred.
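Conceptually, the targeting construct is just the selection marker flanked by long homology arms copied from around the target gene. A minimal sketch; all names and lengths are illustrative, not a real cloning protocol:

```python
def targeting_construct(genome, target_start, target_end, marker, arm_len=2000):
    """Assemble a knockout targeting construct as a sequence string.

    The homology arms copy ~2 kb of sequence on each side of the target gene,
    matching the minimum homology described above; the marker replaces the gene.
    """
    left_arm = genome[target_start - arm_len:target_start]
    right_arm = genome[target_end:target_end + arm_len]
    return left_arm + marker + right_arm
```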
These stem cells, now lacking the gene, could be used in vivo, for instance in mice, by inserting them into early embryos. If the resulting chimeric mouse contained the genetic change in its germline, this could then be passed on to offspring.
In diploid organisms, which contain two alleles for most genes, and may as well contain several related genes that collaborate in the same role, additional rounds of transformation and selection are performed until every targeted gene is knocked out. Selective breeding may be required to produce homozygous knockout animals.
Site-specific nucleases
There are currently three methods in use that involve precisely targeting a DNA sequence in order to introduce a double-stranded break. Once this occurs, the cell's repair mechanisms will attempt to repair this double stranded break, often through non-homologous end joining (NHEJ), which involves directly ligating the two cut ends together. This may be done imperfectly, therefore sometimes causing insertions or deletions of base pairs, which cause frameshift mutations. These mutations can render the gene in which they occur nonfunctional, thus creating a knockout of that gene. This process is more efficient than homologous recombination, and therefore can be more easily used to create biallelic knockouts.
Zinc-fingers
Zinc-finger nucleases consist of DNA binding domains that can precisely target a DNA sequence. Each zinc-finger can recognize codons of a desired DNA sequence, and therefore can be modularly assembled to bind to a particular sequence. These binding domains are coupled with a restriction endonuclease that can cause a double stranded break (DSB) in the DNA. Repair processes may introduce mutations that destroy functionality of the gene.
TALENS
Transcription activator-like effector nucleases (TALENs) also contain a DNA binding domain and a nuclease that can cleave DNA. The DNA binding region consists of amino acid repeats that each recognize a single base pair of the desired targeted DNA sequence. If this cleavage is targeted to a gene coding region, and NHEJ-mediated repair introduces insertions and deletions, a frameshift mutation often results, thus disrupting function of the gene.
CRISPR/Cas9
CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) is a genetic engineering technique that allows for precise editing of the genome. One application of CRISPR is gene knockout, which involves disabling or "knocking out" a specific gene in an organism.
The process of gene knockout with CRISPR involves three main steps: designing a guide RNA (gRNA) that targets a specific location in the genome, delivering the gRNA and a Cas9 enzyme (which acts as a molecular scissors) to the target cell, and then allowing the cell to repair the cut in the DNA. When the cell repairs the cut, it can either join the cut ends back together, resulting in a non-functional gene, or introduce a mutation that disrupts the gene's function.
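As a rough illustration of the first step: for the commonly used SpCas9 enzyme, a guide targets a 20-nucleotide protospacer lying immediately 5' of an NGG PAM motif. A minimal single-strand sketch (a real design tool would also scan the reverse complement and score off-target risk):

```python
import re

def candidate_guides(dna, guide_len=20):
    """Return (protospacer, PAM) pairs for every NGG PAM on the given strand."""
    dna = dna.upper()
    guides = []
    # Zero-width lookahead so overlapping PAM sites are all found.
    for m in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = m.start(1)
        if pam_start >= guide_len:
            guides.append((dna[pam_start - guide_len:pam_start], m.group(1)))
    return guides
```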
This technique can be used in a variety of organisms, including bacteria, yeast, plants, and animals, and it allows scientists to study the function of specific genes by observing the effects of their absence. CRISPR-based gene knockout is a powerful tool for understanding the genetic basis of disease and for developing new therapies.
It is important to note that CRISPR-based gene knockout, like any genetic engineering technique, has the potential to produce unintended or harmful effects on the organism, so it should be used with caution. The coupled Cas9 will cause a double-stranded break in the DNA. Following the same principle as zinc-fingers and TALENs, the attempts to repair these double-stranded breaks often result in frameshift mutations that produce a nonfunctional gene. Non-invasive CRISPR-Cas9 technology has successfully knocked out a gene associated with depression and anxiety in mice, the first successful delivery across the blood–brain barrier to enable gene modification.
Knock-in
Gene knock-in is similar to gene knockout, but it replaces a gene with another instead of deleting it.
Types
Conditional knockouts
A conditional gene knockout allows gene deletion in a tissue-specific manner. This is required in place of a gene knockout if the null mutation would lead to embryonic death, or if a specific tissue or cell type is of particular interest. This is done by introducing short sequences called loxP sites around the gene. These sequences will be introduced into the germline via the same mechanism as a knockout. This germline can then be crossed to another germline containing Cre recombinase, a viral enzyme that can recognize these sequences, recombine them, and delete the gene flanked by these sites. Other recombinases have since been created and employed in conditional knockout experiments.
Use
Knockouts are primarily used to understand the role of a specific gene or DNA region by comparing the knockout organism to a wildtype with a similar genetic background.
Knockout organisms are also used as screening tools in the development of drugs, to target specific biological processes or deficiencies by using a specific knockout, or to understand the mechanism of action of a drug by using a library of knockout organisms spanning the entire genome, such as in Saccharomyces cerevisiae.
See also
Essential gene
Gene knockdown
Conditional gene knockout
Germline
Gene silencing
Genome editing
Planned extinction
Recombineering
Myostatin
Belgian Blue
References
External links
Diagram of targeted gene replacement
Frontiers in Bioscience Gene Knockout Database (available on archive only)
International Knockout Mouse Consortium
KOMP Repository
Genetically modified organisms
Molecular biology techniques
Molecular genetics
Laboratory techniques
Gene expression
Biotechnology | Gene knockout | [
"Chemistry",
"Engineering",
"Biology"
] | 2,596 | [
"Genetically modified organisms",
"Gene expression",
"Genetic engineering",
"Biotechnology",
"Molecular biology techniques",
"Molecular genetics",
"Cellular processes",
"nan",
"Molecular biology",
"Biochemistry"
] |
89,547 | https://en.wikipedia.org/wiki/Water%20vapor | Water vapor, water vapour or aqueous vapor is the gaseous phase of water. It is one state of water within the hydrosphere. Water vapor can be produced from the evaporation or boiling of liquid water or from the sublimation of ice. Water vapor is transparent, like most constituents of the atmosphere. Under typical atmospheric conditions, water vapor is continuously generated by evaporation and removed by condensation. It is less dense than most of the other constituents of air and triggers convection currents that can lead to clouds and fog.
Being a component of Earth's hydrosphere and hydrologic cycle, it is particularly abundant in Earth's atmosphere, where it acts as a greenhouse gas and warming feedback, contributing more to total greenhouse effect than non-condensable gases such as carbon dioxide and methane. Use of water vapor, as steam, has been important for cooking, and as a major component in energy production and transport systems since the industrial revolution.
Water vapor is a relatively common atmospheric constituent, present even in the solar atmosphere as well as every planet in the Solar System and many astronomical objects including natural satellites, comets and even large asteroids. Likewise the detection of extrasolar water vapor would indicate a similar distribution in other planetary systems. Water vapor can also be indirect evidence supporting the presence of extraterrestrial liquid water in the case of some planetary mass objects.
Water vapor, which reacts to temperature changes, is referred to as a 'feedback', because it amplifies the effect of forces that initially cause the warming. Therefore, it is a greenhouse gas.
Properties
Evaporation
Whenever a water molecule leaves a surface and diffuses into a surrounding gas, it is said to have evaporated. Each individual water molecule which transitions between a more associated (liquid) and a less associated (vapor/gas) state does so through the absorption or release of kinetic energy. The aggregate measurement of this kinetic energy transfer is defined as thermal energy and occurs only when there is a differential in the temperature of the water molecules. Liquid water that becomes water vapor takes a parcel of heat with it, in a process called evaporative cooling. The amount of water vapor in the air determines how frequently molecules will return to the surface. When net evaporation occurs, the body of water will undergo a net cooling directly related to the loss of water.
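The size of this effect follows from the latent heat of vaporization, roughly 2.45 MJ/kg for water near room temperature:

Q = m L_v \approx 1\ \text{kg} \times 2.45\ \text{MJ/kg} = 2.45\ \text{MJ},

so evaporating 1 kg of water removes enough heat to cool about 58 kg of liquid water by 10 °C (taking the specific heat as 4.18 kJ/(kg·K)).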
In the US, the National Weather Service measures the actual rate of evaporation from a standardized "pan" open water surface outdoors, at various locations nationwide. Others do likewise around the world. The US data is collected and compiled into an annual evaporation map. The measurements range from under 30 to over 120 inches per year. Formulas can be used for calculating the rate of evaporation from a water surface such as a swimming pool. In some countries, the evaporation rate far exceeds the precipitation rate.
Evaporative cooling is restricted by atmospheric conditions. Humidity is the amount of water vapor in the air. The vapor content of air is measured with devices known as hygrometers. The measurements are usually expressed as specific humidity or percent relative humidity. The temperatures of the atmosphere and the water surface determine the equilibrium vapor pressure; 100% relative humidity occurs when the partial pressure of water vapor is equal to the equilibrium vapor pressure. This condition is often referred to as complete saturation. Humidity ranges from 0 grams per cubic metre in dry air to 30 grams per cubic metre (0.03 ounce per cubic foot) when the vapor is saturated at 30 °C.
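These quantities are easy to compute with the Magnus approximation to the equilibrium (saturation) vapor pressure over water. A minimal sketch using one commonly cited set of constants (6.112 hPa, 17.62, 243.12 °C, valid roughly between −45 °C and 60 °C); the same formula inverts to give the dew point discussed under Condensation below:

```python
import math

def saturation_vapor_pressure(t_c):
    """Magnus approximation for saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(e_hpa, t_c):
    """Relative humidity (%) as partial pressure over saturation pressure."""
    return 100.0 * e_hpa / saturation_vapor_pressure(t_c)

def dew_point(t_c, rh_percent):
    """Invert the Magnus formula: the temperature at which the air would saturate."""
    gamma = math.log(rh_percent / 100.0) + 17.62 * t_c / (243.12 + t_c)
    return 243.12 * gamma / (17.62 - gamma)

print(relative_humidity(12.0, 25.0))  # ~38% at 25 deg C
print(dew_point(25.0, 38.0))          # ~9.7 deg C
```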
Sublimation
Sublimation is the process by which water molecules directly leave the surface of ice without first becoming liquid water. Sublimation accounts for the slow mid-winter disappearance of ice and snow at temperatures too low to cause melting. Antarctica shows this effect to a unique degree because it is by far the continent with the lowest rate of precipitation on Earth. As a result, there are large areas where millennial layers of snow have sublimed, leaving behind whatever non-volatile materials they had contained. This is extremely valuable to certain scientific disciplines, a dramatic example being the collection of meteorites that are left exposed in unparalleled numbers and excellent states of preservation.
Sublimation is important in the preparation of certain classes of biological specimens for scanning electron microscopy. Typically the specimens are prepared by cryofixation and freeze-fracture, after which the broken surface is freeze-etched, being eroded by exposure to vacuum until it shows the required level of detail. This technique can display protein molecules, organelle structures and lipid bilayers with very low degrees of distortion.
Condensation
Water vapor will only condense onto another surface when that surface is cooler than the dew point temperature, or when the water vapor equilibrium in air has been exceeded. When water vapor condenses onto a surface, a net warming occurs on that surface. The water molecule brings heat energy with it. In turn, the temperature of the atmosphere drops slightly. In the atmosphere, condensation produces clouds, fog and precipitation (usually only when facilitated by cloud condensation nuclei). The dew point of an air parcel is the temperature to which it must cool before water vapor in the air begins to condense. Condensation in the atmosphere forms cloud droplets.
Also, a net condensation of water vapor occurs on surfaces when the temperature of the surface is at or below the dew point temperature of the atmosphere. Deposition is a phase transition separate from condensation which leads to the direct formation of ice from water vapor. Frost and snow are examples of deposition.
There are several mechanisms of cooling by which condensation occurs:
1) Direct loss of heat by conduction or radiation.
2) Cooling from the drop in air pressure which occurs with uplift of air, also known as adiabatic cooling.
Air can be lifted by mountains, which deflect the air upward, by convection, and by cold and warm fronts.
3) Advective cooling - cooling due to horizontal movement of air.
Importance and Uses
Provides water for plants and animals: Water vapour gets converted to rain and snow that serve as a natural source of water for plants and animals.
Controls evaporation: Excess water vapor in the air decreases the rate of evaporation.
Determines climatic conditions: Excess water vapor in the air produces rain, fog, snow etc. Hence, it determines climatic conditions.
Chemical reactions
A number of chemical reactions have water as a product. If the reactions take place at temperatures higher than the dew point of the surrounding air, the water will be formed as vapor and increase the local humidity; if below the dew point, local condensation will occur. Typical reactions that result in water formation are the burning of hydrogen or hydrocarbons in air or other oxygen containing gas mixtures, or as a result of reactions with oxidizers.
In a similar fashion other chemical or physical reactions can take place in the presence of water vapor resulting in new chemicals forming such as rust on iron or steel, polymerization occurring (certain polyurethane foams and cyanoacrylate glues cure with exposure to atmospheric humidity) or forms changing such as where anhydrous chemicals may absorb enough vapor to form a crystalline structure or alter an existing one, sometimes resulting in characteristic color changes that can be used for measurement.
Measurement
Measuring the quantity of water vapor in a medium can be done directly or remotely with varying degrees of accuracy. Remote methods such as electromagnetic absorption are possible from satellites above planetary atmospheres. Direct methods may use electronic transducers, moistened thermometers or hygroscopic materials measuring changes in physical properties or dimensions.
Impact on air density
Water vapor is lighter or less dense than dry air. At equivalent temperatures it is buoyant with respect to dry air, whereby the density of dry air at standard temperature and pressure (273.15 K, 101.325 kPa) is 1.27 g/L and water vapor at standard temperature has a vapor pressure of 0.6 kPa and the much lower density of 0.0048 g/L.
Calculations
Water vapor and dry air density calculations at 0 °C:
The molar mass of water is 18.02 g/mol, as calculated from the sum of the atomic masses of its constituent atoms.
The average molar mass of air (approx. 78% nitrogen, N2; 21% oxygen, O2; 1% other gases) is approximately 28.96 g/mol at standard temperature and pressure (STP).
Obeying Avogadro's Law and the ideal gas law, moist air will have a lower density than dry air. At maximum saturation (i.e. relative humidity = 100% at 0 °C) the average molar mass of the moist air goes down to about 28.9 g/mol.
STP conditions imply a temperature of 0 °C, at which the ability of water to become vapor is very restricted. Its concentration in air is very low at 0 °C. The red line on the chart to the right is the maximum concentration of water vapor expected for a given temperature. The water vapor concentration increases significantly as the temperature rises, approaching 100% (steam, pure water vapor) at 100 °C. However the difference in densities between air and water vapor would still exist (0.598 vs. 1.27 g/L).
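The buoyancy comparison in the next subsection can be illustrated numerically: by Dalton's law, the density of moist air is the sum of the partial-pressure contributions of dry air and water vapor. The molar masses, gas constant and saturation pressure below are assumed standard values rather than figures from this article; a minimal sketch in Python:

```python
# Minimal sketch: density of dry air versus moist air at 0 degrees C,
# from the ideal gas law applied to each partial pressure.
# All constants are assumed standard values, not data from this article.

R = 8.314          # universal gas constant, J/(mol*K)         (assumed)
M_dry = 0.02896    # average molar mass of dry air, kg/mol     (assumed)
M_h2o = 0.01802    # molar mass of water, kg/mol               (assumed)

def air_density(p_total, e_vapor, temp_k):
    """Density in kg/m^3 for total pressure p_total and vapor partial pressure e_vapor (Pa)."""
    p_dry = p_total - e_vapor
    return (p_dry * M_dry + e_vapor * M_h2o) / (R * temp_k)

T = 273.15
print(air_density(101325.0, 0.0, T))    # dry air: about 1.29 kg/m^3
print(air_density(101325.0, 611.0, T))  # saturated at 0 C (e_s ~ 611 Pa, assumed): slightly less dense
```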
At equal temperatures
At the same temperature, a column of dry air will be denser or heavier than a column of air containing any water vapor, the molar mass of diatomic nitrogen and diatomic oxygen both being greater than the molar mass of water. Thus, any volume of dry air will sink if placed in a larger volume of moist air. Also, a volume of moist air will rise or be buoyant if placed in a larger region of dry air. As the temperature rises the proportion of water vapor in the air increases, and its buoyancy will increase. The increase in buoyancy can have a significant atmospheric impact, giving rise to powerful, moisture rich, upward air currents when the air temperature and sea temperature reaches 25 °C or above. This phenomenon provides a significant driving force for cyclonic and anticyclonic weather systems (typhoons and hurricanes).
Respiration and breathing
Water vapor is a by-product of respiration in plants and animals. Its contribution to the total pressure increases as its concentration increases; this rising partial pressure lowers the partial pressure contribution of the other atmospheric gases (Dalton's Law), since the total air pressure must remain constant. The presence of water vapor in the air naturally dilutes or displaces the other air components as its concentration increases.
This can have an effect on respiration. In very warm air (35 °C) the proportion of water vapor is large enough to give rise to the stuffiness that can be experienced in humid jungle conditions or in poorly ventilated buildings.
Lifting gas
Water vapor has lower density than that of air and is therefore buoyant in air but has lower vapor pressure than that of air. When water vapor is used as a lifting gas by a thermal airship the water vapor is heated to form steam so that its vapor pressure is greater than the surrounding air pressure in order to maintain the shape of a theoretical "steam balloon", which yields approximately 60% the lift of helium and twice that of hot air.
General discussion
The amount of water vapor in an atmosphere is constrained by the restrictions of partial pressures and temperature. Dew point temperature and relative humidity act as guidelines for the process of water vapor in the water cycle. Energy input, such as sunlight, can trigger more evaporation on an ocean surface or more sublimation on a chunk of ice on top of a mountain. The balance between condensation and evaporation gives the quantity called vapor partial pressure.
The maximum partial pressure (saturation pressure) of water vapor in air varies with temperature of the air and water vapor mixture. A variety of empirical formulas exist for this quantity; the most used reference formula is the Goff-Gratch equation for the SVP over liquid water below zero degrees Celsius:
where T, the temperature of the moist air, is given in units of kelvin, and the saturation vapor pressure is given in units of millibars (hectopascals).
The formula is valid from about −50 to 102 °C; however there are a very limited number of measurements of the vapor pressure of water over supercooled liquid water. There are a number of other formulae which can be used.
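One commonly used family of such formulae is the Magnus (or August–Roche–Magnus) approximation; the version shown here uses the Alduchov–Eskridge coefficients, given as an assumed example rather than as the reference formula discussed above:

```latex
% Magnus-type approximation for saturation vapor pressure over liquid water,
% with coefficients after Alduchov & Eskridge (1996); t in degrees Celsius, e_s in hPa.
\[
  e_s(t) \approx 6.1094 \, \exp\!\left( \frac{17.625\,t}{t + 243.04} \right) \ \text{hPa},
  \qquad \text{roughly valid for } -40\,^{\circ}\text{C} \le t \le 50\,^{\circ}\text{C}.
\]
```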
Under certain conditions, such as when the boiling temperature of water is reached, a net evaporation will always occur during standard atmospheric conditions regardless of the percent of relative humidity. This immediate process will dispel massive amounts of water vapor into a cooler atmosphere.
Exhaled air is almost fully at equilibrium with water vapor at the body temperature. In the cold air the exhaled vapor quickly condenses, thus showing up as a fog or mist of water droplets and as condensation or frost on surfaces. Forcibly condensing these water droplets from exhaled breath is the basis of exhaled breath condensate, an evolving medical diagnostic test.
Controlling water vapor in air is a key concern in the heating, ventilating, and air-conditioning (HVAC) industry. Thermal comfort depends on the moist air conditions. Non-human comfort situations are called refrigeration, and also are affected by water vapor. For example, many food stores, like supermarkets, utilize open chiller cabinets, or food cases, which can significantly lower the water vapor pressure (lowering humidity). This practice delivers several benefits as well as problems.
In Earth's atmosphere
Gaseous water represents a small but environmentally significant constituent of the atmosphere. The percentage of water vapor in surface air varies from 0.01% at -42 °C (-44 °F) to 4.24% when the dew point is 30 °C (86 °F). Over 99% of atmospheric water is in the form of vapour, rather than liquid water or ice, and approximately 99.13% of the water vapour is contained in the troposphere. The condensation of water vapor to the liquid or ice phase is responsible for clouds, rain, snow, and other precipitation, all of which count among the most significant elements of what we experience as weather. Less obviously, the latent heat of vaporization, which is released to the atmosphere whenever condensation occurs, is one of the most important terms in the atmospheric energy budget on both local and global scales. For example, latent heat release in atmospheric convection is directly responsible for powering destructive storms such as tropical cyclones and severe thunderstorms. Water vapor is an important greenhouse gas owing to the presence of the hydroxyl bond which strongly absorbs in the infra-red.
Water vapor is the "working medium" of the atmospheric thermodynamic engine which transforms heat energy from sun irradiation into mechanical energy in the form of winds. Transforming thermal energy into mechanical energy requires an upper and a lower temperature level, as well as a working medium which shuttles forth and back between both. The upper temperature level is given by the soil or water surface of the Earth, which absorbs the incoming sun radiation and warms up, evaporating water. The moist and warm air at the ground is lighter than its surroundings and rises up to the upper limit of the troposphere. There the water molecules radiate their thermal energy into outer space, cooling down the surrounding air. The upper atmosphere constitutes the lower temperature level of the atmospheric thermodynamic engine. The water vapor in the now cold air condenses out and falls down to the ground in the form of rain or snow. The now heavier cold and dry air sinks down to ground as well; the atmospheric thermodynamic engine thus establishes a vertical convection, which transports heat from the ground into the upper atmosphere, where the water molecules can radiate it to outer space. Due to the Earth's rotation and the resulting Coriolis forces, this vertical atmospheric convection is also converted into a horizontal convection, in the form of cyclones and anticyclones, which transport the water evaporated over the oceans into the interior of the continents, enabling vegetation to grow.
Water in Earth's atmosphere is not merely below its boiling point (100 °C), but at altitude it goes below its freezing point (0 °C), due to water's highly polar attraction. When combined with its quantity, water vapor then has a relevant dew point and frost point, unlike e. g., carbon dioxide and methane. Water vapor thus has a scale height a fraction of that of the bulk atmosphere, as the water condenses and exits, primarily in the troposphere, the lowest layer of the atmosphere. Carbon dioxide () and methane, being well-mixed in the atmosphere, tend to rise above water vapour. The absorption and emission of both compounds contribute to Earth's emission to space, and thus the planetary greenhouse effect. This greenhouse forcing is directly observable, via distinct spectral features versus water vapor, and observed to be rising with rising levels. Conversely, adding water vapor at high altitudes has a disproportionate impact, which is why jet traffic has a disproportionately high warming effect. Oxidation of methane is also a major source of water vapour in the stratosphere, and adds about 15% to methane's global warming effect.
In the absence of other greenhouse gases, Earth's water vapor would condense to the surface; this has likely happened, possibly more than once. Scientists thus distinguish between non-condensable (driving) and condensable (driven) greenhouse gases, i.e., the above water vapor feedback.
Fog and clouds form through condensation around cloud condensation nuclei. In the absence of nuclei, condensation will only occur at much lower temperatures. Under persistent condensation or deposition, cloud droplets or snowflakes form, which precipitate when they reach a critical mass.
Atmospheric concentration of water vapour is highly variable between locations and times, from 10 ppmv in the coldest air to 5% (50 000 ppmv) in humid tropical air, and can be measured with a combination of land observations, weather balloons and satellites. The water content of the atmosphere as a whole is constantly depleted by precipitation. At the same time it is constantly replenished by evaporation, most prominently from oceans, lakes, rivers, and moist earth. Other sources of atmospheric water include combustion, respiration, volcanic eruptions, the transpiration of plants, and various other biological and geological processes. At any given time there is about 1.29 x 10^16 litres (3.4 x 10^15 gal.) of water in the atmosphere. The atmosphere holds 1 part in 2500 of the fresh water, and 1 part in 100,000 of the total water on Earth. The mean global content of water vapor in the atmosphere is roughly sufficient to cover the surface of the planet with a layer of liquid water about 25 mm deep. The mean annual precipitation for the planet is about 1 metre, a comparison which implies a rapid turnover of water in the air – on average, the residence time of a water molecule in the troposphere is about 9 to 10 days.
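The residence time quoted above follows from a simple ratio of the standing stock of precipitable water to the mean precipitation rate, using only the two figures given in this paragraph:

```latex
% Back-of-the-envelope check of the 9-10 day residence time.
\[
  \tau \approx \frac{\text{precipitable water}}{\text{precipitation rate}}
       \approx \frac{25\ \text{mm}}{1000\ \text{mm/yr}}
       = 0.025\ \text{yr} \approx 9\ \text{days}.
\]
```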
Global mean water vapour is about 0.25% of the atmosphere by mass and also varies seasonally, in terms of contribution to atmospheric pressure between 2.62 hPa in July and 2.33 hPa in December. IPCC AR6 expresses medium confidence in increase of total water vapour at about 1-2% per decade; it is expected to increase by around 7% per °C of warming.
Episodes of surface geothermal activity, such as volcanic eruptions and geysers, release variable amounts of water vapor into the atmosphere. Such eruptions may be large in human terms, and major explosive eruptions may inject exceptionally large masses of water exceptionally high into the atmosphere, but as a percentage of total atmospheric water, the role of such processes is trivial. The relative concentrations of the various gases emitted by volcanoes varies considerably according to the site and according to the particular event at any one site. However, water vapor is consistently the commonest volcanic gas; as a rule, it comprises more than 60% of total emissions during a subaerial eruption.
Atmospheric water vapor content is expressed using various measures. These include vapor pressure, specific humidity, mixing ratio, dew point temperature, and relative humidity.
Radar and satellite imaging
Because water molecules absorb microwaves and other radio wave frequencies, water in the atmosphere attenuates radar signals. In addition, atmospheric water will reflect and refract signals to an extent that depends on whether it is vapor, liquid or solid.
Generally, radar signals lose strength progressively the farther they travel through the troposphere. Different frequencies attenuate at different rates, such that some components of air are opaque to some frequencies and transparent to others. Radio waves used for broadcasting and other communication experience the same effect.
Water vapor reflects radar to a lesser extent than do water's other two phases. In the form of drops and ice crystals, water acts as a prism, which it does not do as an individual molecule; however, the existence of water vapor in the atmosphere causes the atmosphere to act as a giant prism.
A comparison of GOES-12 satellite images shows the distribution of atmospheric water vapor relative to the oceans, clouds and continents of the Earth. Vapor surrounds the planet but is unevenly distributed. The image loop on the right shows the monthly average of water vapor content, with the units given in centimeters; this is the precipitable water, or equivalent amount of water that could be produced if all the water vapor in the column were to condense. The lowest amounts of water vapor (0 centimeters) appear in yellow, and the highest amounts (6 centimeters) appear in dark blue. Areas of missing data appear in shades of gray. The maps are based on data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor on NASA's Aqua satellite. The most noticeable pattern in the time series is the influence of seasonal temperature changes and incoming sunlight on water vapor. In the tropics, a band of extremely humid air wobbles north and south of the equator as the seasons change. This band of humidity is part of the Intertropical Convergence Zone, where the easterly trade winds from each hemisphere converge and produce near-daily thunderstorms and clouds. Farther from the equator, water vapor concentrations are high in the hemisphere experiencing summer and low in the one experiencing winter. Another pattern that shows up in the time series is that water vapor amounts over land areas decrease more in winter months than adjacent ocean areas do. This is largely because air temperatures over land drop more in the winter than temperatures over the ocean. Water vapor condenses more rapidly in colder air.
As water vapor absorbs light in the visible spectral range, its absorption can be used in spectroscopic applications (such as DOAS) to determine the amount of water vapor in the atmosphere. This is done operationally, e.g. from the Global Ozone Monitoring Experiment (GOME) spectrometers on ERS (GOME) and MetOp (GOME-2). The weaker water vapor absorption lines in the blue spectral range and further into the UV up to its dissociation limit around 243 nm are mostly based on quantum mechanical calculations and are only partly confirmed by experiments.
Lightning generation
Water vapor plays a key role in lightning production in the atmosphere. From cloud physics, usually clouds are the real generators of static charge as found in Earth's atmosphere. The ability of clouds to hold massive amounts of electrical energy is directly related to the amount of water vapor present in the local system.
The amount of water vapor directly controls the permittivity of the air. During times of low humidity, static discharge is quick and easy. During times of higher humidity, fewer static discharges occur. Permittivity and capacitance work hand in hand to produce the megawatt outputs of lightning.
After a cloud, for instance, has started its way to becoming a lightning generator, atmospheric water vapor acts as a substance (or insulator) that decreases the ability of the cloud to discharge its electrical energy. Over a certain amount of time, if the cloud continues to generate and store more static electricity, the barrier that was created by the atmospheric water vapor will ultimately break down from the stored electrical potential energy. This energy will be released to a local oppositely charged region, in the form of lightning. The strength of each discharge is directly related to the atmospheric permittivity, capacitance, and the source's charge generating ability.
Extraterrestrial
Water vapor is common in the Solar System and by extension, other planetary systems. Its signature has been detected in the atmosphere of the Sun, occurring in sunspots. The presence of water vapor has been detected in the atmospheres of all seven extraterrestrial planets in the Solar System, the Earth's Moon, and the moons of other planets, although typically in only trace amounts.
Geological formations such as cryogeysers are thought to exist on the surface of several icy moons ejecting water vapor due to tidal heating and may indicate the presence of substantial quantities of subsurface water. Plumes of water vapor have been detected on Jupiter's moon Europa and are similar to plumes of water vapor detected on Saturn's moon Enceladus. Traces of water vapor have also been detected in the stratosphere of Titan. Water vapor has been found to be a major constituent of the atmosphere of the dwarf planet Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes." According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." Scientists studying Mars hypothesize that if water moves about the planet, it does so as vapor.
The brilliance of comet tails comes largely from water vapor. On approach to the Sun, the ice many comets carry sublimes to vapor. Knowing a comet's distance from the sun, astronomers may deduce the comet's water content from its brilliance.
Water vapor has also been confirmed outside the Solar System. Spectroscopic analysis of HD 209458 b, an extrasolar planet in the constellation Pegasus, provides the first evidence of atmospheric water vapor beyond the Solar System. A star called CW Leonis was found to have a ring of vast quantities of water vapor circling the aging, massive star. A NASA satellite designed to study chemicals in interstellar gas clouds, made the discovery with an onboard spectrometer. Most likely, "the water vapor was vaporized from the surfaces of orbiting comets." Other exoplanets with evidence of water vapor include HAT-P-11b and K2-18b.
See also
Air density
Atmospheric river
Boiling point
Condensation in aerosol dynamics
Deposition
Earth's atmosphere
Eddy covariance
Equation of state
Evaporative cooler
Fog
Frost
Gas laws
Gibbs free energy
Gibbs phase rule
Greenhouse gas
Heat capacity
Heat of vaporization
Humidity
Hygrometer
Ideal gas
Kinetic theory of gases
Latent heat
Latent heat flux
Microwave radiometer
Phase of matter
Saturation vapor density
Steam
Sublimation
Superheating
Supersaturation
Thermodynamics
Troposphere
Vapor pressure
References
Bibliography
External links
National Science Digital Library – Water Vapor
Calculate the condensation of your exhaled breath
Water Vapor Myths: A Brief Tutorial
AGU Water Vapor in the Climate System – 1995
Free Windows Program, Water Vapor Pressure Units Conversion Calculator – PhyMetrix
Greenhouse gases
Atmospheric thermodynamics
Forms of water
Water in gas
Psychrometrics
Articles containing video clips | Water vapor | [
"Physics",
"Chemistry",
"Environmental_science"
] | 5,672 | [
"Environmental chemistry",
"Phases of matter",
"Forms of water",
"Greenhouse gases",
"Matter"
] |
89,830 | https://en.wikipedia.org/wiki/Seismic%20wave | A seismic wave is a mechanical wave of acoustic energy that travels through the Earth or another planetary body. It can result from an earthquake (or generally, a quake), volcanic eruption, magma movement, a large landslide and a large man-made explosion that produces low-frequency acoustic energy. Seismic waves are studied by seismologists, who record the waves using seismometers, hydrophones (in water), or accelerometers. Seismic waves are distinguished from seismic noise (ambient vibration), which is persistent low-amplitude vibration arising from a variety of natural and anthropogenic sources.
The propagation velocity of a seismic wave depends on density and elasticity of the medium as well as the type of wave. Velocity tends to increase with depth through Earth's crust and mantle, but drops sharply going from the mantle to Earth's outer core.
Earthquakes create distinct types of waves with different velocities. When recorded by a seismic observatory, their different travel times help scientists locate the quake's hypocenter. In geophysics, the refraction or reflection of seismic waves is used for research into Earth's internal structure. Scientists sometimes generate and measure vibrations to investigate shallow, subsurface structure.
Types
Among the many types of seismic waves, one can make a broad distinction between body waves, which travel through the Earth, and surface waves, which travel at the Earth's surface.
Other modes of wave propagation exist than those described in this article; though of comparatively minor importance for earth-borne waves, they are important in the case of asteroseismology.
Body waves travel through the interior of the Earth.
Surface waves travel across the surface. Surface waves decay more slowly with distance than body waves which travel in three dimensions.
Particle motion of surface waves is larger than that of body waves, so surface waves tend to cause more damage.
Body waves
Body waves travel through the interior of the Earth along paths controlled by the material properties in terms of density and modulus (stiffness). The density and modulus, in turn, vary according to temperature, composition, and material phase. This effect resembles the refraction of light waves. Two types of particle motion result in two types of body waves: Primary and Secondary waves. This distinction was recognized in 1830 by the French mathematician Siméon Denis Poisson.
Primary waves
Primary waves (P waves) are compressional waves that are longitudinal in nature. P waves are pressure waves that travel faster than other waves through the earth to arrive at seismograph stations first, hence the name "Primary". These waves can travel through any type of material, including fluids, and can travel nearly 1.7 times faster than the S waves. In air, they take the form of sound waves, hence they travel at the speed of sound. Typical speeds are 330 m/s in air, 1450 m/s in water and about 5000 m/s in granite.
Secondary waves
Secondary waves (S waves) are shear waves that are transverse in nature. Following an earthquake event, S waves arrive at seismograph stations after the faster-moving P waves and displace the ground perpendicular to the direction of propagation. Depending on the propagational direction, the wave can take on different surface characteristics; for example, in the case of horizontally polarized S waves, the ground moves alternately to one side and then the other. S waves can travel only through solids, as fluids (liquids and gases) do not support shear stresses. S waves are slower than P waves, and speeds are typically around 60% of that of P waves in any given material. Shear waves can not travel through any liquid medium, so the absence of S waves in earth's outer core suggests a liquid state.
Surface waves
Seismic surface waves travel along the Earth's surface. They can be classified as a form of mechanical surface wave. Surface waves diminish in amplitude as they get farther from the surface and propagate more slowly than seismic body waves (P and S). Surface waves from very large earthquakes can have globally observable amplitude of several centimeters.
Rayleigh waves
Rayleigh waves, also called ground roll, are surface waves that propagate with motions that are similar to those of waves on the surface of water (note, however, that the associated seismic particle motion at shallow depths is typically retrograde, and that the restoring force in Rayleigh and in other seismic waves is elastic, not gravitational as for water waves). The existence of these waves was predicted by John William Strutt, Lord Rayleigh, in 1885. They are slower than body waves, e.g., at roughly 90% of the velocity of S waves for typical homogeneous elastic media. In a layered medium (e.g., the crust and upper mantle) the velocity of the Rayleigh waves depends on their frequency and wavelength. See also Lamb waves.
Love waves
Love waves are horizontally polarized shear waves (SH waves), existing only in the presence of a layered medium. They are named after Augustus Edward Hough Love, a British mathematician who created a mathematical model of the waves in 1911. They usually travel slightly faster than Rayleigh waves, about 90% of the S wave velocity.
Stoneley waves
A Stoneley wave is a type of boundary wave (or interface wave) that propagates along a solid-fluid boundary or, under specific conditions, also along a solid-solid boundary. Amplitudes of Stoneley waves have their maximum values at the boundary between the two contacting media and decay exponentially away from the contact. These waves can also be generated along the walls of a fluid-filled borehole, being an important source of coherent noise in vertical seismic profiles (VSP) and making up the low frequency component of the source in sonic logging.
The equation for Stoneley waves was first given by Dr. Robert Stoneley (1894–1976), emeritus professor of seismology, Cambridge.
Normal modes
Free oscillations of the Earth are standing waves, the result of interference between two surface waves traveling in opposite directions. Interference of Rayleigh waves results in spheroidal oscillation S while interference of Love waves gives toroidal oscillation T. The modes of oscillations are specified by three numbers, e.g., nSlm, where l is the angular order number (or spherical harmonic degree, see Spherical harmonics for more details). The number m is the azimuthal order number. It may take on 2l+1 values from −l to +l. The number n is the radial order number. It means the wave with n zero crossings in radius. For spherically symmetric Earth the period for given n and l does not depend on m.
Some examples of spheroidal oscillations are the "breathing" mode 0S0, which involves an expansion and contraction of the whole Earth, and has a period of about 20 minutes; and the "rugby" mode 0S2, which involves expansions along two alternating directions, and has a period of about 54 minutes. The mode 0S1 does not exist because it would require a change in the center of gravity, which would require an external force.
Of the fundamental toroidal modes, 0T1 represents changes in Earth's rotation rate; although this occurs, it is much too slow to be useful in seismology. The mode 0T2 describes a twisting of the northern and southern hemispheres relative to each other; it has a period of about 44 minutes.
The first observations of free oscillations of the Earth were done during the great 1960 earthquake in Chile. Presently the periods of thousands of modes have been observed. These data are used for constraining large scale structures of the Earth's interior.
P and S waves in Earth's mantle and core
When an earthquake occurs, seismographs near the epicenter are able to record both P and S waves, but those at a greater distance no longer detect the high frequencies of the first S wave. Since shear waves cannot pass through liquids, this phenomenon was original evidence for the now well-established observation that the Earth has a liquid outer core, as demonstrated by Richard Dixon Oldham. This kind of observation has also been used to argue, by seismic testing, that the Moon has a solid core, although recent geodetic studies suggest the core is still molten.
Notation
The naming of seismic waves is usually based on the wave type and its path; due to the theoretically infinite possibilities of travel paths and the different areas of application, a wide variety of nomenclatures have emerged historically, the standardization of which – for example in the IASPEI Standard Seismic Phase List – is still an ongoing process. The path that a wave takes between the focus and the observation point is often drawn as a ray diagram. Each path is denoted by a set of letters that describe the trajectory and phase through the Earth. In general, an upper case denotes a transmitted wave and a lower case denotes a reflected wave. The two exceptions to this seem to be "g" and "n".
For example:
ScP is a wave that begins traveling towards the center of the Earth as an S wave. Upon reaching the outer core the wave reflects as a P wave.
sPKIKP is a wave path that begins traveling towards the surface as an S wave. At the surface, it reflects as a P wave. The P wave then travels through the outer core, the inner core, the outer core, and the mantle.
Usefulness of P and S waves in locating an event
In the case of local or nearby earthquakes, the difference in the arrival times of the P and S waves can be used to determine the distance to the event. In the case of earthquakes that have occurred at global distances, three or more geographically diverse observing stations (using a common clock) recording P wave arrivals permits the computation of a unique time and location on the planet for the event. Typically, dozens or even hundreds of P wave arrivals are used to calculate hypocenters. The misfit generated by a hypocenter calculation is known as "the residual". Residuals of 0.5 second or less are typical for distant events, residuals of 0.1–0.2 s typical for local events, meaning most reported P arrivals fit the computed hypocenter that well. Typically a location program will start by assuming the event occurred at a depth of about 33 km; then it minimizes the residual by adjusting depth. Most events occur at depths shallower than about 40 km, but some occur as deep as 700 km.
A quick way to determine the distance from a location to the origin of a seismic wave less than 200 km away is to take the difference in arrival time of the P wave and the S wave in seconds and multiply by 8 kilometers per second. Modern seismic arrays use more complicated earthquake location techniques.
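The rule of thumb above follows from typical crustal velocities: a source at distance d produces an S−P lag of d/v_s − d/v_p, so d = Δt · v_p·v_s/(v_p − v_s). The velocities in the sketch below are assumed representative values, not figures from this article:

```python
# Minimal sketch: epicentral distance from the S-minus-P arrival-time difference.
# The wave speeds are assumed typical crustal values, not data from this article.

v_p = 6.0   # km/s, typical crustal P-wave speed (assumed)
v_s = 3.5   # km/s, typical crustal S-wave speed (assumed)

def distance_km(sp_lag_seconds):
    """Distance implied by an S-minus-P lag, for a straight path at constant speeds."""
    return sp_lag_seconds * (v_p * v_s) / (v_p - v_s)

print(distance_km(1.0))   # about 8.4 km per second of lag, matching the ~8 km/s rule of thumb
print(distance_km(10.0))  # a 10-second lag places the source roughly 84 km away
```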
At teleseismic distances, the first arriving P waves have necessarily travelled deep into the mantle, and perhaps have even refracted into the outer core of the planet, before travelling back up to the Earth's surface where the seismographic stations are located. The waves travel more quickly than if they had traveled in a straight line from the earthquake. This is due to the appreciably increased velocities within the planet, and is termed Huygens' Principle. Density in the planet increases with depth, which would slow the waves, but the modulus of the rock increases much more, so deeper means faster. Therefore, a longer route can take a shorter time.
The travel time must be calculated very accurately in order to compute a precise hypocenter. Since P waves move at many kilometers per second, being off on travel-time calculation by even a half second can mean an error of many kilometers in terms of distance. In practice, P arrivals from many stations are used and the errors cancel out, so the computed epicenter is likely to be quite accurate, on the order of 10–50 km or so around the world. Dense arrays of nearby sensors such as those that exist in California can provide accuracy of roughly a kilometer, and much greater accuracy is possible when timing is measured directly by cross-correlation of seismogram waveforms.
See also
Adams–Williamson equation
Helioseismology
Reflection seismology
References
Sources
External links
EDT: A MATLAB Website for seismic wave propagation
Seismology
Surface waves | Seismic wave | [
"Physics"
] | 2,532 | [
"Surface waves",
"Waves",
"Physical phenomena"
] |
90,446 | https://en.wikipedia.org/wiki/Equality%20%28mathematics%29 | In mathematics, equality is a relationship between two quantities or expressions, stating that they have the same value, or represent the same mathematical object. Equality between and is written , and pronounced " equals ". In this equality, and are distinguished by calling them left-hand side (LHS), and right-hand side (RHS). Two objects that are not equal are said to be distinct.
Equality is often considered a kind of primitive notion, meaning, it's not formally defined, but rather informally said to be "a relation each thing bears to itself and nothing else". This characterization is notably circular ("nothing else"). This makes equality a somewhat slippery idea to pin down.
Basic properties about equality like reflexivity, symmetry, and transitivity have been understood intuitively since at least the ancient Greeks, but weren't symbolically stated as general properties of relations until the late 19th century by Giuseppe Peano. Other properties like substitution and function application weren't formally stated until the development of symbolic logic.
There are generally two ways that equality is formalized in mathematics: through logic or through set theory. In logic, equality is a primitive predicate (a statement that may have free variables) with the reflexive property (called the Law of identity), and the substitution property. From those, one can derive the rest of the properties usually needed for equality. Logic also defines the principle of extensionality, which defines two objects of a certain kind to be equal if they satisfy the same external property (See the example of sets below).
After the foundational crisis in mathematics at the turn of the 20th century, set theory (specifically Zermelo–Fraenkel set theory) became the most common foundation of mathematics in order to resolve the crisis. In set theory, any two sets are defined to be equal if they have all the same members. This is called the Axiom of extensionality. Usually set theory is defined within logic, and therefore uses the equality described above, however, if a logic system does not have equality, it is possible to define equality within set theory.
Etymology
The word equal is derived from the Latin aequalis ('like', 'comparable', 'similar'), which itself stems from aequus ('level', 'just'). The word entered Middle English around the 14th century, borrowed from Old French (modern French égal).
The equals sign , now universally accepted in mathematics for equality, was first recorded by Welsh mathematician Robert Recorde in The Whetstone of Witte (1557). The original form of the symbol was much wider than the present form. In his book, Recorde explains his design of the "Gemowe lines", from the Latin ('twin'), using two parallel lines to represent equality because he believed that "no two things could be more equal." Later, a vertical version was also used by some but never overtook Recorde's version.
It was common into the 18th century to use an abbreviation of the word equals as the symbol for equality; examples included and , from the Latin . Diophantus's use of , short for ( 'equals'), in Arithmetica () is considered one of the first uses of an equals sign.
Basic properties
Reflexivity
For every a, one has a = a.
Symmetry
For every a and b, if a = b, then b = a.
Transitivity
For every a, b, and c, if a = b and b = c, then a = c.
Substitution
Informally, this just means that if a = b, then b can replace a in any mathematical expression or formula without changing its meaning. (For a formal explanation, see the axioms in the logic section below.) For example, if a = b, then from a > 0 one may conclude b > 0.
Operation application
For every a and b, with some operation f, if a = b, then f(a) = f(b). For example, if a = b, then a + 5 = b + 5 (taking f(x) = x + 5).
The first three properties are generally attributed to Giuseppe Peano for being the first to explicitly state these as fundamental properties of equality in his Arithmetices principia, nova methodo exposita (1889). However, the basic notions have always existed; for example, in Euclid's Elements (c. 300 BC), he includes 'common notions': "Things that are equal to the same thing are also equal to one another" (transitivity), "Things that coincide with one another are equal to one another" (reflexivity), along with some operation-application properties for addition and subtraction. The operation-application property was also stated in Peano's Arithmetices principia; however, it had been common practice in algebra since at least Diophantus (c. 250 AD). The substitution property is generally attributed to Gottfried Leibniz (c. 1686).
Equations
An equation is a symbolic equality of two mathematical expressions connected with an equals sign (=). Equation solving is the problem of finding values of some variable, called the unknown, for which the specified equality is true. Each value of the unknown for which the equation holds is called a solution of the given equation; it is also said to satisfy the equation. For example, the equation has the values and as its only solutions. The terminology is used similarly for equations with several unknowns.
In mathematical logic and computer science, an equation may be described as a binary formula or Boolean-valued expression, which may be true for some values of the variables (if any) and false for other values. More specifically, an equation represents a binary relation (i.e., a two-argument predicate) which may produce a truth value (true or false) from its arguments. In computer programming, the computation from the two expressions is known as comparison.
An equation can be used to define a set, called its solution set. For example, the set of all solution pairs of the equation forms the unit circle in analytic geometry; therefore, this equation is called .
Identities
An identity is an equality that is true for all values of its variables in a given domain. An "equation" may sometimes mean an identity, but more often than not, it specifies a subset of the variable space to be the subset where the equation is true. An example is (x + 1)^2 = x^2 + 2x + 1, which is true for each real number x. There is no standard notation that distinguishes an equation from an identity, or other use of the equality relation: one has to guess an appropriate interpretation from the semantics of expressions and the context. Sometimes, but not always, an identity is written with a triple bar (≡).
Definitions
Equations are often used to introduce new terms or symbols for constants, assert equalities, and introduce shorthand for complex expressions, which is called "equal by definition", and often denoted with ":=". It is similar to the concept of assignment of a variable in computer science. For example, e := lim_(n→∞) (1 + 1/n)^n defines Euler's number, and i^2 = −1 is the defining property of the imaginary number i.
In mathematical logic, this is called an extension by definition (by equality) which is a conservative extension to a formal system. This is done by taking the equation defining the new constant symbol as a new axiom of the theory.
The first recorded symbolic use of "Equal by definition" appeared in Logica Matematica (1894) by Cesare Burali-Forti, an Italian mathematician. Burali-Forti, in his book, used the notation ().
In logic
History
Equality (or identity) is often considered a primitive notion, informally said to be "a relation each thing bears to itself and to no other thing". This tradition can be traced back to at least 350 BC by Aristotle: in his Categories, he defines the notion of quantity in terms of a more primitive equality, stating:"The most distinctive mark of quantity is that equality and inequality are predicated of it. Each of the aforesaid quantities is said to be equal or unequal. For instance, one solid is said to be equal or unequal to another; number, too, and time can have these terms applied to them, indeed can all those kinds of quantity that have been mentioned. That which is not a quantity can by no means, it would seem, be termed equal or unequal to anything else. One particular disposition or one particular quality, such as whiteness, is by no means compared with another in terms of equality and inequality but rather in terms of similarity. Thus it is the distinctive mark of quantity that it can be called equal and unequal." - (Translated by E. M. Edghill)The characterization as "a relation each thing bears to itself and to no other thing" is often criticized as circular ("no other thing"), and around the 17th century, with the growth of modern logic, it became necessary to have a more concrete description of equality. In foundations of mathematics, especially mathematical logic and analytic philosophy, equality is often axiomatized through the law of identity and the substitution property.
The precursor to the substitution property of equality was first formulated by Gottfried Leibniz in his Discourse on Metaphysics (1686), stating, roughly, that "No two distinct things can have all properties in common." This has since been split into two principles, the substitution property (if a = b, then any property of a is a property of b), and its converse, the identity of indiscernibles (if a and b have all properties in common, then a = b). Its introduction to logic, and its first symbolic formulation, is due to Bertrand Russell and Alfred Whitehead in their Principia Mathematica (1910), who credit Leibniz for the idea.
Axioms
Law of identity: Stating that each thing is identical with itself, without restriction. That is, for every a, a = a. It is the first of the traditional three laws of thought. Stated symbolically as: ∀a (a = a).

Substitution property: Sometimes referred to as Leibniz's law, generally states that if two things are equal, then any property of one must be a property of the other. It can be stated formally as: for every a and b, and any formula φ(x) (with a free variable x), if a = b, then φ(a) implies φ(b). Stated symbolically as: (a = b) → (φ(a) → φ(b)).

Function application is also sometimes included in the axioms of equality, but isn't necessary as it can be deduced from the other two axioms, and similarly for symmetry and transitivity. (See the derivations below.)
In first-order logic, these are axiom schemas, each of which specifies an infinite set of axioms. If a theory has a predicate that satisfies the Law of Identity and Substitution property, it is common to say that it "has equality," or is "a theory with equality." The use of "equality" here is a misnomer in that an arbitrary binary predicate that satisfies those properties may not be true equality, and there is no property or list of properties one could add to correct for this. If, however, one is given that a predicate is true equality, then those properties are enough, since if b has all the same properties as a, and a has the property of being equal to a, then b has the property of being equal to a.
Objections
As mentioned above, these axioms do not explicitly define equality, in the sense that we still do not know if two objects are equal, only that if they are equal, then they have the same properties. If these axioms were to define a complete axiomatization of equality, meaning, if they were to define equality, then the converse of the second statement must be true. This is because any reflexive relation satisfying the substitution property within a given theory would be considered an "equality" for that theory. The converse of the Substitution property is the identity of indiscernibles, which states that two distinct things cannot have all their properties in common. Stated symbolically as: ∀a ∀b (∀φ (φ(a) ↔ φ(b)) → a = b).
In mathematics, the identity of indiscernibles is usually rejected since indiscernibles in mathematical logic are not necessarily forbidden. Outside of pure math, the identity of indiscernibles has attracted much controversy and criticism, especially from corpuscular philosophy and quantum mechanics.
Derivations of basic properties
Reflexivity of Equality: Given some set S with a relation R induced by equality (a R b whenever a = b), assume a is in S. Then a = a by the Law of Identity, thus a R a.
Symmetry of Equality: Given some set S with a relation R induced by equality, assume there are elements a and b in S such that a = b. Then, take the formula φ(x): x = a. So we have φ(a): a = a. Since a = b by assumption, and φ(a) holds by Reflexivity, we have that φ(b) holds, that is, b = a.
Transitivity of Equality: Given some set S with a relation R induced by equality, assume there are elements a, b, and c in S such that a = b and b = c. Then take the formula φ(x): x = c. So we have φ(b): b = c. Since b = a by Symmetry, and φ(b) holds by assumption, we have that φ(a) holds, that is, a = c.
Function application: Given some function f, assume there are elements a and b from its domain such that a = b. Then take the formula φ(x): f(a) = f(x). So we have φ(a): f(a) = f(a). Since a = b by assumption, and f(a) = f(a) holds by Reflexivity, we have that φ(b) holds, that is, f(a) = f(b).
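The same derivations can be replayed in a proof assistant. The following is a minimal Lean 4 sketch, using only reflexivity (rfl) and the built-in substitution/rewriting principle (▸); the theorem names are illustrative, not standard library names:

```lean
-- Minimal sketch (Lean 4): symmetry, transitivity and function application
-- derived from reflexivity (`rfl`) and the substitution principle (`▸`),
-- mirroring the informal derivations above.
universe u v

theorem symm_of_subst {α : Sort u} {a b : α} (h : a = b) : b = a :=
  h ▸ rfl   -- substitute b for a in the reflexive fact about a

theorem trans_of_subst {α : Sort u} {a b c : α} (h₁ : a = b) (h₂ : b = c) : a = c :=
  h₂ ▸ h₁   -- substitute c for b in the assumption a = b

theorem app_of_subst {α : Sort u} {β : Sort v} (f : α → β) {a b : α} (h : a = b) :
    f a = f b :=
  h ▸ rfl   -- substitute b for a in the reflexive fact about f a
```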
In set theory
Set theory is the branch of mathematics that studies sets, which can be informally described as "collections of objects." Although objects of any kind can be collected into a set, set theory – as a branch of mathematics – is mostly concerned with those that are relevant to mathematics as a whole.
Sets are uniquely characterized by their elements; this means that two sets that have precisely the same elements are equal (they are the same set). In a formalized set theory, this is usually defined by an axiom called the Axiom of extensionality.
For example, using set builder notation,
{x ∈ ℤ : 0 < x ≤ 3} = {1, 2, 3},
which states that "The set of all integers greater than 0 but not more than 3 is equal to the set containing only 1, 2, and 3", despite the differences in notation.
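A finite analogue of this can be checked mechanically, since set values in most programming languages also compare by membership alone; a small Python illustration (the variable names are arbitrary):

```python
# Two different descriptions of the same finite set compare equal, because set
# equality, like the Axiom of Extensionality, only depends on membership.
by_property = {x for x in range(-100, 100) if 0 < x <= 3}
by_listing = {1, 2, 3}

print(by_property == by_listing)   # True
print({3, 2, 1, 1} == by_listing)  # True: order and repetition do not matter
```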
José Ferreirós credits Richard Dedekind for being the first to explicitly state the principle, (although he does not assert it as a definition):"It very frequently happens that different things a, b, c ... considered for any reason under a common point of view, are collected together in the mind, and one then says that they form a system S; one calls the things a, b, c ... the elements of the system S, they are contained in S; conversely, S consists of these elements. Such a system S (or a collection, a manifold, a totality), as an object of our thought, is likewise a thing; it is completely determined when, for every thing, it is determined whether it is an element of S or not." - Richard Dedekind, 1888 (Translated by José Ferreirós)
Background
Around the turn of the 20th century, mathematics faced several paradoxes and counter-intuitive results. For example, Russell's paradox showed a contradiction of naive set theory, it was shown that the parallel postulate cannot be proved, the existence of mathematical objects that cannot be computed or explicitly described, and the existence of theorems of arithmetic that cannot be proved with Peano arithmetic. The result was a foundational crisis of mathematics.
The resolution of this crisis involved the rise of a new mathematical discipline called mathematical logic, which studies formal logic within mathematics. Subsequent discoveries in the 20th century then stabilized the foundations of mathematics into a coherent framework valid for all mathematics. This framework is based on a systematic use of axiomatic method and on set theory, specifically Zermelo–Fraenkel set theory, developed by Ernst Zermelo and Abraham Fraenkel. This set theory (and set theory in general) is now considered the most common foundation of mathematics.
Extensionality
The term extensionality, as used in 'Axiom of Extensionality has its roots in logic. An intensional definition describes the necessary and sufficient conditions for a term to apply to an object. For example: "An even number is an integer which is divisible by 2." An extensional definition instead lists all objects where the term applies. For example: "An even number is any one of the following integers: 0, 2, 4, 6, 8..., -2, -4, -8..." In logic, the extension of a predicate is the set of all things for which the predicate is true.
The logical term was introduced to set theory in 1893, when Gottlob Frege attempted to use this idea of an extension formally in his Basic Laws of Arithmetic, where, if P is a predicate, its extension is the set of all objects satisfying P. For example, if P is "x is even" then its extension is the set of even numbers. In his work, he defined his infamous Basic Law V, stating that if two predicates have the same extensions (they are satisfied by the same set of objects) then they are logically equivalent; however, it was determined later that this axiom led to Russell's paradox. The first explicit statement of the modern Axiom of Extensionality was in 1908 by Ernst Zermelo in a paper on the well-ordering theorem, where he presented the first axiomatic set theory, now called Zermelo set theory, which became the basis of modern set theories. The specific term for "Extensionality" used by Zermelo was "Bestimmtheit". The specific English term "extensionality" only became common in mathematical and logical texts in the 1920s and 1930s, particularly with the formalization of logic and set theory by figures like Alfred Tarski and John von Neumann.
Set equality based on first-order logic with equality
In first-order logic with equality (see the axioms above), the axiom of extensionality states that two sets that contain the same elements are the same set.
Logic axiom: x = y ⟹ ∀z (z ∈ x ⇔ z ∈ y)
Logic axiom: x = y ⟹ ∀z (x ∈ z ⇔ y ∈ z)
Set theory axiom: (∀z (z ∈ x ⇔ z ∈ y)) ⟹ x = y
The first two are given by the substitution property of equality from first-order logic; the last is a new axiom of the theory. Incorporating half of the work into the first-order logic may be regarded as a mere matter of convenience, as noted by Azriel Lévy.
"The reason why we take up first-order predicate calculus with equality is a matter of convenience; by this, we save the labor of defining equality and proving all its properties; this burden is now assumed by the logic."
Set equality based on first-order logic without equality
In first-order logic without equality, two sets are defined to be equal if they contain the same elements. Then the axiom of extensionality states that two equal sets are contained in the same sets.
Set theory definition: x = y is defined to mean ∀z (z ∈ x ⇔ z ∈ y)
Set theory axiom: x = y ⟹ ∀w (x ∈ w ⇔ y ∈ w)
Or, equivalently, one may choose to define equality in a way that mimics the substitution property explicitly, as the conjunction of all atomic formulas:
Set theory definition: x = y is defined to mean ∀z (z ∈ x ⇔ z ∈ y) ∧ ∀w (x ∈ w ⇔ y ∈ w)
Set theory axiom: (∀z (z ∈ x ⇔ z ∈ y)) ⟹ x = y
In either case, the Axiom of Extensionality based on first-order logic without equality states: (∀z (z ∈ x ⇔ z ∈ y)) ⟹ ∀w (x ∈ w ⇔ y ∈ w)
Proof of basic properties
Reflexivity: Given a set A, assume z ∈ A; it follows trivially that z ∈ A, and the same follows in reverse, therefore ∀z (z ∈ A ⇔ z ∈ A), thus A = A.
Symmetry: Given sets A and B such that A = B, then ∀z (z ∈ A ⇔ z ∈ B), which implies ∀z (z ∈ B ⇔ z ∈ A), thus B = A.
Transitivity: Given sets A, B, and C such that (1) A = B and (2) B = C, assume z ∈ A; then z ∈ B by (1), which implies z ∈ C by (2), and similarly for the reverse, therefore ∀z (z ∈ A ⇔ z ∈ C), thus A = C.
Similar relations
Approximate equality
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis.
Calculations are likely to involve rounding errors and other approximation errors. Log tables, slide rules, and calculators produce approximate answers to all but the simplest calculations. The results of computer calculations are normally an approximation, expressed in a limited number of significant digits, although they can be programmed to produce more precise results.
Approximate equality, if viewed as a binary relation (denoted by the symbol ≈) between real numbers or other things, and if precisely defined, is not an equivalence relation since it is not transitive, even if modeled as a fuzzy relation.
In computer science, equality is given by some relational operator. Real numbers are often approximated by floating-point numbers (a sequence of some fixed number of digits of a given base, scaled by an integer exponent of that base), so it is common to store an expression that denotes the real number so as not to lose precision. However, the equality of two real numbers given by an expression is known to be undecidable (specifically, real numbers defined by expressions involving the integers, the basic arithmetic operations, the logarithm and the exponential function). In other words, there cannot exist any algorithm for deciding such an equality (see Richardson's theorem).
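In Python, for example, exact float equality is usually replaced by an approximate comparison with an explicit tolerance, which also illustrates why such a relation cannot be transitive; a small sketch:

```python
import math

# Exact equality of floats is brittle because of rounding error ...
print(0.1 + 0.2 == 0.3)              # False: 0.1 + 0.2 evaluates to 0.30000000000000004
# ... so an approximate comparison with an explicit tolerance is used instead.
print(math.isclose(0.1 + 0.2, 0.3))  # True (default relative tolerance 1e-9)

# Approximate equality is not transitive, so it is not an equivalence relation:
a, b, c = 1.0, 1.0 + 8e-10, 1.0 + 1.6e-9
print(math.isclose(a, b), math.isclose(b, c), math.isclose(a, c))  # True True False
```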
A questionable equality under test may be denoted using the symbol ≟.
Equivalence relation
An equivalence relation is a mathematical relation that generalizes the idea of similarity or sameness. It is defined on a set S as a binary relation ~ that satisfies the three properties: reflexivity, symmetry, and transitivity. Reflexivity means that every element in S is equivalent to itself (a ~ a for all a in S). Symmetry requires that if one element is equivalent to another, the reverse also holds (a ~ b implies b ~ a). Transitivity ensures that if one element is equivalent to a second, and the second to a third, then the first is equivalent to the third (a ~ b and b ~ c imply a ~ c). These properties are enough to partition a set into disjoint equivalence classes. Conversely, every partition defines an equivalence relation.
The equivalence relation of equality is a special case, as, if restricted to a given set S, it is the strictest possible equivalence relation on S; specifically, equality partitions a set into equivalence classes consisting of all singleton sets. Other equivalence relations, while less restrictive, often generalize equality by identifying elements based on shared properties or transformations, such as congruence in modular arithmetic or similarity in geometry.
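For instance, congruence modulo 3 is an equivalence relation on the integers that is coarser than equality, and its classes partition any set of integers; a small Python sketch:

```python
# Minimal sketch: partition a range of integers into the equivalence classes
# of "congruence modulo 3", illustrating that the classes are disjoint and
# cover the whole set.
from collections import defaultdict

def equivalence_classes(elements, key):
    """Group elements so that two of them share a class exactly when they have the same key."""
    groups = defaultdict(list)
    for x in elements:
        groups[key(x)].append(x)
    return list(groups.values())

print(equivalence_classes(range(10), key=lambda x: x % 3))
# [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```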
Congruence relation
In abstract algebra, a congruence relation extends the idea of an equivalence relation to include the operation-application property. That is, given a set X and a set of operations on X, a congruence relation ~ has the property that a ~ b implies f(a) ~ f(b) for every operation f (here, written as unary to avoid cumbersome notation, but operations may be of any arity). A congruence relation on an algebraic structure such as a group, ring, or module is an equivalence relation that respects the operations defined on that structure.
Isomorphism
In mathematics, especially in abstract algebra and category theory, it is common to deal with objects that already have some internal structure. An isomorphism describes a kind of structure-preserving correspondence between two objects, establishing them as essentially identical in their structure or properties.
More formally, an isomorphism is a bijective mapping (or morphism) f between two sets or structures A and B such that f and its inverse preserve the operations, relations, or functions defined on those structures. This means that any operation or relation valid in A corresponds precisely to the operation or relation in B under the mapping. For example, in group theory, a group isomorphism f satisfies f(a ∗ b) = f(a) ∗ f(b) for all elements a, b, where ∗ denotes the group operation.
When two objects or systems are isomorphic, they are considered indistinguishable in terms of their internal structure, even though their elements or representations may differ. For instance, all cyclic groups of infinite order are isomorphic to the integers, ℤ, with addition. Similarly, in linear algebra, two vector spaces are isomorphic if they have the same dimension, as there exists a linear bijection between their elements.
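A small illustrative check (a sketch, not from the original article): the map f(k) = 3^k mod 7 from the additive cyclic group Z6 to the multiplicative group of integers modulo 7 can be verified directly to preserve the group operation and to be a bijection, which is exactly the isomorphism property described above:

```python
# Check that f(k) = 3^k mod 7 is a group isomorphism from (Z_6, +) to ((Z/7Z)*, *).
n, p, g = 6, 7, 3
f = lambda k: pow(g, k, p)

# Structure preservation: f(a + b) = f(a) * f(b) for all group elements.
is_homomorphism = all(
    f((a + b) % n) == (f(a) * f(b)) % p
    for a in range(n) for b in range(n)
)
# Bijectivity: the six images are pairwise distinct.
is_bijective = len({f(k) for k in range(n)}) == n

print(is_homomorphism, is_bijective)   # True True -> f is an isomorphism
```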
The concept of isomorphism extends to numerous branches of mathematics, including graph theory (graph isomorphism), topology (homeomorphism), and algebra (group and ring isomorphisms), among others. Isomorphisms facilitate the classification of mathematical entities and enable the transfer of results and techniques between similar systems. Bridging the gap between isomorphism and equality was one motivation for the development of category theory, as well as for homotopy type theory and univalent foundations.
See also
Equipollence (geometry)
Homotopy type theory
Identity type
Inequality
Logical equality
Logical equivalence
Proportionality (mathematics)
Theory of pure equality
Notes
References
Mathematical logic
Binary relations
Elementary arithmetic
Equivalence (mathematics) | Equality (mathematics) | [
"Mathematics"
] | 4,813 | [
"Elementary arithmetic",
"Mathematical logic",
"Binary relations",
"Elementary mathematics",
"Arithmetic",
"Mathematical relations"
] |
90,465 | https://en.wikipedia.org/wiki/Super-Poulet%20number | In number theory, a super-Poulet number is a Poulet number, or pseudoprime to base 2, whose every divisor d divides 2^d − 2.
For example, 341 is a super-Poulet number: it has positive divisors (1, 11, 31, 341), and we have:
(2^11 − 2) / 11 = 2046 / 11 = 186
(2^31 − 2) / 31 = 2147483646 / 31 = 69273666
(2^341 − 2) / 341 = 13136332798696798888899954724741608669335164206654835981818117894215788100763407304286671514789484550
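The defining property is easy to test directly. The following Python sketch (illustrative only, and deliberately naive about efficiency) checks whether a number is a super-Poulet number:

```python
# Super-Poulet check: n must be composite and every divisor d of n must satisfy
# d | 2^d - 2, i.e. 2^d is congruent to 2 modulo d.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_super_poulet(n):
    divs = divisors(n)
    composite = len(divs) > 2          # a Poulet number must be composite
    # pow(2, d, d) computes 2^d mod d without forming the huge power 2^d.
    return composite and all(pow(2, d, d) == 2 % d for d in divs)

print(is_super_poulet(341))   # True:  341 = 11 * 31
print(is_super_poulet(561))   # False: 561 is a Poulet number but not super-Poulet
```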
When 2^p − 1 (for a prime p) is not itself prime, then it and every composite divisor of it are pseudoprimes to base 2, so it is a super-Poulet number.
The super-Poulet numbers below 10,000 are:
Super-Poulet numbers with 3 or more distinct prime divisors
It is relatively easy to construct super-Poulet numbers with 3 distinct prime divisors: if three Poulet numbers pairwise share three prime factors p, q and r — that is, pq, pr and qr are all Poulet numbers — then the product pqr of the three prime factors is a super-Poulet number.
Example:
2701 = 37 * 73 is a Poulet number,
4033 = 37 * 109 is a Poulet number,
7957 = 73 * 109 is a Poulet number;
so 294409 = 37 * 73 * 109 is a super-Poulet number too.
Super-Poulet numbers with up to 7 distinct prime factors can be obtained from the following sets of primes:
{ 103, 307, 2143, 2857, 6529, 11119, 131071 }
{ 709, 2833, 3541, 12037, 31153, 174877, 184081 }
{ 1861, 5581, 11161, 26041, 37201, 87421, 102301 }
{ 6421, 12841, 51361, 57781, 115561, 192601, 205441 }
For example, 1118863200025063181061994266818401 = 6421 * 12841 * 51361 * 57781 * 115561 * 192601 * 205441 is a super-Poulet number with 7 distinct prime factors and 120 Poulet numbers among its divisors.
External links
Numericana
Integer sequences | Super-Poulet number | [
"Mathematics"
] | 483 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Numbers",
"Number theory"
] |
1,726,883 | https://en.wikipedia.org/wiki/Intergenic%20region | An intergenic region is a stretch of DNA sequences located between genes. Intergenic regions may contain functional elements and junk DNA.
Properties and functions
Intergenic regions may contain a number of functional DNA sequences such as promoters and regulatory elements, enhancers, spacers, and (in eukaryotes) centromeres. They may also contain origins of replication, scaffold attachment regions, and transposons and viruses.
Non-functional DNA elements such as pseudogenes and repetitive DNA, both of which are types of junk DNA, can also be found in intergenic regions—although they may also be located within genes in introns. It is possible that these regions contain as-yet unidentified functional elements, such as non-coding genes or regulatory sequences. This indeed occurs occasionally, but the amount of functional DNA discovered usually constitutes only a tiny fraction of the overall amount of intergenic or intronic DNA.
Intergenic regions in different organisms
In humans, intergenic regions comprise about 50% of the genome, whereas this number is much less in bacteria (15%) and yeast (30%).
As with most other non-coding DNA, the GC-content of intergenic regions varies considerably among species. For example, in Plasmodium falciparum, many intergenic regions have an AT content of 90%.
Molecular evolution of intergenic regions
Functional elements in intergenic regions will evolve slowly because their sequence is maintained by negative selection. In species with very large genomes, a large percentage of intergenic regions is probably junk DNA and it will evolve at the neutral rate of evolution. Junk DNA sequences are not maintained by purifying selection but gain-of-function mutations with deleterious fitness effects can occur.
Phylostratigraphic inference and bioinformatics methods have shown that intergenic regions can—on geological timescales—transiently evolve into open reading frame sequences that mimic those of protein coding genes, and can therefore lead to the evolution of novel protein-coding genes in a process known as de novo gene birth.
See also
Exon
Promoter (biology)
ENCODE
Heterochromatin
Noncoding DNA
Repetitive DNA
Regulator gene
Whole genome sequencing
References
External links
ENCODE threads Explorer Characterization of intergenic regions and gene definition. Nature (journal)
DNA
Molecular biology | Intergenic region | [
"Chemistry",
"Biology"
] | 468 | [
"Biochemistry",
"Molecular biology"
] |
1,726,921 | https://en.wikipedia.org/wiki/Photomagneton | The photomagneton is a theoretical treatment of the unitary group in quantum field theory and quantum chemistry that effectively describes the experimentally observed inverse Faraday effect. When circularly polarized light travels through a plasma, the angular momentum associated to the circular motion of the photons induces an angular momentum in the electrons of the plasma. This angular momentum induces an associated magnetic field.
Exactly how this happens remains a subject of debate. For instance, if the so-called ghost field does not contribute to the free electromagnetic energy density in the plasma, then the electron must couple to something like a complex electric field. However, if the field induces a finite magnetic field in the absence of matter, then the implication may be a finite photon rest mass.
References
A. Hasanein and M. Evans, The Photomagneton and Quantum Field Theory: Vol. 1 of Quantum Chemistry, World Scientific, 1994
Quantum field theory
Magneto-optic effects | Photomagneton | [
"Physics",
"Chemistry",
"Materials_science"
] | 192 | [
"Quantum field theory",
"Physical phenomena",
"Quantum mechanics",
"Electric and magnetic fields in matter",
"Optical phenomena",
"Magneto-optic effects",
"Quantum physics stubs"
] |
1,726,941 | https://en.wikipedia.org/wiki/Grubbs%20catalyst | Grubbs catalysts are a series of transition metal carbene complexes used as catalysts for olefin metathesis. They are named after Robert H. Grubbs, the chemist who supervised their synthesis. Several generations of the catalyst have also been developed. Grubbs catalysts tolerate many functional groups in the alkene substrates, are air-tolerant, and are compatible with a wide range of solvents. For these reasons, Grubbs catalysts have become popular in synthetic organic chemistry. Grubbs, together with Richard R. Schrock and Yves Chauvin, won the Nobel Prize in Chemistry in recognition of their contributions to the development of olefin metathesis.
First-generation Grubbs catalyst
In the 1960s, ruthenium trichloride was found to catalyze olefin metathesis. Processes were commercialized based on these discoveries. These ill-defined but highly active homogeneous catalysts remain in industrial use. The first well-defined ruthenium catalyst was reported in 1992. It was prepared from RuCl2(PPh3)4 and diphenylcyclopropene.
This initial ruthenium catalyst was followed in 1995 by what is now known as the first-generation Grubbs catalyst. It is synthesized from RuCl2(PPh3)3, phenyldiazomethane, and tricyclohexylphosphine in a one-pot synthesis.
The first-generation Grubbs catalyst was the first well-defined Ru-based catalyst. It is also important as a precursor to all other Grubbs-type catalysts.
Second-generation Grubbs catalyst
The second-generation catalyst has the same uses in organic synthesis as the first generation catalyst, but generally with higher activity. This catalyst is stable toward moisture and air, thus is easier to handle in laboratories.
Shortly before the discovery of the second-generation Grubbs catalyst, a very similar catalyst based on an unsaturated N-heterocyclic carbene (1,3-bis(2,4,6-trimethylphenyl)imidazole) was reported independently by Nolan and Grubbs in March 1999, and by Fürstner in June of the same year. Shortly thereafter, in August 1999, Grubbs reported the second-generation catalyst, based on a saturated N-heterocyclic carbene (1,3-bis(2,4,6-trimethylphenyl)dihydroimidazole):
In both the saturated and unsaturated cases a phosphine ligand is replaced with an N-heterocyclic carbene (NHC), which is characteristic of all second-generation-type catalysts.
Both the first- and second-generation catalysts are commercially available, along with many derivatives of the second-generation catalyst.
Hoveyda–Grubbs catalysts
In the Hoveyda–Grubbs catalysts, the benzylidene ligands have a chelating ortho-isopropoxy group attached to the benzene rings. The ortho-isopropoxybenzylidene moiety is sometimes referred to as a Hoveyda chelate. The chelating oxygen atom replaces a phosphine ligand, which in the case of the 2nd generation catalyst, gives a completely phosphine-free structure. The 1st generation Hoveyda–Grubbs catalyst was reported in 1999 by Amir H. Hoveyda's group, and in the following year, the second-generation Hoveyda–Grubbs catalyst was described in nearly simultaneous publications by the Blechert and Hoveyda laboratories. Siegfried Blechert's name is not commonly included in the eponymous catalyst name. The Hoveyda–Grubbs catalysts, while more expensive and slower to initiate than the Grubbs catalyst from which they are derived, are popular because of their improved stability. By changing the steric and electronic properties of the chelate, the initiation rate of the catalyst can be modulated, such as in the Zhan catalysts. Hoveyda–Grubbs catalysts are easily formed from the corresponding Grubbs catalyst by the addition of the chelating ligand and the use of a phosphine scavenger like copper(I) chloride:
The second-generation Hoveyda–Grubbs catalysts can also be prepared from the 1st generation Hoveyda–Grubbs catalyst by the addition of the NHC:
In one study published by Grubbs and Hong in 2006, a water-soluble Grubbs catalyst was prepared by attaching a polyethylene glycol chain to the imidazolidine group. This catalyst is used in the ring-closing metathesis reaction in water of a diene carrying an ammonium salt group making it water-soluble as well.
Third-generation Grubbs catalyst (fast-initiating catalysts)
The rate of the Grubbs catalyst can be altered by replacing the phosphine ligand with more labile pyridine ligands. By using 3-bromopyridine the initiation rate is increased more than a millionfold. Both pyridine and 3-bromopyridine are commonly used, with the bromo- version 4.8 times more labile resulting in even faster rates. The catalyst is traditionally isolated as a two pyridine complex, however one pyridine is lost upon dissolving and reversibly inhibits the ruthenium center throughout any chemical reaction.
The principal application of the fast-initiating catalysts is as initiators for ring opening metathesis polymerisation (ROMP). Because of their usefulness in ROMP these catalysts are sometimes referred to as the 3rd generation Grubbs catalysts. The high ratio of the rate of initiation to the rate of propagation makes these catalysts useful in living polymerization, yielding polymers with low polydispersity.
Applications
Grubbs catalysts are of interest for olefin metathesis. It is mainly applied to fine chemical synthesis. Large-scale commercial applications of olefin metathesis almost always employ heterogeneous catalysts or ill-defined systems based on ruthenium trichloride.
References
Organoruthenium compounds
Catalysts
Phosphine complexes
Chloro complexes
D
Substances discovered in the 1990s
Ruthenium(II) compounds
Cyclohexyl compounds | Grubbs catalyst | [
"Chemistry"
] | 1,333 | [
"Catalysis",
"Catalysts",
"Chemical kinetics"
] |
1,727,187 | https://en.wikipedia.org/wiki/Extended%20H%C3%BCckel%20method | The extended Hückel method is a semiempirical quantum chemistry method, developed by Roald Hoffmann since 1963. It is based on the Hückel method but, while the original Hückel method only considers pi orbitals, the extended method also includes the sigma orbitals.
The extended Hückel method can be used for determining the molecular orbitals, but it is not very successful in determining the structural geometry of an organic molecule. It can however determine the relative energy of different geometrical configurations. It involves calculations of the electronic interactions in a rather simple way for which the electron-electron repulsions are not explicitly included and the total energy is just a sum of terms for each electron in the molecule. The off-diagonal Hamiltonian matrix elements are given by an approximation due to Wolfsberg and Helmholz that relates them to the diagonal elements and the overlap matrix element.
Hij = K Sij (Hii + Hjj) / 2
where K is the Wolfsberg–Helmholz constant, and is usually given a value of 1.75. In the extended Hückel method, only valence electrons are considered; the core electron energies and functions are supposed to be more or less constant between atoms of the same type. The method uses a series of parametrized energies calculated from atomic ionization potentials or theoretical methods to fill the diagonal of the Fock matrix. After filling the non-diagonal elements and diagonalizing the resulting Fock matrix, the energies (eigenvalues) and wavefunctions (eigenvectors) of the valence orbitals are found.
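As an illustration of this procedure, the following Python sketch sets up and diagonalizes a two-orbital extended Hückel problem (for example, the two 1s orbitals of H2). The diagonal energy and the overlap value are assumed, placeholder numbers rather than fitted parameters, and SciPy's generalized eigenvalue solver is used because the basis is non-orthogonal:

```python
import numpy as np
from scipy.linalg import eigh

K = 1.75                 # Wolfsberg-Helmholz constant
H_ii = -13.6             # diagonal element (eV), ~ valence ionization energy of H 1s
S_12 = 0.6               # assumed overlap between the two orbitals

S = np.array([[1.0, S_12],
              [S_12, 1.0]])
H = np.array([[H_ii, K * S_12 * H_ii],
              [K * S_12 * H_ii, H_ii]])   # H_ij = K * S_ij * (H_ii + H_jj) / 2

# Generalized eigenvalue problem H C = S C E for a non-orthogonal basis.
energies, coeffs = eigh(H, S)
print(energies)          # bonding orbital below H_ii, antibonding orbital above
```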
It is common in many theoretical studies to use the extended Hückel molecular orbitals as a preliminary step to determining the molecular orbitals by a more sophisticated method such as the CNDO/2 method and ab initio quantum chemistry methods. Since the extended Hückel basis set is fixed, the monoparticle calculated wavefunctions must be projected to the basis set where the accurate calculation is to be done. One usually does this by adjusting the orbitals in the new basis to the old ones by least squares method.
As only valence electron wavefunctions are found by this method, one must fill the core electron functions by orthonormalizing the rest of the basis set with the calculated orbitals and then selecting the ones with less energy. This leads to the determination of more accurate structures and electronic properties, or in the case of ab initio methods, to somewhat faster convergence.
The method was first used by Roald Hoffmann who developed, with Robert Burns Woodward, rules for elucidating reaction mechanisms (the Woodward–Hoffmann rules). He used pictures of the molecular orbitals from extended Hückel theory to work out the orbital interactions in these cycloaddition reactions.
A closely similar method was used earlier by Hoffmann and William Lipscomb for studies of boron hydrides. The off-diagonal Hamiltonian matrix elements were given as proportional to the overlap integral.
This simplification of the Wolfsberg and Helmholz approximation is reasonable for boron hydrides as the diagonal elements are reasonably similar due to the small difference in electronegativity between boron and hydrogen.
The method works poorly for molecules that contain atoms of very different electronegativity. To overcome this weakness, several groups have suggested iterative schemes that depend on the atomic charge. One such method, that is still widely used in inorganic and organometallic chemistry is the Fenske-Hall method.
A program for the extended Hückel method is YAeHMOP which stands for "yet another extended Hückel molecular orbital package". YAeHMOP has also been merged with the Avogadro open-source molecular editor and visualizer to enable calculations directly from the Avogadro graphical user interface for materials that are periodic in one, two, or three dimensions. This integration also enables visualization of band structures, total and projected density of states, and crystal orbital overlap/Hamilton populations (COOPs/COHPs).
See also
Erich Hückel
Roald Hoffmann
References
External links
Online Extended Hückel Calculator.
Semiempirical quantum chemistry methods | Extended Hückel method | [
"Chemistry"
] | 843 | [
"Computational chemistry",
"Quantum chemistry",
"Semiempirical quantum chemistry methods"
] |
1,727,409 | https://en.wikipedia.org/wiki/Mesoplanet | Mesoplanets are planetary-mass objects with sizes smaller than Mercury but larger than Ceres. The term was coined by Isaac Asimov. Assuming size is defined in relation to equatorial radius, mesoplanets should be approximately 500 km to 2,500 km in radius.
History
The term was coined in Asimov's essay "What's in a Name?", which first appeared in the Los Angeles Times in the late 1980s and was reprinted in his 1990 book Frontiers; the term was later revisited in his essay, "The Incredible Shrinking Planet" which appeared first in The Magazine of Fantasy and Science Fiction and then in the anthology The Relativity of Wrong (1988).
Asimov noted that the Solar System has many planetary bodies (as opposed to the Sun and natural satellites) and stated that lines dividing "major planets" from minor planets were necessarily arbitrary. Asimov then pointed out that there was a large gap in size between Mercury, the smallest planetary body that was considered to be undoubtedly a major planet, and Ceres, the largest planetary body that was considered to be undoubtedly a minor planet. Only one planetary body known at the time, Pluto, fell within the gap. Rather than arbitrarily decide whether Pluto belonged with the major planets or the minor planets, Asimov suggested that any planetary body that fell within the size gap between Mercury and Ceres be called a mesoplanet, because mesos means "middle" in Greek. In his words:
"... my own suggestion is that everything from Mercury up be called a major planet; everything from Ceres down be called a minor planet; and everything between Mercury and Ceres be called a "mesoplanet" (from a Greek word for "intermediate"). At the moment, Pluto is the only mesoplanet known." — I. Asimov (1988)
Today, the known objects that would be included by this definition are Pluto, Eris, Haumea, Makemake, Gonggong, Quaoar, probably Sedna, and perhaps Orcus. These eight, together with Ceres, are the objects astronomers generally agree are dwarf planets (though with some doubt regarding Orcus); other smaller bodies have been proposed, but astronomers disagree about their dwarf planethood.
See also
Asteroid
Centaur (minor planet)
Fusor (astronomy)
Protoplanet
Planetesimal
Brown dwarf
References
Isaac Asimov
Types of planet
Definition of planet | Mesoplanet | [
"Astronomy"
] | 474 | [
"Definition of planet",
"Astronomical controversies",
"Astronomical classification systems"
] |
1,727,620 | https://en.wikipedia.org/wiki/Methylene%20diphenyl%20diisocyanate | Methylene diphenyl diisocyanate (MDI) is an aromatic diisocyanate. Three isomers are common, varying by the positions of the isocyanate groups around the rings: 2,2′-MDI, 2,4′-MDI, and 4,4′-MDI. The 4,4′ isomer is most widely used, and is also known as 4,4′-diphenylmethane diisocyanate. This isomer is also known as Pure MDI. MDI reacts with polyols in the manufacture of polyurethane. It is the most produced diisocyanate, accounting for 61.3% of the global market in the year 2000.
Production
Total world production of MDI and polymeric MDI is over 7.5 million tonnes per year (in 2017).
As of 2019, the largest producer was Wanhua Chemical Group. Other major producers are Covestro, BASF, Dow, Huntsman, Tosoh, Kumho Mitsui Chemicals. All major producers of MDI are members of the International Isocyanate Institute, whose aim is the promotion of the safe handling of MDI and TDI in the workplace, community and environment.
The first step of the production of MDI is the reaction of aniline and formaldehyde, using hydrochloric acid as a catalyst to produce 4,4'-Methylenedianiline and other diamine precursors, as well as their corresponding polyamines:
Then, these diamines are treated with phosgene to form a mixture of isocyanates, the isomer ratio being determined by the isomeric composition of the diamine. Two different reaction mechanisms for this transformation are possible, namely "phosgenations first" and "step-wise phosgenations".
Distillation of the mixture gives a mixture of oligomeric polyisocyanates, known as polymeric MDI, and a mixture of MDI isomers which has a low 2,4′ isomer content. Further purification entails fractionation of the MDI isomer mixture.
Reactivity of the isocyanate group
The positions of the isocyanate groups influences their reactivity. In 4,4′-MDI, the two isocyanate groups are equivalent but in 2,4′-MDI the two groups display highly differing reactivities. The group at the 4-position is approximately four times more reactive than the group at the 2-position due to steric hindrance.
Applications
The major application of 4,4′-MDI is the production of rigid polyurethane. These rigid polyurethane foams are good thermal insulators and used in nearly all freezers and refrigerators worldwide, as well as buildings. Typical polyols used are polyethylene adipate (a polyester) and poly(tetramethylene ether) glycol (a polyether).
4,4′-MDI is also used as an industrial strength adhesive, which is available to end consumers as various high-strength bottled glue preparations.
Safety
MDI, like the other isocyanates, is an allergen and sensitizer. Persons developing sensitivity to isocyanates may have dangerous systemic reactions to extremely small exposures, including respiratory failure. Handling MDI requires strict engineering controls and personal protective equipment. It is a potentially violently reactive material towards water and other nucleophiles.
References
External links
International Chemical Safety Card 0298
IARC Monograph: "4,4'-Methylenediphenyl Diisocyanate"
NIOSH Safety and Health Topic: Isocyanates, from the website of the National Institute for Occupational Safety and Health (NIOSH)
Isofact American Chemistry Council Diisocyanates Panel
Azom Chemical database on Polyurethane chemistry
MDI and the Environment - 2005 presentation by Center for the Polyurethanes Industry
International Isocyanate Institute
Concise International Chemical Assessment Document 27
Isocyanates
Monomers
IARC Group 3 carcinogens | Methylene diphenyl diisocyanate | [
"Chemistry",
"Materials_science"
] | 855 | [
"Isocyanates",
"Monomers",
"Functional groups",
"Polymer chemistry"
] |
1,728,510 | https://en.wikipedia.org/wiki/Permanganate | A permanganate () is a chemical compound with the manganate(VII) ion, , the conjugate base of permanganic acid. Because the manganese atom has a +7 oxidation state, the permanganate(VII) ion is a strong oxidising agent. The ion is a transition metal ion with a tetrahedral structure. Permanganate solutions are purple in colour and are stable in neutral or slightly alkaline media. The exact chemical reaction depends on the carbon-containing reactants present and the oxidant used. For example, trichloroethane (C2H3Cl3) is oxidised by permanganate ions to form carbon dioxide (CO2), manganese dioxide (MnO2), hydrogen ions (H+), and chloride ions (Cl−).
8 MnO4− + 3 C2H3Cl3 → 6 CO2 + 8 MnO2 + H+ + 4 H2O + 9 Cl−
In an acidic solution, permanganate(VII) is reduced to the pale pink manganese(II) (Mn2+) with an oxidation state of +2.
MnO4− + 8 H+ + 5 e− → Mn2+ + 4 H2O
In a strongly basic or alkaline solution, permanganate(VII) is reduced to the green manganate ion, with an oxidation state of +6.
MnO4− + e− → MnO42−
In a neutral solution, however, it gets reduced to the brown manganese dioxide MnO2 with an oxidation state of +4.
2 H2O + MnO4− + 3 e− → MnO2 + 4 OH−
Production
Permanganates can be produced by oxidation of manganese compounds such as manganese chloride or manganese sulfate by strong oxidizing agents, for instance, sodium hypochlorite or lead dioxide:
2 MnCl2 + 5 NaClO + 6 NaOH → 2 NaMnO4 + 9 NaCl + 3 H2O
2 MnSO4 + 5 PbO2 + 3 H2SO4 → 2 HMnO4 + 5 PbSO4 + 2 H2O
It may also be produced by the disproportionation of manganates, with manganese dioxide as a side-product:
3 Na2MnO4 + 2 H2O → 2 NaMnO4 + MnO2 + 4 NaOH
They are produced commercially by electrolysis or air oxidation of alkaline solutions of manganate salts (MnO42−).
Usage
Permanganate, usually as the potassium salt, is a common and strong disinfectant, used regularly to sanitize baths, toilets, and wash basins. It is a cheap and extremely effective compound for the task.
Properties
Permanganates(VII) are salts of permanganic acid. They have a deep purple colour, due to a charge transfer transition from oxo ligand p orbitals to empty orbitals derived from manganese(VII) d orbitals. Permanganate(VII) is a strong oxidizer, similar to perchlorate. It is therefore in common use in qualitative analysis that involves redox reactions (permanganometry). Although in theory permanganate is strong enough to oxidize water, this does not actually happen to any appreciable extent; apart from this, it is stable.
It is a useful reagent, but it is not very selective with organic compounds. Potassium permanganate is used as a disinfectant and water treatment additive in aquaculture.
Manganates(VII) are not very stable thermally. For instance, potassium permanganate decomposes at 230 °C to potassium manganate and manganese dioxide, releasing oxygen gas:
2 KMnO4 → K2MnO4 + MnO2 + O2
A permanganate can oxidize an amine to a nitro compound, an alcohol to a ketone, an aldehyde to a carboxylic acid, a terminal alkene to a carboxylic acid, oxalic acid to carbon dioxide, and an alkene to a diol. This list is not exhaustive.
In alkene oxidations one intermediate is a cyclic Mn(V) species:
Compounds
Ammonium permanganate, NH4MnO4
Barium permanganate, Ba(MnO4)2
Calcium permanganate, Ca(MnO4)2
Lithium permanganate, LiMnO4
Potassium permanganate, KMnO4
Sodium permanganate, NaMnO4
Silver permanganate, AgMnO4
Safety
The fatal dose of permanganate is about 10 g, and several fatal intoxications have occurred. The strong oxidative effect leads to necrosis of the mucous membrane. For example, the esophagus is affected if the permanganate is swallowed. Only a limited amount is absorbed by the intestines, but this small amount shows severe effects on the kidneys and on the liver.
See also
Perchlorate, a similar ion with a chlorine(VII) center
Permanganate index
Chromate, which is isoelectronic with permanganate
Pertechnetate
References
Transition metal oxyanions
Oxometallates | Permanganate | [
"Chemistry"
] | 1,069 | [
"Oxidizing agents",
"Permanganates"
] |
1,729,337 | https://en.wikipedia.org/wiki/Planckian%20locus | In physics and color science, the Planckian locus or black body locus is the path or locus that the color of an incandescent black body would take in a particular chromaticity space as the blackbody temperature changes. It goes from deep red at low temperatures through orange, yellowish, white, and finally bluish white at very high temperatures.
A color space is a three-dimensional space; that is, a color is specified by a set of three numbers (the CIE coordinates X, Y, and Z, for example, or other values such as hue, colorfulness, and luminance) which specify the color and brightness of a particular homogeneous visual stimulus. A chromaticity is a color projected into a two-dimensional space that ignores brightness. For example, the standard CIE XYZ color space projects directly to the corresponding chromaticity space specified by the two chromaticity coordinates known as x and y, making the familiar chromaticity diagram shown in the figure. The Planckian locus, the path that the color of a black body takes as the blackbody temperature changes, is often shown in this standard chromaticity space.
Planckian locus in the XYZ color space
In the CIE XYZ color space, the three coordinates defining a color are given by X, Y, and Z:
X = ∫ M(λ,T) X(λ) dλ
Y = ∫ M(λ,T) Y(λ) dλ
Z = ∫ M(λ,T) Z(λ) dλ
where M(λ,T) is the spectral radiant exitance of the light being viewed, X(λ), Y(λ) and Z(λ) are the color matching functions of the CIE standard colorimetric observer, and λ is the wavelength. The Planckian locus is determined by substituting into the above equations the black body spectral radiant exitance, which is given by Planck's law:
M(λ,T) = (c1 / λ^5) · 1 / (exp(c2 / (λT)) − 1)
where:
c1 = 2πhc² is the first radiation constant
c2 = hc/k is the second radiation constant
and
M is the black body spectral radiant exitance (power per unit area per unit wavelength: watt per square meter per meter (W/m3))
T is the temperature of the black body
h is the Planck constant
c is the speed of light
k is the Boltzmann constant
This will give the Planckian locus in CIE XYZ color space. If these coordinates are XT, YT, ZT where T is the temperature, then the CIE chromaticity coordinates will be
xT = XT / (XT + YT + ZT)
yT = YT / (XT + YT + ZT)
Note that in the above formula for Planck's Law, you might as well use c1L = 2hc² (the first radiation constant for spectral radiance) instead of c1 (the “regular” first radiation constant), in which case the formula would give the spectral radiance L(λ,T) of the black body instead of the spectral radiant exitance M(λ,T). However, this change only affects the absolute values of XT, YT and ZT, not the values relative to each other. Since XT, YT and ZT are usually normalized to YT = 1 (or YT = 100) and are normalized when xT and yT are calculated, the absolute values of XT, YT and ZT do not matter. For practical reasons, c1 might therefore simply be replaced by 1.
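A sketch of this computation in Python is given below (illustrative only). It assumes a table cmf of the CIE 1931 color matching functions is available, sampled as rows of (wavelength in nm, X(λ), Y(λ), Z(λ)); the file name in the commented-out loading step is hypothetical. As discussed above, the overall constant c1 cancels in the normalization to chromaticity coordinates.

```python
import numpy as np

h = 6.62607015e-34      # Planck constant (J s)
c = 2.99792458e8        # speed of light (m/s)
k = 1.380649e-23        # Boltzmann constant (J/K)
c1 = 2 * np.pi * h * c**2
c2 = h * c / k

def planck_exitance(wavelength_m, T):
    """Black body spectral radiant exitance M(lambda, T), in W m^-3."""
    return c1 / wavelength_m**5 / np.expm1(c2 / (wavelength_m * T))

def planckian_chromaticity(T, cmf):
    lam = cmf[:, 0] * 1e-9                                  # nm -> m
    M = planck_exitance(lam, T)
    # Integrate M against each color matching function, then normalize.
    X, Y, Z = (np.trapz(M * cmf[:, i], lam) for i in (1, 2, 3))
    return X / (X + Y + Z), Y / (X + Y + Z)

# cmf = np.loadtxt("cie_1931_2deg.csv", delimiter=",")      # hypothetical data file
# print(planckian_chromaticity(6500.0, cmf))                # roughly (0.313, 0.324)
```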
Approximation
The Planckian locus in xy space is depicted as a curve in the chromaticity diagram above. While it is possible to compute the CIE xy co-ordinates exactly given the above formulas, it is faster to use approximations. Since the mired scale changes more evenly along the locus than the temperature itself, it is common for such approximations to be functions of the reciprocal temperature. Kim et al. use a cubic spline:
The Planckian locus can also be approximated in the CIE 1960 color space, which is used to compute CCT and CRI, using the following expressions:
This approximation is accurate to within and for . Alternatively, one can use the chromaticity (x, y) coordinates estimated from above to derive the corresponding (u, v), if a larger range of temperatures is required.
The inverse calculation, from chromaticity co-ordinates (x, y) on or near the Planckian locus to correlated color temperature, is discussed in the article on correlated color temperature.
Correlated color temperature
The mathematical procedure for determining the correlated color temperature involves finding the closest point to the light source's white point on the Planckian locus. Since the CIE's 1959 meeting in Brussels, the Planckian locus has been computed using the CIE 1960 color space, also known as MacAdam's (u,v) diagram. Today, the CIE 1960 color space is deprecated for other purposes:
Owing to the perceptual inaccuracy inherent to the concept, it suffices to calculate to within 2 K at lower CCTs and 10 K at higher CCTs to reach the threshold of imperceptibility.
International Temperature Scale
The Planckian locus is derived by determining the chromaticity values of a Planckian radiator using the standard colorimetric observer. The relative spectral power distribution (SPD) of a Planckian radiator follows Planck's law, and depends on the second radiation constant, c2. As measuring techniques have improved, the General Conference on Weights and Measures has revised its estimate of this constant, with the International Temperature Scale (and briefly, the International Practical Temperature Scale). These successive revisions caused a shift in the Planckian locus and, as a result, the correlated color temperature scale. Before ceasing publication of standard illuminants, the CIE worked around this problem by explicitly specifying the form of the SPD, rather than making references to black bodies and a color temperature. Nevertheless, it is useful to be aware of previous revisions in order to be able to verify calculations made in older texts:
= (ITS-27). Note: Was in effect during the standardization of Illuminants A, B, C (1931), however the CIE used the value recommended by the U.S. National Bureau of Standards, 1.435 × 10−2
= (IPTS-48). In effect for Illuminant series D (formalized in 1967).
= (ITS-68), (ITS-90). Often used in recent papers.
= (CODATA 2010)
= (CODATA 2014)
= (CODATA 2018). Current value, as of 2020. The 2019 revision of the SI fixed the Boltzmann constant to an exact value. Since the Planck constant and the speed of light were already fixed to exact values, that means that c2 is now an exact value as well. Note that ... doesn't indicate a repeating fraction; it merely means that of this exact value only the first ten digits are shown.
See also
Ultraviolet catastrophe
References
External links
Numerical table of color temperature and the corresponding xy and sRGB coordinates for both the 1931 and 1964 CMFs, by Mitchell Charity.
Color space | Planckian locus | [
"Mathematics"
] | 1,427 | [
"Color space",
"Space (mathematics)",
"Metric spaces"
] |
1,729,464 | https://en.wikipedia.org/wiki/6V6 | The 6V6 is a beam-power tetrode vacuum tube. The first of this family of tubes to be introduced was the 6V6G by Ken-Rad Tube & Lamp Corporation in late 1936, with the availability by December of both Ken-Rad and Raytheon 6V6G tubes announced. It is still in use in audio applications, especially electric guitar amplifiers.
Following the introduction in July 1936 of the 6L6, the potential of the scaled down version that became the 6V6 was soon realized. The lower-powered 6V6 was better-suited for average home use, and became common in the audio-output-stages of "farmhouse" table-top radios, where power pentodes such as the 6F6 had previously been used. The 6V6 required less heater power and produced less distortion than the 6F6, while yielding higher output in both single-ended and push-pull configurations.
Although the 6V6 was originally designed especially for use in automobile radios, the clip-in Loctal base 7C5, from early 1939, or the lower heater current 12V6GT, both with the identical characteristics to the 6V6, but with the smaller T-9 glass envelope, soon became the tubes of choice for many automotive radios manufacturers. Additionally, the 6V6 had applications in portable battery-operated radios.
The data sheet information supplied by the tube manufacturers' design-centers list the typical operation of an audio output stage for a single 6V6 as producing about 5W of continuous power, and a push-pull-pair about 14W. Amplifier manufacturers soon realized that the tube was capable of being used at ratings above the recommended maximums, and guitar amplifiers with 400V on the plates of a pair of 6V6GTA claim to produce an output power of 20W RMS at 5%THD with 40W Peak Music Power, and with 490V on the plates, as much as 30 W RMS.
History
Following the 6V6G, RMA Release #96 - 09 Nov. 1936, sponsored by Ken-Rad Tube & Lamp Corporation, with the ST 14 shouldered glass envelope, the 6V6 was announced with a metal envelope in January 1937 by Hygrade Sylvania Corporation. The RMA Release #125 – 03 Jan. 1938, sponsored by RCA, for the 6V6 tube has led to some confusion as to the origins of the 6V6. The 6V6G but not the 6V6 is in the RCA manual RC-13 from July 1937, but the 6V6 is to be found in the 1937 tube manuals of other manufacturers, such as Raytheon.
Tube manufacturers were keen to promote the superiority of the metal tube construction that was introduced on April 1, 1935. Large quantities of the 6V6 tube were produced in the following decade, many as military supply JAN tubes. The metal and glass versions were held to closely the same retail price level for the first few years of their production. The introduction of the 6V6GT, RMA Release #201 – 10 July 1939, was sponsored by Hytron Corporation. By 1940, the 6V6G production was largely superseded by this smaller "GT" T-9 glass envelope. On April 17, 1942, the War Production Board ordered radio tube manufacturers to discontinue within seven days the production for civilian use of 349 of the 710 types of radio tubes on the market, amongst these were the 6V6G and 6V6GX. By 1943, the price of the metal version was almost twice that of the GT version, and this proportional difference in price seems to have remained constant, right through to the end of the 1970s. The 6V6GTA – RMA Release #1681 – 2 July 1956, sponsored by Hygrade Sylvania Corporation, has a controlled warm-up period.
The various different NOS (new old stock) tubes of the 6V6 family, depending on manufacturer, model, series, strength and condition, will vary enormously in scarcity and therefore usually in price. The metal NOS 6V6 tube, once costing almost twice the price of its now highly valued glass enveloped counterparts, is now considered to be fairly common, and is usually the cheapest NOS tube available, with many current production tubes costing more than its 60 to 80 year older classic predecessor. In the final years of U.S. production, several of the major manufacturers switched to using the so-called "coin" based GT bulb.
Current use
Now, over eighty-six years after its introduction, and still retaining its original characteristics, the 6V6 has one of the longest active lifetimes of any electronic component, having never been out of production in all this long period of time. Although historically widely used in all manner of electronic goods, many of which are still in service, it is in guitar amplifiers where its use has become archetypal. Not only are there very many existing amplifiers in regular use that rely on the 6V6, with contemporary reproductions of the more iconic models still being made, modern designers are still keen to develop new creations that rely on its use.
Generally speaking, 6V6 tubes are sturdy and can be operated beyond their published specifications (the Soviet made 6P6S, and early Chinese 6V6 versions were not as permissive of exceeding design limits, although current production has improved). Because of this, the 6V6 soon proved itself to be suitable for use in consumer-market musical instrument amplifiers, particularly combo-style guitar amps such as the Gibson GA-40, and the Fender Amplifiers; Champ, Princeton, and Deluxe, some of which drive their 6V6s well in excess of the datasheet specified maximum rating. This ongoing demand encourages Chinese, Slovakian and Russian tube factories not only to keep the 6V6 in production to this day, but to further develop the supply.
The 6V6 Family and equivalents
6V6G – Glass "Shouldered Tube" ST envelope.
6V6GX – Glass "Shouldered Tube" ST envelope, Ceramic Base.
6V6 – Metal jacketed envelope.
The metal envelope of 6V6 is connected to pin 1 of the base, and was normally used as a ground.
Pin 1 of the other members of the 6V6 family of tubes are usually not internally connected, although some may have the gray RF shield connected.
6V6GT – smaller "Glass Tube" T-9 envelope.
6V6GTA – with a controlled warm-up period.
6V6GTY – a GT with a low loss micanol brown base.
6V6GTX = HY6V6GTX – a GT "Bantam" selected for high gain, with a ceramic base, 15W plate dissipation rating, produced for a limited period around 1941 by Hytron.
5871 – Ruggedized 6V6GT for operation under severe vibrations found in aircraft and similar applications. Radio Valve Co. of Canada Ltd., 1954 : RMA #859A.
5992 – Premium, ruggedized 6V6GT with heater current raised to 600mA. Bendix and GE known manufacturers.
7408 – 6V6GT with additional zero-bias characteristics.
6V6S – A modern production, large plated tube, heater current 500mA, with a higher plate and screen voltage rating. Made by JJ Electronic.
6V6GT(A)(B)-STR – Modern production valve, STR signifying "Special Test Requirement." Claiming to be heavy duty, suitable for high plate voltage.
Military specification 6V6 tubes and their equivalents
American military services contracted tubes from many sources through the U.S. War Department. They used a Joint Army-Navy Nomenclature System (AN System. JAN) Most of these tubes bear the JAN marking as well as a VT number (VT = vacuum tube).
VT-107 – Metal 6V6.
6V6Y – Metal, with a low loss micanol brown base.
VT-107A – 6V6GT.
VT-107B – 6V6G.
British Ministry Of Supply valves for the Military & other governmental agencies have a CV number (CV = common valve). Old stores reference numbers with the prefix ZA are also sometimes used. Supplied by Mullard & Brimar.
CV509 = ZA5306 – 6V6G
CV510 – 6V6
CV511 – 6V6GT & 6V6GTY
The British GPO also used their own VT (Valve - Thermionic) numbering system
VT196 = CV509 = 6V6G – General Post Office (GPO)
Swedish Military supplier Bofors, had tubes made by Standard Radiofabrik (SRF) at the Ulvsunda plant in Stockholm.
5S2D - Premium, ruggedized 6V6GT with triple micas, low loss micanol brown base.
Other tubes cited as being equivalent
6P6S (6П6С in Cyrillic.) Also 6П2 - In the Soviet Union, a version of the 6V6GT was produced from the late 1940s which appears to be a close copy of the 1940s Sylvania-issue 6V6GT – initially under its American designation (in both Latin and Cyrillic lettering), but later the USSR adopted its own system of designations.
6P11S (6П11С in Cyrillic.) = 6П6С-Y2 - Military consignment, ruggedized 6P6S. OTK tested. Higher Voltage ratings.
1515 - Premium version of the 6P6S (6П6С).
6P6P - Chinese version of the 6V6GT made by Shuguang, but now obsolete, different from Shuguang's current production 6V6GT.
3106 – East German production 6V6GT from OSW (Oberspreewerk Berlin, later HF, then WF with 6П6С & 6V6 marking). Open, split-plate design.
6AY5 – East German production 6V6GT
VT227 = 7184 – Cited equivalent, made by Ken-Rad, inadequate documentation, no RMA registration.
WT-210-00-82 – Cited equivalent, inadequate documentation, no RMA registration.
WTT-123 – Cited equivalent, inadequate documentation, no RMA registration.
6V6HD – NOT a 6V6, but relabeled Sovtek 6L6GA / 6P3S.
Similar tubes
These tubes have very similar characteristics to the 6V6, but differ either in the heater rating, or use a different socket and pin-out
5V6GT - Same as the 6V6GT, but with different heater ratings - 4.7V, 0.6A, controlled 11 sec. warm-up time.
12V6GT - Same as the 6V6GT, but with different heater ratings - 12.6V, 0.225A, suitable for automotive receiver applications.
7C5 - Clip-in Loctal B8G base, T-9 Bulb. Raytheon - 1939 RMA #162. Other versions of this tube are 7C5-TV, 2C48, 2C50, N148, CV885
7C5LT = CV886 - Same as the 7C5, except with a small wafer Octalox base 8-pin T-9 Bulb. RCA – 1940 RMA #234.
14C5 - Same as 7C5 but with 12.6V Heater.
6BW6 = CV2136 - British-made miniature-tube 12W equivalent of the 6V6, 9-pin base B9A.
6061 – Premium, ruggedized 6BW6. Brimar STC London. 1951: RMA #965
CV4043 - British-made miniature-tube 13.5W equivalent of the 6BW6, 9-pin base B9A.
6AQ5 - slightly lower specifications to the 6V6GT, miniature glass envelope, 7-pin base B7G. Other equivalents of this tube are the CV1862, EL90, 6005, 6095, 6669, 6928, BPM 04, CK-6005, M8249, N727, 6L31
12AQ5 - Same as the 6AQ5, but with different heater ratings.
6P1P (6П1П in Cyrillic.) - 9-pin B9A Noval base Soviet version, not identical characteristics, but very close
6973 - US-version, 9 pin Noval base tube, higher plate voltage ratings, intended for high fidelity output applications.
6CM6 - 9 pin Noval base tube, equivalent type primarily intended for vertical deflection amplifiers in television receivers.
12CM6 - Same as the 6CM6, but with different heater ratings.
12AB5 - 9 pin Noval base tube with 12V heater, suitable for automotive receiver applications. Equivalent to the 7061
The pentode EL84/6BQ5 - 9 pin Noval base tube, that although different enough from the 6V6 not to justify rating it as an equivalent, because of its popularity and ready availability, plus having a close-enough similarity to make it possible, if bias is altered, adapters have been developed commercially to allow an amplifier designed for 6V6 use to accept the noval-based EL84 tube. Likewise, the inverse adapter is also available.
See also
List of vacuum tubes
KT66
KT88
6L6
6CA7/EL34/KT77
807
Beam tetrode
References
General references
Stokes, John. 70 Years of Radio Tubes and Valves. NY: Vestal Press, 1982.
Radio News magazine, March 1937, page 567, "The Radio Workshop."
Radio-Craft magazine, October 1937, page 204, "New Tubes for the Radio Experimenter."
RMA (Radio Manufacturers Association) "Electron Tube Registration List"
Fender Musical Instruments, Amplifier Owners Manual's, 1983.
Jim Kelley Amplifiers, Amplifier Owners Manual.
Guitar Player Magazine, June 1983.
O'Connor, Kevin. T.U.T. Vol.5. Powerpress Publishing, 2004.
Receiving Tube Manual, RC-20. RCA corporation. 1964.
External links
Duncan's Amps TDSL.
Tube Collector's Association website.
American Radio History website
Several tube datasheets.
Vacuum tubes
Guitar amplification tubes | 6V6 | [
"Physics"
] | 3,040 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
1,729,818 | https://en.wikipedia.org/wiki/Gabriel%20synthesis | The Gabriel synthesis is a chemical reaction that transforms primary alkyl halides into primary amines. Traditionally, the reaction uses potassium phthalimide. The reaction is named after the German chemist Siegmund Gabriel.
The Gabriel reaction has been generalized to include the alkylation of sulfonamides and imides, followed by deprotection, to obtain amines (see Alternative Gabriel reagents).
The alkylation of ammonia is often an unselective and inefficient route to amines. In the Gabriel method, phthalimide anion is employed as a surrogate of H2N−.
Traditional Gabriel synthesis
In this method, the sodium or potassium salt of phthalimide is N-alkylated with a primary alkyl halide to give the corresponding N-alkylphthalimide.
Upon workup by acidic hydrolysis the primary amine is liberated as the amine salt. Alternatively the workup may be via the Ing–Manske procedure, involving reaction with hydrazine. This method produces a precipitate of phthalhydrazide (C6H4(CO)2N2H2) along with the primary amine:
C6H4(CO)2NR + N2H4 → C6H4(CO)2N2H2 + RNH2
Gabriel synthesis generally fails with secondary alkyl halides.
The first technique often produces low yields or side products. Separation of phthalhydrazide can be challenging. For these reasons, other methods for liberating the amine from the phthalimide have been developed. Even with the use of the hydrazinolysis method, the Gabriel method suffers from relatively harsh conditions.
Alternative Gabriel reagents
Many alternative reagents have been developed to complement the use of phthalimides. Most such reagents (e.g. the sodium salt of saccharin and di-tert-butyl-iminodicarboxylate) are electronically similar to the phthalimide salts, consisting of imido nucleophiles. In terms of their advantages, these reagents hydrolyze more readily, extend the reactivity to secondary alkyl halides, and allow the production of secondary amines.
See also
Robinson–Gabriel synthesis – also developed by Siegmund Gabriel
Delépine reaction – primary amines from benzyl or alkyl halides
References
External links
An animation of the mechanism of the Gabriel synthesis
Substitution reactions
Name reactions | Gabriel synthesis | [
"Chemistry"
] | 526 | [
"Name reactions"
] |
1,729,907 | https://en.wikipedia.org/wiki/Hofmann%20elimination | Hofmann elimination is an elimination reaction of an amine to form alkenes. The least stable alkene (the one with the fewest substituents on the carbons of the double bond), called the Hofmann product, is formed. This tendency, known as the Hofmann alkene synthesis rule, is in contrast to usual elimination reactions, where Zaitsev's rule predicts the formation of the most stable alkene. It is named after its discoverer, August Wilhelm von Hofmann.
The reaction starts with the formation of a quaternary ammonium iodide salt by treatment of the amine with excess methyl iodide (exhaustive methylation), followed by treatment with silver oxide and water to form a quaternary ammonium hydroxide. When this salt is decomposed by heat, the Hofmann product is preferentially formed due to the steric bulk of the leaving group causing the hydroxide to abstract the more easily accessible hydrogen.
In the Hofmann elimination, the least substituted alkene is typically favored due to intramolecular steric interactions. The quaternary ammonium group is large, and interactions with alkyl groups on the rest of the molecule are undesirable. As a result, the conformation necessary for the formation of the Zaitsev product is less energetically favorable than the conformation required for the formation of the Hofmann product. As a result, the Hofmann product is formed preferentially. The Cope elimination is very similar to the Hofmann elimination in principle, but occurs under milder conditions. It also favors the formation of the Hofmann product, and for the same reasons.
An example of a Hofmann elimination (not involving a contrast between a Zaitsev product and a Hofmann product) is the synthesis of trans-cyclooctene. The trans isomer is selectively trapped as a complex with silver nitrate.
In a related chemical test, known as the Herzig–Meyer alkimide group determination, a tertiary amine with at least one methyl group and lacking a beta-proton is allowed to react with hydrogen iodide to the quaternary ammonium salt which when heated degrades to methyl iodide and the secondary amine.
See also
Cope elimination
Emde degradation
References
External links
An animation of the mechanism of the Hofmann elimination
Elimination reactions
Olefination reactions
Name reactions | Hofmann elimination | [
"Chemistry"
] | 538 | [
"Name reactions",
"Olefination reactions",
"Organic reactions"
] |
1,730,328 | https://en.wikipedia.org/wiki/Superconducting%20quantum%20computing | Superconducting quantum computing is a branch of solid state physics and quantum computing that implements superconducting electronic circuits using superconducting qubits as artificial atoms, or quantum dots. For superconducting qubits, the two logic states are the ground state and the excited state, denoted |g⟩ and |e⟩ respectively. Research in superconducting quantum computing is conducted by companies such as Google, IBM, IMEC, BBN Technologies, Rigetti, and Intel. Many recently developed QPUs (quantum processing units, or quantum chips) use superconducting architecture.
Up to 9 fully controllable qubits have been demonstrated in a 1D array, and up to 16 in a 2D architecture. In October 2019, the Martinis group, partnered with Google, published an article demonstrating quantum supremacy, using a chip composed of 53 superconducting qubits.
Background
Classical computation models rely on physical implementations consistent with the laws of classical mechanics. Classical descriptions are accurate only for specific systems consisting of a relatively large number of atoms. A more general description of nature is given by quantum mechanics. Quantum computation studies quantum phenomena applications beyond the scope of classical approximation, with the purpose of performing quantum information processing and communication. Various models of quantum computation exist, but the most popular models incorporate concepts of qubits and quantum gates (or gate-based superconducting quantum computing).
Superconductors are implemented due to the fact that at low temperatures they have infinite conductivity and zero resistance. Each qubit is built from superconducting circuits containing an LC circuit: a capacitor and an inductor.
Superconducting capacitors and inductors are used to produce a resonant circuit that dissipates almost no energy, as heat can disrupt quantum information. The superconducting resonant circuits are a class of artificial atoms that can be used as qubits. Theoretical and physical implementations of quantum circuits are widely different. Implementing a quantum circuit has its own set of challenges and must abide by DiVincenzo's criteria, conditions proposed by theoretical physicist David P. DiVincenzo: a set of criteria for the physical implementation of quantum computing, where the initial five criteria ensure that the quantum computer is in line with the postulates of quantum mechanics and the remaining two pertain to the relaying of this information over a network.
We map the ground and excited states of these atoms to the 0 and 1 states, as these are discrete and distinct energy values and therefore in line with the postulates of quantum mechanics. In such a construction, however, an electron can jump to multiple other energy states and not be confined to the desired excited state; therefore, it is imperative that the system be limited so that it is affected only by photons whose energy matches the difference between the ground state and the excited state. However, this leaves one major issue: we require uneven spacing between our energy levels to prevent photons with the same energy from causing transitions between neighboring pairs of states. Josephson junctions are superconducting elements with a nonlinear inductance, which is critically important for qubit implementation. The use of this nonlinear element in the resonant superconducting circuit produces uneven spacings between the energy levels.
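A numerical sketch of this effect is given below. The charging energy EC and Josephson energy EJ are assumed, illustrative values (and the gate-charge offset is neglected); diagonalizing a transmon-style Hamiltonian in the charge basis shows that the 1→2 transition frequency differs from the 0→1 frequency by roughly −EC, so the two transitions can be driven separately:

```python
import numpy as np

EC, EJ = 0.25, 12.5          # charging and Josephson energies (GHz), assumed values
ncut = 20                    # charge-basis cutoff: n runs from -ncut to +ncut
n = np.arange(-ncut, ncut + 1)

# H = 4 EC n^2 - EJ cos(phi); cos(phi) couples neighboring charge states.
H = np.diag(4 * EC * n.astype(float)**2)
H += -EJ / 2 * (np.eye(len(n), k=1) + np.eye(len(n), k=-1))

E = np.linalg.eigvalsh(H)                 # sorted energy levels
f01, f12 = E[1] - E[0], E[2] - E[1]
print(f01, f12, f12 - f01)                # f12 < f01: anharmonicity of about -EC
```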
Qubits
A qubit is a generalization of a bit (a system with two possible states) capable of occupying a quantum superposition of both states. A quantum gate, on the other hand, is a generalization of a logic gate describing the transformation of one or more qubits once a gate is applied given their initial state. Physical implementation of qubits and gates is challenging for the same reason that quantum phenomena are difficult to observe in everyday life given the minute scale on which they occur. One approach to achieving quantum computers is by implementing superconductors whereby quantum effects are macroscopically observable, though at the price of extremely low operation temperatures.
Superconductors
Unlike typical conductors, superconductors possess a critical temperature at which resistivity plummets to nearly zero and conductivity is drastically increased. In superconductors, the basic charge carriers are pairs of electrons (known as Cooper pairs), rather than single fermions as found in typical conductors. Cooper pairs are loosely bound and have an energy state lower than that of Fermi Energy. Electrons forming Cooper pairs possess equal and opposite momentum and spin so that the total spin of the Cooper pair is an integer spin. Hence, Cooper pairs are bosons. Two such superconductors which have been used in superconducting qubit models are niobium and tantalum, both d-band superconductors.
Bose–Einstein condensates
Once cooled to nearly absolute zero, a collection of bosons collapse into their lowest energy quantum state (the ground state) to form a state of matter known as Bose–Einstein condensate. Unlike fermions, bosons may occupy the same quantum energy level (or quantum state) and do not obey the Pauli exclusion principle. Classically, Bose-Einstein Condensate can be conceptualized as multiple particles occupying the same position in space and having equal momentum. Because interactive forces between bosons are minimized, Bose-Einstein Condensates effectively act as a superconductor. Thus, superconductors are implemented in quantum computing because they possess both near infinite conductivity and near zero resistance. The advantages of a superconductor over a typical conductor, then, are twofold in that superconductors can, in theory, transmit signals nearly instantaneously and run infinitely with no energy loss. The prospect of actualizing superconducting quantum computers becomes all the more promising considering NASA's recent development of the Cold Atom Lab in outer space where Bose-Einstein Condensates are more readily achieved and sustained (without rapid dissipation) for longer periods of time without the constraints of gravity.
Electrical circuits
At each point of a superconducting electronic circuit (a network of electrical elements), the condensate wave function describing charge flow is well-defined by some complex probability amplitude. In typical conductor electrical circuits, this same description is true for individual charge carriers except that the various wave functions are averaged in macroscopic analysis, making it impossible to observe quantum effects. The condensate wave function becomes useful in allowing design and measurement of macroscopic quantum effects. Similar to the discrete atomic energy levels in the Bohr model, only discrete numbers of magnetic flux quanta can penetrate a superconducting loop. In both cases, quantization results from complex amplitude continuity. Differing from microscopic implementations of quantum computers (such as atoms or photons), parameters of superconducting circuits are designed by setting (classical) values to the electrical elements composing them such as by adjusting capacitance or inductance.
To obtain a quantum mechanical description of an electrical circuit, a few steps are required. Firstly, all electrical elements must be described by the condensate wave function amplitude and phase rather than by closely related macroscopic current and voltage descriptions used for classical circuits. For instance, the square of the wave function amplitude at any arbitrary point in space corresponds to the probability of finding a charge carrier there. Therefore, the squared amplitude corresponds to a classical charge distribution. The second requirement to obtain a quantum mechanical description of an electrical circuit is that generalized Kirchhoff's circuit laws are applied at every node of the circuit network to obtain the system's equations of motion. Finally, these equations of motion must be reformulated to Lagrangian mechanics such that a quantum Hamiltonian is derived describing the total energy of the system.
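For a purely linear superconducting LC resonator, the quantized circuit reduces to a quantum harmonic oscillator whose levels are uniformly spaced, which is why a nonlinear element is needed before the circuit can serve as a qubit. A minimal numerical sketch, using assumed (illustrative) values of inductance and capacitance, is:

```python
import numpy as np

hbar = 1.054571817e-34  # reduced Planck constant, J*s
h = 6.62607015e-34      # Planck constant, J*s

# Illustrative (assumed) circuit values, roughly in the range used for superconducting circuits
L = 10e-9    # inductance, henries
C = 0.3e-12  # capacitance, farads

omega = 1.0 / np.sqrt(L * C)   # resonant angular frequency of the LC circuit
f = omega / (2 * np.pi)        # resonance frequency in hertz

# A purely linear LC resonator is a quantum harmonic oscillator:
# its energy levels E_n = hbar*omega*(n + 1/2) are uniformly spaced,
# so no two levels can be addressed in isolation -- hence the need for
# the nonlinear Josephson inductance described below.
levels = hbar * omega * (np.arange(5) + 0.5)
spacings = np.diff(levels) / h   # every gap is the same, equal to f

print(f"resonance frequency ~ {f / 1e9:.2f} GHz")
print("level spacings (GHz):", spacings / 1e9)
```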
Technology
Manufacturing
Superconducting quantum computing devices are typically designed in the radio-frequency spectrum, cooled in dilution refrigerators below 15 mK and addressed with conventional electronic instruments, e.g. frequency synthesizers and spectrum analyzers. Typical dimensions fall in the range of micrometers, with sub-micrometer resolution, allowing for convenient design of the system Hamiltonian with well-established integrated circuit technology. Manufacturing superconducting qubits follows a process involving lithography, deposition of metal, etching, and controlled oxidation. Manufacturers continue to improve the lifetime of superconducting qubits and have made significant improvements since the early 2000s.
Josephson junctions
One distinguishing attribute of superconducting quantum circuits is the use of Josephson junctions, an electrical element which does not exist in normal conductors. A junction is a weak connection between two leads of (in this case superconducting) wire on either side of a thin layer of insulator material only a few atoms thick, usually implemented using a shadow evaporation technique. The resulting Josephson junction device exhibits the Josephson effect, whereby the junction produces a supercurrent. An image of a single Josephson junction is shown to the right. The condensate wave functions on the two sides of the junction are only weakly correlated, meaning that they are allowed to have different superconducting phases. This contrasts with a continuous superconducting wire, for which the wave function must be continuous. Current flow through the junction occurs by quantum tunneling, seeming to "tunnel" instantaneously from one side of the junction to the other. This tunneling phenomenon is unique to quantum systems. Thus, quantum tunneling is used to create a nonlinear inductance, essential for qubit design as it allows anharmonic oscillators whose energy levels are discretized (or quantized) with nonuniform spacing between energy levels. In contrast, the quantum harmonic oscillator cannot be used as a qubit, as there is no way to address only two of its states, given that the spacing between every energy level and the next is exactly the same.
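A minimal numerical sketch of this idea is to diagonalize the standard single-junction qubit Hamiltonian, $H = 4E_C(\hat n - n_g)^2 - \tfrac{E_J}{2}\sum_n\big(|n\rangle\langle n{+}1| + \mathrm{h.c.}\big)$, in the Cooper-pair charge basis; the resulting level spacings are visibly nonuniform. The energy ratio below is an assumed, transmon-like value chosen only for illustration:

```python
import numpy as np

def qubit_levels(EC, EJ, ng=0.0, ncut=20):
    """Diagonalize H = 4*EC*(n - ng)^2 - (EJ/2)*sum(|n><n+1| + h.c.)
    in the Cooper-pair charge basis n = -ncut..ncut (energies in units of EC)."""
    n = np.arange(-ncut, ncut + 1)
    H = np.diag(4.0 * EC * (n - ng) ** 2)
    off = -0.5 * EJ * np.ones(2 * ncut)
    H = H + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)

# Assumed transmon-like regime: EJ/EC ~ 50
E = qubit_levels(EC=1.0, EJ=50.0)
gaps = np.diff(E[:4])
print("lowest transition energies:", gaps)
print("anharmonicity (E12 - E01):", gaps[1] - gaps[0])  # nonzero => unequal spacing
```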
Qubit archetypes
The three primary superconducting qubit archetypes are the phase, charge and flux qubit. Many hybridizations of these archetypes exist, including the fluxonium, transmon, Xmon, and quantronium. For any qubit implementation, the logical quantum states are mapped to different states of the physical system (typically to discrete energy levels or their quantum superpositions). Each of the three archetypes possesses a distinct range of the Josephson energy to charging energy ratio. Josephson energy refers to the energy stored in Josephson junctions when current passes through, and charging energy is the energy required for one Cooper pair to charge the junction's total capacitance. The Josephson energy can be written as
$$E_J = \frac{\Phi_0 I_c}{2\pi}\left[1 - \cos\varphi\right],$$
where $I_c$ is the critical-current parameter of the Josephson junction, $\Phi_0 = h/2e$ is the (superconducting) flux quantum, and $\varphi$ is the phase difference across the junction. Notice that the term $\cos\varphi$ indicates the nonlinearity of the Josephson junction. The charging energy is written as
$$E_C = \frac{(2e)^2}{2C_J},$$
where $C_J$ is the junction's capacitance and $e$ is the electron charge. Of the three archetypes, phase qubits allow the most Cooper pairs to tunnel through the junction, followed by flux qubits, with charge qubits allowing the fewest.
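As a rough worked example of these energy scales, one can plug in assumed junction parameters (a critical current of a few tens of nanoamperes and a capacitance of tens of femtofarads) and compute $E_J$, $E_C$ and their ratio:

```python
import numpy as np

e = 1.602176634e-19     # elementary charge, C
Phi0 = 2.067833848e-15  # superconducting flux quantum h/(2e), Wb
h = 6.62607015e-34      # Planck constant, J*s

# Assumed junction parameters, picked only for illustration
Ic = 30e-9   # critical current, A
CJ = 80e-15  # junction capacitance, F

EJ = Phi0 * Ic / (2 * np.pi)   # Josephson energy scale (prefactor of the cos term)
EC = (2 * e) ** 2 / (2 * CJ)   # charging energy of one Cooper pair

print(f"EJ/h = {EJ / h / 1e9:.1f} GHz")
print(f"EC/h = {EC / h / 1e9:.2f} GHz")
print(f"EJ/EC ratio = {EJ / EC:.1f}")
```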
Phase qubit
The phase qubit possesses a Josephson to charge energy ratio on the order of magnitude . For phase qubits, energy levels correspond to different quantum charge oscillation amplitudes across a Josephson junction, where charge and phase are analogous to momentum and position, respectively, as in a quantum harmonic oscillator. Note that in this context the phase is the complex argument of the superconducting wave function (also known as the superconducting order parameter), not the phase between the different states of the qubit.
Flux qubit
The flux qubit (also known as a persistent-current qubit) possesses a Josephson to charging energy ratio on the order of magnitude . For flux qubits, the energy levels correspond to different integer numbers of magnetic flux quanta trapped in a superconducting ring.
Fluxonium
Fluxonium qubits are a specific type of flux qubit whose Josephson junction is shunted by a linear inductor of inductance $L \gg L_J$, where $L_J$ is the Josephson inductance of the junction. In practice, the linear inductor is usually implemented by a Josephson junction array composed of a large number of large-sized Josephson junctions connected in series. Under this condition, the Hamiltonian of a fluxonium can be written as:
$$H = 4E_C\,\hat{n}^2 - E_J\cos\hat{\varphi} + \frac{1}{2}E_L\left(\hat{\varphi} - \varphi_{\mathrm{ext}}\right)^2,$$
where $E_L$ is the inductive energy of the shunt inductor and $\varphi_{\mathrm{ext}}$ is set by the external magnetic flux bias.
One important property of the fluxonium qubit is the longer qubit lifetime at the half flux sweet spot, which can exceed 1 millisecond. Another crucial advantage of the fluxonium qubit biased at the sweet spot is the large anharmonicity, which allows fast local microwave control and mitigates spectral crowding problems, leading to better scalability.
Charge qubit
The charge qubit, also known as the Cooper pair box, possesses a Josephson to charging energy ratio on the order of magnitude . For charge qubits, different energy levels correspond to an integer number of Cooper pairs on a superconducting island (a small superconducting area with a controllable number of charge carriers). Indeed, the first experimentally realized qubit was the Cooper pair box, achieved in 1999.
Transmon
Transmons are a special type of qubit with a shunted capacitor specifically designed to mitigate noise. The transmon qubit model was based on the Cooper pair box (illustrated in the table above in row one column one). It was also the first qubit to demonstrate quantum supremacy. The increased ratio of Josephson to charge energy mitigates noise. Two transmons can be coupled using a coupling capacitor. For this 2-qubit system the Hamiltonian is written
,
where is current density and is surface charge density.
Xmon
The Xmon is very similar in design to a transmon, in that it originated from the planar transmon model. An Xmon is essentially a tunable transmon. The major distinguishing difference between transmon and Xmon qubits is that the Xmon qubit is grounded through one of its capacitor pads.
Gatemon
Another variation of the transmon qubit is the Gatemon. Like the Xmon, the Gatemon is a tunable variation of the transmon. The Gatemon is tunable via gate voltage.
Unimon
In 2022, researchers from IQM Quantum Computers, Aalto University, and VTT Technical Research Centre of Finland introduced a novel superconducting qubit known as the Unimon. A relatively simple qubit, the Unimon consists of a single Josephson junction shunted by a linear inductor (possessing an inductance that does not depend on current) inside a (superconducting) resonator. Unimons have increased anharmonicity and display faster operation times, resulting in lower susceptibility to noise errors. In addition to increased anharmonicity, other advantages of the Unimon qubit include decreased susceptibility to flux noise and complete insensitivity to dc charge noise.
In the table above, the three superconducting qubit archetypes are reviewed. In the first row, the qubit's electrical circuit diagram is presented. The second row depicts a quantum Hamiltonian derived from the circuit. Generally, the Hamiltonian is the sum of the system's kinetic and potential energy components (analogous to a particle in a potential well). For the Hamiltonians denoted, is the superconducting wave function phase difference across the junction, is the capacitance associated with the Josephson junction, and is the charge on the junction capacitance. For each potential depicted, only solid wave functions are used for computation. The qubit potential is indicated by a thick red line, and schematic wave function solutions are depicted by thin lines, lifted to their appropriate energy level for clarity.
Note that particle mass corresponds to an inverse function of the circuit capacitance and that the shape of the potential is governed by regular inductors and Josephson junctions. Schematic wave solutions in the third row of the table show the complex amplitude of the phase variable. Specifically, if a qubit's phase is measured while the qubit occupies a particular state, there is a non-zero probability of measuring a specific value only where the depicted wave function oscillates. All three rows are essentially different presentations of the same physical system.
Single qubits
The GHz energy gap between energy levels of a superconducting qubit is designed to be compatible with available electronic equipment, due to the terahertz gap (lack of equipment in the higher frequency band). The superconductor energy gap implies a top limit of operation below ~1THz beyond which Cooper pairs break, so energy level separation cannot be too high. On the other hand, energy level separation cannot be too small due to cooling considerations: a temperature of 1 K implies energy fluctuations of 20 GHz. Temperatures of tens of millikelvins are achieved in dilution refrigerators and allow qubit operation at a ~5 GHz energy level separation. Qubit energy level separation is frequently adjusted by controlling a dedicated bias current line, providing a "knob" to fine tune the qubit parameters.
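The arithmetic behind these design numbers is a direct unit conversion between temperature and frequency, $f = k_B T / h$; a short sketch with the values quoted above:

```python
# Rough arithmetic behind the "GHz energy gap" design window described above.
kB = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s

def thermal_freq_GHz(T_kelvin):
    """Frequency whose photon energy equals k_B * T."""
    return kB * T_kelvin / h / 1e9

print(f"1 K   -> ~{thermal_freq_GHz(1.0):.1f} GHz of thermal fluctuations")
print(f"20 mK -> ~{thermal_freq_GHz(0.020):.2f} GHz")

# A 5 GHz qubit therefore sits far above the thermal energy at dilution-refrigerator
# temperatures, but far below the ~THz superconducting gap.
print(f"5 GHz qubit 'temperature' h*f/kB = {5e9 * h / kB * 1e3:.0f} mK")
```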
Single qubit gates
A single qubit gate is achieved by rotation in the Bloch sphere. Rotations between different energy levels of a single qubit are induced by microwave pulses sent to an antenna or transmission line coupled to the qubit with a frequency resonant with the energy separation between levels. Individual qubits may be addressed by a dedicated transmission line or by a shared one if the other qubits are off resonance. The axis of rotation is set by quadrature amplitude modulation of microwave pulse, while pulse length determines the angle of rotation.
More formally (following the notation of ) for a driving signal
of frequency , a driven qubit Hamiltonian in a rotating wave approximation is
,
where is the qubit resonance and are Pauli matrices.
To implement a rotation about the axis, one can set and apply a microwave pulse at frequency for time . The resulting transformation is
.
This is exactly the rotation operator by angle about the axis in the Bloch sphere. A rotation about the axis can be implemented in a similar way. Showing the two rotation operators is sufficient for satisfying universality as every single qubit unitary operator may be presented as (up to a global phase which is physically inconsequential) by a procedure known as the decomposition. Setting results in the transformation
up to the global phase and is known as the NOT gate.
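A small numerical check of this statement, constructing the rotation operator directly from the Pauli matrix (assumed conventions: $\hbar = 1$, rotation $R_x(\theta) = e^{-i\theta\sigma_x/2}$):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X, i.e. the NOT gate

def Rx(theta):
    """Bloch-sphere rotation about x: exp(-i*theta*sx/2)."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sx

U = Rx(np.pi)
# Rx(pi) equals -i * X: the NOT gate up to a physically irrelevant global phase.
print(np.allclose(U, -1j * sx))   # True
```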
Coupling qubits
The ability to couple qubits is essential for implementing 2-qubit gates. Coupling two qubits can be achieved by connecting both to an intermediate electrical coupling circuit. The circuit may be either a fixed element (such as a capacitor) or be controllable (like the DC-SQUID). In the first case, decoupling qubits during the time the gate is switched off is achieved by tuning qubits out of resonance one from another, making the energy gaps between their computational states different. This approach is inherently limited to nearest-neighbor coupling since a physical electrical circuit must be laid out between connected qubits. Notably, D-Wave Systems' nearest-neighbor coupling achieves a highly connected unit cell of 8 qubits in Chimera graph configuration. Quantum algorithms typically require coupling between arbitrary qubits. Consequently, multiple swap operations are necessary, limiting the length of quantum computation possible before processor decoherence.
Quantum bus
Another method of coupling two or more qubits is by way of a quantum bus, by pairing qubits to this intermediary. A quantum bus is often implemented as a microwave cavity modeled by a quantum harmonic oscillator. Coupled qubits may be brought in and out of resonance with the bus and with each other, eliminating the nearest-neighbor limitation. The formalism used to describe this coupling is cavity quantum electrodynamics. In cavity quantum electrodynamics, qubits are analogous to atoms interacting with an optical photon cavity, with the difference that the relevant transitions lie in the GHz (rather than THz) regime of electromagnetic radiation. Resonant excitation exchange among these artificial atoms is potentially useful for direct implementation of multi-qubit gates. Following the dark state manifold, the Khazali–Mølmer scheme performs complex multi-qubit operations in a single step, providing a substantial shortcut to the conventional circuit model.
Cross resonant gate
One popular gating mechanism uses two qubits and a bus, each tuned to different energy level separations. Applying microwave excitation to the first qubit, with a frequency resonant with the second qubit, causes a rotation of the second qubit. Rotation direction depends on the state of the first qubit, allowing a controlled phase gate construction.
Following the notation of, the drive Hamiltonian describing the excited system through the first qubit driving line is formally written
,
where is the shape of the microwave pulse in time, is resonance frequency of the second qubit, are the Pauli matrices, is the coupling coefficient between the two qubits via the resonator, is qubit detuning, is stray (unwanted) coupling between qubits, and is the reduced Planck constant. The time integral over determines the angle of rotation. Unwanted rotations from the first and third terms of the Hamiltonian can be compensated for with single qubit operations. The remaining component, combined with single qubit rotations, forms a basis for the su(4) Lie algebra.
Geometric phase gate
Higher levels (outside of the computational subspace) of a pair of coupled superconducting circuits can be used to induce a geometric phase on one of the computational states of the qubits. This leads to an entangling conditional phase shift of the relevant qubit states. This effect has been implemented by flux-tuning the qubit spectra and by using selective microwave driving. Off-resonant driving can be used to induce differential ac-Stark shift, allowing the implementation of all-microwave controlled-phase gates.
Heisenberg interactions
The Heisenberg model of interactions, written as
$$H = \sum_{\langle i,j \rangle}\left(J_x\,\sigma^x_i\sigma^x_j + J_y\,\sigma^y_i\sigma^y_j + J_z\,\sigma^z_i\sigma^z_j\right),$$
serves as the basis for analog quantum simulation of spin systems and the primitive for an expressive set of quantum gates, sometimes referred to as fermionic simulation (or fSim) gates. In superconducting circuits, this interaction model has been implemented using flux-tunable qubits with flux-tunable coupling, allowing the demonstration of quantum supremacy. In addition, it can also be realized in fixed-frequency qubits with fixed-coupling using microwave drives. The fSim gate family encompasses arbitrary XY and ZZ two-qubit unitaries, including the iSWAP, the CZ, and the SWAP gates (see Quantum logic gate).
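A hedged numerical sketch of the XY (flip-flop) part of this interaction: exponentiating it for a quarter period produces an iSWAP-type gate, a member of the fSim family mentioned above. The coupling strength and sign conventions below are assumed for illustration:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# XY (flip-flop) part of the Heisenberg interaction between two qubits,
# with an assumed coupling strength g (angular-frequency units, hbar = 1).
g = 1.0
H_xy = g / 2 * (np.kron(sx, sx) + np.kron(sy, sy))

# Evolving under H_xy for a time t with g*t = pi/2 exchanges |01> and |10>
# while attaching factors of -i: an iSWAP-type gate (sign conventions vary).
t = np.pi / (2 * g)
U = expm(-1j * H_xy * t)
print(np.round(U, 3))
```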
Qubit readout
Architecture-specific readout, or measurement, mechanisms exist. Readout of a phase qubit is explained in the qubit archetypes table above. A flux qubit state is often read using an adjustable DC-SQUID magnetometer. States may also be measured using an electrometer. A more general readout scheme includes a coupling to a microwave resonator, where resonance frequency of the resonator is dispersively shifted by the qubit state. Multi-level systems (qudits) can be readout using electron shelving.
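For dispersive readout specifically, a back-of-the-envelope sketch using the simplest two-level dispersive result $\chi = g^2/\Delta$ (transmons have higher-level corrections) and assumed coupling and detuning values:

```python
import numpy as np

# Toy estimate of a dispersive readout shift, using the simplest two-level
# dispersive result chi = g^2 / Delta (transmon qubits have corrections to this).
g = 2 * np.pi * 100e6       # assumed qubit-resonator coupling, rad/s
Delta = 2 * np.pi * 1.5e9   # assumed qubit-resonator detuning, rad/s

chi = g ** 2 / Delta        # dispersive shift, rad/s
print(f"resonator frequency is pulled by about ±{chi / (2 * np.pi) / 1e6:.1f} MHz "
      "depending on the qubit state")
```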
DiVincenzo's criteria
DiVincenzo's criteria constitute a list of requirements for a physical system to be capable of implementing a logical qubit, and they are satisfied by the superconducting quantum computing implementation. Much of the current development effort in superconducting quantum computing aims to achieve interconnect, control, and readout in the third dimension with additional lithography layers. Although DiVincenzo's criteria as originally proposed consist of five criteria required for physically implementing a quantum computer, the more complete list consists of seven criteria, as it takes into account communication over a computer network capable of transmitting quantum information between computers, known as the "quantum internet". Therefore, the first five criteria ensure successful quantum computing, while the final two criteria allow for quantum communication.
A scalable physical system with well characterized qubits. "Well characterized" implies that the Hamiltonian must be well-defined, i.e. the energy eigenstates of the qubit should be quantifiable. A scalable system is self-explanatory: the ability to regulate a qubit should be extendable to many more qubits. Herein lies a major issue quantum computers face: implementing more qubits leads to a steep increase in cost and other physical requirements relative to the enhanced speed they may offer. As superconducting qubits are fabricated on a chip, the many-qubit system is readily scalable. Qubits are allocated on the 2D surface of the chip. The demand for well characterized qubits is fulfilled with (a) qubit non-linearity (accessing only two of the available energy levels) and (b) accessing a single qubit at a time (rather than the entire many-qubit system) by way of per-qubit dedicated control lines and/or frequency separation, or tuning out, of different qubits.
Ability to initialize the state of qubits to a simple fiducial state. A fiducial state is one that is easily and consistently replicable and is useful in quantum computing as it may be used to guarantee the initial state of qubits. One simple way to initialize a superconducting qubit is to wait long enough for the qubits to relax to the ground state. Controlling qubit potential with tuning knobs allows faster initialization mechanisms.
Long relevant decoherence times. Decoherence of superconducting qubits is affected by multiple factors. Most decoherence is attributed to the quality of the Josephson junction and imperfections in the chip substrate. Due to their mesoscopic scale, superconducting qubits are relatively short lived. Nevertheless, thousands of gate operations have been demonstrated in these many-qubit systems. Recent strategies to improve device coherence include purifying the circuit materials and designing qubits with decreased sensitivity to noise sources.
A "universal" set of quantum gates. Superconducting qubits allow arbitrary rotations in the Bloch sphere with pulsed microwave signals, implementing single qubit gates. and couplings are shown for most implementations and for complementing the universal gate set. This criterion may also be satisfied by coupling two transmons with a coupling capacitor.
Qubit-specific measurement ability. In general, single superconducting qubits are used for control or for measurement.
Interconvertibility of stationary and flying qubits. While stationary qubits are used to store information or perform calculations, flying qubits transmit information macroscopically. Qubits should be capable of converting from being a stationary qubit to being a flying qubit and vice versa.
Reliable transmission of flying qubits between specified locations.
The final two criteria have been experimentally proven by research performed by ETH with two superconducting qubits connected by a coaxial cable.
Challenges
One of the primary challenges of superconducting quantum computing is the extremely low temperatures at which superconductors like Bose-Einstein Condensates exist. Other basic challenges in superconducting qubit design are shaping the potential well and choosing particle mass such that energy separation between two specific energy levels is unique, differing from all other interlevel energy separation in the system, since these two levels are used as logical states of the qubit.
Superconducting quantum computing must also mitigate quantum noise (disruptions of the system caused by its interaction with an environment) as well as leakage (information being lost to the surrounding environment). One way to reduce leakage is with parity measurements. Another strategy is to use qubits with large anharmonicity. Many current challenges faced by superconducting quantum computing lie in the field of microwave engineering. As superconducting quantum computing approaches larger scale devices, researchers face difficulties in qubit coherence, scalable calibration software, efficient determination of fidelity of quantum states across an entire chip, and qubit and gate fidelity. Moreover, superconducting quantum computing devices must be reliably reproducible at increasingly large scales such that they are compatible with these improvements.
Journey of superconducting quantum computing:
Although not the newest development, the focus began to shift to superconducting qubits in the latter half of the 1990s, when quantum tunneling across Josephson junctions became apparent and led to the realization that quantum computing could be achieved through superconducting qubits.
At the end of the century, in 1999, a paper was published by Yasunobu Nakamura exhibiting the initial design of a superconducting qubit, now known as the "charge qubit". This is the primary basis on which later designs were built. These initial qubits had their limitations with respect to maintaining long coherence times and destructive measurements. Further refinement of this initial breakthrough led to the invention of the phase and flux qubits, and subsequently to the transmon qubit, which is now widely and primarily used in superconducting quantum computing. The transmon qubit has improved on the original designs and has further buffered the qubit against charge noise.
The journey has been long, arduous and full of breakthroughs, but it has seen significant advancements in recent years and has massive potential for revolutionizing computing.
Future of superconducting quantum computing:
The sector's leading industry giants, like Google, IBM and Baidu, are using superconducting quantum computing and transmon qubits to make leaps and bounds in the area of quantum computing.
In August 2022, Baidu released its plans to build a fully integrated top to bottom quantum computer which incorporated superconducting qubits. This computer will be all encompassing with hardware, software and applications fully integrated. This is a first in the world of quantum computing and will lead to ground-breaking advancements.
IBM released the following roadmap publicly that they have set for their quantum computers which also incorporated superconducting qubits and the transmon qubit.
2021: IBM came out with its 127-qubit processor.
2022: On November 9, IBM announced its 433-qubit processor, called "Osprey".
2023: IBM plans to release its Condor quantum processor with 1,121 qubits.
2024: IBM plans to release its Flamingo quantum processor with 1,386+ qubits.
2025: IBM plans to release its Kookaburra quantum processor with 4,158+ qubits.
2026 and beyond: IBM plans to release quantum processors that scale beyond 10,000 qubits toward 100,000 qubits.
In 2016, Google implemented 16 qubits to demonstrate the Fermi–Hubbard model. In another experiment, Google used 17 qubits to optimize the Sherrington–Kirkpatrick model. Google produced the Sycamore quantum computer, which performed a task in 200 seconds that Google estimated would have taken 10,000 years on a classical computer.
References
Further reading
External links
IBM Quantum offers access to over 20 quantum computer systems.
The IBM Quantum Experience offers free access to writing quantum algorithms and executing them on 5 qubit quantum computers.
IBM's roadmap for quantum computing shows 65 qubit systems available in 2020 and 127 qubits to be available sometime in 2021.
Quantum information science
Quantum electronics
Superconductivity | Superconducting quantum computing | [
"Physics",
"Materials_science",
"Engineering"
] | 6,579 | [
"Physical quantities",
"Quantum electronics",
"Superconductivity",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Nanotechnology",
"Electrical resistance and conductance"
] |
1,731,689 | https://en.wikipedia.org/wiki/R%C3%A9nyi%20entropy | In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.
The Rényi entropy is important in ecology and statistics as index of diversity. The Rényi entropy is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of can be calculated explicitly because it is an automorphic function with respect to a particular subgroup of the modular group. In theoretical computer science, the min-entropy is used in the context of randomness extractors.
Definition
The Rényi entropy of order $\alpha$, where $\alpha \geq 0$ and $\alpha \neq 1$, is defined as
$$H_\alpha(X) = \frac{1}{1-\alpha}\log\left(\sum_{i=1}^n p_i^\alpha\right).$$
It is further defined at $\alpha = \infty$ as
$$H_\infty(X) = \min_i\left(-\log p_i\right) = -\log\max_i p_i.$$
Here, $X$ is a discrete random variable with possible outcomes in the set $\mathcal{A} = \{x_1, x_2, \dots, x_n\}$ and corresponding probabilities $p_i \doteq \Pr(X = x_i)$ for $i = 1, \dots, n$. The resulting unit of information is determined by the base of the logarithm, e.g. shannon for base 2, or nat for base e.
If the probabilities are $p_i = 1/n$ for all $i = 1, \dots, n$, then all the Rényi entropies of the distribution are equal: $H_\alpha(X) = \log n$.
In general, for all discrete random variables $X$, $H_\alpha(X)$ is a non-increasing function in $\alpha$.
Applications often exploit the following relation between the Rényi entropy and the α-norm of the vector of probabilities:
$$H_\alpha(X) = \frac{\alpha}{1-\alpha}\log\left(\|P\|_\alpha\right).$$
Here, the discrete probability distribution $P = (p_1, \dots, p_n)$ is interpreted as a vector in $\mathbb{R}^n$ with $p_i \geq 0$ and $\sum_{i=1}^n p_i = 1$.
The Rényi entropy for any $\alpha \geq 0$ is Schur concave, which can be proven by the Schur–Ostrowski criterion.
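A short numerical sketch of the definition (in bits, i.e. base-2 logarithms), also illustrating the uniform-distribution case and the fact that the Rényi entropy does not increase with the order; the example distribution is arbitrary:

```python
import numpy as np

def renyi_entropy(p, alpha, base=2):
    """Rényi entropy H_alpha of a discrete distribution p (assumes sum(p) == 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isinf(alpha):                      # min-entropy limit
        h = -np.log(p.max())
    elif alpha == 1:                         # Shannon limit
        h = -np.sum(p * np.log(p))
    else:
        h = np.log(np.sum(p ** alpha)) / (1 - alpha)
    return h / np.log(base)

p = [0.5, 0.25, 0.125, 0.125]
for a in [0, 0.5, 1, 2, np.inf]:
    print(f"H_{a} = {renyi_entropy(p, a):.4f} bits")   # non-increasing in alpha

# For the uniform distribution on 4 outcomes, every order gives log2(4) = 2 bits.
print(renyi_entropy([0.25] * 4, 0), renyi_entropy([0.25] * 4, np.inf))
```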
Special cases
As $\alpha$ approaches zero, the Rényi entropy increasingly weighs all events with nonzero probability more equally, regardless of their probabilities. In the limit for $\alpha \to 0$, the Rényi entropy is just the logarithm of the size of the support of $X$. The limit for $\alpha \to 1$ is the Shannon entropy. As $\alpha$ approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability.
Hartley or max-entropy
$H_0(X)$ is $\log m$, where $m$ is the number of non-zero probabilities. If the probabilities are all nonzero, it is simply the logarithm of the cardinality of the alphabet ($\mathcal{A}$) of $X$, sometimes called the Hartley entropy of $X$,
$$H_0(X) = \log n = \log |\mathcal{A}|.$$
Shannon entropy
The limiting value of $H_\alpha$ as $\alpha \to 1$ is the Shannon entropy:
$$H_1(X) = -\sum_{i=1}^n p_i\log p_i.$$
Collision entropy
Collision entropy, sometimes just called "Rényi entropy", refers to the case $\alpha = 2$,
$$H_2(X) = -\log\sum_{i=1}^n p_i^2 = -\log \Pr(X = Y),$$
where $X$ and $Y$ are independent and identically distributed. The collision entropy is related to the index of coincidence. It is the negative logarithm of the Simpson diversity index.
Min-entropy
In the limit as $\alpha \to \infty$, the Rényi entropy $H_\alpha$ converges to the min-entropy $H_\infty$:
$$H_\infty(X) = \min_i\left(-\log p_i\right) = -\log\max_i p_i.$$
Equivalently, the min-entropy $H_\infty(X)$ is the largest real number $b$ such that all events occur with probability at most $2^{-b}$.
The name min-entropy stems from the fact that it is the smallest entropy measure in the family of Rényi entropies.
In this sense, it is the strongest way to measure the information content of a discrete random variable.
In particular, the min-entropy is never larger than the Shannon entropy.
The min-entropy has important applications for randomness extractors in theoretical computer science:
Extractors are able to extract randomness from random sources that have a large min-entropy; merely having a large Shannon entropy does not suffice for this task.
Inequalities for different orders α
That $H_\alpha$ is non-increasing in $\alpha$ for any given distribution of probabilities $p_1, \dots, p_n$
can be proven by differentiation, as
$$-\frac{dH_\alpha}{d\alpha} = \frac{1}{(1-\alpha)^2}\sum_{i=1}^n z_i\log\left(\frac{z_i}{p_i}\right),$$
which is proportional to the Kullback–Leibler divergence (which is always non-negative), where
$z_i = p_i^\alpha \big/ \sum_{j=1}^n p_j^\alpha$. In particular, it is strictly positive except when the distribution is uniform.
At the limit, we have .
In particular cases inequalities can also be proven by Jensen's inequality:
$$\log n = H_0 \geq H_1 \geq H_2 \geq H_\infty.$$
For values of $\alpha > 1$, inequalities in the other direction also hold. In particular, we have
$$H_2 \leq 2 H_\infty.$$
On the other hand, the Shannon entropy can be arbitrarily high for a random variable that has a given min-entropy. An example of this is given by the sequence of random variables for such that and since but .
Rényi divergence
As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the Kullback–Leibler divergence.
The Rényi divergence of order $\alpha$, or alpha-divergence, of a distribution $P$ from a distribution $Q$ is defined to be
$$D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1}\log\left(\sum_{i=1}^n \frac{p_i^\alpha}{q_i^{\alpha - 1}}\right)$$
when $0 < \alpha < \infty$ and $\alpha \neq 1$. We can define the Rényi divergence for the special values $\alpha = 0, 1, \infty$ by taking a limit, and in particular the limit $\alpha \to 1$ gives the Kullback–Leibler divergence.
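A brief numerical sketch of this definition for strictly positive distributions, checking that values of the order near 1 approach the Kullback–Leibler divergence; the two example distributions are arbitrary:

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(P || Q) in nats for strictly positive p, q; alpha > 0, alpha != 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)

def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q))

p = np.array([0.4, 0.4, 0.2])
q = np.array([0.2, 0.5, 0.3])

# As alpha -> 1 the Rényi divergence approaches the Kullback–Leibler divergence.
for a in [0.5, 0.9, 0.99, 1.001, 2.0]:
    print(a, renyi_divergence(p, q, a))
print("KL:", kl_divergence(p, q))
```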
Some special cases:
$D_0(P \,\|\, Q) = -\log Q(\{i : p_i > 0\})$: minus the log probability under $Q$ that $p_i > 0$;
$D_{1/2}(P \,\|\, Q) = -2\log\sum_i \sqrt{p_i q_i}$: minus twice the logarithm of the Bhattacharyya coefficient;
$D_1(P \,\|\, Q) = \sum_i p_i \log\frac{p_i}{q_i}$: the Kullback–Leibler divergence;
$D_2(P \,\|\, Q) = \log\left\langle \frac{p_i}{q_i}\right\rangle = \log\sum_i \frac{p_i^2}{q_i}$: the log of the expected ratio of the probabilities;
$D_\infty(P \,\|\, Q) = \log\sup_i \frac{p_i}{q_i}$: the log of the maximum ratio of the probabilities.
The Rényi divergence is indeed a divergence, meaning simply that $D_\alpha(P \,\|\, Q)$ is greater than or equal to zero, and zero only when $P = Q$. For any fixed distributions $P$ and $Q$, the Rényi divergence is nondecreasing as a function of its order $\alpha$, and it is continuous on the set of $\alpha$ for which it is finite; for the sake of brevity, it is also called the information of order $\alpha$ obtained if the distribution $P$ is replaced by the distribution $Q$.
Financial interpretation
A pair of probability distributions can be viewed as a game of chance in which one of the distributions defines official odds and the other contains the actual probabilities. Knowledge of the actual probabilities allows a player to profit from the game. The expected profit rate is connected to the Rényi divergence as follows
where is the distribution defining the official odds (i.e. the "market") for the game, is the investor-believed distribution and is the investor's risk aversion (the Arrow–Pratt relative risk aversion).
If the true distribution is (not necessarily coinciding with the investor's belief ), the long-term realized rate converges to the true expectation which has a similar mathematical structure
Properties specific to α = 1
The value $\alpha = 1$, which gives the Shannon entropy and the Kullback–Leibler divergence, is the only value at which the chain rule of conditional probability holds exactly:
$$H(A, X) = H(A) + \mathbb{E}_{a \sim A}\big[H(X \mid A = a)\big]$$
for the absolute entropies, and
$$D_{\mathrm{KL}}\big(p(x \mid a)\, p(a) \,\big\|\, m(x, a)\big) = D_{\mathrm{KL}}\big(p(a) \,\big\|\, m(a)\big) + \mathbb{E}_{p(a)}\big[D_{\mathrm{KL}}\big(p(x \mid a) \,\big\|\, m(x \mid a)\big)\big]$$
for the relative entropies.
The latter in particular means that if we seek a distribution which minimizes the divergence from some underlying prior measure , and we acquire new information which only affects the distribution of , then the distribution of remains , unchanged.
The other Rényi divergences satisfy the criteria of being positive and continuous, being invariant under 1-to-1 co-ordinate transformations, and of combining additively when $A$ and $X$ are independent, so that if $p(A, X) = p(A)\,p(X)$, then
$$H_\alpha(A, X) = H_\alpha(A) + H_\alpha(X)$$
and
$$D_\alpha\big(P(A)P(X) \,\big\|\, Q(A)Q(X)\big) = D_\alpha\big(P(A) \,\big\|\, Q(A)\big) + D_\alpha\big(P(X) \,\big\|\, Q(X)\big).$$
The stronger properties of the quantities allow the definition of conditional information and mutual information from communication theory.
Exponential families
The Rényi entropies and divergences for an exponential family admit simple expressions
and
where
is a Jensen difference divergence.
Physical meaning
The Rényi entropy in quantum physics is not considered to be an observable, due to its nonlinear dependence on the density matrix. (This nonlinear dependence applies even in the special case of the Shannon entropy.) It can, however, be given an operational meaning through the two-time measurements (also known as full counting statistics) of energy transfers.
The limit of the quantum mechanical Rényi entropy as is the von Neumann entropy.
See also
Diversity indices
Tsallis entropy
Generalized entropy index
Notes
References
Information theory
Entropy and information | Rényi entropy | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 1,579 | [
"Telecommunications engineering",
"Physical quantities",
"Applied mathematics",
"Entropy and information",
"Computer science",
"Entropy",
"Information theory",
"Dynamical systems"
] |
1,731,760 | https://en.wikipedia.org/wiki/Small-signal%20model | Small-signal modeling is a common analysis technique in electronics engineering used to approximate the behavior of electronic circuits containing nonlinear devices with linear equations. It is applicable to electronic circuits in which the AC signals (i.e., the time-varying currents and voltages in the circuit) are small relative to the DC bias currents and voltages. A small-signal model is an AC equivalent circuit in which the nonlinear circuit elements are replaced by linear elements whose values are given by the first-order (linear) approximation of their characteristic curve near the bias point.
Overview
Many of the electrical components used in simple electric circuits, such as resistors, inductors, and capacitors are linear. Circuits made with these components, called linear circuits, are governed by linear differential equations, and can be solved easily with powerful mathematical frequency domain methods such as the Laplace transform.
In contrast, many of the components that make up electronic circuits, such as diodes, transistors, integrated circuits, and vacuum tubes are nonlinear; that is the current through them is not proportional to the voltage, and the output of two-port devices like transistors is not proportional to their input. The relationship between current and voltage in them is given by a curved line on a graph, their characteristic curve (I-V curve). In general these circuits don't have simple mathematical solutions. To calculate the current and voltage in them generally requires either graphical methods or simulation on computers using electronic circuit simulation programs like SPICE.
However in some electronic circuits such as radio receivers, telecommunications, sensors, instrumentation and signal processing circuits, the AC signals are "small" compared to the DC voltages and currents in the circuit. In these, perturbation theory can be used to derive an approximate AC equivalent circuit which is linear, allowing the AC behavior of the circuit to be calculated easily. In these circuits a steady DC current or voltage from the power supply, called a bias, is applied to each nonlinear component such as a transistor and vacuum tube to set its operating point, and the time-varying AC current or voltage which represents the signal to be processed is added to it. The point on the graph of the characteristic curve representing the bias current and voltage is called the quiescent point (Q point). In the above circuits the AC signal is small compared to the bias, representing a small perturbation of the DC voltage or current in the circuit about the Q point. If the characteristic curve of the device is sufficiently flat over the region occupied by the signal, using a Taylor series expansion the nonlinear function can be approximated near the bias point by its first order partial derivative (this is equivalent to approximating the characteristic curve by a straight line tangent to it at the bias point). These partial derivatives represent the incremental capacitance, resistance, inductance and gain seen by the signal, and can be used to create a linear equivalent circuit giving the response of the real circuit to a small AC signal. This is called the "small-signal model".
The small signal model is dependent on the DC bias currents and voltages in the circuit (the Q point). Changing the bias moves the operating point up or down on the curves, thus changing the equivalent small-signal AC resistance, gain, etc. seen by the signal.
Any nonlinear component whose characteristics are given by a continuous, single-valued, smooth (differentiable) curve can be approximated by a linear small-signal model. Small-signal models exist for electron tubes, diodes, field-effect transistors (FET) and bipolar transistors, notably the hybrid-pi model and various two-port networks. Manufacturers often list the small-signal characteristics of such components at "typical" bias values on their data sheets.
Variable notation
DC quantities (also known as bias), constant values with respect to time, are denoted by uppercase letters with uppercase subscripts. For example, the DC input bias voltage of a transistor would be denoted $V_{IN}$. For example, one might say that $V_{IN} = 5\ \mathrm{V}$.
Small-signal quantities, which have zero average value, are denoted using lowercase letters with lowercase subscripts. Small signals typically used for modeling are sinusoidal, or "AC", signals. For example, the input signal of a transistor would be denoted as $v_{in}$. For example, one might say that $v_{in} = 5\cos(2\pi \cdot 60\,t)\ \mathrm{mV}$.
Total quantities, combining both small-signal and large-signal quantities, are denoted using lowercase letters and uppercase subscripts. For example, the total input voltage to the aforementioned transistor would be denoted as $v_{IN}$. The small-signal model of the total signal is then the sum of the DC component and the small-signal component of the total signal, or in algebraic notation, $v_{IN} = V_{IN} + v_{in}$. For example, $v_{IN} = 5\ \mathrm{V} + 5\cos(2\pi \cdot 60\,t)\ \mathrm{mV}$.
PN junction diodes
The (large-signal) Shockley equation for a diode can be linearized about the bias point or quiescent point (sometimes called Q-point) to find the small-signal conductance, capacitance and resistance of the diode. This procedure is described in more detail under diode modelling#Small-signal_modelling, which provides an example of the linearization procedure followed in small-signal models of semiconductor devices.
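A minimal numerical sketch of that linearization, using assumed but typical diode parameters: the small-signal conductance is the slope of the Shockley curve at the bias point, and a few-millivolt perturbation is reproduced well by the tangent-line model:

```python
import numpy as np

# Shockley diode law and its small-signal (tangent-line) approximation
# around a bias point, using assumed but typical parameter values.
I_S = 1e-12     # saturation current, A (assumed)
n = 1.0         # ideality factor (assumed)
V_T = 0.02585   # thermal voltage near 300 K, V

def diode_current(v):
    return I_S * (np.exp(v / (n * V_T)) - 1.0)

V_Q = 0.6                       # DC bias (quiescent) voltage, V
I_Q = diode_current(V_Q)        # bias current at the Q point

# The small-signal conductance is the slope of the I-V curve at the Q point.
g_d = (I_Q + I_S) / (n * V_T)   # dI/dV evaluated at V_Q
r_d = 1.0 / g_d                 # small-signal (dynamic) resistance

v_small = 0.005                 # a 5 mV perturbation about the bias point
exact = diode_current(V_Q + v_small) - I_Q
linear = g_d * v_small
print(f"I_Q = {I_Q * 1e3:.2f} mA, r_d = {r_d:.2f} ohm")
print(f"exact dI = {exact * 1e3:.3f} mA, linearized dI = {linear * 1e3:.3f} mA")
```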
Differences between small signal and large signal
A large signal is any signal having enough magnitude to reveal a circuit's nonlinear behavior. The signal may be a DC signal or an AC signal or indeed, any signal. How large a signal needs to be (in magnitude) before it is considered a large signal depends on the circuit and context in which the signal is being used. In some highly nonlinear circuits practically all signals need to be considered as large signals.
A small signal is part of a model of a large signal. To avoid confusion, note that there is such a thing as a small signal (a part of a model) and a small-signal model (a model of a large signal).
A small signal model consists of a small signal (having zero average value, for example a sinusoid, but any AC signal could be used) superimposed on a bias signal (or superimposed on a DC constant signal) such that the sum of the small signal plus the bias signal gives the total signal which is exactly equal to the original (large) signal to be modeled. This resolution of a signal into two components allows the technique of superposition to be used to simplify further analysis. (If superposition applies in the context.)
In analysis of the small signal's contribution to the circuit, the nonlinear components, which would be the DC components, are analyzed separately taking into account nonlinearity.
See also
Diode modelling
Hybrid-pi model
Early effect
SPICE – Simulation Program with Integrated Circuit Emphasis, a general purpose analog electronic circuit simulator capable of solving small signal models.
References
Electronic device modeling | Small-signal model | [
"Physics"
] | 1,393 | [
"Electronic device modeling"
] |
16,862,071 | https://en.wikipedia.org/wiki/Pipe%20flow | In fluid mechanics, pipe flow is a type of fluid flow within a closed conduit, such as a pipe, duct or tube. It is also called internal flow. The other type of flow within a conduit is open channel flow. These two types of flow are similar in many ways, but differ in one important aspect: pipe flow does not have a free surface, which is found in open-channel flow. Pipe flow, being confined within a closed conduit, is not exposed directly to atmospheric pressure, but does exert hydraulic pressure on the conduit.
Not all flow within a closed conduit is considered pipe flow. Storm sewers are closed conduits but usually maintain a free surface and therefore are considered open-channel flow. The exception to this is when a storm sewer operates at full capacity, and then can become pipe flow.
Energy in pipe flow is expressed as head and is defined by the Bernoulli equation. In order to conceptualize head along the course of flow within a pipe, diagrams often contain a hydraulic grade line (HGL). Pipe flow is subject to frictional losses as defined by the Darcy-Weisbach formula.
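As a worked example of the Darcy–Weisbach head loss, $h_f = f\,(L/D)\,V^2/(2g)$, with assumed pipe dimensions and an assumed friction factor:

```python
# Frictional head loss from the Darcy–Weisbach formula, h_f = f*(L/D)*V^2/(2g),
# with assumed pipe dimensions and an assumed friction factor.
g = 9.81      # gravitational acceleration, m/s^2
f = 0.02      # Darcy friction factor (assumed; normally read from a Moody chart)
L = 100.0     # pipe length, m
D = 0.05      # pipe diameter, m
V = 2.0       # mean flow velocity, m/s

h_f = f * (L / D) * V**2 / (2 * g)
print(f"head loss ~ {h_f:.2f} m of fluid column")
```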
Laminar-turbulence transition
The behavior of pipe flow is governed mainly by the effects of viscosity and gravity relative to the inertial forces of the flow. Depending on the effect of viscosity relative to inertia, as represented by the Reynolds number, the flow can be either laminar or turbulent. For circular pipes of different surface roughness, at a Reynolds number below the critical value of approximately 2000, pipe flow will ultimately be laminar, whereas above the critical value turbulent flow can persist, as shown in the Moody chart. For non-circular pipes, such as rectangular ducts, the critical Reynolds number is shifted, but still depends on the aspect ratio. Earlier transition to turbulence, happening at a Reynolds number one order of magnitude smaller, can occur in channels with special geometrical shapes, such as the Tesla valve.
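A short sketch of using the Reynolds number, $\mathrm{Re} = \rho V D / \mu$, to classify the regime, with assumed water-like properties; in the laminar case the Darcy friction factor reduces to $64/\mathrm{Re}$:

```python
def reynolds_number(rho, V, D, mu):
    """Re = rho*V*D/mu for flow of mean velocity V in a circular pipe of diameter D."""
    return rho * V * D / mu

# Water at roughly 20 °C in a 50 mm pipe (assumed values)
Re = reynolds_number(rho=998.0, V=1.5, D=0.05, mu=1.0e-3)

if Re < 2000:
    regime = "laminar (Darcy friction factor f = 64/Re)"
else:
    regime = "transitional/turbulent (use the Moody chart or Colebrook equation)"
print(f"Re = {Re:.0f} -> {regime}")
```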
Flow through pipes can roughly be divided into two:
Laminar flow - see Hagen-Poiseuille flow
Turbulent flow - see Moody diagram
See also
Mathematical equations and concepts
Bernoulli equation
Darcy–Weisbach equation
Torricelli's law
Fields of study
Hydraulics
Fluid Mechanics
Types of fluid flow
Open channel flow
Plug flow
Fluid properties
Viscosity
Fluid phenomena
Head
References
Further reading
Chow, V. T. (1959/2008). Open-Channel Hydraulics. Caldwell, New Jersey: Blackburn Press.
Fluid dynamics
Fluid mechanics
Piping | Pipe flow | [
"Chemistry",
"Engineering"
] | 526 | [
"Building engineering",
"Chemical engineering",
"Civil engineering",
"Mechanical engineering",
"Piping",
"Fluid mechanics",
"Fluid dynamics"
] |
16,862,854 | https://en.wikipedia.org/wiki/RNA%20polymerase%20II%20holoenzyme | RNA polymerase II holoenzyme is a form of eukaryotic RNA polymerase II that is recruited to the promoters of protein-coding genes in living cells. It consists of RNA polymerase II, a subset of general transcription factors, and regulatory proteins known as SRB proteins.
RNA polymerase II
RNA polymerase II (also called RNAP II and Pol II) is an enzyme found in eukaryotic cells. It catalyzes the transcription of DNA to synthesize precursors of mRNA and most snRNA and microRNA. In humans, RNAP II consists of seventeen protein molecules (gene products encoded by POLR2A-L, where the proteins synthesized from POLR2C, POLR2E, and POLR2F form homodimers).
General transcription factors
General transcription factors (GTFs) or basal transcription factors are protein transcription factors that have been shown to be important in the transcription of class II genes to mRNA templates. Many of them are involved in the formation of a preinitiation complex, which, together with RNA polymerase II, bind to and read the single-stranded DNA gene template. The cluster of RNA polymerase II and various transcription factors is known as a basal transcriptional complex (BTC).
Preinitiation complex
The preinitiation complex (PIC) is a large complex of proteins that is necessary for the transcription of protein-coding genes in eukaryotes and archaea. The PIC helps position RNA polymerase II over gene transcription start sites, denatures the DNA, and positions the DNA in the RNA polymerase II active site for transcription.
The typical PIC is made up of six general transcription factors: TFIIA (GTF2A1, GTF2A2), TFIIB (GTF2B), B-TFIID (BTAF1, TBP), TFIID (BTAF1, BTF3, BTF3L4, EDF1, TAF1-15, 16 total), TFIIE, TFIIF, TFIIH and TFIIJ.
The construction of the polymerase complex takes place on the gene promoter. The TATA box is one well-studied example of a promoter element that occurs in approximately 10% of genes. It is conserved in many (though not all) model eukaryotes and is found in a fraction of the promoters in these organisms. The sequence TATA (or variations) is located at approximately 25 nucleotides upstream of the Transcription Start Point (TSP). In addition, there are also some weakly conserved features including the TFIIB-Recognition Element (BRE), approximately 5 nucleotides upstream (BREu) and 5 nucleotides downstream (BREd) of the TATA box.
Assembly of the PIC
Although the sequence of steps involved in the assembly of the PIC can vary, in general, they follow step 1, binding to the promoter.
The TATA-binding protein (TBP, a subunit of TFIID), TBPL1, or TBPL2 can bind the promoter or TATA box. Most genes lack a TATA box and use an initiator element (Inr) or downstream core promoter instead. Nevertheless, TBP is always involved and is forced to bind without sequence specificity. TAFs from TFIID can also be involved when the TATA box is absent. A TFIID TAF will bind sequence specifically, and force the TBP to bind non-sequence specifically, bringing the remaining portions of TFIID to the promoter.
TFIIA interacts with the TBP subunit of TFIID and aids in the binding of TBP to TATA-box containing promoter DNA. Although TFIIA does not recognize DNA itself, its interactions with TBP allow it to stabilize and facilitate formation of the PIC.
The N-terminal domain of TFIIB brings the DNA into proper position for entry into the active site of RNA polymerase II. TFIIB binds partially sequence specifically, with some preference for BRE. The TFIID-TFIIA-TFIIB (DAB)-promoter complex subsequently recruits RNA polymerase II and TFIIF.
TFIIF (two subunits, RAP30 and RAP74, showing some similarity to bacterial sigma factors) and Pol II enter the complex together. TFIIF helps to speed up the polymerization process.
TFIIE joins the growing complex and recruits TFIIH. TFIIE may be involved in DNA melting at the promoter: it contains a zinc ribbon motif that can bind single-stranded DNA. TFIIE helps to open and close the Pol II’s Jaw-like structure, which enables movement down the DNA strand.
DNA may be wrapped one complete turn around the preinitiation complex and it is TFIIF that helps keep this tight wrapping. In the process, the torsional strain on the DNA may aid in DNA melting at the promoter, forming the transcription bubble.
TFIIH enters the complex. TFIIH is a large protein complex that contains among others the CDK7/cyclin H kinase complex and a DNA helicase. TFIIH has three functions: It binds specifically to the template strand to ensure that the correct strand of DNA is transcribed and melts or unwinds the DNA (ATP-dependent) to separate the two strands using its helicase activity. It has a kinase activity that phosphorylates the C-terminal domain (CTD) of Pol II at the amino acid serine. This switches the RNA polymerase to start producing RNA. Finally it is essential for Nucleotide Excision Repair (NER) of damaged DNA. TFIIH and TFIIE strongly interact with one another. TFIIE affects TFIIH's catalytic activity. Without TFIIE, TFIIH will not unwind the promoter.
TFIIH helps create the transcription bubble and may be required for transcription if the DNA template is not already denatured or if it is supercoiled.
Mediator then encases all the transcription factors and Pol II. It interacts with enhancers, areas very far away (upstream or downstream) that help regulate transcription.
The formation of the preinitiation complex (PIC) is analogous to the mechanism seen in bacterial initiation. In bacteria, the sigma factor recognizes and binds to the promoter sequence. In eukaryotes, the transcription factors perform this role.
Mediator complex
Mediator is a multiprotein complex that functions as a transcriptional coactivator. The Mediator complex is required for the successful transcription of nearly all class II gene promoters in yeast. It works in the same manner in mammals.
The mediator functions as a coactivator and binds to the C-terminal domain (CTD) of RNA polymerase II holoenzyme, acting as a bridge between this enzyme and transcription factors.
C-terminal domain (CTD)
The carboxy-terminal domain (CTD) of RNA polymerase II is that portion of the polymerase that is involved in the initiation of DNA transcription, the capping of the RNA transcript, and attachment to the spliceosome for RNA splicing. The CTD typically consists of up to 52 repeats (in humans) of the sequence Tyr-Ser-Pro-Thr-Ser-Pro-Ser. The carboxy-terminal repeat domain (CTD) is essential for life. Cells containing only RNAPII with none or only up to one-third of its repeats are inviable.
The CTD is an extension appended to the C terminus of RPB1, the largest subunit of RNA polymerase II. It serves as a flexible binding scaffold for numerous nuclear factors, determined by the phosphorylation patterns on the CTD repeats. Each repeat contains an evolutionary conserved and repeated heptapeptide, Tyr1-Ser2-Pro3-Thr4-Ser5-Pro6-Ser7, which is subjected to reversible phosphorylations during each transcription cycle. This domain is inherently unstructured yet evolutionarily conserved, and in eukaryotes it comprises from 25 to 52 tandem copies of the consensus repeat heptad. As the CTD is frequently not required for general transcription factor (GTF)-mediated initiation and RNA synthesis, it does not form a part of the catalytic essence of RNAPII, but performs other functions.
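As a toy illustration of this repeat structure (the sequence below is made up, not a real CTD), one can count exact consensus heptads in a fragment:

```python
import re

# Toy illustration: counting consensus heptad repeats (Tyr-Ser-Pro-Thr-Ser-Pro-Ser,
# i.e. YSPTSPS in one-letter code) in a hypothetical CTD-like sequence fragment.
consensus = "YSPTSPS"
ctd_fragment = "YSPTSPS" * 3 + "YSPTSPA" + "YSPTSPS" * 2   # made-up example sequence

exact_matches = len(re.findall(consensus, ctd_fragment))
total_heptads = len(ctd_fragment) // 7
print(f"{exact_matches} consensus repeats out of {total_heptads} heptads")
```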
CTD phosphorylation
RNAPII can exist in two forms: RNAPII0, with a highly phosphorylated CTD, and RNAPIIA, with a nonphosphorylated CTD. Phosphorylation occurs principally on Ser2 and Ser5 of the repeats, although these positions are not equivalent. The phosphorylation state changes as RNAPII progresses through the transcription cycle: The initiating RNAPII is form IIA, and the elongating enzyme is form II0. While RNAPII0 does consist of RNAPs with hyperphosphorylated CTDs, the pattern of phosphorylation on individual CTDs can vary due to differential phosphorylation of Ser2 versus Ser5 residues and/or to differential phosphorylation of repeats along the length of the CTD. The PCTD (phosphoCTD of an RNAPII0) physically links pre-mRNA processing to transcription by tethering processing factors to elongating RNAPII, e.g., 5′-end capping, 3′-end cleavage, and polyadenylation.
Ser5 phosphorylation (Ser5PO4) near the 5′ ends of genes depends principally on the kinase activity of TFIIH (Kin28 in yeast; CDK7 in metazoans). The transcription factor TFIIH is a kinase and will hyperphosphorylate the CTD of RNAP, and in doing so, causes the RNAP complex to move away from the initiation site. Subsequent to the action of TFIIH kinase, Ser2 residues are phosphorylated by CTDK-I in yeast (CDK9 kinase in metazoans). Ctk1 (CDK9) acts in complement to phosphorylation of serine 5 and is, thus, seen in middle to late elongation.
CDK8 and cyclin C (CCNC) are components of the RNA polymerase II holoenzyme that phosphorylate the carboxy-terminal domain (CTD). CDK8 regulates transcription by targeting the CDK7/cyclin H subunits of the general transcription initiation factor IIH (TFIIH), thereby providing a link between the mediator and the basal transcription machinery.
The gene CTDP1 encodes a phosphatase that interacts with the carboxy-terminus of transcription initiation factor TFIIF, a transcription factor that regulates elongation as well as initiation by RNA polymerase II.
Also involved in the phosphorylation and regulation of the RPB1 CTD is cyclin T1 (CCNT1). Cyclin T1 tightly associates and forms a complex with CDK9 kinase, both of which are involved in the phosphorylation and regulation.
ATP + [DNA-directed RNA polymerase II] <=> ADP + [DNA-directed RNA polymerase II] phosphate : catalyzed by CDK9 EC 2.7.11.23.
TFIIF and FCP1 cooperate for RNAPII recycling. FCP1, the CTD phosphatase, interacts with RNA polymerase II. Transcription is regulated by the state of phosphorylation of a heptapeptide repeat. The nonphosphorylated form, RNAPIIA, is recruited to the initiation complex, whereas the elongating polymerase is found with RNAPII0. RNAPII cycles during transcription. CTD phosphatase activity is regulated by two GTFs (TFIIF and TFIIB). The large subunit of TFIIF (RAP74) stimulates the CTD phosphatase activity, whereas TFIIB inhibits TFIIF-mediated stimulation. Dephosphorylation of the CTD alters the migration of the largest subunit of RNAPII (RPB1).
5' capping
The carboxy-terminal domain is also the binding site of the cap-synthesizing and cap-binding complex. In eukaryotes, after transcription of the 5' end of an RNA transcript, the cap-synthesizing complex on the CTD will remove the gamma-phosphate from the 5'-phosphate and attach a GMP, forming a 5',5'-triphosphate linkage. The synthesizing complex falls off and the cap then binds to the cap-binding complex (CBC), which is bound to the CTD.
The 5'cap of eukaryotic RNA transcripts is important for binding of the mRNA transcript to the ribosome during translation, to the CTD of RNAP, and prevents RNA degradation.
Spliceosome
The carboxy-terminal domain is also the binding site for spliceosome factors that are part of RNA splicing. These allow for the splicing and removal of introns (in the form of a lariat structure) during RNA transcription.
Mutation in the CTD
Major studies in which knockout of particular amino acids was achieved in the CTD have been carried out. The results indicate that RNA polymerase II CTD truncation mutations affect the ability to induce transcription of a subset of genes in vivo, and the lack of response to induction maps to the upstream activating sequences of these genes.
Genome surveillance complex
Several protein members of the BRCA1-associated genome surveillance complex (BASC) associate with RNA polymerase II and play a role in transcription.
The transcription factor TFIIH is involved in transcription initiation and DNA repair. MAT1 (for 'ménage à trois-1') is involved in the assembly of the CAK complex. CAK is a multisubunit protein that includes CDK7, cyclin H (CCNH), and MAT1. CAK is an essential component of the transcription factor TFIIH that is involved in transcription initiation and DNA repair.
The nucleotide excision repair (NER) pathway is a mechanism to repair damage to DNA. ERCC2 is involved in transcription-coupled NER and is an integral member of the basal transcription factor BTF2/TFIIH complex. ERCC3 is an ATP-dependent DNA helicase that functions in NER. It also is a subunit of basal transcription factor 2 (TFIIH) and, thus, functions in class II transcription. XPG (ERCC5) forms a stable complex with TFIIH, which is active in transcription and NER. ERCC6 encodes a DNA-binding protein that is important in transcription-coupled excision repair. ERCC8 interacts with Cockayne syndrome type B (CSB) protein, with p44 (GTF2H2), a subunit of the RNA polymerase II transcription factor IIH, and ERCC6. It is involved in transcription-coupled excision repair.
Higher error ratios in transcription by RNA polymerase II are observed in the presence of Mn2+ compared to Mg2+.
Transcription coactivators
The EDF1 gene encodes a protein that acts as a transcriptional coactivator by interconnecting the general transcription factor TATA element-binding protein (TBP) and gene-specific activators.
TFIID and human mediator coactivator (THRAP3) complexes (mediator complex, plus THRAP3 protein) assemble cooperatively on promoter DNA, from which they become part of the RNAPII holoenzyme.
Transcription initiation
The completed assembly of the holoenzyme with transcription factors and RNA polymerase II bound to the promoter forms the eukaryotic transcription initiation complex. Transcription in the archaea domain is similar to transcription in eukaryotes.
Transcription begins with the matching of NTPs to the first and second bases of the DNA template sequence. This, like most of the remainder of transcription, is an energy-dependent process, consuming adenosine triphosphate (ATP) or other NTPs.
Promoter clearance
After the first bond is synthesized, the RNA polymerase must clear the promoter. During this time, there is a tendency to release the RNA transcript and produce truncated transcripts. This is called abortive initiation and is common for both eukaryotes and prokaryotes. Abortive initiation continues to occur until the σ factor rearranges, resulting in the transcription elongation complex (which gives a 35 bp-moving footprint). The σ factor is released before 80 nucleotides of mRNA are synthesized. Once the transcript reaches approximately 23 nucleotides, it no longer slips and elongation can occur.
Initiation regulation
Due to the range of genes that Pol II transcribes, this is the polymerase that experiences the most regulation by a range of factors at each stage of transcription. It is also one of the most complex in terms of polymerase cofactors involved.
Initiation is regulated by many mechanisms. These can be separated into two main categories:
Protein interference.
Regulation by phosphorylation.
Regulation by protein interference
Protein interference is the process whereby a signaling protein interacts, either with the promoter or with some stage of the partially constructed complex, to prevent further construction of the polymerase complex, so preventing initiation. In general, this is a very rapid response and is used for fine-level, individual gene control and for 'cascade' processes for a group of genes useful under specific conditions (for example, DNA repair genes or heat shock genes).
Chromatin structure inhibition is the process wherein the promoter is hidden by chromatin structure. Chromatin structure is controlled by post-translational modification of the histones involved and leads to grossly high or low levels of transcription. See: chromatin, histone, and nucleosome.
These methods of control can be combined in a modular method, allowing very high specificity in transcription initiation control.
Regulation by phosphorylation
The largest subunit of Pol II (Rpb1) has a domain at its C-terminus called the CTD (C-terminal domain). This is the target of kinases and phosphatases. The phosphorylation of the CTD is an important regulation mechanism, as this allows attraction and rejection of factors that have a function in the transcription process. The CTD can be considered as a platform for transcription factors.
The CTD consists of repetitions of an amino acid motif, YSPTSPS, of which the serines and threonines can be phosphorylated. The number of these repeats varies; the mammalian protein contains 52, while the yeast protein contains 26. Site-directed mutagenesis of the yeast protein has shown that at least 10 repeats are needed for viability. There are many different combinations of phosphorylations possible on these repeats, and these can change rapidly during transcription. The regulation of these phosphorylations, and the consequences for the association of transcription factors, plays a major role in the regulation of transcription.
During the transcription cycle, the CTD of the large subunit of RNAP II is reversibly phosphorylated. RNAP II containing unphosphorylated CTD is recruited to the promoter, whereas the hyperphosphorylated CTD form is involved in active transcription. Phosphorylation occurs at two sites within the heptapeptide repeat, at Serine 5 and Serine 2. Serine 5 phosphorylation is confined to promoter regions and is necessary for the initiation of transcription, whereas Serine 2 phosphorylation is important for mRNA elongation and 3'-end processing.
Elongation
The process of elongation is the synthesis of a copy of the DNA into messenger RNA. RNA Pol II matches complementary RNA nucleotides to the template DNA by Watson-Crick base pairing. These RNA nucleotides are ligated, resulting in a strand of messenger RNA.
Unlike DNA replication, mRNA transcription can involve multiple RNA polymerases on a single DNA template and multiple rounds of transcription (amplification of particular mRNA), so many mRNA molecules can be rapidly produced from a single copy of a gene.
Elongation also involves a proofreading mechanism that can replace incorrectly incorporated bases. In eukaryotes, this may correspond with short pauses during transcription that allow appropriate RNA editing factors to bind. These pauses may be intrinsic to the RNA polymerase or due to chromatin structure.
Elongation regulation
Factors that promote RNA Pol II elongation can be summarised in three classes:
Drug/sequence-dependent arrest affected factors, e.g., SII (TFIIS) and P-TEFb protein families.
Chromatin structure oriented factors. Based on histone post-translational modifications – phosphorylation, acetylation, methylation and ubiquitination.
See: chromatin, histone, and nucleosome
RNA Pol II catalysis-improving factors. These improve the Vmax or Km of RNA Pol II, thereby improving the catalytic quality of the polymerase enzyme, e.g., the TFIIF, Elongin and ELL families.
See: Enzyme kinetics, Henri–Michaelis–Menten kinetics, Michaelis constant, and Lineweaver–Burk plot
As for initiation, protein interference, seen here in the "drug/sequence-dependent arrest affected factors" and "RNA Pol II catalysis improving factors", provides a very rapid response and is used for fine-level, individual gene control. Elongation downregulation is also possible, in this case usually by blocking polymerase progress or by deactivating the polymerase.
Chromatin structure-oriented factors are more complex than for initiation control. Often the chromatin-altering factor becomes bound to the polymerase complex, altering the histones as they are encountered and providing a semi-permanent 'memory' of previous promotion and transcription.
Termination
Termination is the process of breaking up the polymerase complex and ending the RNA strand. In eukaryotes using RNA Pol II, the point of termination is very variable (up to 2000 bases downstream), and termination relies on post-transcriptional modification.
Little regulation occurs at termination, although it has been proposed that newly transcribed RNA is held in place if proper termination is inhibited, allowing very fast expression of genes in response to a stimulus. This has not yet been demonstrated in eukaryotes.
Transcription factory
Active RNA Pol II transcription holoenzymes can be clustered in the nucleus, in discrete sites called transcription factories. There are ~8,000 such factories in the nucleoplasm of a HeLa cell, but only 100–300 RNAP II foci per nucleus in erythroid cells, as in many other tissue types. The number of transcription factories in tissues is far more restricted than indicated by previous estimates from cultured cells. As an active transcription unit is usually associated with only one Pol II holoenzyme, a polymerase II factory may contain on average ~8 holoenzymes. Colocalization of transcribed genes has not been observed when using cultured fibroblast-like cells. Differentiated or committed tissue types have a limited number of available transcription sites. Estimates show that erythroid cells express at least 4,000 genes, so many genes are obliged to seek out and share the same factory.
The intranuclear position of many genes is correlated with their activity state. During transcription in vivo, distal active genes are dynamically organized into shared nuclear subcompartments and colocalize to the same transcription factory at high frequencies. Movement into or out of these factories results in activation (On) or abatement (Off) of transcription, rather than by recruiting and assembling a transcription complex. Usually, genes migrate to preassembled factories for transcription.
An expressed gene is preferentially located outside of its chromosome territory, but a closely linked, inactive gene is located inside.
Holoenzyme stability
RNA polymerase II holoenzyme stability determines the number of base pairs that can be transcribed before the holoenzyme loses its ability to transcribe. The length of the CTD is essential for RNA polymerase II stability. RNA polymerase II stability has been shown to be regulated by post-translation proline hydroxylation. The von Hippel–Lindau tumor suppressor protein (pVHL, human GeneID: 7428) complex binds the hyperphosphorylated large subunit of the RNA polymerase II complex, in a proline hydroxylation- and CTD phosphorylation-dependent manner, targeting it for ubiquitination.
See also
RNA polymerase I
RNA polymerase III
Post-transcriptional modification
Transcription (genetics)
Eukaryotic transcription
References
RNA Polymerase: Components of the Transcription Initiation Machinery
External links
More information at Berkeley National Lab
Enzymes
Protein complexes
Gene expression | RNA polymerase II holoenzyme | [
"Chemistry",
"Biology"
] | 5,239 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
16,864,252 | https://en.wikipedia.org/wiki/Superinsulator | A superinsulator is a material that at low but finite temperatures does not conduct electricity, i.e. has an infinite resistance so that no electric current passes through it. The phenomenon of superinsulation can be regarded as an exact dual to superconductivity.
The superinsulating state can be destroyed by increasing the temperature and applying an external magnetic field and voltage. A superinsulator was first predicted by M. C. Diamantini, P. Sodano, and C. A. Trugenberger in 1996 who found a superinsulating ground state dual to superconductivity, emerging at the insulating side of the superconductor-insulator transition in the Josephson junction array due to electric-magnetic duality. Superinsulators were independently rediscovered by T. Baturina and V. Vinokur in 2008 on the basis of duality between two different symmetry realizations of the uncertainty principle and experimentally found in titanium nitride (TiN) films. The 2008 measurements revealed giant resistance jumps interpreted as manifestations of the voltage threshold transition to a superinsulating state which was identified as the low-temperature confined phase emerging below the charge Berezinskii-Kosterlitz-Thouless transition. These jumps were similar to earlier findings of the resistance jumps in indium oxide (InO) films. The finite-temperature phase transition into the superinsulating state was finally confirmed by Mironov et al. in NbTiN films in 2018.
Other researchers have seen the similar phenomenon in disordered indium oxide films.
Mechanism
Both superconductivity and superinsulation rest on the pairing of conduction electrons into Cooper pairs. In superconductors, all the pairs move coherently, allowing for the electric current without resistance. In superinsulators, both Cooper pairs and normal excitations are confined and the electric current cannot flow. A mechanism behind superinsulation is the proliferation of magnetic monopoles at low temperatures. In two dimensions (2D), magnetic monopoles are quantum tunneling events (instantons) that are often referred to as monopole “plasma”. In three dimensions (3D), monopoles form a Bose condensate. Monopole plasma or monopole condensate squeezes Faraday's electric field lines into thin electric flux filaments or strings dual to Abrikosov vortices in superconductors. Cooper pairs of opposite charges at the end of these electric strings feel an attractive linear potential. When the corresponding string tension is large, it is energetically favorable to pull out of vacuum many charge-anticharge pairs and to form many short strings rather than to continue stretching the original one. As a consequence, only neutral “electric pions” exist as asymptotic states and the electric conduction is absent. This mechanism is a single-color version of the confinement mechanism that binds quarks into hadrons.
Because electric forces are much weaker than the strong forces of particle physics, the typical size of “electric pions” well exceeds the size of the corresponding elementary particles. This implies that, by preparing sufficiently small samples, one can peer inside an “electric pion,” where the electric strings are loose and Coulomb interactions are screened; hence electric charges are effectively unbound and move as if they were in a metal. The low-temperature saturation of the resistance to metallic behavior has been observed in TiN films with small lateral dimensions.
Future applications
Superinsulators could potentially be used as a platform for high-performance sensors and logical units. Combined with superconductors, superinsulators could be used to create switching electrical circuits with no energy loss as heat.
References
External links
Superconductivity
Insulators
Dielectrics | Superinsulator | [
"Physics",
"Materials_science",
"Engineering"
] | 775 | [
"Electrical resistance and conductance",
"Physical quantities",
"Superconductivity",
"Materials science",
"Materials",
"Condensed matter physics",
"Dielectrics",
"Matter"
] |
16,865,499 | https://en.wikipedia.org/wiki/Earthquake%20shaking%20table | There are different experimental techniques which can be used to test the response of structures and soil or rock slopes to verify their seismic performance. One of these is using an earthquake shaking table (a shaking table or shake table). This device is used for shaking scaled slopes, structural models or building components with a wide range of simulated ground motions, including reproductions of recorded earthquake time-histories.
While modern tables typically consist of a rectangular platform that is driven in up to six degrees of freedom (DOF) by servo-hydraulic or other types of actuators, the earliest shake table, invented at the University of Tokyo in 1893 to categorize types of building construction, ran on a simple wheel mechanism. Test specimens are fixed to the platform and shaken, often to the point of failure. Using video records and data from transducers, it is possible to interpret the dynamic behaviour of the specimen. Earthquake shaking tables are used extensively in seismic research, as they provide the means to excite structures such that they are subjected to conditions representative of true earthquake ground motions.
They are also used in other fields of engineering to test and qualify vehicles and components of vehicles that must respect heavy vibration requirements and standards. Some applications include testing according to aerospace, electrical, and military standards. Earthquake shaking tables are essential in model testing contests, where participants evaluate designs developed within specific guidelines against simulated seismic activity. Simple shake tables are also used in architecture and structural engineering primarily for educational purposes, helping students learn how structures respond to earthquakes through hands-on model testing.
See also
Earthquake engineering
References
Further reading
IEEE 693-2018: "IEEE Recommended Practice for Seismic Design of Substations", Institute of Electrical and Electronics Engineers, 2018.
External links
Hydra shaker – European Space Agency
Earthquake engineering
Mechanical tests | Earthquake shaking table | [
"Engineering"
] | 360 | [
"Structural engineering",
"Mechanical tests",
"Civil engineering",
"Mechanical engineering",
"Earthquake engineering"
] |
16,867,295 | https://en.wikipedia.org/wiki/Superexchange | Superexchange or Kramers–Anderson superexchange interaction, is a prototypical indirect exchange coupling between neighboring magnetic moments (usually next-nearest neighboring cations, see the schematic illustration of MnO below) by virtue of exchanging electrons through a non-magnetic anion known as the superexchange center. In this way, it differs from direct exchange, in which there is direct overlap of electron wave function from nearest neighboring cations not involving an intermediary anion or exchange center. While direct exchange can be either ferromagnetic or antiferromagnetic, the superexchange interaction is usually antiferromagnetic, preferring opposite alignment of the connected magnetic moments. Similar to the direct exchange, superexchange calls for the combined effect of Pauli exclusion principle and Coulomb's repulsion of the electrons. If the superexchange center and the magnetic moments it connects to are non-collinear, namely the atomic bonds are canted, the superexchange will be accompanied by the antisymmetric exchange known as the Dzyaloshinskii–Moriya interaction, which prefers orthogonal alignment of neighboring magnetic moments. In this situation, the symmetric and antisymmetric contributions compete with each other and can result in versatile magnetic spin textures such as magnetic skyrmions.
Superexchange was theoretically proposed by Hendrik Kramers in 1934, when he noticed that in crystals like Manganese(II) oxide (MnO), there are manganese atoms that interact with one another despite having nonmagnetic oxygen atoms between them. Phillip Anderson later refined Kramers' model in 1950.
A set of semi-empirical rules was developed by John B. Goodenough and Junjiro Kanamori in the 1950s. These rules, now referred to as the Goodenough–Kanamori rules, have proven highly successful in rationalizing the magnetic properties of a wide range of materials on a qualitative level. They are based on the symmetry relations and electron occupancy of the overlapping atomic orbitals (assuming the localized Heitler–London, or valence-bond, model is more representative of the chemical bonding than is the delocalized, or Hund–Mulliken–Bloch, model). Essentially, the Pauli exclusion principle dictates that between two magnetic ions with half-occupied orbitals, which couple through an intermediary non-magnetic ion (e.g. O2−), the superexchange will be strongly anti-ferromagnetic while the coupling between an ion with a filled orbital and one with a half-filled orbital will be ferromagnetic. The coupling between an ion with either a half-filled or filled orbital and one with a vacant orbital can be either antiferromagnetic or ferromagnetic, but generally favors ferromagnetic coupling. When multiple types of interactions are present simultaneously, the antiferromagnetic one is generally dominant, since it is independent of the intra-atomic exchange term. For simple cases, the Goodenough–Kanamori rules readily allow the prediction of the net magnetic exchange expected for the coupling between ions. Complications begin to arise in various situations:
when direct exchange and superexchange mechanisms compete with one another;
when the cation–anion–cation bond angle deviates away from 180°;
when the electron occupancy of the orbitals is non-static, or dynamical;
and when spin–orbit coupling becomes important.
Double exchange is a related magnetic coupling interaction proposed by Clarence Zener to account for electrical transport properties. It differs from superexchange in the following manner: in superexchange, the occupancy of the d-shell of the two metal ions is the same or differs by two, and the electrons are localized. For other occupations (double exchange), the electrons are itinerant (delocalized); this results in the material displaying magnetic exchange coupling, as well as metallic conductivity.
Manganese oxide
The p orbitals from oxygen and d orbitals from manganese can form a direct exchange.
There is antiferromagnetic order because the singlet state is energetically favoured. This configuration allows a delocalization of the involved electrons due to a lowering of the kinetic energy.
Quantum-mechanical perturbation theory results in an antiferromagnetic interaction of the spins of neighboring Mn atoms with the energy operator (Hamiltonian)
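A commonly quoted explicit form of this effective spin Hamiltonian (offered here as an indicative sketch; the numerical prefactor depends on the conventions used) is

$$\hat{H} \;\approx\; \frac{2\,t_{\mathrm{Mn,O}}^{4}}{U^{3}}\;\hat{\vec{S}}_{1}\cdot\hat{\vec{S}}_{2},$$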
where tMn,O is the so-called hopping energy between a Mn 3d and the oxygen p orbitals, while U is a so-called Hubbard energy for Mn. The expression is the scalar product between the Mn spin-vector operators (Heisenberg model).
Superexchange Interactions in general
It has been proven that, due to the multiple energy scales present in the model for superexchange, perturbation theory is not in general convergent and is thus not an appropriate method for deriving this interaction between spins; this undoubtedly accounts for the incorrect qualitative characterization of some transition-metal oxide compounds as Mott–Hubbard, rather than charge-transfer, insulators. This is particularly apparent whenever the p–d orbital energy difference is not extremely large compared with the d-electron correlation energy U.
References
External links
Condensed matter physics
Magnetic exchange interactions | Superexchange | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,102 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
18,056,178 | https://en.wikipedia.org/wiki/Adiabatic%20circuit | Adiabatic circuits are low-power electronic circuits which use "reversible logic" to conserve energy. The term "adiabatic" refers to an ideal thermodynamic process in which no heat or mass is exchanged with the surrounding environment, alluding to the ability of the circuits to reduce energy loss as heat.
Unlike traditional CMOS circuits, which dissipate energy during switching, adiabatic circuits reduce dissipation by following two key rules:
Never turn on a transistor when there is a voltage potential between the source and drain.
Never turn off a transistor when current is flowing through it.
Because of the second law of thermodynamics, it is not possible to completely convert energy into useful work. However, the term "adiabatic logic" is used to describe logic families that could theoretically operate without losses. The term "quasi-adiabatic logic" is used to describe logic that operates with a lower power than static CMOS logic, but which still has some theoretical non-adiabatic losses. In both cases, the nomenclature is used to indicate that these systems are capable of operating with substantially less power dissipation than traditional static CMOS circuits.
History
"Adiabatic" is a term of Greek origin that has spent most of its history associated with classical thermodynamics. It refers to a system in which a transition occurs without energy (usually in the form of heat) being either lost to or gained from the system. In the context of electronic systems, rather than heat, electronic charge is preserved. Thus, an ideal adiabatic circuit would operate without the loss or gain of electronic charge.
The first usage of the term "adiabatic" in the context of circuitry appears to be traceable back to a paper presented in 1992 at the Second Workshop on Physics and Computation, although an earlier suggestion of the possibility of energy recovery was made by Charles H. Bennett, who, in relation to the energy used to perform computation, stated "This energy could in principle be saved and reused".
Principles
There are several important principles that are shared by all of these low-power adiabatic systems. These include only turning switches on when there is no potential difference across them, only turning switches off when no current is flowing through them, and using a power supply that is capable of recovering or recycling energy in the form of electric charge. To achieve this, in general, the power supplies of adiabatic logic circuits have used constant current charging (or an approximation thereto), in contrast to more traditional non-adiabatic systems that have generally used constant voltage charging from a fixed-voltage power supply.
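To see why constant-current (ramped) charging helps, a standard back-of-the-envelope estimate, offered here as an illustration rather than a result from the works cited, is as follows. Charging a load capacitance C to a voltage V through an effective switch resistance R with a linear ramp of duration T dissipates, for T much longer than the time constant RC, approximately

$$E_{\mathrm{diss}} \;\approx\; \frac{RC}{T}\,C V^{2},$$

which can be made arbitrarily small by slowing the transition, whereas conventional constant-voltage switching dissipates a fixed $\tfrac{1}{2}CV^{2}$ per charging event regardless of how slowly it is done.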
Power supply
The power supplies of adiabatic logic circuits have also used circuit elements capable of storing energy. This is often done using inductors, which store the energy by converting it to magnetic flux. There are a number of synonyms that have been used by other authors to refer to adiabatic-logic-type systems; these include: "charge recovery logic", "charge recycling logic", "clock-powered logic", "energy recovery logic" and "energy recycling logic". Because of the reversibility requirements for a system to be fully adiabatic, most of these synonyms actually refer to, and can be used interchangeably to describe, quasi-adiabatic systems. These terms are succinct and self-explanatory, so the only term that warrants further explanation is "clock-powered logic". This has been used because many adiabatic circuits use a combined power supply and clock, or a "power-clock". This is a variable, usually multi-phase, power supply which controls the operation of the logic by supplying energy to it, and subsequently recovering energy from it.
Because high-Q inductors are not available in CMOS, inductors must be off-chip, so adiabatic switching with inductors is limited to designs which use only a few inductors.
Quasi-adiabatic stepwise charging avoids inductors entirely by storing recovered energy in capacitors.
Stepwise charging (SWC) can use on-chip capacitors.
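A similar estimate, again given as a generic illustration rather than a quoted result, applies to stepwise charging: charging the load capacitance C to V in N equal voltage steps from a bank of tank capacitors dissipates roughly

$$E_{\mathrm{diss}} \;\approx\; \frac{C V^{2}}{2N},$$

i.e. N-step charging reduces the conventional $\tfrac{1}{2}CV^{2}$ loss by a factor of about N, at the cost of additional switches and tank capacitance.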
Asynchrobatic logic, introduced in 2004,
is a CMOS logic family design style using internal stepwise charging
that attempts to combine the low-power benefits of the seemingly contradictory ideas of "clock-powered logic" (adiabatic circuits)
and "circuits without clocks" (asynchronous circuits).
CMOS adiabatic circuits
There are some classical approaches to reducing dynamic power, such as reducing the supply voltage, decreasing physical capacitance, and reducing switching activity. These techniques alone are not sufficient to meet today's power requirements. As a result, much research has focused on adiabatic logic, which is a promising design style for low-power applications.
Adiabatic logic reduces power by returning stored energy to the supply instead of dissipating it during switching. The term adiabatic logic is therefore used for low-power VLSI circuits that implement reversible logic. The main design change is the power clock, which plays the central role in the principle of operation. Each phase of the power clock allows the designer to satisfy the major design rules for adiabatic circuits:
Never turn on a transistor if there is a voltage across it (VDS > 0)
Never turn off a transistor if there is a current through it (IDS ≠ 0)
Never pass current through a diode
If these conditions are met with regard to the inputs in all four phases of the power clock, the recovery phase restores energy to the power clock, resulting in considerable energy savings. Some complexities in adiabatic logic design nevertheless remain; for instance, the circuits must be driven by time-varying power sources, and the computation must be implemented with low-overhead circuit structures.
Energy-recovering circuits face two major challenges: they are slow by today's standards, and they require roughly 50% more area than conventional CMOS, so simple circuit designs become complicated.
See also
References
Further reading
External links
Asymptotically Zero Energy Computing Using Split-Level Charge Recovery Logic
Digital electronics | Adiabatic circuit | [
"Engineering"
] | 1,290 | [
"Electronic engineering",
"Digital electronics"
] |
5,905,152 | https://en.wikipedia.org/wiki/Intersubband%20polariton | Intersubband transitions (also known as intraband transitions) are dipolar allowed optical excitations between the quantized electronic energy levels within the conduction band of semiconductor heterostructures. Intersubband transitions when coupled with an optical resonator form new, mixed-state photons. This mixing is referred to as an intersubband cavity-polariton. These transitions exhibit an anticrossing in energy with a separation known as vacuum-Rabi splitting, similar to level repulsion in atomic physics.
Quantum cascade laser
A cascading of intersubband transitions is the mechanism behind a quantum cascade laser which produces a monochromatic coherent light-source at infrared wavelengths.
Color of metals
Most metals reflect almost all visible light, due to the presence of free charges, and are therefore silvery in color or mirror-like. However, some metals like gold and copper are more reddish, and this is due to absorption from intersubband transitions that occur at blue wavelengths.
See also
Fluorescence (interband transitions)
References
Quantum mechanics
Quantum electronics
Quasiparticles | Intersubband polariton | [
"Physics",
"Materials_science"
] | 223 | [
"Matter",
"Quantum electronics",
"Theoretical physics",
"Quantum physics stubs",
"Quantum mechanics",
"Nanotechnology",
"Condensed matter physics",
"Quasiparticles",
"Subatomic particles"
] |
5,906,036 | https://en.wikipedia.org/wiki/Optical%20equivalence%20theorem | The optical equivalence theorem in quantum optics asserts an equivalence between the expectation value of an operator in Hilbert space and the expectation value of its associated function in the phase space formulation with respect to a quasiprobability distribution. The theorem was first reported by George Sudarshan in 1963 for normally ordered operators and generalized later that decade to any ordering.
Let Ω be an ordering of the non-commutative creation and annihilation operators $\hat{a}^{\dagger}$ and $\hat{a}$, and let $\hat{g}_{\Omega}(\hat{a},\hat{a}^{\dagger})$ be an operator that is expressible as a power series in the creation and annihilation operators and that satisfies the ordering Ω. Then the optical equivalence theorem is succinctly expressed as

$$\left\langle \hat{g}_{\Omega}(\hat{a},\hat{a}^{\dagger})\right\rangle \;=\; \left\langle g_{\Omega}(\alpha,\alpha^{*})\right\rangle.$$

Here, $\alpha$ is understood to be the eigenvalue of the annihilation operator on a coherent state, and $\alpha$ and $\alpha^{*}$ replace $\hat{a}$ and $\hat{a}^{\dagger}$ formally in the power series expansion of $\hat{g}$. The left side of the above equation is an expectation value in the Hilbert space whereas the right hand side is an expectation value with respect to the quasiprobability distribution.

We may write each of these explicitly for better clarity. Let $\hat{\rho}$ be the density operator and $\bar{\Omega}$ be the ordering reciprocal to Ω. The quasiprobability distribution associated with Ω, written here $f_{\Omega}(\alpha,\alpha^{*})$, is constructed, at least formally, from $\hat{\rho}$ with the help of the reciprocal ordering $\bar{\Omega}$. The above framed equation becomes

$$\operatorname{Tr}\!\left(\hat{\rho}\,\hat{g}_{\Omega}(\hat{a},\hat{a}^{\dagger})\right) \;=\; \int g_{\Omega}(\alpha,\alpha^{*})\, f_{\Omega}(\alpha,\alpha^{*})\, d^{2}\alpha.$$

For example, let Ω be the normal order. This means that $\hat{g}$ can be written in a power series of the following form:

$$\hat{g}_{N}(\hat{a},\hat{a}^{\dagger}) \;=\; \sum_{n,m} c_{nm}\, \hat{a}^{\dagger\,n}\,\hat{a}^{m}.$$

The quasiprobability distribution associated with the normal order is the Glauber–Sudarshan P representation. In these terms, we arrive at

$$\operatorname{Tr}\!\left(\hat{\rho}\,\hat{g}_{N}(\hat{a},\hat{a}^{\dagger})\right) \;=\; \int P(\alpha) \sum_{n,m} c_{nm}\, \alpha^{*\,n}\,\alpha^{m}\, d^{2}\alpha.$$
This theorem implies the formal equivalence between expectation values of normally ordered operators in quantum optics and the corresponding complex numbers in classical optics.
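As a minimal concrete illustration (a standard textbook example rather than one taken from the references), consider the normally ordered photon-number operator $\hat{a}^{\dagger}\hat{a}$. The theorem gives

$$\langle \hat{a}^{\dagger}\hat{a}\rangle \;=\; \operatorname{Tr}\!\left(\hat{\rho}\,\hat{a}^{\dagger}\hat{a}\right) \;=\; \int P(\alpha)\,|\alpha|^{2}\, d^{2}\alpha,$$

so the quantum-mechanical mean photon number is obtained as the classical-looking average of the intensity $|\alpha|^{2}$ over the Glauber–Sudarshan distribution.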
References
Quantum optics
Theorems in quantum mechanics | Optical equivalence theorem | [
"Physics",
"Mathematics"
] | 332 | [
"Theorems in quantum mechanics",
"Equations of physics",
"Quantum optics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Physics theorems"
] |
5,907,652 | https://en.wikipedia.org/wiki/Hardware%20bug | A hardware bug is a bug in computer hardware. It is the hardware counterpart of software bug, a defect in software. A bug is different from a glitch which describes an undesirable behavior as more quick, transient and repeated than constant, and different from a quirk which is a behavior that may be considered useful even though not intentionally designed.
Errata, corrections to the documentation, may be published by the manufacturer to describe hardware bugs, and errata is sometimes used as a term for the bugs themselves.
History
Unintended operation
Sometimes users take advantage of the unintended or undocumented operation of hardware to serve some purpose, in which case a flaw may be considered a feature. This gives rise to the often ironically employed acronym INABIAF, "It's Not A Bug It's A Feature". For example, undocumented instructions, known as illegal opcodes, on the MOS Technology 6510 of the Commodore 64 and MOS Technology 6502 of the Apple II computers are sometimes utilized.
Security vulnerabilities
Some flaws in hardware may lead to security vulnerabilities where memory protection or other features fail to work properly. Starting in 2017 a series of security vulnerabilities were found in the implementations of speculative execution on common processor architectures that allowed a violation of privilege level.
In 2019, researchers discovered an undocumented manufacturer debugging mode, known as VISA, on Intel Platform Controller Hubs (known as chipsets), and found that the mode could be made accessible with a normal motherboard, possibly leading to a security vulnerability.
Pentium bugs
The Intel Pentium series of CPUs had two well-known bugs discovered after it was brought to market, the FDIV bug affecting floating point division which resulted in a recall in 1994, and the F00F bug discovered in 1997 which causes the processor to stop operating until rebooted.
References
Hardware bugs
Engineering concepts | Hardware bug | [
"Engineering"
] | 390 | [
"nan"
] |
5,908,991 | https://en.wikipedia.org/wiki/Mass%20chromatogram | A mass chromatogram is a representation of mass spectrometry data as a chromatogram, where the x-axis represents time and the y-axis represents signal intensity. The source data contains mass information; however, it is not graphically represented in a mass chromatogram in favor of visualizing signal intensity versus time. The most common use of this data representation is when mass spectrometry is used in conjunction with some form of chromatography, such as in liquid chromatography–mass spectrometry or gas chromatography–mass spectrometry. In this case, the x-axis represents retention time, analogous to any other chromatogram. The y-axis represents signal intensity or relative signal intensity. There are many different types of metrics that this intensity may represent, depending on what information is extracted from each mass spectrum.
Total ion current chromatogram (TICC)
The total ion current chromatogram (TICC) represents the summed intensity across the entire range of masses being detected at every point in the analysis. The range is typically several hundred mass-to-charge units or more. In complex samples, the TICC often provides limited information as multiple analytes elute simultaneously, obscuring individual species.
Base peak chromatogram
The base peak chromatogram is similar to the TICC, however it monitors only the most intense peak in each spectrum. This means that the base peak chromatogram represents the intensity of the most intense peak at every point in the analysis. Base peak chromatograms often have a cleaner look and thus are more informative than TIC chromatograms because the background is reduced by focusing on a single analyte at every point.
Extracted-ion chromatogram (EIC or XIC)
In an extracted-ion chromatogram (EIC or XIC), also called a reconstructed-ion chromatogram (RIC), one or more m/z values representing one or more analytes of interest are recovered ('extracted') from the entire data set for a chromatographic run. The total intensity or base peak intensity within a mass tolerance window around a particular analyte's mass-to-charge ratio is plotted at every point in the analysis. The size of the mass tolerance window typically depends on the mass accuracy and mass resolution of the instrument collecting the data. This is useful for re-examining data to detect previously-unsuspected analytes, to highlight potential isomers, resolve suspected co-eluting substances, or to provide clean chromatograms of compounds of interest. An extracted-ion chromatogram is generated by separating the ions of interest from a data file containing the full mass spectrum over time after the fact; this is different from selected-ion chromatograms, discussed below, in which data is collected only for specific m/z values. A closely related term is extracted-compound chromatogram (ECC).
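The difference between a total ion current chromatogram, a base peak chromatogram, and an extracted-ion chromatogram is essentially a difference in how each scan is reduced to a single intensity value. The following is a minimal illustrative sketch (not taken from any software package; the scan layout, variable names, and tolerance value are assumptions made for this example) of how the three traces could be computed from centroided scan data:

```python
# Illustrative sketch: computing TIC, base peak, and extracted-ion (XIC) traces
# from a list of centroided scans. Each scan is assumed to be a tuple of
# (retention_time, mz_array, intensity_array); these names and the tolerance
# value are assumptions for this example, not part of the article.
import numpy as np

def chromatograms(scans, target_mz=None, tol=0.01):
    times, tic, bpc, xic = [], [], [], []
    for rt, mz, inten in scans:
        times.append(rt)
        tic.append(inten.sum())                         # total ion current: sum over all m/z
        bpc.append(inten.max() if inten.size else 0.0)  # base peak: most intense signal in the scan
        if target_mz is not None:
            window = np.abs(mz - target_mz) <= tol      # mass tolerance window around the analyte
            xic.append(inten[window].sum())             # extracted-ion intensity at this time point
    return np.array(times), np.array(tic), np.array(bpc), np.array(xic)

# Two synthetic scans, e.g. an analyte near m/z 250.05 eluting around 0.5-0.6 min:
scans = [
    (0.5, np.array([100.0, 250.05, 400.2]), np.array([10.0, 80.0, 5.0])),
    (0.6, np.array([100.0, 250.06, 400.2]), np.array([12.0, 95.0, 4.0])),
]
times, tic, bpc, xic = chromatograms(scans, target_mz=250.05, tol=0.02)
```

Narrowing the tolerance window tightens the XIC around the analyte of interest, at the cost of greater sensitivity to mass-calibration errors.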
Selected-ion monitoring chromatogram (SIM)
A selected-ion monitoring (SIM) chromatogram is similar to an EIC/XIC, with the exception that the mass spectrometer is operated in SIM mode, such that only preselected m/z values are detected in the analysis. SIM experiments can be performed using mass spectrometry (MS) or tandem mass spectrometry (MS/MS) instruments. They are more common on MS instruments. This differs significantly from the extracted-ion chromatogram mentioned above in that only data for the ion(s) of interest are collected in a SIM experiment; for extracted-ion chromatograms (EIC or XIC), data for an entire mass range are collected during the run and then examined for analytes of interest after the completion of the run.
Selected-reaction monitoring chromatogram (SRM, MRM)
The selected-reaction monitoring (SRM) experiment is very similar to the SIM experiment except that tandem mass spectrometry is used and a specific product ion of a specific parent ion is detected. The mass of the parent analyte is first selected while other ions are filtered away. The parent analyte ion is then fragmented in the gas phase and a specific fragment ion is monitored. This experiment has very high specificity because the SRM chromatogram represents only ions of a particular mass that fragment in a manner that produces a very specific product mass. This type of experiment can only be performed using tandem mass spectrometry. Technological progress in the MS/MS area led to the development of MRM (multiple reaction monitoring), which allows simultaneous detection of several coeluting analytes with different parent and/or product ions.
See also
Mass spectrum
References
Mass spectrometry | Mass chromatogram | [
"Physics",
"Chemistry"
] | 1,005 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
3,301,527 | https://en.wikipedia.org/wiki/Ventricular%20assist%20device | A ventricular assist device (VAD) is an electromechanical device that provides support for cardiac pump function, which is used either to partially or to completely replace the function of a failing heart. VADs can be used in patients with acute (sudden onset) or chronic (long standing) heart failure, which can occur due to coronary artery disease, atrial fibrillation, valvular disease, and other conditions.
Categorization of VADs
VADs may be used to manage a variety of cardiac diseases and can be categorized based on which ventricle the device is assisting, and whether the VAD will be temporary or permanent.
Ventricular Assistance
First, VADs can be categorized based on whether they are designed to assist the right ventricle (RVAD) or the left ventricle (LVAD) or to both ventricles (BiVAD). The type of VAD implanted depends on the type of underlying heart disease (e.g. patients with right ventricular failure from pulmonary arterial hypertension may require an RVAD, versus those with left ventricular failure from a myocardial infarction may require an LVAD). The LVAD is the most common device applied to a defective heart (it is sufficient in most cases; the right side of the heart is then often able to make use of the heavily increased blood flow), but when the pulmonary arterial resistance is high, then an (additional) right ventricular assist device (RVAD) might be necessary to resolve the problem of cardiac circulation. If both an LVAD and an RVAD are needed a BiVAD is normally used, rather than a separate LVAD and RVAD.
Duration
VADs can further be divided by the duration of their use (i.e. temporary versus permanent). Some VADs are for short-term use, typically for patients recovering from myocardial infarction (heart attack) and for patients recovering from cardiac surgery; some are for long-term use (months to years to perpetuity), typically for patients with advanced heart failure.
Temporary use of VADs may vary in scale (e.g. days to months) depending on a patient's condition. Certain types of VADs may be used in patients with signs of acute (sudden onset) heart failure or cardiogenic shock as a result of an infarction, valvular disease, or other causes. In patients with acute signs of heart failure, small percutaneous (introduced to the heart through the skin into a blood vessel rather than through an incision) VADs such as the Impella 5.5, Impella RP, and others can be introduced to either the left or right ventricle (depending on the patient-specific needs) using a wire that is introduced through the arteries or veins of the neck, axilla, or groin.
Long-term use of VADs may also vary in its scale (i.e. months to permanently). VADs that are intended for long-term use are also termed "durable" VADs, due to their design to function for longer periods of time compared to short-term VADs (e.g. Impella, etc.). The long-term VADs can be used in a variety of scenarios. First, VADs may be used as bridge to transplantation (BTT) – keeping the patient alive, and in reasonably good condition, and able to await heart transplant outside of the hospital. Other "bridges" include bridge to candidacy (used when a patient has a contraindication to heart transplantation but is expected to improve with the VAD's support), bridge to decision (used to support a patient while their candidacy status is decided), and bridge to recovery (used until a patient’s native heart function improves, after which the device would be removed). In some instances, VADs are also used as destination therapy (DT), which indicates that the VAD will remain implanted indefinitely. VADs as destination therapy are used in circumstances where patients are not candidates for transplantation and will thus rely on the VAD for the remainder of their life.
Other Cardiac Support Devices
Some devices are designed to support the heart and its various components/function but are not considered VADs, below are some common examples.
Pacemakers and Internal Cardiac Defibrillators (ICDs) – the function of a VAD differs from that of an artificial cardiac pacemaker in that a VAD pumps blood, whereas a pacemaker delivers electrical impulses to the heart muscle.
Total Artificial Heart – VADs are distinct from artificial hearts, which are designed to assume cardiac function, and generally require the removal of the patient's heart.
Extracorporeal Membrane Oxygenation (ECMO) – a form of mechanical circulatory support typically used in critically ill patients in cardiogenic shock that is established by introducing cannulae into the arteries and/or veins of the neck, axilla or groin. Generally, a venous cannula pulls deoxygenated blood from the patient's veins into an oxygenating device at the patient's bedside, after which a motor-powered pump moves the oxygenated blood back to the body (either into a vein or the arterial system, typically the aorta). There are different ECMO configurations (venoarterial ECMO, venovenous ECMO, etc.), but the end goal remains the same: to oxygenate blood and return it to the body. In this sense, the ECMO circuit bypasses one or both ventricles and is therefore not in contact with the patient's native ventricle and is generally not considered a type of VAD.
Design
Pumps
The pumps used in VADs can be divided into two main categories – pulsatile pumps, which mimic the natural pulsing action of the heart, and continuous-flow pumps. Pulsatile VADs use positive displacement pumps. In some pulsatile pumps (that use compressed air as an energy source), the volume occupied by blood varies during the pumping cycle. If the pump is contained inside the body then a vent tube to the outside air is required.
Continuous-flow VADs are smaller and have proven to be more durable than pulsatile VADs. They normally use either a centrifugal pump or an axial flow pump. Both types have a central rotor containing permanent magnets. Controlled electric currents running through coils contained in the pump housing apply forces to the magnets, which in turn cause the rotors to spin. In the centrifugal pumps, the rotors are shaped to accelerate the blood circumferentially and thereby cause it to move toward the outer rim of the pump, whereas in the axial flow pumps the rotors are more or less cylindrical with blades that are helical, causing the blood to be accelerated in the direction of the rotor's axis.
An important issue with continuous flow pumps is the method used to suspend the rotor. Early versions used solid bearings; however, newer pumps, some of which are approved for use in the EU, use either magnetic levitation ("maglev") or hydrodynamic suspension.
History
The first left ventricular assist device (LVAD) system was created by Domingo Liotta at Baylor College of Medicine in Houston in 1962. The first LVAD was implanted in 1963 by Liotta and E. Stanley Crawford. The first successful implantation of an LVAD was completed in 1966 by Liotta along with Dr. Michael E. DeBakey. The patient was a 37-year-old woman, and a paracorporeal (external) circuit was able to provide mechanical support for 10 days after the surgery. The first successful long-term implantation of an LVAD was conducted in 1988 by Dr. William F. Bernhard of Boston Children's Hospital Medical Center and Thermedics, Inc. of Woburn, MA, under a National Institutes of Health (NIH) research contract which developed HeartMate, an electronically controlled assist device. This was funded by a three-year $6.2 million contract to Thermedics and Children's Hospital, Boston, MA, from the National Heart, Lung, and Blood Institute, a program of the NIH. The early VADs emulated the heart by using a "pulsatile" action where blood is alternately sucked into the pump from the left ventricle then forced out into the aorta. Devices of this kind include the HeartMate IP LVAS, which was approved for use in the US by the Food and Drug Administration (FDA) in October 1994. These devices began to gain acceptance in the late 1990s as heart surgeons including Eric Rose, O. H. Frazier and Mehmet Oz began popularizing the concept that patients could live outside the hospital. Media coverage of outpatients with VADs underscored these arguments.
More recent work has concentrated on continuous-flow pumps, which can be roughly categorized as either centrifugal pumps or axial flow impeller driven pumps. These pumps have the advantage of greater simplicity resulting in smaller size and greater reliability. These devices are referred to as second-generation VADs. A side effect is that the user will not have a pulse,
or that the pulse intensity will be seriously reduced.
A very different approach in the early stages of development was the use of an inflatable cuff around the aorta. Inflating the cuff contracts the aorta and deflating the cuff allows the aorta to expand – in effect the aorta becomes a second left ventricle. A proposed refinement is to use the patient's skeletal muscle, driven by a pacemaker, to power this device – which would make it truly self-contained. However, a similar operation (cardiomyoplasty) was tried in the 1990s with disappointing results.
At one time Peter Houghton was the longest surviving recipient of a VAD for permanent use. He received an experimental Jarvik 2000 LVAD in June 2000. Since then, he completed a 91-mile charity walk, published two books, lectured widely, hiked in the Swiss Alps and the American West, flew in an ultra-light aircraft, and traveled extensively around the world. He died of acute kidney injury in 2007 at the age of 69. Since then, patient Lidia Pluhar has exceeded Houghton's longevity on a VAD, having received a HeartMate II in March 2011 at age 75, and currently continues to use the device. In August 2007 the International Consortium of Circulatory Assist Clinicians (ICCAC) was founded by Anthony "Tony" Martin, a nurse practitioner (NP) and clinical manager of the mechanical circulatory support (MCS) program at Newark Beth Israel Medical Center, Newark, N.J. The ICCAC was developed as a 501c3 organization, dedicated to the development of best practices and education related to the care of individuals requiring MCS as a bridge to heart transplantation or as destination therapy in those individuals who don't meet the criteria for heart transplantation.
Studies and outcomes
Recent developments
In July 2009 in England, surgeons removed a donor heart that had been implanted in a toddler next to her native heart, after her native heart had recovered. This technique suggests mechanical assist device, such as an LVAD, can take some or all the work away from the native heart and allow it time to heal.
In July 2009, 18-month follow-up results from the HeartMate II Clinical Trial concluded that continuous-flow LVAD provides effective hemodynamic support for at least 18 months in patients awaiting transplantation, with improved functional status and quality of life.
Heidelberg University Hospital reported in July 2009 that the first HeartAssist5, known as the modern version of the DeBakey VAD, was implanted there. The HeartAssist5 weighs 92 grams, is made of titanium and plastic, and serves to pump blood from the left ventricle into the aorta.
A phase 1 clinical trial is underway (as of August 2009), consisting of patients with coronary artery bypass grafting and patients in end-stage heart failure who have a left ventricular assist device. The trial involves testing a patch called Anginera which contains cells that secrete hormone-like growth factors stimulating other cells to grow. The patches are seeded with heart muscle cells and then implanted onto the heart with the goal of getting the muscle cells to start communicating with native tissues in a way that allows for regular contractions.
In September 2009, a New Zealand news outlet, Stuff, reported that in another 18 months to two years, a new wireless device will be ready for clinical trial that will power VADs without direct contact. If successful, this may reduce the chance of infection as a result of the power cable through the skin.
The National Institutes of Health (NIH) awarded a $2.8 million grant to develop a "pulse-less" total artificial heart using two VADs by Micromed, initially created by Michael DeBakey and George Noon. The grant was renewed for a second year of research in August 2009. The total artificial heart was created using two HeartAssist5 VADs, whereby one VAD pumps blood throughout the body and the other circulates blood to and from the lungs.
HeartWare International announced in August 2009 that it had surpassed 50 implants of their HeartWare Ventricular Assist System in their ADVANCE Clinical Trial, an FDA-approved IDE study. The study is to assess the system as bridge-to-transplantation for patients with end-stage heart failure. The study, Evaluation of the HeartWare LVAD System for the Treatment of Advanced Heart Failure, is a multi-center study that started in May 2009.
On 27 June 2014 Hannover Medical School in Hannover, Germany performed the first human implant of HeartMate III under the direction of Professor Axel Haverich M.D., chief of the Cardiothoracic, Transplantation and Vascular Surgery Department and surgeon Jan Schmitto, M.D., PhD
On 21 January 2015 a study was published in Journal of American College of Cardiology suggesting that long-term use of LVAD may induce heart regeneration.
Hall-of-Fame Baseball Player Rod Carew had congestive heart failure and was fitted with a HeartMate II. He struggled with wearing the equipment, so he joined efforts to help supply the most helpful wear to assist the HeartMate II and HeartMate III.
In December 2018, two clinical cases were performed in Kazakhstan in which a fully wireless LVAD system, a Jarvik 2000 combined with the Leviticus Cardio FiVAD (Fully Implantable Ventricular Assist Device), was implanted in humans. The wireless power transfer technology is based on a technique called Coplanar Energy Transfer (CET), which is capable of transferring energy from an external transmitting coil to a small receiving coil that is implanted in the human body. In the early postoperative phase, CET operation was accomplished as expected in both patients; it powered the pump and kept the battery charged to allow medical and nursing procedures. The Leviticus Cardio FiVAD system with wireless, coplanar energy transfer technology ameliorates infection risk by eliminating the driveline while providing successful energy transmission, allowing for substantial (approximately 6 hours) unholstered support of the LVAD.
On June 3, 2021, Medtronic issued an urgent medical device notice stating that their HVAD devices should no longer be implanted due to higher rates of neurological events and mortality with the HVAD vs. other available devices
The majority of VADs on the market today are somewhat bulky. The smallest device approved by the FDA, the HeartMate II, weighs about and measures . This has proven particularly important for women and children, for whom alternatives would have been too large. As of 2017, HeartMate III has been approved by the FDA. It is smaller than its predecessor HeartMate II and uses a full maglev impeller instead of the cup-and-ball bearing system found in HeartMate II.
The HeartWare HVAD works similarly to the VentrAssist—albeit much smaller and not requiring an abdominal pocket to be implanted into. The device has obtained CE Mark in Europe, and FDA approval in the U.S. The HeartWare HVAD could be implanted through limited access without sternotomy, however in 2021 Medtronic discontinued the device.
In a small number of cases left ventricular assist devices, combined with drug therapy, have enabled the heart to recover sufficiently for the device to be able to be removed (explanted). Several surgical approaches, including interventional decommissioning, off-pump explantation using a custom-made plug and complete LVAD removal through redo sternotomy, have been described with a 5-year survival of up to 80%.
HeartMate II LVAD pivotal study
A series of studies involving the use of the HeartMate II LVAD have proven useful in establishing the viability and risks of using LVADs for bridge-to-transplantation and destination therapy.
The HeartMate II pivotal trial began in 2005 and included the evaluation of HeartMate II for two indications: Bridge to transplantation (BTT) and destination therapy (DT), or long-term, permanent support. Thoratec Corp. announced that this was the first time the FDA had approved a clinical trial to include both indications in one protocol.
A multicenter study in the United States from 2005 to 2007 with 113 patients (of which 100 reported principal outcomes) showed that significant improvements in function were prevalent after three months, and a survival rate of 68% after twelve months.
Based on one-year follow up data from the first 194 patients enrolled in the trial, the FDA approved HeartMate II for bridge-to-transplantation. The trial provided clinical evidence of improved survival rates and quality of life for a broad range of patients.
Eighteen-month follow up data on 281 patients who had either reached the study end-point or completed 18 months of post-operative follow-up showed improved survival, less frequent adverse events and greater reliability with continuous flow LVADS compared to pulsatile flow devices. Of the 281 patients, 157 patients had undergone transplant, 58 patients were continuing with LVADs in their body and seven patients had the LVAD removed because their heart recovered; the remaining 56 had died. The results showed that the patients' NYHA Class of heart failure had significantly improved after six months of LVAD support compared to the pre-LVAD baseline. Although this trial involved bridge-to-transplant indication, the results provide early evidence that continuous flow LVADs have advantages in terms of durability and reliability for patients receiving mechanical support for destination therapy.
Following the FDA approval of HeartMate II LVAD for bridge-to-transplantation purposes, a post-approval ("registry") study was undertaken to assess the efficacy of the device in a commercial setting. The study found that the device improved outcomes, both compared to other LVAD treatments and baseline patients. Specifically, HeartMate II patients showed lower creatinine levels, 30-day survival rates were considerably higher at 96%, and 93% reached successful outcomes (transplant, cardiac recovery, or long-term LVAD).
HARPS
The Harefield Recovery Protocol Study (HARPS) is a clinical trial to evaluate whether advanced heart failure patients requiring VAD support can recover sufficient myocardial function to allow device removal (known as explantation). HARPS combines an LVAD (the HeartMate XVE) with conventional oral heart failure medications, followed by the novel β2 agonist clenbuterol. This opens the possibility that some advanced heart failure patients may forgo heart transplantation.
REMATCH
The REMATCH (Randomized Evaluation of Mechanical Assistance for the Treatment of Congestive Heart Failure) clinical trial began in May 1998 and ran through July 2001 in 20 cardiac transplant centers around the USA. The trial was designed to compare long-term implantation of left ventricular assist devices with optimal medical management for patients with end-stage heart failure who require, but do not qualify to receive, cardiac transplantation. As a result of the clinical outcomes, the device received FDA approval for both indications, in 2001 and 2003, respectively.
According to a retrospective cohort study comparing patients treated with a left ventricular assist device versus inotrope therapy while awaiting heart transplantation, the group treated with LVAD had improved clinical and metabolic function at the time of transplant with better blood pressure, sodium, blood urea nitrogen, and creatinine. After transplant, 57.7% of the inotrope group had kidney failure versus 16.6% in the LVAD group; 31.6% of the inotrope group had right heart failure versus 5.6% in the LVAD group; and event-free survival was 15.8% in the inotrope group versus 55.6% in the LVAD group.
Complications and side effects
There are a number of potential risks associated with VADs. The most common of these are bleeding events, stroke, pump thrombosis, and infections.
Bleeding
Because VADs generally expose flowing blood to non-biologic surfaces (e.g., metal or synthetic polymers), they can promote the formation of blood clots, also referred to as thrombosis. Due to these clotting abnormalities, anticoagulation medications are used to decrease the risk of thrombosis. One device, the HeartMate XVE, is designed with a biologic surface derived from fibrin and does not require long-term anticoagulation (aside from aspirin); however, this biologic surface may also predispose the patient to infection through selective reduction of certain types of leukocytes. The device was phased out of use starting in 2009 in favor of newer devices.
Due to the use of anticoagulation, bleeding is the most common early postoperative complication after implantation or explantation of VADs, necessitating reoperation in up to 60% of recipients. Bleeding most commonly occurs in the gastrointestinal tract, resulting in dark or bright red stools; if head trauma occurs, intracranial bleeding may also develop. Bleeding events may require massive blood transfusions and carry risks including infection, pulmonary insufficiency, increased costs, right heart failure, allosensitization, and viral transmission, and can prove fatal or preclude transplantation. Bleeding events also worsen one-year Kaplan–Meier survival. Beyond the complexity of the patient population and of the procedures themselves, the devices may contribute to the severe coagulopathy that can ensue when they are implanted.
Ischemic Stroke and Pump Thrombosis
In patients with VADs, ischemic strokes and pump thrombosis occur when anticoagulation is inadequate to counteract the blood's tendency to form clots when exposed to the foreign materials in a VAD. Stroke risk varies based on the type of VAD in place and other risk factors; both atrial fibrillation and high blood pressure can increase a patient's risk of stroke in the setting of VAD use. However, it is difficult to measure blood pressure in LVAD patients using standard blood pressure monitoring, and current practice is to measure it by Doppler ultrasonography in outpatients and by invasive arterial blood pressure monitoring in inpatients.
Infections
Infections in VAD patients occur because the artificial surfaces of the devices serve as a substrate for bacterial or fungal growth. Most infections are classified as driveline infections, which occur where the device's power cord enters the skin (usually in the upper abdomen).
VAD-related infection can be caused by a large number of different organisms:
Gram positive bacteria (Staphylococci, especially Staph. aureus, Enterococci)
Gram negative bacteria (Pseudomonas aeruginosa, Enterobacter species, Klebsiella species)
Fungi, especially Candida species
Other immune system related problems include immunosuppression. Some of the polyurethane components used in the devices cause the deletion of a subset of immune cells when blood comes in contact with them. This predisposes the patient to fungal and some viral infections necessitating appropriate prophylactic therapy.
Considering the multitude of risks and lifestyle modifications associated with ventricular assist device implants, it is important for prospective patients to be informed prior to decision making. In addition to physician consult, various Internet-based patient directed resources are available to assist in patient education.
List of implantable VAD devices
See also
Intra-aortic balloon pump
Pump thrombosis
References
External links
MyLVAD.com—Non-branded site with information on various LVADs
DECIDE-LVAD Patient Decision Aid - Non-branded site with information on decision making for LVAD
Implants (medicine)
Cardiology
Prosthetics
Interventional cardiology
Medical devices | Ventricular assist device | [
"Biology"
] | 5,276 | [
"Medical devices",
"Medical technology"
] |
3,301,590 | https://en.wikipedia.org/wiki/Enterotoxigenic%20Escherichia%20coli | Enterotoxigenic Escherichia coli (ETEC) is a type of Escherichia coli and one of the leading bacterial causes of diarrhea in the developing world, as well as the most common cause of travelers' diarrhea. Insufficient data exists, but conservative estimates suggest that each year, about 157,000 deaths occur, mostly in children, from ETEC. A number of pathogenic isolates are termed ETEC, but the main hallmarks of this type of bacterium are expression of one or more enterotoxins and presence of fimbriae used for attachment to host intestinal cells. The bacterium was identified by the Bradley Sack lab in Kolkata in 1968.
Signs and symptoms
Infection with ETEC can cause profuse, watery diarrhea with no blood or leukocytes and abdominal cramping. Fever, nausea with or without vomiting, chills, loss of appetite, headache, muscle aches and bloating can also occur, but are less common.
Enterotoxins
Enterotoxins produced by ETEC include heat-labile enterotoxin (LT) and heat-stable enterotoxin (ST).
Prevention
To date, no licensed vaccines specifically target ETEC, though several are in various stages of development. Studies indicate that protective immunity to ETEC develops after natural or experimental infection, suggesting that vaccine-induced ETEC immunity should be feasible and could be an effective preventive strategy. Prevention through vaccination is a critical part of the strategy to reduce the incidence and severity of diarrheal disease due to ETEC, particularly among children in low-resource settings. The development of a vaccine against this infection has been hampered by technical constraints, insufficient support for coordination, and a lack of market forces for research and development. Most vaccine development efforts are taking place in the public sector or as research programs within biotechnology companies. ETEC is a longstanding priority and target for vaccine development for the World Health Organization.
Management
Treatment for ETEC infection includes rehydration therapy and antibiotics, although ETEC is frequently resistant to common antibiotics. Improved sanitation is also key. Since the transmission of this bacterium is fecal contamination of food and water supplies, one way to prevent infection is by improving public and private health facilities. Another simple prevention of infection is by drinking factory bottled water—this is especially important for travelers and traveling military—though it may not be feasible in developing countries, which carry the greatest disease burden.
See also
Gastroenteritis
References
External links
US Centers for Disease Control and Prevention: Enterotoxigenic Escherichia coli
Vaccine Resource Library: Shigellosis and enterotoxigenic Escherichia coli (ETEC)
Escherichia coli | Enterotoxigenic Escherichia coli | [
"Biology"
] | 554 | [
"Model organisms",
"Escherichia coli"
] |
3,302,845 | https://en.wikipedia.org/wiki/Uniform-machines%20scheduling | Uniform machine scheduling (also called uniformly-related machine scheduling or related machine scheduling) is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. We are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on m different machines. The goal is to minimize the makespan - the total time required to execute the schedule. The time that machine i needs in order to process job j is denoted by pi,j. In the general case, the times pi,j are unrelated, and any matrix of positive processing times is possible. In the specific variant called uniform machine scheduling, some machines are uniformly faster than others. This means that, for each machine i, there is a speed factor si, and the run-time of job j on machine i is pi,j = pj / si.
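To make the speed model concrete, the following is a minimal Python sketch (not from any of the cited works; the job lengths, machine speeds, and assignment are made-up example values) that evaluates the makespan of a given assignment using the relation pi,j = pj / si described above.

```python
def makespan(assignment, p, s):
    """Makespan of a schedule on uniform machines.

    assignment[j] is the index of the machine that runs job j;
    job j takes p[j] / s[i] time units on machine i.
    """
    load = [0.0] * len(s)
    for j, i in enumerate(assignment):
        load[i] += p[j] / s[i]
    return max(load)

p = [4.0, 3.0, 2.0, 6.0]   # hypothetical job processing requirements p_j
s = [1.0, 2.0]             # hypothetical machine speeds s_i (machine 1 is twice as fast)
print(makespan([0, 1, 1, 1], p, s))  # 5.5: machine 0 runs job 0, machine 1 runs the rest
```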
In the standard three-field notation for optimal job scheduling problems, the uniform-machine variant is denoted by Q in the first field. For example, the problem denoted by "Q||C_max" is a uniform machine scheduling problem with no constraints, where the goal is to minimize the maximum completion time. A special case of uniform machine scheduling is identical-machines scheduling, in which all machines have the same speed. This variant is denoted by P in the first field.
In some variants of the problem, instead of minimizing the maximum completion time, it is desired to minimize the average completion time (averaged over all n jobs); this variant is denoted by Q||∑C_j. More generally, when some jobs are more important than others, it may be desired to minimize a weighted average of the completion time, where each job has a different weight. This is denoted by Q||∑w_jC_j.
Algorithms
Minimizing the average completion time
Minimizing the average completion time can be done in polynomial time:
The SPT algorithm (Shortest Processing Time First) sorts the jobs by their length, shortest first, and then assigns them to the processor with the earliest end time so far. It runs in time O(n log n), and minimizes the average completion time on identical machines, P||∑C_j (see the sketch after this list).
Horowitz and Sahni present an exact algorithm, with run time O(n log mn), for minimizing the average completion time on uniform machines, Q||∑C_j.
Bruno, Coffman and Sethi present a polynomial-time algorithm for minimizing the average completion time on unrelated machines, R||∑C_j.
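The sketch below illustrates the SPT rule from the first item of this list on identical machines; it is an illustrative Python rendering with made-up job lengths, not the Horowitz–Sahni or Bruno–Coffman–Sethi algorithms.

```python
import heapq

def spt_total_completion(p, m):
    """Shortest Processing Time first on m identical machines.

    Jobs are taken shortest-first and each is placed on the machine that
    frees up earliest; returns the total completion time (divide by the
    number of jobs to get the average completion time).
    """
    free = [0.0] * m            # time at which each machine next becomes free
    heapq.heapify(free)
    total = 0.0
    for pj in sorted(p):        # shortest jobs first
        t = heapq.heappop(free) + pj
        total += t              # this job completes at time t
        heapq.heappush(free, t)
    return total

print(spt_total_completion([2, 1, 4, 3], m=2))  # 13.0, i.e. an average completion time of 3.25
```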
Minimizing the weighted-average completion time
Minimizing the weighted average completion time is NP-hard even on identical machines, by reduction from the knapsack problem. It is NP-hard even if the number of machines is fixed and at least 2, by reduction from the partition problem.
Sahni presents an exponential-time algorithm and a polynomial-time approximation algorithm for identical machines.
Horowitz and Sahni presented:
Exact dynamic programming algorithms for minimizing the weighted-average completion time on uniform machines. These algorithms run in exponential time.
Polynomial-time approximation schemes, which for any ε>0 attain at most (1+ε)OPT. For minimizing the weighted-average completion time on two uniform machines, the run-time is polynomial in n and 1/ε, so it is an FPTAS. They claim that their algorithms can be easily extended for any number of uniform machines, but do not analyze the run-time in this case. They do not present an algorithm for the weighted-average completion time on unrelated machines.
Minimizing the maximum completion time (makespan)
Minimizing the maximum completion time is NP-hard even for identical machines, by reduction from the partition problem.
A constant-factor approximation is attained by the Longest-processing-time-first algorithm (LPT).
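As a rough illustration of the greedy rule (not the exact algorithm analyzed in the cited work, and with made-up job lengths and speeds), the sketch below applies LPT to uniform machines by placing each job, longest first, on the machine that would complete it earliest.

```python
def lpt_uniform_makespan(p, s):
    """Longest Processing Time first adapted to uniform machines.

    Jobs are taken longest-first; each is assigned to the machine on which
    it would finish earliest given the current loads and machine speeds.
    Returns the resulting makespan.
    """
    loads = [0.0] * len(s)
    for pj in sorted(p, reverse=True):
        i = min(range(len(s)), key=lambda k: loads[k] + pj / s[k])
        loads[i] += pj / s[i]
    return max(loads)

print(lpt_uniform_makespan([6.0, 4.0, 3.0, 2.0], s=[1.0, 2.0]))  # 5.5 (the optimum here is 5.0)
```

In this small instance the greedy schedule is within 10% of the optimum, illustrating that LPT is an approximation rather than an exact method.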
Horowitz and Sahni presented:
Exact dynamic programming algorithms for minimizing the maximum completion time on both uniform and unrelated machines. These algorithms run in exponential time (recall that these problems are all NP-hard).
Polynomial-time approximation schemes, which for any ε>0 attain at most (1+ε)OPT. For minimizing the maximum completion time on two uniform machines, the run-time is polynomial in n and 1/ε, so it is an FPTAS; the same holds for minimizing the maximum completion time on two unrelated machines. They claim that their algorithms can be easily extended for any number of uniform machines, but do not analyze the run-time in this case.
Hochbaum and Shmoys presented several approximation algorithms for any number of identical machines. Later, they developed a PTAS for uniform machines.
Epstein and Sgall generalized the PTAS for uniform machines to handle more general objective functions. Let Ci (for i between 1 and m) be the makespan of machine i in a given schedule. Instead of minimizing the objective function max(Ci), one can minimize the objective function max(f(Ci)), where f is any fixed function. Similarly, one can minimize the objective function sum(f(Ci)).
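For illustration only (this merely evaluates the objectives on example machine completion times; it is not the PTAS itself), the snippet below computes max(f(Ci)) and sum(f(Ci)) for an arbitrary fixed function f.

```python
def generalized_objectives(loads, f):
    """Evaluate max(f(C_i)) and sum(f(C_i)) for given machine completion times."""
    values = [f(c) for c in loads]
    return max(values), sum(values)

loads = [4.0, 5.5]                                      # example completion times C_i
print(generalized_objectives(loads, lambda c: c))       # (5.5, 9.5): ordinary makespan and total load
print(generalized_objectives(loads, lambda c: c * c))   # (30.25, 46.25): e.g. f(C) = C^2
```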
Monotonicity and Truthfulness
In some settings, the machine speed is the machine's private information, and we want to incentivize machines to reveal their true speed, that is, we want a truthful mechanism. An important consideration for attaining truthfulness is monotonicity. It means that, if a machine reports a higher speed, and all other inputs remain the same, then the total processing time allocated to the machine weakly increases. For this problem:
Auletta, De Prisco, Penna and Persiano presented a 4-approximation monotone algorithm, which runs in polytime when the number of machines is fixed.
Ambrosio and Auletta proved that the Longest Processing Time algorithm is monotone whenever the machine speeds are powers of some c ≥ 2, but not when c ≤ 1.78. In contrast, List scheduling is not monotone for c > 2.
Andelman, Azar and Sorani presented a 5-approximation monotone algorithm, which runs in polytime even when the number of machines is variable.
Kovacz presented a 3-approximation monotone algorithm.
Extensions
Dependent jobs: In some cases, the jobs may be dependent. For example, take the case of reading user credentials from console, then use it to authenticate, then if authentication is successful display some data on the console. Clearly one task is dependent upon another. This is a clear case of where some kind of ordering exists between the tasks. In fact it is clear that it can be modelled with partial ordering. Then, by definition, the set of tasks constitute a lattice structure. This adds further complication to the multiprocessor scheduling problem.
Static versus Dynamic: Machine scheduling algorithms are static or dynamic. A scheduling algorithm is static if the scheduling decisions as to which computational tasks will be allocated to which processors are made before running the program. An algorithm is dynamic if these decisions are made at run time. For static scheduling algorithms, a typical approach is to rank the tasks according to their precedence relationships and use a list scheduling technique to schedule them onto the processors (see the sketch after this list).
Multi-stage jobs: In various settings, each job might have several operations that must be executed in parallel. Some such settings are handled by open shop scheduling, flow shop scheduling and job shop scheduling.
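The sketch below illustrates the static list-scheduling idea from the "Static versus Dynamic" item above: tasks are ranked by a topological order of the precedence relation and then greedily started on identical machines once their predecessors have finished. The task names and processing times are hypothetical, and this is only one simple way to realize the approach (it requires Python 3.9+ for graphlib).

```python
from graphlib import TopologicalSorter

def static_list_schedule(tasks, deps, m):
    """Static list scheduling of dependent jobs on m identical machines.

    tasks: dict task -> processing time; deps: dict task -> set of predecessors.
    Tasks are ranked by a topological order, then each starts as soon as all
    its predecessors have finished and a machine is free.
    """
    order = list(TopologicalSorter(deps).static_order())
    machine_free = [0.0] * m       # time at which each machine becomes available
    finish = {}                    # finish time of each scheduled task
    for t in order:
        ready = max((finish[d] for d in deps.get(t, ())), default=0.0)
        i = min(range(m), key=lambda k: machine_free[k])
        start = max(ready, machine_free[i])
        finish[t] = start + tasks[t]
        machine_free[i] = finish[t]
    return finish

# Hypothetical example mirroring the dependent-jobs scenario above:
tasks = {"read_credentials": 1.0, "authenticate": 2.0, "display": 1.0}
deps = {"authenticate": {"read_credentials"}, "display": {"authenticate"}}
print(static_list_schedule(tasks, deps, m=2))
# {'read_credentials': 1.0, 'authenticate': 3.0, 'display': 4.0}
```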
External links
Summary of parallel machine problems without preemption
References
Optimal scheduling
NP-complete problems | Uniform-machines scheduling | [
"Mathematics",
"Engineering"
] | 1,538 | [
"Optimal scheduling",
"Industrial engineering",
"Computational problems",
"Mathematical problems",
"NP-complete problems"
] |
3,303,019 | https://en.wikipedia.org/wiki/Robinson%E2%80%93Dadson%20curves | The Robinson–Dadson curves are one of many sets of equal-loudness contours for the human ear, determined experimentally by D. W. Robinson and R. S. Dadson.
Until recently, it was common to see the term Fletcher–Munson used to refer to equal-loudness contours generally, even though the re-determination carried out by Robinson and Dadson in 1956 became the basis for the ISO standard ISO 226, which was only revised recently.
It is now better to use the term equal-loudness contours as the generic term, especially as a recent survey by ISO redefined the curves in a new standard, ISO 226:2003.
According to the ISO report, the Robinson–Dadson results were the odd one out, differing more from the current standard than did the Fletcher–Munson curves. It comments that it is fortunate that the 40-phon Fletcher–Munson curve, on which the A-weighting standard was based, turns out to have been in good agreement with modern determinations.
The article also comments on the large differences apparent in the low-frequency region, which remain unexplained. Possible explanations are:
The equipment used was not properly calibrated.
The criteria used for judging equal loudness (which is tricky) differed.
Different races actually vary greatly in this respect (possible, and most recent determinations were by the Japanese).
Subjects were not properly rested for days in advance, or were exposed to loud noise while travelling to the tests, which tensed the tensor tympani and stapedius muscles controlling low-frequency mechanical coupling.
See also
A-weighting
ITU-R 468 noise weighting
References
External links
ISO Standard
Fletcher–Munson is not Robinson–Dadson
Full Revision of International Standards for Equal-Loudness Level Contours (ISO 226)
Hearing curves and on-line hearing test
Equal-loudness contours by Robinson and Dadson
Acoustics
Hearing
Audio engineering
Sound
Psychoacoustics | Robinson–Dadson curves | [
"Physics",
"Engineering"
] | 407 | [
"Electrical engineering",
"Audio engineering",
"Classical mechanics",
"Acoustics"
] |
3,303,981 | https://en.wikipedia.org/wiki/Quantum%20gyroscope | A quantum gyroscope is a very sensitive device to measure angular rotation based on quantum mechanical principles. The first of these was built by Richard Packard and his colleagues at the University of California, Berkeley. The extreme sensitivity means that theoretically, a larger version could detect effects like minute changes in the rotational rate of the Earth.
Principle
In 1962, Cambridge University PhD student Brian Josephson hypothesized that an electric current could travel between two superconducting materials even when they were separated by a thin insulating layer.
The term Josephson effect has come to refer generically to the different behaviors that occur in any two weakly connected macroscopic quantum systems—systems composed of molecules that all possess identical wavelike properties.
Among other things, the Josephson effect means that when two superfluids (zero friction fluids) are connected using a weak link and pressure is applied to the superfluid on one side of a weak link, the fluid will oscillate from one side of the weak link to the other.
This phenomenon, known as quantum whistling, occurs when pressure is applied to push a superfluid through a very small hole, somewhat as sound is produced by blowing air through an ordinary whistle. A ring-shaped tube full of superfluid, blocked by a barrier containing a tiny hole, could in principle be used to detect pressure differences caused by changes in rotational motion of the ring, in effect functioning as a sensitive gyroscope. Superfluid whistling was first demonstrated using helium-3, which has the disadvantage of being scarce and expensive, and requiring extremely low temperature (a few thousandths of a Kelvin). Common helium-4, which remains superfluid at 2 Kelvin, is much more practical, but its quantum whistling is too weak to be heard with a single practical-sized hole. This problem was overcome by using barriers with thousands of holes, in effect a chorus of quantum whistles producing sound waves that reinforced one another by constructive interference.
Equation
The rotation-induced phase difference across the weak link is

$\Delta\phi = \frac{4\pi}{\kappa_3}\,\boldsymbol{\Omega}\cdot\mathbf{A}$

where $\boldsymbol{\Omega}$ is the rotation vector, $\mathbf{A}$ is the area vector of the sensing loop, and $\kappa_3$ is the quantum of circulation of helium-3.
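As an order-of-magnitude illustration of the sensitivity claim (the 1 cm² loop area is an assumed example value, not a specification of any actual device), the snippet below evaluates the relation above for Earth's rotation rate.

```python
import math

# Illustrative estimate of the rotation-induced phase across the weak link,
# using dphi = 4*pi * (Omega . A) / kappa3 with the loop normal aligned with
# the rotation axis. The loop area is an assumed example value.
h = 6.626e-34                 # Planck constant, J s
m3 = 5.008e-27                # mass of a helium-3 atom, kg
kappa3 = h / (2 * m3)         # quantum of circulation for helium-3 pairs, ~6.6e-8 m^2/s
omega_earth = 7.292e-5        # Earth's rotation rate, rad/s
area = 1e-4                   # assumed sensing-loop area of 1 cm^2, in m^2

dphi = 4 * math.pi * omega_earth * area / kappa3
print(f"rotation-induced phase shift ~ {dphi:.2f} rad")   # roughly 1.4 rad for this geometry
```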
References
See also
Polariton interferometer
Ring laser gyroscope
Gyroscope
Vibrating structure gyroscope
Inertial measurement unit
Hemispherical resonator gyroscope
Superconductivity
Gyroscopes
Quantum mechanics | Quantum gyroscope | [
"Physics",
"Materials_science",
"Engineering"
] | 479 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Superconductivity",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Applications of quantum mechanics",
"Electrical resistance and conductance"
] |
3,304,705 | https://en.wikipedia.org/wiki/HPV%20vaccine | Human papillomavirus (HPV) vaccines are vaccines intended to provide acquired immunity against infection by certain types of human papillomavirus (HPV). The first HPV vaccine became available in 2006. Currently there are six licensed HPV vaccines: three bivalent (protect against two types of HPV), two quadrivalent (against four), and one nonavalent vaccine (against nine). All have excellent safety profiles and are highly efficacious, or have met immunobridging standards. All of them protect against HPV types 16 and 18, which are together responsible for approximately 70% of cervical cancer cases globally. The quadrivalent vaccines provide additional protection against HPV types 6 and 11. The nonavalent provides additional protection against HPV types 31, 33, 45, 52 and 58. It is estimated that HPV vaccines may prevent 70% of cervical cancer, 80% of anal cancer, 60% of vaginal cancer, 40% of vulvar cancer, and show more than 90% effectiveness in preventing HPV-positive oropharyngeal cancers. They also protect against penile cancer. They additionally prevent genital warts (also known as anogenital warts), with the quadrivalent and nonavalent vaccines providing virtually complete protection. The WHO recommends a one- or two-dose schedule for girls aged 9–14 years, the same for girls and women aged 15–20 years, and two doses with a 6-month interval for women older than 21 years. The vaccines provide protection for at least five to ten years.
The primary target group in most of the countries recommending HPV vaccination is young adolescent girls, aged 9–14. The vaccination schedule depends on the age of the vaccine recipient. As of 2023, 27% of girls aged 9–14 years worldwide received at least one dose (37 countries were implementing the single-dose schedule, 45% of girls aged 9–14 years old vaccinated in that year). As of September 2024, 57 countries are implementing the single-dose schedule. At least 144 countries (at least 74% of WHO member states) provided the HPV vaccine in their national immunization schedule for girls, as of November 2024. As of 2022, 47 countries (24% of WHO member states) also did it for boys. Vaccinating a large portion of the population may also benefit the unvaccinated by way of herd immunity.
The HPV vaccine is on the World Health Organization's List of Essential Medicines. The World Health Organization (WHO) recommends HPV vaccines as part of routine vaccinations in all countries, along with other prevention measures. The WHO's priority purpose of HPV immunization is the prevention of cervical cancer, which accounts for 82% of all HPV-related cancers and more than 95% of which are caused by HPV. 88% (2020 figure) of cervical cancers and 90% of deaths occur in low- and middle-income countries and 2% (2020 figure) in high-income countries. The WHO-recommended primary target population for HPV vaccination is girls aged 9–14 years before they become sexually active. It aims the introduction of the HPV vaccine in all countries and has set a target of reaching a coverage of 90% of girls fully vaccinated with HPV vaccine by age 15 years. Females aged ≥15 years, boys, older males or men who have sex with men (MSM) are secondary target populations. HPV vaccination is the most cost-effective public health measure against cervical cancer, particularly in resource-constrained settings. Cervical cancer screening is still required following vaccination.
Preventive vaccines
A growing number of vaccine products initially prequalified for use in a 2-dose schedule can now be used in a single-dose schedule.
Cecolin (a WHO-prequalified HPV vaccine, confirmed for use in a single-dose schedule in the second edition of WHO's technical document on considerations for HPV vaccine product choice)
Cervarix (bivalent)
Gardasil (quadrivalent) and Gardasil 9 (nonavalent vaccine)
Walrinvax (WHO prequalified with a two-dose schedule on 2 August 2024)
Medical uses
HPV vaccines are used to prevent HPV infection and therefore in particular cervical cancer. Vaccinating females between the ages of nine to thirteen is typically recommended, with many countries also vaccinating males in that age range. In the United States, the Centers for Disease Control and Prevention (CDC) recommends that all 11- to 12-year-olds receive two doses of HPV vaccine, administered 6 to 12 months apart. The vaccines require three doses for those ages 15 and above. Gardasil is a three-dose (injection) vaccine. HPV vaccines are recommended in the United States for women and men who are 9–26 years of age and are also approved for those who are 27–45 years of age.
HPV vaccination of a large percentage of people within a population has been shown to decrease rates of HPV infections, with part of the benefit from herd immunity. Since the vaccines only cover some high-risk types of HPV, cervical cancer screening is recommended even after vaccination. In the US, the recommendation is for women to receive routine Pap smears beginning at age 21. In Australia, the national screening program has changed from the two yearly cytology (pap smears) to being based on tests for HPV DNA, based on work by Karen Canfell and others. As of 2021, the World Health Organization recommends HPV DNA testing as the preferred screening method.
Efficacy
The HPV vaccine has been shown to prevent cervical dysplasia from the high-risk HPV types 16 and 18 and provide some protection against a few closely related high-risk HPV types. However, other high-risk HPV types are not affected by the vaccine. The protection against HPV 16 and 18 has lasted at least eight years after vaccination for Gardasil and more than nine years for Cervarix. It is thought that booster vaccines will not be necessary.
As of September 2024, 57 countries are implementing the single-dose schedule. A growing number of vaccine products initially prequalified for use in a 2-dose schedule can now be used in a single-dose schedule. Previously, it was unclear whether two doses of the vaccine would work as well as three doses. The US Centers for Disease Control and Prevention (CDC) recommends two doses in those less than 15 years and three doses in those over 15 years. A single dose might be effective.
A study of 9vHPV, a 9-valent HPV vaccine that protects against HPV types 6, 11, 16, 18, 31, 33, 45, 52, and 58, found that the rate of high-grade cervical, vulvar, or vaginal disease was the same as with a quadrivalent HPV vaccine. The lack of a difference may have been caused by the study design, which included women 16 to 26 years of age, many of whom may already have been infected with the five additional HPV types covered by the 9-valent vaccine.
Neither Cervarix nor Gardasil prevent other sexually transmitted infections, and they do not treat existing HPV infections or cervical cancer.
Gardasil
When Gardasil was first introduced, it was recommended as a prevention for cervical cancer for women 25 years old or younger. Evidence suggests that HPV vaccines are effective in preventing cervical cancer for women up to 45 years of age.
Gardasil and Gardasil 9 protect against HPV types 6 and 11 which can cause genital warts, with the quadrivalent and nonavalent vaccines providing virtually complete protection.
Adenocarcinoma
HPV types 16, 18, and 45 contribute to 94% of cervical adenocarcinoma (cancers originating in the glandular cells of the cervix). While most cervical cancer arises in the squamous cells, adenocarcinomas make up a sizable minority of cancers. Further, Pap smears are not as effective at detecting adenocarcinomas, so where Pap screening programs are in place, a larger proportion of the remaining cancers are adenocarcinomas. Trials suggest that HPV vaccines may also reduce the incidence of adenocarcinoma.
Males
As of 2022, 47 countries (24% of WHO member states) have introduced HPV vaccine in their national immunization programme for boys. For instance, it is the case in Switzerland, Portugal, Canada, Australia, Ireland, South Korea, Hong Kong, the United Kingdom, New Zealand, the Netherlands, and the United States.
In males, too, Gardasil and Gardasil 9 protect against HPV types 6 and 11, which can cause genital warts, with the quadrivalent and nonavalent vaccines providing virtually complete protection. The vaccines also reduce vaccinated males' risk of precancerous lesions caused by HPV; this reduction is predicted to lower the rates of penile and anal cancer in men. Gardasil has also been shown to be effective in preventing infection with high-risk HPV types 16 and 18 in males. While Gardasil and Gardasil 9 have been approved for males, a third HPV vaccine, Cervarix, has not. Unlike the Gardasil-based vaccines, Cervarix does not protect against genital warts.
Since penile and anal cancers are much less common than cervical cancer, HPV vaccination of young men is likely to be much less cost-effective than for young women.
Gardasil is also used among men who have sex with men (MSM), who are at higher risk for genital warts, penile cancer, and anal cancer.
Recommendations by national bodies
Australia
Australia introduced HPV vaccination for boys in 2013.
Ireland
Ireland introduced HPV vaccination for boys aged 13 as part of their National Immunization Plan in 2019.
UK
UK introduced HPV vaccination for boys aged 12 as part of their National Immunization Plan in 2019.
Portugal
Portugal introduced universal HPV vaccination for boys aged 10 years and above as part of its National Immunization Plan in 2020.
United States
On 9 September 2009, an advisory panel recommended that the Food and Drug Administration (FDA) of the USA license Gardasil in the United States for boys and men ages 9–26 for the prevention of genital warts. Soon after that, the vaccine was approved by the FDA for use in males aged 9 to 26 for prevention of genital warts and anal cancer.
In 2011, an advisory panel for the US Centers for Disease Control and Prevention (CDC) recommended the vaccine for boys ages 11–12. This was intended to prevent genital warts and anal cancers in males, and possibly prevent head and neck cancer (though the vaccine's effectiveness against head and neck cancers has not yet been proven). The committee also made the vaccination recommendation for males 13 to 21 years who have not been vaccinated previously or who have not completed the three-dose series. For those under the age of 27 who have not been fully vaccinated the CDC recommends vaccination.
Also in 2011, Harald zur Hausen's support for vaccinating boys (so that they will be protected, and thereby so will women) was joined by professors Harald Moi and Ole-Erik Iversen.
In 2018, the US Food and Drug Administration (FDA) released a summary basis for regulatory action and approval for expansion of usage and indication for Gardasil 9, the 9-valent HPV vaccine, to include men and women 27 to 45 years of age.
Public health
World Health Organization (WHO)
The HPV vaccine is on the WHO Model List of Essential Medicines. The WHO recommends HPV vaccines as part of routine vaccinations in all countries, along with other prevention measures. The WHO's priority purpose of HPV immunization is the prevention of cervical cancer, which accounts for 82% of all HPV-related cancers and more than 95% of which are caused by HPV. The WHO has a global strategy for cervical cancer elimination. Its first pillar is having 90% of girls fully vaccinated with the HPV vaccine by 15 years of age. The WHO-recommended primary target population for HPV vaccination is girls aged 9–14 years before they become sexually active. Females aged ≥15 years, boys, older males or MSM are secondary target populations. Cervical cancer screening is still required following vaccination.
Global
Cervical cancer
The large majority of cervical cancer cases in 2020 (88%) occurred in LMICs, where they account for 17% of all cancers in women, compared with only 2% in high-income countries (HICs). In sub-Saharan Africa, the region with the highest rates of young women living with HIV (WLWH), approximately 20% of cervical cancer cases occur in WLWH. HPV infection is more likely to persist and to progress to cancer in WLWH. Mortality rates vary 50-fold between countries, ranging from <2 per 100 000 women in some HICs to >40 per 100 000 in some countries of sub-Saharan Africa.
Of the 20 hardest hit countries by cervical cancer, 19 are in Africa.
The US National Cancer Institute states "Widespread vaccination has the potential to reduce cervical cancer deaths around the world by as much as two-thirds if all women were to take the vaccine and if protection turns out to be long-term. In addition, the vaccines can reduce the need for medical care, biopsies, and invasive procedures associated with the follow-up from abnormal Pap tests, thus helping to reduce health care costs and anxieties related to abnormal Pap tests and follow-up procedures."
In 2004, preventive vaccines already protected against the two HPV types (16 and 18) that cause about 70% of cervical cancers worldwide. Because of the distribution of HPV types associated with cervical cancer, the vaccines were likely to be most effective in Asia, Europe, and North America. Some other high-risk types cause a larger percentage of cancers in other parts of the world. Vaccines that protect against more of the types common in cancers would prevent more cancers, and be less subject to regional variation. For instance, a vaccine against the seven types most common in cervical cancers (16, 18, 45, 31, 33, 52, 58) would prevent an estimated 87% of cervical cancers worldwide.
In 2008, only 41% of women with cervical cancer in the developing world got medical treatment. Therefore, prevention of HPV by vaccination may be a more effective way of lowering the disease burden in developing countries than cervical screening. The European Society of Gynecological Oncology sees the developing world as most likely to benefit from HPV vaccination. However, individuals in many resource-limited nations, Kenya for example, are unable to afford the vaccine.
In more developed countries, populations that do not receive adequate medical care, such as the poor or minorities in the United States or parts of Europe also have less access to cervical screening and appropriate treatment, and are similarly more likely to benefit. In 2009, Dr. Diane Harper, a researcher for the HPV vaccines, questioned whether the benefits of the vaccine outweigh its risks in countries where Pap smear screening is common. She has also encouraged women to continue pap screening after they are vaccinated and to be aware of potential adverse effects.
United States
In 2012, according to the CDC, the use of the HPV vaccine had cut rates of infection with HPV-6, -11, -16, and -18 in half in American teenagers (from 11.5% to 4.3%) and by one-third in American women in their early twenties (from 18.5% to 12.1%).
Side effects
HPV vaccines are safe and well tolerated and can be used in persons who are immunocompromised or HIV-infected. Pain at the site of injection occurs in between 35% and 88% of people. Redness and swelling at the site and fever may also occur. No link to Guillain–Barré syndrome has been found. There is no increased risk of serious adverse effects. Extensive clinical trial and post-marketing safety surveillance data indicate that both Gardasil and Cervarix are well tolerated and safe. When comparing the HPV vaccine to a placebo (control) vaccine taken by women, there is no difference in the risk of severe adverse events.
United States
, there were more than 57 million doses of Gardasil vaccine distributed in the United States, though it is unknown how many were administered. There have been 22,000 Vaccine Adverse Event Reporting System (VAERS) reports following the vaccination. 92% were reports of events considered to be non-serious (e.g., fainting, pain, and swelling at the injection site (arm), headache, nausea, and fever), and the rest were considered to be serious (death, permanent disability, life-threatening illness, and hospitalization). However, VAERS reports include any reported effects whether coincidental or causal. In response to concerns regarding the rates of adverse events associated with the vaccine, the CDC stated: "When evaluating data from VAERS, it is important to note that for any reported event, no cause-and-effect relationship has been established. VAERS receives reports on all potential associations between vaccines and adverse events."
, in the US there were 44 reports of death in females after receiving the vaccine. None of the 27 confirmed deaths of women and girls who had taken the vaccine were linked to the vaccine. There is no evidence suggesting that Gardasil causes or raises the risk of Guillain–Barré syndrome. Additionally, there have been rare reports of blood clots forming in the heart, lungs, and legs. A 2015 review conducted by the European Medicines Agency's Pharmacovigilance Risk Assessment Committee concluded that evidence does not support the idea that HPV vaccination causes complex regional pain syndrome or postural orthostatic tachycardia syndrome.
, the CDC continued to recommend Gardasil vaccination for the prevention of four types of HPV. The manufacturer of Gardasil has committed to ongoing research assessing the vaccine's safety.
According to the Centers for Disease Control and Prevention (CDC) and the FDA, the rate of adverse side effects related to Gardasil immunization in the safety review was consistent with what has been seen in the safety studies carried out before the vaccine was approved and were similar to those seen with other vaccines. However, a higher proportion of syncope (fainting) was seen with Gardasil than is usually seen with other vaccines. The FDA and CDC have reminded healthcare providers that, to prevent falls and injuries, all vaccine recipients should remain seated or lying down and be closely observed for 15 minutes after vaccination. The HPV vaccination does not appear to reduce the willingness of women to undergo pap tests.
Contraindications
While the use of HPV vaccines can help reduce cervical cancer deaths by two-thirds around the world, not everyone is eligible for vaccination. Some factors exclude people from receiving HPV vaccines. These factors include:
People with history of immediate hypersensitivity to vaccine components. Patients with a hypersensitivity to yeast should not receive Gardasil since yeast is used in its production.
People with moderate or severe acute illnesses. This does not completely exclude patients from vaccination but postpones the time of vaccination until the illness has improved.
Pregnancy
In the Gardasil clinical trials, 1,115 pregnant women received the HPV vaccine. Overall, the proportions of pregnancies with an adverse outcome were comparable in subjects who received Gardasil and subjects who received a placebo. However, the clinical trials had a relatively small sample size, and the vaccine is not recommended for pregnant women.
The FDA has classified the HPV vaccine as a pregnancy Category B, meaning there is no apparent harm to the fetus in animal studies. HPV vaccines have not been causally related to adverse pregnancy outcomes or adverse effects on the fetus. However, data on vaccination during pregnancy is very limited, and vaccination during the pregnancy term should be delayed until more information is available. If a woman is found to be pregnant during the three-dose series of vaccination, the series should be postponed until pregnancy has been completed. While there is no indication for intervention for vaccine dosages administered during pregnancy, patients and healthcare providers are encouraged to report exposure to vaccines to the appropriate HPV vaccine pregnancy registry.
Mechanism of action
The HPV vaccines are based on hollow virus-like particles (VLPs) assembled from recombinant HPV coat proteins. The natural virus capsid is composed of two proteins, L1 and L2, but vaccines only contain L1.
Gardasil contains inactive L1 proteins from four different HPV strains: 6, 11, 16, and 18, synthesized in the yeast Saccharomyces cerevisiae. Each vaccine dose contains 225 μg of aluminum, 9.56 mg of sodium chloride, 0.78 mg of L-histidine, 50 μg of polysorbate 80, 35 μg of sodium borate, and water. The combination of ingredients totals 0.5 mL.
HPV types 16 and 18 cause about 70% of all cervical cancer. Gardasil also targets HPV types 6 and 11, which together cause about 90 percent of all cases of genital warts.
Gardasil and Cervarix are designed to elicit virus-neutralizing antibody responses that prevent initial infection with the HPV types represented in the vaccine. The vaccines have been shown to offer 100 percent protection against the development of cervical pre-cancers and genital warts caused by the HPV types in the vaccine, with few or no side effects. The protective effects of the vaccine are expected to last a minimum of 4.5 years after the initial vaccination.
While the study period was not long enough for cervical cancer to develop, the prevention of these cervical precancerous lesions (or dysplasias) is believed highly likely to result in the prevention of those cancers.
History
In 1983, Harald zur Hausen culminated decades of research with the discovery that certain variants of human papillomaviruses (HPVs) could be found in a majority of tested cervical cancer specimens. This provided strong scientific evidence for a link between the viral infection and cervical cancer, and provided strong motivations for further research into HPVs.
In 1990, Ian Frazer partnered with Jian Zhou and Xiao-Yi Sun at the University of Queensland in Australia to create synthetic HPVs for study in the lab. While working towards this goal, they were able to synthetically produce some of the capsid proteins of the HPVs, L1 and L2. Recognizing the potential of these proteins to form the basis of a vaccine, they filed a provisional patent on their production process in Australia in 1991.
The further invention then stalled while convincing developers of the market for the vaccine, and also while patent offices determined who the discovery belonged to. Three other organizations, the US National Cancer Institute, Georgetown University, and University of Rochester, were also vying for the patent as a result of contributions in the space. After providing evidence of the correctness of their L1 sequencing in 2004, the US patent court of appeals accorded priority to the University of Queensland in 2009. As a result, the University of Queensland receives royalty payments from the sale of these vaccines even today.
By the early 2000s, developers, convinced of the market of the vaccine, had begun refining, researching, and trialing L1-based HPV vaccines. In 2006, the FDA approved the first preventive HPV vaccine, marketed by Merck & Co. under the trade name Gardasil. According to a Merck press release, by the second quarter of 2007 it had been approved in 80 countries, many under fast-track or expedited review. Early in 2007, GlaxoSmithKline filed for approval in the United States for a similar preventive HPV vaccine, known as Cervarix. In June 2007, this vaccine was licensed in Australia, and it was approved in the European Union in September 2007. Cervarix was approved for use in the US in October 2009.
Harald zur Hausen was awarded half of the $1.4 million Nobel Prize in Medicine in 2008 for his work showing that cervical cancer is caused by certain types of HPVs.
In December 2014, the US Food and Drug Administration (FDA) approved a vaccine called Gardasil 9 to protect females between the ages of 9 and 26 and males between the ages of 9 and 15 against nine strains of HPV. Gardasil 9 protects against infection from the strains covered by the first generation of Gardasil (HPV-6, HPV-11, HPV-16, and HPV-18) and protects against five other HPV strains responsible for 20% of cervical cancers (HPV-31, HPV-33, HPV-45, HPV-52, and HPV-58).
Society and culture
Economics
, vaccinating girls and young women was estimated to be cost-effective in the low and middle-income countries, especially in places without organized programs for screening cervical cancer. When the cost of the vaccine itself, or the cost of administering it to individuals, were higher, or if cervical cancer screening were readily available, then vaccination was less likely to be cost-effective.
From a public health point of view, vaccinating men as well as women decreases the virus pool within the population but is only cost-effective to vaccinate men when the uptake in the female population is extremely low. In the United States, the cost per quality-adjusted life year is greater than US$100,000 for vaccinating the male population, compared to less than US$50,000 for vaccinating the female population. This assumes a 75% vaccination rate.
In 2013, the two companies that sell the most common vaccines announced a price cut to less than US$5 per dose to poor countries, as opposed to US$130 per dose in the US.
Brand names
The vaccine is sold under various brand names including Gardasil, Cervarix, Cecolin, and Walrinvax.
Vaccine implementation
The primary target group in most of the countries recommending HPV vaccination is young adolescent girls, aged 9–14. It's particularly cost-effective in resource-constrained settings. The vaccination schedule depends on the age of the vaccine recipient. As of 2023, 27% of girls aged 9–14 years worldwide received at least one dose (37 countries were implementing the single-dose schedule). Global coverage for the first dose of HPV vaccine in girls grew from 20% in 2022 to 27% in 2023. As of 10 September 2024, 57 countries are implementing the single-dose schedule. Vaccinating a large portion of the population may also benefit the unvaccinated by way of herd immunity.
HPV vaccine introductions have been hampered by global supply shortages since 2018. Between 2019 and 2021, due to the COVID-19 pandemic, HPV vaccination programs have been significantly affected in the United States, low-income and lower-middle-income countries.
In developed countries, the widespread use of cervical "Pap smear" screening programs has reduced the incidence of invasive cervical cancer by 50% or more. Preventive vaccines reduce but do not eliminate the chance of getting cervical cancer. Therefore, experts recommend that women combine the benefits of both programs by seeking regular Pap smear screening, even after vaccination. School-entry vaccination requirements were found to increase the use of the HPV vaccine.
HPV vaccine included in national immunization program
At least 144 countries (at least 74% of WHO member states) provided the HPV vaccine in their national immunization schedule for girls, as of November 2024. As of 2022, 47 countries (24% of WHO member states) also did it for boys.
Africa
Of the 20 hardest hit countries by cervical cancer, 19 are in Africa.
In 2013, with support from Gavi, the Vaccine Alliance, eight low-income countries, mainly in sub-Saharan Africa, began the rollout of the HPV vaccine.
Algeria
No
Angola
No
Chad
No
Central African Republic
No
Democratic Republic of Congo
No
Ghana
No (GAVI support in 2013)
Guinea-Bissau
No
Kenya
Both Cervarix and Gardasil are approved for use within Kenya by the Pharmacy and Poisons Board. However, at a cost of 20,000 Kenyan shillings, which is more than the average annual income for a family, the director of health promotion in the Ministry of Health, Nicholas Muraguri, states that many Kenyans are unable to afford the vaccine. It has received GAVI support in 2013.
Madagascar
No (GAVI support in 2013)
Malawi
Yes (GAVI support in 2013)
Mozambique
Yes (GAVI support for HPV demonstration projects in 2014)
Niger
No (GAVI support in 2013)
Nigeria
Yes
Rwanda
Yes (GAVI support in 2014)
Senegal
Yes
Sierra Leone
Yes (GAVI support in 2013)
South Africa
Cervical cancer represents the most common cause of cancer-related deaths—more than 3,000 deaths per year—among women in South Africa because of high HIV prevalence, making the introduction of the vaccine highly desirable. A Papanicolaou test program was established in 2000 to help screen for cervical cancer, but since this program has not been implemented widely, vaccination would offer more efficient form of prevention. In May 2013 the Minister of Health of South Africa, Aaron Motsoaledi, announced the government would provide free HPV vaccines for girls aged 9 and 10 in the poorest 80% of schools starting in February 2014 and the fifth quintile later on. South Africa will be the first African country with an immunisation schedule that includes vaccines to protect people from HPV infection, but because the effectiveness of the vaccines in women who later become infected with HIV is not yet fully understood, it is difficult to assess how cost-effective the vaccine will be. Negotiations are currently underway for more affordable HPV vaccines since they are up to 10 times more expensive than others already included in the immunization schedule.
United Republic of Tanzania
Yes (GAVI support in 2013)
Zimbabwe
Yes (GAVI support for HPV demonstration projects in 2014)
Australia
In April 2007, Australia became the second country—after Austria—to introduce a government-funded National Human Papillomavirus (HPV) Vaccination Program to protect young women against HPV infections that can lead to cancers and disease. The National HPV Vaccination Program is listed on the National Immunisation Program (NIP) Schedule and funded under the Immunise Australia Program. The Immunise Australia Program is a joint Federal, State, and Territory Government initiative to increase immunisation rates for vaccine-preventable diseases.
The National HPV Vaccination Program for females was made up of two components: an ongoing school-based program for 12- and 13-year-old girls; and a time-limited catch-up program (females aged 14–26 years) delivered through schools, general practices, and community immunization services, which ceased on 31 December 2009.
During 2007–2009, an estimated 83% of females aged 12–17 years received at least one dose of HPV vaccine and 70% completed the 3-dose HPV vaccination course. By 2017, HPV coverage data on the Immunise Australia website show that by 15 years of age, over 82% of Australian females had received all three doses.
Since the National HPV Vaccination Program commenced in 2007, there has been a reduction in HPV-related infections in young women. A study published in The Journal of Infectious Diseases in October 2012 found the prevalence of vaccine-preventable HPV types (6, 11, 16, and 18) in Papanicolaou test results of women aged 18–24 years has significantly decreased from 28.7% to 6.7% four years after the introduction of the National HPV Vaccination Program. A 2011 report published found the diagnosis of genital warts (caused by HPV types 6 and 11) had also decreased in young women and men.
In October 2010, the Australian regulatory agency, the Therapeutic Goods Administration, extended the registration of the quadrivalent vaccine (Gardasil) to include use in males aged 9 through 26 years of age, for the prevention of external genital lesions and infection with HPV types 6, 11, 16 and 18.
In November 2011, the Pharmaceutical Benefits Advisory Committee (PBAC) recommended the extension of the National HPV Vaccination Program to include males. The PBAC made its recommendation on the preventive health benefits that can be achieved, such as a reduction in the incidence of anal and penile cancers and other HPV-related diseases. In addition to the direct benefit to males, it was estimated that routine HPV vaccination of adolescent males would contribute to the reduction of vaccine HPV-type infection and associated disease in women through herd immunity.
In 2012, the Australian Government announced it would be extending the National HPV Vaccination Program to include males, through the National Immunisation Program Schedule.
Updated results were reported in 2014.
Since February 2013, free HPV vaccine has been provided through school-based programs for:
males and females aged 12–13 years (ongoing program); and
males aged between 14 and 15 years – until the end of the school year in 2014 (catch-up program).
Canada
HPV vaccines were first approved in Canada in July 2006 for use in females, and February 2010 for use in males.
The vaccines Cervarix, Gardasil, and Gardasil 9 are authorized for use in Canada, with Gardasil 9 the primary vaccine used. All provinces and territories (except Quebec) administer Gardasil 9 on a two or three-dose schedule: individuals under age 15 are given two doses, while individuals who are immunocompromised, living with HIV, or age 15+ are given three doses. Quebec provides two doses to individuals under 18 years (the first dose is Gardasil 9, and the second dose is Cervarix) and three doses of Gardasil 9 to people age 18+.
The administration of free vaccination programs is provided by individual province and territory governments. All provincial and territorial governments offer free vaccination for school-aged children, irrespective of gender. The school grades in which the vaccine is provided varies by province and territory: grade 4 and secondary 3 (Quebec); grade 6 (British Columbia, Manitoba, Newfoundland and Labrador, Nunavut, Prince Edward Island, Saskatchewan, Yukon); grades 6 and 9 (Alberta); grades 4-6 (Northwest Territories); or grade 7 (New Brunswick, Nova Scotia, Ontario). Publicly funded HPV vaccines are also provided in certain provinces and territories for other groups of people, such as men who have sex with men, individuals living with HIV, and individuals who identify as transgender. Individuals who do not qualify for any of the publicly funded programs can privately purchase the three-dose HPV vaccine series for $510 to $630.
China
GlaxoSmithKline China announced in 2016 that Cervarix (HPV vaccine against types 16 and 18) had been approved by the China Food and Drug Administration (CFDA). Cervarix is registered in China for females aged 9 to 45, using a three-dose schedule administered within six months. Cervarix was launched in China in 2017 and was the first approved HPV vaccine in China.
Colombia
The vaccine was introduced in 2012, approved for girls aged 9. It was initially offered to girls aged 9 and older attending the fourth grade of school. Since 2013, coverage has been extended to girls in school from grade four (who have reached the age of 9) through grade eleven (independent of age), and to girls not attending school aged from 9 years up to 17 years, 11 months and 29 days.
Costa Rica
Since June 2019, the vaccine has been administered compulsorily by the state, free of charge to girls at ten years of age.
Europe
As of 2020, the European Centre for Disease Prevention and Control (ECDC) reports that the vaccine uptake among females is the following:
Finland, Hungary, Iceland, Malta, Norway, Portugal, Spain, Sweden, and the UK have reported national coverage above 70%. In some countries, including France and Germany, coverage has been consistently below 50%, though recently increasing in France.
Hong Kong
HPV vaccines are approved for use in Hong Kong. As part of the Hong Kong Childhood Immunisation Programme, HPV vaccines became mandatory for students in the 2019/2020 school year, exclusively for females at primary 5 and 6 levels.
India
HPV vaccine (both Gardasil and Cervarix) was introduced in Indian markets in 2008, but it is yet to be included in the country's universal immunization programme. In Punjab and Sikkim (states of India), it is included in the state immunization program and the coverage is up to 97% of targeted girls. HPV vaccination has been recommended by the National Technical Advisory Group on Immunization, but has not been implemented in India as of 2018.
In 2023, Serum Institute of India (SII) developed a new vaccine, Cervavax, targeting HPV types 6, 11, 16, and 18. The newly developed vaccine shows capability equal to that of Merck's Gardasil 9, but is not yet commercially available. In 2024, an HPV vaccine drive was announced by Finance Minister Nirmala Sitharaman as part of the Nari Shakti ("Women Power") campaign, but it has not yet been implemented. The vaccine is commercially available in the market at a price between ₹3,000 ($35) and ₹15,000 ($180).
Ireland
The HPV vaccination programme in Ireland is part of the national strategy to protect females from cervical cancer. Since 2009, the Health Service Executive has offered the HPV vaccine, free of charge, to all girls from the first year onwards (ages 12–13). Secondary schools began implementing the vaccine program on an annual basis from September 2010 onwards. The programme was expanded to include males in 2019. Two HPV vaccines are licensed for use in Ireland: Cervarix and Gardasil. To ensure high uptake, the vaccine is administered to teenagers aged 12–13 in their first year of secondary school, with the first dose administered between September and October and the final dose in April of the following year. Males and females aged 12–13 who are outside of the traditional school setting (home school, etc.) are invited to Health Service Executive clinics for their vaccines. HPV vaccination in Ireland is not mandatory and consent is obtained before vaccination. For males and females aged 16 and under, consent is granted by a parent or guardian unless it is explicitly refused by the child. Any male or female aged 16 and over may provide their own consent if they want to be vaccinated. HIQA has stated the vaccine will provide further protection, particularly to men who have sex with men. The vaccine programme has been extended to males following evidence that 25% of HPV cancers occur in men. Additionally, HIQA is aiming to replace the current vaccine, which covers 4 major HPV strains, with an updated vaccine protecting against nine strains. The cost of the gender-neutral nine-valent vaccine is estimated to be nearly €11.66 million over the next five years.
Israel
Introduced in 2012. Target age group 13–14. Fully financed by national health authorities only for this age group. For the year 2013–2014, girls in the eighth grade may get the vaccine free of charge only in school, and not in Ministry of Health offices or clinics. Girls in the ninth grade may receive the vaccine free of charge only at Ministry of Health offices, and not in schools or clinics. Religious and conservative groups are expected to refuse the vaccination.
Japan
The quadrivalent vaccine has been approved for males and the 9-valent one for females. Since 2010, young women in Japan have been eligible to receive the cervical cancer vaccination for free. In June 2013, the Japanese Ministry of Health, Labour and Welfare mandated that, before administering the vaccine, medical institutions must inform women that the ministry does not recommend it. However, the vaccine remained available at no cost to Japanese women who chose to accept it. It has been widely available only since April 2013, fully financed by national health authorities for females aged 11 to 16 years. In June 2013, Japan's Vaccine Adverse Reactions Review Committee (VARRC) suspended the recommendation of the vaccine due to fears of adverse events. This directive was criticized by researchers at the University of Tokyo as a failure of governance, since the decision was taken without the presentation of adequate scientific evidence. At the time, Ministry spokespeople emphasized that "The decision does not mean that the vaccine itself is problematic from the viewpoint of safety," but that they wanted time to conduct analyses of possible adverse effects, "to offer information that can make the people feel more at ease." However, the suspension of the Ministry's endorsement was still in place as of February 2019, by which time the HPV vaccination rate among younger women had fallen from approximately 70% in 2013 to 1% or less. Over an overlapping time period (2009–2019), the age-adjusted mortality rate from cervical cancer increased by 9.6%. In December 2021, the Ministry of Health, Labour and Welfare decided to resume active promotion of HPV vaccination in April 2022 and, after the eight-year hiatus, to offer free catch-up vaccination to women born between fiscal years 1997 and 2005 who had missed the country's free vaccination program. In 2022, 225,993 girls received the first dose of routine vaccination, a first-dose vaccination rate of 42.2%. The Osaka University Graduate School of Medicine and Faculty of Medicine reported the first-dose and cumulative first-dose vaccination rates for each year of birth in 2022 at a meeting of the Ministry of Health, Labour and Welfare; for 12-year-old girls born in 2010, the rate was 2.8%.
Laos
In 2013, Laos began implementation of the HPV vaccine, with the assistance of Gavi, the Vaccine Alliance.
Malaysia
In 2010, Malaysia launched a national vaccination program to provide three doses of HPV vaccines to all 13-year-old girls. In 2015, the program transitioned to a two-dose regimen.
High rates of school enrolment for 13-year-olds (96.0%) and retention of female students in secondary schools have made it possible for the HPV vaccination to be integrated into the School Health Service Program and ensure equal access to the HPV vaccine between urban and rural areas.
Mexico
The vaccine was introduced in 2008 to 5% of the population. This percentage of the population had the lowest development index, which correlates with the highest incidence of cervical cancer. The HPV vaccine is delivered to girls 12–16 years old following the 0-2-6 dosing schedule. By 2009, Mexico had expanded use of the vaccine to girls 9–12 years of age; the dosing schedule in this group was different, with six months between the first and second doses and the third dose 60 months later. In 2011, Mexico approved nationwide use of the HPV vaccination program to include vaccination of all 9-year-old girls.
New Zealand
Immunization as of 2017 is free for males and females aged 9 to 26 years.
The public funding began on 1 September 2008. The vaccine was initially offered only to girls, usually through a school-based program in Year 8 (approximately age 12), but also through general practices and some family planning clinics. Over 200,000 New Zealand girls and young women have received HPV immunization.
Panama
The vaccine was added to the national immunization program in 2008, to target 10-year-old girls.
South Korea
On 27 July 2007, the South Korean government approved Gardasil for use in girls and women aged 9 to 26 and boys aged 9 to 15. Approval for use in boys was based on safety and immunogenicity, but not efficacy.
Since 2016, HPV vaccination has been part of the National Immunization Program, offered free of charge to all children under 12 in South Korea, with costs fully covered by the Korean government.
For 2016 only, Korean girls born between 1 January 2003 and 31 December 2004 were also eligible to receive the free vaccinations as a limited-time offer. From 2017, the free vaccines are available to those under 12 only.
Trinidad and Tobago
Introduced in 2013 for the target group aged 9–26 and fully financed by national health authorities. The program was suspended later that year owing to objections and concerns raised by the Catholic Board, but the vaccine remained fully available in local health centers.
United Arab Emirates
The World Health Organization ranks cervical cancer as the fourth most frequent cancer among women in UAE, at 7.4 per 100,000 women, and according to Abu Dhabi Health Authority, the cancer is also the seventh highest cause of death of women in the U.A.E.
In 2007, the HPV vaccine was approved for girls and young women, 15 to 26 years of age, and offered optionally at hospitals and clinics. Moreover, starting 1 June 2013, the vaccine was offered free of charge to women between the ages of 18 and 26 in Abu Dhabi. On 14 September 2018, the U.A.E.'s Ministry of Health and Community Protection announced that the HPV vaccine had become a mandatory part of the routine vaccinations for all girls in the U.A.E., to be administered to all schoolgirls in the 8th grade, aged 13.
United Kingdom
In the UK the vaccine is licensed for females aged 9–26, for males aged 9–15, and for men who have sex with men aged 18–45.
HPV vaccination was introduced into the national immunisation programme in September 2008, for girls aged 12–13 across the UK. A two-year catch-up campaign started in Autumn 2009 to vaccinate all girls up to 18 years of age. Catch-up vaccination was offered to girls aged between 16 and 18 from autumn 2009, and girls aged between 15 and 17 from autumn 2010. It will be many years before the vaccination programme affects cervical cancer incidence so women are advised to continue accepting their invitations for cervical screening. Men who have sex with men up to and including the age of 45 became eligible for free HPV vaccination on the NHS in April 2018. They get the vaccine by visiting sexual health clinics and HIV clinics in England. A meta-analysis of vaccinations for men who have sex with men showed that this strategy is most effective when combined with gender-neutral vaccination of all boys, regardless of their sexual orientation.
From the 2019/2020 school year, it is expected that 12- to 13-year-old boys will also become eligible for the HPV vaccine as part of the national immunisation programme. This follows a statement by the Joint Committee on Vaccination and Immunisation. The first dose of the HPV vaccine will be offered routinely to boys aged 12 and 13 in school year 8, in the same way that it is currently (May 2018) offered to girls. Boots UK opened a private HPV vaccination service for boys and men aged 12–44 years in April 2017 at a cost of £150 per vaccination. In children aged 12–14 years, two doses are recommended, while for those aged 15–44 years a course of three is recommended.
Cervarix was the HPV vaccine offered from its introduction in September 2008, to August 2012, with Gardasil being offered from September 2012. The change was motivated by Gardasil's added protection against genital warts.
United States
Adoption
On 30 August 2021, fifteen leading academic and freestanding cancer centers with membership in the Association of American Cancer Institutes (AACI), all National Cancer Institute (NCI)-designated cancer centers, the American Cancer Society, the American Society of Clinical Oncology, the American Association for Cancer Research, and the St. Jude Children's Research Hospital issued a joint statement urging US health care systems, physicians, parents, children, and young adults to get HPV vaccination and other recommended vaccinations back on track during National Immunization Awareness Month.
, about one-quarter of US females aged 13–17 years had received at least one of the three HPV shots. , the proportion of such females receiving an HPV vaccination had risen to 38%. The government began recommending vaccination for boys in 2011; , the vaccination rate among boys (at least one dose) had reached 35%.
According to the US Centers for Disease Control and Prevention (CDC), getting as many girls vaccinated as early and as quickly as possible will reduce the cases of cervical cancer among middle-aged women in 30 to 40 years and reduce the transmission of this highly communicable infection. Barriers include the limited understanding by many people that HPV causes cervical cancer, the difficulty of getting pre-teens and teens into the doctor's office to get a shot, and the high cost of the vaccine ($120/dose, $360 total for the three required doses, plus the cost of doctor visits). Community-based interventions can increase the uptake of HPV vaccination among adolescents.
A survey was conducted in 2009 to gather information about knowledge and adoption of the HPV vaccine. Thirty percent of 13- to 17-year-olds and 9% of 18- to 26-year-olds out of the total 1,011 young women surveyed reported receipt of at least one HPV injection. Knowledge about HPV varied; however, 5% or fewer subjects believed that the HPV vaccine precluded the need for regular cervical cancer screening or safe-sex practices. Few girls and young women overestimate the protection provided by the vaccine. Despite moderate uptake, many females at risk of acquiring HPV have not yet received the vaccine. For example, young black women are less likely to receive HPV vaccines compared to young white women. Additionally, young women of all races and ethnicities without health insurance are less likely to get vaccinated.
As of 2017, Gardasil 9 is the only HPV vaccine available in the United States as it provides protection against more HPV types than the earlier approved vaccines (the original Gardasil and Cervarix). Since the approval of Gardasil in 2006 and despite low vaccine uptake, prevalence of HPV among teenagers aged 14–19 has been cut in half with an 88% reduction among vaccinated women. No decline in prevalence was observed in other age groups, indicating the vaccine to have been responsible for the sharp decline in cases. The drop in number of infections is expected to in turn lead to a decline in cervical and other HPV-related cancers in the future.
Legislation
Four jurisdictions have laws that require HPV vaccination for school students: Hawaii, Rhode Island, Virginia, and Washington D.C. Students in those jurisdictions must have started HPV vaccination before entering the 7th grade. All school immunization laws grant exemptions to children for medical reasons, with other "opt-out" policies varying by state.
Shortly after the first HPV vaccine was approved, bills to make the vaccine mandatory for school attendance were introduced in many states. Only two such bills passed (in Virginia and Washington DC) during the first four years after vaccine introduction. Mandates have been effective at increasing uptake of other vaccines, such as mumps, measles, rubella, and hepatitis B (which is also sexually transmitted). However, most such mandate efforts developed five or more years after vaccine release, while financing and supply were arranged, further safety data were gathered, and education efforts increased understanding, before mandates were considered. Most public policies, including school mandates, have not been effective in promoting HPV vaccination, while receiving a recommendation from a physician increased the probability of vaccination.
In July 2015, Rhode Island added an HPV vaccine requirement for admittance into public schools. This mandate requires all students entering the seventh grade to receive at least one dose of the HPV vaccine starting in August 2015, all students entering the eighth grade to receive at least two doses of the HPV vaccine starting in August 2016, and all students entering the ninth grade to receive at least three doses of the HPV vaccine starting in August 2017. No legislative action is required for the Rhode Island Department of Health to add new vaccine mandates. Rhode Island is the only state that requires the vaccine for both male and female 7th graders.
Immigrants
Between July 2008 and December 2009, proof of the first of three doses of HPV Gardasil vaccine was required for women ages 11–26 intending to legally enter the United States. This requirement stirred controversy because of the cost of the vaccine, and because all the other vaccines so required prevent diseases that are spread by the respiratory route and are considered highly contagious. The Centers for Disease Control and Prevention repealed all HPV vaccination directives for immigrants effective 14 December 2009. Uptake in the United States appears to vary by ethnicity and whether someone was born outside the United States.
Coverage
Measures have been considered including requiring insurers to cover HPV vaccination and funding HPV vaccines for those without insurance. The cost of the HPV vaccines for females under 18 who are uninsured is covered under the federal Vaccines for Children Program. As of 23 September 2010, vaccines are required to be covered by insurers under the Patient Protection and Affordable Care Act. HPV vaccines specifically are to be covered at no charge for women, including those who are pregnant or nursing.
Medicaid covers HPV vaccination in accordance with the ACIP recommendations, and immunizations are a mandatory service under Medicaid for eligible individuals under age 21. In addition, Medicaid includes the Vaccines for Children Program. This program provides immunization services for children 18 and under who are Medicaid eligible, uninsured, underinsured, receiving immunizations through a Federally Qualified Health Center or Rural Health Clinic, or are Native American or Alaska Native.
The vaccine manufacturers also offer help for people who cannot afford HPV vaccination. GlaxoSmithKline's Vaccines Access Program (1-877-VACC-911) provides Cervarix free of charge to low-income women, ages 19 to 25, who do not have insurance. Merck's Vaccine Patient Assistance Program (1-800-293-3881) provides Gardasil free to low-income women and men, ages 19 to 26, who do not have insurance, including immigrants who are legal residents.
Opposition in the United States
The idea that the HPV vaccine is linked to increased sexual behavior is not supported by scientific evidence. A review of nearly 1,400 adolescent girls found no difference in teen pregnancy, incidence of sexually transmitted infection, or contraceptive counseling regardless of whether they received the HPV vaccine. Thousands of Americans die each year from cancers preventable by the vaccine. A disproportionate rate of HPV-related cancers exists amongst LatinX populations, leading researchers to explore how communication and messaging can be adjusted to address vaccine hesitancy.
Insurance companies
There has been significant opposition from health insurance companies to covering the cost of the vaccine ($360).
Religious and conservative groups
Opposition due to the safety of the vaccine has been addressed through studies, but there is still some opposition focused on the sexual implications of the vaccine. Conservative groups in the US have opposed the concept of making HPV vaccination mandatory for pre-adolescent girls, claiming that making the vaccine mandatory is a violation of parental rights and that it will give a false sense of immunity to sexually transmitted infection, leading to early sexual activity. (See Peltzman effect) Both the Family Research Council and the group Focus on the Family support widespread (universal) availability of HPV vaccines but oppose mandatory HPV vaccinations for entry to public school. Parents also express confusion over recent mandates for entry to public school, pointing out that HPV is transmitted through sexual contact, not through attending school with other children.
Conservative groups are concerned children will see the vaccine as a safeguard against STIs and will have sex sooner than they would without the vaccine while failing to use contraceptives. However, the American Academy of Pediatrics disagreed with the argument that the vaccine increases sexual activity among teens. Christine Peterson, director of the University of Virginia's Gynecology Clinic, said "The presence of seat belts in cars doesn't cause people to drive less safely. The presence of a vaccine in a person's body doesn't cause them to engage in risk-taking behavior they would not otherwise engage in." A 2018 study of college-aged students found that HPV vaccination did not increase sexual activity.
Parental opposition
Many parents opposed to providing the HPV vaccine to their pre-teens agree the vaccine is safe and effective, but find talking to their children about sex uncomfortable. Elizabeth Lange, of Waterman Pediatrics in Providence, RI, addresses this concern by emphasizing what the vaccine is doing for the child. Lange suggests parents should focus on the cancer prevention aspect without being distracted by words like 'sexually transmitted'. Everyone wants cancer prevention, yet here parents are denying their children a form of protection due to the nature of the cancer—Lange suggests that this much controversy would not surround a breast cancer or colon cancer vaccine. The HPV vaccine is suggested for 11-year-olds because it should be administered before possible exposure to HPV, but also because the immune system has the highest response for creating antibodies around this age. Lange also emphasized the studies showing that the HPV vaccine does not cause children to be more promiscuous than they would be without the vaccine.
Controversy over the HPV vaccine remains present in the media. Parents in Rhode Island have created a Facebook group called "Rhode Islanders Against Mandated HPV Vaccinations" in response to Rhode Island's mandate that males and females entering the 7th grade, as of September 2015, be vaccinated for HPV before attending public school.
Physician impact
The effectiveness of a physician's recommendation for the HPV vaccine also contributes to low vaccination rates and controversy surrounding the vaccine. A 2015 study of national physician communication and support for the HPV vaccine found physicians routinely recommend HPV vaccines less strongly than they recommend Tdap or meningitis vaccines, find the discussion about HPV to be long and burdensome, and discuss the HPV vaccine last, after all other vaccines. Researchers suggest these factors discourage patients and parents from setting up timely HPV vaccines. To increase vaccination rates, this issue must be addressed and physicians should be better trained to handle discussing the importance of the HPV vaccine with patients and their families.
Ethics
Some researchers have compared the need for adolescent HPV vaccination to that of other childhood diseases such as chicken pox, measles, and mumps. This is because vaccination before infection decreases the risk of several forms of cancer.
There has been some controversy around the HPV vaccine's rollout and distribution. Countries have taken different routes based on economics and social climate leading to issues of forced vaccination and marginalization of segments of the population in some cases.
The rollout of a country's vaccination program is more divisive, compared to the act of providing vaccination against HPV. In more affluent countries, arguments have been made for publicly funded programs aimed at vaccinating all adolescents voluntarily. These arguments are supported by World Health Organization (WHO) surveys showing the effectiveness of cervical cancer prevention with HPV vaccination.
In developing countries, the cost of the vaccine, dosing schedule, and other factors have led to suboptimal levels of vaccination. Future research is focused on low-cost generics and single-dose vaccination in efforts to make the vaccine more accessible.
Research
There are high-risk HPV types that are not affected by available vaccines. Ongoing research is focused on the development of HPV vaccines that will offer protection against a broader range of HPV types. One such method is a vaccine based on the minor capsid protein L2, which is highly conserved across HPV genotypes. Efforts for this have included boosting the immunogenicity of L2 by linking together short amino acid sequences of L2 from different oncogenic HPV types or by displaying L2 peptides on a more immunogenic carrier. There is also substantial research interest in the development of therapeutic vaccines, which seek to elicit immune responses against established HPV infections and HPV-induced cancers.
After exposure
Although HPV vaccination is most encouraged before any exposure to the target strains, its use is still beneficial in women who have contracted some of the target types, because it is unlikely for a person to have been exposed to all target types. According to a 2008 article by the editor-in-chief of Harvard Women's Health Watch, the quadrivalent vaccine is able to reduce the occurrence of warts and precancerous lesions in HPV-positive women, and also appeared to reduce the chance of infection by non-targeted types. A 2023 review article finds that vaccination reduces the chance of further HPV-associated diseases even in those already showing HPV-related precancers and diseases. At this point the standard vaccine is not believed to be therapeutic, so this effect is attributed to the vaccine preventing the establishment of new infections.
Therapeutic vaccines
In addition to preventive vaccines, laboratory research, and several human clinical trials are focused on the development of therapeutic HPV vaccines. In general, these vaccines focus on the main HPV oncogenes, E6 and E7. Since expression of E6 and E7 is required for promoting the growth of cervical cancer cells (and cells within warts), it is hoped that immune responses against the two oncogenes might eradicate established tumors.
There is a working therapeutic HPV vaccine, officially called the MEL-1 vaccine and also known as the MVA-E2 vaccine, which has gone through three clinical trials. One study has suggested an immunogenic peptide pool containing epitopes that could be effective against all the high-risk HPV strains circulating globally, consisting of 14 conserved immunogenic peptide fragments from four early proteins (E1, E2, E6 and E7) of 16 high-risk HPV types that provide CD8+ responses.
Therapeutic DNA vaccine VGX-3100, which consists of plasmids pGX3001 and pGX3002, has been granted a waiver by the European Medicines Agency for pediatric treatment of squamous intraepithelial lesions of the cervix caused by HPV types 16 and 18. According to an article published 16 September 2015 in The Lancet, which reviewed the safety, efficacy, and immunogenicity of VGX-3100 in a double-blind, randomized controlled trial (phase 2b) targeting HPV-16 and HPV-18 E6 and E7 proteins for cervical intraepithelial neoplasia 2/3, it is the first therapeutic vaccine to show efficacy against CIN 2/3 associated with HPV-16 and HPV-18. In June 2017, VGX-3100 entered a phase III clinical trial called REVEAL-1 for the treatment of HPV-induced high-grade squamous intraepithelial lesions. The estimated completion time for collecting primary clinical endpoint data is August 2019.
As of October 2020, there are multiple therapeutic HPV vaccines in active development and in clinical trials, based on diverse vaccine platforms (protein-based, viral vector, bacterial vector, lipid encapsulated mRNA).
Awards
In 2009, as part of the Q150 celebrations, the cervical cancer vaccine was announced as one of the Q150 Icons of Queensland for its role in "innovation and invention".
In 2017, National Cancer Institute scientists Douglas R. Lowy and John T. Schiller received the Lasker-DeBakey Clinical Medical Research Award for their contributions leading to the development of HPV vaccines.
References
Further reading
External links
Cancer vaccines
Gynaecological cancer
Infectious causes of cancer
Papillomavirus
Wikipedia medicine articles ready to translate
Protein subunit vaccines
Vaccines
World Health Organization essential medicines (vaccines)
Australian inventions
Cervical cancer
Q150 Icons | HPV vaccine | [
"Biology"
] | 13,478 | [
"Viruses",
"Vaccination",
"Vaccines",
"Papillomavirus"
] |
3,304,717 | https://en.wikipedia.org/wiki/Causal%20filter | In signal processing, a causal filter is a linear and time-invariant causal system. The word causal indicates that the filter output depends only on past and present inputs. A filter whose output also depends on future inputs is non-causal, whereas a filter whose output depends only on future inputs is anti-causal. Systems (including filters) that are realizable (i.e. that operate in real time) must be causal because such systems cannot act on a future input. In effect that means the output sample that best represents the input at time comes out slightly later. A common design practice for digital filters is to create a realizable filter by shortening and/or time-shifting a non-causal impulse response. If shortening is necessary, it is often accomplished as the product of the impulse-response with a window function.
An example of an anti-causal filter is a maximum phase filter, which can be defined as a stable, anti-causal filter whose inverse is also stable and anti-causal.
Example
The following definition is a sliding or moving average of input data . A constant factor of is omitted for simplicity:
where could represent a spatial coordinate, as in image processing. But if represents time , then a moving average defined that way is non-causal (also called non-realizable), because depends on future inputs, such as . A realizable output is
which is a delayed version of the non-realizable output.
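A minimal sketch of this example, written under the assumption of a symmetric averaging window of unit half-width (the original window and symbols are not reproduced in the text above, so these are illustrative choices):

```latex
% Moving average with a symmetric window; the constant factor 1/2 is omitted.
y(t) = \int_{t-1}^{t+1} x(\tau)\, d\tau
% If t is time, this is non-causal: y(t) depends on the future input x(t+1).
% A realizable output is the delayed version
\hat{y}(t) = y(t-1) = \int_{t-2}^{t} x(\tau)\, d\tau .
```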
Any linear filter (such as a moving average) can be characterized by a function h(t) called its impulse response. Its output is the convolution
In those terms, causality requires
and general equality of these two expressions requires h(t) = 0 for all t < 0.
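The convolution and the causality condition it implies can be written out as follows, in the standard form consistent with the surrounding text:

```latex
y(t) = (h * x)(t) = \int_{-\infty}^{\infty} h(\tau)\, x(t-\tau)\, d\tau
% Causality requires that only past and present inputs contribute:
y(t) = \int_{0}^{\infty} h(\tau)\, x(t-\tau)\, d\tau ,
% and equality of the two expressions for every input requires
h(t) = 0 \quad \text{for all } t < 0 .
```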
Characterization of causal filters in the frequency domain
Let h(t) be a causal filter with corresponding Fourier transform H(ω). Define the function
which is non-causal. On the other hand, g(t) is Hermitian and, consequently, its Fourier transform G(ω) is real-valued. We now have the following relation
where Θ(t) is the Heaviside unit step function.
This means that the Fourier transforms of h(t) and g(t) are related as follows
where is a Hilbert transform done in the frequency domain (rather than the time domain). The sign of may depend on the definition of the Fourier Transform.
Taking the Hilbert transform of the above equation yields this relation between "H" and its Hilbert transform:
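One standard construction consistent with the text, written here under an assumed Fourier and Hilbert transform sign convention (stated in the comments); with other conventions the signs change, as noted above:

```latex
% Conventions assumed: H(\omega) = \int h(t)\, e^{-i\omega t}\, dt   and
% \hat{G}(\omega) = \frac{1}{\pi}\,\mathrm{p.v.}\!\int \frac{G(\omega')}{\omega - \omega'}\, d\omega' .
g(t) = \tfrac{1}{2}\bigl[ h(t) + h^{*}(-t) \bigr]   % Hermitian, so G(\omega) is real-valued
h(t) = 2\,\Theta(t)\, g(t)                          % recovers the causal filter
H(\omega) = G(\omega) - i\,\hat{G}(\omega)          % transforms related via the Hilbert transform
\hat{H}(\omega) = i\, H(\omega)                     % relation between H and its own Hilbert transform
```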
References
Signal processing
Filter theory | Causal filter | [
"Technology",
"Engineering"
] | 516 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Filter theory"
] |
3,305,211 | https://en.wikipedia.org/wiki/Quadrature%20filter | In signal processing, a quadrature filter is the analytic representation of the impulse response of a real-valued filter:
If the quadrature filter is applied to a signal , the result is
which implies that is the analytic representation of .
Since is an analytic signal, it is either zero or complex-valued. In practice, therefore, is often implemented as two real-valued filters, which correspond to the real and imaginary parts of the filter, respectively.
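A sketch of the definition and of its use, with assumed symbol names (f for the real-valued impulse response, Hf for its Hilbert transform, s for the input signal), since the original symbols are not reproduced above:

```latex
q(t) = f_{a}(t) = f(t) + i\,(\mathcal{H}f)(t)
% Applying q to a signal s gives the analytic representation of the real output f * s:
(q * s)(t) = (f * s)(t) + i\,\mathcal{H}\{f * s\}(t) ,
% so the real and imaginary parts of q are the two real-valued filters used in practice.
```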
An ideal quadrature filter cannot have a finite support. It has single sided support, but by choosing the (analog) function carefully, it is possible to design quadrature filters which are localized such that they can be approximated by means of functions of finite support. A digital realization without feedback (FIR) has finite support.
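As a sketch of such an FIR realization, the following Python fragment builds a complex-valued (quadrature) filter from a real-valued FIR prototype via an FFT-based Hilbert transform; the filter parameters, sampling rate and test frequency are illustrative assumptions, not values from the text:

```python
import numpy as np
from scipy.signal import firwin, hilbert, lfilter

# Real-valued FIR band-pass prototype (normalized cutoffs are illustrative).
f = firwin(numtaps=101, cutoff=[0.1, 0.3], pass_zero=False)

# Quadrature version: real part is f, imaginary part approximates its Hilbert transform,
# so the tap spectrum is (approximately) single-sided.
q = hilbert(f)

# Apply to a narrow-band test signal; the complex output approximates the
# analytic representation of the real filter output f * s.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
s = np.cos(2 * np.pi * 100 * t)
y = lfilter(q, 1.0, s)
envelope = np.abs(y)  # instantaneous amplitude estimate of the filtered signal
```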
Applications
This construction will simply assemble an analytic signal with a starting point to finally create a causal signal with finite energy. The two Delta Distributions will perform this operation. This will impose an additional constraint on the filter.
Single frequency signals
For single frequency signals (in practice narrow bandwidth signals) with frequency the magnitude of the response of a quadrature filter equals the signal's amplitude A times the frequency function of the filter at frequency .
This property can be useful when the signal s is a narrow-bandwidth signal of unknown frequency. By choosing a suitable frequency function Q of the filter, we may generate known functions of the unknown frequency which then can be estimated.
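A worked version of this statement, under the assumed normalization q = f + i·Hf, for which Q(ω) = 2F(ω) when ω > 0; with other normalizations the constant factor differs:

```latex
% Single-frequency input, \omega_0 > 0:
s(t) = A\cos(\omega_0 t + \varphi)
% Only the positive-frequency component passes the single-sided filter Q:
(q * s)(t) = \tfrac{A}{2}\, Q(\omega_0)\, e^{\,i(\omega_0 t + \varphi)}
\bigl\lvert (q * s)(t) \bigr\rvert = \tfrac{A}{2}\,\lvert Q(\omega_0)\rvert = A\,\lvert F(\omega_0)\rvert
```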
See also
Analytic signal
Hilbert transform
Signal processing | Quadrature filter | [
"Technology",
"Engineering"
] | 300 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
3,305,496 | https://en.wikipedia.org/wiki/MOSE | MOSE () is a project intended to protect the city of Venice, Italy, and the Venetian Lagoon from flooding.
The project is an integrated system consisting of rows of mobile gates, installed on the seafloor at the Lido, Malamocco, and Chioggia inlets, that can be raised to temporarily seal off the Venetian Lagoon from the Adriatic Sea during acqua alta high tides. Together with other measures, such as coastal reinforcement, the raising of quaysides, and paving and improvement of the lagoon, MOSE is designed to protect Venice and the lagoon from tides of up to . As of 2023, the floodgates are raised for tides forecast to be more than .
The Consorzio Venezia Nuova is responsible for the work on behalf of the Ministry of Infrastructure and Transport – Venice Water Authority. Construction began simultaneously at the three inlets in 2003. On 10 July 2020, the first full test was successfully completed. Multiple delays, cost overruns, and scandals caused the project to miss both its 2018 completion deadline (originally a 2011 deadline) and its 2021 deadline; it is now to be finished in 2025. On 3 October 2020, MOSE was activated for the first time during a high tide event, preventing some of the low-lying parts of the city (in particular Piazza San Marco) from being flooded. In 2020, the experts who had conceived the set of floodgates at the three inlets separating the Adriatic Sea from Venice estimated that they would have to be raised about five times each year. Within two years of the inaugural raising of the floodgates, MOSE had been activated 49 times.
Origin of the name
Before the acronym was used to describe the entire flood protection system, MOSE referred to the 1:1 scale prototype of a gate that had been tested between 1988 and 1992 at the Lido inlet.
The name also holds a secondary meaning: "MOSE" alludes to the biblical character Moses ("Mosè" in Italian), who is remembered for parting the Red Sea.
Context
MOSE is part of a General Plan of Interventions to safeguard Venice and the lagoon. The project was begun in 1987 by the Ministry of Infrastructure through the Venice Water Authority (the Ministry's operational arm in the lagoon) and the concessionary Consorzio Venezia Nuova. The measures already completed or underway along the coastline and in the lagoon are the most important environmental defense, restoration, and improvement program ever implemented by the Italian State.
In parallel with the construction of MOSE, the Venice Water Authority and Venice Local Authority are raising quaysides and paving in the city in order to protect built-up areas in the lagoon from medium high tides (below , the height at which the mobile barriers will come into operation). These measures are extremely complex, particularly in urban settings such as Venice and Chioggia where the raising must take account of the delicate architectural and monumental context.
Measures to improve the shallow lagoon environment are aimed at slowing degradation of the morphological structures caused by subsidence, eustatism, and erosion due to waves and wash. Work is underway throughout the lagoon basin to protect, reconstruct, and renaturalise salt marshes, mud flats and shallows; restore the environment of the smaller islands; and dredge lagoon canals and channels.
Important activities are also underway to redress pollution in the industrial area of Porto Marghera, at the edge of the central lagoon. Islands formerly used as rubbish dumps are being secured while industrial canals are being consolidated and sealed after removal of their polluted sediments.
Baby MOSE
'Baby MOSE' is the flood defence system protecting Chioggia from the most frequently occurring high waters, up to a maximum of 130 cm.
Completed in the summer of 2012, it consists of two movable sluices located at the ends of the Vena canal - the canal that crosses the centre of Chioggia from North to South. These can be raised in a few minutes and protect the center of Chioggia from the most frequent high waters.
The two sluices (Vigo and Santa Maria) are identical: 18 m long, 3.3 m wide and 80 cm thick; in the case of threatening tides, they are raised in eight minutes using hydraulic motors.
Together with the previously completed raising of canal banks and of the areas at the edge of the urban centre, Baby MOSE is able to defend against tides of up to 130 cm.
Baby MOSE was completed in time to keep the centre of Chioggia dry during the high water of October 2012.
In the case of more extreme high waters, it is necessary to wait for the main MOSE to come into operation, which blocks the entrance of the tide into the lagoon through the closure of the MOSE barriers.
Baby MOSE, as well as the other interventions of protection and urban redevelopment implemented in Chioggia in recent years, was carried out by the Venice Water Authority, through the Consorzio Venezia Nuova, together with the municipal administration.
Objectives
The aim of MOSE is to protect the lagoon, its towns, villages, and inhabitants along with its iconic historic, artistic, and environmental heritage from floods, including extreme events.
Although the tide in the lagoon basin is lower than in other areas of the world (where it may reach as high as ), the phenomenon may become significant when associated with atmospheric and meteorological factors such as low pressure and the bora, a north-easterly wind coming from Trieste, or the Sirocco, a hot south-easterly wind. Those conditions push waves into the gulf of Venice. High water is also worsened by rain and water flowing into the lagoon from the drainage basin at 36 inflow points associated with small rivers and canals.
Floods have caused damage since ancient times and have become ever more frequent and intense as a result of the combined effect of eustatism (a rise in sea level) and subsidence (a drop in land level) caused by natural and man-induced phenomena. Today, towns and villages in the lagoon are an average of lower with respect to the water level than at the beginning of the 1900s and each year, thousands of floods cause serious problems for the inhabitants as well as deterioration of architecture, urban structures and the ecosystem. Over the entire lagoon area, there is also a constant risk of an extreme catastrophic event such as that of 4 November 1966 (the great flood) when a tide of submerged Venice, Chioggia and the other built-up areas.
Flood effects are exacerbated by greater erosion by the sea caused by human interventions to facilitate port activities (e.g. through the construction of jetties and artificial canals); the establishment of the industrial Porto Marghera area; and increased wash from motorized boats, all of which aggravate erosion of morphological structures and the foundations of quaysides and buildings.
In the future, the high water phenomenon may be further aggravated by the predicted rise in sea level as a result of global warming.
In this context, MOSE, together with reinforcement of the barrier island, has been designed to provide protection from tides of up to in height. The aim of MOSE is to protect the lagoon, even if the most pessimistic hypotheses are proven true, such as a rise in sea level of at least . However, the reports have grown more pessimistic with time compared to when MOSE was originally planned; the 2019 estimate from the IPCC (Intergovernmental Panel on Climate Change) predicts a rise in sea level of between by 2100 if emissions continue to increase, which MOSE was not designed to handle.
MOSE is flexible and can be operated in different ways according to the characteristics and height of the tide. Given that the gates are independent and can be operated separately, all three inlets can be closed in the case of an exceptional event, the inlets can be closed one at a time according to the winds, atmospheric pressure and height of tide forecast, or again, each inlet can be partially closed.
Exceptionally high waters have struck the city since 1935: levels of 140 cm or greater have been recorded during the following floods, with 12 of the 20 events occurring in the 21st century:
Prior to 1936, the highest levels had been in 1879, when on 25 February the water reached 137.5 cm, and on 21 November 1916, when a level of 136 cm occurred. Since 1936, there have been 17 occasions when the level reached between 130 and 140 cm.
All values were recorded at the monitoring station at Punta della Salute (Punta della Dogana) and refer to the 1897 tidal datum point.
The highest tide in over five decades on 13 November 2019 left 85% of the city flooded. Mayor Luigi Brugnaro blamed the situation on climate change. A Washington Post report provided a more thorough analysis: "The sea level has been rising even more rapidly in Venice than in other parts of the world. At the same time, the city is sinking, the result of tectonic plates shifting below the Italian coast. Those factors together, along with the more frequent extreme weather events associated with climate change, contribute to floods."
Operating principles
MOSE consists of rows of mobile gates on the seabed at the three inlets that can be raised to temporarily seal off the lagoon from the sea during high tide. There are 78 gates grouped into four barrier rows. At the Lido inlet, the widest, there are two barrier rows of 21 and 20 gates, respectively, linked by an artificial island (the island connecting the two rows of gates at the centre of the Lido inlet also accommodates the technical buildings housing the system operating plant). In addition, there is one row of 19 gates at the Malamocco inlet and one row of 18 gates at the Chioggia inlet.
The gates consist of metal box-type structures wide for all rows, with a length varying between and from thick. The gates are connected to the concrete housing structures with hinges, the technological heart of the system, which allow the gates to move while attached to the housing structures on the seafloor.
Under normal tidal conditions, the gates are full of water and rest in their housing structures. When a high tide is forecast, compressed air is introduced into the gates to empty them of water, causing them to rotate around the axis of the hinges and rise up until they emerge above the water to stop the tide from entering the lagoon. When the tide recedes, the gates are filled with water again and lowered into their housing.
The inlets are closed for an average of between four and five hours, including the time taken for the gates to be raised (about 30 minutes) and lowered (about 15 minutes).
To guarantee navigation and avoid interruption of activities in the Port of Venice, when the mobile barriers are in operation, a main lock is constructed at the Malamocco inlet to allow the transit of large ships; while at the Lido and Chioggia inlets, there are smaller locks to allow emergency vessels, fishing boats and pleasure craft to shelter and transit.
Operating procedure dictates that the gates will be raised for tides of more than high. The authorities have established this as the optimum height with respect to current sea levels, but the gates can be operated for any level of tide. The MOSE system is flexible: depending on the winds, atmospheric pressure and level of tide, it can oppose the high water in different ways, with simultaneous closure of all three inlets in the case of exceptional tides, by closing just one inlet at a time, or, given that the gates are independent, by partially closing each inlet for medium-high tides.
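As a purely illustrative sketch of this flexible operation, the following Python fragment maps forecast tide levels at the three inlets to one of the closure modes described above; the threshold values and the selection rule are hypothetical placeholders, not the authority's actual operating procedure:

```python
from dataclasses import dataclass

RAISE_THRESHOLD_CM = 100    # hypothetical placeholder; the real threshold is set by the authority
EXCEPTIONAL_TIDE_CM = 140   # hypothetical placeholder for an exceptional event

@dataclass
class Forecast:
    inlet: str      # "Lido", "Malamocco" or "Chioggia"
    tide_cm: float  # forecast tide level at this inlet

def closure_plan(forecasts: list[Forecast]) -> dict[str, str]:
    """Return a closure action per inlet: 'open', 'partial' or 'closed'."""
    peak = max(f.tide_cm for f in forecasts)
    plan = {}
    for f in forecasts:
        if peak >= EXCEPTIONAL_TIDE_CM:
            plan[f.inlet] = "closed"   # exceptional tide: close all three inlets simultaneously
        elif f.tide_cm > RAISE_THRESHOLD_CM:
            plan[f.inlet] = "closed"   # close only the inlets where the forecast exceeds the threshold
        elif f.tide_cm > 0.9 * RAISE_THRESHOLD_CM:
            plan[f.inlet] = "partial"  # medium-high tide: partial closure, since the gates are independent
        else:
            plan[f.inlet] = "open"
    return plan

# Example: a medium-high tide affecting mainly the Chioggia inlet.
print(closure_plan([Forecast("Lido", 95), Forecast("Malamocco", 88), Forecast("Chioggia", 105)]))
```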
Chronology
Following the flood of 4 November 1966 when Venice, Chioggia and the other built-up areas in the lagoon were submerged by a tide of , the first Special Law for Venice declared the problem of safeguarding the city to be of "priority national interest". This marked the beginning of a long legislative and technical process to guarantee Venice and the lagoon an effective sea defence system.
To this end, in 1975 the State Ministry of Public Works issued a competitive tender, but the process ended without a project being chosen from those presented as no hypothesis for action satisfied all the mandated requirements. The Ministry subsequently acquired documents presented during the call for tender and passed them to a group of experts commissioned to draw up a project to preserve the hydraulic balance of the lagoon and protect Venice from floods (the "Progettone" of 1981).
A few years later, a further Special Law (Law no. 798/1984) emphasised the need for a unified approach to safeguarding measures, set up a committee for policy, coordination and control of these activities (the "Comitatone", chaired by the President of the Council of Ministers and consisting of representatives of the competent national and local authorities and institutions) and entrusted design and implementation to a single body, the Consorzio Venezia Nuova, recognising its ability to manage the safeguarding activities as a whole.
The Venice Water Authority – Consorzio Venezia Nuova presented a complex system of interventions to safeguard Venice (the REA "Riequilibrio E Ambiente", "Rebalancing and the Environment" Project), which included mobile barriers at the inlets to regulate tides in the lagoon. In this context, between 1988 and 1992, experiments were carried out on a prototype gate (MOdulo Sperimentale Elettromeccanico, hence the name MOSE) and in 1989, a conceptual design for the mobile barriers was drawn up. This was completed in 1992 and subsequently approved by the Higher Council of Public Works, then subjected to an Environmental Impact Assessment procedure and further developed as requested by the Comitatone. In 2002 the final design was presented and on 3 April 2003, the Comitatone gave the go-ahead for its implementation. The same year, construction sites opened at the three lagoon inlets of Lido, Malamocco and Chioggia.
Construction
Construction of MOSE was authorised by the "Comitatone" on 3 April 2003 and the associated construction sites opened the same year. Work began simultaneously and continues in parallel at the three inlets of Lido, Malamocco and Chioggia. Work on the structural parts (foundations, mobile barrier abutments, gate housing structures), associated structures (breakwaters, small craft harbours, locks) and parts for operating the system (technical buildings, plant) is now at an advanced stage.
Currently about 4000 people are employed in the construction of MOSE.
As well as the construction sites at the inlets, fabrication of the main components of MOSE (the hinges, the technological heart of the system which constrain the gates to their housing and allow them to move, and the gates) is also proceeding. Restructuring of the buildings and spaces in the area of the Venice Arsenal where maintenance of MOSE and management of the system will be located is also underway.
Lagoon inlet construction sites
Construction of MOSE at the inlets necessitates complex logistical organisation. These are located in a highly delicate environmental context so as to avoid interfering with the surrounding area as far as possible. The sites have been set up on temporary areas of water in order to limit occupation of the land adjacent to the inlets and reduce as far as possible the effect on activities taking place there. Materials (for example, site supplies) and machines are also moved via sea to avoid overloading the road system along the coast. Since the sites opened, all work has been carried out without interrupting transit through the inlet channels.
Below is a description of the work underway and already completed at each inlet, listed in order from North to South.
Lido inlet
There will be two rows of gates at the Lido inlet (21 mobile gates for the North barrier Lido-Treporti and 20 mobile gates for the South barrier Lido-San Nicolò).
To the north of the inlet (Treporti), a small craft harbour consisting of two basins communicating through a lock, will allow small craft and emergency vessels to shelter and transit when the gates are raised. The sea-side basin was temporarily drained and sealed for use as the site to construct the gate housing structures for this barrier. Once the housing structures had been completed, the area was flooded with water to allow the housing structures to be floated out.
The housing structures for the gates in the north barrier (seven housing structures and two for the abutment connections) were positioned on the seabed. Four of this barrier's gates were installed and manoeuvred for the first time in October 2013; at the end of 2014, the installation of 21 gates was completed and operational for functional testing purposes (the so-called "blank tests").
At the south of the inlet (San Nicolò), the launch and the positioning of seven housing structures and two for the abutment connections has been completed (the structures have been fabricated on a temporary raised area in the Malamocco inlet and will be taken out to sea by a giant mobile platform which functions as a giant elevator).
At the centre of the inlet, a new island has been constructed to act as an intermediate structure between the two rows of mobile gates. This island will accommodate the buildings and plant for operating the gates (construction underway).
Outside the inlet, a long curved breakwater is almost complete.
Malamocco inlet
A temporary construction site has been set up alongside the basin to fabricate the gate housing structures to be positioned on the sea bed (Malamocco and Lido San Nicolò barriers, seven housing structures and two for the abutment connections for each barrier have been built).
In April 2014, the lock for the transit of large ships became operative to avoid interference with port activities when the gates are in operation.
Positioning of the gate housing structures for the Malamocco barrier was completed in October 2014.
The seabed in the area where the 19 gates will be installed has been reinforced.
Outside the inlet, a long curved breakwater designed to attenuate tidal currents and define a basin of calm water to protect the lock has been completed.
Chioggia inlet
Work has been completed to construct a small craft harbour with double lock to guarantee transit of a large number of fishing vessels when the gates are in operation.
The sea-side basin has been temporarily drained and sealed for use as a construction site to fabricate the gate housing structures, as for the Lido north inlet barrier.
Positioning for the gate housing structures was completed in October 2014.
In the inlet channel, the seabed in the area where the 18 gates will be installed has been reinforced.
Outside the inlet, a long curved breakwater has been completed.
Hinges
The hinges form the technological heart of the sea defence system. They constrain the gates to the housing structures, allow them to move and connect the gates to the operating plant.
Each steel hinge consists of a male element ( high and weighing ) connected to the gate, a female element ( high and weighing ) fastened to the housing structure, and an attachment assembly to connect the male and female elements.
A total of 156 hinges (two for each gate) will be fabricated, together with a number of reserve elements.
Work progress
On 10 July 2020, the first full test of the system was successfully conducted. Amid fanfare, the Italian prime minister, Giuseppe Conte, activated the 78 mobile gates. MOSE is expected to be fully functional by the end of 2025. It was used to actively combat threatened flooding on 3 October 2020.
Venice Arsenal
Since 2011, the MOSE control centre and management functions for the lagoon system have been located in the Venice Arsenal, symbol of the former trading and military might of the historic Serenissima or "Serene Republic". Numerous historical buildings, in a state of decay and abandonment for decades, have already been restored and reorganisation of the area is underway to accommodate these new activities.
Restoration has enabled a heritage of extraordinary historical and architectural value to be safeguarded and allowed buildings to be recovered and re-utilised. As home to MOSE management and control the arsenal will receive a new lease of life after years of abandonment, allowing its renaissance as a place of innovation and production, with important economic repercussions for the city and local area.
The historic arsenal buildings before and after restoration and construction of infrastructure to accommodate the new functions are shown below.
In the control centre, key decisions will be taken on raising and lowering the mobile barriers according to measurements made by tide gauges positioned in front of the lagoon inlets to record the rising tide in real time. The command to raise the gate will be given when water reaches the level established by the procedure to begin the manoeuvre and guarantee that the water level in the lagoon does not exceed the requisite safe level.
Specifications
Four mobile barriers are constructed at the lagoon inlets (two at the Lido inlet, one at Malamocco and one at Chioggia)
The project has a total of of mobile barriers
There are of linear worksites on land and at sea
MOSE has a total of 78 gates
The smallest gate is (Lido–Treporti row)
The largest gate is (Malamocco row)
One lock for large shipping at the Malamocco inlet enables port activities to continue when the gates are in operation
Three small locks (two at Chioggia and one at Lido-Treporti) allow the transit of fishing boats and other smaller vessels when the gates are in operation
There are 156 hinges, two for each gate and a number of reserve elements
Each hinge weighs 42 tons
The gates are designed to withstand a maximum tide (to date, the highest tide has been )
MOSE is designed to cope with a rise in sea level
30 minutes are required to raise the gates
15 minutes are required to lower the gates back into their housing structures
During a tidal event, the inlets remain closed for 4 to 5 hours, including barrier raising and lowering times
The site currently employs 4,000 people directly or indirectly
Projections
The MOSE project is estimated to cost €5.496 billion, up €1.3 billion from initial cost projections. On 30 January 2019, the last of the 78 gates was put in place. In November 2019, the project was 94% completed and was expected to be ready by the end of 2021, and later moved to 2025.
Criticism, corruption, and court cases
The project has met resistance from environmental and conservation groups such as Italia Nostra, and the World Wide Fund for Nature, who have made negative comments about the project.
Criticisms of the MOSE project, which environmentalists and certain political forces have opposed since its inception, relate to the costs to the Italian State of construction, management, and maintenance, which are said to be much higher than those for alternative systems employed by the Netherlands and England to resolve similar problems. In addition, according to the project's opponents, the monolithic integrated system is not "gradual, experimental and reversible" as required by the Special Law for the Safeguarding of Venice. There have also been criticisms of the environmental impact of the barriers, not just at the inlets where complex leveling will be carried out (the seabed must be flat at the barrier installation sites) and the lagoon bed reinforced to accommodate the gates (which will rest on thousands of concrete piles driven deep underground), but also on the hydrogeological balance and delicate ecosystem of the lagoon.
The NO MOSE front also points to a number of potentially critical weaknesses in the structure of the system and to its possible inability to cope with predicted rises in sea level.
Over the years, nine appeals have been presented, eight of which have been rejected by the TAR (Tribunale Amministrativo Regionale, meaning the Regional Administrative Tribunal) and the Council of State. The ninth, currently under evaluation by the TAR, was presented by the Venice Local Authority and contests the favourable opinion of the Safeguard Venice Commission on the commencement of work at the Pellestrina site in the Malamocco inlet. Here, part of the MOSE gate housing caissons will be made using processes which, according to the local authority, could damage a site of special natural interest.
Environmental associations have also requested the intervention of the European Union (EU), as project activities affect sites protected by the Natura 2000 Network and the European Directive on birds. Following a report of 5 March 2004 by the Venetian MP Luana Zanella, on 19 December 2005 the European Commission opened an infraction procedure against Italy for "pollution of the habitat" of the lagoon. The European Environmental Commission Directorate General considers that, as it has "neither identified nor adopted, in relation to the impacts on the area 'IBA 064-Venice Lagoon' resulting from construction of the MOSE project, appropriate measures to prevent pollution and deterioration of the habitat, together with harmful disturbance of birds with significant consequences in the light of the objectives of article 4 of EEC Directive 79/409, the Italian Republic has not fulfilled its obligations under Article 4, Paragraph 4, of EEC Directive 79/409 of the Council of 2 April 1979 on the conservation of wild birds."
Although the European Environmental Commission has said that the initiative is not intended to stop MOSE going ahead, the body has called on the Italian Government to produce new information on the impact of the sites and the environmental mitigation structures. The Water Authority and Consorzio Venezia Nuova both confirm that the construction sites are temporary and will be completely restored at the end of the work.
In 2014, 35 people, including Giorgio Orsoni, the Mayor of Venice, were arrested in Italy on corruption charges in connection with the MOSE Project. Orsoni was accused of receiving illicit funds from the Consorzio Venezia Nuova, the consortium behind the construction of the project, which he then used in his campaign to be elected mayor. There were allegations that 20 million euros in public funds had been sent to foreign bank accounts and used to finance political parties.
Following the legal proceedings that occurred between 2013 and 2014, which involved part of the management bodies of the Consorzio Venezia Nuova and its companies, the State intervened in order to ensure the conclusion of the flood defence system: in December 2014, the ANAC (National Anti-Corruption Authority) proposed the extraordinary management of the Consorzio, which was followed by the appointment of three Special Chief Executive Officers.
The Special Administration of the Consorzio pursued its task of guaranteeing the proper completion of MOSE and ensuring the conclusion of the defence system by 2018.
While MOSE's supporters say it can handle the threat of rising waters from global warming, others have doubted that the project can face this challenge. Luigi D’Alpaos, a professor emeritus of hydraulics, for example, wrote that "MOSE is obsolete and philosophically wrong, conceptually wrong." The problem is that while the gates could hypothetically deal with rising waters, they could only do so by raising the floodgates so often that they would function as a "near-permanent wall." This, in turn, would devastate the lagoon's drainage and interchange with the Adriatic Sea; Venice's lagoon would become a "stagnant pool for algae and waste" if the gates were usually left up.
See also
Acqua alta
Flood control
Floodgate
Oosterscheldekering, part of the Delta Works
Saint Petersburg Dam (Saint Petersburg Flood Prevention Facility Complex)
Thames Barrier
Notes
References
External links
www.mosevenezia.eu is the website dedicated to activities to safeguard Venice and the lagoon implemented by the Italian State, with a specific section on Mose.
www.magisacque.it is the website of the Venice Water Authority, a local branch of the Ministry of Infrastructure and Transport whose responsibilities include management, safety and hydraulic protection of the Venice lagoon.
www.consorziovenezianuova.it is the website of the Consorzio Venezia Nuova, the Ministry of Infrastructure and Transport – Venice Water Authority concessionary for implementing the measures to safeguard Venice and the lagoon delegated to the State.
www.comune.venezia.it – City of Venice official website.
Buildings and structures in Venice
Buildings and structures under construction in Italy
Environmental engineering
Flood barriers
Flood control in Italy
Geography of Venice
Hydrology
Science and technology in Italy | MOSE | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 5,775 | [
"Chemical engineering",
"Hydrology",
"Civil engineering",
"Environmental engineering"
] |
3,305,678 | https://en.wikipedia.org/wiki/Serge%20Rudaz | Serge Rudaz (born August 19, 1954, pronounced "Rü-DAH") is a Canadian theoretical physicist and professor of physics at the University of Minnesota. He previously served as the director of undergraduate studies of the University of Minnesota's physics department, and is now the director of undergraduate honors at the University of Minnesota. Rudaz received his Ph.D. in 1979 from Cornell University and his undergraduate degree from McGill University.
Teaching
In the spring of 2007, Rudaz was named as the director of the University of Minnesota's new campus-wide honors program, which began operation during the fall of 2008.
Research
In 1995, he was elected Fellow of the American Physical Society "for original and influential contributions to the phenomenology of heavy quarks, supersymmetry and grand unification, and particle astrophysics." In 1985, Rudaz was the recipient of the Canadian Association of Physicists Herzberg Medal. He is the only physicist in the Herzberg Medal's history from a non-Canadian institution.
Rudaz's research interests include:
unified theories of elementary particle interactions and their phenomenology, applications to cosmology and the particle/astrophysics interface
relativistic many-body physics, including phase transitions in field theories at finite temperature and density; models of hadronic interactions
physics of topological defect formation in the early universe and in condensed systems
See also
Penguin diagrams
References
External links
Serge Rudaz's Curriculum Vitae
Head of the Class
Serge Rudaz named founding director of new University Honors Program
1954 births
Living people
McGill University alumni
Cornell University alumni
Canadian physicists
Particle physicists
University of Minnesota faculty
Fellows of the American Physical Society | Serge Rudaz | [
"Physics"
] | 348 | [
"Particle physicists",
"Particle physics"
] |
15,136,870 | https://en.wikipedia.org/wiki/DNA%20polymerase%20eta | DNA polymerase eta (Pol η) is a protein that in humans is encoded by the POLH gene.
DNA polymerase eta is a eukaryotic DNA polymerase involved in DNA repair by translesion synthesis. The gene encoding DNA polymerase eta is POLH, also known as XPV, because loss of this gene results in the disease xeroderma pigmentosum. Polymerase eta is particularly important for allowing accurate translesion synthesis of DNA damage resulting from ultraviolet (UV) radiation.
Function
This gene encodes a member of the Y family of specialized DNA polymerases. It copies undamaged DNA with a lower fidelity than other DNA-directed polymerases. However, it accurately replicates UV-damaged DNA; when thymine dimers are present, this polymerase inserts the complementary nucleotides in the newly synthesized DNA, thereby bypassing the lesion and suppressing the mutagenic effect of UV-induced DNA damage. This polymerase is thought to be involved in hypermutation during immunoglobulin class switch recombination.
Bypass of 8-oxoguanine
During DNA replication of the Saccharomyces cerevisiae chromosome, the oxidative DNA damage 8-oxoguanine triggers a switch to translesion synthesis by DNA polymerase eta. This polymerase replicates 8-oxoguanine with an accuracy (insertion of a cytosine opposite the 8-oxoguanine) of approximately 94%. In the absence of DNA polymerase eta, the accuracy of replication of 8-oxoguanine is less than 40%.
Clinical significance
Mutations in this gene result in XPV, a variant type of xeroderma pigmentosum characterized by sun sensitivity, an elevated incidence of skin cancer and, at the cellular level, by delayed replication and hypermutability after UV irradiation.
Interactions
POLH has been shown to interact with PCNA.
References
Further reading
External links
GeneReviews/NIH/NCBI/UW entry on Xeroderma Pigmentosum
DNA replication
DNA-binding proteins | DNA polymerase eta | [
"Biology"
] | 431 | [
"DNA replication",
"Molecular genetics",
"Genetics techniques"
] |
15,143,012 | https://en.wikipedia.org/wiki/Overlapping%20distribution%20method | The Overlapping distribution method was introduced by Charles H. Bennett for estimating chemical potential.
Theory
For two N-particle systems 0 and 1 with partition functions Q_0 and Q_1, from
F = −k_B·T·ln Q,
the thermodynamic free energy difference is
βΔF = −ln(Q_1/Q_0).
For every configuration s^N visited during this sampling of system 1 we can compute the potential energy U(s^N) as a function of the configuration space, and the potential energy difference is
ΔU(s^N) = U_1(s^N) − U_0(s^N).
Now construct a probability density of the potential energy difference from the above equation:
p_1(ΔU) = (1/Z_1) ∫ ds^N exp(−βU_1(s^N)) δ(U_1 − U_0 − ΔU),
where Z_i is the configurational part of the partition function Q_i; since the kinetic contributions to Q_0 and Q_1 are identical and cancel, βΔF = −ln(Z_1/Z_0).
Writing U_1 = U_0 + ΔU inside the delta function gives
p_1(ΔU) = (Z_0/Z_1)·exp(−βΔU)·p_0(ΔU) = exp(β(ΔF − ΔU))·p_0(ΔU).
Now define two functions:
f_0(ΔU) = ln p_0(ΔU) − βΔU/2
f_1(ΔU) = ln p_1(ΔU) + βΔU/2,
so that
f_1(ΔU) = f_0(ΔU) + βΔF,
and βΔF can be obtained by fitting f_0(ΔU) and f_1(ΔU) in the region where the two distributions overlap.
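The following is a minimal numerical sketch of the procedure (not part of the original description): two one-dimensional harmonic systems with different spring constants are sampled, the two distributions of ΔU are histogrammed, and βΔF is read off from the offset between f_1 and f_0. All parameter values are arbitrary.

```python
import numpy as np

beta = 1.0
k0, k1 = 1.0, 2.0                       # spring constants of systems 0 and 1
n = 200_000
rng = np.random.default_rng(0)

# Boltzmann sampling of 1-D harmonic oscillators: x ~ N(0, 1/(beta*k))
x0 = rng.normal(0.0, 1.0 / np.sqrt(beta * k0), n)
x1 = rng.normal(0.0, 1.0 / np.sqrt(beta * k1), n)

def dU(x):
    return 0.5 * (k1 - k0) * x**2       # Delta U = U_1(x) - U_0(x)

bins = np.linspace(0.0, 3.0, 61)
p0, _ = np.histogram(dU(x0), bins=bins, density=True)   # p_0(Delta U)
p1, _ = np.histogram(dU(x1), bins=bins, density=True)   # p_1(Delta U)
mid = 0.5 * (bins[:-1] + bins[1:])

overlap = (p0 > 0) & (p1 > 0)           # bins where both distributions have samples
f0 = np.log(p0[overlap]) - 0.5 * beta * mid[overlap]
f1 = np.log(p1[overlap]) + 0.5 * beta * mid[overlap]

# f_1 - f_0 should be flat and equal to beta*Delta F; a crude average stands in for a fit
print("estimate:", (f1 - f0).mean(), "exact:", 0.5 * np.log(k1 / k0))
```

For k0 = 1 and k1 = 2 the exact result is 0.5·ln 2 ≈ 0.347, which the estimate approaches as the number of samples grows; a real application would fit smooth curves to f_0 and f_1 rather than averaging bin by bin.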
References
Potentials
Chemical thermodynamics | Overlapping distribution method | [
"Chemistry"
] | 125 | [
"Chemical thermodynamics"
] |
15,143,957 | https://en.wikipedia.org/wiki/Layer%20by%20layer | Layer-by-layer (LbL) deposition is a thin film fabrication technique. The films are formed by depositing alternating layers of complementary materials with wash steps in between. This can be accomplished by using various techniques such as immersion, spin, spray, electromagnetism, or fluidics.
Development
The first implementation of this technique is attributed to J. J. Kirkland and R. K. Iler of DuPont, who carried it out using microparticles in 1966. The method was later revitalized by the discovery of its applicability to a wide range of polyelectrolytes by Gero Decher at Johannes Gutenberg-Universität Mainz.
Implementation
A simple representation can be made by defining two oppositely charged polyions as + and -, and defining the wash step as W. To make an LbL film with 5 bilayers, one would deposit W+W-W+W-W+W-W+W-W+W-W, giving the layer sequence + - + - + - + - + -.
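As an illustration only (not part of the source text), this deposition schedule can be generated mechanically for any number of bilayers; the helper name below is hypothetical.

```python
def lbl_sequence(n_bilayers: int) -> str:
    """Deposition schedule: a wash step (W) separates every polycation (+)
    and polyanion (-) exposure."""
    return "W" + "+W-W" * n_bilayers

print(lbl_sequence(5))   # W+W-W+W-W+W-W+W-W+W-W  -> five (+ -) bilayers
```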
The representation of the LbL technique as a multilayer build-up based solely on electrostatic attraction is a simplification. Other interactions are involved in this process, including hydrophobic attraction. Multilayer build-up is enabled by multiple attractive forces acting cooperatively, typical for high-molecular weight building blocks, while electrostatic repulsion provides self-limitation of the absorption of individual layers. This range of interactions makes it possible to extend the LbL technique to hydrogen-bonded films, nanoparticles, similarly charged polymers, hydrophobic solvents, and other unusual systems.
The bilayers and wash steps can be performed in many different ways including dip coating, spin-coating, spray-coating, flow based techniques and electro-magnetic techniques. The preparation method distinctly impacts the properties of the resultant films, allowing various applications to be realized. For example, a whole car has been coated with spray assembly, optically transparent films have been prepared with spin assembly, etc. Characterization of LbL film deposition is typically done by optical techniques such as dual polarisation interferometry or ellipsometry or mechanical techniques such as quartz crystal microbalance.
LbL offers several advantages over other thin film deposition methods. LbL is simple and can be inexpensive. A wide variety of materials can be deposited by LbL, including polyions, metals, ceramics, nanoparticles, and biological molecules. Another important quality of LbL is the high degree of control over thickness, which arises from the growth profile of the films and correlates directly with the materials used, the number of bilayers, and the assembly technique. Because each bilayer can be as thin as 1 nm, this method offers easy control over thickness with 1 nm resolution.
Applications
LbL has found applications in protein purification, corrosion control, (photo)electrocatalysis, biomedical applications, ultrastrong materials, and many more. LbL composites made from graphene oxide heralded the appearance of numerous later graphene and graphene oxide composites. The first use of reduced graphene oxide composites for lithium batteries was also demonstrated with LbL multilayers.
See also
Atomic layer deposition
References
Thin films | Layer by layer | [
"Materials_science",
"Mathematics",
"Engineering"
] | 683 | [
"Nanotechnology",
"Planes (geometry)",
"Thin films",
"Materials science"
] |
10,029,222 | https://en.wikipedia.org/wiki/Bcl-xL | B-cell lymphoma-extra large (Bcl-xL), encoded by the BCL2-like 1 gene, is a transmembrane molecule in the mitochondria. It is a member of the Bcl-2 family of proteins, and acts as an anti-apoptotic protein by preventing the release of mitochondrial contents such as cytochrome c, which leads to caspase activation and ultimately, programmed cell death.
Function
It is a well-established concept in the field of apoptosis that the relative amounts of pro- and anti-survival Bcl-2 family proteins determine whether the cell will undergo cell death; if more Bcl-xL is present, then pores are non-permeable to pro-apoptotic molecules and the cell survives. However, if Bax and Bak become activated, and Bcl-xL is sequestered away by gatekeeper BH3-only factors (e.g. Bim), causing a pore to form, cytochrome c is released, leading to initiation of the caspase cascade and apoptotic events.
While the exact signaling pathway of Bcl-xL is still not known, it is believed that Bcl-xL differs greatly from Bcl-2 in its mechanism of inducing apoptosis. Bcl-xL is about ten times more functional than Bcl-2 when induced by the chemotherapy drug doxorubicin and can specifically bind to cytochrome c residues, preventing apoptosis. It can also prevent the formation of the Apaf-1 and Caspase 9 complex by acting directly upon Apaf-1 rather than Caspase 9, as shown in nematode homologs.
Clinical significance
Bcl-xL dysfunction in mice can cause ineffective production of red blood cells, severe anemia, hemolysis, and death. This protein has also been shown to be a requirement for heme production, and in the erythroid lineage Bcl-xL is a major survival factor, responsible for an estimated half of the total survival "signal" proerythroblasts must receive in order to survive and become red cells. The Bcl-xL promoter contains GATA-1 and Stat5 sites. This protein accumulates throughout differentiation, ensuring the survival of erythroid progenitors. Because iron metabolism and incorporation into hemoglobin occur inside the mitochondria, Bcl-xL has been suggested to play additional roles in regulating this process in erythrocytes, which could lead to a role in polycythemia vera, a disease in which there is an overproduction of erythrocytes.
Similar to other Bcl-2 family members, Bcl-xL has been implicated in the survival of cancer cells by inhibiting the function of p53, a tumor suppressor. In cancerous mouse cells, those which contained Bcl-xL were able to survive while those that only expressed p53 died in a small period of time.
Bcl-xL is a target of various senolytic agents. Studies of cell cultures of senescent human umbilical vein endothelial cells have shown that both fisetin and quercetin induce apoptosis by inhibition of Bcl-xL. Fisetin has roughly twice the senolytic potency as quercetin.
Related proteins
Other Bcl-2 proteins include Bcl-2, Bcl-w, Bcl-xs, and Mcl-1.
References
Mitochondria
Cancer research
Apoptosis | Bcl-xL | [
"Chemistry"
] | 729 | [
"Mitochondria",
"Metabolism",
"Apoptosis",
"Signal transduction"
] |
10,030,230 | https://en.wikipedia.org/wiki/BMDP | BMDP was a statistical package developed in 1965 by Wilfrid Dixon at the University of California, Los Angeles. The acronym stands for Bio-Medical Data Package, the word package was added by Dixon as the software consisted of a series of programs (subroutines) which performed different parametric and nonparametric statistical analyses.
BMDP was originally distributed for free. It was later sold by Statsols, which was originally a subsidiary of BMDP but, through a management buy-out, became the now independent company Statistical Solutions Ltd, known as Statsols. BMDP is no longer available. The company decided to offer only its other statistical product, nQuery Sample Size Software.
References
External links
Article on the Free Online Dictionary of Computing
Statistical software
Windows-only proprietary software
1960s software
Biostatistics | BMDP | [
"Mathematics"
] | 166 | [
"Statistical software",
"Mathematical software"
] |
10,030,306 | https://en.wikipedia.org/wiki/ABCD%20Schema | The Access to Biological Collections Data (ABCD) schema is a highly structured data exchange and access model for taxon occurrence data (specimens, observations, etc. of living organisms), i.e. primary biodiversity data.
In 2006, an 'Extension For Geosciences' was added to the schema, to form the ABCDEFG Schema, and in 2010, Biodiversity Information Standards (TDWG) published a draft standard extension for DNA, called ABCDDNA.
References
External links
https://www.tdwg.org/standards/abcd/
http://www.codata.org/
https://web.archive.org/web/20070929124618/http://www.wfcc.nig.ac.jp/NEWSLETTER/newsletter36/a6.pdf the ABCD database schema
Bioinformatics
XML-based standards | ABCD Schema | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 190 | [
"Biological engineering",
"Bioinformatics stubs",
"Computer standards",
"Biotechnology stubs",
"Biochemistry stubs",
"Bioinformatics",
"XML-based standards"
] |
19,084,825 | https://en.wikipedia.org/wiki/Metisazone | Methisazone (USAN) or metisazone (INN) is an antiviral drug that works by inhibiting mRNA and protein synthesis, especially in pox viruses. During trials in the 1960s it showed promising results against smallpox infection, but widespread use was considered logistically impractical in the developing countries facing smallpox cases, and it saw only limited use. In developed countries able to cope with the logistic challenge, treatment of smallpox could be achieved just as effectively with immunoglobulin therapy, without the severe nausea associated with metisazone.
Methisazone has been described as being used in prophylaxis since at least 1965.
The condensation of N-methylisatin with thiosemicarbazide leads to methisazone.
References
Antiviral drugs
Thioureas
Indolines
Lactams
Thiosemicarbazones
Oxindoles | Metisazone | [
"Chemistry",
"Biology"
] | 186 | [
"Antiviral drugs",
"Biocides",
"Functional groups",
"Thiosemicarbazones"
] |
19,085,925 | https://en.wikipedia.org/wiki/List%20of%20hybrid%20creatures%20in%20folklore | The following is a list of hybrid entities from the folklore record, grouped morphologically. Hybrids not found in classical mythology but developed in the context of modern popular culture are listed in the Modern fiction section below.
Mythology
Head of one animal, body of another
Mammalian bipeds
Anubis – The jackal-headed Egyptian God.
Bastet – The cat-headed Egyptian Goddess.
Cynocephalus – A dog-headed creature.
Daksha – His head was replaced by a goat's head after a beheading.
Ganesha – An elephant-headed God.
Hayagriva – A horse-headed avatar.
Tumburu - A horse faced Hindu deity.
Horse-Face – A horse-headed guardian or type of guardian of the Underworld in Chinese mythology.
Ipotane – A race of half-horse half-humans, usually depicted as the reverse of centaurs.
Keibu Keioiba (alias Kabui Keioiba) – A Meitei folkloric mythical creature having the head of a tiger and the remaining body of a human.
Khnum – The ram-headed Egyptian God.
Maahes, Pakhet, Sekhmet, and Tefnut – Each of these Egyptian Gods has the head of a lion.
Minotaur – A creature that has the body of a human with the head, tail, and occasional hindquarters of a bull.
Nandi – Some Puranas describe Nandi or Nandikeshvara as bull-faced, with a human body that resembles that of Shiva in proportion and aspect.
Narasimha – A Hindu deity with a lion-like face.
Ox-Head – An ox-headed guardian or type of guardian of the Underworld in Chinese mythology.
Penghou – A Chinese tree spirit with the face of a human and the body of a dog.
Pratyangira – A Hindu Goddess having the head of a lion.
Sekhmet – The lioness-headed Egyptian Goddess.
Set – The dog-headed Egyptian God.
Tikbalang - A tall Filipino horse-headed man.
Varaha – A boar-headed avatar.
Zhu Bajie – A pig-headed major character of the novel Journey to the West.
Other bipeds
Alkonost – A creature from Russian folklore with the head of a woman with the body of a bird, said to make beautiful sounds that make anyone who hears them forget all that they know and not want anything more ever again.
Bird goddess – Vinca figures of a woman with a bird head.
Cuca - A creature from Brazilian folklore and female counterpart of the Coco that is depicted as a witch with the head of an alligator. It will catch and eat children that disobey their parents.
Gamayun – A Russian creature portrayed with the head of a woman and the body of a bird.
Heqet – The frog-headed Egyptian God.
Horus, Monthu, Ra, and Seker – Each of these Egyptian Gods has the head of a falcon or hawk.
Inmyeonjo – A human face with bird body creature in ancient Korean mythology.
Karura – A divine creature of Japanese Hindu-Buddhist mythology with the head of a bird and the torso of a human.
Kuk – Kuk's male form has a frog head while his female form has a snake head.
Meretseger – The cobra-headed Egyptian Goddess.
Sirin – Half-bird, half-human creature with the head and chest of a woman from Russian folklore; its bird half is generally that of an owl's body.
Sobek – The crocodile-headed Egyptian God.
Thoth – The ibis-headed Egyptian God.
Quadrupeds
Akhekh - A creature from Egyptian mythology with the body of an oryx and the wings and snout of a bird.
Allocamelus – A Heraldic creature that has the head of a donkey and the body of a camel.
Bai Ze – A creature from Chinese mythology with the head of a human and the body of a cow with six horns and nine eyes.
Catoblepas - One version of the creature in Gustave Flaubert's The Temptation of Saint Anthony depicts it with the head of a wild boar and the body of a black African buffalo.
Criosphinx – A Sphinx that has the head of a ram.
Gajasimha – A creature with the head of an elephant and the body of a lion.
Gye-lyong – A creature from Korean mythology with the head of a chicken and the body of a dragon.
Hieracosphinx – A type of Sphinx that had a hawk head.
Jinmenken - A Japanese creature with the face of a human and the body of a dog.
Kudan - A Japanese creature with the face of a human and the body of a cow
Shug Monkey – A creature that is part-monkey and part-dog.
Other
Atargatis – Human face, fish body.
Draconcopedes (snake-feet) – "Snake-feet are large and powerful serpents, with faces very like those of human maidens and necks ending in serpent bodies" as described by Vincent of Beauvais.
Gajamina – A creature with the head of an elephant and body of a fish.
Merlion – A creature with the head of a lion and the body of a fish.
Nure-onna – A creature with the head of a woman and the body of a snake.
Tam Đầu Cửu Vĩ or Ông Lốt - A divine beast with three human heads and a nine-tailed snake body, the mount of the god Ông Hoàng Bơ in Đạo Mẫu in Vietnamese folk religion.
Ugajin - A harvest and fertility kami of Japanese mythology with the body of a snake and the head of a bearded man, for the masculine variant or the head of a woman, for the female variant.
Ushi-oni – A Yōkai with the head of a bull and the body of a spider.
Zhuyin – A creature with the face of a man and the body of a snake.
Front of one animal, rear of another
Echidna – A half-woman and half-snake monster that lives inside a cave.
Fu Xi – A god said to have been made by Nu Wa.
Glaistig – A Scottish fairy or ghost who can take the form of a goat-human hybrid.
Griffin – A creature with the front quarters of an eagle and the hind quarters of a lion. Some depictions also depict it as having a snake-headed tail.
Harpy – A half-bird, half-woman creature of Greek mythology, portrayed sometimes as a woman with bird wings and legs.
Hippalectryon – A creature with the front half of a horse and the rear half has a rooster's wings, tail, and legs.
Hippocampus (or Hippocamp) – A Greek mythological creature that is half-horse half-fish.
Hippogriff – A creature with the front quarters of an eagle and hind quarters of a horse.
Jengu – A water spirit with the tail of a fish.
Ketu – An Asura who has the lower parts of a snake and said to have four arms.
Lamia – A female with a lower body like that of a snake; also spelled Lamiai. This should not be confused with the Greco-Roman Lamia.
Matsya – An avatar of Lord Vishnu that is half-man half-fish.
Merfolk – A race of half-human, half-fish creatures. The males are called Mermen and the females are called Mermaids.
Auvekoejak – A merman from Inuit folklore of Greenland and northern Canada that has fur on its fish tail instead of scales.
Ceasg – A Scottish mermaid.
Sirena – A mermaid from Philippine folklore.
Siyokoy – Mermen with scaled bodies from Philippine folklore. It is the male counterpart of the Sirena.
Nü Wa – A woman with the lower body of a serpent in Chinese folklore.
Nāga – A term referring to human/snake mixes of all kinds.
Onocentaur – A creature that has the upper body of a human with the lower body of a donkey and is often portrayed with only two legs.
Ophiotaurus – A creature that has the upper body of a bull and the lower body of a snake.
Peryton – A deer with the wings of a bird.
Sea goat – A creature that is half-goat half-fish.
Sea-griffin – A griffin variant with the hindquarters of a fish.
Sea-lion – A creature with the head and upper body of a lion and the tail of a fish.
Siren – Half-bird, half-woman creature of Greek mythology, who lured sailors to their deaths with their singing voices.
Skvader – A Swedish creature with the forequarters and hind-legs of a hare and the back, wings and tail of a female wood grouse.
Tatzelwurm – A creature with the face of a cat and a serpentine body.
Tlanchana – An aquatic deity that is part woman and part snake.
Triton – A Greek God and the son of Poseidon who has the same description as the Merman. Some depictions have him with two fish tails.
Valravn – A Danish creature that in some description is half-raven half-wolf.
Body of one animal as head of another
Anggitay – A strictly-female creature that has the upper body of a human with the lower body of a horse.
Centaur – A creature that has the upper body of a human with the lower body of a horse.
Khepri – The dung beetle-headed Egyptian God.
Kinnara – Half-human, half-bird in later Indian mythology.
Kurma – Upper-half human, lower-half tortoise.
Ichthyocentaurs – Creatures that have the torsos of a man or woman, the front legs of a horse, and the tails of a fish.
Scorpion man – Half-man half-scorpion.
Serpopard – A creature that is part-snake and part-African leopard.
Animals with extra parts
Angel – Humanoid creatures who are generally depicted with bird-like wings. In Abrahamic mythology and Zoroastrianism mythology, angels are often depicted as benevolent celestial beings who act as messengers between God and humans.
Bat – An Egyptian goddess with the horns and ears of a cow.
Cernunnos – An ancient Gaulish/Celtic God with the antlers of a deer.
Fairy – A humanoid with insect-like wings.
Hathor – An Egyptian goddess with cow horns.
Horned God – A god with horns.
Jackalope – A jackrabbit with the horns of a whitetail deer.
Satyr – Originally an ancient Greek nature spirit with the body of a man, but the long tail and pointed ears of a horse. From the beginning, satyrs were inextricably associated with drunkenness and ribaldry, known for their love of wine, music, and women. By the Hellenistic Period, satyrs gradually began to be depicted as unattractive men with the horns and legs of goats, likely due to conflation with Pan. They were eventually conflated with the Roman fauns and, since roughly the second century AD, they have been indistinguishable from each other.
Silenos - A tutor to Dionysus who is virtually identical to satyrs and normally indistinguishable, although sometimes depicted as more elderly.
Seraph – An elite angel with multiple wings.
Winged cat – A cat with the wings of a bird.
Winged genie – A humanoid with bird wings.
Winged horse – A horse with the wings of a bird.
Pegasus - A particular winged horse from Greek mythology. Sometimes the lowercase spelling is used as a metonym for winged horses in general.
Tulpar - A winged horse from Turkic mythology, though not capable of flight.
Winged lion – A lion with the wings of a bird.
Body of one animal with legs and extra features of another
Adlet – A human with dog legs.
Bes – An Egyptian god with the hindquarters of a lion.
Lilitu – A woman with bird legs (and sometimes wings) found in Mesopotamian mythology.
Faun – An ancient Roman nature spirit with the body of a man, but the legs and horns of a goat. Originally they differed from the Greek satyrs because they were less frequently associated with drunkenness and ribaldry and were instead seen as "shy, woodland creatures". Starting in the first century BC, the Romans frequently conflated them with satyrs and, after the second century AD, the two are virtually indistinguishable.
Goat people are a class of mythological beings who physically resemble humans from the waist up, and had goat-like features usually including the hind legs of goats. They fall into various categories, such as sprites, gods, demons, and demigods.
Krampus – A Germanic mythical figure of obscure origin. It is often depicted with the legs and horns of a goat, the body of a man, and animalistic facial features.
Kusarikku – A demon with the head, arms, and torso of a human and the ears, horns, and hindquarters of a bull.
Lamia – Woman with duck feet.
Pan – The god of the wild and protector of shepherds, who has the body of a man, but the legs and horns of a goat. He is often heard playing a flute.
Sylvań – A satyr like creature with a deer’s hooves, a fox tail, and a white coat that is woven to make their clothing.
Other hybrids of two kinds
Alebrije – A brightly colored creature from Mexican mythology.
Anansi - A West African god, also known as Ananse, Kwaku Ananse, and Anancy. In the Americas he is known as Nancy, Aunt Nancy and Sis' Nancy. Anansi is considered to be the spirit of all knowledge of stories. He is also one of the most important characters of West African and Caribbean folklore. Anansi is depicted in many different ways: sometimes he looks like an ordinary spider, sometimes he is a spider wearing clothes or with a human face, and sometimes he looks much more like a human with spider elements, such as eight legs.
Avatea – A Mangaian god that has the right half of a man and the left half of a fish.
Cerberus – A Greek mythological dog that guarded the gates of the underworld, almost always portrayed with three heads and occasionally having a mane of serpents, as well as the front half of one for a tail.
Drakaina – A female species from Greek mythology that is draconic in nature, primarily depicted as a woman with dragon features.
Feathered serpent - A Mesoamerican spirit deity that possessed a snake-like body and feathered wings.
Garuda – A creature that has the head, wings, and legs of an eagle and body of a man.
Gorgon – Each of them has snakes in place of their hair; sometimes also depicted with a snake-like lower body.
Jorōgumo - Type of Japanese yōkai, depicted as a spider woman manipulating small fire-breathing spiders.
Mothman – A humanoid moth.
Selkie – A seal that becomes a human by shedding its skin on land.
Karasu-tengu – A crow-type Tengu.
Uchek Langmeidong - A half-woman and half-hornbill creature in Manipuri folklore, depicted as a girl who was turned into a bird to escape from her stepmother's torture in the absence of her father.
Werecat – A creature that is part cat, part human, or switches between the two.
Werehyena - A creature that is part hyena, part human, or switches between the two.
Werewolf – A creature that becomes a wolf/human-like beast during the nights of the full moon, but is human otherwise.
Wyvern – A creature with a dragon's head and wings, a reptilian body, two legs, and a tail often ending in a diamond- or arrow-shaped tip.
Hybrids of three kinds
Ammit – An Egyptian creature with the head of a crocodile, the front legs of a lion, and the back legs and hindquarters of a hippopotamus.
Baphomet – Traditionally depicted as an anthropomorphic creature with goat's head.
Buraq – A creature from Arabic iconography that has the head of a man and the body of a winged horse.
Capelobo - A creature from Brazilian folklore with the head of an anteater, the torso of a human, and the legs of a goat.
Chalkydri – Creatures with twelve angel wings, the body of a lion, and the head of a crocodile mentioned in 2 Enoch
Chi You – A creature from Chinese mythology with the head of a bull, the torso of a human, and the ears and hindquarters of a bear.
Cipactli – A creature from Aztec mythology that is part crocodilian, part fish, and part toad or frog.
Chimera – A Greek mythological creature with the head and front legs of a lion, the head and back legs of a goat, and the head of a snake for a tail. Said to be able to breathe fire from lion's mouth.
Cockatrice – A mix between a chicken, a bat, and a reptile.
Hatuibwari – A dragon-like creature with the head of a human with four eyes, the body of a serpent, and the wings of a bat.
Hundun - A Creature with the body of a pig, the legs of a lion or bear and four wings of a bird, with no head.
Kappa - A Japanese humanoid creature with the legs of a frog and the head and shell of a turtle.
Lamassu – A deity that is often depicted with a human head, a bull's body or lion's body, and an eagle's wings.
Longma – A winged horse with the scales of a dragon.
Manticore - A creature with the face of a human, the body of a lion, and the tail of a scorpion. Some versions also depict it with the wings of a dragon.
Opinicus - A griffin variant with the head and wings of an eagle, the body and legs of a lion, and the neck and tail of a dromedary.
Pamola - A creature from Abenaki mythology with a human body, the head of a moose, with the wings and feet of an eagle that protects Maine's tallest mountain.
Sharabha – A Hindu mythological creature having the head of a lion, the legs of deer, and the wings of bird.
Sphinx – A creature with the head of a human or a cat, the body of a lion, and occasional wings of an eagle.
Hybrids of four kinds
Abraxas – A god-like Gnostic creature with many different types of portrayals, many of which as different types of hybrids.
Enfield – A Heraldic creature with the head of a fox, the forelegs and sometimes wings of an eagle, the body of a lion, and the tail of a wolf.
Hatsadiling – A mythical creature with the head and body of a lion, trunk and tusks of an elephant, the comb of a rooster, and the wings of a bird.
Kamadhenu – A creature with the head of a human, the body of a cow, the wings of a pigeon, and the tail of a peacock.
Monoceros – A creature with the head of a deer, the body of a horse, the feet of an elephant, and the tail of a pig.
Nue – A Japanese Chimera with the head of a monkey, the legs of a tiger, the body of a Japanese raccoon dog, and the front half of a snake for a tail.
Qilin – A Chinese creature with the head and scales of a dragon, the antlers of a deer, the hooves of an ox, and the tail of a lion. The Japanese version is described as a deer-shaped dragon with the tail of an ox.
Questing Beast – A creature with the head and tail of a serpent, the feet of a deer, the body of a leopard, and the haunches of a lion.
Simurgh – A griffin-like creature of Persian mythology with the head of a dog, the body of a lion, the tail of a peacock, and the wings of a hawk.
Taweret – The hippopotamus-headed Egyptian Goddess.
Wolpertinger – A creature with the head of a rabbit, the body of a squirrel, the antlers of a deer, and the legs and wings of a pheasant.
Yali – A Hindu creature with the head of a lion, the tusks of an elephant, the body of a cat, and the tail of a serpent.
Ypotryll – A Heraldic creature with the tusked head of a boar, the humped body of a camel, the legs and hooves of an ox or goat, and the tail of a snake.
Hybrids of more than four kinds
Baku – A Japanese creature with the head of an elephant, the ears of a rhinoceros, the legs of a tiger, the body of a bear, and the tail of a cow.
Calygreyhound – A mythical creature described as having the head of a wildcat, the torso of a deer or antelope, the claws of an eagle as its forefeet, ox hooves, antlers or horns on its head, the hind legs of a lion or ox, and its tail like a lion or poodle.
Scylla – A monster from Greek mythology which has the body of a woman, six snake heads, twelve octopus tentacles, a cat's tail and four dog heads in her waist.
Fenghuang – A Chinese creature with the head of a golden pheasant, the body of a mandarin duck, the tail of a peacock, the legs of a crane, the mouth of a parrot and the wings of a swallow.
Kotobuki - A Japanese Chimera with the head of a rat, the ears of a rabbit, the horns of an ox, the comb of a rooster, the beard of a sheep, the neck of a Japanese dragon, the mane of a horse, the back of a wild boar, the shoulders and belly of a South China tiger, the arms of a monkey, the hindquarters of a dog, and the tail of a snake.
Meduza – A sea creature from Russian folklore with the head of a maiden and the body of a striped beast, having a dragon tail with a snake's mouth and elephant legs with the same snake mouths.
Navagunjara – A Hindu creature with the head of a rooster, the neck of a peacock, the back of a bull, a snake-headed tail, three legs of an elephant, tiger and deer or horse with the fourth limb being a human hand holding a lotus.
Nawarupa – A Burmese creature with the head, trunk, and tusks of an elephant, the eyes of a deer, the horns of a rhinoceros, the wings and tongue of a parrot, the body and legs of a lion and the tail of a peacock.
Pyinsarupa – A Burmese creature made of a bullock, carp, elephant, horse and the dragon.
Tarasque – A French dragon with the head of a lion, six short legs similar to that of bear legs, the body of an ox, the shell of a turtle, and a scorpion stinger-tipped tail.
Modern fiction
The following hybrid creatures appear in modern fiction:
Dungeons & Dragons
Dracimera – Half-chimera, half-dragon.
Dracotaur – Half-man, half-dragon. It debuted in Dungeons & Dragons. It also has a counterpart in the form of the Dragonspawn from the Warcraft franchise. Dragoon from the Monster Rancher franchise also fits this description due to it being a fusion of a Dragon and a Centaur.
Drider – Half-Drow half-spider, a "monster that looks like a centaur only with the bottom half of a spider instead of a horse."
Gnoll – Vicious hybrid with a human-like body and a hyena-like head. It debuted in Dungeons & Dragons and then spread to other franchises including Warcraft and Pathfinder. It is inspired by, but does not resemble, the gnoles conceived by Lord Dunsany. It is considered one of the "five main "humanoid" races" in AD&D by Paul Karczag and Lawrence Schick and a classic of D&D by reviewer Dan Wickline. Within D&D, the demon lord Yeenoghu is worshipped by gnolls.
Gorgimera – Half-gorgon, Half-chimera, whose goat's head is replaced by a gorgon's.
Gorilla bear – A creature with the head, body, and legs of a gorilla, and the teeth and arms of a bear. It debuted in the Dungeons & Dragons Fiend Folio as one of the, according to TheGamer, more "silly monster designs".
Mantimera – Half-manticore, Half-chimera, whose lion's head is replaced by a manticore's.
Owlbear – A creature that is half-bear half-owl. It debuted in Dungeons & Dragons.
Thessalmera – Half-thessalhydra, Half-chimera.
Wemic – Half-man, half-lion. It debuted in Dungeons & Dragons. It also has a counterpart in the form of the Liontaur from the Quest for Glory video games.
Wereape - Half-man, half-ape. They have been featured in Dungeons & Dragons, Forgotten Realms and The Wereworld Series. They come in different varieties.
Wolftaur – Half-man, half-wolf. It debuted in Dungeons & Dragons. Some depictions of this creature also have wolf heads like Celious from the Monster Rancher franchise (who is depicted as a fusion of a Tiger and a Centaur) and AdventureQuest 3D (as a Lychimera).
Jurassic Park
The Jurassic Park franchise had these hybrids in the films, toylines, and video games.
Amargosaurus - A hybrid of an Amargasaurus and a Spinosaurus. It appeared in the Jurassic Park: Chaos Effect toyline.
Ankylodocus - This dinosaur was made from the DNA of a Diplodocus and an Ankylosaurus. It debuted in Lego Jurassic World: The Indominus Escape and appeared in Jurassic World: The Game and Jurassic World Evolution.
Ankyloranodon - A hybrid of a Pteranodon and a Ankylosaurus. It appeared in the Jurassic Park: Chaos Effect toyline.
Carnoraptor - This dinosaur was made from the DNA of a Pyroraptor and a Carnotaurus. It debuted in Lego Jurassic World: The Indominus Escape (where it was mistakenly claimed that Velociraptor DNA was used to make it) and appeared in Jurassic World: The Game and the Jurassic World: Dino Hybrid toyline.
Compsteganathus - A hybrid of a Compsognathus, a Stegosaurus, and a tree frog. It debuted in the Jurassic Park: Chaos Effect toyline.
Indominus rex - This dinosaur was made from the DNA of a Tyrannosaurus, a Velociraptor, a cuttlefish, a tree frog, a Pit viper, a Viavenator, an Aucasaurus, a Carnotaurus, a Giganotosaurus, an Abelisaurus, a Quilmesaurus, a Pycnonemosaurus, an Afrovenator, a Therizinosaurus, a Deinosuchus, a Majungasaurus, and a Rugops. It debuted in Jurassic World.
Indoraptor - This dinosaur was made from the DNA of an Indominus rex and a Velociraptor. It debuted in Jurassic World: Fallen Kingdom.
Paradeinonychus - A hybrid of a Parasaurolophus and a Deinonychus. It debuted in the Jurassic Park: Chaos Effect toyline.
Scorpios Rex - This dinosaur was made from the DNA of a Carnotaurus, a Tyrannosaurus, a Velociraptor, a tree frog, and a scorpionfish. It debuted in Jurassic World: Camp Cretaceous.
Spinoceratops - This dinosaur was made from the DNA of a Sinoceratops and a Spinosaurus. It debuted in Jurassic World: Camp Cretaceous where it was created by Mantah Corp and also appeared in Jurassic World: Alive.
Spinoraptor - This dinosaur was made from the DNA of a Spinosaurus and a Utahraptor. It debuted in Lego Jurassic World: The Indominus Escape and appeared in Jurassic World: The Game and Jurassic World Evolution (the latter had the Utahraptor DNA substituted with Velociraptor DNA).
Stegoceratops - This dinosaur was made from the DNA of a Stegosaurus and a Triceratops. It was cut from Jurassic World, but it appeared in the toyline, Lego Jurassic World: The Indominus Escape, and the associated video games.
Tanaconda - A hybrid of a Tanystropheus and an anaconda. It debuted in the Jurassic Park: Chaos Effect toyline.
Tyrannonops - A hybrid of a Tyrannosaurus and a Lycaenops. It debuted in the Jurassic Park: Chaos Effect toyline.
Ultimasaurus – This dinosaur has the head and body of a Tyrannosaurus, the frill and horns of a Triceratops, the arms and legs of a Velociraptor, the back armor and the tail club of an Ankylosaurus, and the thagomizer and dermal plates of a Stegosaurus. This hybrid dinosaur is featured in the Jurassic Park: Chaos Effect toyline, but the action figure of the adult was not released.
Velocirapteryx - A hybrid of a Velociraptor and an Archaeopteryx. It debuted in the Jurassic Park: Chaos Effect toyline.
Other fiction
Beast (Beauty and the Beast) - The Beast, from the Disney movie Beauty and the Beast, has the head structure and horns of a buffalo, the arms and body of a bear, the eyebrows of a gorilla, the jaws, teeth, and mane of a lion, and the legs and tail of a wolf. He also bears resemblance to mythical monsters like the Minotaur or a werewolf.
Cecaelia – Half-human, half-octopus. The term was coined by fans in the late 2000s to describe characters such as Ursula from The Little Mermaid and may also apply to Harry Styles in the music video of "Music for a Sushi Restaurant". The term is likely derived from "a short pictorial story published in Vampirella magazine entitled 'Cilia'" (1972) by Nicola Cuti and Felix Mas featuring a mysterious "woman whose lower body morphs into tentacles".
Cervitaur – A deer-type centaur. This description was also used for the Golden Hind from Hercules: The Legendary Journeys.
Cheetaur – Half-man, half-cheetah. They are featured in the Quest for Glory video games.
Gryf - A dinosaur in Tarzan the Terrible that resembles an omnivorous 20 ft. Triceratops with unspecified claws, the teeth of a Tyrannosaurus, and the plates of a Stegosaurus on its back.
Gwazi – A creature with the head of a tiger and the body of a lion. This is the mascot of the roller coaster Iron Gwazi located at the Busch Gardens amusement park in Tampa, Florida.
Jackalote - A hybrid of a jackal and a coyote. They appear in The Christmas Chronicles 2 where Belsnickel created them through an unknown method so that they would pull his sleigh.
Jaguaro (Scooby-Doo) – a half-Panther/half-Gorilla creature.
Jaquin – A creature that resembles a jaguar with the wings and feathers of macaws. It is featured in Elena of Avalor.
Kalidahs – Half tiger, half bear creatures first appearing in the book The Wonderful Wizard of Oz by L. Frank Baum.
Kars - The leader of the Pillar Men and the main villain of Battle Tendency, the second part of JoJo's Bizarre Adventure. After putting on the Aja mask and transforming into the ultimate being, Kars gains bird-like wings with sharp feathers he uses as projectiles, tentacles like an octopus which he uses to fight, and a shell like an armadillo which he uses to shield himself from attacks.
Kimkoh (Contra) – A large arthropod-like alien creature that has two large frog-like legs. Its upper head possesses a snout similar to that of a tapir, with fangs; the upper head's fangs and nose are directly connected to the main head, giving the impression of biting it. The main head is human and surrounded by elephant tusks, and the creature features hermit crab-like legs sprouting out from underneath the human face and the shell of an armadillo.
ManBearPig – Half-man, half-bear, half-pig. Debuted in the titular episode of the animated television series South Park.
Miga - A fictional sea creature that is half-killer whale, half-Kermode bear who is one of the mascots of the 2010 Winter Olympics.
Posleen – A crocodile-headed reptilian centaur from Legacy of the Aldenata.
Sumi – An animal guardian spirit with the wings of a Thunderbird and the legs of an American black bear who is the mascot of the 2010 Winter Paralympics.
Toodee – A blue monster with the body and skin of a dinosaur, the scales and spikes of a dragon, and the face, ears and whiskers of a rabbit. She debuted in Yo Gabba Gabba!.
Unitaur – A unicorn-type centaur.
Ursagryph – A creature with the head, claws, and wings of an eagle, the body of a bear, and a short reptilian tail. The Predacon Darksteel from Transformers Prime Beast Hunters: Predacons Rising transforms into a mechanical Ursagryph.
Vampire-werewolf hybrid – These half-vampire half-werewolf hybrids had been shown in various media appearances like AdventureQuest (as a Werepyre), AdventureQuest Worlds (also as a Werepyre), Axe Cop (as a Wolvye), Supernatural, The Elder Scrolls, The Vampire Diaries, the Underworld franchise (as a Lycan-dominant vampire hybrids and a Lycan-Corvinus strain hybrid), and Werewolf: The Apocalypse.
Vinicius – Part-cat, part-monkey, part-bird from Rio 2016.
Weregorilla - A gorilla-type wereape. Two appeared in The Wereworld Series and a monster mask of a weregorilla was advertised in episode 1 of Creepshow.
Wereorangutan - An orangutan-type wereape. One appeared in The Wereworld Series.
Zoras - Half-man, half-fish. They appear in most games in The Legend of Zelda franchise.
Butatō - Suid humanoid barbarians with snake-like fangs. They have appeared in many anime (such as Inuyasha, That Time I Got Reincarnated as a Slime, Sword Art Online, Daily Life with a Monster Girl, Dragon Quest, etc.).
Dhinnabarrada - A human with emu legs.
Olano - A horse with the head of a dog.
Tizzie-Whizie - Fairy hedgehogs with a pair of antennae, the wings of a bee, and a fluffy tail like that of a fox or squirrel.
Chouyu - Rabbit/hare with the face of an owl and a reptilian tail.
Papillequine - A horse or pony with Lepidopteran wings.
Lagopus - A ptarmigan with the head and feet of a rabbit.
Bo - A horse-like equine with a single black unicorn-like horn, the mouth and paws of a tiger, and the ears of a leopard.
Bingfeng - A black pig with two heads and an elongated body.
Zamba Zaara - A hedgehog whose tail resembles that of an ankylosaur; it strikes the earth with it, causing the earth to shake violently.
Jiao - A dog-like canine with leopard spots, ox horns and a short tail; it also barks like a dog.
See also
Shapeshifting
Theriocephaly
References
Hybrids
Science fiction themes
Genetic engineering | List of hybrid creatures in folklore | [
"Chemistry",
"Engineering",
"Biology"
] | 7,622 | [
"Biological engineering",
"Genetic engineering",
"Molecular biology"
] |
19,093,657 | https://en.wikipedia.org/wiki/Omega1%20Aquilae |
Omega1 Aquilae, which is Latinized from ω1 Aquilae, is the Bayer designation for a single star in the equatorial constellation of Aquila. With an apparent visual magnitude of 5.2 it is a faint, yellow-white hued star that can be seen with the naked eye in dark skies. From the annual parallax shift of , the distance to this star can be estimated as , give or take a 6 light year margin of error. It is drifting closer to the Sun with a radial velocity of −14 km/s.
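The conversion from annual parallax to distance is d [pc] = 1/p [arcsec]. Because the measured parallax value is not reproduced above, the sketch below uses an assumed, purely illustrative parallax of 8 milliarcseconds rather than the star's published measurement.

```python
LY_PER_PARSEC = 3.26156

def parallax_to_light_years(parallax_mas: float) -> float:
    """d [pc] = 1000 / p [mas]; then convert parsecs to light years."""
    return (1000.0 / parallax_mas) * LY_PER_PARSEC

# Assumed parallax of 8 mas, for illustration only
print(round(parallax_to_light_years(8.0)))   # ~408 light years
```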
The spectrum of this star fits a stellar classification of F0 IV. Typically, a luminosity class of IV means that the star is in the subgiant stage. It is rotating rapidly, with a projected rotational velocity of 115 km/s. This gives it an equatorial radius that is 5% larger than its polar radius. The star has 2.85 times the mass of the Sun and five times the Sun's radius. It is radiating 85 times the luminosity of the Sun from its photosphere at an effective temperature of 7,766 K.
References
External links
Image Omega-1 Aquilae
HR 7315
F-type subgiants
Aquila (constellation)
Aquilae, 25
Aquilae, Omega1
BD+11 3790
180868
094834
7315 | Omega1 Aquilae | [
"Astronomy"
] | 294 | [
"Aquila (constellation)",
"Constellations"
] |
19,094,964 | https://en.wikipedia.org/wiki/Elimination%20rate%20constant | The elimination rate constant K or Ke is a value used in pharmacokinetics to describe the rate at which a drug is removed from the human system.
It is often abbreviated K or Ke. It is equivalent to the fraction of a substance that is removed per unit time measured at any particular instant and has units of T−1. This can be expressed mathematically with the differential equation
C(t + dt) = C(t) − C(t)·K·dt (equivalently, dC/dt = −K·C),
where C(t) is the blood plasma concentration of drug in the system at a given point in time t, dt is an infinitely small change in time, and C(t + dt) is the concentration of drug in the system after the infinitely small change in time.
The solution of this differential equation is useful in calculating the concentration after the administration of a single dose of drug via IV bolus injection (a short worked example follows the definitions below):
Ct = C0·e^(−K·t)
where:
Ct is concentration after time t
C0 is the initial concentration (t=0)
K is the elimination rate constant
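As a brief illustration (with arbitrary example values, not taken from the article), the single-dose equation can be evaluated directly:

```python
import math

def concentration(c0: float, k: float, t: float) -> float:
    """Plasma concentration t hours after an IV bolus: Ct = C0 * exp(-K*t)."""
    return c0 * math.exp(-k * t)

c0 = 10.0    # assumed initial concentration, mg/L
k = 0.173    # assumed elimination rate constant, 1/h (half-life of about 4 h)
for t in (0, 4, 8, 12):
    print(t, round(concentration(c0, k, t), 2))   # 10.0, 5.01, 2.51, 1.25
```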
Derivation
In first-order (linear) kinetics, the plasma concentration of a drug at a given time t after single dose administration via IV bolus injection is given by:
Ct = C0·(1/2)^(t/t1/2)
where:
C0 is the initial concentration (t=0)
t1/2 is the half-life time of the drug, which is the time needed for the plasma drug concentration to drop to its half
Therefore, the amount of drug present in the body at time t is:
At = Ct·Vd = C0·Vd·(1/2)^(t/t1/2)
where Vd is the apparent volume of distribution.
Then, the amount eliminated from the body after time t is:
Ael(t) = C0·Vd − Ct·Vd = C0·Vd·[1 − (1/2)^(t/t1/2)]
Then, the rate of elimination at time t is given by the derivative of this function with respect to t:
dAel/dt = C0·Vd·(ln 2/t1/2)·(1/2)^(t/t1/2)
And since K is the fraction of the drug that is removed per unit time measured at any particular instant, dividing the rate of elimination by the amount of drug in the body at time t gives:
K = [C0·Vd·(ln 2/t1/2)·(1/2)^(t/t1/2)] / [C0·Vd·(1/2)^(t/t1/2)] = ln 2/t1/2 ≈ 0.693/t1/2
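A quick numerical check of this result, again with arbitrary example values: dividing the (numerically differentiated) elimination rate by the amount remaining reproduces K = ln 2/t1/2.

```python
import math

t_half = 6.0            # assumed half-life, hours
c0, vd = 10.0, 42.0     # assumed initial concentration and volume of distribution

def amount(t):
    """Amount of drug in the body at time t (Ct * Vd)."""
    return c0 * vd * 0.5 ** (t / t_half)

t, dt = 3.0, 1e-6
rate = -(amount(t + dt) - amount(t)) / dt     # numerical rate of elimination
print(rate / amount(t))                        # ~0.1155 per hour
print(math.log(2) / t_half)                    # 0.1155..., i.e. K = ln 2 / t1/2
```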
References
Pharmacokinetic metrics | Elimination rate constant | [
"Chemistry"
] | 361 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
11,465,254 | https://en.wikipedia.org/wiki/Thiomer | Thiolated polymers designated thiomers are functional polymers used in biotechnology product development with the intention to prolong mucosal drug residence time and to enhance absorption of drugs. The name thiomer was coined by Andreas Bernkop-Schnürch in 2000. Thiomers have thiol bearing side chains. Sulfhydryl ligands of low molecular mass are covalently bound to a polymeric backbone consisting of mainly biodegradable polymers, such as chitosan, hyaluronic acid, cellulose derivatives, pullulan, starch, gelatin, polyacrylates, cyclodextrins, or silicones.
Thiomers exhibit properties potentially useful for non-invasive drug delivery via oral, ocular, nasal, vesical, buccal and vaginal routes. Thiomers also show potential in the field of tissue engineering and regenerative medicine. Various thiomers such as thiolated chitosan and thiolated hyaluronic acid are commercially available as scaffold materials. Thiomers can be directly compressed to tablets or given as solutions. In 2012, a second generation of thiomers – called "preactivated" or "S-protected" thiomers – was introduced.
In contrast to thiomers of the first generation, preactivated thiomers are stable towards oxidation and display comparatively higher mucoadhesive and permeation enhancing properties. Approved thiomer products for human use include, for example, eye drops for the treatment of dry eye syndrome and adhesive gels for the treatment of nickel allergy.
Properties and applications
Mucoadhesion
Thiomers are capable of forming disulfide bonds with cysteine substructures of the mucus gel layer covering mucosal membranes. Because of this property they exhibit up to 100-fold higher mucoadhesive properties in comparison to the corresponding unthiolated polymers. Because of their mucoadhesive properties, thiolated polymers are an effective tool in the treatment of diseases such as dry eye, dry mouth, and dry vagina syndrome where dry mucosal surfaces are involved.
In situ gelation
Various polymers such as poloxamers exhibit in situ gelling properties. Because of these properties they can be administered as liquid formulations that form stable gels once they have reached their site of application. An unintended rapid elimination or outflow of the formulation from mucosal membranes such as the ocular, nasal or vaginal mucosa can therefore be avoided. Thiolated polymers are capable of providing a comparatively more pronounced increase in viscosity after application, as an extensive crosslinking process by the formation of disulfide bonds between the polymer chains due to oxidation takes place. This effect was first described in 1999 by Bernkop-Schnürch et al. for polymeric excipients. In the case of thiolated chitosan, for instance, a more than 10,000-fold increase in viscosity within a few minutes was shown. These pronounced in situ gelling properties can also be exploited for numerous further purposes, such as parenteral formulations, coating materials or food additives.
Controlled drug release
Due to sustained drug release, a prolonged therapeutic level of drugs exhibiting a short elimination half-life can be maintained. Consequently, the frequency of dosing can be reduced, contributing to improved compliance. The release of drugs out of polymeric carrier systems can be controlled by a simple diffusion process. So far, however, the efficacy of such delivery systems has been limited by overly rapid disintegration and/or erosion of the polymeric network. By using thiolated polymers this essential shortcoming can be overcome. Because of the formation of inter- and intrachain disulfide bonds during the swelling process, the stability of the polymeric drug carrier matrix is strongly improved. Hence, a controlled drug release over numerous hours is ensured. There are numerous drug delivery systems making use of this technology.
Enzyme inhibition
Because they bind metal ions that various enzymes require to maintain their catalytic activity, thiomers are potent reversible enzyme inhibitors. Many non-invasively administered drugs such as therapeutic peptides or nucleic acids are degraded on the mucosa by membrane-bound enzymes, strongly reducing their bioavailability. In the case of oral administration, this 'enzymatic barrier' is even more pronounced, as additional degradation by luminally secreted enzymes takes place. Because of their capability to bind zinc ions via thiol groups, thiomers are potent inhibitors of most membrane-bound and secreted zinc-dependent enzymes. Due to this enzyme-inhibitory effect, thiolated polymers can significantly improve the bioavailability of non-invasively administered drugs.
Antimicrobial activity
In vitro, thiomers were shown to have antimicrobial activity towards Gram-positive bacteria. In particular, N-acyl thiolated chitosans show great potential as highly efficient, biocompatible and cost-effective antimicrobial compounds. Metabolism and mechanistic studies are under way to optimize these thiomers for clinical applications. Because of their antimicrobial activity, thiolated polymers are also used as coatings that avoid bacterial adhesion.
Permeation enhancement
Thiomers are able to reversibly open tight junctions. The responsible mechanism seems to be based on the inhibition of protein tyrosine phosphatase being involved in the closing process of tight junctions. Due to thiolation the permeation enhancing effect of polymers such as polyacrylic acid or chitosan can be up to 10-fold improved. In comparison to most low molecular weight permeation enhancers, thiolated polymers offer the advantage of not being absorbed from the mucosal membrane. Hence, their permeation enhancing effect can be maintained for a comparatively longer period of time and systemic toxic side effects of the auxiliary agent can be excluded.
Efflux pump inhibition
Thiomers are able to reversibly inhibit efflux pumps. Because of this property the mucosal uptake of various efflux pump substrates such as anticancer drugs, antimycotic drugs and antiinflammatory drugs can be tremendously improved. The postulated mechanism of efflux pump inhibition is based on an interaction of thiolated polymers with the channel forming transmembrane domain of various efflux pumps such as P-gp and multidrug resistance proteins (MRPs). P-gp, for instance, exhibits 12 transmembrane regions forming a channel through which substrates are transported outside of the cell. Two of these transmembrane domains – namely 2 and 11 – exhibit on position 137 and 956, respectively, a cysteine subunit. Thiomers seem to enter in the channel of P-gp and likely form subsequently one or two disulfide bonds with one or both cysteine subunits located within the channel. Due to this covalent interaction the allosteric change of the transporter being essential to move drugs outside of the cell might be blocked.
Complexation of metal ions
Thiomers have the ability to form complexes with different metal ions, especially divalent metal ions, due to their thiol groups. Thiolated chitosans, for instance, were shown to effectively absorb nickel ions.
Tissue engineering and regenerative medicine
As thiolated polymers are biocompatible, mimic cellular environments, and efficiently support proliferation and differentiation of various cell types, they are used as scaffolds for tissue engineering. Furthermore, thiolated polymers such as thiolated hyaluronic acid and thiolated chitosan have been shown to exhibit wound-healing properties.
References
Carbohydrate chemistry
Drug delivery devices
Gels
Materials science
Organosulfur compounds
Polymers
Polymer chemistry
Polysaccharides | Thiomer | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,616 | [
"Glycobiology",
"Pharmacology",
"Carbohydrates",
"Applied and interdisciplinary physics",
"Organosulfur compounds",
"Drug delivery devices",
"Materials science",
"Colloids",
"Organic compounds",
"Carbohydrate chemistry",
"Gels",
"Chemical synthesis",
"Polymer chemistry",
"Polymers",
"nan... |
11,465,932 | https://en.wikipedia.org/wiki/Lifting-line%20theory | The Lanchester-Prandtl lifting-line theory is a mathematical model in aerodynamics that predicts lift distribution over a three-dimensional wing from the wing's geometry. The theory was expressed independently by Frederick W. Lanchester in 1907, and by Ludwig Prandtl in 1918–1919 after working with Albert Betz and Max Munk. In this model, the vortex bound to the wing develops along the whole wingspan because it is shed as a vortex-sheet from the trailing edge, rather than just as a single vortex from the wing-tips.
Introduction
It is difficult to predict analytically the overall amount of lift that a wing of given geometry will generate. When analyzing a three-dimensional finite wing, a traditional approach slices the wing into cross-sections and analyzes each cross-section independently as a wing in a two-dimensional world. Each of these slices is called an airfoil, and it is easier to understand an airfoil than a complete three-dimensional wing.
One might expect that understanding the full wing simply involves adding up the independently calculated forces from each airfoil segment. However, this approximation is grossly incorrect: on a real wing, the lift from each infinitesimal wing section is strongly affected by the airflow over neighboring wing sections. Lifting-line theory corrects some of the errors in the naive two-dimensional approach by including some interactions between the wing slices.
Principle and derivation
Lifting line theory supposes wings that are long and thin with negligible fuselage, akin to a thin bar (the eponymous "lifting line") of span b driven through the fluid. From the Kutta–Joukowski theorem, the lift on a 2-dimensional segment of the wing at distance y from the fuselage is proportional to the circulation Γ(y) about the bar at y. When the aircraft is stationary on the ground, these circulations are all equal, but when the craft is in motion, they vary with y. By Helmholtz's theorems, the generation of spatially-varying circulation must correspond to shedding an equal-strength vortex filament downstream from the wing.
In the lifting line theory, the resulting vortex line is presumed to remain bound to the wing, so that it changes the effective vertical angle of the incoming freestream air.
The vertical motion induced by a vortex line of strength on air a distance away is , so that the entire vortex system induces a freestream vertical motion at position of where the integral is understood in the sense of a Cauchy principal value. This flow changes the effective angle of attack at ; if the circulation response of the airfoils comprising the wing are understood over a range of attack angles, then one can develop an integral equation to determine .
Formally, there is some angle of orientation such that the airfoil at position develops no lift. For airstreams of velocity oriented at an angle relative to the liftless angle, the airfoil will develop some circulation ; for small , Taylor expansion approximates that circulation as . If the airfoil is ideal and has chord , then theory predicts that but real airfoils may be less efficient.
Suppose the freestream flow attacks the airfoil at position at angle (relative to the liftless angle for the airfoil at position — thus a uniform flow across a wing may still have varying ). By the small-angle approximation, the effective angle of attack at of the combined freestream and vortex system is . Combining the above formulae, All the quantities in this equation except and are geometric properties of the wing, and so an engineer can (in principle) solve for given a fixed . As in the derivation of thin-airfoil theory, a common approach is to expand as a Fourier series along the wing, and then keep only the first few terms.
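For reference, a standard textbook statement of the relations sketched above is given below; the symbols (Γ for circulation, b for span, V∞ for freestream speed, c for chord, α_{L=0} for the zero-lift angle) are labels introduced here, sign conventions vary between texts, and the ideal section lift slope of 2π is assumed. The downwash induced at station y₀ by the shed vortex sheet, and the resulting fundamental lifting-line equation, are

$$ w(y_0) = -\frac{1}{4\pi}\int_{-b/2}^{b/2}\frac{\mathrm{d}\Gamma/\mathrm{d}y}{y_0-y}\,\mathrm{d}y, \qquad \alpha(y_0) = \frac{\Gamma(y_0)}{\pi V_\infty c(y_0)} + \alpha_{L=0}(y_0) + \frac{1}{4\pi V_\infty}\int_{-b/2}^{b/2}\frac{\mathrm{d}\Gamma/\mathrm{d}y}{y_0-y}\,\mathrm{d}y, $$

where the last term is the induced angle of attack and both integrals are understood as Cauchy principal values.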
Once the velocity , circulation , and fluid density are known, the lift generated by the wing is assumed to be the net lift produced by each airfoil with the prescribed circulation... ...and the drag is likewise the total across airfoils: From these quantities and the aspect ratio , the span efficiency factor may be computed.
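Written out in the same (assumed) notation, the totals referred to here are

$$ L = \rho V_\infty \int_{-b/2}^{b/2}\Gamma(y)\,\mathrm{d}y, \qquad D_i = \rho V_\infty \int_{-b/2}^{b/2}\Gamma(y)\,\alpha_i(y)\,\mathrm{d}y, \qquad C_{D,i} = \frac{C_L^2}{\pi e\, AR}, $$

where α_i is the induced angle of attack, AR = b²/S is the aspect ratio, and e is the span efficiency factor obtained by comparing the computed induced drag with the elliptical-loading minimum.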
Effects of control inputs
Control surface deflection changes the shape each airfoil slice, which can produce a different angle-of-no-lift for that airfoil, as well as a different angle-of-attack response. These do not require substantial modification to the theory, only changing and in . However, a body with rapidly moving wings, such as a rolling aircraft or flapping bird, experiences a vertical flow across the wing due to the wing's change in orientation, which appears as a missing term in the theory.
Rolling wings
When the aircraft is rolling at rate about the fuselage, an airfoil at (signed) position experiences a vertical airflow at rate , which correspondingly adds to the effective angle of attack. Thus becomes: which correspondingly modifies both the lift and the induced drag. This "drag force" comprises the main production of thrust for flapping wings.
Elliptical wings
The efficiency e is theoretically optimized in an elliptical wing with no twist, in which the circulation distribution is Γ(y) = Γ₀ sin θ, where θ is an alternate parameterization of station along the wing (y = (b/2) cos θ), equivalent to Γ(y) = Γ₀ √(1 − (2y/b)²). For such a wing the induced downwash is constant along the span, which yields the equation for the elliptic induced drag coefficient: C_{D,i} = C_L²/(π AR). According to lifting-line theory, any wing planform can achieve the same efficiency through twist (a position-varying increase in pitch) relative to the fuselage.
Useful approximations
A useful approximation for the 3D lift coefficient for elliptical circulation distribution is C_L ≈ 2πα/(1 + 2/AR). Note that this equation becomes the thin airfoil equation if AR goes to infinity.
Limitations
The lifting line theory does not take into account compression of the air by the wings, viscous flow within the fuselage's boundary layer, or wing shapes other than the long, straight and thin, such as swept or low–aspect-ratio wings. The theory also presupposes that flow around the wings is in equilibrium, and does not address bodies that are quickly accelerated relative to the freestream air.
See also
Horseshoe vortex
Kutta condition
Thin airfoil theory
Vortex lattice method
Euler equations (fluid dynamics)
Notes
References
L. J. Clancy (1975), Aerodynamics, Pitman Publishing Limited, London.
Abbott, Ira H., and Von Doenhoff, Albert E. (1959), Theory of Wing Sections, Dover Publications Inc., New York. Standard Book Number 486-60586-8
Aerodynamics | Lifting-line theory | [
"Chemistry",
"Engineering"
] | 1,289 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
11,467,262 | https://en.wikipedia.org/wiki/Isovaleryl-CoA | Isovaleryl-coenzyme A, also known as isovaleryl-CoA, is an intermediate in the metabolism of branched-chain amino acids.
Leucine metabolism
See also
Isovaleryl coenzyme A dehydrogenase
References
Thioesters of coenzyme A | Isovaleryl-CoA | [
"Chemistry",
"Biology"
] | 60 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
11,467,988 | https://en.wikipedia.org/wiki/Methylcrotonyl-CoA | 3-Methylcrotonyl-CoA (β-Methylcrotonyl-CoA or MC-CoA) is an intermediate in the metabolism of leucine.
It is found in mitochondria, where it is formed from isovaleryl-coenzyme A by isovaleryl coenzyme A dehydrogenase. It then reacts with CO2, in a reaction catalyzed by 3-methylcrotonyl-CoA carboxylase, to yield 3-methylglutaconyl-CoA.
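Written out (a standard textbook formulation rather than a quotation from this article), the biotin-dependent carboxylation step is:

3-methylcrotonyl-CoA + HCO3− + ATP → 3-methylglutaconyl-CoA + ADP + Pi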
Leucine metabolism
See also
Methylcrotonyl-CoA carboxylase
References
Thioesters of coenzyme A | Methylcrotonyl-CoA | [
"Chemistry",
"Biology"
] | 117 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
11,468,037 | https://en.wikipedia.org/wiki/3-Methylglutaconyl-CoA | 3-Methylglutaconyl-CoA (MG-CoA), also known as β-methylglutaconyl-CoA, is an intermediate in the metabolism of leucine. It is metabolized into HMG-CoA.
Leucine metabolism
See also
Methylcrotonyl-CoA carboxylase
Methylglutaconyl-CoA hydratase
References
Organophosphates
Thioesters of coenzyme A | 3-Methylglutaconyl-CoA | [
"Chemistry",
"Biology"
] | 92 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
11,470,048 | https://en.wikipedia.org/wiki/Glutaryl-CoA | Glutaryl-coenzyme A is an intermediate in the metabolism of lysine and tryptophan.
See also
Glutaryl-CoA dehydrogenase
References
Thioesters of coenzyme A | Glutaryl-CoA | [
"Chemistry",
"Biology"
] | 49 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
11,470,331 | https://en.wikipedia.org/wiki/%CE%92-Hydroxybutyryl-CoA | β-Hydroxybutyryl-CoA (or 3-hydroxybutyryl-coenzyme A) is an intermediate in the fermentation of butyric acid, and in the metabolism of lysine and tryptophan. The L-3-hydroxybutyryl-CoA (or (S)-3-hydroxybutanoyl-CoA) enantiomer is also the second to last intermediate in beta oxidation of even-numbered, straight chain, and saturated fatty acids.
See also
Crotonyl-coenzyme A
Acetoacetyl CoA
Beta-hydroxybutyryl-CoA dehydrogenase
References
Biomolecules
Metabolism
Thioesters of coenzyme A | Β-Hydroxybutyryl-CoA | [
"Chemistry",
"Biology"
] | 155 | [
"Natural products",
"Biotechnology stubs",
"Organic compounds",
"Biochemistry stubs",
"Cellular processes",
"Biomolecules",
"Structural biology",
"Biochemistry",
"Metabolism",
"Molecular biology"
] |
11,471,537 | https://en.wikipedia.org/wiki/Polymer%20blend | In materials science, a polymer blend, or polymer mixture, is a member of a class of materials analogous to metal alloys, in which at least two polymers are blended together to create a new material with different physical properties.
History
During the 1940s, '50s and '60s, the commercial development of new monomers for the production of new polymers seemed endless. In this period it was recognized that developing new techniques for the modification of already existing polymers would be economically attractive.
The first modification technique developed was copolymerization, that is, the joint polymerization of more than one kind of monomer.
A new polymer modification process, based on a simple mechanical mixture of two polymers, first appeared when Thomas Hancock created a mixture of natural rubber with gutta-percha. This process generated a new polymer class called "polymer blends."
Basic concepts
Polymer blends can be broadly divided into three categories:
immiscible polymer blends (heterogeneous polymer blends): This is by far the most populous group. If the blend is made of two polymers, two glass transition temperatures will be observed.
compatible polymer blends: Immiscible polymer blends that exhibit macroscopically uniform physical properties. The macroscopically uniform properties are usually caused by sufficiently strong interactions between the component polymers.
miscible polymer blends (homogeneous polymer blends): Polymer blend that is a single-phase structure. In this case, one glass transition temperature will be observed.
The use of the term polymer alloy for a polymer blend is discouraged, as the former term includes multiphase copolymers but excludes incompatible polymer blends.
Examples of miscible polymer blends:
homopolymer–homopolymer:
polyphenylene oxide (PPO) – polystyrene (PS): noryl developed by General Electric Plastics in 1966 (now owned by SABIC). The miscibility of the two polymers in noryl is caused by the presence of an aromatic ring in the repeat units of both chains.
polyethylene terephthalate (PET) – polybutylene terephthalate (PBT)
poly(methyl methacrylate) (PMMA) – polyvinylidene fluoride (PVDF)
homopolymer–copolymer:
polypropylene (PP) – EPDM
polycarbonate (PC) – acrylonitrile butadiene styrene (ABS): Bayblend, Pulse, Anjablend A''
Polymer blends can be used as thermoplastic elastomers.
See also
Flory–Huggins solution theory
Emulsion dispersion
References
External links
Miscible polymer blends: http://pslc.ws/macrog/blend.htm
Immiscible polymer blends: http://pslc.ws/macrog/iblend.htm
Polymers | Polymer blend | [
"Chemistry",
"Materials_science"
] | 614 | [
"Polymer stubs",
"Polymers",
"Organic chemistry stubs",
"Polymer chemistry"
] |
14,051,706 | https://en.wikipedia.org/wiki/Neuropeptide%20S%20receptor | The neuropeptide S receptor (NPSR) is a member of the G-protein coupled receptor superfamily of integral membrane proteins which binds neuropeptide S (NPS). It was formerly an orphan receptor, GPR154, until the discovery of neuropeptide S as the endogenous ligand. Increased expression of this gene in ciliated cells of the respiratory epithelium and in bronchial smooth muscle cells is associated with asthma. This gene is a member of the G protein-coupled receptor 1 family and encodes a plasma membrane protein. Mutations in this gene have also been associated with this disease.
Clinical significance
In the CNS, activation of the NPSR by NPS promotes arousal and anxiolytic-like effects.
In addition, mutations in NPSR have been linked to a susceptibility to asthma (rs3249801, A107I). Hence NPSR has also been called GPRA (G protein-coupled receptor for asthma susceptibility). Activation of NPSR in the airway epithelium has a number of effects including upregulation of matrix metalloproteinases which are involved in the pathogenesis of asthma. It has been shown that activation of NPSR by NPS affects both gastrointestinal motility and mucosal permeability simultaneously. Aberrant signaling and upregulation of NPSR1 could potentially exacerbate dysmotility and hyperpermeability by local mechanisms in gastrointestinal functional and inflammatory reactions.
The very rare NPSR mutation Y206H, which makes the receptor more sensitive to NPS, may cause familial natural short sleep. This finding has not been investigated in animal models, and is sufficiently rare that a biobank study was unable to find other carriers to attempt a replication of the association with sleep duration.
References
Further reading
External links
G protein-coupled receptors | Neuropeptide S receptor | [
"Chemistry"
] | 405 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,051,879 | https://en.wikipedia.org/wiki/Minicircle | Minicircles are small (~4kb) circular replicons. They occur naturally in some eukaryotic organelle genomes. In the mitochondria-derived kinetoplast of trypanosomes, minicircles encode guide RNAs for RNA editing. In Amphidinium, the chloroplast genome is made of minicircles that encode chloroplast proteins.
In vitro experimentally-derived minicircles
Minicircles are small (~4kb) circular plasmid derivatives that have been freed from all prokaryotic vector parts. They have been applied as transgene carriers for the genetic modification of mammalian cells, with the advantage that, since they contain no bacterial DNA sequences, they are less likely to be perceived as foreign and destroyed. (Typical transgene delivery methods involve plasmids, which contain foreign DNA.) The smaller size of minicircles also extends their cloning capacity and facilitates their delivery into cells.
Their preparation usually follows a two-step procedure:
production of a 'parental plasmid' (bacterial plasmid with eukaryotic inserts) in E. coli
induction of a site-specific recombinase at the end of this process but still in bacteria. These steps are followed by the
excision of prokaryotic vector parts via two recombinase-target sequences at both ends of the insert
recovery of the resulting minicircle (vehicle for the highly efficient modification of the recipient cell) and the miniplasmid by capillary gel electrophoresis (CGE)
The purified minicircle can be transferred into the recipient cell by transfection or lipofection and into a differentiated tissue by, for instance, jet injection.
Conventional minicircles lack an origin of replication, so they do not replicate within the target cells and the encoded genes will disappear as the cell divides (which can be either an advantage or disadvantage depending on whether the application demands persistent or transient expression). A novel addition to the field are nonviral self-replicating minicircles, which owe this property to the presence of a S/MAR-Element. Self-replicating minicircles hold great promise for the systematic modification of stem cells and will significantly extend the potential of their plasmidal precursor forms ("parental plasmids"), the more as the principal feasibility of such an approach has amply been demonstrated for their plasmidal precursor forms.
See also
References
Molecular genetics
Applied genetics | Minicircle | [
"Chemistry",
"Biology"
] | 536 | [
"Molecular genetics",
"Molecular biology"
] |
14,053,488 | https://en.wikipedia.org/wiki/Non-random%20two-liquid%20model | The non-random two-liquid model (abbreviated NRTL model) is an activity coefficient model introduced by Renon
and Prausnitz in 1968 that correlates the activity coefficients of a compound with its mole fractions in the liquid phase concerned. It is frequently applied in the field of chemical engineering to calculate phase equilibria. The concept of NRTL is based on the hypothesis of Wilson, who stated that the local concentration around a molecule in most mixtures is different from the bulk concentration. This difference is due to a difference between the interaction energy of the central molecule with the molecules of its own kind and that with the molecules of the other kind . The energy difference also introduces a non-randomness at the local molecular level. The NRTL model belongs to the so-called local-composition models. Other models of this type are the Wilson model, the UNIQUAC model, and the group contribution model UNIFAC. These local-composition models are not thermodynamically consistent for a one-fluid model for a real mixture due to the assumption that the local composition around molecule i is independent of the local composition around molecule j. This assumption is not true, as was shown by Flemr in 1976. However, they are consistent if a hypothetical two-liquid model is used. Models, which have consistency between bulk and the local molecular concentrations around different types of molecules are COSMO-RS, and COSMOSPACE.
Derivation
Like Wilson (1964), Renon & Prausnitz (1968) began with local composition theory, but instead of using the Flory–Huggins volumetric expression as Wilson did, they assumed that the local compositions follow Boltzmann-type weightings governed by a new "non-randomness" parameter α. The excess Gibbs free energy was then determined to be

$$\frac{g^E}{RT} = x_1 x_2\left(\frac{\tau_{21} G_{21}}{x_1 + x_2 G_{21}} + \frac{\tau_{12} G_{12}}{x_2 + x_1 G_{12}}\right).$$
Unlike Wilson's equation, this can predict partially miscible mixtures. However, the cross term, like Wohl's expansion, is more suitable for than , and experimental data is not always sufficiently plentiful to yield three meaningful values, so later attempts to extend Wilson's equation to partial miscibility (or to extend Guggenheim's quasichemical theory for nonrandom mixtures to Wilson's different-sized molecules) eventually yielded variants like UNIQUAC.
Equations for a binary mixture
For a binary mixture the following functions are used:

$$\ln\gamma_1 = x_2^2\left[\tau_{21}\left(\frac{G_{21}}{x_1 + x_2 G_{21}}\right)^2 + \frac{\tau_{12} G_{12}}{(x_2 + x_1 G_{12})^2}\right]$$

$$\ln\gamma_2 = x_1^2\left[\tau_{12}\left(\frac{G_{12}}{x_2 + x_1 G_{12}}\right)^2 + \frac{\tau_{21} G_{21}}{(x_1 + x_2 G_{21})^2}\right]$$

with

$$\ln G_{12} = -\alpha_{12}\,\tau_{12}, \qquad \ln G_{21} = -\alpha_{21}\,\tau_{21}$$

Here, $\tau_{12}$ and $\tau_{21}$ are the dimensionless interaction parameters, which are related to the interaction energy parameters $\Delta g_{12}$ and $\Delta g_{21}$ by:

$$\tau_{12} = \frac{\Delta g_{12}}{RT} = \frac{U_{12} - U_{22}}{RT}, \qquad \tau_{21} = \frac{\Delta g_{21}}{RT} = \frac{U_{21} - U_{11}}{RT}$$

Here R is the gas constant and T the absolute temperature, and $U_{ij}$ is the energy between molecular surface $i$ and $j$. $U_{ii}$ is the energy of evaporation. Here $U_{ij}$ has to be equal to $U_{ji}$, but $\Delta g_{12}$ is not necessarily equal to $\Delta g_{21}$.
The parameters $\alpha_{12}$ and $\alpha_{21}$ are the so-called non-randomness parameters, for which usually $\alpha_{12}$ is set equal to $\alpha_{21}$. For a liquid in which the local distribution is random around the center molecule, the parameter $\alpha_{12} = 0$. In that case, the equations reduce to the one-parameter Margules activity model:

$$\ln\gamma_1 = x_2^2\left(\tau_{21} + \tau_{12}\right) = A x_2^2, \qquad \ln\gamma_2 = x_1^2\left(\tau_{12} + \tau_{21}\right) = A x_1^2$$

In practice, $\alpha_{12}$ is set to 0.2, 0.3 or 0.48. The latter value is frequently used for aqueous systems. The high value reflects the ordered structure caused by hydrogen bonds. However, in the description of liquid-liquid equilibria, the non-randomness parameter is set to 0.2 to avoid wrong liquid-liquid description. In some cases a better phase equilibria description is obtained by setting $\alpha_{12} < 0$. However, this mathematical solution is impossible from a physical point of view, since no system can be more random than random ($\alpha_{12} = 0$). In general, NRTL offers more flexibility in the description of phase equilibria than other activity models due to the extra non-randomness parameters. However, in practice this flexibility is reduced in order to avoid wrong equilibrium description outside the range of regressed data.
The limiting activity coefficients, also known as the activity coefficients at infinite dilution, are calculated by:

$$\ln\gamma_1^\infty = \tau_{21} + \tau_{12}\exp(-\alpha_{12}\,\tau_{12}), \qquad \ln\gamma_2^\infty = \tau_{12} + \tau_{21}\exp(-\alpha_{12}\,\tau_{21})$$

The expressions show that at $\tau_{12} = \tau_{21}$ the limiting activity coefficients are equal. This situation occurs for molecules of equal size but of different polarities. It also shows, since three parameters are available, that multiple sets of solutions are possible.
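A minimal numerical sketch of the binary equations above, in Python; the parameter values in the example call are invented for illustration and are not fitted to any real system:

```python
import math

def nrtl_binary(x1, tau12, tau21, alpha12=0.3):
    """Return (gamma1, gamma2) for a binary mixture from the NRTL model.

    x1      -- mole fraction of component 1 (x2 = 1 - x1)
    tau12   -- dimensionless interaction parameter, dg12/(R*T)
    tau21   -- dimensionless interaction parameter, dg21/(R*T)
    alpha12 -- non-randomness parameter (0.2-0.48 is typical)
    """
    x2 = 1.0 - x1
    G12 = math.exp(-alpha12 * tau12)
    G21 = math.exp(-alpha12 * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                       + tau12 * G12 / (x2 + x1 * G12) ** 2)
    ln_g2 = x1 ** 2 * (tau12 * (G12 / (x2 + x1 * G12)) ** 2
                       + tau21 * G21 / (x1 + x2 * G21) ** 2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Illustrative call with made-up parameters:
print(nrtl_binary(0.25, tau12=1.2, tau21=0.8, alpha12=0.3))
```

At x1 → 0 the function reproduces the infinite-dilution expression ln γ1∞ = τ21 + τ12 exp(−α12 τ12) given above, which is a convenient sanity check.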
General equations
The general equation for $\ln\gamma_i$ for species $i$ in a mixture of $n$ components is:

$$\ln\gamma_i = \frac{\sum_{j=1}^{n} x_j\,\tau_{ji}\,G_{ji}}{\sum_{k=1}^{n} x_k\,G_{ki}} + \sum_{j=1}^{n}\frac{x_j\,G_{ij}}{\sum_{k=1}^{n} x_k\,G_{kj}}\left(\tau_{ij} - \frac{\sum_{m=1}^{n} x_m\,\tau_{mj}\,G_{mj}}{\sum_{k=1}^{n} x_k\,G_{kj}}\right)$$

with

$$G_{ij} = \exp(-\alpha_{ij}\,\tau_{ij}), \qquad \tau_{ii} = 0, \qquad G_{ii} = 1.$$

There are several different equation forms for $\tau_{ij}$ and $G_{ij}$, the most general of which are shown above.
Temperature dependent parameters
To describe phase equilibria over a large temperature regime, i.e. larger than 50 K, the interaction parameter has to be made temperature dependent.
Two formats are frequently used. The extended Antoine equation format:
Here the logarithmic and linear terms are mainly used in the description of liquid-liquid equilibria (miscibility gap).
The other format is a second-order polynomial format:
Parameter determination
The NRTL parameters are fitted to activity coefficients that have been derived from experimentally determined phase equilibrium data (vapor–liquid, liquid–liquid, solid–liquid) as well as from heats of mixing. The source of the experimental data are often factual data banks like the Dortmund Data Bank. Other options are direct experimental work and predicted activity coefficients with UNIFAC and similar models.
Noteworthy is that for the same mixture several NRTL parameter sets might exist. The NRTL parameter set to use depends on the kind of phase equilibrium (i.e. solid–liquid (SL), liquid–liquid (LL), vapor–liquid (VL)). In the case of the description of a vapor–liquid equilibria it is necessary to know which saturated vapor pressure of the pure components was used and whether the gas phase was treated as an ideal or a real gas. Accurate saturated vapor pressure values are important in the determination or the description of an azeotrope. The gas fugacity coefficients are mostly set to unity (ideal gas assumption), but for vapor-liquid equilibria at high pressures (i.e. > 10 bar) an equation of state is needed to calculate the gas fugacity coefficient for a real gas description.
Determination of NRTL parameters from regression of LLE and VLE experimental data is a challenging problem because it involves solving isoactivity or isofugacity equations, which are highly non-linear. In addition, parameters obtained from LLE or VLE may not always represent the expected experimental behaviour. For this reason it is necessary to confirm the thermodynamic consistency of the obtained parameters over the whole range of compositions (including binary subsystems, experimental and calculated tie-lines, calculated plait point location using the Hessian matrix, etc.) by using a phase stability test such as the Gibbs free energy minor-tangent criterion.
Parameters for NRTL model
NRTL binary interaction parameters have been published in the Dechema data series and are provided by NIST and DDBST. There also exist machine-learning approaches that are able to predict NRTL parameters by using the SMILES notation for molecules as input.
Literature
Physical chemistry
Thermodynamic models
Engineering thermodynamics | Non-random two-liquid model | [
"Physics",
"Chemistry",
"Engineering"
] | 1,429 | [
"Applied and interdisciplinary physics",
"Equations of physics",
"Thermodynamic models",
"Engineering thermodynamics",
"Statistical mechanics",
"Thermodynamics",
"nan",
"Mechanical engineering",
"Equations of state",
"Physical chemistry"
] |
14,054,023 | https://en.wikipedia.org/wiki/Honeycomb%20sea%20wall | A honeycomb sea wall (also known as a "Seabee") is a coastal defense structure that protects against strong waves and tides. It is constructed as a sloped wall of ceramic or concrete blocks with hexagonal holes on the slope, which makes it look like a honeycomb, hence the name of the unit. Its role is to capture sand and to discharge wave energy.
Ceramic honeycomb sea wall units usually have 6 or 7 holes and are safer to walk on. These are placed as a revetment over gravel or rock. During strong storms, surging sea water loses energy as it travels down the holes and through the underlayer. The water returns to the sea by upward flow through holes at levels below the transient phreatic surface in the underlayer, causing the downslope disturbing drag force to be reduced. Water that does not go through the holes is redirected by the concrete wall back into the path of oncoming waves, creating more turbulence. Cost comparisons between various seawalls are always site specific, but Seabees use approximately 22% the mass of rock for the same exposure. As the area of the unit is sensibly independent of height [aspect ratios in use vary from 0.4 to 2.5] the mass of the unit can be optimised for all stages of the production and construction process. Surface roughness may also be determined by using combinations of different height units. Allowance for wear is easily allowed [e.g. Shoreham, 1989-90 & various Lincolnshire Seawalls]. Reductions of almost 50% in runup have been achieved, both in the laboratory and at chosen sites.
See also
Beach
Coastal management, for creation and maintenance of beach
Coastal erosion
Longshore drift
Coastal geography
Strand plain
Sand dune stabilization
References
External links
Close up picture of a seabee wall
Coastal engineering
Seawalls | Honeycomb sea wall | [
"Engineering"
] | 378 | [
"Coastal engineering",
"Civil engineering"
] |
14,054,801 | https://en.wikipedia.org/wiki/Koch%20reaction | The Koch reaction is an organic reaction for the synthesis of tertiary carboxylic acids from alcohols or alkenes and carbon monoxide. Some commonly industrially produced Koch acids include pivalic acid, 2,2-dimethylbutyric acid and 2,2-dimethylpentanoic acid. The Koch reaction employs carbon monoxide as a reagent and can therefore be classified as a carbonylation. The carbonylated product is converted to a carboxylic acid, so in this respect the Koch reaction can also be classified as a carboxylation.
Substrate scope and applications
Pivalic acid, as well as several other branched carboxylic acids, is produced from isobutene using the Koch reaction. An estimated 150,000 tonnes of "Koch acids" and their derivatives are produced annually.
Koch–Haaf-type reactions have been used to carboxylate adamantanes.
Conditions
The reaction is a strongly acid-catalyzed carbonylation and typically proceeds under pressures of CO and at elevated temperatures. The commercially important synthesis of pivalic acid from isobutene operates near 50 °C and roughly 50 atm (about 5 MPa). Generally the reaction is conducted with strong mineral acids such as sulfuric acid, HF, or phosphoric acid in combination with BF3.
Formic acid, which readily decomposes to carbon monoxide in the presence of acids, can be used instead of carbon monoxide. This method is referred to as the Koch–Haaf reaction. This variation allows for reactions at nearly standard room temperature and pressure.
Mechanism
The mechanism has been intensively scrutinized. The mechanism involves generation of a tertiary carbenium ion, which binds carbon monoxide. The resulting acylium ion is then hydrolysed to the tertiary carboxylic acid:
The carbenium ion can be produced either by protonation of an alkene or protonation/elimination of a tertiary alcohol:
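For the industrially important isobutene-to-pivalic-acid case, these steps can be sketched as follows (an illustrative simplification; counterions and the acid catalyst cycle are omitted):

(CH3)2C=CH2 + H+ → (CH3)3C+
(CH3)3C+ + CO → (CH3)3C–C≡O+
(CH3)3C–C≡O+ + H2O → (CH3)3CCOOH + H+

Overall: (CH3)2C=CH2 + CO + H2O → (CH3)3CCOOH (pivalic acid).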
Catalyst usage and variations
Standard acid catalysts are sulfuric acid or a mixture of BF3 and HF.
Although the use of acidic ionic liquids for the Koch reaction requires relatively high temperatures and pressures (8 MPa and 430 K in one 2006 study), acidic ionic solutions themselves can be reused with only a very slight decrease in yield, and the reactions can be carried out biphasically to ensure easy separation of products.
A large number of transition metal catalyst carbonyl cations have also been investigated for usage in Koch-like reactions: Cu(I), Au(I) and Pd(I) carbonyl cations catalysts dissolved in sulfuric acid can allow the reaction to progress at room temperature and atmospheric pressure. Usage of a Nickel tetracarbonyl catalyst with CO and water as a nucleophile is known as the Reppe carbonylation, and there are many variations on this type of metal-mediated carbonylation used in industry, particularly those used by Monsanto and the Cativa processes, which convert methanol to acetic acid using acid catalysts and carbon monoxide in the presence of metal catalysts.
Because of the use of strong mineral acids, industrial implementation of the Koch reaction is complicated by equipment corrosion, separation procedures for products and difficulty in managing large amounts of waste acid. Several acid resins and acidic ionic liquids have been investigated in order to discover if Koch acids can be synthesized in more mild environments.
Side reactions
Koch reactions can involve a large number of side products, although high yields are generally possible (Koch and Haaf reported yields of over 80% for several alcohols in their 1958 paper). Carbocation rearrangements, etherification (in case an alcohol is used as a substrate instead of an alkene), and occasionally carboxylic acids one carbon longer than the substrate (Cn+1 acids) are observed, the latter due to fragmentation and dimerization of carbon monoxide-derived carbenium ions, especially since each step of the reaction is reversible. Alkyl sulfuric acids are also known to be possible side products, but are usually eliminated by the excess sulfuric acid used.
Further reading
See also
Hydroformylation - related reaction of alkenes and CO to form aldehydes
Gattermann–Koch reaction, arenes are converted to benzaldehyde derivatives in the presence of CO, AlCl3, and HCl.
References
Addition reactions
Name reactions | Koch reaction | [
"Chemistry"
] | 896 | [
"Name reactions"
] |
14,055,188 | https://en.wikipedia.org/wiki/Pellizzari%20reaction | The Pellizzari reaction was discovered in 1911 by Guido Pellizzari, and is the organic reaction of an amide and a hydrazide to form a 1,2,4-triazole.
The product is similar to that of the Einhorn-Brunner reaction, but the mechanism itself is not regioselective.
Mechanism
The mechanism begins by the nitrogen in the hydrazide attacking the carbonyl carbon on the amide to form compound 3. The negatively charged oxygen then abstracts two hydrogens from neighboring nitrogens in order for a molecule of water to be released to form compound 5. The nitrogen then performs an intramolecular attack on the carbonyl group to form the five-membered ring of compound 6. After another proton migration from the nitrogens to the oxygen, another water molecule is released to form the 1,2,4-triazole 8.
Uses
1,2,4-Triazoles synthesized in this way have a wide range of biological functions: they show antibacterial, antifungal, antidepressant and hypoglycemic properties. 3-Benzylsulfanyl derivatives of the triazole also show slight to moderate antimycobacterial activity, but are considered moderately toxic.
Problems and variations
The Pellizzari reaction is limited in the number of substituents that can be on the ring, so other methods have been developed to incorporate three elements of diversity. Liquid-phase synthesis of 3-alkylamino-4,5-disubstituted-1,2,4-triazoles by PEG support has given moderate yields with excellent purity. In practice, the Pellizzari reaction requires high temperatures, long reaction times, and has an overall low yield. However, adding microwave irradiation shortens the reaction time and increases its yield.
Related reactions
Einhorn-Brunner reaction
References
Condensation reactions
Heterocycle forming reactions
Name reactions | Pellizzari reaction | [
"Chemistry"
] | 416 | [
"Name reactions",
"Condensation reactions",
"Heterocycle forming reactions",
"Organic reactions"
] |
14,055,317 | https://en.wikipedia.org/wiki/Retinoid%20X%20receptor%20alpha | Retinoid X receptor alpha (RXR-alpha), also known as NR2B1 (nuclear receptor subfamily 2, group B, member 1) is a nuclear receptor that in humans is encoded by the RXRA gene.
Function
Retinoid X receptors (RXRs) and retinoic acid receptors (RARs), are nuclear receptors that mediate the biological effects of retinoids by their involvement in retinoic acid-mediated gene activation. These receptors exert their action by binding, as homodimers or heterodimers, to specific sequences in the promoters of target genes and regulating their transcription. The protein encoded by this gene is a member of the steroid and thyroid hormone receptor superfamily of transcription factors. In the absence of ligand, the RXR-RAR heterodimers associate with a multiprotein complex containing transcription corepressors that induce histone deacetylation, chromatin condensation and transcriptional suppression. On ligand binding, the corepressors dissociate from the receptors and associate with the coactivators leading to transcriptional activation. The RXRA/PPARA heterodimer is required for PPARA transcriptional activity on fatty acid oxidation genes such as ACOX1 and the cytochrome P450 system genes.
Interactive pathway map
Interactions
Retinoid X receptor alpha has been shown to interact with:
BCL3,
BRD8,
CLOCK,
FXR
IGFBP3,
ITGB3BP,
LXR-β,
MyoD,
NCOA6,
NFKBIB,
NPAS2,
NRIP1,
NR4A1,
NCOA2,
NCOA3,
POU2F1,
PPARGC1A,
PPAR-γ,
RNF8,
RAR-α,
SHP,
TADA3L,
TBP,
TRIM24,
TR-β, and
VDR.
See also
Retinoid X receptor
References
Further reading
External links
Intracellular receptors
Transcription factors | Retinoid X receptor alpha | [
"Chemistry",
"Biology"
] | 415 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,055,458 | https://en.wikipedia.org/wiki/Reactive%20nitrogen%20species | Reactive nitrogen species (RNS) are a family of antimicrobial molecules derived from nitric oxide (•NO) and superoxide (O2•−) produced via the enzymatic activity of inducible nitric oxide synthase 2 (NOS2) and NADPH oxidase respectively. NOS2 is expressed primarily in macrophages after induction by cytokines and microbial products, notably interferon-gamma (IFN-γ) and lipopolysaccharide (LPS).
Reactive nitrogen species act together with reactive oxygen species (ROS) to damage cells, causing nitrosative stress. Therefore, these two species are often collectively referred to as ROS/RNS.
Reactive nitrogen species are also continuously produced in plants as by-products of aerobic metabolism or in response to stress.
Types
RNS are produced in animals starting with the reaction of nitric oxide (•NO) with superoxide (O2•−) to form peroxynitrite (ONOO−):
•NO (nitric oxide) + O2•− (superoxide) → ONOO− (peroxynitrite)
Superoxide anion (O2−) is a reactive oxygen species that reacts quickly with nitric oxide (NO) in the vasculature. The reaction produces peroxynitrite and depletes the bioactivity of NO. This is important because NO is a key mediator in many important vascular functions including regulation of smooth muscle tone and blood pressure, platelet activation, and vascular cell signaling.
Peroxynitrite itself is a highly reactive species which can directly react with various biological targets and components of the cell including lipids, thiols, amino acid residues, DNA bases, and low-molecular weight antioxidants. However, these reactions happen at a relatively slow rate. This slow reaction rate allows it to react more selectively throughout the cell. Peroxynitrite is able to get across cell membranes to some extent through anion channels. Additionally peroxynitrite can react with other molecules to form additional types of RNS including nitrogen dioxide (•NO2) and dinitrogen trioxide (N2O3) as well as other types of chemically reactive free radicals. Important reactions involving RNS include:
ONOO− + H+ → ONOOH (peroxynitrous acid) → •NO2 (nitrogen dioxide) + •OH (hydroxyl radical)
ONOO− + CO2 (carbon dioxide) → ONOOCO2− (nitrosoperoxycarbonate)
ONOOCO2− → •NO2 (nitrogen dioxide) + O=C(O•)O− (carbonate radical)
•NO + •NO2 ⇌ N2O3 (dinitrogen trioxide)
Biological targets
Peroxynitrite can react directly with proteins that contain transition metal centers. Therefore, it can modify proteins such as hemoglobin, myoglobin, and cytochrome c by oxidizing ferrous heme into its corresponding ferric forms. Peroxynitrite may also be able to change protein structure through the reaction with various amino acids in the peptide chain. The most common reaction with amino acids is cysteine oxidation. Another reaction is tyrosine nitration; however peroxynitrite does not react directly with tyrosine. Tyrosine reacts with other RNS that are produced by peroxynitrite. All of these reactions affect protein structure and function and thus have the potential to cause changes in the catalytic activity of enzymes, altered cytoskeletal organization, and impaired cell signal transduction.
See also
Reactive oxygen species
Reactive sulfur species
Reactive carbonyl species
References
External links
Short article on RN chemistry
Article on global RN trends
Nitrogen compounds
Free radicals | Reactive nitrogen species | [
"Chemistry",
"Biology"
] | 785 | [
"Senescence",
"Free radicals",
"Biomolecules"
] |
14,056,613 | https://en.wikipedia.org/wiki/Nonsingular%20black%20hole%20models | A nonsingular black hole model is a mathematical theory of black holes that avoids certain theoretical problems with the standard black hole model, including information loss and the unobservable nature of the black hole event horizon.
Avoiding paradoxes in the standard black hole model
For a black hole to physically exist as a solution to Einstein's equation, it must form an event horizon in finite time relative to outside observers. This requires an accurate theory of black hole formation, of which several have been proposed. In 2007, Shuan Nan Zhang of Tsinghua University proposed a model in which the event horizon of a potential black hole only forms (or expands) after an object falls into the existing horizon, or after the horizon has exceeded the critical density. In other words, an infalling object causes the horizon of a black hole to expand, which only occurs after the object has fallen into the hole, allowing an observable horizon in finite time. This solution does not solve the information paradox, however.
Alternative black hole models
Nonsingular black hole models have been proposed since theoretical problems with black holes were first realized. Today some of the most viable candidates for the result of the collapse of a star with mass well above the Chandrasekhar limit include the gravastar and the dark energy star.
While black holes were a well-established part of mainstream physics for most of the end of the 20th century, alternative models received new attention when models proposed by George Chapline and later by Lawrence Krauss, Dejan Stojkovic, and Tanmay Vachaspati of Case Western Reserve University showed in several separate models that black hole horizons could not form.
Such research has attracted much media attention, as black holes have long captured the imagination of both scientists and the public for both their innate simplicity and mysteriousness. The recent theoretical results have therefore undergone much scrutiny and most of them are now ruled out by theoretical studies. For example, several alternative black hole models were shown to be unstable in extremely fast rotation, which, by conservation of angular momentum, would be a not unusual physical scenario for a collapsed star (see pulsar). Nevertheless, the existence of a stable model of a nonsingular black hole is still an open question.
Hayward metric
The Hayward metric is the simplest description of a black hole that is non-singular. The metric was written down by Sean Hayward as the minimal model that is regular, static, spherically symmetric and asymptotically flat.
Ayón-Beato–García metric
The Ayón-Beato–García model is the first exact charged regular black hole with a source. The model was proposed by Eloy Ayón Beato and Alberto García in 1998 based on the minimal coupling between a nonlinear electrodynamics model and general relativity, considering a static and spherically symmetric spacetime. Later the same authors reinterpreted the first non-singular black hole geometry, the Bardeen toy Model, as a nonlinear-electrodynamics-based regular black hole. Nowadays, it is known that the Ayón-Beato–García model may mimic the absorption properties of the Reissner–Nordström metric, from the perspective of the absorption of massless test scalar fields.
Nonsingular black holes as dark matter
In 2024, Paul C.W. Davies, Damien A. Easson, and Phillip B. Levin proposed that nonsingular black holes are a viable candidate for dark matter. They showed that the nonsingular Schwarzschild-de Sitter black hole slowly evaporates, reaching a maximum but finite temperature, then forms a black hole remnant that does not have a singularity and whose mass is on the order of the Planck mass. This nonsingular black hole can comprise all of the dark matter in the observable universe because the fraction of primordial black holes that is dark matter is inversely proportional to the smallest mass primordial black hole that could have survived since the primordial era. It was previously thought that Hawking evaporation set the lower bound of primordial black holes to be 1012 kg, but nonsingular black holes, which form remnants and do not evaporate completely, lower this bound to the Planck mass, which is 10−8 kg. Thus Planck mass nonsingular black holes formed primordially can comprise all of the dark matter in the observable universe today.
See also
Exotic star
Planck star
Fuzzball (string theory)
References
External links
Black holes don't exist, Case physicists report
Black holes
Theory of relativity | Nonsingular black hole models | [
"Physics",
"Astronomy"
] | 927 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Theory of relativity",
"Stellar phenomena",
"Astronomical objects"
] |
14,057,225 | https://en.wikipedia.org/wiki/Plasma%20pencil | The plasma pencil is a dielectric tube where two disk-shaped electrodes of about the same diameter as the tube are inserted, and are separated by a small gap. Each of the two electrodes is made of a thin copper ring attached to the surface of a centrally perforated dielectric disk. The plasma is ignited when nanoseconds-wide high voltage pulses at kHz repetition rate are applied between the two electrodes and a gas mixture (such as helium and oxygen) is flown through the holes of the electrodes. When a plasma is ignited in the gap between the electrodes, a plasma plume reaching lengths up to 12 cm is launched through the aperture of the outer electrode and into the surrounding room air. The cold plasma plume emitted by the plasma pencil can be used to kill bacteria without harming skin tissue.
Applications of the plasma pencil are in wound healing, killing of oral bacteria, and in controlled surface modification of heat-sensitive materials. The plasma pencil was invented by Mounir Laroussi, a plasma science professor at Old Dominion University, Norfolk, VA, USA.
Sources
External links
https://abcnews.go.com/health/ColdandFluNews/story?id=5987227&page=7
www.nature.com/news/2005/050919/full/news050919-13.html
Dielectrics | Plasma pencil | [
"Physics"
] | 289 | [
"Materials",
"Dielectrics",
"Matter"
] |
14,060,661 | https://en.wikipedia.org/wiki/Capillary%20surface | In fluid mechanics and mathematics, a capillary surface is a surface that represents the interface between two different fluids. As a consequence of being a surface, a capillary surface has no thickness in slight contrast with most real fluid interfaces.
Capillary surfaces are of interest in mathematics because the problems involved are very nonlinear and have interesting properties, such as discontinuous dependence on boundary data at isolated points. In particular, static capillary surfaces with gravity absent have constant mean curvature, so that a minimal surface is a special case of static capillary surface.
They are also of practical interest for fluid management in space (or other environments free of body forces), where both flow and static configuration are often dominated by capillary effects.
The stress balance equation
The defining equation for a capillary surface is called the stress balance equation, which can be derived by considering the forces and stresses acting on a small volume that is partly bounded by a capillary surface. For a fluid meeting another fluid (the "other" fluid notated with bars) at a surface , the equation reads
where is the unit normal pointing toward the "other" fluid (the one whose quantities are notated with bars), is the stress tensor (note that on the left is a tensor-vector product), is the surface tension associated with the interface, and is the surface gradient. Note that the quantity is twice the mean curvature of the surface.
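One common way of writing this balance, given here for orientation (sign conventions for the curvature term differ between texts, so this should be read as an illustrative standard form rather than a quotation of the equation intended above), is

$$ (\mathbf{T} - \bar{\mathbf{T}})\cdot\hat{n} = \sigma\,(\nabla_s\!\cdot\hat{n})\,\hat{n} - \nabla_s\sigma , $$

whose normal component expresses the Laplace pressure jump due to curvature and whose tangential component is the Marangoni stress driven by surface-tension gradients.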
In fluid mechanics, this equation serves as a boundary condition for interfacial flows, typically complementing the Navier–Stokes equations. It describes the discontinuity in stress that is balanced by forces at the surface. As a boundary condition, it is somewhat unusual in that it introduces a new variable: the surface that defines the interface. It's not too surprising then that the stress balance equation normally mandates its own boundary conditions.
For best use, this vector equation is normally turned into 3 scalar equations via dot product with the unit normal and two selected unit tangents:
Note that the products lacking dots are tensor products of tensors with vectors (resulting in vectors similar to a matrix-vector product), those with dots are dot products. The first equation is called the normal stress equation, or the normal stress boundary condition. The second two equations are called tangential stress equations.
The stress tensor
The stress tensor is related to velocity and pressure. Its actual form will depend on the specific fluid being dealt with; for the common case of incompressible Newtonian flow the stress tensor is given by

$$\mathbf{T} = -p\,\mathbf{I} + \mu\left(\nabla\mathbf{v} + (\nabla\mathbf{v})^{\mathsf T}\right)$$

where $p$ is the pressure in the fluid, $\mathbf{v}$ is the velocity, and $\mu$ is the viscosity.
Static interfaces
In the absence of motion, the stress tensors yield only hydrostatic pressure, so that $\mathbf{T} = -p\,\mathbf{I}$ (and likewise for the barred fluid), regardless of fluid type or compressibility. Considering the normal and tangential equations,
The first equation establishes that curvature forces are balanced by pressure forces. The second equation implies that a static interface cannot exist in the presence of nonzero surface tension gradient.
If gravity is the only body force present, the Navier–Stokes equations simplify significantly:
If coordinates are chosen so that gravity is nonzero only in the direction, this equation degrades to a particularly simple form:
where is an integration constant that represents some reference pressure at . Substituting this into the normal stress equation yields what is known as the Young-Laplace equation:
where is the (constant) pressure difference across the interface, and is the difference in density. Note that, since this equation defines a surface, is the coordinate of the capillary surface. This nonlinear partial differential equation when supplied with the right boundary conditions will define the static interface.
The pressure difference above is a constant, but its value will change if the coordinate is shifted. The linear solution to pressure implies that, unless the gravity term is absent, it is always possible to define the coordinate so that . Nondimensionalized, the Young-Laplace equation is usually studied in the form
where (if gravity is in the negative direction) is positive if the denser fluid is "inside" the interface, negative if it is "outside", and zero if there is no gravity or if there is no difference in density between the fluids.
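In the mathematical literature the nondimensionalized equation referred to here is usually written, for a capillary surface given as a height u(x, y) over a base domain, as

$$ \nabla\cdot\left(\frac{\nabla u}{\sqrt{1+|\nabla u|^2}}\right) = \kappa\,u + \lambda , $$

where the left-hand side is twice the mean curvature of the graph of u, κ plays the role of the signed gravity/density parameter described above, and λ is a constant related to the pressure jump; this is an assumed standard form, not a quotation of the original formula.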
This nonlinear equation has some rich properties, especially in terms of existence of unique solutions. For example, the nonexistence of solution to some boundary value problem implies that, physically, the problem can't be static. If a solution does exist, normally it'll exist for very specific values of , which is representative of the pressure jump across the interface. This is interesting because there isn't another physical equation to determine the pressure difference. In a capillary tube, for example, implementing the contact angle boundary condition will yield a unique solution for exactly one value of . Solutions often aren't unique, this implies that there are multiple static interfaces possible; while they may all solve the same boundary value problem, the minimization of energy will normally favor one. Different solutions are called configurations of the interface.
Energy consideration
A deep property of capillary surfaces is the surface energy that is imparted by surface tension:

$$ E = \sigma A $$

where $A$ is the area of the surface being considered, and the total energy is the summation of all energies. Note that every interface imparts energy. For example, if there are two different fluids (say liquid and gas) inside a solid container with gravity and other energy potentials absent, the energy of the system is

$$ E = \sigma_{lg} A_{lg} + \sigma_{sg} A_{sg} + \sigma_{sl} A_{sl} $$

where the subscripts $lg$, $sg$, and $sl$ respectively indicate the liquid–gas, solid–gas, and solid–liquid interfaces. Note that inclusion of gravity would require consideration of the volume enclosed by the capillary surface and the solid walls.
Typically the surface tension values between the solid–gas and solid–liquid interfaces are not known. This does not pose a problem; since only changes in energy are of primary interest. If the net solid area is a constant, and the contact angle is known, it may be shown that (again, for two different fluids in a solid container)
so that
where is the contact angle and the capital delta indicates the change from one configuration to another. To obtain this result, it's necessary to sum (distributed) forces at the contact line (where solid, gas, and liquid meet) in a direction tangent to the solid interface and perpendicular to the contact line:
where the sum is zero because of the static state. When solutions to the Young-Laplace equation aren't unique, the most physically favorable solution is the one of minimum energy, though experiments (especially low gravity) show that metastable surfaces can be surprisingly persistent, and that the most stable configuration can become metastable through mechanical jarring without too much difficulty. On the other hand, a metastable surface can sometimes spontaneously achieve lower energy without any input (seemingly at least) given enough time.
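The contact-line force balance invoked above is, in common notation (introduced here rather than taken from the text), Young's equation, and with the net solid area held constant it leads to the energy-change expression usually quoted:

$$ \sigma_{sg} - \sigma_{sl} - \sigma_{lg}\cos\theta = 0, \qquad \Delta E = \sigma_{lg}\left(\Delta A_{lg} - \cos\theta\,\Delta A_{sl}\right). $$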
Boundary conditions
Boundary conditions for stress balance describe the capillary surface at the contact line: the line where a solid meets the capillary interface; also, volume constraints can serve as boundary conditions (a suspended drop, for example, has no contact line but clearly must admit a unique solution).
For static surfaces, the most common contact line boundary condition is the implementation of the contact angle, which specifies the angle that one of the fluids meets the solid wall. The contact angle condition on the surface is normally written as:
where is the contact angle. This condition is imposed on the boundary (or boundaries) of the surface. is the unit outward normal to the solid surface, and is a unit normal to . Choice of depends on which fluid the contact angle is specified for.
For dynamic interfaces, the boundary condition showed above works well if the contact line velocity is low. If the velocity is high, the contact angle will change ("dynamic contact angle"), and as of 2007 the mechanics of the moving contact line (or even the validity of the contact angle as a parameter) is not known and an area of active research.
See also
Capillary pressure
Surface energy
Surface tension
Capillary bridges
References
Fluid mechanics
Fluid dynamics
Fluid statics | Capillary surface | [
"Chemistry",
"Engineering"
] | 1,641 | [
"Chemical engineering",
"Civil engineering",
"Piping",
"Fluid mechanics",
"Fluid dynamics"
] |
12,421,588 | https://en.wikipedia.org/wiki/Strain%E2%80%93encoded%20magnetic%20resonance%20imaging | Strain–encoded magnetic resonance imaging (SENC-MRI) is a magnetic resonance imaging technique for imaging the strain of deforming tissue. It is undergoing testing to diagnose some heart diseases, particularly congenital right ventricle dysfunctions, which are difficult to diagnose. It is an improvement on magnetic resonance elastography in that it has a faster imaging time, and less post-processing time, to turn the acquired data into a useful image.
To use the technique, the gradient coils in the MRI equipment need to be driven with special pulse sequences, designed for specific tissues, that "tag" deformation of the tissue, such that tissue that deforms more appears brighter, or darker, as needed. Using a baseline measurement of normal deformation, the measurements can reveal unusual amounts of pressure a tissue is exposed to, or indicate that the tissue is unusually stiff or flexible, in either case potentially revealing a pathology.
Inventors of the technique, Nael Osman and Jerry Prince, co-founded a company called DiagnoSoft to get regulatory approval for software enabling this technique and others from their academic lab, and make them available to doctors and patients.
See also
Harmonic phase (HARP) algorithm
References
Cardiac imaging
Magnetic resonance imaging
Medical imaging | Strain–encoded magnetic resonance imaging | [
"Chemistry"
] | 253 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
12,421,756 | https://en.wikipedia.org/wiki/Percentage%20depth%20dose%20curve | In radiotherapy, a percentage depth dose curve (PDD) (sometimes percent depth dose curve) relates the absorbed dose deposited by a radiation beam into a medium as it varies with depth along the axis of the beam. The dose values are divided by the maximum dose, referred to as dmax, yielding a plot in terms of percentage of the maximum dose. Dose measurements are generally made in water or "water equivalent" plastic with an ionization chamber, since water is very similar to human tissue with regard to radiation scattering and absorption.
Percent depth dose (PDD), which reflects the overall percentage of dose deposited as compared to the depth of maximum dose, depends on the depth of interest, beam energy, field size, and SSD (source to surface distance) as follows (a simple numerical sketch of the depth and SSD dependencies follows the list below). Of note, PDD generally refers to depths greater than the depth of maximum dose.
PDD decreases with increasing depth due to the inverse square law and due to attenuation of the radiation beam
PDD increases with increasing radiation field size due to greater primary and scattered photons from the irradiated medium
PDD increases with increasing SSD because inverse square variations over a fixed distance interval are smaller at large total distance than small total distance
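The depth and SSD dependencies can be illustrated with a toy model in which dose beyond dmax falls off by the inverse square law combined with exponential attenuation. This Python sketch is a pedagogical caricature, not a clinical dose model; the attenuation coefficient, dmax, and SSD values are assumed, illustrative numbers:

```python
import math

def pdd(depth_cm, d_max_cm=1.5, mu_per_cm=0.05, ssd_cm=100.0):
    """Toy percent depth dose beyond d_max: inverse-square falloff from
    the source combined with exponential attenuation in the medium.
    Normalized so that pdd(d_max_cm) == 100."""
    inv_sq = ((ssd_cm + d_max_cm) / (ssd_cm + depth_cm)) ** 2
    attenuation = math.exp(-mu_per_cm * (depth_cm - d_max_cm))
    return 100.0 * inv_sq * attenuation

for depth in (1.5, 5.0, 10.0, 20.0):
    # Larger SSD gives a higher PDD at depth, as stated above.
    print(f"{depth:5.1f} cm: {pdd(depth):5.1f}% (SSD 100), "
          f"{pdd(depth, ssd_cm=200.0):5.1f}% (SSD 200)")
```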
See also
Dosimetry
Dose profile
References
Radiation therapy
Medical physics | Percentage depth dose curve | [
"Physics"
] | 290 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
410,793 | https://en.wikipedia.org/wiki/Continuity%20equation | A continuity equation or transport equation is an equation that describes the transport of some quantity. It is particularly simple and powerful when applied to a conserved quantity, but it can be generalized to apply to any extensive quantity. Since mass, energy, momentum, electric charge and other natural quantities are conserved under their respective appropriate conditions, a variety of physical phenomena may be described using continuity equations.
Continuity equations are a stronger, local form of conservation laws. For example, a weak version of the law of conservation of energy states that energy can neither be created nor destroyed—i.e., the total amount of energy in the universe is fixed. This statement does not rule out the possibility that a quantity of energy could disappear from one point while simultaneously appearing at another point. A stronger statement is that energy is locally conserved: energy can neither be created nor destroyed, nor can it "teleport" from one place to another—it can only move by a continuous flow. A continuity equation is the mathematical way to express this kind of statement. For example, the continuity equation for electric charge states that the amount of electric charge in any volume of space can only change by the amount of electric current flowing into or out of that volume through its boundaries.
Continuity equations more generally can include "source" and "sink" terms, which allow them to describe quantities that are often but not always conserved, such as the density of a molecular species which can be created or destroyed by chemical reactions. In an everyday example, there is a continuity equation for the number of people alive; it has a "source term" to account for people being born, and a "sink term" to account for people dying.
Any continuity equation can be expressed in an "integral form" (in terms of a flux integral), which applies to any finite region, or in a "differential form" (in terms of the divergence operator) which applies at a point.
Continuity equations underlie more specific transport equations such as the convection–diffusion equation, Boltzmann transport equation, and Navier–Stokes equations.
Flows governed by continuity equations can be visualized using a Sankey diagram.
General equation
Definition of flux
A continuity equation is useful when a flux can be defined. To define flux, first there must be a quantity $q$ which can flow or move, such as mass, energy, electric charge, momentum, number of molecules, etc. Let $\rho$ be the volume density of this quantity, that is, the amount of $q$ per unit volume.
The way that this quantity is flowing is described by its flux. The flux of $q$ is a vector field, which we denote as $\mathbf{j}$. Here are some examples and properties of flux:
The dimension of flux is "amount of $q$ flowing per unit time, through a unit area". For example, in the mass continuity equation for flowing water, if 1 gram per second of water is flowing through a pipe with cross-sectional area 1 cm2, then the average mass flux inside the pipe is 1 g/(cm2·s), and its direction is along the pipe in the direction that the water is flowing. Outside the pipe, where there is no water, the flux is zero.
If there is a velocity field $\mathbf{u}$ which describes the relevant flow (in other words, if all of the quantity $q$ at a point is moving with velocity $\mathbf{u}$), then the flux is by definition equal to the density times the velocity field:

$$\mathbf{j} = \rho \mathbf{u}$$

For example, if in the mass continuity equation for flowing water, $\mathbf{u}$ is the water's velocity at each point, and $\rho$ is the water's density at each point, then $\rho\mathbf{u}$ would be the mass flux, also known as the material discharge.
In a well-known example, the flux of electric charge is the electric current density.
If there is an imaginary surface $S$, then the surface integral of flux over $S$ is equal to the amount of $q$ that is passing through the surface per unit time:

$$\frac{dq}{dt} = \iint_S \mathbf{j} \cdot d\mathbf{S}$$

in which $\iint_S d\mathbf{S}$ denotes a surface integral.
(Note that the concept that is here called "flux" is alternatively termed flux density in some literature, in which context "flux" denotes the surface integral of flux density. See the main article on Flux for details.)
Integral form
The integral form of the continuity equation states that:
The amount of $q$ in a region increases when additional $q$ flows inward through the surface of the region, and decreases when it flows outward;
The amount of $q$ in a region increases when new $q$ is created inside the region, and decreases when $q$ is destroyed;
Apart from these two processes, there is no other way for the amount of $q$ in a region to change.
Mathematically, the integral form of the continuity equation expressing the rate of increase of $q$ within a volume $V$ is:

$$\frac{dq}{dt} + \oiint_S \mathbf{j} \cdot d\mathbf{S} = \Sigma$$

where
$S$ is any imaginary closed surface, that encloses a volume $V$,
$\oiint_S d\mathbf{S}$ denotes a surface integral over that closed surface,
$q$ is the total amount of the quantity in the volume $V$,
$\mathbf{j}$ is the flux of $q$,
$t$ is time,
$\Sigma$ is the net rate that $q$ is being generated inside the volume $V$ per unit time. When $q$ is being generated, it is called a source of $q$, and it makes $\Sigma$ more positive. When $q$ is being destroyed, it is called a sink of $q$, and it makes $\Sigma$ more negative. This term represents the total change of $q$ due to its generation or destruction inside the control volume.
In a simple example, $V$ could be a building, and $q$ could be the number of people in the building. The surface $S$ would consist of the walls, doors, roof, and foundation of the building. Then the continuity equation states that the number of people in the building increases when people enter the building (an inward flux through the surface), decreases when people exit the building (an outward flux through the surface), increases when someone in the building gives birth (a source, $\Sigma > 0$), and decreases when someone in the building dies (a sink, $\Sigma < 0$).
Differential form
By the divergence theorem, a general continuity equation can also be written in a "differential form":

$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = \sigma$$

where
$\nabla\cdot$ is divergence,
$\rho$ is the density of the amount $q$ (i.e. the quantity per unit volume),
$\mathbf{j}$ is the flux density of $q$ (i.e. $\mathbf{j} = \rho\mathbf{u}$, where $\mathbf{u}$ is the vector field describing the movement of the quantity $q$),
$t$ is time,
$\sigma$ is the generation of $q$ per unit volume per unit time. Terms that generate $q$ (i.e., $\sigma > 0$) or remove $q$ (i.e., $\sigma < 0$) are referred to as sources and sinks respectively.
This general equation may be used to derive any continuity equation, ranging from as simple as the volume continuity equation to as complicated as the Navier–Stokes equations. This equation also generalizes the advection equation. Other equations in physics, such as Gauss's law of the electric field and Gauss's law for gravity, have a similar mathematical form to the continuity equation, but are not usually referred to by the term "continuity equation", because $\mathbf{j}$ in those cases does not represent the flow of a real physical quantity.
In the case that $q$ is a conserved quantity that cannot be created or destroyed (such as energy), $\sigma = 0$ and the equations become:

$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0$$
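The conservation property of the source-free equation is easy to demonstrate numerically. The following Python sketch (an illustration, not from the original article) advects a 1D density with a first-order upwind finite-volume scheme; because each cell's loss is its neighbour's gain, the total amount is conserved to machine precision:

```python
import numpy as np

# 1D advection of a density rho with constant velocity v, flux j = rho*v.
# Periodic boundaries: the domain has no sources, sinks, or open ends.
n = 200
dx, v, dt = 1.0 / n, 0.7, 0.002            # CFL number v*dt/dx = 0.28 < 1
x = (np.arange(n) + 0.5) * dx
rho = np.exp(-200.0 * (x - 0.3) ** 2)       # initial blob of "stuff"

total_before = rho.sum() * dx
for _ in range(500):
    j = v * rho                             # flux in each cell
    rho -= (dt / dx) * (j - np.roll(j, 1))  # d(rho)/dt = -dj/dx, upwind
total_after = rho.sum() * dx

print(abs(total_after - total_before))      # ~1e-16: q is conserved
```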
Electromagnetism
In electromagnetic theory, the continuity equation is an empirical law expressing (local) charge conservation. Mathematically it is an automatic consequence of Maxwell's equations, although charge conservation is more fundamental than Maxwell's equations. It states that the divergence of the current density $\mathbf{J}$ (in amperes per square meter) is equal to the negative rate of change of the charge density $\rho$ (in coulombs per cubic meter):

$$\nabla \cdot \mathbf{J} = -\frac{\partial \rho}{\partial t}$$
Current is the movement of charge. The continuity equation says that if charge is moving out of a differential volume (i.e., divergence of current density is positive) then the amount of charge within that volume is going to decrease, so the rate of change of charge density is negative. Therefore, the continuity equation amounts to a conservation of charge.
If magnetic monopoles exist, there would be a continuity equation for monopole currents as well, see the monopole article for background and the duality between electric and magnetic currents.
Fluid dynamics
In fluid dynamics, the continuity equation states that the rate at which mass enters a system is equal to the rate at which mass leaves the system plus the accumulation of mass within the system.
The differential form of the continuity equation is:

$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0$$

where
$\rho$ is fluid density,
$t$ is time,
$\mathbf{u}$ is the flow velocity vector field.
The time derivative can be understood as the accumulation (or loss) of mass in the system, while the divergence term represents the difference in flow in versus flow out. In this context, this equation is also one of the Euler equations (fluid dynamics). The Navier–Stokes equations form a vector continuity equation describing the conservation of linear momentum.
If the fluid is incompressible (volumetric strain rate is zero), the mass continuity equation simplifies to a volume continuity equation:

$$\nabla \cdot \mathbf{u} = 0$$

which means that the divergence of the velocity field is zero everywhere. Physically, this is equivalent to saying that the local volume dilation rate is zero; hence a flow of water through a converging pipe will adjust solely by increasing its velocity, since water is largely incompressible. A symbolic check of this condition is sketched below.
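Whether a given velocity field satisfies the volume continuity equation can be checked symbolically. A minimal sketch with SymPy, where the example fields are assumptions chosen for illustration:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def divergence(u):
    return sp.diff(u[0], x) + sp.diff(u[1], y) + sp.diff(u[2], z)

# Rigid rotation about the z-axis: a classic incompressible flow.
rotation = (-y, x, 0)
print(divergence(rotation))   # 0  -> satisfies div(u) = 0

# Uniform expansion: compressible, so the volume equation fails.
expansion = (x, y, z)
print(divergence(expansion))  # 3  -> not divergence-free
```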
Computer vision
In computer vision, optical flow is the pattern of apparent motion of objects in a visual scene. Under the assumption that brightness of the moving object did not change between two image frames, one can derive the optical flow equation (a least-squares sketch of its use follows below) as:

$$\frac{\partial I}{\partial t} + \nabla I \cdot \mathbf{v} = 0$$

where
$t$ is time,
$x, y$ are coordinates in the image,
$I(x, y, t)$ is the image intensity at image coordinate $(x, y)$ and time $t$,
$\mathbf{v}(x, y, t)$ is the optical flow velocity vector at image coordinate $(x, y)$ and time $t$.
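A single optical flow constraint at each pixel underdetermines the two-component velocity, so practical methods pool constraints over a patch. The following Python sketch solves the pooled constraints in the least-squares sense, in the spirit of the Lucas–Kanade method; the function name and the synthetic test images are assumptions for illustration:

```python
import numpy as np

def flow_for_patch(frame0, frame1):
    """Estimate one (vx, vy) for a patch from Ix*vx + Iy*vy + It = 0,
    stacked over all pixels and solved by least squares."""
    Iy, Ix = np.gradient(frame0.astype(float))   # spatial derivatives
    It = frame1.astype(float) - frame0.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    solution, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return solution                              # (vx, vy)

# Synthetic test: a textured patch shifted one pixel in the x direction.
xx, yy = np.meshgrid(np.arange(32), np.arange(32))
frame0 = np.sin(0.3 * xx) + np.sin(0.3 * yy)
frame1 = np.sin(0.3 * (xx - 1)) + np.sin(0.3 * yy)
print(flow_for_patch(frame0, frame1))            # close to (1.0, 0.0)
```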
Energy and heat
Conservation of energy says that energy cannot be created or destroyed. (See below for the nuances associated with general relativity.) Therefore, there is a continuity equation for energy flow:

$$\frac{\partial u}{\partial t} + \nabla \cdot \mathbf{q} = 0$$

where
$u$ is the local energy density (energy per unit volume),
$\mathbf{q}$ is the energy flux (transfer of energy per unit cross-sectional area per unit time) as a vector.
An important practical example is the flow of heat. When heat flows inside a solid, the continuity equation can be combined with Fourier's law (heat flux is proportional to temperature gradient) to arrive at the heat equation. The equation of heat flow may also have source terms: Although energy cannot be created or destroyed, heat can be created from other types of energy, for example via friction or joule heating.
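Combining the energy continuity equation with Fourier's law as described gives the heat equation, in one dimension $\partial T/\partial t = \alpha\,\partial^2 T/\partial x^2$. A minimal finite-difference sketch in Python (all parameter values are illustrative assumptions) shows that with insulated ends the total heat is conserved while the profile spreads:

```python
import numpy as np

n, dx, alpha = 100, 0.01, 1e-4
dt = 0.4 * dx**2 / alpha              # explicit stability: dt <= dx^2/(2*alpha)
u = np.zeros(n)
u[45:55] = 100.0                      # initial hot region

for _ in range(2000):
    flux = -alpha * np.diff(u) / dx              # Fourier's law at interior faces
    flux = np.concatenate(([0.0], flux, [0.0]))  # insulated (zero-flux) ends
    u -= (dt / dx) * np.diff(flux)               # each cell changes by net flux

print(u.sum() * dx)   # still 10.0: heat moved around, none created or lost
```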
Probability distributions
If there is a quantity that moves continuously according to a stochastic (random) process, like the location of a single dissolved molecule with Brownian motion, then there is a continuity equation for its probability distribution. The flux in this case is the probability per unit area per unit time that the particle passes through a surface. According to the continuity equation, the negative divergence of this flux equals the rate of change of the probability density. The continuity equation reflects the fact that the molecule is always somewhere—the integral of its probability distribution is always equal to 1—and that it moves by a continuous motion (no teleporting).
Quantum mechanics
Quantum mechanics is another domain where there is a continuity equation related to conservation of probability. The terms in the equation require the following definitions, and are slightly less obvious than the other examples above, so they are outlined here:
The wavefunction for a single particle in position space (rather than momentum space) is a function of position and time, $\Psi = \Psi(\mathbf{r}, t)$.
The probability density function is

$$\rho(\mathbf{r}, t) = \Psi^*(\mathbf{r}, t)\,\Psi(\mathbf{r}, t) = |\Psi(\mathbf{r}, t)|^2.$$

The probability of finding the particle within a volume $V$ at time $t$ is denoted $P$ and defined by

$$P = \int_V |\Psi|^2 \, dV.$$

The probability current (probability flux) is

$$\mathbf{j}(\mathbf{r}, t) = \frac{\hbar}{2mi}\left[\Psi^*(\nabla\Psi) - \Psi(\nabla\Psi^*)\right].$$

With these definitions the continuity equation reads:

$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0.$$
Either form may be quoted. Intuitively, the above quantities indicate this represents the flow of probability. The chance of finding the particle at some position and time flows like a fluid; hence the term probability current, a vector field. The particle itself does not flow deterministically in this vector field.
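The probability continuity equation can be verified numerically for a free wave packet, since free evolution is exact in momentum space. A short NumPy sketch follows; the grid sizes and the initial packet are illustrative assumptions, in units with ħ = m = 1:

```python
import numpy as np

hbar = m = 1.0
n, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

psi = np.exp(-x**2 + 2.0j * x)               # Gaussian packet, momentum ~ 2
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)  # normalize: total P = 1

def evolve(psi, dt):
    """Exact free-particle step: phase exp(-i*hbar*k^2*dt/2m) in k-space."""
    return np.fft.ifft(np.exp(-1j * hbar * k**2 * dt / (2 * m)) * np.fft.fft(psi))

dt = 1e-3
rho0, rho1 = np.abs(psi)**2, np.abs(evolve(psi, dt))**2
mid = evolve(psi, dt / 2)                    # wavefunction at the midpoint
j = (hbar / m) * np.imag(np.conj(mid) * np.gradient(mid, dx))

residual = (rho1 - rho0) / dt + np.gradient(j, dx)
print(np.abs(residual).max())  # small relative to the individual terms:
                               # d(rho)/dt + dj/dx ~ 0 up to grid error
```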
Semiconductor
The total current flow in the semiconductor consists of drift current and diffusion current of both the electrons in the conduction band and holes in the valence band.
General form for electrons in one dimension:

$$\frac{\partial n}{\partial t} = n \mu_n \frac{\partial E}{\partial x} + \mu_n E \frac{\partial n}{\partial x} + D_n \frac{\partial^2 n}{\partial x^2} + (G_n - R_n)$$

where:
$n$ is the local concentration of electrons
$\mu_n$ is electron mobility
$E$ is the electric field across the depletion region
$D_n$ is the diffusion coefficient for electrons
$G_n$ is the rate of generation of electrons
$R_n$ is the rate of recombination of electrons
Similarly, for holes:

$$\frac{\partial p}{\partial t} = -p \mu_p \frac{\partial E}{\partial x} - \mu_p E \frac{\partial p}{\partial x} + D_p \frac{\partial^2 p}{\partial x^2} + (G_p - R_p)$$

where:
$p$ is the local concentration of holes
$\mu_p$ is hole mobility
$E$ is the electric field across the depletion region
$D_p$ is the diffusion coefficient for holes
$G_p$ is the rate of generation of holes
$R_p$ is the rate of recombination of holes
Derivation
This section presents a derivation of the equation above for electrons. A similar derivation can be found for the equation for holes.
Consider the fact that the number of electrons is conserved across a volume of semiconductor material with cross-sectional area, A, and length, dx, along the x-axis. More precisely, one can say: the rate of change of the electron concentration inside the volume equals the net electron flux into the volume plus the net generation inside it.
Mathematically, this equality can be written:

$$\frac{\partial n}{\partial t} A \, dx = \left[\frac{J(x + dx) - J(x)}{q}\right] A + (G_n - R_n)\, A \, dx$$

Here $J$ denotes current density (whose direction is against electron flow by convention) due to electron flow within the considered volume of the semiconductor. It is also called electron current density.
Total electron current density is the sum of drift current and diffusion current densities:

$$J_n = q n \mu_n E + q D_n \frac{\partial n}{\partial x}$$

Therefore, we have

$$\frac{\partial n}{\partial t} = \frac{1}{q} \frac{\partial J_n}{\partial x} + (G_n - R_n)$$

Applying the product rule results in the final expression:

$$\frac{\partial n}{\partial t} = \mu_n E \frac{\partial n}{\partial x} + n \mu_n \frac{\partial E}{\partial x} + D_n \frac{\partial^2 n}{\partial x^2} + (G_n - R_n)$$
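For a numerical treatment, the right-hand side of the final expression can be evaluated directly on a grid. A minimal Python sketch, where the function, grid, and roughly silicon-like parameter values are illustrative assumptions, not from the original:

```python
import numpy as np

def dn_dt(n, E, mu_n, D_n, G_n, R_n, dx):
    """Right-hand side of the electron continuity equation:
    mu_n * d(n*E)/dx + D_n * d2n/dx2 + (G_n - R_n),
    using the identity mu_n*d(n*E)/dx = mu_n*E*dn/dx + n*mu_n*dE/dx."""
    drift = mu_n * np.gradient(n * E, dx)
    diffusion = D_n * np.gradient(np.gradient(n, dx), dx)
    return drift + diffusion + (G_n - R_n)

# Illustrative numbers in cm-based units: 1 micron device, electron
# pulse on a background, uniform field, silicon-like mu_n and D_n.
x = np.linspace(0.0, 1e-4, 200)
n = 1e10 + 1e14 * np.exp(-((x - 5e-5) / 1e-5) ** 2)
rate = dn_dt(n, E=np.full_like(x, -1e3),
             mu_n=1350.0, D_n=35.0, G_n=0.0, R_n=0.0, dx=x[1] - x[0])
print(rate.max())
```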
Solution
The key to solving these equations in real devices is whenever possible to select regions in which most of the mechanisms are negligible so that the equations reduce to a much simpler form.
Relativistic version
Special relativity
The notation and tools of special relativity, especially 4-vectors and 4-gradients, offer a convenient way to write any continuity equation.
The density of a quantity $\rho$ and its current $\mathbf{j}$ can be combined into a 4-vector called a 4-current:

$$J^\mu = \left(c\rho,\, j_x,\, j_y,\, j_z\right)$$

where $c$ is the speed of light. The 4-divergence of this current is:

$$\partial_\mu J^\mu = \frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j}$$

where $\partial_\mu$ is the 4-gradient and $\mu$ is an index labeling the spacetime dimension. Then the continuity equation is:

$$\partial_\mu J^\mu = 0$$

in the usual case where there are no sources or sinks, that is, for perfectly conserved quantities like energy or charge. This continuity equation is manifestly ("obviously") Lorentz invariant.
Examples of continuity equations often written in this form include electric charge conservation

$$\partial_\mu J^\mu = 0$$

where $J^\mu$ is the electric 4-current; and energy–momentum conservation

$$\partial_\nu T^{\mu\nu} = 0$$

where $T^{\mu\nu}$ is the stress–energy tensor.
General relativity
In general relativity, where spacetime is curved, the continuity equation (in differential form) for energy, charge, or other conserved quantities involves the covariant divergence instead of the ordinary divergence.
For example, the stress–energy tensor is a second-order tensor field containing energy–momentum densities, energy–momentum fluxes, and shear stresses, of a mass-energy distribution. The differential form of energy–momentum conservation in general relativity states that the covariant divergence of the stress-energy tensor is zero:

$$T^{\mu\nu}{}_{;\nu} = 0$$

This is an important constraint on the form the Einstein field equations take in general relativity.
However, the ordinary divergence of the stress–energy tensor does not necessarily vanish:

$$\partial_\nu T^{\mu\nu} = -\Gamma^{\mu}{}_{\nu\lambda} T^{\nu\lambda} - \Gamma^{\nu}{}_{\nu\lambda} T^{\mu\lambda}$$

The right-hand side strictly vanishes for a flat geometry only.
As a consequence, the integral form of the continuity equation is difficult to define and not necessarily valid for a region within which spacetime is significantly curved (e.g. around a black hole, or across the whole universe).
Particle physics
Quarks and gluons have color charge, which is always conserved like electric charge, and there is a continuity equation for such color charge currents (explicit expressions for currents are given at gluon field strength tensor).
There are many other quantities in particle physics which are often or always conserved: baryon number (proportional to the number of quarks minus the number of antiquarks), electron number, mu number, tau number, isospin, and others. Each of these has a corresponding continuity equation, possibly including source / sink terms.
Noether's theorem
One reason that conservation equations frequently occur in physics is Noether's theorem. This states that whenever the laws of physics have a continuous symmetry, there is a continuity equation for some conserved physical quantity. The three most famous examples are:
The laws of physics are invariant with respect to time-translation—for example, the laws of physics today are the same as they were yesterday. This symmetry leads to the continuity equation for conservation of energy.
The laws of physics are invariant with respect to space-translation; for example, a rocket in outer space is not subject to different forces or potentials if it is displaced in any given direction (e.g. x, y, z). This symmetry leads to the continuity equations for conservation of the three components of momentum.
The laws of physics are invariant with respect to orientation—for example, floating in outer space, there is no measurement you can do to say "which way is up"; the laws of physics are the same regardless of how you are oriented. This symmetry leads to the continuity equation for conservation of angular momentum.
See also
Conservation law
Conservation form
Dissipative system
References
Further reading
Equations of fluid dynamics
Conservation equations
Partial differential equations | Continuity equation | [
"Physics",
"Chemistry",
"Mathematics"
] | 3,408 | [
"Equations of fluid dynamics",
"Equations of physics",
"Conservation laws",
"Mathematical objects",
"Equations",
"Fluid dynamics",
"Conservation equations",
"Symmetry",
"Physics theorems"
] |
410,899 | https://en.wikipedia.org/wiki/Therblig | Therbligs are elemental motions used in the study of workplace motion economy. A workplace task is analyzed by recording each of the therblig units for a process, with the results used for optimization of manual labour by eliminating unneeded movements. Eighteen therbligs have been defined.
The word therblig was the creation of Frank Bunker Gilbreth and Lillian Moller Gilbreth, American industrial psychologists who invented the field of time and motion study. It is a reversal of the name Gilbreth, with 'th' transposed.
Elements
A basic motion element is one of a set of fundamental motions used by a worker to perform a manual operation or task. The set consists of 18 elements, each describing one activity.
Transport empty [unloaded] (TE): reaching for an object with an empty hand. (Now called "Reach".)
Grasp (G): grasping an object with the active hand.
Transport loaded (TL): moving an object using a hand motion.
Hold (H): holding an object.
Release load (RL): releasing control of an object.
Pre-position (PP): positioning and/or orienting an object for the next operation, relative to an approximate location.
Position (P): positioning and/or orienting an object in the defined location.
Use (U): manipulating a tool in the intended way during the course of working.
Assemble (A): joining two parts together.
Disassemble (DA): separating multiple components that were joined.
Search (Sh): attempting to find an object using the eyes and hands.
Select (St): Choosing among several objects in a group.
Plan (Pn): deciding on a course of action.
Inspect (I): determining the quality or the characteristics of an object using the eyes and/or other senses.
Unavoidable delay (UD): waiting due to factors beyond the worker's control and included in the work cycle.
Avoidable delay (AD): pausing for reasons under the worker's control that are not part of the regular work cycle.
Rest (R): resting to overcome fatigue, consisting of a pause in the motions of the hands and/or body during the work cycles or between them.
Find (F): A momentary mental reaction at the end of the Search cycle. Seldom used.
Effective and ineffective basic motion elements
Effective:
Reach
Move
Grasp
Release Load
Use
Assemble
Disassemble
Pre-Position
Ineffective:
Hold
Rest
Position
Search
Select
Plan
Unavoidable Delay
Avoidable Delay
Inspect
Sample usage
Here is an example of how therbligs can be used to analyze motion:
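One illustrative breakdown (an assumed example, not the analysis from the original source) for the task of signing a form with a pen lying on a cluttered desk might be:
Search (Sh): locate the pen among the other objects on the desk.
Select (St): choose the pen from the adjacent items.
Transport empty (TE): reach toward the pen with an empty hand.
Grasp (G): close the fingers around the pen.
Transport loaded (TL): carry the pen to the form.
Position (P): orient the pen tip onto the signature line.
Use (U): write the signature.
Transport loaded (TL): return the pen.
Release load (RL): let go of the pen.
An analyst would time each element and ask which of them (for instance, the Search and Select caused by the cluttered desk) could be eliminated by rearranging the workplace.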
History
In an article published in 1915, Frank Gilbreth wrote of 16 elements: "The elements of a cycle of decisions and motions, either running partly or wholly concurrently with other elements in the same or other cycles, consist of the following, arranged in varying sequences: 1. Search, 2. Find, 3. Select, 4. Grasp, 5. Position, 6. Assemble, 7. Use, 8. Dissemble, or take apart, 9. Inspect, 10. Transport, loaded, 11. Pre-position for next operation, 12. Release load, 13. Transport, empty, 14. Wait (unavoidable delay), 15. Wait (avoidable delay), 16. Rest (for overcoming fatigue)."
Notes
References
External links
The Gilbreth Network: Therbligs
Time and motion study | Therblig | [
"Engineering"
] | 715 | [
"Time and motion study",
"Industrial engineering"
] |
410,923 | https://en.wikipedia.org/wiki/Neutron%20radiation | Neutron radiation is a form of ionizing radiation that presents as free neutrons. Typical phenomena are nuclear fission or nuclear fusion causing the release of free neutrons, which then react with nuclei of other atoms to form new nuclides—which, in turn, may trigger further neutron radiation. Free neutrons are unstable, decaying into a proton, an electron, plus an electron antineutrino. Free neutrons have a mean lifetime of 887 seconds (14 minutes, 47 seconds).
Neutron radiation is distinct from alpha, beta and gamma radiation.
Sources
Neutrons may be emitted from nuclear fusion or nuclear fission, or from other nuclear reactions such as radioactive decay or particle interactions with cosmic rays or within particle accelerators. Large neutron sources are rare, and usually limited to large-sized devices such as nuclear reactors or particle accelerators, including the Spallation Neutron Source.
Neutron radiation was discovered from observing an alpha particle colliding with a beryllium nucleus, which was transformed into a carbon nucleus while emitting a neutron, $^{9}$Be($\alpha$, n)$^{12}$C. The combination of an alpha particle emitter and an isotope with a large ($\alpha$, n) nuclear reaction probability is still a common neutron source.
Neutron radiation from fission
The neutrons in nuclear reactors are generally categorized as slow (thermal) neutrons or fast neutrons depending on their energy. Thermal neutrons are similar in energy distribution (the Maxwell–Boltzmann distribution) to a gas in thermodynamic equilibrium; but are easily captured by atomic nuclei and are the primary means by which elements undergo nuclear transmutation.
To achieve an effective fission chain reaction, neutrons produced during fission must be captured by fissionable nuclei, which then split, releasing more neutrons. In most fission reactor designs, the nuclear fuel is not sufficiently refined to absorb enough fast neutrons to carry on the chain reaction, due to the lower cross section for higher-energy neutrons, so a neutron moderator must be introduced to slow the fast neutrons down to thermal velocities to permit sufficient absorption. Common neutron moderators include graphite, ordinary (light) water and heavy water. A few reactors (fast neutron reactors) and all nuclear weapons rely on fast neutrons.
Cosmogenic neutrons
Cosmogenic neutrons are produced from cosmic radiation in the Earth's atmosphere or surface, as well as in particle accelerators. They often possess higher energy levels compared to neutrons found in reactors. Many of these neutrons activate atomic nuclei before reaching the Earth's surface, while a smaller fraction interact with nuclei in the atmospheric air. When these neutrons interact with nitrogen-14 atoms, they can transform them into carbon-14 (14C), which is extensively utilized in radiocarbon dating.
Uses
Cold, thermal and hot neutron radiation is most commonly used in scattering and diffraction experiments, to assess the properties and the structure of materials in crystallography, condensed matter physics, biology, solid state chemistry, materials science, geology, mineralogy, and related sciences. Neutron radiation is also used in Boron Neutron Capture Therapy to treat cancerous tumors due to its highly penetrating and damaging nature to cellular structure. Neutrons can also be used for imaging of industrial parts termed neutron radiography when using film, neutron radioscopy when taking a digital image, such as through image plates, and neutron tomography for three-dimensional images. Neutron imaging is commonly used in the nuclear industry, the space and aerospace industry, as well as the high reliability explosives industry.
Ionization mechanisms and properties
Neutron radiation is often called indirectly ionizing radiation. It does not ionize atoms in the same way that charged particles such as protons and electrons do (exciting an electron), because neutrons have no charge. However, neutron interactions are largely ionizing, for example when neutron absorption results in gamma emission and the gamma ray (photon) subsequently removes an electron from an atom, or a nucleus recoiling from a neutron interaction is ionized and causes more traditional subsequent ionization in other atoms. Because neutrons are uncharged, they are more penetrating than alpha radiation or beta radiation. In some cases they are more penetrating than gamma radiation, which is impeded in materials of high atomic number. In materials of low atomic number such as hydrogen, a low energy gamma ray may be more penetrating than a high energy neutron.
Health hazards and protection
In health physics, neutron radiation is a type of radiation hazard. Another, more severe hazard of neutron radiation, is neutron activation, the ability of neutron radiation to induce radioactivity in most substances it encounters, including bodily tissues. This occurs through the capture of neutrons by atomic nuclei, which are transformed to another nuclide, frequently a radionuclide. This process accounts for much of the radioactive material released by the detonation of a nuclear weapon. It is also a problem in nuclear fission and nuclear fusion installations as it gradually renders the equipment radioactive such that eventually it must be replaced and disposed of as low-level radioactive waste.
Neutron radiation protection relies on radiation shielding. Due to the high kinetic energy of neutrons, this radiation is considered the most severe and dangerous to the whole body when the exposure comes from external radiation sources. In comparison to conventional ionizing radiation based on photons or charged particles, neutrons are repeatedly bounced and slowed (and finally absorbed) by light nuclei, so hydrogen-rich material is more effective at shielding than heavy nuclei such as iron. The light atoms serve to slow down the neutrons by elastic scattering so they can then be absorbed by nuclear reactions. However, gamma radiation is often produced in such reactions, so additional shielding must be provided to absorb it. Care must be taken to avoid using materials whose nuclei undergo fission or neutron capture that causes radioactive decay of nuclei, producing gamma rays.
Neutrons readily pass through most material, and hence the absorbed dose (measured in grays) from a given amount of radiation is low, but they interact enough to cause biological damage. The most effective shielding materials are water, or hydrocarbons like polyethylene or paraffin wax. Water-extended polyester (WEP) is effective as a shielding wall in harsh environments due to its high hydrogen content and resistance to fire, allowing it to be used in a range of nuclear, health physics, and defense industries. Hydrogen-based materials are suitable for shielding because they slow and capture neutrons effectively.
Concrete (where a considerable number of water molecules chemically bind to the cement) and gravel provide a cheap solution due to their combined shielding of both gamma rays and neutrons. Boron is also an excellent neutron absorber (and also undergoes some neutron scattering); it decays into carbon or helium and produces virtually no gamma radiation. Boron carbide is a shield commonly used where concrete would be cost-prohibitive. Commercially, tanks of water or fuel oil, concrete, gravel, and B4C are common shields that surround areas of large amounts of neutron flux, e.g., nuclear reactors. Boron-impregnated silica glass, standard borosilicate glass, high-boron steel, paraffin, and Plexiglas have niche uses.
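For rough comparisons of shield thickness, fast-neutron attenuation is often summarized by a macroscopic removal cross-section $\Sigma_r$, giving a transmitted fraction of roughly $e^{-\Sigma_r x}$. The Python sketch below uses this simplification with approximate, illustrative $\Sigma_r$ values; real shielding design also accounts for scattering build-up and secondary gamma rays:

```python
import math

def transmitted_fraction(thickness_cm, sigma_r_per_cm):
    """Toy removal-cross-section model: fraction of fast neutrons
    transmitted through a slab of the given thickness."""
    return math.exp(-sigma_r_per_cm * thickness_cm)

# Approximate removal cross-sections for fission neutrons (illustrative):
materials = {"water": 0.10, "ordinary concrete": 0.09, "polyethylene": 0.12}
for name, sigma in materials.items():
    tenth_value = math.log(10.0) / sigma   # thickness for 10x attenuation
    print(f"{name:18s} ~{tenth_value:4.1f} cm per factor-of-10 attenuation")
```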
Because neutrons that strike a hydrogen nucleus (a proton or deuteron) impart energy to that nucleus, the struck nuclei break from their chemical bonds and travel a short distance before stopping. Such hydrogen nuclei are high linear energy transfer particles, and are in turn stopped by ionization of the material they travel through. Consequently, in living tissue, neutrons have a relatively high relative biological effectiveness, and are roughly ten times more effective at causing biological damage compared to gamma or beta radiation of equivalent energy exposure. These neutrons can either cause cells to change in their functionality or to completely stop replicating, causing damage to the body over time. Neutrons are particularly damaging to soft tissues like the cornea of the eye.
Effects on materials
High-energy neutrons damage and degrade materials over time; bombardment of materials with neutrons creates collision cascades that can produce point defects and dislocations in the material, the creation of which is the primary driver behind microstructural changes occurring over time in materials exposed to radiation. At high neutron fluences this can lead to embrittlement of metals and other materials, and to neutron-induced swelling in some of them. This poses a problem for nuclear reactor vessels and significantly limits their lifetime (which can be somewhat prolonged by controlled annealing of the vessel, reducing the number of the built-up dislocations). Graphite neutron moderator blocks are especially susceptible to this effect, known as the Wigner effect, and must be annealed periodically. The Windscale fire was caused by a mishap during such an annealing operation.
Radiation damage to materials occurs as a result of the interaction of an energetic incident particle (a neutron, or otherwise) with a lattice atom in the material. The collision causes a massive transfer of kinetic energy to the lattice atom, which is displaced from its lattice site, becoming what is known as the primary knock-on atom (PKA). Because the PKA is surrounded by other lattice atoms, its displacement and passage through the lattice results in many subsequent collisions and the creations of additional knock-on atoms, producing what is known as the collision cascade or displacement cascade. The knock-on atoms lose energy with each collision, and terminate as interstitials, effectively creating a series of Frenkel defects in the lattice. Heat is also created as a result of the collisions (from electronic energy loss), as are possibly transmuted atoms. The magnitude of the damage is such that a single 1 MeV neutron creating a PKA in an iron lattice produces approximately 1,100 Frenkel pairs. The entire cascade event occurs over a timescale of 1 × 10−13 seconds, and therefore, can only be "observed" in computer simulations of the event.
The knock-on atoms terminate in non-equilibrium interstitial lattice positions, many of which annihilate themselves by diffusing back into neighboring vacant lattice sites and restore the ordered lattice. Those that do not, or cannot, leave vacancies behind, which causes a local rise in the vacancy concentration far above the equilibrium concentration. These vacancies tend to migrate as a result of thermal diffusion towards vacancy sinks (i.e., grain boundaries, dislocations) but exist for significant amounts of time, during which additional high-energy particles bombard the lattice, creating collision cascades and additional vacancies, which migrate towards sinks. The main effect of irradiation in a lattice is the significant and persistent flux of defects to sinks in what is known as the defect wind. Vacancies can also annihilate by combining with one another to form dislocation loops and, later, lattice voids.
The collision cascade creates many more vacancies and interstitials in the material than equilibrium for a given temperature, and diffusivity in the material is dramatically increased as a result. This leads to an effect called radiation-enhanced diffusion, which leads to microstructural evolution of the material over time. The mechanisms leading to the evolution of the microstructure are many, may vary with temperature, flux, and fluence, and are a subject of extensive study.
Radiation-induced segregation results from the aforementioned flux of vacancies to sinks, implying a flux of lattice atoms away from sinks; but not necessarily in the same proportion to alloy composition in the case of an alloyed material. These fluxes may therefore lead to depletion of alloying elements in the vicinity of sinks. For the flux of interstitials introduced by the cascade, the effect is reversed: the interstitials diffuse toward sinks resulting in alloy enrichment near the sink.
Dislocation loops are formed if vacancies form clusters on a lattice plane. If these vacancy clusters expand in three dimensions, a void forms. By definition, voids are under vacuum, but they may become gas-filled in the case of alpha-particle radiation (helium) or if the gas is produced as a result of transmutation reactions. The void is then called a bubble, and leads to dimensional instability (neutron-induced swelling) of parts subject to radiation. Swelling presents a major long-term design problem, especially in reactor components made out of stainless steel. Alloys with crystallographic anisotropy, such as Zircaloys, are subject to the creation of dislocation loops, but do not exhibit void formation. Instead, the loops form on particular lattice planes, and can lead to irradiation-induced growth, a phenomenon distinct from swelling, but one that can also produce significant dimensional changes in an alloy.
Irradiation of materials can also induce phase transformations in the material: in the case of a solid solution, the solute enrichment or depletion at sinks radiation-induced segregation can lead to the precipitation of new phases in the material.
The mechanical effects of these mechanisms include irradiation hardening, embrittlement, creep, and environmentally-assisted cracking. The defect clusters, dislocation loops, voids, bubbles, and precipitates produced as a result of radiation in a material all contribute to the strengthening and embrittlement (loss of ductility) in the material. Embrittlement is of particular concern for the material comprising the reactor pressure vessel, where as a result the energy required to fracture the vessel decreases significantly. It is possible to restore ductility by annealing the defects out, and much of the life-extension of nuclear reactors depends on the ability to safely do so. Creep is also greatly accelerated in irradiated materials, though not as a result of the enhanced diffusivities, but rather as a result of the interaction between lattice stress and the developing microstructure. Environmentally-assisted cracking or, more specifically, irradiation-assisted stress corrosion cracking (IASCC) is observed especially in alloys subject to neutron radiation and in contact with water, caused by hydrogen absorption at crack tips resulting from radiolysis of the water, leading to a reduction in the required energy to propagate the crack.
See also
Neutron emission
Neutron flux
Neutron radiography
References
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.111.222501
External links
EPA definitions of various terms
Comparison of Neutron Radiographic and X-Radiographic Images
Neutron techniques A unique tool for research and development
IARC Group 1 carcinogens
Ionizing radiation
Radiation
Neutron-related techniques | Neutron radiation | [
"Physics"
] | 2,972 | [
"Ionizing radiation",
"Physical phenomena",
"Radiation"
] |
411,215 | https://en.wikipedia.org/wiki/Integer%20programming | An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints (other than the integer constraints) are linear.
Integer programming is NP-complete. In particular, the special case of 0–1 integer linear programming, in which unknowns are binary, and only the restrictions must be satisfied, is one of Karp's 21 NP-complete problems.
If some decision variables are not discrete, the problem is known as a mixed-integer programming problem.
Canonical and standard form for ILPs
In integer linear programming, the canonical form is distinct from the standard form. An integer linear program in canonical form is expressed thus (note that it is the vector $\mathbf{x}$ which is to be decided):

$$\begin{aligned} &\text{maximize} && \mathbf{c}^\mathsf{T}\mathbf{x} \\ &\text{subject to} && A\mathbf{x} \le \mathbf{b}, \quad \mathbf{x} \ge \mathbf{0}, \quad \mathbf{x} \in \mathbb{Z}^n, \end{aligned}$$

and an ILP in standard form is expressed as

$$\begin{aligned} &\text{maximize} && \mathbf{c}^\mathsf{T}\mathbf{x} \\ &\text{subject to} && A\mathbf{x} + \mathbf{s} = \mathbf{b}, \quad \mathbf{s} \ge \mathbf{0}, \quad \mathbf{x} \ge \mathbf{0}, \quad \mathbf{x} \in \mathbb{Z}^n, \end{aligned}$$

where $\mathbf{c}$ and $\mathbf{b}$ are vectors and $A$ is a matrix. As with linear programs, ILPs not in standard form can be converted to standard form by eliminating inequalities, introducing slack variables ($\mathbf{s}$) and replacing variables that are not sign-constrained with the difference of two sign-constrained variables.
Example
The plot on the right shows the following problem:

$$\max\ y \quad \text{subject to} \quad -x + y \le 1,\quad 3x + 2y \le 12,\quad 2x + 3y \le 12,\quad x, y \ge 0,\quad x, y \in \mathbb{Z}.$$

The feasible integer points are shown in red, and the red dashed lines indicate their convex hull, which is the smallest convex polyhedron that contains all of these points. The blue lines together with the coordinate axes define the polyhedron of the LP relaxation, which is given by the inequalities without the integrality constraint. The goal of the optimization is to move the black dashed line as far upward while still touching the polyhedron. The optimal solutions of the integer problem are the points $(1, 2)$ and $(2, 2)$, which both have an objective value of 2. The unique optimum of the relaxation is $(1.8, 2.8)$ with objective value of 2.8. If the solution of the relaxation is rounded to the nearest integers, it is not feasible for the ILP.
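Because the example is tiny, it can be checked by brute-force enumeration. A short Python sketch, assuming the constraints as written above:

```python
from itertools import product

def feasible(x, y):
    return -x + y <= 1 and 3*x + 2*y <= 12 and 2*x + 3*y <= 12

# Enumerate all candidate integer points (bounds follow from 3x <= 12).
points = [(x, y) for x, y in product(range(5), repeat=2) if feasible(x, y)]
best_value = max(y for _, y in points)
print([p for p in points if p[1] == best_value])   # [(1, 2), (2, 2)]

# Rounding the LP optimum (1.8, 2.8) to (2, 3) violates 2x + 3y <= 12:
print(feasible(2, 3))                              # False
```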
Proof of NP-hardness
The following is a reduction from minimum vertex cover to integer programming that will serve as the proof of NP-hardness.
Let $G = (V, E)$ be an undirected graph. Define a linear program as follows:

$$\min \sum_{v \in V} y_v \quad \text{s.t.} \quad y_u + y_v \ge 1 \ \ \forall (u, v) \in E, \qquad y_v \in \{0, 1\} \ \ \forall v \in V.$$

Given that the constraints limit $y_v$ to either 0 or 1, any feasible solution to the integer program is a subset of vertices. The first constraint implies that at least one end point of every edge is included in this subset. Therefore, the solution describes a vertex cover. Additionally, given some vertex cover C, $y_v$ can be set to 1 for any $v \in C$ and to 0 for any $v \notin C$, thus giving us a feasible solution to the integer program. Thus we can conclude that if we minimize the sum of $y_v$ we have also found the minimum vertex cover.
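The reduction can be exercised directly on a small graph by brute-forcing the 0-1 program; this is exponential in the number of vertices and serves only to make the correspondence concrete (the function name and test graph are illustrative):

```python
from itertools import product

def min_vertex_cover_ilp(vertices, edges):
    """Solve: minimize sum(y_v) s.t. y_u + y_v >= 1 for every edge,
    y_v in {0, 1}, by enumerating all 2^|V| assignments."""
    best = None
    for bits in product((0, 1), repeat=len(vertices)):
        y = dict(zip(vertices, bits))
        if all(y[u] + y[v] >= 1 for u, v in edges):
            if best is None or sum(bits) < sum(best.values()):
                best = y
    return sorted(v for v, used in best.items() if used)

# A 4-cycle a-b-c-d: any two opposite vertices form a minimum cover.
print(min_vertex_cover_ilp("abcd", [("a","b"), ("b","c"), ("c","d"), ("d","a")]))
```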
Variants
Mixed-integer linear programming (MILP) involves problems in which only some of the variables are constrained to be integers, while other variables are allowed to be non-integers.
Zero–one linear programming (or binary integer programming) involves problems in which the variables are restricted to be either 0 or 1. Any bounded integer variable can be expressed as a combination of binary variables. For example, given an integer variable $0 \le x \le U$, the variable can be expressed using $\lfloor \log_2 U \rfloor + 1$ binary variables:

$$x = 2^0 x_1 + 2^1 x_2 + \cdots + 2^{\lfloor \log_2 U \rfloor} x_{\lfloor \log_2 U \rfloor + 1}, \qquad x_k \in \{0, 1\}.$$
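The encoding is just the binary representation of $x$, as the following Python sketch shows (an extra linear constraint $x \le U$ is still needed in the program, since the bits alone can represent values up to $2^k - 1$):

```python
def encode(x, num_bits):
    """Binary variables b_i with x = sum(2**i * b_i)."""
    return [(x >> i) & 1 for i in range(num_bits)]

def decode(bits):
    return sum(2**i * b for i, b in enumerate(bits))

U = 12                  # bound 0 <= x <= U
k = U.bit_length()      # floor(log2(U)) + 1 bits
print(encode(11, k))    # [1, 1, 0, 1]
print(decode(encode(11, k)))  # 11
```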
Applications
There are two main reasons for using integer variables when modeling problems as a linear program:
The integer variables represent quantities that can only be integer. For example, it is not possible to build 3.7 cars.
The integer variables represent decisions (e.g. whether to include an edge in a graph) and so should only take on the value 0 or 1.
These considerations occur frequently in practice and so integer linear programming can be used in many application areas, some of which are briefly described below.
Production planning
Mixed-integer programming has many applications in industrial productions, including job-shop modelling. One important example happens in agricultural production planning and involves determining production yield for several crops that can share resources (e.g. land, labor, capital, seeds, fertilizer, etc.). A possible objective is to maximize the total production, without exceeding the available resources. In some cases, this can be expressed in terms of a linear program, but the variables must be constrained to be integer.
Scheduling
These problems involve service and vehicle scheduling in transportation networks. For example, a problem may involve assigning buses or subways to individual routes so that a timetable can be met, and also to equip them with drivers. Here binary decision variables indicate whether a bus or subway is assigned to a route and whether a driver is assigned to a particular train or subway. The zero–one programming technique has been successfully applied to solve a project selection problem in which projects are mutually exclusive and/or technologically interdependent. It is a special case of integer programming in which all the decision variables can assume only the values zero or one.
Territorial partitioning
Territorial partitioning or districting problems consist of partitioning a geographical region into districts in order to plan some operations while considering different criteria or constraints. Some requirements for this problem are: contiguity, compactness, balance or equity, respect of natural boundaries, and socio-economic homogeneity. Some applications for this type of problem include: political districting, school districting, health services districting and waste management districting.
Telecommunications networks
The goal of these problems is to design a network of lines to install so that a predefined set of communication requirements are met and the total cost of the network is minimal. This requires optimizing both the topology of the network along with setting the capacities of the various lines. In many cases, the capacities are constrained to be integer quantities. Usually there are, depending on the technology used, additional restrictions that can be modeled as linear inequalities with integer or binary variables.
Cellular networks
The task of frequency planning in GSM mobile networks involves distributing available frequencies across the antennas so that users can be served and interference is minimized between the antennas. This problem can be formulated as an integer linear program in which binary variables indicate whether a frequency is assigned to an antenna.
Other applications
Cash flow matching
Energy system optimization
UAV guidance
Transit map layouting
Algorithms
The naive way to solve an ILP is to simply remove the constraint that x is integer, solve the corresponding LP (called the LP relaxation of the ILP), and then round the entries of the solution to the LP relaxation. But, not only may this solution not be optimal, it may not even be feasible; that is, it may violate some constraint.
Using total unimodularity
While in general the solution to LP relaxation will not be guaranteed to be integral, if the ILP has the form $\max \mathbf{c}^\mathsf{T}\mathbf{x}$ such that $A\mathbf{x} = \mathbf{b}$, $\mathbf{x} \ge \mathbf{0}$, where $A$ and $\mathbf{b}$ have all integer entries and $A$ is totally unimodular, then every basic feasible solution is integral. Consequently, the solution returned by the simplex algorithm is guaranteed to be integral. To show that every basic feasible solution is integral, let $\mathbf{x}_0$ be an arbitrary basic feasible solution. Since $\mathbf{x}_0$ is feasible, we know that $A\mathbf{x}_0 = \mathbf{b}$. Let $\mathbf{x}_B$ be the elements of $\mathbf{x}_0$ corresponding to the basis columns for the basic solution $\mathbf{x}_0$. By definition of a basis, there is some square submatrix $B$ of $A$ with linearly independent columns such that $B\mathbf{x}_B = \mathbf{b}$.

Since the columns of $B$ are linearly independent and $B$ is square, $B$ is nonsingular, and therefore by assumption, $B$ is unimodular and so $\det(B) = \pm 1$. Also, since $B$ is nonsingular, it is invertible and therefore $\mathbf{x}_B = B^{-1}\mathbf{b}$. By definition, $B^{-1} = \frac{B^{\mathrm{adj}}}{\det(B)} = \pm B^{\mathrm{adj}}$. Here $B^{\mathrm{adj}}$ denotes the adjugate of $B$ and is integral because $B$ is integral. Therefore,

$$\mathbf{x}_B = B^{-1}\mathbf{b} = \pm B^{\mathrm{adj}}\mathbf{b} \quad \text{is integral.}$$

Thus, if the matrix $A$ of an ILP is totally unimodular, rather than use an ILP algorithm, the simplex method can be used to solve the LP relaxation and the solution will be integer.
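Total unimodularity can be verified directly for small matrices by checking the determinant of every square submatrix, as in this Python sketch (exponential in the matrix size, so it is suitable only for examples; incidence matrices of bipartite graphs are a classic totally unimodular family):

```python
import numpy as np
from itertools import combinations

def is_totally_unimodular(A):
    """True iff every square submatrix of A has determinant -1, 0, or +1."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = int(round(np.linalg.det(A[np.ix_(rows, cols)])))
                if d not in (-1, 0, 1):
                    return False
    return True

# Incidence matrix of the bipartite graph K_{2,2}
# (rows: vertices, columns: edges).
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]])
print(is_totally_unimodular(A))   # True
```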
Exact algorithms
When the matrix is not totally unimodular, there are a variety of algorithms that can be used to solve integer linear programs exactly. One class of algorithms are cutting plane methods, which work by solving the LP relaxation and then adding linear constraints that drive the solution towards being integer without excluding any integer feasible points.
Another class of algorithms are variants of the branch and bound method. For example, the branch and cut method that combines both branch and bound and cutting plane methods. Branch and bound algorithms have a number of advantages over algorithms that only use cutting planes. One advantage is that the algorithms can be terminated early and as long as at least one integral solution has been found, a feasible, although not necessarily optimal, solution can be returned. Further, the solutions of the LP relaxations can be used to provide a worst-case estimate of how far from optimality the returned solution is. Finally, branch and bound methods can be used to return multiple optimal solutions.
Exact algorithms for a small number of variables
Suppose $A$ is an m-by-n integer matrix and $\mathbf{b}$ is an m-by-1 integer vector. We focus on the feasibility problem, which is to decide whether there exists an n-by-1 vector $\mathbf{x}$ satisfying $A\mathbf{x} \le \mathbf{b}$.
Let V be the maximum absolute value of the coefficients in $A$ and $\mathbf{b}$. If n (the number of variables) is a fixed constant, then the feasibility problem can be solved in time polynomial in m and log V. This is trivial for the case n=1. The case n=2 was solved in 1981 by Herbert Scarf. The general case was solved in 1983 by Hendrik Lenstra, combining ideas by László Lovász and Peter van Emde Boas. Doignon's theorem asserts that an integer program is feasible whenever every subset of $2^n$ constraints is feasible; a method combining this result with algorithms for LP-type problems can be used to solve integer programs in time that is linear in $m$ and fixed-parameter tractable (FPT) in $n$, but possibly doubly exponential in $n$, with no dependence on $V$.
In the special case of 0-1 ILP, Lenstra's algorithm is equivalent to complete enumeration: the number of all possible solutions is fixed (2n), and checking the feasibility of each solution can be done in time poly(m, log V). In the general case, where each variable can be an arbitrary integer, complete enumeration is impossible. Here, Lenstra's algorithm uses ideas from Geometry of numbers. It transforms the original problem into an equivalent one with the following property: either the existence of a solution is obvious, or the value of (the n-th variable) belongs to an interval whose length is bounded by a function of n. In the latter case, the problem is reduced to a bounded number of lower-dimensional problems. The run-time complexity of the algorithm has been improved in several steps:
The original algorithm of Lenstra had run-time $2^{O(n^3)} \cdot \mathrm{poly}(m, \log V)$.
Kannan presented an improved algorithm with run-time $n^{O(n)} \cdot \mathrm{poly}(m, \log V)$.
Frank and Tardos presented an improved algorithm with run-time $n^{2.5n} \cdot 2^{O(n)} \cdot \mathrm{poly}(m, \log V)$.
Dadush presented an improved algorithm with run-time $n^{n} \cdot 2^{O(n)} \cdot \mathrm{poly}(m, \log V)$.
Reis and Rothvoss presented an improved algorithm with run-time $(\log n)^{O(n)} \cdot \mathrm{poly}(m, \log V)$.
These algorithms can also be used for mixed integer linear programs (MILP) - programs in which some variables are integer and some variables are real. The original algorithm of Lenstra has run-time , where n is the number of integer variables, d is the number of continuous variables, and L is the binary encoding size of the problem. Using techniques from later algorithms, the factor can be improved to or to .
Heuristic methods
Since integer linear programming is NP-hard, many problem instances are intractable and so heuristic methods must be used instead. For example, tabu search can be used to search for solutions to ILPs. To use tabu search to solve ILPs, moves can be defined as incrementing or decrementing an integer constrained variable of a feasible solution while keeping all other integer-constrained variables constant. The unrestricted variables are then solved for. Short-term memory can consist of previously tried solutions while medium-term memory can consist of values for the integer constrained variables that have resulted in high objective values (assuming the ILP is a maximization problem). Finally, long-term memory can guide the search towards integer values that have not previously been tried.
Other heuristic methods that can be applied to ILPs include
Hill climbing
Simulated annealing
Reactive search optimization
Ant colony optimization
Hopfield neural networks
There are also a variety of other problem-specific heuristics, such as the k-opt heuristic for the traveling salesman problem. A disadvantage of heuristic methods is that if they fail to find a solution, it cannot be determined whether it is because there is no feasible solution or whether the algorithm simply was unable to find one. Further, it is usually impossible to quantify how close to optimal a solution returned by these methods are.
Sparse integer programming
It is often the case that the matrix $A$ that defines the integer program is sparse. In particular, this occurs when the matrix has a block structure, which is the case in many applications. The sparsity of the matrix can be measured as follows. The graph of $A$ has vertices corresponding to columns of $A$, and two columns form an edge if $A$ has a row where both columns have nonzero entries. Equivalently, the vertices correspond to variables, and two variables form an edge if they share an inequality. The sparsity measure $d$ of $A$ is the minimum of the tree-depth of the graph of $A$ and the tree-depth of the graph of the transpose of $A$. Let $a$ be the numeric measure of $A$ defined as the maximum absolute value of any entry of $A$. Let $n$ be the number of variables of the integer program. Then it was shown in 2018 that integer programming can be solved in strongly polynomial and fixed-parameter tractable time parameterized by $a$ and $d$. That is, for some computable function $f$ and some constant $k$, integer programming can be solved in time $f(a, d)\, n^k$. In particular, the time is independent of the right-hand side $\mathbf{b}$ and objective function $\mathbf{c}$. Moreover, in contrast to the classical result of Lenstra, where the number $n$ of variables is a parameter, here the number of variables is a variable part of the input.
See also
Constrained least squares
References
Further reading
External links
A Tutorial on Integer Programming
Conference Integer Programming and Combinatorial Optimization, IPCO
The Aussois Combinatorial Optimization Workshop
Combinatorial optimization
NP-complete problems | Integer programming | [
"Mathematics"
] | 2,872 | [
"NP-complete problems",
"Mathematical problems",
"Computational problems"
] |
411,226 | https://en.wikipedia.org/wiki/Charts%20on%20SO%283%29 | In mathematics, the special orthogonal group in three dimensions, otherwise known as the rotation group SO(3), is a naturally occurring example of a manifold. The various charts on SO(3) set up rival coordinate systems: in this case there cannot be said to be a preferred set of parameters describing a rotation. There are three degrees of freedom, so that the dimension of SO(3) is three. In numerous applications one or other coordinate system is used, and the question arises how to convert from a given system to another.
The space of rotations
In geometry the rotation group is the group of all rotations about the origin of three-dimensional Euclidean space R3 under the operation of composition. By definition, a rotation about the origin is a linear transformation that preserves length of vectors (it is an isometry) and preserves orientation (i.e. handedness) of space. A length-preserving transformation which reverses orientation is called an improper rotation. Every improper rotation of three-dimensional Euclidean space is a rotation followed by a reflection in a plane through the origin.
Composing two rotations results in another rotation; every rotation has a unique inverse rotation; and the identity map satisfies the definition of a rotation. Owing to the above properties, the set of all rotations is a group under composition. Moreover, the rotation group has a natural manifold structure for which the group operations are smooth; so it is in fact a Lie group. The rotation group is often denoted SO(3) for reasons explained below.
The space of rotations is isomorphic with the set of rotation operators and the set of orthogonal matrices with determinant +1. It is also closely related (double covered) to the set of unit quaternions with their internal product, as well as to the set of rotation vectors (though here the relation is harder to describe, see below for details), with a different internal composition operation given by the product of their equivalent matrices.
The rotation vector notation arises from Euler's rotation theorem, which states that any rotation in three dimensions can be described by a rotation by some angle about some axis. Considering this, we can then specify the axis of one of these rotations by two angles, and we can use the radius of the vector to specify the angle of rotation. These vectors represent a ball in 3D with an unusual topology.
This 3D solid sphere is equivalent to the surface of a 4D ball, which is also a 3D manifold. To make this equivalence precise, we will have to define how we represent a rotation with this 4D-embedded surface.
The hypersphere of rotations
Visualizing the hypersphere
It is interesting to consider the space of rotations as the three-dimensional sphere S3, the boundary of a disk in 4-dimensional Euclidean space. To do this, we must define how to represent a rotation with this 4D-embedded surface.
The way in which the radius can be used to specify the angle of rotation is not straightforward. It can be related to circles of latitude on a sphere with a defined north pole and is explained as follows:
Beginning at the north pole of a sphere in three-dimensional space, we specify the point at the north pole to represent the identity rotation. In the case of the identity rotation, no axis of rotation is defined, and the angle of rotation (zero) is irrelevant. A rotation with its axis contained in the xy-plane and a very small rotation angle can be specified by a slice through the sphere parallel to the xy-plane and very near the north pole. The circle defined by this slice will be very small, corresponding to the small angle of the rotation. As the rotation angles become larger, the slice moves southward, and the circles become larger until the equator of the sphere is reached, which will correspond to a rotation angle of 180 degrees. Continuing southward, the radii of the circles now become smaller (corresponding to the absolute value of the angle of the rotation considered as a negative number). Finally, as the south pole is reached, the circles shrink once more to the identity rotation, which is also specified as the point at the south pole. Notice that a number of characteristics of such rotations and their representations can be seen by this visualization.
The space of rotations is continuous, each rotation has a neighborhood of rotations which are nearly the same, and this neighborhood becomes flat as the neighborhood shrinks.
Aliases
Also, each rotation is actually represented by two antipodal points on the sphere, which are at opposite ends of a line through the center of the sphere. This reflects the fact that each rotation can be represented as a rotation about some axis, or, equivalently, as a negative rotation about an axis pointing in the opposite direction (a so-called double cover). The "latitude" of a circle representing a particular rotation angle will be half of the angle represented by that rotation, since as the point is moved from the north to the south pole, the latitude ranges from zero to 180 degrees, while the angle of rotation ranges from 0 to 360 degrees. (The "longitude" of a point then represents a particular axis of rotation.) Note however that this set of rotations, those with axes in the xy-plane, is not closed under composition.
Two successive rotations with axes in the xy-plane will not necessarily give a rotation whose axis lies in the xy-plane, and thus cannot be represented as a point on the sphere. This is not the case for general rotations in 3-space, which do form a closed set under composition.
This visualization can be extended to a general rotation in 3-dimensional space. The identity rotation is a point, and a small angle of rotation about some axis can be represented as a point on a sphere with a small radius. As the angle of rotation grows, the sphere grows, until the angle of rotation reaches 180 degrees, at which point the sphere begins to shrink, becoming a point as the angle approaches 360 degrees (or zero degrees from the negative direction). This set of expanding and contracting spheres represents a hypersphere in four-dimensional space (a 3-sphere).
Just as in the simpler example above, each rotation represented as a point on the hypersphere is matched by its antipodal point on that hypersphere. The "latitude" on the hypersphere will be half of the corresponding angle of rotation, and the neighborhood of any point will become "flatter" (i.e. be represented by a 3D Euclidean space of points) as the neighborhood shrinks.
This behavior is matched by the set of unit quaternions: A general quaternion represents a point in a four-dimensional space, but constraining it to have unit magnitude yields a three-dimensional space equivalent to the surface of a hypersphere. The magnitude of the unit quaternion will be unity, corresponding to a hypersphere of unit radius.
The vector part of a unit quaternion points along the axis of rotation and lies on the 2-sphere whose radius is the sine of half the angle of rotation. Each rotation is represented by two unit quaternions of opposite sign, and, as in the space of rotations in three dimensions, the quaternion product of two unit quaternions will yield a unit quaternion. Also, the space of unit quaternions is "flat" in any infinitesimal neighborhood of a given unit quaternion.
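The closure property can be made concrete with a minimal Python sketch (illustrative only; the helper name quat_mul is an assumption, not a standard library call) that multiplies two random unit quaternions with the Hamilton product and checks that the result again has unit length:

    import numpy as np

    def quat_mul(p, q):
        # Hamilton product of quaternions given as (w, x, y, z)
        w1, x1, y1, z1 = p
        w2, x2, y2, z2 = q
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    rng = np.random.default_rng(0)
    p = rng.normal(size=4); p /= np.linalg.norm(p)   # a random point of the 3-sphere
    q = rng.normal(size=4); q /= np.linalg.norm(q)
    print(np.linalg.norm(quat_mul(p, q)))            # 1.0: the product stays on the 3-sphere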
Parametrizations
We can parameterize the space of rotations in several ways, but degeneracies will always appear. For example, if we use three angles (Euler angles), such a parameterization is degenerate at some points on the hypersphere, leading to the problem of gimbal lock. We can avoid this by using four Euclidean coordinates w,x,y,z, with w2 + x2 + y2 + z2 = 1. The point (w,x,y,z) represents a rotation around the axis directed by the vector (x,y,z) by an angle α = 2 arccos(w) = 2 arcsin(√(x2 + y2 + z2)).
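As a quick numerical illustration of this parametrization (a sketch under the (w, x, y, z) convention above; quat_to_axis_angle is a hypothetical helper, not a library function):

    import numpy as np

    def quat_to_axis_angle(w, x, y, z):
        # assumes w**2 + x**2 + y**2 + z**2 == 1
        angle = 2.0 * np.arccos(np.clip(w, -1.0, 1.0))
        s = np.sqrt(max(1.0 - w*w, 0.0))             # equals sin(angle / 2)
        axis = np.array([1.0, 0.0, 0.0]) if s < 1e-12 else np.array([x, y, z]) / s
        return axis, angle

    axis, angle = quat_to_axis_angle(np.cos(0.3), np.sin(0.3), 0.0, 0.0)
    print(axis, angle)                               # [1. 0. 0.] 0.6: a 0.6 rad turn about x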
This problem is similar to parameterizing the two-dimensional surface of a sphere with two coordinates, such as latitude and longitude. Latitude and longitude are ill-behaved (degenerate) at the north and south poles, though the poles are not intrinsically different from any other points on the sphere. At the poles (latitudes +90° and −90°), the longitude becomes meaningless. It can be shown that no two-parameter coordinate system can avoid such degeneracy.
The candidate parametrizations include:
Euler angles (θ,φ,ψ), representing a product of rotations about the z, x′ and z″ axes;
Tait–Bryan angles (θ,φ,ψ), representing a product of rotations about the x, y′ and z″ axes;
Axis angle pair (n, θ) of a unit vector representing an axis, and an angle of rotation about it;
A quaternion q of length 1 (cf. Versor, quaternions and spatial rotation, 3-sphere), the components of which are also called Euler–Rodrigues parameters;
a 3 × 3 skew-symmetric matrix, via exponentiation; the 3 × 3 skew-symmetric matrices are the Lie algebra so(3), and this is the exponential map in Lie theory (see the sketch after this list);
Cayley rational parameters, based on the Cayley transform, usable in all characteristics;
Möbius transformations, acting on the Riemann sphere.
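For the skew-symmetric parametrization mentioned in the list above, the following sketch (using scipy.linalg.expm; the helper name skew is an assumption) exponentiates a skew-symmetric matrix and confirms that the result is a rotation matrix:

    import numpy as np
    from scipy.linalg import expm

    def skew(v):
        # exp(skew(v)) rotates by |v| radians about the axis v/|v|
        x, y, z = v
        return np.array([[0.0,  -z,   y],
                         [  z, 0.0,  -x],
                         [ -y,   x, 0.0]])

    R = expm(skew([0.0, 0.0, np.pi / 2]))            # a quarter turn about z
    print(np.allclose(R @ R.T, np.eye(3)))           # True: orthogonal
    print(np.isclose(np.linalg.det(R), 1.0))         # True: determinant +1

As the next section explains, this map is many-to-one: lengthening v by 2π returns the same rotation matrix.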
Problems of the parametrizations
There are problems in using these as more than local charts, to do with their multiple-valued nature, and singularities. That is, one must be careful above all to work only with diffeomorphisms in the definition of chart. Problems of this sort are inevitable, since SO(3) is diffeomorphic to real projective space P3(R), which is a quotient of S3 by identifying antipodal points, and charts try to model a manifold using R3.
This explains why, for example, the Euler angles appear to give a variable in the 3-torus, and the unit quaternions in a 3-sphere. The uniqueness of the representation by Euler angles breaks down at some points (cf. gimbal lock), while the quaternion representation is always a double cover, with q and −q giving the same rotation.
If we use a skew-symmetric matrix, every 3 × 3 skew-symmetric matrix is determined by 3 parameters, and so at first glance, the parameter space is R3. Exponentiating such a matrix results in an orthogonal 3 × 3 matrix of determinant 1 – in other words, a rotation matrix, but this is a many-to-one map. Note that it is not a covering map – while it is a local homeomorphism near the origin, it is not a covering map at rotations by 180 degrees. It is possible to restrict these matrices to a ball around the origin in R3 so that rotations do not exceed 180 degrees, and this will be one-to-one, except for rotations by 180 degrees, which correspond to the boundary S2, and these identify antipodal points – this is the cut locus. The 3-ball with this identification of the boundary is P3(R). A similar situation holds for applying a Cayley transform to the skew-symmetric matrix.
Axis angle gives parameters in S2 × S1; if we replace the unit vector by the actual axis of rotation, so that n and −n give the same axis line, the set of axes becomes P2(R), the real projective plane. But since rotations around n and −n are parameterized by opposite values of θ, the result is an S1 bundle over P2(R), which turns out to be P3(R).
Fractional linear transformations use four complex parameters, a, b, c, and d, with the condition that ad−bc is non-zero. Since multiplying all four parameters by the same complex number does not change the parameter, we can insist that ad−bc=1. This suggests writing (a,b,c,d) as a 2 × 2 complex matrix of determinant 1, that is, as an element of the special linear group SL(2,C). But not all such matrices produce rotations: conformal maps on S2 are also included. To only get rotations we insist that d is the complex conjugate of a, and c is the negative of the complex conjugate of b. Then we have two complex numbers, a and b, subject to |a|2+|b|2=1. If we write a+bj, this is a quaternion of unit length.
Ultimately, since R3 is not P3(R), there will be a problem with each of these approaches. In some cases, we need to remember that certain parameter values result in the same rotation, and to remove this issue, boundaries must be set up, but then a path through this region in R3 must suddenly jump to a different region when it crosses a boundary. Gimbal lock is a problem when the derivative of the map is not full rank, which occurs with Euler angles and Tait–Bryan angles, but not for the other choices. The quaternion representation has none of these problems (being a two-to-one mapping everywhere), but it has 4 parameters with a condition (unit length), which sometimes makes it harder to see the three degrees of freedom available.
Applications
One area in which these considerations, in some form, become inevitable, is the kinematics of a rigid body. One can take as definition the idea of a curve in the Euclidean group E(3) of three-dimensional Euclidean space, starting at the identity (initial position). The translation subgroup T of E(3) is a normal subgroup, with quotient SO(3) if we look at the subgroup E+(3) of direct isometries only (which is reasonable in kinematics). The translational part can be decoupled from the rotational part in standard Newtonian kinematics by considering the motion of the center of mass, and rotations of the rigid body about the center of mass. Therefore, any rigid body movement leads directly to SO(3), when we factor out the translational part.
These identifications illustrate that SO(3) is connected but not simply connected. As to the latter, in the ball with antipodal surface points identified, consider the path running from the "north pole" straight through the center down to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be shrunk to a point, since no matter how you deform the loop, the start and end point have to remain antipodal, or else the loop will "break open". In terms of rotations, this loop represents a continuous sequence of rotations about the z-axis starting and ending at the identity rotation (i.e. a series of rotations through an angle φ where φ runs from 0 to 2π).
Surprisingly, if you run through the path twice, i.e., from north pole down to south pole and back to the north pole so that φ runs from 0 to 4π, you get a closed loop which can be shrunk to a single point: first move the paths continuously to the ball's surface, still connecting north pole to south pole twice. The second half of the path can then be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole without problems. The Balinese plate trick and similar tricks demonstrate this practically.
The same argument can be performed in general, and it shows that the fundamental group of SO(3) is the cyclic group of order 2. In physics applications, the non-triviality of the fundamental group allows for the existence of objects known as spinors, and is an important tool in the development of the spin–statistics theorem.
The universal cover of SO(3) is a Lie group called Spin(3). The group Spin(3) is isomorphic to the special unitary group SU(2); it is also diffeomorphic to the unit 3-sphere S3 and can be understood as the group of unit quaternions (i.e. those with absolute value 1). The connection between quaternions and rotations, commonly exploited in computer graphics, is explained in quaternions and spatial rotations. The map from S3 onto SO(3) that identifies antipodal points of S3 is a surjective homomorphism of Lie groups, with kernel {±1}. Topologically, this map is a two-to-one covering map.
See also
References
Euclidean symmetries
Lie groups
Rotation in three dimensions | Charts on SO(3) | [
"Physics",
"Mathematics"
] | 3,537 | [
"Lie groups",
"Mathematical structures",
"Euclidean symmetries",
"Functions and mappings",
"Mathematical objects",
"Algebraic structures",
"Mathematical relations",
"Symmetry"
] |
411,231 | https://en.wikipedia.org/wiki/Spin%20group | In mathematics the spin group, denoted Spin(n), is a Lie group whose underlying manifold is the double cover of the special orthogonal group SO(n), such that there exists a short exact sequence of Lie groups (when n ≠ 2)
1 → Z2 → Spin(n) → SO(n) → 1.
The group multiplication law on the double cover is given by lifting the multiplication on SO(n).
As a Lie group, Spin(n) therefore shares its dimension, n(n − 1)/2, and its Lie algebra with the special orthogonal group.
For n > 2, Spin(n) is simply connected and so coincides with the universal cover of SO(n).
The non-trivial element of the kernel is denoted −1, which should not be confused with the orthogonal transform of reflection through the origin, generally denoted −I.
Spin(n) can be constructed as a subgroup of the invertible elements in the Clifford algebra Cl(n). A distinct article discusses the spin representations.
Motivation and physical interpretation
The spin group is used in physics to describe the symmetries of (electrically neutral, uncharged) fermions. Its complexification, Spinc, is used to describe electrically charged fermions, most notably the electron. Strictly speaking, the spin group describes a fermion in a zero-dimensional space; however, space is not zero-dimensional, and so the spin group is used to define spin structures on (pseudo-)Riemannian manifolds: the spin group is the structure group of a spinor bundle. The affine connection on a spinor bundle is the spin connection; the spin connection can simplify calculations in general relativity. The spin connection in turn enables the Dirac equation to be written in curved spacetime (effectively in the tetrad coordinates), which in turn provides a footing for quantum gravity, as well as a formalization of Hawking radiation (where one of a pair of entangled, virtual fermions falls past the event horizon, and the other does not).
Construction
Construction of the Spin group often starts with the construction of a Clifford algebra over a real vector space V with a definite quadratic form q. The Clifford algebra is the quotient of the tensor algebra TV of V by a two-sided ideal. The tensor algebra (over the reals) may be written as
TV = R ⊕ V ⊕ (V ⊗ V) ⊕ (V ⊗ V ⊗ V) ⊕ ⋯
The Clifford algebra Cl(V) is then the quotient algebra
Cl(V) = TV / (v ⊗ v − q(v)),
where q(v) is the quadratic form applied to a vector v ∈ V. The resulting space is finite dimensional, naturally graded (as a vector space), and can be written as
Cl(V) = Cl0 ⊕ Cl1 ⊕ Cl2 ⊕ ⋯ ⊕ Cln,
where n is the dimension of V, Cl0 = R and Cl1 = V. The spin algebra is defined as
spin(V) = Cl2 = spin(n),
where the last is a short-hand for V being a real vector space of real dimension n. It is a Lie algebra; it has a natural action on V, and in this way can be shown to be isomorphic to the Lie algebra so(n) of the special orthogonal group.
The pin group Pin(V) is a subgroup of Cl(V)'s Clifford group of all elements of the form
v1 v2 ⋯ vk,
where each vi ∈ V is of unit length:
q(vi) = 1.
The spin group is then defined as
Spin(V) = Pin(V) ∩ Cleven,
where
Cleven = Cl0 ⊕ Cl2 ⊕ Cl4 ⊕ ⋯
is the subspace generated by elements that are the product of an even number of vectors. That is, Spin(V) consists of all elements of Pin(V), given above, with the restriction to k being an even number. The restriction to the even subspace is key to the formation of two-component (Weyl) spinors, constructed below.
If the set {ei} is an orthonormal basis of the (real) vector space V, then the quotient above endows the space with a natural anti-commuting structure:
ei ej = −ej ei
for i ≠ j,
which follows by considering v ⊗ v for v = ei + ej. This anti-commutation turns out to be of importance in physics, as it captures the spirit of the Pauli exclusion principle for fermions. A precise formulation is out of scope here, but it involves the creation of a spinor bundle on Minkowski spacetime; the resulting spinor fields can be seen to be anti-commuting as a by-product of the Clifford algebra construction. This anti-commutation property is also key to the formulation of supersymmetry. The Clifford algebra and the spin group have many interesting and curious properties, some of which are listed below.
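A small numerical check of these relations, using the Pauli matrices as a familiar complex matrix representation of the generators of Cl(3) (this particular representation is an illustrative assumption, not part of the abstract construction above):

    import numpy as np

    e = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_z

    for i in range(3):
        for j in range(3):
            anti = e[i] @ e[j] + e[j] @ e[i]
            # e_i e_j + e_j e_i = 2 delta_ij: distinct generators anti-commute
            assert np.allclose(anti, 2 * (i == j) * np.eye(2))
    print("Clifford relations hold")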
Geometric construction
The spin groups can be constructed less explicitly but without appealing to Clifford algebras. As a manifold, Spin(n) is the double cover of SO(n). Its multiplication law can be defined by lifting as follows. Call the covering map p : Spin(n) → SO(n). Then p−1(e) is a set with two elements, and one can be chosen without loss of generality to be the identity. Call this 1. Then to define multiplication in Spin(n), for elements g, h choose paths γ, δ : [0, 1] → Spin(n) satisfying γ(0) = δ(0) = 1, γ(1) = g and δ(1) = h. These define a path φ in SO(n), defined by φ(t) = p(γ(t)) p(δ(t)), satisfying φ(0) = e. Since Spin(n) is a double cover, there is a unique lift ψ of φ with ψ(0) = 1. Then define the product as g h = ψ(1).
It can then be shown that this definition is independent of the paths γ, δ, that the multiplication is continuous, and the group axioms are satisfied with inversion being continuous, making Spin(n) a Lie group.
Double covering
For a quadratic space V, a double covering of SO(V) by Spin(V) can be given explicitly, as follows. Let {ei} be an orthonormal basis for V. Define an antiautomorphism t : Cl(V) → Cl(V) by
(v1 v2 ⋯ vk)t = vk ⋯ v2 v1.
This can be extended to all elements of Cl(V) by linearity. It is an antihomomorphism since
(x y)t = yt xt.
Observe that Pin(V) can then be defined as all elements a of Cl(V) for which
a at = 1.
Now define the automorphism α which on degree 1 elements is given by
α(v) = −v,
and let a* denote α(a)t, which is an antiautomorphism of Cl(V). With this notation, an explicit double covering is the homomorphism ρ : Pin(V) → O(V) given by
ρ(a) v = a v a*,
where v ∈ V. When a has degree 1 (i.e. a ∈ V), ρ(a) is the reflection across the hyperplane orthogonal to a; this follows from the anti-commuting property of the Clifford algebra.
This gives a double covering of both O(V) by Pin(V) and of SO(V) by Spin(V) because a gives the same transformation as −a.
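The case V = R3 can be checked numerically: unit quaternions realize Spin(3), and the sketch below (rotation_from_quat is a hypothetical helper using the standard quaternion-to-matrix formula) shows that a and −a give the same element of SO(3):

    import numpy as np

    def rotation_from_quat(q):
        # SO(3) matrix of a unit quaternion q = (w, x, y, z)
        w, x, y, z = q
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

    q = np.array([np.cos(0.7), 0.0, np.sin(0.7), 0.0])     # a unit quaternion
    print(np.allclose(rotation_from_quat(q), rotation_from_quat(-q)))   # True: two-to-one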
Spinor space
It is worth reviewing how spinor space and Weyl spinors are constructed, given this formalism. Given a real vector space V of dimension n = 2m an even number, its complexification is V ⊗ C. It can be written as the direct sum of a subspace W of spinors and a subspace W′ of anti-spinors:
V ⊗ C = W ⊕ W′.
The space W is spanned by the spinors ηk = (e2k−1 − i e2k) / √2 for 1 ≤ k ≤ m, and the complex conjugate spinors span W′. It is straightforward to see that the spinors anti-commute, and that the product of a spinor and anti-spinor is a scalar.
The spinor space is defined as the exterior algebra ΛW. The (complexified) Clifford algebra acts naturally on this space; the (complexified) spin group corresponds to the length-preserving endomorphisms. There is a natural grading on the exterior algebra: the product of an odd number of copies of W corresponds to the physics notion of fermions; the even subspace corresponds to the bosons. The representations of the action of the spin group on the spinor space can be built in a relatively straightforward fashion.
Complex case
The SpinC group is defined by the exact sequence
1 → Z2 → SpinC(n) → SO(n) × U(1) → 1.
It is a multiplicative subgroup of the complexification of the Clifford algebra, and specifically, it is the subgroup generated by Spin(V) and the unit circle in C. Alternately, it is the quotient
SpinC(V) = (Spin(V) × S1) / ~,
where the equivalence ~ identifies (a, u) with (−a, −u).
This has important applications in 4-manifold theory and Seiberg–Witten theory. In physics, the Spin group is appropriate for describing uncharged fermions, while the SpinC group is used to describe electrically charged fermions. In this case, the U(1) symmetry is specifically the gauge group (structure group) of electromagnetism.
Exceptional isomorphisms
In low dimensions, there are isomorphisms among the classical Lie groups called exceptional isomorphisms. For instance, there are isomorphisms between low-dimensional spin groups and certain classical Lie groups, owing to low-dimensional isomorphisms between the root systems (and corresponding isomorphisms of Dynkin diagrams) of the different families of simple Lie algebras. Writing R for the reals, C for the complex numbers, H for the quaternions and the general understanding that Cl(n) is a short-hand for Cl(Rn) and that Spin(n) is a short-hand for Spin(Rn) and so on, one then has that
Cleven(1) = R the real numbers
Pin(1) = {+i, −i, +1, −1}
Spin(1) = O(1) = {+1, −1} the orthogonal group of dimension zero.
--
Cleven(2) = C the complex numbers
Spin(2) = U(1) = SO(2), which acts on z in R2 by double phase rotation z ↦ u2z. Corresponds to the abelian so(2). dim = 1
--
Cleven(3) = H the quaternions
Spin(3) = Sp(1) = SU(2), corresponding to so(3) ≅ sp(1) ≅ su(2). dim = 3
--
Cleven(4) = H ⊕ H
Spin(4) = SU(2) × SU(2), corresponding to so(4) ≅ su(2) ⊕ su(2). dim = 6
--
Cleven(5)= M(2, H) the two-by-two matrices with quaternionic coefficients
Spin(5) = Sp(2), corresponding to so(5) ≅ sp(2). dim = 10
--
Cleven(6)= M(4, C) the four-by-four matrices with complex coefficients
Spin(6) = SU(4), corresponding to so(6) ≅ su(4). dim = 15
There are certain vestiges of these isomorphisms left over for n = 7, 8 (see Spin(8) for more details). For higher n, these isomorphisms disappear entirely.
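The isomorphism Spin(4) = SU(2) × SU(2) can also be illustrated numerically: a pair of unit quaternions (p, q) acts on R4, read as the quaternions H, by x ↦ p x q̄, and this action is an isometry (a sketch under the (w, x, y, z) convention; quat_mul and conj are hypothetical helpers):

    import numpy as np

    def quat_mul(p, q):
        w1, x1, y1, z1 = p; w2, x2, y2, z2 = q
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def conj(q):
        return q * np.array([1.0, -1.0, -1.0, -1.0])    # quaternion conjugate

    rng = np.random.default_rng(1)
    p = rng.normal(size=4); p /= np.linalg.norm(p)
    q = rng.normal(size=4); q /= np.linalg.norm(q)
    x = rng.normal(size=4)                              # a point of R4 read as a quaternion
    y = quat_mul(quat_mul(p, x), conj(q))               # the action of (p, q)
    print(np.isclose(np.linalg.norm(y), np.linalg.norm(x)))   # True: a rotation of R4

The pairs (p, q) and (−p, −q) plainly give the same map, which reflects the kernel {(1, 1), (−1, −1)} of the covering Spin(4) → SO(4).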
Indefinite signature
In indefinite signature, the spin group Spin(p, q) is constructed through Clifford algebras in a similar way to standard spin groups. It is a double cover of SO0(p, q), the connected component of the identity of the indefinite orthogonal group SO(p, q). For p + q > 2, Spin(p, q) is connected; for (p, q) = (1, 1) there are two connected components. As in definite signature, there are some accidental isomorphisms in low dimensions:
Spin(1, 1) = GL(1, R)
Spin(2, 1) = SL(2, R)
Spin(3, 1) = SL(2, C)
Spin(2, 2) = SL(2, R) × SL(2, R)
Spin(4, 1) = Sp(1, 1)
Spin(3, 2) = Sp(4, R)
Spin(5, 1) = SL(2, H)
Spin(4, 2) = SU(2, 2)
Spin(3, 3) = SL(4, R)
Spin(6, 2) = SU(2, 2, H)
Note that Spin(p, q) = Spin(q, p).
Topological considerations
Connected and simply connected Lie groups are classified by their Lie algebra. So if G is a connected Lie group with a simple Lie algebra, with G′ the universal cover of G, there is an inclusion
π1(G) ⊂ Z(G′),
with Z(G′) the center of G′. This inclusion and the Lie algebra of G determine G entirely (note that it is not the case that the Lie algebra and π1(G) determine G entirely; for instance SL(2, R) and PSL(2, R) have the same Lie algebra and same fundamental group Z, but are not isomorphic).
The definite signature Spin(n) are all simply connected for n > 2, so they are the universal coverings of SO(n).
In indefinite signature, Spin(p, q) is not necessarily connected, and in general the identity component, Spin0(p, q), is not simply connected, thus it is not a universal cover. The fundamental group is most easily understood by considering the maximal compact subgroup of SO(p, q), which is SO(p) × SO(q), and noting that rather than being the product of the 2-fold covers (hence a 4-fold cover), Spin(p, q) is the "diagonal" 2-fold cover – it is a 2-fold quotient of the 4-fold cover. Explicitly, the maximal compact connected subgroup of Spin(p, q) is
Spin(p) × Spin(q)/{(1, 1), (−1, −1)}.
This allows us to calculate the fundamental groups of SO(p, q), taking p ≥ q:
Thus once p, q ≥ 3, the fundamental group of Spin(p, q) is Z2, as it is a 2-fold quotient of a product of two universal covers.
The maps on fundamental groups are given as follows. For , this implies that the map is given by going to . For , this map is given by . And finally, for , is sent to and is sent to .
Fundamental groups of SO(n)
The fundamental groups can be more directly derived using results in homotopy theory. In particular we can find π1(SO(n)) for n > 2, as the three smallest groups have familiar underlying manifolds: SO(1) is the point manifold, SO(2) is the circle S1, and SO(3) is P3(R) (shown using the axis-angle representation).
The proof uses known results in algebraic topology.
The same argument can be used to show π1(SO0(3, 1)) = Z2, by considering a fibration
SO(3) → SO0(3, 1) → H,
where H is the upper sheet of a two-sheeted hyperboloid, which is contractible, and SO0(3, 1) is the identity component of the proper Lorentz group (the proper orthochronous Lorentz group).
Center
The centers of the spin groups, for n ≥ 3 (complex and real), are given as follows:
Quotient groups
Quotient groups can be obtained from a spin group by quotienting out by a subgroup of the center, with the spin group then being a covering group of the resulting quotient, and both groups having the same Lie algebra.
Quotienting out by the entire center yields the minimal such group, the projective special orthogonal group, which is centerless, while quotienting out by {±1} yields the special orthogonal group – if the center equals {±1} (namely in odd dimension), these two quotient groups agree. If the spin group is simply connected (as Spin(n) is for n > 2), then Spin is the maximal group in the sequence, and one has a sequence of three groups,
Spin(n) → SO(n) → PSO(n),
splitting by parity yields:
Spin(2n) → SO(2n) → PSO(2n),
Spin(2n+1) → SO(2n+1) = PSO(2n+1),
which are the three compact real forms (or two, if SO = PSO) of the compact Lie algebra so(n, R).
The homotopy groups of the cover and the quotient are related by the long exact sequence of a fibration, with discrete fiber (the fiber being the kernel) – thus all homotopy groups for k > 1 are equal, but π0 and π1 may differ.
For n > 2, Spin(n) is simply connected (π1 is trivial), so SO(n) is connected and has fundamental group Z2 while PSO(n) is connected and has fundamental group equal to the center of Spin(n).
In indefinite signature the covers and homotopy groups are more complicated – Spin(p, q) is not simply connected, and quotienting also affects connected components. The analysis is simpler if one considers the maximal (connected) compact subgroup SO(p) × SO(q) ⊂ SO(p, q) and the component group of Spin(p, q).
Whitehead tower
The spin group appears in a Whitehead tower anchored by the orthogonal group:
⋯ → Fivebrane(n) → String(n) → Spin(n) → SO(n) → O(n).
The tower is obtained by successively removing (killing) homotopy groups of increasing order. This is done by constructing short exact sequences starting with an Eilenberg–MacLane space for the homotopy group to be removed. Killing the π3 homotopy group in Spin(n), one obtains the infinite-dimensional string group String(n).
Discrete subgroups
Discrete subgroups of the spin group can be understood by relating them to discrete subgroups of the special orthogonal group (rotational point groups).
Given the double cover Spin(n) → SO(n), by the lattice theorem, there is a Galois connection between subgroups of Spin(n) and subgroups of SO(n) (rotational point groups): the image of a subgroup of Spin(n) is a rotational point group, and the preimage of a point group is a subgroup of Spin(n), and the closure operator on subgroups of Spin(n) is multiplication by {±1}. These may be called "binary point groups"; most familiar is the 3-dimensional case, known as binary polyhedral groups.
Concretely, every binary point group is either the preimage of a point group (hence denoted 2G, for the point group G), or is an index 2 subgroup of the preimage of a point group which maps (isomorphically) onto the point group; in the latter case the full binary group is abstractly G × {±1} (since {±1} is central). As an example of these latter, given a cyclic group of odd order Z2k+1 in SO(n), its preimage is a cyclic group of twice the order, Z4k+2 ≅ Z2k+1 × {±1}, and the subgroup Z2k+1 < Spin(n) maps isomorphically to Z2k+1 < SO(n).
Of particular note are two series:
higher binary tetrahedral groups, corresponding to the 2-fold cover of symmetries of the n-simplex; this group can also be considered as the double cover of the symmetric group, 2·Sn → Sn, with the alternating group being the (rotational) symmetry group of the n-simplex.
higher binary octahedral groups, corresponding to the 2-fold covers of the hyperoctahedral group (symmetries of the hypercube, or equivalently of its dual, the cross-polytope).
For point groups that reverse orientation, the situation is more complicated, as there are two pin groups, so there are two possible binary groups corresponding to a given point group.
See also
Clifford algebra
Clifford analysis
Spinor
Spinor bundle
Spin structure
Table of Lie groups
Anyon
Orientation entanglement
Related groups
Pin group Pin(n) – two-fold cover of orthogonal group, O(n)
Metaplectic group Mp(2n) – two-fold cover of symplectic group, Sp(2n)
String group String(n) – the next group in the Whitehead tower
References
External links
The essential dimension of spin groups is OEIS:A280191.
Grothendieck's "torsion index" is OEIS:A096336.
Further reading
Lie groups
Topology of Lie groups
Spinors | Spin group | [
"Mathematics"
] | 3,762 | [
"Lie groups",
"Mathematical structures",
"Algebraic structures"
] |
411,244 | https://en.wikipedia.org/wiki/Thermal%20lance | A thermal lance, thermic lance, oxygen lance, or burning bar is a tool that heats and melts steel in the presence of pressurized oxygen to create very high temperatures for cutting. It consists of a long steel tube packed with alloy steel rods, which serve as fuel; these are sometimes mixed with aluminum rods to increase the heat output.
Operation
One end of the tube is placed in a holder and oxygen is fed through the tube. The far end of the tube is pre-heated and lit by an oxyacetylene torch. An intense stream of burning steel is produced at the working end and can be used to cut rapidly through thick materials, including steel and concrete. The tube is consumed by the process within a few minutes.
Applications
Often used as a heavy duty demolition tool, the thermic lance is also used to remove seized axles of heavy machinery without damaging the bearings or axle housing. This technique is often used on the pins and axles of large equipment such as cranes, ships, bridges, and sluice-gates. In addition, thermal lancing is used to clean the bottom of steel furnace pots, which accumulate a skull layer of slag and iron during operation.
Principle of operation
Steel, in the form of steel wool, can burn at atmospheric (20%) concentrations of oxygen because it has a high surface area-to-mass ratio and relatively low mass, which prevents the heat from being dissipated in the bulk of the material. When the oxygen concentration is increased, steel wool will burn faster. Burning steel wool is simply the rapid oxidation of iron into iron oxide; the thermal lance uses steel in the form of rods rather than wool, and the rods will burn with a sufficiently high supply of concentrated oxygen.
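Assuming, for illustration, that the product is magnetite (3 Fe + 2 O2 → Fe3O4; the exact oxide mix in practice varies), a back-of-the-envelope Python estimate of the oxygen demand per kilogram of iron:

    M_FE, M_O2 = 55.845, 31.998                 # molar masses in g/mol
    mol_fe = 1000.0 / M_FE                      # moles of Fe in 1 kg
    kg_o2 = (2.0 / 3.0) * mol_fe * M_O2 / 1000  # 2 mol O2 per 3 mol Fe
    print(f"about {kg_o2:.2f} kg of O2 per kg of iron")   # about 0.38 kg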
The temperature at which a thermal lance operates varies depending on the environment. Some estimates put the maximum temperature at , while others calculate it to be .
Alternative fuels
Thermal lances have been constructed for demonstration using foodstuffs (including bacon and dried spaghetti) as the fuel instead of steel rods; a supply of pure oxygen is more important to drive rapid oxidation than the fuel being burned.
History
Leo Malcher filed for a patent in 1922 entitled "Process of attacking compact mineral material, noncombustible in oxygen". The patent uses "a suitable disintegrating flux to act upon the material at the point where it is desired to attack it ... the fuel employed in the example to be described is metallic iron, and is arranged in the form of two concentric pipes". The annulus between the two pipes was filled with a flux (sodium carbonate, borax and sodium chloride in equal proportions) and oxygen was supplied through the inner tube.
See also
References
External links
(oxygen-supplied thermic lance, invented by Ernst Brandenberger)
(device for fixing holes by method of smelting, especially into buildings, invented by Berczes et al.)
Cutting tools
Welding | Thermal lance | [
"Engineering"
] | 592 | [
"Welding",
"Mechanical engineering"
] |
411,325 | https://en.wikipedia.org/wiki/Real%20projective%20plane | In mathematics, the real projective plane, denoted or , is a two-dimensional projective space, similar to the familiar Euclidean plane in many respects but without the concepts of distance, circles, angle measure, or parallelism. It is the setting for planar projective geometry, in which the relationships between objects are not considered to change under projective transformations. The name projective comes from perspective drawing: projecting an image from one plane onto another as viewed from a point outside either plane, for example by photographing a flat painting from an oblique angle, is a projective transformation.
The fundamental objects in the projective plane are points and straight lines, and as in Euclidean geometry, every pair of points determines a unique line passing through both, but unlike in the Euclidean case in projective geometry every pair of lines also determines a unique point at their intersection (in Euclidean geometry, parallel lines never intersect). In contexts where there is no ambiguity, it is simply called the projective plane; the qualifier "real" is added to distinguish it from other projective planes such as the complex projective plane and finite projective planes.
One common model of the real projective plane is the space of lines in three-dimensional Euclidean space which pass through a particular origin point; in this model, lines through the origin are considered to be the "points" of the projective plane, and planes through the origin are considered to be the "lines" in the projective plane. These projective points and lines can be pictured in two dimensions by intersecting them with any arbitrary plane not passing through the origin; then the parallel plane which does pass through the origin (a projective "line") is called the line at infinity. (See below.)
In topology, the name real projective plane is applied to any surface which is topologically equivalent to the real projective plane. Topologically, the real projective plane is compact and non-orientable (one-sided). It cannot be embedded in three-dimensional Euclidean space without intersecting itself. It has Euler characteristic 1, hence a demigenus (non-orientable genus, Euler genus) of 1.
The topological real projective plane can be constructed by taking the (single) edge of a Möbius strip and gluing it to itself in the correct direction, or by gluing the edge to a disk. Alternately, the real projective plane can be constructed by identifying each pair of opposite sides of the square, but in opposite directions, as shown in the diagram. (Performing any of these operations in three-dimensional space causes the surface to intersect itself.)
Examples
Projective geometry is not necessarily concerned with curvature and the real projective plane may be twisted up and placed in the Euclidean plane or 3-space in many different ways. Some of the more important examples are described below.
The projective plane cannot be embedded (that is without intersection) in three-dimensional Euclidean space. The proof that the projective plane does not embed in three-dimensional Euclidean space goes like this: Assuming that it does embed, it would bound a compact region in three-dimensional Euclidean space by the generalized Jordan curve theorem. The outward-pointing unit normal vector field would then give an orientation of the boundary manifold, but the boundary manifold would be the projective plane, which is not orientable. This is a contradiction, and so our assumption that it does embed must have been false.
The projective sphere
Consider a sphere, and let the great circles of the sphere be "lines", and let pairs of antipodal points be "points". It is easy to check that this system obeys the axioms required of a projective plane:
any pair of distinct great circles meet at a pair of antipodal points; and
any two distinct pairs of antipodal points lie on a single great circle.
If we identify each point on the sphere with its antipodal point, then we get a representation of the real projective plane in which the "points" of the projective plane really are points. This means that the projective plane is the quotient space of the sphere obtained by partitioning the sphere into equivalence classes under the equivalence relation ~, where x ~ y if y = x or y = −x. This quotient space of the sphere is homeomorphic with the collection of all lines passing through the origin in R3.
The quotient map from the sphere onto the real projective plane is in fact a two sheeted (i.e. two-to-one) covering map. It follows that the fundamental group of the real projective plane is the cyclic group of order 2; i.e., integers modulo 2. One can take the loop AB from the figure above to be the generator.
Projective hemisphere
Because the sphere covers the real projective plane twice, the plane may be represented as a closed hemisphere around whose rim opposite points are identified.
Boy's surface – an immersion
The projective plane can be immersed (local neighbourhoods of the source space do not have self-intersections) in 3-space. Boy's surface is an example of an immersion.
Polyhedral examples must have at least nine faces.
Roman surface
Steiner's Roman surface is a more degenerate map of the projective plane into 3-space, containing a cross-cap.
A polyhedral representation is the tetrahemihexahedron, which has the same general form as Steiner's Roman surface, shown here.
Hemi polyhedra
Looking in the opposite direction, certain abstract regular polytopes – hemi-cube, hemi-dodecahedron, and hemi-icosahedron – can be constructed as regular figures in the projective plane; see also projective polyhedra.
Planar projections
Various planar (flat) projections or mappings of the projective plane have been described. In 1874 Klein described the mapping:
Central projection of the projective hemisphere onto a plane yields the usual infinite projective plane, described below.
Cross-capped disk
A closed surface is obtained by gluing a disk to a cross-cap. This surface can be represented parametrically by the following equations:
where both u and v range from 0 to 2π.
These equations are similar to those of a torus. Figure 1 shows a closed cross-capped disk.
A cross-capped disk has a plane of symmetry that passes through its line segment of double points. In Figure 1 the cross-capped disk is seen from above its plane of symmetry z = 0, but it would look the same if seen from below.
A cross-capped disk can be sliced open along its plane of symmetry, while making sure not to cut along any of its double points. The result is shown in Figure 2.
Once this exception is made, it will be seen that the sliced cross-capped disk is homeomorphic to a self-intersecting disk, as shown in Figure 3.
The self-intersecting disk is homeomorphic to an ordinary disk. The parametric equations of the self-intersecting disk are:
where u ranges from 0 to 2π and v ranges from 0 to 1.
Projecting the self-intersecting disk onto the plane of symmetry (z = 0 in the parametrization given earlier) which passes only through the double points, the result is an ordinary disk which repeats itself (doubles up on itself).
The plane z = 0 cuts the self-intersecting disk into a pair of disks which are mirror reflections of each other. The disks have centers at the origin.
Now consider the rims of the disks (with v = 1). The points on the rim of the self-intersecting disk come in pairs which are reflections of each other with respect to the plane z = 0.
A cross-capped disk is formed by identifying these pairs of points, making them equivalent to each other. This means that a point with parameters (u, 1) is identified with the point (u + π, 1). But this means that pairs of opposite points on the rim of the (equivalent) ordinary disk are identified with each other; this is how a real projective plane is formed out of a disk. Therefore, the surface shown in Figure 1 (cross-cap with disk) is topologically equivalent to the real projective plane RP2.
Homogeneous coordinates
The points in the plane can be represented by homogeneous coordinates. A point has homogeneous coordinates [x : y : z], where the coordinates [x : y : z] and [tx : ty : tz] are considered to represent the same point, for all nonzero values of t. The points with coordinates [x : y : 1] are the usual real plane, called the finite part of the projective plane, and points with coordinates [x : y : 0], called points at infinity or ideal points, constitute a line called the line at infinity. (The homogeneous coordinates [0 : 0 : 0] do not represent any point.)
The lines in the plane can also be represented by homogeneous coordinates. A projective line corresponding to the plane in R3 has the homogeneous coordinates (a : b : c). Thus, these coordinates have the equivalence relation (a : b : c) = (da : db : dc) for all nonzero values of d. Hence a different equation of the same line dax + dby + dcz = 0 gives the same homogeneous coordinates.
A point [x : y : z] lies on a line (a : b : c) if ax + by + cz = 0.
Therefore, lines with coordinates (a : b : c) where a, b are not both 0 correspond to the lines in the usual real plane, because they contain points that are not at infinity. The line with coordinates (0 : 0 : 1) is the line at infinity, since the only points on it are those with z = 0.
Points, lines, and planes
A line in P2 can be represented by the equation ax + by + cz = 0. If we treat a, b, and c as the column vector ℓ and x, y, z as the column vector x then the equation above can be written in matrix form as:
xTℓ = 0 or ℓTx = 0.
Using vector notation we may instead write x ⋅ ℓ = 0 or ℓ ⋅ x = 0.
The equation k(xTℓ) = 0 (where k is a non-zero scalar) sweeps out a plane that goes through zero in R3 and k(x) sweeps out a line, again going through zero. The plane and line are linear subspaces in R3, which always go through zero.
Ideal points
In P2 the equation of a line is ax + by + cz = 0, and this equation can represent a line on any plane parallel to the x, y plane by multiplying the equation by k.
If z = 1 we have a normalized homogeneous coordinate. All points that have z = 1 create a plane. Let's pretend we are looking at that plane (from a position further out along the z axis and looking back towards the origin) and there are two parallel lines drawn on the plane. From where we are standing (given our visual capabilities) we can see only so much of the plane, which we represent as the area outlined in red in the diagram. If we walk away from the plane along the z axis, (still looking backwards towards the origin), we can see more of the plane. In our field of view original points have moved. We can reflect this movement by dividing the homogeneous coordinate by a constant. In the adjacent image we have divided by 2 so the z value now becomes 0.5. If we walk far enough away what we are looking at becomes a point in the distance. As we walk away we see more and more of the parallel lines. The lines will meet at a line at infinity (a line that goes through zero on the plane at z = 0). Lines on the plane when z = 0 are ideal points. The plane at z = 0 is the line at infinity.
Points with z = 0 are where all the real points go when you're looking at the plane from an infinite distance; a line on the plane z = 0 is where parallel lines intersect.
Duality
In the equation there are two column vectors. You can keep either constant and vary the other. If we keep the point x constant and vary the coefficients ℓ we create new lines that go through the point. If we keep the coefficients constant and vary the points that satisfy the equation we create a line. We look upon x as a point, because the axes we are using are x, y, and z. If we instead plotted the coefficients using axis marked a, b, c points would become lines and lines would become points. If you prove something with the data plotted on axis marked x, y, and z the same argument can be used for the data plotted on axis marked a, b, and c. That is duality.
Lines joining points and intersection of lines (using duality)
The equation xTℓ = 0 calculates the inner product of two column vectors. The inner product of two vectors is zero if the vectors are orthogonal. In P2, the line between the points x1 and x2 may be represented as a column vector ℓ that satisfies the equations x1Tℓ = 0 and x2Tℓ = 0, or in other words a column vector ℓ that is orthogonal to x1 and x2. The cross product will find such a vector: the line joining two points has homogeneous coordinates given by the equation ℓ = x1 × x2. The intersection of two lines may be found in the same way, using duality, as the cross product of the vectors representing the lines, x = ℓ1 × ℓ2.
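A short numerical sketch of these join and meet operations (using NumPy's cross product; the particular points and lines are arbitrary examples):

    import numpy as np

    p1 = np.array([1.0, 2.0, 1.0])     # the point [1 : 2 : 1]
    p2 = np.array([3.0, 1.0, 1.0])     # the point [3 : 1 : 1]
    line = np.cross(p1, p2)            # the line joining them
    print(np.dot(line, p1), np.dot(line, p2))   # 0.0 0.0: both points lie on the line

    l1 = np.array([1.0, 0.0, -1.0])    # the line x = 1
    l2 = np.array([1.0, 0.0, -2.0])    # the parallel line x = 2
    print(np.cross(l1, l2))            # [0. 1. 0.]: the ideal point where they meet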
Embedding into 4-dimensional space
The projective plane embeds into 4-dimensional Euclidean space. The real projective plane P2(R) is the quotient of the two-sphere
S2 = {(x, y, z) ∈ R3 : x2 + y2 + z2 = 1}
by the antipodal relation x ~ −x. Consider the function R3 → R4 given by (x, y, z) ↦ (xy, xz, y2 − z2, 2yz). This map restricts to a map whose domain is S2 and, since each component is a homogeneous polynomial of even degree, it takes the same values in R4 on each of any two antipodal points on S2. This yields a map P2(R) → R4. Moreover, this map is an embedding. Notice that this embedding admits a projection into R3 which is the Roman surface.
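The even-degree property is easy to verify numerically (a sketch; embed is a hypothetical name for the map above):

    import numpy as np

    def embed(p):
        # (x, y, z) -> (xy, xz, y^2 - z^2, 2yz), each component of even degree
        x, y, z = p
        return np.array([x*y, x*z, y*y - z*z, 2*y*z])

    p = np.random.default_rng(2).normal(size=3)
    p /= np.linalg.norm(p)                       # a point of S2
    print(np.allclose(embed(p), embed(-p)))      # True: antipodal points share an image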
Higher non-orientable surfaces
By gluing together projective planes successively we get non-orientable surfaces of higher demigenus. The gluing process consists of cutting out a little disk from each surface and identifying (gluing) their boundary circles. Gluing two projective planes creates the Klein bottle.
The article on the fundamental polygon describes the higher non-orientable surfaces.
See also
Real projective space
Projective space
Pu's inequality for real projective plane
Smooth projective plane
Citations
References
External links
Line field coloring using Werner Boy's real projective plane immersion
The real projective plane on YouTube
Surfaces
Geometric topology | Real projective plane | [
"Mathematics"
] | 3,010 | [
"Topology",
"Geometric topology"
] |
411,492 | https://en.wikipedia.org/wiki/Euler%20angles | The Euler angles are three angles introduced by Leonhard Euler to describe the orientation of a rigid body with respect to a fixed coordinate system.
They can also represent the orientation of a mobile frame of reference in physics or the orientation of a general basis in three dimensional linear algebra.
Classic Euler angles usually take the inclination angle in such a way that zero degrees represent the vertical orientation. Alternative forms were later introduced by Peter Guthrie Tait and George H. Bryan intended for use in aeronautics and engineering in which zero degrees represent the horizontal position.
Chained rotations equivalence
Euler angles can be defined by elemental geometry or by composition of rotations (i.e. chained rotations). The geometrical definition demonstrates that three composed elemental rotations (rotations about the axes of a coordinate system) are always sufficient to reach any target frame.
The three elemental rotations may be extrinsic (rotations about the axes xyz of the original coordinate system, which is assumed to remain motionless), or intrinsic (rotations about the axes of the rotating coordinate system XYZ, solidary with the moving body, which changes its orientation with respect to the extrinsic frame after each elemental rotation).
In the sections below, an axis designation with a prime mark superscript (e.g., z″) denotes the new axis after an elemental rotation.
Euler angles are typically denoted as α, β, γ, or ψ, θ, φ. Different authors may use different sets of rotation axes to define Euler angles, or different names for the same angles. Therefore, any discussion employing Euler angles should always be preceded by their definition.
Without considering the possibility of using two different conventions for the definition of the rotation axes (intrinsic or extrinsic), there exist twelve possible sequences of rotation axes, divided into two groups:
Proper Euler angles (z-x-z, x-y-x, y-z-y, z-y-z, x-z-x, y-x-y)
Tait–Bryan angles (x-y-z, y-z-x, z-x-y, x-z-y, z-y-x, y-x-z).
Tait–Bryan angles are also called Cardan angles; nautical angles; heading, elevation, and bank; or yaw, pitch, and roll. Sometimes, both kinds of sequences are called "Euler angles". In that case, the sequences of the first group are called proper or classic Euler angles.
Classic Euler angles
The Euler angles are three angles introduced by Swiss mathematician Leonhard Euler (1707–1783) to describe the orientation of a rigid body with respect to a fixed coordinate system.
Geometrical definition
The axes of the original frame are denoted as x, y, z and the axes of the rotated frame as X, Y, Z. The geometrical definition (sometimes referred to as static) begins by defining the line of nodes (N) as the intersection of the planes xy and XY (it can also be defined as the common perpendicular to the axes z and Z and then written as the vector product N = z × Z). Using it, the three Euler angles can be defined as follows:
α (or φ) is the signed angle between the x axis and the N axis (x-convention – it could also be defined between y and N, called y-convention).
β (or θ) is the angle between the z axis and the Z axis.
γ (or ψ) is the signed angle between the N axis and the X axis (x-convention).
Euler angles between two reference frames are defined only if both frames have the same handedness.
Conventions by intrinsic rotations
Intrinsic rotations are elemental rotations that occur about the axes of a coordinate system XYZ attached to a moving body. Therefore, they change their orientation after each elemental rotation. The XYZ system rotates, while xyz is fixed. Starting with XYZ overlapping xyz, a composition of three intrinsic rotations can be used to reach any target orientation for XYZ.
Euler angles can be defined by intrinsic rotations. The rotated frame XYZ may be imagined to be initially aligned with xyz, before undergoing the three elemental rotations represented by Euler angles. Its successive orientations may be denoted as follows:
x-y-z or x0-y0-z0 (initial)
x′-y′-z′ or x1-y1-z1 (after first rotation)
x″-y″-z″ or x2-y2-z2 (after second rotation)
X-Y-Z or x3-y3-z3 (final)
For the above-listed sequence of rotations, the line of nodes N can be simply defined as the orientation of X after the first elemental rotation. Hence, N can be simply denoted x′. Moreover, since the third elemental rotation occurs about Z, it does not change the orientation of Z. Hence Z coincides with z″. This allows us to simplify the definition of the Euler angles as follows:
α (or φ) represents a rotation around the z axis,
β (or θ) represents a rotation around the x′ axis,
γ (or ψ) represents a rotation around the z″ axis.
Conventions by extrinsic rotations
Extrinsic rotations are elemental rotations that occur about the axes of the fixed coordinate system xyz. The XYZ system rotates, while xyz is fixed. Starting with XYZ overlapping xyz, a composition of three extrinsic rotations can be used to reach any target orientation for XYZ. The Euler or Tait–Bryan angles (α, β, γ) are the amplitudes of these elemental rotations. For instance, the target orientation can be reached as follows (note the reversed order of Euler angle application):
The XYZ system rotates about the z axis by γ. The X axis is now at angle γ with respect to the x axis.
The XYZ system rotates again, but this time about the x axis by β. The Z axis is now at angle β with respect to the z axis.
The XYZ system rotates a third time, about the z axis again, by angle α.
In sum, the three elemental rotations occur about z, x and z. Indeed, this sequence is often denoted z-x-z (or 3-1-3). Sets of rotation axes associated with both proper Euler angles and Tait–Bryan angles are commonly named using this notation (see above for details).
If each step of the rotation acts on the rotating coordinate system XYZ, the rotation is intrinsic (Z-X'-Z''). Intrinsic rotation can also be denoted 3-1-3.
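The agreement between the intrinsic z-x′-z″ reading and the extrinsic z-x-z reading (with the order of application reversed) amounts to a single matrix product, as in this minimal NumPy sketch (Rz and Rx are hypothetical helpers for the elemental rotation matrices):

    import numpy as np

    def Rz(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def Rx(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    alpha, beta, gamma = 0.4, 1.1, -0.6
    # intrinsic z-x'-z'': rotate by alpha, then beta, then gamma about the moving axes;
    # extrinsic z-x-z:    rotate by gamma, then beta, then alpha about the fixed axes.
    # Both readings yield the same matrix:
    R = Rz(alpha) @ Rx(beta) @ Rz(gamma)
    print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))   # True True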
Signs, ranges and conventions
Angles are commonly defined according to the right-hand rule. Namely, they have positive values when they represent a rotation that appears clockwise when looking in the positive direction of the axis, and negative values when the rotation appears counter-clockwise. The opposite convention (left hand rule) is less frequently adopted.
About the ranges (using interval notation):
for α and γ, the range is defined modulo 2π radians. For instance, a valid range could be [−π, π].
for β, the range covers π radians (but can not be said to be modulo π). For example, it could be [0, π] or [−π/2, π/2].
The angles α, β and γ are uniquely determined except for the singular case that the xy and the XY planes are identical, i.e. when the z axis and the Z axis have the same or opposite directions. Indeed, if the z axis and the Z axis are the same, β = 0 and only (α + γ) is uniquely defined (not the individual values), and, similarly, if the z axis and the Z axis are opposite, β = π and only (α − γ) is uniquely defined (not the individual values). These ambiguities are known as gimbal lock in applications.
There are six possibilities of choosing the rotation axes for proper Euler angles. In all of them, the first and third rotation axes are the same. The six possible sequences are:
z1-x′-z2″ (intrinsic rotations) or z2-x-z1 (extrinsic rotations)
x1-y′-x2″ (intrinsic rotations) or x2-y-x1 (extrinsic rotations)
y1-z′-y2″ (intrinsic rotations) or y2-z-y1 (extrinsic rotations)
z1-y′-z2″ (intrinsic rotations) or z2-y-z1 (extrinsic rotations)
x1-z′-x2″ (intrinsic rotations) or x2-z-x1 (extrinsic rotations)
y1-x′-y2″ (intrinsic rotations) or y2-x-y1 (extrinsic rotations)
Precession, nutation and intrinsic rotation
Precession, nutation, and intrinsic rotation (spin) are defined as the movements obtained by changing one of the Euler angles while leaving the other two constant. These motions are not expressed in terms of the external frame, or in terms of the co-moving rotated body frame, but in a mixture. They constitute a mixed axes of rotation system, where the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes N and the third one is an intrinsic rotation around Z, an axis fixed in the body that moves.
The static definition implies that:
α (precession) represents a rotation around the z axis,
β (nutation) represents a rotation around the N or x′ axis,
γ (intrinsic rotation) represents a rotation around the Z or z″ axis.
If β is zero, there is no rotation about N. As a consequence, Z coincides with z, α and γ represent rotations about the same axis (z), and the final orientation can be obtained with a single rotation about z, by an angle equal to α + γ.
As an example, consider a top. The top spins around its own axis of symmetry; this corresponds to its intrinsic rotation. It also rotates around its pivotal axis, with its center of mass orbiting the pivotal axis; this rotation is a precession. Finally, the top can wobble up and down; the inclination angle is the nutation angle. The same example can be seen with the movements of the earth.
Though all three movements can be represented by a rotation operator with constant coefficients in some frame, they cannot be represented by these operators all at the same time. Given a reference frame, at most one of them will be coefficient-free. Only precession can be expressed in general as a matrix in the basis of the space without dependencies of the other angles.
These movements also behave as a gimbal set. Given a set of frames, able to move each with respect to the former according to just one angle, like a gimbal, there will exist an external fixed frame, one final frame and two frames in the middle, which are called "intermediate frames". The two in the middle work as two gimbal rings that allow the last frame to reach any orientation in space.
Tait–Bryan angles
Figure: Tait–Bryan angles, z-y′-x″ sequence (intrinsic rotations; N coincides with y′). The angle rotation sequence is ψ, θ, φ; note that in this case θ is a negative angle.
The second type of formalism is called Tait–Bryan angles, after Scottish mathematical physicist Peter Guthrie Tait (1831–1901) and English applied mathematician George H. Bryan (1864–1928). It is the convention normally used for aerospace applications, so that zero degrees elevation represents the horizontal attitude. Tait–Bryan angles represent the orientation of the aircraft with respect to the world frame. When dealing with other vehicles, different axes conventions are possible.
Definitions
The definitions and notations used for Tait–Bryan angles are similar to those described above for proper Euler angles (geometrical definition, intrinsic rotation definition, extrinsic rotation definition). The only difference is that Tait–Bryan angles represent rotations about three distinct axes (e.g. x-y-z, or x-y′-z″), while proper Euler angles use the same axis for both the first and third elemental rotations (e.g., z-x-z, or z-x′-z″).
This implies a different definition for the line of nodes in the geometrical construction. In the proper Euler angles case it was defined as the intersection between two homologous Cartesian planes (parallel when Euler angles are zero; e.g. xy and XY). In the Tait–Bryan angles case, it is defined as the intersection of two non-homologous planes (perpendicular when Euler angles are zero; e.g. xy and YZ).
Conventions
The three elemental rotations may occur either about the axes of the original coordinate system, which remains motionless (extrinsic rotations), or about the axes of the rotating coordinate system, which changes its orientation after each elemental rotation (intrinsic rotations).
There are six possibilities of choosing the rotation axes for Tait–Bryan angles. The six possible sequences are:
x-y′-z″ (intrinsic rotations) or z-y-x (extrinsic rotations)
y-z′-x″ (intrinsic rotations) or x-z-y (extrinsic rotations)
z-x′-y″ (intrinsic rotations) or y-x-z (extrinsic rotations)
x-z′-y″ (intrinsic rotations) or y-z-x (extrinsic rotations)
z-y′-x″ (intrinsic rotations) or x-y-z (extrinsic rotations): the intrinsic rotations are known as: yaw, pitch and roll
y-x′-z″ (intrinsic rotations) or z-x-y (extrinsic rotations)
Signs and ranges
Tait–Bryan convention is widely used in engineering with different purposes. There are several axes conventions in practice for choosing the mobile and fixed axes, and these conventions determine the signs of the angles. Therefore, signs must be studied in each case carefully.
The range for the angles ψ and φ covers 2π radians. For θ the range covers π radians.
Alternative names
These angles are normally taken as one in the external reference frame (heading, bearing), one in the intrinsic moving frame (bank) and one in a middle frame, representing an elevation or inclination with respect to the horizontal plane, which is equivalent to the line of nodes for this purpose.
As chained rotations
For an aircraft, they can be obtained with three rotations around its principal axes if done in the proper order and starting from a frame coincident with the reference frame.
A yaw will obtain the bearing,
a pitch will yield the elevation, and
a roll gives the bank angle.
Therefore, in aerospace they are sometimes called yaw, pitch, and roll. Notice that this will not work if the rotations are applied in any other order or if the airplane axes start in any position non-equivalent to the reference frame.
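As a minimal numerical sketch (not part of the original article), the following Python snippet composes the three rotations in the z-y′-x″ (yaw–pitch–roll) intrinsic convention and confirms that applying the same elemental rotations in a different order gives a different orientation. It assumes right-handed axes, active rotations acting on column vectors, and angle values chosen purely for illustration:

    import numpy as np

    def Rx(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def Ry(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def Rz(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    yaw, pitch, roll = np.radians([30.0, 10.0, -5.0])

    # Intrinsic z-y'-x'' (yaw, pitch, roll): multiply the elemental matrices in that order.
    R = Rz(yaw) @ Ry(pitch) @ Rx(roll)

    # The same elemental rotations applied in another order give a different orientation.
    R_other = Rx(roll) @ Ry(pitch) @ Rz(yaw)
    print(np.allclose(R, R_other))  # False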
Tait–Bryan angles, following z-y′-x″ (intrinsic rotations) convention, are also known as nautical angles, because they can be used to describe the orientation of a ship or aircraft, or Cardan angles, after the Italian mathematician and physicist Gerolamo Cardano, who first described in detail the Cardan suspension and the Cardan joint.
Angles of a given frame
A common problem is to find the Euler angles of a given frame. The fastest way to get them is to write the three given vectors as columns of a matrix and compare it with the expression of the theoretical matrix (see later table of matrices). Hence the three Euler angles can be calculated. Nevertheless, the same result can be reached avoiding matrix algebra and using only elemental geometry. Here we present the results for the two most commonly used conventions: ZXZ for proper Euler angles and ZYX for Tait–Bryan. Notice that any other convention can be obtained just by changing the names of the axes.
Proper Euler angles
Assuming a frame with unit vectors (X, Y, Z) given by their coordinates as in the main diagram, it can be seen that:
cos β = Z3.
And, since
sin²β = 1 − cos²β,
for 0 < β < π we have
sin β = √(1 − Z3²).
As −Z2 is the double projection of a unitary vector,
cos α · sin β = −Z2, so cos α = −Z2 / √(1 − Z3²).
There is a similar construction for Y3, projecting it first over the plane defined by the axis z and the line of nodes. As the angle between the planes is π/2 − β and cos(π/2 − β) = sin β, this leads to:
sin β · cos γ = Y3, so cos γ = Y3 / √(1 − Z3²),
and finally, using the inverse cosine function,
α = arccos(−Z2 / √(1 − Z3²)), β = arccos(Z3), γ = arccos(Y3 / √(1 − Z3²)).
Tait–Bryan angles
Assuming a frame with unit vectors (X, Y, Z) given by their coordinates as in this new diagram (notice that the angle θ is negative), it can be seen that:
sin θ = −X3.
As before,
cos²θ = 1 − sin²θ,
for −π/2 < θ < π/2 we have
cos θ = √(1 − X3²),
and, in a way analogous to the former one,
sin ψ = X2 / √(1 − X3²), cos ψ = X1 / √(1 − X3²).
Looking for similar expressions to the former ones:
sin φ = Y3 / √(1 − X3²), cos φ = Z3 / √(1 − X3²).
Last remarks
Note that the inverse sine and cosine functions yield two possible values for the argument. In this geometrical description, only one of the solutions is valid. When Euler angles are defined as a sequence of rotations, all the solutions can be valid, but there will be only one inside the angle ranges. This is because the sequence of rotations to reach the target frame is not unique if the ranges are not previously defined.
For computational purposes, it may be useful to represent the angles using atan2(y, x). For example, in the case of proper Euler angles, α = atan2(Z1, −Z2) and γ = atan2(X3, Y3), with β = arccos(Z3).
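A small sketch of this computation (illustrative only, not taken from the article) is given below in Python. It assumes an active rotation matrix acting on column vectors, the intrinsic z-x′-z″ convention, and 0 < β < π, with the frame's unit vectors X, Y, Z taken as the columns of the matrix:

    import numpy as np

    def Rz(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def Rx(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def proper_euler_zxz(R):
        # Columns of R are the rotated frame's unit vectors X, Y, Z in the fixed frame.
        X, Y, Z = R[:, 0], R[:, 1], R[:, 2]
        beta = np.arccos(Z[2])                 # Z3 = cos(b)
        alpha = np.arctan2(Z[0], -Z[1])        # Z1 = sin(a)sin(b), -Z2 = cos(a)sin(b)
        gamma = np.arctan2(X[2], Y[2])         # X3 = sin(b)sin(g),  Y3 = sin(b)cos(g)
        return alpha, beta, gamma

    # Round trip with an arbitrary example.
    a, b, g = 0.7, 1.1, -2.0
    print(np.allclose(proper_euler_zxz(Rz(a) @ Rx(b) @ Rz(g)), (a, b, g)))  # True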
Conversion to other orientation representations
Euler angles are one way to represent orientations. There are others, and it is possible to change to and from other conventions. Three parameters are always required to describe orientations in a 3-dimensional Euclidean space. They can be given in several ways, Euler angles being one of them; see charts on SO(3) for others.
The most common orientation representations are the rotation matrices, the axis-angle and the quaternions, also known as Euler–Rodrigues parameters, which provide another mechanism for representing 3D rotations. This is equivalent to the special unitary group description.
Expressing rotations in 3D as unit quaternions instead of matrices has some advantages:
Concatenating rotations is computationally faster and numerically more stable.
Extracting the angle and axis of rotation is simpler.
Interpolation is more straightforward. See for example slerp.
Quaternions do not suffer from gimbal lock as Euler angles do.
Regardless, the rotation matrix calculation is the first step for obtaining the other two representations.
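That said, Euler angles can also be converted to a unit quaternion directly by composing the three half-angle elemental quaternions. The sketch below is a hedged illustration (the conventions are assumptions chosen for the example, not prescribed by the text): it uses the Hamilton product with components ordered (w, x, y, z) and the z-y′-x″ (yaw–pitch–roll) intrinsic convention.

    import numpy as np

    def quat_mul(q, r):
        # Hamilton product; components ordered (w, x, y, z).
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = r
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def quat_from_zyx(yaw, pitch, roll):
        # Elemental half-angle quaternions about z, y and x, composed in the
        # same left-to-right order as the corresponding rotation matrices.
        qz = np.array([np.cos(yaw / 2), 0.0, 0.0, np.sin(yaw / 2)])
        qy = np.array([np.cos(pitch / 2), 0.0, np.sin(pitch / 2), 0.0])
        qx = np.array([np.cos(roll / 2), np.sin(roll / 2), 0.0, 0.0])
        return quat_mul(quat_mul(qz, qy), qx)

    print(quat_from_zyx(np.radians(30), np.radians(10), np.radians(-5)))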
Rotation matrix
Any orientation can be achieved by composing three elemental rotations, starting from a known standard orientation. Equivalently, any rotation matrix R can be decomposed as a product of three elemental rotation matrices. For instance:
R = X(α) Y(β) Z(γ) is a rotation matrix that may be used to represent a composition of extrinsic rotations about axes z, y, x (in that order), or a composition of intrinsic rotations about axes x-y′-z″ (in that order). However, both the definition of the elemental rotation matrices X, Y, Z, and their multiplication order depend on the choices taken by the user about the definition of both rotation matrices and Euler angles (see, for instance, Ambiguities in the definition of rotation matrices). Unfortunately, different sets of conventions are adopted by users in different contexts. The following table was built according to this set of conventions:
Each matrix is meant to operate by pre-multiplying column vectors (see Ambiguities in the definition of rotation matrices)
Each matrix is meant to represent an active rotation (the composing and composed matrices are supposed to act on the coordinates of vectors defined in the initial fixed reference frame and give as a result the coordinates of a rotated vector defined in the same reference frame).
Each matrix is meant to represent, primarily, a composition of intrinsic rotations (around the axes of the rotating reference frame) and, secondarily, the composition of three extrinsic rotations (which corresponds to the constructive evaluation of the R matrix by the multiplication of three truly elemental matrices, in reverse order).
Right handed reference frames are adopted, and the right hand rule is used to determine the sign of the angles α, β, γ.
For the sake of simplicity, the following table of matrix products uses the following nomenclature:
X, Y, Z are the matrices representing the elemental rotations about the axes x, y, z of the fixed frame (e.g., Xα represents a rotation about x by an angle α).
s and c represent sine and cosine (e.g., sα represents the sine of α).
These tabular results are available in numerous textbooks. For each column the last row constitutes the most commonly used convention.
To change the formulas for passive rotations (or find reverse active rotation), transpose the matrices (then each matrix transforms the initial coordinates of a vector remaining fixed to the coordinates of the same vector measured in the rotated reference system; same rotation axis, same angles, but now the coordinate system rotates, rather than the vector).
The following table contains formulas for angles α, β and γ from elements of a rotation matrix.
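One such formula set can be sketched as follows for the common z-y′-x″ case R = Rz(α) Ry(β) Rx(γ); the convention and the active-rotation assumption are choices made for illustration, and the formulas are valid away from the singularity at β = ±π/2, where only the sum or difference of α and γ is defined:

    import numpy as np

    def zyx_angles(R):
        # For R = Rz(alpha) @ Ry(beta) @ Rx(gamma), active rotations on column vectors.
        beta = -np.arcsin(R[2, 0])             # R[2,0] = -sin(b)
        alpha = np.arctan2(R[1, 0], R[0, 0])   # R[1,0] = sin(a)cos(b), R[0,0] = cos(a)cos(b)
        gamma = np.arctan2(R[2, 1], R[2, 2])   # R[2,1] = cos(b)sin(g), R[2,2] = cos(b)cos(g)
        return alpha, beta, gamma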
Properties
The Euler angles form a chart on all of SO(3), the special orthogonal group of rotations in 3D space. The chart is smooth except for a polar coordinate style singularity where sin β = 0. See charts on SO(3) for a more complete treatment.
The space of rotations is called in general "The Hypersphere of rotations", though this is a misnomer: the group Spin(3) is isometric to the hypersphere S3, but the rotation space SO(3) is instead isometric to the real projective space RP3, which is a 2-fold quotient space of the hypersphere. This 2-to-1 ambiguity is the mathematical origin of spin in physics.
A similar three angle decomposition applies to SU(2), the special unitary group of rotations in complex 2D space, with the difference that β ranges from 0 to 2π. These are also called Euler angles.
The Haar measure for SO(3) in Euler angles is given by the Hopf angle parametrisation of SO(3), dV ∝ sin β · dα · dβ · dγ, where (β, α) parametrise S2, the space of rotation axes.
For example, to generate uniformly randomized orientations, let α and γ be uniform from 0 to 2π, let z be uniform from −1 to 1, and let β = arccos(z).
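A short Python sketch of this sampling recipe (illustrative only; assembling the result as ZXZ proper Euler angles is an assumption):

    import numpy as np

    rng = np.random.default_rng(0)

    def random_euler_zxz(rng):
        # alpha, gamma uniform on [0, 2*pi); beta = arccos(z) with z uniform on [-1, 1],
        # which reproduces the sin(beta) factor of the Haar measure.
        alpha = rng.uniform(0.0, 2.0 * np.pi)
        gamma = rng.uniform(0.0, 2.0 * np.pi)
        beta = np.arccos(rng.uniform(-1.0, 1.0))
        return alpha, beta, gamma

    print(random_euler_zxz(rng))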
Geometric algebra
Other properties of Euler angles and rotations in general can be found from the geometric algebra, a higher level abstraction, in which the quaternions are an even subalgebra. The principal tool in geometric algebra is the rotor, built from the angle of rotation θ, the rotation axis u (a unitary vector) and the pseudoscalar I (the trivector of three-dimensional space).
Higher dimensions
It is possible to define parameters analogous to the Euler angles in dimensions higher than three.
In four dimensions and above, the concept of "rotation about an axis" loses meaning and instead becomes "rotation in a plane." The number of Euler angles needed to represent the group SO(n) is n(n − 1)/2, equal to the number of planes containing two distinct coordinate axes in n-dimensional Euclidean space.
In SO(4) a rotation matrix is defined by two unit quaternions, and therefore has six degrees of freedom, three from each quaternion.
Applications
Vehicles and moving frames
Their main advantage over other orientation descriptions is that they are directly measurable from a gimbal mounted in a vehicle. As gyroscopes keep their rotation axis constant, angles measured in a gyro frame are equivalent to angles measured in the lab frame. Therefore, gyros are used to know the actual orientation of moving spacecraft, and Euler angles are directly measurable. Intrinsic rotation angle cannot be read from a single gimbal, so there has to be more than one gimbal in a spacecraft. Normally there are at least three for redundancy. There is also a relation to the well-known gimbal lock problem of mechanical engineering.
When studying rigid bodies in general, one calls the xyz system space coordinates, and the XYZ system body coordinates. The space coordinates are treated as unmoving, while the body coordinates are considered embedded in the moving body. Calculations involving acceleration, angular acceleration, angular velocity, angular momentum, and kinetic energy are often easiest in body coordinates, because then the moment of inertia tensor does not change in time. If one also diagonalizes the rigid body's moment of inertia tensor (with nine components, six of which are independent), then one has a set of coordinates (called the principal axes) in which the moment of inertia tensor has only three components.
The angular velocity of a rigid body takes a simple form using Euler angles in the moving frame. Also the Euler's rigid body equations are simpler because the inertia tensor is constant in that frame.
Crystallographic texture
In materials science, crystallographic texture (or preferred orientation) can be described using Euler angles. In texture analysis, the Euler angles provide a mathematical depiction of the orientation of individual crystallites within a polycrystalline material, allowing for the quantitative description of the macroscopic material.
The most common definition of the angles is due to Bunge and corresponds to the ZXZ convention. It is important to note, however, that the application generally involves axis transformations of tensor quantities, i.e. passive rotations. Thus the matrix that corresponds to the Bunge Euler angles is the transpose of that shown in the table above.
Others
Euler angles, normally in the Tait–Bryan convention, are also used in robotics for speaking about the degrees of freedom of a wrist. They are also used in electronic stability control in a similar way.
Gun fire control systems require corrections to gun-order angles (bearing and elevation) to compensate for deck tilt (pitch and roll). In traditional systems, a stabilizing gyroscope with a vertical spin axis corrects for deck tilt, and stabilizes the optical sights and radar antenna. However, gun barrels point in a direction different from the line of sight to the target, to anticipate target movement and fall of the projectile due to gravity, among other factors. Gun mounts roll and pitch with the deck plane, but also require stabilization. Gun orders include angles computed from the vertical gyro data, and those computations involve Euler angles.
Euler angles are also used extensively in the quantum mechanics of angular momentum. In quantum mechanics, explicit descriptions of the representations of SO(3) are very important for calculations, and almost all the work has been done using Euler angles. In the early history of quantum mechanics, when physicists and chemists had a sharply negative reaction towards abstract group theoretic methods (called the Gruppenpest''), reliance on Euler angles was also essential for basic theoretical work.
Many mobile computing devices contain accelerometers which can determine these devices' Euler angles with respect to the earth's gravitational attraction. These are used in applications such as games, bubble level simulations, and kaleidoscopes.
See also
3D projection
Rotation
Axis-angle representation
Conversion between quaternions and Euler angles
Davenport chained rotations
Euler's rotation theorem
Gimbal lock
Quaternion
Quaternions and spatial rotation
Rotation formalisms in three dimensions
Spherical coordinate system
References
Bibliography
External links
David Eberly. Euler Angle Formulas, Geometric Tools
An interactive tutorial on Euler angles available at https://www.mecademic.com/en/how-is-orientation-in-space-represented-with-euler-angles
EulerAngles an iOS app for visualizing in 3D the three rotations associated with Euler angles
Orientation Library "orilib", a collection of routines for rotation / orientation manipulation, including special tools for crystal orientations
Online tool to convert rotation matrices available at rotation converter (numerical conversion)
Online tool to convert symbolic rotation matrices (dead, but still available from the Wayback Machine) symbolic rotation converter
Rotation, Reflection, and Frame Change: Orthogonal tensors in computational engineering mechanics, IOP Publishing
Euler Angles, Quaternions, and Transformation Matrices for Space Shuttle Analysis, NASA
Rotation in three dimensions
Euclidean symmetries
Angle
Analytic geometry | Euler angles | [
"Physics",
"Mathematics"
] | 5,927 | [
"Geometric measurement",
"Scalar physical quantities",
"Functions and mappings",
"Euclidean symmetries",
"Physical quantities",
"Mathematical objects",
"Mathematical relations",
"Wikipedia categories named after physical quantities",
"Angle",
"Symmetry"
] |
411,512 | https://en.wikipedia.org/wiki/Cymatics | Cymatics (from the Greek κῦμα 'wave') is a subset of modal vibrational phenomena. The term was coined by Swiss physician Hans Jenny (1904–1972). Typically the surface of a plate, diaphragm, or membrane is vibrated, and regions of maximum and minimum displacement are made visible in a thin coating of particles, paste, or liquid. Different patterns emerge in the excitatory medium depending on the geometry of the plate and the driving frequency.
The apparatus employed can be simple, such as the Chinese spouting bowl, in which copper handles are rubbed and cause the copper bottom elements to vibrate. Other examples include the Chladni plate and the so-called cymascope.
History
On July 8, 1680, Robert Hooke was able to see the nodal patterns associated with the modes of vibration of glass plates. Hooke ran a bow along the edge of a glass plate covered with flour, and saw the nodal patterns emerge.
The German musician and physicist Ernst Chladni noticed in the eighteenth century that the modes of vibration of a membrane or a plate can be observed by sprinkling the vibrating surface with a fine dust (e.g., lycopodium powder, flour or fine sand). The powder moves due to the vibration and accumulates progressively in points of the surface corresponding to the sound vibration. The points form a pattern of lines, known as "nodal lines of the vibration mode". The normal modes of vibration, and the pattern of nodal lines associated with each of these, are completely determined, for a surface with homogeneous mechanical characteristics, from the geometric shape of the surface and by the way in which the surface is constrained.
Experiments of this kind, similar to those carried out earlier by Galileo Galilei around 1630 and by Robert Hooke in 1680, were later perfected by Chladni, who introduced them systematically in 1787 in his book Entdeckungen über die Theorie des Klanges (Discoveries on the theory of sound). This provided an important contribution to the understanding of acoustic phenomena and the functioning of musical instruments. The figures thus obtained (with the aid of a violin bow that rubbed perpendicularly along the edge of smooth plates covered with fine sand) are still designated by the name of "Chladni figures".
Work of Hans Jenny
In 1967 Hans Jenny, a student of the anthroposophist Rudolf Steiner, published the first of two volumes in German entitled Kymatik; the second was published posthumously in 1972. In 2001, MACROmedia Publishing created a composite of these two volumes in English. In March 2024, they reissued an extensively revised version of this composite edition with notes and commentaries by leading experts in the fields of acoustics, the arts, and sound therapies. He showed the evolution of harmonic images by subjecting inert substances to oscillating sound waves. His substantial body of work, based on rigorous scientific methodology, developed Chladni's experiments, highlighting intricate, organic, harmonic images that reflected many universal patterns found throughout nature and especially living organisms. Jenny spread powders, pastes, and liquids on a metal plate connected to an oscillator which could produce a broad spectrum of frequencies. The substances were organized into different structures characterized by geometric shapes typical of the frequency of the vibration emitted by the oscillator. According to Jenny, these structures, reminiscent of the mandala and other forms recurring in nature, would be a manifestation of an invisible force field of the vibrational energy that generated them. He was particularly impressed by the observation that, when a vocalization of the ancient Sanskrit Om (regarded by Hindus and Buddhists as the sound of creation) was imposed, the lycopodium powder formed a circle with a centre point, one of the ways in which Om had been represented.
In fact, for a plate of circular shape, resting in the centre (or the border, or at least in a set of points with central symmetry), the nodal vibration modes all have central symmetry, so the observation of Jenny is entirely consistent with well known mathematical properties. From the physical-mathematical standpoint, the form of the nodal patterns is predetermined by the shape of the body set in vibration or, in the case of acoustic waves in a gas, the shape of the cavity in which the gas is contained. The sound wave, therefore, does not influence at all the shape of the vibrating body or the shape of the nodal patterns. The only thing that changes due to the vibration is the arrangement of the sand. The image formed by the sand, in turn, is influenced by the frequency spectrum of the vibration only because each vibration mode is characterized by a specific frequency. Therefore, the spectrum of the signal that excites the vibration determines which patterns are actually nodally displayed. The physical phenomena involved in the formation of Chladni figures are best explained by classical physics.
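As a rough, hedged illustration of how nodal patterns follow from the mode shapes (a textbook idealization, not Jenny's or Chladni's actual plates), the displacement of a square plate is often approximated by a superposition such as f(x, y) = cos(nπx)·cos(mπy) − cos(mπx)·cos(nπy); the sand collects where f is close to zero. The Python sketch below marks those near-nodal points on a coarse grid; the mode numbers and threshold are arbitrary choices:

    import numpy as np

    n, m = 3, 5    # mode numbers (illustrative)
    N = 41         # grid resolution
    x = np.linspace(0.0, 1.0, N)
    X, Y = np.meshgrid(x, x)

    # Idealized standing-wave pattern for a square plate.
    f = np.cos(n * np.pi * X) * np.cos(m * np.pi * Y) - np.cos(m * np.pi * X) * np.cos(n * np.pi * Y)

    # Sand accumulates near the nodal lines, where the displacement is ~0.
    for row in (np.abs(f) < 0.1):
        print("".join("#" if near_node else "." for near_node in row))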
Influences on art and music
Devices for displaying nodal images have influenced visual arts and contemporary music. Artist Björk created projections of cymatics patterns by using bass frequencies on tour for her album Biophilia. Similarly, painter and musician Perry Hall uses vibrations from an electric bass to create cymatic patterns in tanks of paint, which he films (the Sound Drawings).
Hans Jenny's book on Chladni figures influenced Alvin Lucier and helped lead to Lucier's composition Queen of the South. Jenny's work was also followed up by Center for Advanced Visual Studies (CAVS) founder György Kepes at MIT. His work in this area included an acoustically vibrated piece of sheet metal in which small holes had been drilled in a grid. Small flames of gas burned through these holes and thermodynamic patterns were made visible by this setup.
In the mid-1980s, visual artist Ron Rocco, who also developed his work at CAVS, employed mirrors mounted to tiny servo motors, driven by the audio signal of a synthesizer and amplified by a tube amp to reflect the beam of a laser. This created light patterns which corresponded to the audio's frequency and amplitude. Using this beam to generate video feedback and computers to process the feedback signal, Rocco created his "Andro-media" series of installations. Rocco later formed a collaboration with musician David Hykes, who practiced a form of Mongolian overtone chanting with The Harmonic Choir, to generate cymatic images from a pool of liquid mercury, which functioned as a liquid mirror to modulate the beam of a Helium-Neon laser from the sound thus generated. Photographs of this work can be found in the Ars Electronica catalog of 1987.
Contemporary German photographer and philosopher Alexander Lauterwasser has brought cymatics into the 21st century using finely crafted crystal oscillators to resonate steel plates covered with fine sand and to vibrate small samples of water in Petri dishes. His first book, Water Sound Images, translated into English in 2006, features imagery of light reflecting off the surface of water set into motion by sound sources ranging from pure sine waves to music by Beethoven, Karlheinz Stockhausen, electroacoustic group Kymatik (who often record in ambisonic surround sound) and overtone singing. The resulting photographs of standing wave patterns are striking. Lauterwasser's book focused on creating detailed visual analogues of natural patterns ranging from the distribution of spots on a leopard to the geometric patterns found in plants and flowers, to the shapes of jellyfish and the intricate patterns found on the shell of a tortoise.
Composer Stuart Mitchell and his father T.J. Mitchell claimed that Rosslyn Chapel's carvings contain references to cymatic patterns. In 2005 they created a work called The Rosslyn Motet, realised by attempting to match various Chladni patterns to 13 geometric symbols carved onto the faces of cubes emanating from 14 arches.
The musical group The Glitch Mob used cymatics to produce the music video "Becoming Harmonious (ft. Metal Mother)".
Influenced by yantra diagrams and cymatics, artist and fashion designer Mandali Mendrilla created a sculpture dress called "Kamadhenu" (Wish Tree Dress III), the pattern of which is based on a Yantra diagram depicting the goddess Kamadhenu.
Aphex Twin suggests learning more about cymatics in reference to 'master tuning of 440 Hz' in a conversation with synth-maker Tatsuya Takahashi.
Since 2010, the art collective Analema Group creates participatory performances in which cymatic patterns are produced digitally in real-time by the audience.
In 2014 musician Nigel Stanford produced "Cymatics", an instrumental and music video designed to demonstrate the visual aspects of cymatics.
In 2016 songwriter and former Arizona State quarterback Samson Szakacsy created "The Drawing Machine" by turning a subwoofer face up, with thick paper on top and paint pens hanging overhead from fishing wire, so that the vibration of his songs moved the pens to produce fractal-like flower patterns. He then brought the Drawing Machine on tour and had it draw live at each set to portray how music looks.
Contemporary American painter Jimmy O'Neal created his own cymascope, which he has used to produce various works of public art. One such painting is 511.95 Hz of wine, a large-scale mural based on the pattern created when tracing a finger around the rim of a nearly-empty wine glass.
In 2020 an official medal was issued by the Royal Dutch Mint to mark the 65th anniversary of the Eurovision Song Contest hosted by the city of Rotterdam. A 3D scanner was able to capture the cymatic shapes of a vibrating dish filled with water from the Maas river. To create the coin, all the historical winning songs from previous contests were mixed together and emitted through a speaker.
The logo and theme art for Eurovision 2022 is based on cymatics.
The main title sequence for The Lord of the Rings: The Rings of Power is inspired by cymatics.
Influences in engineering
Inspired by periodic and symmetrical patterns at the air-liquid interface created by sound vibration, P. Chen and coworkers developed a method to engineer diverse structures from microscale materials using liquid-based templates. This liquid-based template can be dynamically reconfigured by tuning vibration frequency and acceleration.
See also
Mechanical resonance
Megan Watts Hughes, inventor of the "eidophone"
Music visualization
Rayleigh's quotient in vibrations analysis
Strobe light
Vibration of plates
Visual music
References
External links
How to Make a Chladni Plate Experiment
Pseudoscience
Symmetry
Experimental music
Articles containing video clips | Cymatics | [
"Physics",
"Mathematics"
] | 2,179 | [
"Geometry",
"Symmetry"
] |
411,782 | https://en.wikipedia.org/wiki/Staining | Staining is a technique used to enhance contrast in samples, generally at the microscopic level. Stains and dyes are frequently used in histology (microscopic study of biological tissues), in cytology (microscopic study of cells), and in the medical fields of histopathology, hematology, and cytopathology that focus on the study and diagnoses of diseases at the microscopic level. Stains may be used to define biological tissues (highlighting, for example, muscle fibers or connective tissue), cell populations (classifying different blood cells), or organelles within individual cells.
In biochemistry, it involves adding a class-specific (DNA, proteins, lipids, carbohydrates) dye to a substrate to qualify or quantify the presence of a specific compound. Staining and fluorescent tagging can serve similar purposes. Biological staining is also used to mark cells in flow cytometry, and to flag proteins or nucleic acids in gel electrophoresis. Light microscopes are used for viewing stained samples at high magnification, typically using bright-field or epi-fluorescence illumination.
Staining is not limited to only biological materials, since it can also be used to study the structure of other materials; for example, the lamellar structures of semi-crystalline polymers or the domain structures of block copolymers.
In vivo vs In vitro
In vivo staining (also called vital staining or intravital staining) is the process of dyeing living tissues. By causing certain cells or structures to take on contrasting colours, their form (morphology) or position within a cell or tissue can be readily seen and studied. The usual purpose is to reveal cytological details that might otherwise not be apparent; however, staining can also reveal where certain chemicals or specific chemical reactions are taking place within cells or tissues.
In vitro staining involves colouring cells or structures that have been removed from their biological context. Certain stains are often combined to reveal more details and features than a single stain alone. Combined with specific protocols for fixation and sample preparation, scientists and physicians can use these standard techniques as consistent, repeatable diagnostic tools. A counterstain is stain that makes cells or structures more visible, when not completely visible with the principal stain.
Crystal violet stains both Gram positive and Gram negative organisms. Treatment with alcohol removes the crystal violet colour from gram negative organisms only. Safranin as counterstain is used to colour the gram negative organisms that got decolorised by alcohol.
While ex vivo, many cells continue to live and metabolize until they are "fixed". Some staining methods are based on this property. Those stains excluded by the living cells but taken up by the already dead cells are called vital stains (e.g. trypan blue or propidium iodide for eukaryotic cells). Those that enter and stain living cells are called supravital stains (e.g. New Methylene Blue and brilliant cresyl blue for reticulocyte staining). However, these stains are eventually toxic to the organism, some more so than others. Partly due to their toxic interaction inside a living cell, when supravital stains enter a living cell, they might produce a characteristic pattern of staining different from the staining of an already fixed cell (e.g. "reticulocyte" look versus diffuse "polychromasia"). To achieve the desired effects, the stains are used in very dilute solutions (Howey, 2000). Note that many stains may be used in both living and fixed cells.
Preparation
The preparatory steps involved depend on the type of analysis planned. Some or all of the following procedures may be required.
Wet mounts are used to view live organisms and can be made using water and certain stains. The liquid is added to the slide before the addition of the organism and a coverslip is placed over the specimen in the water and stain to help contain it within the field of view.
Fixation, which may itself consist of several steps, aims to preserve the shape of the cells or tissue involved as much as possible. Sometimes heat fixation is used to kill, adhere, and alter the specimen so it accepts stains. Most chemical fixatives (chemicals causing fixation) generate chemical bonds between proteins and other substances within the sample, increasing their rigidity. Common fixatives include formaldehyde, ethanol, methanol, and/or picric acid. Pieces of tissue may be embedded in paraffin wax to increase their mechanical strength and stability and to make them easier to cut into thin slices.
Mordants are chemical agents which enable dyes to stain materials that are otherwise unstainable.
Mordants are classified into two categories:
a) Basic mordant: React with acidic dyes e.g. alum, ferrous sulfate, cetylpyridinium chloride etc.
b) Acidic mordant : React with basic dyes e.g. picric acid, tannic acid etc.
Direct Staining: Carried out without mordant.
Indirect Staining: Staining with the aid of a mordant.
Permeabilization involves treatment of cells with (usually) a mild surfactant. This treatment dissolves cell membranes, and allows larger dye molecules into the cell's interior.
Mounting usually involves attaching the samples to a glass microscope slide for observation and analysis. In some cases, cells may be grown directly on a slide. For samples of loose cells (as with a blood smear or a pap smear) the sample can be directly applied to a slide. For larger pieces of tissue, thin sections (slices) are made using a microtome; these slices can then be mounted and inspected.
Standardization
Most of the dyes commonly used in microscopy are available as BSC-certified stains. This means that samples of the manufacturer's batch have been tested by an independent body, the Biological Stain Commission (BSC), and found to meet or exceed certain standards of purity, dye content and performance in staining techniques ensuring more accurately performed experiments and more reliable results. These standards are published in the commission's journal Biotechnic & Histochemistry. Many dyes are inconsistent in composition from one supplier to another. The use of BSC-certified stains eliminates a source of unexpected results.
Some vendors sell stains "certified" by themselves rather than by the Biological Stain Commission. Such products may or may not be suitable for diagnostic and other applications.
Negative staining
A simple staining method for bacteria that is usually successful, even when the positive staining methods fail, is to use a negative stain. This can be achieved by smearing the sample onto the slide and then applying nigrosin (a black synthetic dye) or India ink (an aqueous suspension of carbon particles). After drying, the microorganisms may be viewed in bright field microscopy as lighter inclusions well-contrasted against the dark environment surrounding them. Negative staining is able to stain the background instead of the organisms because the cell wall of microorganisms typically has a negative charge which repels the negatively charged stain. The dyes used in negative staining are acidic. Note: negative staining is a mild technique that may not destroy the microorganisms, and is therefore unsuitable for studying pathogens.
Positive staining
Unlike negative staining, positive staining uses basic dyes to color the specimen against a bright background. While chromophore is used for both negative and positive staining alike, the type of chromophore used in this technique is a positively charged ion instead of a negative one. The negatively charged cell wall of many microorganisms attracts the positively charged chromophore which causes the specimen to absorb the stain giving it the color of the stain being used. Positive staining is more commonly used than negative staining in microbiology. The different types of positive staining are listed below.
Simple versus differential
Simple Staining is a technique that only uses one type of stain on a slide at a time. Because only one stain is being used, the specimens (for positive stains) or background (for negative stains) will be one color. Therefore, simple stains are typically used for viewing only one organism per slide. Differential staining uses multiple stains per slide. Based on the stains being used, organisms with different properties will appear different colors allowing for categorization of multiple specimens. Differential staining can also be used to color different organelles within one organism which can be seen in endospore staining.
Types
Techniques
Gram
Gram staining is used to determine Gram status, classifying bacteria broadly based on the composition of their cell wall. Gram staining uses crystal violet to stain cell walls, iodine (as a mordant), and a fuchsin or safranin counterstain (to mark all bacteria). Gram status helps divide specimens of bacteria into two groups, generally representative of their underlying phylogeny. This characteristic, in combination with other techniques, makes it a useful tool in clinical microbiology laboratories, where it can be important in the early selection of appropriate antibiotics.
On most Gram-stained preparations, Gram-negative organisms appear red or pink due to their counterstain. Due to the presence of higher lipid content, after alcohol-treatment, the porosity of the cell wall increases, hence the CVI complex (crystal violet – iodine) can pass through. Thus, the primary stain is not retained. In addition, in contrast to most Gram-positive bacteria, Gram-negative bacteria have only a few layers of peptidoglycan and a secondary cell membrane made primarily of lipopolysaccharide.
Endospore
Endospore staining is used to identify the presence or absence of endospores, which make bacteria very difficult to kill. Bacterial spores have proven difficult to stain as they are not permeable to aqueous dye reagents. Endospore staining is particularly useful for identifying endospore-forming bacterial pathogens such as Clostridioides difficile. Prior to the development of more efficient methods, this stain was performed using the Wirtz method with heat fixation and a counterstain. Through the use of malachite green and dilute carbol fuchsin, fixing the bacteria in osmic acid ensured that the dyes did not blend. However, newly revised staining methods have significantly decreased the time it takes to create these stains. This revision substituted carbol fuchsin with aqueous safranin paired with a newly diluted 5% formulation of malachite green. This new and improved combination of stains is performed in the same way as before, with heat fixation, rinsing, and blotting dry for later examination. Upon examination, all endospore-forming bacteria will be stained green, while all other cells appear red.
Ziehl-Neelsen
A Ziehl–Neelsen stain is an acid-fast stain used to stain species of Mycobacterium tuberculosis that do not stain with the standard laboratory staining procedures such as Gram staining.
This stain is performed through the use of both red coloured carbol fuchsin that stains the bacteria and a counter stain such as methylene blue.
Haematoxylin and eosin (H&E)
Haematoxylin and eosin staining is frequently used in histology to examine thin tissue sections. Haematoxylin stains cell nuclei blue, while eosin stains cytoplasm, connective tissue and other extracellular substances pink or red. Eosin is strongly absorbed by red blood cells, colouring them bright red. In a skillfully made H&E preparation the red blood cells are almost orange, and collagen and cytoplasm (especially muscle) acquire different shades of pink.
Papanicolaou
Papanicolaou staining, or PAP staining, was developed to replace fine needle aspiration cytology (FNAC) in hopes of decreasing staining times and cost without compromising quality. This stain is a frequently used method for examining cell samples from a variety of tissue types in various organs. PAP staining has undergone several modifications in order to become a "suitable alternative" for FNAC. This transition stemmed from scientists' appreciation of wet fixed smears, which preserve the structures of the nuclei, as opposed to the opaque appearance of air dried Romanowsky smears. This led to the creation of a hybrid stain of wet fixed and air dried known as the ultrafast Papanicolaou stain. This modification includes the use of nasal saline to rehydrate cells to increase cell transparency and is paired with the use of alcoholic formalin to enhance colors of the nuclei. The Papanicolaou stain is now used in place of cytological staining in all organ types due to its increase in morphological quality, decreased staining time, and decreased cost. It is frequently used to stain Pap smear specimens. It uses a combination of haematoxylin, Orange G, eosin Y, Light Green SF yellowish, and sometimes Bismarck Brown Y.
PAS
Periodic acid–Schiff is a histology special stain used to mark carbohydrates (glycogen, glycoproteins, proteoglycans). PAS is commonly used on liver tissue, where glycogen deposits form, in order to distinguish different types of glycogen storage disease. PAS is important because it can detect glycogen granules found in tumors of the ovaries and pancreas of the endocrine system, as well as in the bladder and kidneys of the renal system. Basement membranes also show up in a PAS stain, which can be important when diagnosing renal disease. Due to the high volume of carbohydrates within the cell walls of hyphae and the yeast forms of fungi, the periodic acid–Schiff stain can help locate these species inside tissue samples of the human body.
Masson
Masson's trichrome is (as the name implies) a three-colour staining protocol. The recipe has evolved from Masson's original technique for different specific applications, but all are well-suited to distinguish cells from surrounding connective tissue. Most recipes produce red keratin and muscle fibers, blue or green staining of collagen and bone, light red or pink staining of cytoplasm, and black cell nuclei.
Romanowsky
The Romanowsky stains produce a polychrome staining effect and are based on a combination of eosin (chemically reduced eosin) and demethylated methylene blue (containing its oxidation products azure A and azure B). This stain develops varying colors for all cell structures (the "Romanowsky–Giemsa effect") and is thus used in staining neutrophil polymorphs and cell nuclei. Common variants include Wright's stain, Jenner's stain, May-Grunwald stain, Leishman stain and Giemsa stain.
All are used to examine blood or bone marrow samples. They are preferred over H&E for inspection of blood cells because different types of leukocytes (white blood cells) can be readily distinguished. All are also suited to examination of blood to detect blood-borne parasites such as malaria.
Silver
Silver staining is the use of silver to stain histologic sections. This kind of staining is important in the demonstration of proteins (for example type III collagen) and DNA. It is used to show both substances inside and outside cells. Silver staining is also used in temperature gradient gel electrophoresis.
Argentaffin cells reduce silver solution to metallic silver after formalin fixation. This method was discovered by Italian Camillo Golgi, by using a reaction between silver nitrate and potassium dichromate, thus precipitating silver chromate in some cells (see Golgi's method). Argyrophilic cells reduce silver solution to metallic silver after being exposed to the stain that contains a reductant. An example of this would be hydroquinone or formalin.
Sudan
Sudan staining utilizes Sudan dyes to stain sudanophilic substances, often including lipids. Sudan III, Sudan IV, Oil Red O, Osmium tetroxide, and Sudan Black B are often used. Sudan staining is often used to determine the level of fecal fat in diagnosing steatorrhea.
Wirtz-Conklin
The Wirtz-Conklin stain is a special technique designed for staining true endospores with the use of malachite green dye as the primary stain and safranin as the counterstain. Once stained, they do not decolourize. The addition of heat during the staining process is an important contributing factor: heat helps open the spore's membrane so the dye can enter. The main purpose of this stain is to show germination of bacterial spores. If the process of germination is taking place, then the spore will turn green in color due to malachite green and the surrounding cell will be red from the safranin. This stain can also help determine the orientation of the spore within the bacterial cell: whether it is terminal (at the tip), subterminal (within the cell), or central (completely in the middle of the cell).
Collagen hybridizing peptide
Collagen hybridizing peptide (CHP) staining allows for an easy, direct way to stain denatured collagens of any type (Type I, II, IV, etc.) regardless if they were damaged or degraded via enzymatic, mechanical, chemical, or thermal means. They work by refolding into the collagen triple helix with the available single strands in the tissue. CHPs can be visualized by a simple fluorescence microscope.
Common biological stains
Different stains react or concentrate in different parts of a cell or tissue, and these properties are used to advantage to reveal specific parts or areas. Some of the most common biological stains are listed below. Unless otherwise marked, all of these dyes may be used with fixed cells and tissues; vital dyes (suitable for use with living organisms) are noted.
Acridine orange
Acridine orange (AO) is a nucleic acid selective fluorescent cationic dye useful for cell cycle determination. It is cell-permeable, and interacts with DNA and RNA by intercalation or electrostatic attractions. When bound to DNA, it is very similar spectrally to fluorescein. Like fluorescein, it is also useful as a non-specific stain for backlighting conventionally stained cells on the surface of a solid sample of tissue (fluorescence backlighted staining).
Bismarck brown
Bismarck brown (also Bismarck brown Y or Manchester brown) imparts a yellow colour to acid mucins and an intense brown colour to mast cells. One drawback of this stain is that it blots out any other structure surrounding it and makes the quality of the contrast low. It has to be paired with other stains in order to be useful. Some complementing stains used alongside Bismarck brown are haematoxylin and toluidine blue, which provide better contrast within the histology sample.
Carmine
Carmine is an intensely red dye used to stain glycogen, while Carmine alum is a nuclear stain. Carmine stains require the use of a mordant, usually aluminum.
Coomassie blue
Coomassie brilliant blue nonspecifically stains proteins a strong blue colour. It is often used in gel electrophoresis.
Cresyl violet
Cresyl violet stains the acidic components of the neuronal cytoplasm a violet colour, specifically nissl bodies. Often used in brain research.
Crystal violet
Crystal violet, when combined with a suitable mordant, stains cell walls purple. Crystal violet is the stain used in Gram staining.
DAPI
DAPI is a fluorescent nuclear stain, excited by ultraviolet light and showing strong blue fluorescence when bound to DNA. DAPI binds with A=T rich repeats of chromosomes. DAPI is also not visible with regular transmission microscopy. It may be used in living or fixed cells. DAPI-stained cells are especially appropriate for cell counting.
Eosin
Eosin is most often used as a counterstain to haematoxylin, imparting a pink or red colour to cytoplasmic material, cell membranes, and some extracellular structures. It also imparts a strong red colour to red blood cells. Eosin may also be used as a counterstain in some variants of Gram staining, and in many other protocols. There are actually two very closely related compounds commonly referred to as eosin. Most often used is eosin Y (also known as eosin Y ws or eosin yellowish); it has a very slightly yellowish cast. The other eosin compound is eosin B (eosin bluish or imperial red); it has a very faint bluish cast. The two dyes are interchangeable, and the use of one or the other is more a matter of preference and tradition.
Ethidium bromide
Ethidium bromide intercalates and stains DNA, providing a fluorescent red-orange stain. Although it will not stain healthy cells, it can be used to identify cells that are in the final stages of apoptosis – such cells have much more permeable membranes. Consequently, ethidium bromide is often used as a marker for apoptosis in cell populations and to locate bands of DNA in gel electrophoresis. The stain may also be used in conjunction with acridine orange (AO) in viable cell counting. This EB/AO combined stain causes live cells to fluoresce green whilst apoptotic cells retain the distinctive red-orange fluorescence.
Acid fuchsin
Acid fuchsine may be used to stain collagen, smooth muscle, or mitochondria.
Acid fuchsin is used as the nuclear and cytoplasmic stain in Mallory's trichrome method. Acid fuchsin stains cytoplasm in some variants of Masson's trichrome. In Van Gieson's picro-fuchsine, acid fuchsin imparts its red colour to collagen fibres. Acid fuchsin is also a traditional stain for mitochondria (Altmann's method).
Haematoxylin
Haematoxylin (hematoxylin in North America) is a nuclear stain. Used with a mordant, haematoxylin stains nuclei blue-violet or brown. It is most often used with eosin in the H&E stain (haematoxylin and eosin) staining, one of the most common procedures in histology.
Hoechst stains
Hoechst is a bis-benzimidazole derivative compound that binds to the minor groove of DNA. Often used in fluorescence microscopy for DNA staining, Hoechst stains appear yellow when dissolved in aqueous solutions and emit blue light under UV excitation. There are two major types of Hoechst: Hoechst 33258 and Hoechst 33342. The two compounds are functionally similar, but with a slight difference in structure. Hoechst 33258 contains a terminal hydroxyl group and is thus more soluble in aqueous solution; however, this characteristic reduces its ability to penetrate the plasma membrane. Hoechst 33342 contains an ethyl substitution on the terminal hydroxyl group (i.e. an ethyl ether group), making it more hydrophobic for easier plasma membrane passage.
Iodine
Iodine is used in chemistry as an indicator for starch. When starch is mixed with iodine in solution, an intensely dark blue colour develops, representing a starch/iodine complex. Starch is a substance common to most plant cells and so a weak iodine solution will stain starch present in the cells. Iodine is one component in the staining technique known as Gram staining, used in microbiology. Used as a mordant in Gram's staining, iodine enhances the entrance of the dye through the pores present in the cell wall/membrane.
Lugol's solution or Lugol's iodine (IKI) is a brown solution that turns black in the presence of starches and can be used as a cell stain, making the cell nuclei more visible.
Used with common vinegar (acetic acid), Lugol's solution is used to identify pre-cancerous and cancerous changes in cervical and vaginal tissues during "Pap smear" follow up examinations in preparation for biopsy. The acetic acid causes the abnormal cells to blanch white, while the normal tissues stain a mahogany brown from the iodine.
Malachite green
Malachite green (also known as diamond green B or victoria green B) can be used as a blue-green counterstain to safranin in the Gimenez staining technique for bacteria. It can also be used to directly stain spores.
Methyl green
Methyl green is used commonly with bright-field, as well as fluorescence microscopes to dye the chromatin of cells so that they are more easily viewed.
Methylene blue
Methylene blue is used to stain animal cells, such as human cheek cells, to make their nuclei more observable. Also used to stain blood films in cytology.
Neutral red
Neutral red (or toluylene red) stains Nissl substance red. It is usually used as a counterstain in combination with other dyes.
Nile blue
Nile blue (or Nile blue A) stains nuclei blue. It may be used with living cells.
Nile red
Nile red (also known as Nile blue oxazone) is formed by boiling Nile blue with sulfuric acid. This produces a mix of Nile red and Nile blue. Nile red is a lipophilic stain; it will accumulate in lipid globules inside cells, staining them red. Nile red can be used with living cells. It fluoresces strongly when partitioned into lipids, but practically not at all in aqueous solution.
Osmium tetroxide (formal name: osmium tetraoxide)
Osmium tetraoxide is used in optical microscopy to stain lipids. It dissolves in fats, and is reduced by organic materials to elemental osmium, an easily visible black substance.
Propidium iodide
Propidium iodide is a fluorescent intercalating agent that can be used to stain cells. Propidium iodide is used as a DNA stain in flow cytometry to evaluate cell viability or DNA content in cell cycle analysis, or in microscopy to visualise the nucleus and other DNA-containing organelles. Propidium Iodide cannot cross the membrane of live cells, making it useful to differentiate necrotic, apoptotic and healthy cells. PI also binds to RNA, necessitating treatment with nucleases to distinguish between RNA and DNA staining
Rhodamine
Rhodamine is a protein specific fluorescent stain commonly used in fluorescence microscopy.
Safranine
Safranine (or Safranine O) is a red cationic dye. It binds to nuclei (DNA) and other tissue polyanions, including glycosaminoglycans in cartilage and mast cells, and components of lignin and plastids in plant tissues. Safranine should not be confused with saffron, an expensive natural dye that is used in some methods to impart a yellow colour to collagen, to contrast with blue and red colours imparted by other dyes to nuclei and cytoplasm in animal (including human) tissues.
The incorrect spelling "safranin" is in common use. The -ine ending is appropriate for safranine O because this dye is an amine.
Stainability of tissues
Tissues which take up stains are called chromatic. Chromosomes were so named because of their ability to absorb a violet stain.
Positive affinity for a specific stain may be designated by the suffix -philic. For example, tissues that stain with an azure stain may be referred to as azurophilic. This may also be used for more generalized staining properties, such as acidophilic for tissues that stain by acidic stains (most notably eosin), basophilic when staining in basic dyes, and amphophilic when staining with either acid or basic dyes. In contrast, chromophobic tissues do not take up coloured dye readily.
Electron microscopy
As in light microscopy, stains can be used to enhance contrast in transmission electron microscopy. Electron-dense compounds of heavy metals are typically used.
Phosphotungstic acid
Phosphotungstic acid is a common negative stain for viruses, nerves, polysaccharides, and other biological tissue materials. It is mostly used as a 0.5–2% aqueous solution at approximately neutral pH. Phosphotungstic acid is electron-dense and stains the background surrounding the specimen dark while leaving the specimen itself light. This is the reverse of the usual positive staining technique, in which the specimen appears dark and the background remains light.
Osmium tetroxide
Osmium tetroxide is used in optical microscopy to stain lipids. It dissolves in fats, and is reduced by organic materials to elemental osmium, an easily visible black substance. Because it is a heavy metal that absorbs electrons, it is perhaps the most common stain used for morphology in biological electron microscopy. It is also used for the staining of various polymers for the study of their morphology by TEM. Osmium tetroxide is very volatile and extremely toxic. It is a strong oxidizing agent as the osmium has an oxidation number of +8. It aggressively oxidizes many materials, leaving behind a deposit of non-volatile osmium in a lower oxidation state.
Ruthenium tetroxide
Ruthenium tetroxide is equally volatile and even more aggressive than osmium tetraoxide and able to stain even materials that resist the osmium stain, e.g. polyethylene.
Other chemicals used in electron microscopy staining include:
ammonium molybdate, cadmium iodide, carbohydrazide, ferric chloride, hexamine, indium trichloride, lanthanum(III) nitrate, lead acetate, lead citrate, lead(II) nitrate, periodic acid, phosphomolybdic acid, potassium ferricyanide, potassium ferrocyanide, ruthenium red, silver nitrate, silver proteinate, sodium chloroaurate, thallium nitrate, thiosemicarbazide, uranyl acetate, uranyl nitrate, and vanadyl sulfate.
See also
Biological Stain Commission: Third-party quality control and certification of stains
Cytology: the study of cells
Histology: the study of tissues
Immunohistochemistry: the use of antisera to label specific antigens
Ruthenium(II) tris(bathophenanthroline disulfonate), a protein dye.
Vital stain: stains that do not kill cells
PAGE: separation of protein molecules
Barium enema - a type of in vivo stain that creates contrast in the x-ray part of the light spectrum
Diaphonization
References
Further reading
External links
The Biological Stain commission is an independent non-profit company that has been testing dyes since the early 1920s and issuing Certificates of approval for batches of dyes that meet internationally recognized standards.
StainsFile Reference for dyes and staining techniques.
Vital Staining for Protozoa and Related Temporary Mounting Techniques ~ Howey, 2000
Speaking of Fixation: Part 1 and Part 2 – by M. Halit Umar
Photomicrographs of Histology Stains
Frequently asked questions in staining exercises at Sridhar Rao P.N's home page
dyes
pigments
Staining dyes
Scientific techniques
Biological techniques and tools | Staining | [
"Chemistry",
"Biology"
] | 6,649 | [
"Staining",
"Microbiology techniques",
"nan",
"Microscopy",
"Cell imaging"
] |
411,836 | https://en.wikipedia.org/wiki/Octanitrocubane | Octanitrocubane (molecular formula: C8(NO2)8) is a proposed high explosive that, like TNT, is shock-insensitive (not readily detonated by shock). The octanitrocubane molecule has the same chemical structure as cubane (C8H8) except that each of the eight hydrogen atoms is replaced by a nitro group (NO2). As of 1998, octanitrocubane had not been produced in quantities large enough to test its performance as an explosive.
It is, however, not as powerful an explosive as once thought, as the high-density theoretical crystal structure has not been achieved. For this reason, heptanitrocubane, the slightly less nitrated form, is believed to have marginally better performance, despite having a worse oxygen balance.
Octanitrocubane is thought to have 20–25% greater performance than HMX (octogen). This increase in power is due to its highly expansive breakdown into CO2 and N2, as well as to the presence of strained chemical bonds in the molecule which have stored potential energy. In addition, it produces no water vapor upon combustion, making it less visible, and both the chemical itself and its decomposition products (nitrogen and carbon dioxide) are considered to be non-toxic.
Octanitrocubane was first synthesized by Philip Eaton (who was also the first to synthesize cubane in 1964) and Mao-Xi Zhang at the University of Chicago in 1999, with the structure proven by crystallographer Richard Gilardi of the United States Naval Research Laboratory.
Synthesis
Although octanitrocubane is predicted to be one of the most effective explosives, the difficulty of its synthesis inhibits practical use. Philip Eaton's synthesis was difficult and lengthy, and required cubane (rare to begin with) as a starting point. As a result, octanitrocubane is more valuable, gram for gram, than gold.
A proposed path to synthesis is the cyclotetramerization of the as yet undiscovered and presumably highly unstable dinitroacetylene.
See also
Octaazacubane (N8)
4,4'-Dinitro-3,3'-diazenofuroxan (DDF)
Hexanitrobenzene (HNB)
Hexanitrohexaazaisowurtzitane (HNIW)
HHTDD (Hexanitrohexaazatricyclododecanedione)
Relative effectiveness factor
References
External links
"American Chemical Society lauds difficult synthesis of new compound" (by Philip Eaton), March 20, 2001
Explosive chemicals
Nitro compounds
Substances discovered in the 1990s | Octanitrocubane | [
"Chemistry"
] | 557 | [
"Explosive chemicals"
] |
412,097 | https://en.wikipedia.org/wiki/BRCA1 | Breast cancer type 1 susceptibility protein is a protein that in humans is encoded by the BRCA1 () gene. Orthologs are common in other vertebrate species, whereas invertebrate genomes may encode a more distantly related gene. BRCA1 is a human tumor suppressor gene (also known as a caretaker gene) and is responsible for repairing DNA.
BRCA1 and BRCA2 are unrelated proteins, but both are normally expressed in the cells of breast and other tissue, where they help repair damaged DNA, or destroy cells if DNA cannot be repaired. They are involved in the repair of chromosomal damage with an important role in the error-free repair of DNA double-strand breaks. If BRCA1 or BRCA2 itself is damaged by a BRCA mutation, damaged DNA is not repaired properly, and this increases the risk for breast cancer. BRCA1 and BRCA2 have been described as "breast cancer susceptibility genes" and "breast cancer susceptibility proteins". The predominant allele has a normal, tumor-suppressive function whereas high penetrance mutations in these genes cause a loss of tumor-suppressive function which correlates with an increased risk of breast cancer.
BRCA1 combines with other tumor suppressors, DNA damage sensors and signal transducers to form a large multi-subunit protein complex known as the BRCA1-associated genome surveillance complex (BASC). The BRCA1 protein associates with RNA polymerase II, and through the C-terminal domain, also interacts with histone deacetylase complexes. Thus, this protein plays a role in transcription, and DNA repair of double-strand DNA breaks ubiquitination, transcriptional regulation as well as other functions.
Methods to test for the likelihood of a patient with mutations in BRCA1 and BRCA2 developing cancer were covered by patents owned or controlled by Myriad Genetics. Myriad's business model of offering the diagnostic test exclusively led from Myriad being a startup in 1994 to being a publicly traded company with 1200 employees and about $500 million in annual revenue in 2012; it also led to controversy over high prices and the inability to obtain second opinions from other diagnostic labs, which in turn led to the landmark Association for Molecular Pathology v. Myriad Genetics lawsuit.
Discovery
The chromosomal location of BRCA1 was discovered by Mary-Claire King's team at UC Berkeley in 1990. After an international race to refine the precise location of BRCA1, the gene was cloned in 1994 by scientists at University of Utah, National Institute of Environmental Health Sciences (NIEHS) and Myriad Genetics.
Gene
BRCA1 orthologs have been identified in most vertebrates for which complete genome data are available.
Protein structure
The BRCA1 protein contains the following domains:
Zinc finger, C3HC4 type (RING finger)
BRCA1 C Terminus (BRCT) domain
This protein also contains nuclear localization signals and nuclear export signal motifs.
The human BRCA1 protein consists of four major protein domains; the Znf C3HC4- RING domain, the BRCA1 serine domain and two BRCT domains. These domains encode approximately 27% of BRCA1 protein. There are six known isoforms of BRCA1, with isoforms 1 and 2 comprising 1863 amino acids each.
BRCA1 is unrelated to BRCA2, i.e. they are not homologs or paralogs.
Zinc RING finger domain
The RING motif, a Zn finger found in eukaryotic peptides, is 40–60 amino acids long and consists of eight conserved metal-binding residues, two quartets of cysteine or histidine residues that coordinate two zinc atoms. This motif contains a short anti-parallel beta-sheet, two zinc-binding loops and a central alpha helix in a small domain. This RING domain interacts with associated proteins, including BARD1, which also contains a RING motif, to form a heterodimer. The BRCA1 RING motif is flanked by alpha helices formed by residues 8–22 and 81–96 of the BRCA1 protein. It interacts with a homologous region in BARD1 also consisting of a RING finger flanked by two alpha-helices formed from residues 36–48 and 101–116. These four helices combine to form a heterodimerization interface and stabilize the BRCA1-BARD1 heterodimer complex. Additional stabilization is achieved by interactions between adjacent residues in the flanking region and hydrophobic interactions. The BARD1/BRCA1 interaction is disrupted by tumorigenic amino acid substitutions in BRCA1, implying that the formation of a stable complex between these proteins may be an essential aspect of BRCA1 tumor suppression.
The RING domain is an important element of ubiquitin E3 ligases, which catalyze protein ubiquitination. Ubiquitin is a small regulatory protein found in all tissues that direct proteins to compartments within the cell. BRCA1 polypeptides, in particular, Lys-48-linked polyubiquitin chains are dispersed throughout the resting cell nucleus, but at the start of DNA replication, they gather in restrained groups that also contain BRCA2 and BARD1. BARD1 is thought to be involved in the recognition and binding of protein targets for ubiquitination. It attaches to proteins and labels them for destruction. Ubiquitination occurs via the BRCA1 fusion protein and is abolished by zinc chelation. The enzyme activity of the fusion protein is dependent on the proper folding of the RING domain.
Serine cluster domain
BRCA1 serine cluster domain (SCD) spans amino acids 1280–1524. A portion of the domain is located in exons 11–13. High rates of mutation occur in exons 11–13. Reported phosphorylation sites of BRCA1 are concentrated in the SCD, where they are phosphorylated by ATM/ATR kinases both in vitro and in vivo. ATM/ATR are kinases activated by DNA damage. Mutation of serine residues may affect localization of BRCA1 to sites of DNA damage and DNA damage response function.
BRCT domains
The dual repeat BRCT domain of the BRCA1 protein is an elongated structure approximately 70 Å long and 30–35 Å wide. The 85–95 amino acid domains in BRCT can be found as single modules or as multiple tandem repeats containing two domains. Both of these possibilities can occur in a single protein in a variety of different conformations. The C-terminal BRCT region of the BRCA1 protein is essential for repair of DNA, transcription regulation and tumor-suppressor function. In BRCA1 the dual tandem repeat BRCT domains are arranged in a head-to-tail-fashion in the three-dimensional structure, burying 1600 Å of hydrophobic, solvent-accessible surface area in the interface. These all contribute to the tightly packed knob-in-hole structure that comprises the interface. These homologous domains interact to control cellular responses to DNA damage. A missense mutation at the interface of these two proteins can perturb the cell cycle, resulting a greater risk of developing cancer.
Function and mechanism
BRCA1 is part of a complex that repairs double-strand breaks in DNA. The strands of the DNA double helix are continuously breaking as they become damaged. Sometimes only one strand is broken, sometimes both strands are broken simultaneously. DNA cross-linking agents are an important source of chromosome/DNA damage. Double-strand breaks occur as intermediates after the crosslinks are removed, and indeed, biallelic mutations in BRCA1 have been identified to be responsible for Fanconi Anemia, Complementation Group S (FA-S), a genetic disease associated with hypersensitivity to DNA crosslinking agents. BRCA1 is part of a protein complex that repairs DNA when both strands are broken. When this happens, it is difficult for the repair mechanism to "know" how to replace the correct DNA sequence, and there are multiple ways to attempt the repair. The double-strand repair mechanism in which BRCA1 participates is homology-directed repair, where the repair proteins copy the identical sequence from the intact sister chromatid. FA-S is almost always a lethal condition in utero; only a handful of cases of biallelic BRCA1 mutations have been reported in the literature despite the high carrier frequencies in the Ashkenazim, and none since 2013.
In the nucleus of many types of normal cells, the BRCA1 protein interacts with RAD51 during repair of DNA double-strand breaks. These breaks can be caused by natural radiation or other exposures, but also occur when chromosomes exchange genetic material (homologous recombination, e.g., "crossing over" during meiosis). The BRCA2 protein, which has a function similar to that of BRCA1, also interacts with the RAD51 protein. By influencing DNA damage repair, these three proteins play a role in maintaining the stability of the human genome.
BRCA1 is also involved in another type of DNA repair, termed mismatch repair. BRCA1 interacts with the DNA mismatch repair protein MSH2. MSH2, MSH6, PARP and some other proteins involved in single-strand repair are reported to be elevated in BRCA1-deficient mammary tumors.
A protein called valosin-containing protein (VCP, also known as p97) plays a role to recruit BRCA1 to the damaged DNA sites. After ionizing radiation, VCP is recruited to DNA lesions and cooperates with the ubiquitin ligase RNF8 to orchestrate assembly of signaling complexes for efficient DSB repair. BRCA1 interacts with VCP. BRCA1 also interacts with c-Myc, and other proteins that are critical to maintain genome stability.
BRCA1 directly binds to DNA, with higher affinity for branched DNA structures. This ability to bind to DNA contributes to its ability to inhibit the nuclease activity of the MRN complex as well as the nuclease activity of Mre11 alone. This may explain a role for BRCA1 to promote lower fidelity DNA repair by non-homologous end joining (NHEJ). BRCA1 also colocalizes with γ-H2AX (histone H2AX phosphorylated on serine-139) in DNA double-strand break repair foci, indicating it may play a role in recruiting repair factors.
Formaldehyde and acetaldehyde are common environmental sources of DNA cross links that often require repairs mediated by BRCA1 containing pathways.
This DNA repair function is essential; mice with loss-of-function mutations in both BRCA1 alleles are not viable, and as of 2015 only two adults were known to have loss-of-function mutations in both alleles (leading to FA-S); both had congenital or developmental issues, and both had cancer. One was presumed to have survived to adulthood because one of the BRCA1 mutations was hypomorphic.
Transcription
BRCA1 was shown to co-purify with the human RNA polymerase II holoenzyme in HeLa extracts, implying it is a component of the holoenzyme. Later research, however, contradicted this assumption, instead showing that the predominant complex including BRCA1 in HeLa cells is a 2 megadalton complex containing SWI/SNF. SWI/SNF is a chromatin remodeling complex. Artificial tethering of BRCA1 to chromatin was shown to decondense heterochromatin, though the SWI/SNF interacting domain was not necessary for this role. BRCA1 interacts with the NELF-B (COBRA1) subunit of the NELF complex.
Mutations and cancer risk
Certain variations of the BRCA1 gene lead to an increased risk for breast cancer as part of a hereditary breast–ovarian cancer syndrome. Researchers have identified hundreds of mutations in the BRCA1 gene, many of which are associated with an increased risk of cancer. Females with an abnormal BRCA1 or BRCA2 gene have up to an 80% risk of developing breast cancer by age 90; increased risk of developing ovarian cancer is about 55% for females with BRCA1 mutations and about 25% for females with BRCA2 mutations.
These mutations can be changes in one or a small number of DNA base pairs (the building-blocks of DNA), and can be identified with PCR and DNA sequencing.
In some cases, large segments of DNA are rearranged. Those large segments, also called large rearrangements, can be a deletion or a duplication of one or several exons in the gene. Classical methods for mutation detection (sequencing) are unable to reveal these types of mutation. Other methods have been proposed: traditional quantitative PCR, multiplex ligation-dependent probe amplification (MLPA), and Quantitative Multiplex PCR of Short Fluorescent Fragments (QMPSF). Newer methods have also been recently proposed: heteroduplex analysis (HDA) by multi-capillary electrophoresis or also dedicated oligonucleotides array based on comparative genomic hybridization (array-CGH).
Some results suggest that hypermethylation of the BRCA1 promoter, which has been reported in some cancers, could be considered as an inactivating mechanism for BRCA1 expression.
A mutated BRCA1 gene usually makes a protein that does not function properly. Researchers believe that the defective BRCA1 protein is unable to help fix DNA damage leading to mutations in other genes. These mutations can accumulate and may allow cells to grow and divide uncontrollably to form a tumor. Thus, BRCA1 inactivating mutations lead to a predisposition for cancer.
BRCA1 mRNA 3' UTR can be bound by an miRNA, Mir-17 microRNA. It has been suggested that variations in this miRNA along with Mir-30 microRNA could confer susceptibility to breast cancer.
In addition to breast cancer, mutations in the BRCA1 gene also increase the risk of ovarian and prostate cancers. Moreover, precancerous lesions (dysplasia) within the fallopian tube have been linked to BRCA1 gene mutations. Pathogenic mutations anywhere in a model pathway containing BRCA1 and BRCA2 greatly increase risks for a subset of leukemias and lymphomas.
Women who have inherited a defective BRCA1 or BRCA2 gene are at a greatly elevated risk to develop breast and ovarian cancer. Their risk of developing breast and/or ovarian cancer is so high, and so specific to those cancers, that many mutation carriers choose to have prophylactic surgery. There has been much conjecture to explain such apparently striking tissue specificity. Major determinants of where BRCA1/2 hereditary cancers occur are related to tissue specificity of the cancer pathogen, the agent that causes chronic inflammation or the carcinogen. The target tissue may have receptors for the pathogen, may become selectively exposed to an inflammatory process or to a carcinogen. An innate genomic deficit in a tumor suppressor gene impairs normal responses and exacerbates the susceptibility to disease in organ targets. This theory also fits data for several tumor suppressors beyond BRCA1 or BRCA2. A major advantage of this model is that it suggests there may be some options in addition to prophylactic surgery.
As aforementioned, biallelic and homozygous inheritance of the BRCA1 gene leads to FA-S, which is almost always an embryonically lethal condition.
Low expression of BRCA1 in breast and ovarian cancers
BRCA1 expression is reduced or undetectable in the majority of high grade, ductal breast cancers. It has long been noted that loss of BRCA1 activity, either by germ-line mutations or by down-regulation of gene expression, leads to tumor formation in specific target tissues. In particular, decreased BRCA1 expression contributes to both sporadic and inherited breast tumor progression. Reduced expression of BRCA1 is tumorigenic because it plays an important role in the repair of DNA damages, especially double-strand breaks, by the potentially error-free pathway of homologous recombination. Since cells that lack the BRCA1 protein tend to repair DNA damages by alternative more error-prone mechanisms, the reduction or silencing of this protein generates mutations and gross chromosomal rearrangements that can lead to progression to breast cancer.
Similarly, BRCA1 expression is low in the majority (55%) of sporadic epithelial ovarian cancers (EOCs) where EOCs are the most common type of ovarian cancer, representing approximately 90% of ovarian cancers. In serous ovarian carcinomas, a sub-category constituting about 2/3 of EOCs, low BRCA1 expression occurs in more than 50% of cases. Bowtell reviewed the literature indicating that deficient homologous recombination repair caused by BRCA1 deficiency is tumorigenic. In particular this deficiency initiates a cascade of molecular events that sculpt the evolution of high-grade serous ovarian cancer and dictate its response to therapy. Especially noted was that BRCA1 deficiency could be the cause of tumorigenesis whether due to BRCA1 mutation or any other event that causes a deficiency of BRCA1 expression.
In addition to its role in repairing DNA damages, BRCA1 facilitates apoptosis in breast and ovarian cell lines when cells are stressed by agents, including ionizing radiation, that cause DNA damages. Repair of DNA damages and apoptosis are two enzymatic processes essential for maintaining genome integrity in humans. Cells that are deficient in DNA repair tend to accumulate DNA damages, and when such cells are also defective in apoptosis they tend to survive even with excess DNA damage. Replication of DNA in such cells leads to mutations and these mutations may cause cancer. Thus BRCA1 appears to have two roles related to the prevention of cancer, where one role is to promote repair of a specific class of damages and the second role is to induce apoptosis if the level of such DNA damage is beyond the cell's repair capability.
Mutation of BRCA1 in breast and ovarian cancer
Only about 3%–8% of all women with breast cancer carry a mutation in BRCA1 or BRCA2. Similarly, BRCA1 mutations are only seen in about 18% of ovarian cancers (13% germline mutations and 5% somatic mutations).
Thus, while BRCA1 expression is low in the majority of these cancers, BRCA1 mutation is not a major cause of reduced expression. Certain latent viruses, which are frequently detected in breast cancer tumors, can decrease the expression of the BRCA1 gene and cause the development of breast tumors.
BRCA1 promoter hypermethylation in breast and ovarian cancer
BRCA1 promoter hypermethylation was present in only 13% of unselected primary breast carcinomas. Similarly, BRCA1 promoter hypermethylation was present in only 5% to 15% of EOC cases.
Thus, while BRCA1 expression is low in these cancers, BRCA1 promoter methylation is only a minor cause of reduced expression.
MicroRNA repression of BRCA1 in breast cancers
There are a number of specific microRNAs, when overexpressed, that directly reduce expression of specific DNA repair proteins (see MicroRNA section DNA repair and cancer) In the case of breast cancer, microRNA-182 (miR-182) specifically targets BRCA1. Breast cancers can be classified based on receptor status or histology, with triple-negative breast cancer (15%–25% of breast cancers), HER2+ (15%–30% of breast cancers), ER+/PR+ (about 70% of breast cancers), and Invasive lobular carcinoma (about 5%–10% of invasive breast cancer). All four types of breast cancer were found to have an average of about 100-fold increase in miR-182, compared to normal breast tissue. In breast cancer cell lines, there is an inverse correlation of BRCA1 protein levels with miR-182 expression. Thus it appears that much of the reduction or absence of BRCA1 in high grade ductal breast cancers may be due to over-expressed miR-182.
In addition to miR-182, a pair of almost identical microRNAs, miR-146a and miR-146b-5p, also repress BRCA1 expression. These two microRNAs are over-expressed in triple-negative tumors and their over-expression results in BRCA1 inactivation. Thus, miR-146a and/or miR-146b-5p may also contribute to reduced expression of BRCA1 in these triple-negative breast cancers.
MicroRNA repression of BRCA1 in ovarian cancers
In both serous tubal intraepithelial carcinoma (the precursor lesion to high grade serous ovarian carcinoma (HG-SOC)), and in HG-SOC itself, miR-182 is overexpressed in about 70% of cases. In cells with over-expressed miR-182, BRCA1 remained low, even after exposure to ionizing radiation (which normally raises BRCA1 expression). Thus much of the reduced or absent BRCA1 in HG-SOC may be due to over-expressed miR-182.
Another microRNA known to reduce expression of BRCA1 in ovarian cancer cells is miR-9. Among 58 tumors from patients with stage IIIC or stage IV serous ovarian cancers (HG-SOG), an inverse correlation was found between expressions of miR-9 and BRCA1, so that increased miR-9 may also contribute to reduced expression of BRCA1 in these ovarian cancers.
Deficiency of BRCA1 expression is likely tumorigenic
DNA damage appears to be the primary underlying cause of cancer, and deficiencies in DNA repair appear to underlie many forms of cancer. If DNA repair is deficient, DNA damage tends to accumulate. Such excess DNA damage may increase mutational errors during DNA replication due to error-prone translesion synthesis. Excess DNA damage may also increase epigenetic alterations due to errors during DNA repair. Such mutations and epigenetic alterations may give rise to cancer. The frequent microRNA-induced deficiency of BRCA1 in breast and ovarian cancers likely contributes to the progression of those cancers.
Germ-line mutations and founder effect
All germ-line BRCA1 mutations identified to date have been inherited, suggesting the possibility of a large "founder" effect in which a certain mutation is common to a well-defined population group and can, in theory, be traced back to a common ancestor. Given the complexity of mutation screening for BRCA1, these common mutations may simplify the methods required for mutation screening in certain populations. Analysis of mutations that occur with high frequency also permits the study of their clinical expression. Examples of manifestations of a founder effect are seen among Ashkenazi Jews. Three mutations in BRCA1 have been reported to account for the majority of Ashkenazi Jewish patients with inherited BRCA1-related breast and/or ovarian cancer: 185delAG, 188del11 and 5382insC in the BRCA1 gene. In fact, it has been shown that if a Jewish woman does not carry a BRCA1 185delAG or BRCA1 5382insC founder mutation, it is highly unlikely that a different BRCA1 mutation will be found. Additional examples of founder mutations in BRCA1 are given in Table 1.
Female fertility
As women age, reproductive performance declines, leading to menopause. This decline is tied to a reduction in the number of ovarian follicles. Although about 1 million oocytes are present at birth in the human ovary, only about 500 (about 0.05%) of these ovulate. The decline in ovarian reserve appears to occur at a constantly increasing rate with age, and leads to nearly complete exhaustion of the reserve by about age 52. As ovarian reserve and fertility decline with age, there is also a parallel increase in pregnancy failure and meiotic errors, resulting in chromosomally abnormal conceptions.
Women with a germ-line BRCA1 mutation appear to have a diminished oocyte reserve and decreased fertility compared to normally aging women. Furthermore, women with an inherited BRCA1 mutation undergo menopause prematurely. Since BRCA1 is a key DNA repair protein, these findings suggest that naturally occurring DNA damages in oocytes are repaired less efficiently in women with a BRCA1 defect, and that this repair inefficiency leads to early reproductive failure.
As noted above, the BRCA1 protein plays a key role in homologous recombinational repair. This is the only known cellular process that can accurately repair DNA double-strand breaks. DNA double-strand breaks accumulate with age in humans and mice in primordial follicles. Primordial follicles contain oocytes that are at an intermediate (prophase I) stage of meiosis. Meiosis is the general process in eukaryotic organisms by which germ cells are formed, and it is likely an adaptation for removing DNA damages, especially double-strand breaks, from germ line DNA. (Also see article Meiosis). Homologous recombinational repair employing BRCA1 is especially promoted during meiosis. It was found that expression of four key genes necessary for homologous recombinational repair of DNA double-strand breaks (BRCA1, MRE11, RAD51 and ATM) decline with age in the oocytes of humans and mice, leading to the hypothesis that DNA double-strand break repair is necessary for the maintenance of oocyte reserve and that a decline in efficiency of repair with age plays a role in ovarian aging.
Cancer chemotherapy
Non-small cell lung cancer (NSCLC) is the leading cause of cancer deaths worldwide. At diagnosis, almost 70% of persons with NSCLC have locally advanced or metastatic disease. Persons with NSCLC are often treated with therapeutic platinum compounds (e.g. cisplatin, carboplatin or oxaliplatin) that cause inter-strand cross-links in DNA. Among individuals with NSCLC, low expression of BRCA1 in the primary tumor correlated with improved survival after platinum-containing chemotherapy. This correlation implies that low BRCA1 in cancer, and the consequent low level of DNA repair, causes vulnerability of cancer to treatment by the DNA cross-linking agents. High BRCA1 may protect cancer cells by acting in a pathway that removes the damages in DNA introduced by the platinum drugs. Thus the level of BRCA1 expression is a potentially important tool for tailoring chemotherapy in lung cancer management.
Level of BRCA1 expression is also relevant to ovarian cancer treatment. Patients having sporadic ovarian cancer who were treated with platinum drugs had longer median survival times if their BRCA1 expression was low compared to patients with higher BRCA1 expression (46 compared to 33 months).
Patents, enforcement, litigation, and controversy
A patent application for the isolated BRCA1 gene and cancer promoting mutations discussed above, as well as methods to diagnose the likelihood of getting breast cancer, was filed by the University of Utah, National Institute of Environmental Health Sciences (NIEHS) and Myriad Genetics in 1994; over the next year, Myriad, (in collaboration with investigators at Endo Recherche, Inc., HSC Research & Development Limited Partnership, and University of Pennsylvania), isolated and sequenced the BRCA2 gene and identified key mutations, and the first BRCA2 patent was filed in the U.S. by Myriad and other institutions in 1995. Myriad is the exclusive licensee of these patents and has enforced them in the US against clinical diagnostic labs. This business model led from Myriad being a startup in 1994 to being a publicly traded company with 1200 employees and about $500M in annual revenue in 2012; it also led to controversy over high prices and the inability to get second opinions from other diagnostic labs, which in turn led to the landmark Association for Molecular Pathology v. Myriad Genetics lawsuit. The patents began to expire in 2014.
According to an article published in the journal, Genetic Medicine, in 2010, "The patent story outside the United States is more complicated.... For example, patents have been obtained but the patents are being ignored by provincial health systems in Canada. In Australia and the UK, Myriad's licensee permitted use by health systems but announced a change of plans in August 2008. Only a single mutation has been patented in Myriad's lone European-wide patent, although some patents remain under review of an opposition proceeding. In effect, the United States is the only jurisdiction where Myriad's strong patent position has conferred sole-provider status." Peter Meldrum, CEO of Myriad Genetics, has acknowledged that Myriad has "other competitive advantages that may make such [patent] enforcement unnecessary" in Europe.
As with any gene, finding variation in BRCA1 is not hard. The real value comes from understanding what the clinical consequences of any particular variant are. Myriad has a large, proprietary database of such genotype-phenotype correlations. In response, parallel open-source databases are being developed.
Legal decisions surrounding the BRCA1 and BRCA2 patents will affect the field of genetic testing in general. In June 2013, in Association for Molecular Pathology v. Myriad Genetics (No. 12-398), the US Supreme Court unanimously ruled that "A naturally occurring DNA segment is a product of nature and not patent eligible merely because it has been isolated," invalidating Myriad's patents on the BRCA1 and BRCA2 genes. However, the Court also held that manipulation of a gene to create something not found in nature could still be eligible for patent protection. The Federal Court of Australia came to the opposite conclusion, upholding the validity of an Australian Myriad Genetics patent over the BRCA1 gene in February 2013. The Federal Court also rejected an appeal in September 2014. Yvonne D'Arcy won her case against US-based biotech company Myriad Genetics in the High Court of Australia. In their unanimous decision on October 7, 2015, the "high court found that an isolated nucleic acid, coding for a BRCA1 protein, with specific variations from the norm that are indicative of susceptibility to breast cancer and ovarian cancer was not a 'patentable invention.'"
Interactions
BRCA1 has been shown to interact with the following proteins:
ABL1
AKT1
AR
ATR
ATM
ATF1
BACH1
BARD1
BRCA2
BRCC3
BRE
BRIP1
C-jun
CHEK2
CLSPN
COBRA1
CREBBP
CSNK2B
CSTF2
CDK2
DHX9
ELK4
EP300
ESR1
FANCA
FANCD2
FHL2
H2AFX
JUNB
JunD
LMO4
MAP3K3
MED17
MED21
MRE11A
MSH2
MSH3
MSH6
Myc
NBN
NMI
NPM1
NCOA2
NUFIP1
P53
PALB2
POLR2A
PPP1CA
Rad50
RAD51
RBBP4
RBBP7
RBBP8
RELA
RB1
RBL1
RBL2
RPL31
SMARCA4
SMARCB1
STAT1
TOPBP1
UBE2D1
USF2
VCP
XIST
ZNF350
References
External links
PDBe-KB provides an overview of all the structure information available in the PDB for Human BRCA1.
Breast cancer
Genes on human chromosome 17
DNA repair
Tumor markers
Tumor suppressor genes | BRCA1 | [
"Chemistry",
"Biology"
] | 6,659 | [
"Biomarkers",
"DNA repair",
"Tumor markers",
"Molecular genetics",
"Cellular processes",
"Chemical pathology"
] |
412,108 | https://en.wikipedia.org/wiki/Hessian%20matrix | In mathematics, the Hessian matrix, Hessian or (less commonly) Hesse matrix is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field. It describes the local curvature of a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and later named after him. Hesse originally used the term "functional determinants". The Hessian is sometimes denoted by H or, ambiguously, by ∇2.
Definitions and properties
Suppose $f : \mathbb{R}^n \to \mathbb{R}$ is a function taking as input a vector $\mathbf{x} \in \mathbb{R}^n$ and outputting a scalar $f(\mathbf{x}) \in \mathbb{R}.$ If all second-order partial derivatives of $f$ exist, then the Hessian matrix $\mathbf{H}$ of $f$ is a square $n \times n$ matrix, usually defined and arranged as
$$\mathbf{H}_f = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n} \\[1ex] \dfrac{\partial^2 f}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2\,\partial x_n} \\[1ex] \vdots & \vdots & \ddots & \vdots \\[1ex] \dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_n\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{bmatrix}.$$
That is, the entry of the $i$th row and the $j$th column is
$$(\mathbf{H}_f)_{i,j} = \frac{\partial^2 f}{\partial x_i\,\partial x_j}.$$
If furthermore the second partial derivatives are all continuous, the Hessian matrix is a symmetric matrix by the symmetry of second derivatives.
The determinant of the Hessian matrix is called the Hessian determinant.
The Hessian matrix of a function $f$ is the transpose of the Jacobian matrix of the gradient of the function $f$; that is:
$$\mathbf{H}\big(f(\mathbf{x})\big) = \mathbf{J}\big(\nabla f(\mathbf{x})\big)^{\mathsf T}.$$
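As a quick sanity check of these definitions, the following hedged sketch builds the Hessian of a two-variable function with SymPy and confirms both its symmetry and the transposed-Jacobian-of-the-gradient identity. The particular function and all variable names are illustrative choices, not taken from the text above.

```python
import sympy as sp

# Illustrative function: f(x, y) = x**3 - 2*x*y - y**6
x, y = sp.symbols('x y')
f = x**3 - 2*x*y - y**6

grad = sp.Matrix([sp.diff(f, v) for v in (x, y)])  # gradient as a column vector
H = sp.hessian(f, (x, y))                          # Hessian matrix of f

print(H)                                # Matrix([[6*x, -2], [-2, -30*y**4]])
print(H == grad.jacobian((x, y)).T)     # True: H equals the transposed Jacobian of grad f
print(H.is_symmetric())                 # True, by symmetry of second derivatives
```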
Applications
Inflection points
If $f$ is a homogeneous polynomial in three variables, the equation $f = 0$ is the implicit equation of a plane projective curve. The inflection points of the curve are exactly the non-singular points where the Hessian determinant is zero. It follows by Bézout's theorem that a cubic plane curve has at most $9$ inflection points, since the Hessian determinant is a polynomial of degree $3.$
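A minimal symbolic illustration of this statement is sketched below; the particular cubic and the point checked are assumptions chosen for convenience, not taken from the text. Inflection points lie on both the curve and the locus where the Hessian determinant vanishes.

```python
import sympy as sp

# Illustrative cubic form: f = y**2*z - x**3 + x*z**2 (a projective cubic curve f = 0)
x, y, z = sp.symbols('x y z')
f = y**2*z - x**3 + x*z**2

# Hessian determinant: a homogeneous polynomial of degree 3, so by Bezout's theorem
# the curves f = 0 and det(H) = 0 meet in at most 3*3 = 9 points.
hess_det = sp.hessian(f, (x, y, z)).det().expand()
print(hess_det)

# One visible inflection point: the point at infinity (0 : 1 : 0) lies on both loci.
pt = {x: 0, y: 1, z: 0}
print(f.subs(pt), hess_det.subs(pt))   # both evaluate to 0
```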
Second-derivative test
The Hessian matrix of a convex function is positive semi-definite. Refining this property allows us to test whether a critical point is a local maximum, local minimum, or a saddle point, as follows:
If the Hessian is positive-definite at $x,$ then $f$ attains an isolated local minimum at $x.$ If the Hessian is negative-definite at $x,$ then $f$ attains an isolated local maximum at $x.$ If the Hessian has both positive and negative eigenvalues, then $x$ is a saddle point for $f.$ Otherwise the test is inconclusive. This implies that at a local minimum the Hessian is positive-semidefinite, and at a local maximum the Hessian is negative-semidefinite.
For positive-semidefinite and negative-semidefinite Hessians the test is inconclusive (a critical point where the Hessian is semidefinite but not definite may be a local extremum or a saddle point). However, more can be said from the point of view of Morse theory.
The second-derivative test for functions of one and two variables is simpler than the general case. In one variable, the Hessian contains exactly one second derivative; if it is positive, then $x$ is a local minimum, and if it is negative, then $x$ is a local maximum; if it is zero, then the test is inconclusive. In two variables, the determinant can be used, because the determinant is the product of the eigenvalues. If it is positive, then the eigenvalues are both positive, or both negative. If it is negative, then the two eigenvalues have different signs. If it is zero, then the second-derivative test is inconclusive.
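A small numerical sketch of the eigenvalue form of the test follows; the test function, its constant Hessian, and the classification logic are illustrative assumptions rather than anything prescribed by the article.

```python
import numpy as np

# Classify the critical point of f(x, y) = x**2 - y**2 at the origin,
# whose Hessian is constant.
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])            # Hessian of x**2 - y**2

eigvals = np.linalg.eigvalsh(H)        # symmetric matrix -> real eigenvalues
if np.all(eigvals > 0):
    verdict = "isolated local minimum"
elif np.all(eigvals < 0):
    verdict = "isolated local maximum"
elif np.any(eigvals > 0) and np.any(eigvals < 0):
    verdict = "saddle point"
else:
    verdict = "test inconclusive"

print(eigvals, verdict)                # [-2.  2.] saddle point
```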
Equivalently, the second-order conditions that are sufficient for a local minimum or maximum can be expressed in terms of the sequence of principal (upper-leftmost) minors (determinants of sub-matrices) of the Hessian; these conditions are a special case of those given in the next section for bordered Hessians for constrained optimization—the case in which the number of constraints is zero. Specifically, the sufficient condition for a minimum is that all of these principal minors be positive, while the sufficient condition for a maximum is that the minors alternate in sign, with the $1 \times 1$ minor being negative.
Critical points
If the gradient (the vector of the partial derivatives) of a function $f$ is zero at some point $\mathbf{x},$ then $f$ has a critical point (or stationary point) at $\mathbf{x}.$ The determinant of the Hessian at $\mathbf{x}$ is called, in some contexts, a discriminant. If this determinant is zero then $\mathbf{x}$ is called a degenerate critical point of $f,$ or a non-Morse critical point of $f.$ Otherwise it is non-degenerate, and called a Morse critical point of $f.$
The Hessian matrix plays an important role in Morse theory and catastrophe theory, because its kernel and eigenvalues allow classification of the critical points.
The determinant of the Hessian matrix, when evaluated at a critical point of a function, is equal to the Gaussian curvature of the function considered as a manifold. The eigenvalues of the Hessian at that point are the principal curvatures of the function, and the eigenvectors are the principal directions of curvature.
Use in optimization
Hessian matrices are used in large-scale optimization problems within Newton-type methods because they are the coefficient of the quadratic term of a local Taylor expansion of a function. That is,
$$y = f(\mathbf{x} + \Delta\mathbf{x}) \approx f(\mathbf{x}) + \nabla f(\mathbf{x})^{\mathsf T} \Delta\mathbf{x} + \tfrac{1}{2}\, \Delta\mathbf{x}^{\mathsf T}\, \mathbf{H}(\mathbf{x})\, \Delta\mathbf{x},$$
where $\nabla f$ is the gradient $\left(\dfrac{\partial f}{\partial x_1}, \ldots, \dfrac{\partial f}{\partial x_n}\right).$ Computing and storing the full Hessian matrix takes $\Theta\!\left(n^2\right)$ memory, which is infeasible for high-dimensional functions such as the loss functions of neural nets, conditional random fields, and other statistical models with large numbers of parameters. For such situations, truncated-Newton and quasi-Newton algorithms have been developed. The latter family of algorithms use approximations to the Hessian; one of the most popular quasi-Newton algorithms is BFGS.
Such approximations may use the fact that an optimization algorithm uses the Hessian only as a linear operator, and proceed by first noticing that the Hessian also appears in the local expansion of the gradient:
$$\nabla f(\mathbf{x} + \Delta\mathbf{x}) = \nabla f(\mathbf{x}) + \mathbf{H}(\mathbf{x})\, \Delta\mathbf{x} + \mathcal{O}\!\left(\|\Delta\mathbf{x}\|^2\right).$$
Letting $\Delta\mathbf{x} = r\mathbf{v}$ for some scalar $r,$ this gives
$$\mathbf{H}(\mathbf{x})\, \Delta\mathbf{x} = \mathbf{H}(\mathbf{x})\, r\mathbf{v} = r\, \mathbf{H}(\mathbf{x})\, \mathbf{v} = \nabla f(\mathbf{x} + r\mathbf{v}) - \nabla f(\mathbf{x}) + \mathcal{O}\!\left(r^2\right),$$
that is,
$$\mathbf{H}(\mathbf{x})\, \mathbf{v} = \frac{1}{r}\left[\nabla f(\mathbf{x} + r\mathbf{v}) - \nabla f(\mathbf{x})\right] + \mathcal{O}(r),$$
so if the gradient is already computed, the approximate Hessian can be computed by a linear (in the size of the gradient) number of scalar operations. (While simple to program, this approximation scheme is not numerically stable since $r$ has to be made small to prevent error due to the $\mathcal{O}(r)$ term, but decreasing it loses precision in the first term.)
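The finite-difference scheme above is straightforward to sketch in code. In this hedged example the quadratic test function, its gradient, and the step size r are all illustrative choices; because the exact Hessian of a quadratic is known, the approximation can be checked directly.

```python
import numpy as np

# Hessian-vector product via one extra gradient evaluation:
# H(x) v ~ [grad f(x + r v) - grad f(x)] / r
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

def grad_f(x):
    # f(x) = 0.5 * x^T A x, so grad f(x) = A x and the exact Hessian is A.
    return A @ x

def hessian_vector_product(grad, x, v, r=1e-5):
    return (grad(x + r * v) - grad(x)) / r

x = np.array([1.0, -2.0])
v = np.array([0.5, 1.0])
print(hessian_vector_product(grad_f, x, v))   # approximately A @ v = [2.5, 2.5]
print(A @ v)                                  # exact value, for comparison
```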
Notably regarding Randomized Search Heuristics, the evolution strategy's covariance matrix adapts to the inverse of the Hessian matrix, up to a scalar factor and small random fluctuations.
This result has been formally proven for a single-parent strategy and a static model, as the population size increases, relying on the quadratic approximation.
Other applications
The Hessian matrix is commonly used for expressing image processing operators in image processing and computer vision (see the Laplacian of Gaussian (LoG) blob detector, the determinant of Hessian (DoH) blob detector and scale space). It can be used in normal mode analysis to calculate the different molecular frequencies in infrared spectroscopy. It can also be used in local sensitivity and statistical diagnostics.
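A rough sketch of the determinant-of-Hessian (DoH) idea from image processing is given below. The synthetic image, the single fixed scale, and the finite-difference derivatives are simplifying assumptions; a real detector such as the one in scikit-image also smooths the image, searches over scales, and suppresses non-maxima.

```python
import numpy as np

def determinant_of_hessian(image):
    # Second-order finite differences approximate the image Hessian at each pixel.
    Iy, Ix = np.gradient(image)
    Iyy, Iyx = np.gradient(Iy)
    Ixy, Ixx = np.gradient(Ix)
    return Ixx * Iyy - Ixy * Iyx   # det H, large at blob-like structures

# Synthetic image: a bright Gaussian blob on a dark background.
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 5.0 ** 2))

doh = determinant_of_hessian(img)
peak = np.unravel_index(np.argmax(doh), doh.shape)
print("strongest DoH response at", peak)   # expected near the blob centre (32, 32)
```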
Generalizations
Bordered Hessian
A bordered Hessian is used for the second-derivative test in certain constrained optimization problems. Given the function $f$ considered previously, but adding a constraint function $g$ such that $g(\mathbf{x}) = c,$ the bordered Hessian is the Hessian of the Lagrange function $\Lambda(\mathbf{x}, \lambda) = f(\mathbf{x}) + \lambda\left[g(\mathbf{x}) - c\right]$:
$$\mathbf{H}(\Lambda) = \begin{bmatrix} 0 & \nabla g^{\mathsf T} \\ \nabla g & \nabla^2_{\mathbf{x}\mathbf{x}} \Lambda \end{bmatrix} = \begin{bmatrix} 0 & \dfrac{\partial g}{\partial x_1} & \cdots & \dfrac{\partial g}{\partial x_n} \\[1ex] \dfrac{\partial g}{\partial x_1} & \dfrac{\partial^2 \Lambda}{\partial x_1^2} & \cdots & \dfrac{\partial^2 \Lambda}{\partial x_1\,\partial x_n} \\[1ex] \vdots & \vdots & \ddots & \vdots \\[1ex] \dfrac{\partial g}{\partial x_n} & \dfrac{\partial^2 \Lambda}{\partial x_n\,\partial x_1} & \cdots & \dfrac{\partial^2 \Lambda}{\partial x_n^2} \end{bmatrix}$$
If there are, say, $m$ constraints then the zero in the upper-left corner is an $m \times m$ block of zeros, and there are $m$ border rows at the top and $m$ border columns at the left.
The above rules stating that extrema are characterized (among critical points with a non-singular Hessian) by a positive-definite or negative-definite Hessian cannot apply here since a bordered Hessian can neither be negative-definite nor positive-definite, as $\mathbf{z}^{\mathsf T}\, \mathbf{H}(\Lambda)\, \mathbf{z} = 0$ if $\mathbf{z}$ is any vector whose sole non-zero entry is its first.
The second derivative test consists here of sign restrictions of the determinants of a certain set of $n - m$ submatrices of the bordered Hessian. Intuitively, the $m$ constraints can be thought of as reducing the problem to one with $n - m$ free variables. (For example, the maximization of $f\left(x_1, x_2, x_3\right)$ subject to the constraint $x_1 + x_2 + x_3 = 1$ can be reduced to the maximization of $f\left(x_1, x_2, 1 - x_1 - x_2\right)$ without constraint.)
Specifically, sign conditions are imposed on the sequence of leading principal minors (determinants of upper-left-justified sub-matrices) of the bordered Hessian, for which the first $2m$ leading principal minors are neglected, the smallest minor consisting of the truncated first $2m + 1$ rows and columns, the next consisting of the truncated first $2m + 2$ rows and columns, and so on, with the last being the entire bordered Hessian; if $2m + 1$ is larger than $n + m,$ then the smallest leading principal minor is the Hessian itself. There are thus $n - m$ minors to consider, each evaluated at the specific point being considered as a candidate maximum or minimum. A sufficient condition for a local maximum is that these minors alternate in sign with the smallest one having the sign of $(-1)^{m+1}.$ A sufficient condition for a local minimum is that all of these minors have the sign of $(-1)^m.$ (In the unconstrained case of $m = 0$ these conditions coincide with the conditions for the unbordered Hessian to be negative definite or positive definite respectively).
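A small worked example may help; the objective, constraint, and candidate point are illustrative choices, not taken from the text above. Consider maximizing $f(x, y) = xy$ subject to $g(x, y) = x + y = 2,$ so $n = 2,$ $m = 1,$ and only $n - m = 1$ minor, the full bordered Hessian, needs checking. The first-order conditions give $x = y = 1$ with $\lambda = -1,$ and at that point
$$\Lambda(x, y, \lambda) = xy + \lambda\,(x + y - 2), \qquad \mathbf{H}(\Lambda) = \begin{bmatrix} 0 & g_x & g_y \\ g_x & \Lambda_{xx} & \Lambda_{xy} \\ g_y & \Lambda_{yx} & \Lambda_{yy} \end{bmatrix} = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}, \qquad \det \mathbf{H}(\Lambda) = 2 > 0.$$
The single minor has the sign of $(-1)^{m+1} = +1,$ so $(x, y) = (1, 1)$ is a constrained local maximum, which agrees with maximizing $x(2 - x)$ directly.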
Vector-valued functions
If $f$ is instead a vector field $\mathbf{f} : \mathbb{R}^n \to \mathbb{R}^m,$ that is,
$$\mathbf{f}(\mathbf{x}) = \left(f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x})\right),$$
then the collection of second partial derivatives is not a $n \times n$ matrix, but rather a third-order tensor. This can be thought of as an array of $m$ Hessian matrices, one for each component of $\mathbf{f}$:
$$\mathbf{H}(\mathbf{f})(\mathbf{x}) = \left(\mathbf{H}(f_1)(\mathbf{x}), \mathbf{H}(f_2)(\mathbf{x}), \ldots, \mathbf{H}(f_m)(\mathbf{x})\right).$$
This tensor degenerates to the usual Hessian matrix when $m = 1.$
Generalization to the complex case
In the context of several complex variables, the Hessian may be generalized. Suppose $f \colon \mathbb{C}^n \to \mathbb{C},$ and write $f\left(z_1, \ldots, z_n\right).$ Identifying $\mathbb{C}^n$ with $\mathbb{R}^{2n},$ the normal "real" Hessian is a $2n \times 2n$ matrix. As the object of study in several complex variables are holomorphic functions, that is, solutions to the n-dimensional Cauchy–Riemann conditions, we usually look on the part of the Hessian that contains information invariant under holomorphic changes of coordinates. This "part" is the so-called complex Hessian, which is the matrix $\left(\dfrac{\partial^2 f}{\partial z_j\, \partial \bar{z}_k}\right)_{j,k}.$ Note that if $f$ is holomorphic, then its complex Hessian matrix is identically zero, so the complex Hessian is used to study smooth but not holomorphic functions, see for example Levi pseudoconvexity. When dealing with holomorphic functions, we could consider the Hessian matrix $\left(\dfrac{\partial^2 f}{\partial z_j\, \partial z_k}\right)_{j,k}.$
Generalizations to Riemannian manifolds
Let $(M, g)$ be a Riemannian manifold and $\nabla$ its Levi-Civita connection. Let $f \colon M \to \mathbb{R}$ be a smooth function. Define the Hessian tensor by
$$\operatorname{Hess}(f) \in \Gamma\left(T^*M \otimes T^*M\right) \quad \text{by} \quad \operatorname{Hess}(f) := \nabla \nabla f = \nabla df,$$
where this takes advantage of the fact that the first covariant derivative of a function is the same as its ordinary differential. Choosing local coordinates $\left\{x^i\right\}$ gives a local expression for the Hessian as
$$\operatorname{Hess}(f) = \nabla_i\, \partial_j f \; dx^i \otimes dx^j = \left(\frac{\partial^2 f}{\partial x^i\, \partial x^j} - \Gamma_{ij}^k \frac{\partial f}{\partial x^k}\right) dx^i \otimes dx^j,$$
where $\Gamma_{ij}^k$ are the Christoffel symbols of the connection. Other equivalent forms for the Hessian are given by
$$\operatorname{Hess}(f)(X, Y) = \langle \nabla_X \operatorname{grad} f,\, Y \rangle \quad \text{and} \quad \operatorname{Hess}(f)(X, Y) = X(Yf) - df(\nabla_X Y).$$
See also
The determinant of the Hessian matrix is a covariant; see Invariant of a binary form
Polarization identity, useful for rapid calculations involving Hessians.
References
Further reading
External links
Differential operators
Matrices
Morse theory
Multivariable calculus
Singularity theory | Hessian matrix | [
"Mathematics"
] | 2,182 | [
"Mathematical analysis",
"Calculus",
"Mathematical objects",
"Matrices (mathematics)",
"Multivariable calculus",
"Differential operators"
] |